
Outcomes of coronavirus patients treated with extracorporeal membrane oxygenation in China offer guidance for management of critically ill COVID-19 patients worldwide

April 1, 2020 - The initial experience with extracorporeal membrane oxygenation (ECMO) management of coronavirus disease 2019 (COVID-19) patients in Shanghai, China, provides guidance for the management of critically ill COVID-19 patients worldwide, reports a study in the ASAIO Journal. The journal is published in the Lippincott portfolio by Wolters Kluwer.

For the most critically ill COVID-19 patients, standard ventilator care may not provide adequate support while allowing the lungs to heal and recover. Xin Li, MD, and colleagues from the University of Louisville, Louisville, KY, and Zhongshan Hospital, Shanghai, China, report on COVID-19 patients in China managed with ECMO and their outcomes, to guide health practitioners in treating this challenging patient population during the worldwide pandemic.

Because there is little worldwide experience using ECMO to support COVID-19 patients, Zhongshan Hospital established a dedicated ECMO team consisting of a physician perfusionist, a critical care physician, and a pulmonologist to be available at all times to oversee ECMO management of COVID-19 patients. The team was also responsible for communicating with the newly established Shanghai COVID-19 ECMO Expert Team, a group of 12 ECMO units from Shanghai's major hospitals. Together they developed the "Shanghai ECMO Support for COVID-19 Guideline" to ensure consistency and standardization across all hospital centers.

Credit: 
Wolters Kluwer Health

Fashion designers in a country of shortages

Why was there always a shortage of fashionable clothing in the USSR? What was the typical career path for a Soviet fashion designer? Who had power and influence in the socialist fashion industry? HSE Associate Professor Yulia Papushina examined these questions by reconstructing the everyday life of the Perm Fashion House during the late socialism era. Her study is the first to look into the recent history of clothing design and manufacturing in Russian provinces. https://www.nlobooks.ru/magazines/teoriya_mody/54_tm_4_2019/article/21895/

Strict Hierarchy

What is now termed the 'late socialism' era is the period between the so-called 'thaw' in the mid-1950s and 'perestroika' in the mid-1980s. The Soviet authorities at that time were trying to upgrade the country's highly centralised economy by introducing free-market elements and lifting the iron curtain just slightly; consumer behaviour was evolving, and 'fashion houses' tasked with designing clothes for mass production were set up in many provinces.

In 1961, a fashion house was opened in the city of Perm, controlled by the Ministry of Light Industry as part of a rigid hierarchy of actors.

The industry was led by three main entities: the All-Union Fashion House, the All-Union Institute for Consumer Goods Industry and Garment Culture (VIALEGPROM), and the Ministry of Light Industry's Special Design Bureau.

Acting as the 'official source of judgment regarding good taste and style', this trio dictated the fashion trends which provincial fashion houses were expected to adopt and translate into production and marketing.

The Perm Fashion House (PFH) was supervised by the All-Union Fashion House, guided by its workplans and performance targets. However, by interviewing former PFH employees and studying archival documents, Papushina (https://www.hse.ru/en/org/persons/7161403) realised that, 'The creation of Soviet fashion and the profession of fashion designer in the USSR had more in common with Western bourgeois fashion than the Soviet authorities were willing to admit'.

The most obvious features this system shared with the capitalist West included a clear distinction between mass production and runway fashions, as well as the way careers in fashion design were built by gradually acquiring professional capital such as knowledge, skills, work experience and qualifications.

Human Capital for Career Advancement

The first and often critical step towards a career in fashion was training, preferably at a university in Moscow or Leningrad. Designers with degrees from metropolitan universities were rare and highly valued in the provinces, and could expect to be appointed to senior positions in the industry immediately after graduation.

In contrast to provincial cities, Moscow and Leningrad had no shortage of local human resources, so recent graduates of design schools often had to wait for a good job.

The perceived value of provincial school degrees was not as high, and graduates could only expect lower-level appointments.

Sewing professionals (tailors and dressmakers trained in vocational schools) could be promoted to higher positions with the PFH after taking a professional development course, e.g. from the All-Union Correspondence Institute of Textile Industry and Forestry.

Someone who had spent some time studying art but never obtained a degree could still have a career with the PFH by starting out as a model sketcher, then being referred by their employer to a textile industry course before returning as a certified fashion designer.

The PFH also hired people without any training other than secondary school and an amateur art studio. Usually, they were initially given a low-level job and the option of further training, e.g. at a regional light industry college. Enrolling in a university art and design course was less accessible, because the entrance exam included a demanding test in drawing which could be too hard to pass for an applicant who had only attended a provincial art studio.

Creative Freedom

According to the researcher, 'Soviet fashion designers lived in a paradox, where the state had established one of the world's least flexible systems for mass production of clothing but encouraged the creation of limited-edition fashion.' As a result, the sphere of fashion design split into mass production and runway collections, and designers identified with either mass-market producers or fashion artists.

This was similar to how fashion worked in the West, with certain differences due to some specific features of the socialist system. Fashion artists in the USSR used fashion as a means of self-expression and contributed to their fashion house's standing in the industry. But neither the designers nor the fashion houses were allowed to 'dictate fashion'. Moreover, provincial fashion collections were not even intended for the runway per se but only featured at internal industry meetings hosted by the All-Union Fashion House and VIALEGPROM.

In that internal space, fashion designers were given creative freedom. According to former PFH designers, their supervisors rarely interfered with their process or results.

For example, the PFH design team was allowed to work from home to create the 1968 collection. In another instance, in the late 1970s, the PFH art director complained of an excessive administrative workload and threatened to quit, and was subsequently freed from all administrative tasks for almost five years.

While all collections were the result of teamwork, the role of the team leader as idea generator was never challenged. Their creative imagination was not necessarily limited to typically Soviet imagery, but fashion designers always stayed within the boundaries of what official aesthetics permitted.

Reflecting Soviet Identity

The artists' Soviet identity determined their understanding of what was permitted by the system and considered beautiful by the masses.

Here is an excerpt from an interview with the head of the PFH 'experimentation team' that existed between 1968 and 1972:

- What was not permitted, for example?

- Well, all sorts of vulgarity. But if [a design] resembled something by Kandinsky or the like, it was okay.

- And what was considered vulgar?

- Maybe a monkey necktie. Bad taste.

According to a dress designer who worked for the PFH between 1982 and the late 1990s, a Soviet woman was supposed to wear high-quality, inexpensive and comfortable garments allowing for ease of movement and free from health risks: 'low necklines were not allowed even when they looked flattering'.

Another question to the same interviewee:

- What made a Soviet woman different from a non-Soviet woman?

- First, there were no non-Soviet women [in the USSR] to begin with. And second, a woman making money, for example, would be considered 'non-Soviet'. Or perhaps female dancers in restaurants--they were perceived as ... not entirely Soviet. These things were not openly discussed but ingrained in people's minds. A non-Soviet woman was one who... no, we did not even describe a person in terms of Soviet or non-Soviet.

- What did you say instead?

- A woman who lived off earned or unearned income.

- Living off earned income implied working at an enterprise?

- Yes, working at an enterprise.

These responses reflect how the designers perceived their target audience. When asked to elaborate on the role of their Soviet identity, the respondents explained the prevalent clothing style by a specific public mentality: 'Russia and the Soviet Union are different from Western society ... it does not even need to be proven. So the idea was for us to be true to ourselves ... to express ourselves in our own language'.

The Soviet ideology was so deeply ingrained that it was not perceived as a major constraint on creative expression; indeed, the shortages of essential materials were seen as a far greater problem, concludes the study author.

Runway versus Street

Since fashion designers were supposed to be contributing to industrial production, they faced the demands of a planned economy. Clothing factories were expected to cost-effectively produce garments suitable for street wear and therefore called for 'less imagination and more practicality' in design.

Unlike its western counterpart, the USSR clothing industry did not have rigid boundaries between its different segments: designers working at factories could help their fashion house colleagues, while the latter were sometimes assigned to mass production facilities. Some of those who considered themselves fashion artists were not entirely enthusiastic about mass production; in contrast, designers at factories were often proud of their work: 'It felt so rewarding to have your design approved for production... and then perhaps 20 people or so would buy an item I had designed, isn't that nice?'

It was up to an expert board (khudsovet) which included representatives of factories and retail stores, as well as fashion designers, to decide what would and would not go into mass production. Some expert board meetings involved confrontations in which the PFH would often lose.

While formally the three parties represented on such boards were equal, in reality the interests of factories and shops prevailed over those of fashion designers.

Clothing factories used their own criteria -- such as industry standards and targets, cost minimisation and product replicability in different sizes -- to assess PFH proposals. A factory could decide to simplify a design or use a different type or colour of fabric without consulting the author, who could only agonise: 'sometimes I was thinking, God forbid someone might find out that I designed this item'.

In contrast to factories which were reluctant to change their product range, retailers pushed for novelty during expert board meetings. Constrained by a limited choice of virtually everything they needed, designers often resented the retailers: 'They kept asking if we had anything new to offer, but how could we have produced anything new with the same type of fabric?'

Autonomy Minus Freedom

Mass production of clothing in the USSR, therefore, faced the proverbial Swan, Pike and Crawfish effect [i.e. each party pulling in a different direction], with the PFH being in the least favourable position, perceived as 'a hindrance to the other two parties' plans and thus causing dissatisfaction of the government bureaucracy,' Papushina concludes.

The fact that fashion designers never had the end consumers in mind, nor received any feedback from the public, made the situation even more absurd.

Any likeness to the Western fashion industry was distorted by the dictatorship of a planned economy. While the fashion house director could free an artist from bureaucratic responsibilities, and the designers were allowed to balance between the dominant ideology and their own sense of style and enjoyed relative autonomy in creating their collections, they still operated within a rigid system, in which:

clothing factories - rather than fashion trends - dictated what designers could and could not do;

to be successful in their profession, fashion designers needed to submit to this diktat in producing new designs;

provincial designers enjoyed a degree of autonomy in creating their collections precisely because their creations were considered safe, since they were not intended for mass production--and therefore could not have a negative impact on the clothing industry--and never made it to international events, i.e. presented no risk of 'undermining the international image of Soviet fashions'.

Credit: 
National Research University Higher School of Economics

Modern humans, Neanderthals share a tangled genetic history, study affirms

BUFFALO, N.Y. -- In recent years, scientists have uncovered evidence that modern humans and Neanderthals share a tangled past. In the course of human history, these two species of hominins interbred not just once but multiple times, the thinking goes.

A new study supports this notion, finding that people in Eurasia today have genetic material linked to Neanderthals from the Altai mountains in modern-day Siberia. This is noteworthy because past research has shown that Neanderthals connected to a different, distant location -- the Vindija Cave in modern-day Croatia -- have also contributed DNA to modern-day Eurasian populations.

The results reinforce the concept that Neanderthal DNA has been woven into the modern human genome on multiple occasions as our ancestors met Neanderthals time and again in different parts of the world.

The study was published on March 31 in the journal Genetics.

"It's not a single introgression of genetic material from Neanderthals," says lead researcher Omer Gokcumen, a University at Buffalo biologist. "It's just this spider web of interactions that happen over and over again, where different ancient hominins are interacting with each other, and our paper is adding to this picture. This project will now add to an emerging chorus -- we've been looking into this phenomenon for a couple of years, and there are a couple of papers that came out recently that deal with similar concepts."

"The picture in my mind now is we have all these archaic hominin populations in Europe, in Asia, in Siberia, in Africa. For one reason or another, the ancestors of modern humans in Africa start expanding in population, and as they expand their range, they meet with these other hominins and absorb their DNA, if you will," Gokcumen says. "We probably met different Neanderthal populations at different times in our expansion into other parts of the globe."

Gokcumen, associate professor of biological sciences in the UB College of Arts and Sciences, led the study with first author Recep Ozgur Taskent, a recent UB PhD graduate in the department. Co-authors include UB PhD graduate Yen Lung Lin, now a postdoctoral scholar at the University of Chicago; and Ioannis Patramanis and Pavlos Pavlidis, PhD, of the Foundation for Research and Technology in Greece.

The research was funded by the U.S. National Science Foundation.

To complete the project, scientists analyzed the DNA of hundreds of people of Eurasian ancestry. The goal was to hunt for fragments of genetic material that may have been inherited from Neanderthals.

This research found that the Eurasian populations studied could trace some genetic material back to two different Neanderthal lineages: one represented by a Neanderthal whose remains were discovered in the Vindija cave in Croatia, and another represented by a Neanderthal whose remains were discovered in the Altai mountains in Russia.

Scientists also discovered that the modern-day populations they studied share genetic deletions -- areas of DNA that are missing -- with both the Vindija and Altai Neanderthal lineages.

The genomes of the Vindija and Altai Neanderthals, along with those of the modern human populations studied, were previously sequenced by different research teams.

"It seems like the story of human evolution is not so much like at tree with branches that just grow in different directions. It turns out that the branches have all these connections between them," Gokcumen says. "We are figuring out these connections, which is really exciting. The story is not as neat as it was before. Every single ancient genome that is sequenced seems to create a completely new perspective in our understanding of human evolution, and every new genome that's sequenced in the future may completely change the story again."

Credit: 
University at Buffalo

Oldest ever human genetic evidence clarifies dispute over our ancestors

image: Skeletal remains of Homo antecessor

Image: 
Prof. José María Bermúdez de Castro

Genetic information from an 800,000-year-old human fossil has been retrieved for the first time. The results from the University of Copenhagen shed light on one of the branching points in the human family tree, reaching much further back in time than previously possible.

An important advancement in human evolution studies has been achieved after scientists retrieved the oldest human genetic data set from an 800,000-year-old tooth belonging to the hominin species Homo antecessor.

The findings by scientists from the University of Copenhagen (Denmark), in collaboration with colleagues from the CENIEH (National Research Center on Human Evolution) in Burgos, Spain, and other institutions, are published April 1st in Nature.

"Ancient protein analysis provides evidence for a close relationship between Homo antecessor, us (Homo sapiens), Neanderthals, and Denisovans. Our results support the idea that Homo antecessor was a sister group to the group containing Homo sapiens, Neanderthals, and Denisovans", says Frido Welker, Postdoctoral Research Fellow at the Globe Institute, University of Copenhagen, and first author on the paper.

Reconstructing the human family tree

By using a technique called mass spectrometry, researchers sequenced ancient proteins from dental enamel, and confidently determined the position of Homo antecessor in the human family tree.

The new molecular method, palaeoproteomics, developed by researchers at the Faculty of Health and Medical Sciences, University of Copenhagen, enables scientists to retrieve molecular evidence to accurately reconstruct human evolution from further back in time than ever before.

The human and the chimpanzee lineages split from each other about 7 to 9 million years ago. Scientists have relentlessly aimed to better understand the evolutionary relations between our species and the others, all now extinct, in the human lineage.

"Much of what we know so far is based either on the results of ancient DNA analysis, or on observations of the shape and the physical structure of fossils. Because of the chemical degradation of DNA over time, the oldest human DNA retrieved so far is dated at no more than approximately 400.000 years", says Enrico Cappellini, Associate Professor at the Globe Institute, University of Copenhagen, and leading author on the paper.

"Now, the analysis of ancient proteins with mass spectrometry, an approach commonly known as palaeoproteomics, allow us to overcome these limits", he adds.

Theories on human evolution

The fossils analyzed by the researchers were found by palaeoanthropologist José María Bermúdez de Castro and his team in 1994 in stratigraphic level TD6 from the Gran Dolina cave site, one of the archaeological and paleontological sites of the Sierra de Atapuerca, Spain.

Initial observations led to the conclusion that Homo antecessor was the last common ancestor of modern humans and Neanderthals, based on the physical shape and appearance of the fossils. In the following years, the exact relationship between Homo antecessor and other human groups, like ourselves and Neanderthals, was discussed intensely among anthropologists.

Although the hypothesis that Homo antecessor could be the common ancestor of Neanderthals and modern humans is very difficult to fit into the evolutionary scenario of the genus Homo, new findings in TD6 and subsequent studies revealed several characters shared among the human species found in Atapuerca and the Neanderthals. In addition, new studies confirmed that the facial features of Homo antecessor are very similar to those of Homo sapiens and very different from those of the Neanderthals and their more recent ancestors.

"I am happy that the protein study provides evidence that the Homo antecessor species may be closely related to the last common ancestor of Homo sapiens, Neanderthals, and Denisovans. The features shared by Homo antecessor with these hominins clearly appeared much earlier than previously thought. Homo antecessor would therefore be a basal species of the emerging humanity formed by Neanderthals, Denisovans, and modern humans", adds José María Bermúdez de Castro, Scientific Co-director of the excavations in Atapuerca and co-corresponding author on the paper.

World-class expertise

Findings like these are made possible through an extensive collaboration between different research fields: from paleoanthropology to biochemistry, proteomics and population genomics.

Retrieving ancient genetic material from the rarest fossil specimens requires top-quality expertise and equipment. This is the reason behind the decade-long strategic collaboration between Enrico Cappellini and Jesper Velgaard Olsen, Professor at the Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, and co-author on the paper.

"This study is an exciting milestone in palaeoproteomics. Using state of the art mass spectrometry, we determine the sequence of amino acids within protein remains from Homo antecessor dental enamel. We can then compare the ancient protein sequences we 'read' to those of other hominins, for example Neanderthals and Homo sapiens, to determine how they are genetically related", says Jesper Velgaard Olsen.

"I really look forward to seeing what palaeoproteomics will reveal in the future", concludes Enrico Cappellini.

Credit: 
University of Copenhagen - The Faculty of Health and Medical Sciences

A sensational discovery: Traces of rainforests in West Antarctica

image: Germany's icebreaking research vessel POLARSTERN, operated by the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI)

Image: 
© Alfred-Wegener-Institut/Johann Klages

An international team of researchers led by geoscientists from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) has now provided a new and unprecedented perspective on the climate history of Antarctica. In a sediment core collected in the Amundsen Sea, West Antarctica, in February 2017, the team discovered pristinely preserved forest soil from the Cretaceous, including a wealth of plant pollen and spores and a dense network of roots. These plant remains confirm that, roughly 90 million years ago, the coast of West Antarctica was home to temperate, swampy rainforests where the annual mean temperature was ca. 12 degrees Celsius - an exceptionally warm climate for a location near the South Pole. The researchers surmise that this warmth was only possible because there was no Antarctic ice sheet and because the atmospheric carbon dioxide concentration was significantly higher than indicated by climate models to date. The study, which provides the southernmost directly assessable climate and environmental data from the Cretaceous and poses new challenges for climate modellers around the globe, will be released in the journal NATURE on 1 April 2020.

The mid-Cretaceous time interval, from ca. 115 million to 80 million years ago, is not only considered the age of the dinosaurs, but was also the warmest period in the past 140 million years. Sea surface temperatures in the tropics at this time were likely as high as ca. 35 degrees Celsius, and sea level was 170 metres higher than today. Yet we still know very little about environmental conditions in the Cretaceous south of the polar circle, since there are virtually no reliable climate archives that extend that far back in time. The new sediment core offers the team of experts the first chance to reconstruct the West Antarctic climate during the warmest interval of the Cretaceous, thanks to the unique evidence it contains.

In the sediment core, which the team collected with the University of Bremen's seafloor drill rig MARUM-MeBo70 near the Pine Island Glacier on an RV Polarstern expedition, they found pristinely preserved forest soil from the Cretaceous. "During the initial shipboard assessments, the unusual colouration of the sediment layer quickly caught our attention; it clearly differed from the layers above it. Moreover, the first analyses indicated that, at a depth of 27 to 30 metres below the ocean floor, we had found a layer originally formed on land, not in the ocean," reports first author Dr Johann Klages, a geologist at the AWI.

Evidence of a swamp landscape rich in vegetation

Yet it didn't become clear just how unique the climate archive truly was until the sediment core was subjected to X-ray computed tomography (CT) scans. The CT images revealed a dense network of roots that spread through the entire soil layer of fine-grained clay and silt, and which was so well-preserved that the researchers could make out individual cell structures. In addition, the soil sample contains countless traces of pollen and spores from various vascular plants, including the first remnants of flowering plants ever found at these high Antarctic latitudes.

"The numerous plant remains indicate that 93 to 83 million years ago the coast of West Antarctica was a swampy landscape in which temperate rainforests grew - similar to the forests that can still be found, say, on New Zealand's South Island," explains co-author Prof Ulrich Salzmann, a palaeoecologist at Northumbria University in Newcastle upon Tyne.

The results of the vegetation analysis puzzled the researchers: under what climatic conditions could temperate rainforests have formed back then at a geographic latitude of roughly 82 degrees South? Even during the Cretaceous, the Antarctic continent was at the South Pole, which means the region where the forest soil originated was subject to a four-month polar night; for a third of every year, there was no life-giving sunlight at all.

"To get a better idea of what the climate was like in this warmest phase of the Cretaceous, we first assessed the climatic conditions under which the plants' modern descendants live," says Johann Klages. The researchers subsequently searched for biological and geochemical temperature and precipitation indicators in the soil sample, on the basis of which they could reconstruct the air and water temperature in the West Antarctic rainforests, as well as the amount of precipitation they received.

Numerous analyses, one result: In the Cretaceous, Antarctica was ice-free and extremely warm

The outcomes of the various analyses fit together like the pieces of a puzzle: Roughly 90 million years ago, there was a temperate climate just 900 km from the South Pole. The annual mean air temperature was ca. 12 degrees Celsius; in other words, back in the Cretaceous, the average temperature near the South Pole was roughly two degrees warmer than the mean temperature in Germany today. Summer temperatures were ca. 19 degrees Celsius on average; water temperatures in the rivers and swamps reached up to 20 degrees; and the amount and intensity of rainfall in West Antarctica were similar to those in today's Wales.

The researchers then used this new vegetation, temperature, and precipitation data from West Antarctica as target values for simulations of the mid-Cretaceous climate. Their calculations with a palaeoclimate model revealed that the reconstructed conditions could only be achieved when (1) the Antarctic continent was covered with dense vegetation, (2) there were no land-ice masses on the scale of an ice sheet in the South Pole region, and (3) the carbon dioxide concentration in the atmosphere was far higher than previously assumed for the Cretaceous. "Before our study, the general assumption was that the global carbon dioxide concentration in the Cretaceous was roughly 1000 ppm. But in our model-based experiments, it took concentration levels of 1120 to 1680 ppm to reach the average temperatures back then in the Antarctic," says co-author and AWI climate modeller Prof Gerrit Lohmann.
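The scale of the CO2 values Lohmann quotes can be illustrated with a rough back-of-envelope sketch. This is emphatically not the AWI team's palaeoclimate model: it uses only the widely cited logarithmic approximation for CO2 radiative forcing (Myhre et al., 1998) together with an assumed climate sensitivity parameter, purely to show why a jump from ~1000 ppm to 1120-1680 ppm implies substantially more warming.

```python
import math

# Back-of-envelope illustration (NOT the palaeoclimate model used in the study).
# Standard logarithmic CO2 forcing approximation: dF = 5.35 * ln(C / C0) W/m^2.
# The sensitivity parameter `lam` (~0.8 K per W/m^2) is an assumed round value.

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) of a CO2 level relative to a 280 ppm baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def warming(c_ppm, lam=0.8, c0_ppm=280.0):
    """Equilibrium temperature response (K) under the assumed sensitivity."""
    return lam * co2_forcing(c_ppm, c0_ppm)

for c in (1000.0, 1120.0, 1680.0):
    print(f"{c:6.0f} ppm: forcing {co2_forcing(c):5.2f} W/m^2, "
          f"warming {warming(c):4.1f} K vs 280 ppm baseline")
```

Because the forcing grows with the logarithm of concentration, each further increment of CO2 buys progressively less warming, which is why reaching polar temperatures of ca. 12 degrees Celsius in the simulations required concentrations well above the previously assumed 1000 ppm.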

Accordingly, the study shows both the enormous potency of the greenhouse gas carbon dioxide, and how essential the cooling effects of today's ice sheets are. "We now know that there could easily be four straight months without sunlight in the Cretaceous. But because the carbon dioxide concentration was so high, the climate around the South Pole was nevertheless temperate, without ice masses," explains co-author Dr Torsten Bickert, a geoscientist at the University of Bremen's MARUM research centre.

The big question now is: if it became so warm in the Antarctic back then, what caused the climate to subsequently cool so dramatically to form ice sheets again? "Our climate simulations haven't yet provided a satisfactory answer," says Gerrit Lohmann. Finding the causes of these tipping points is now a key challenge for the international climate research community.

Credit: 
Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research

Measures for care of cancer patients during COVID-19 outbreak in China

What The Viewpoint Says: The authors describe measures taken to reduce the risk of transmitting severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) to medical staff and cancer patients seeking treatment during the COVID-19 outbreak in China.

Authors: Jie Wang, M.D., Ph.D., and Jie He, M.D., of the Chinese Academy of Medical Sciences & Peking Union Medical College in Beijing, are the corresponding authors.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamaoncol.2020.1198)

Editor's Note: Please see the articles for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

New discovery: Evidence for a 90-million-year-old rainforest near the South Pole

video: CT scan of the sediment core, showing sand at the top and tree roots and pollen in situ approximately 30 metres below the seabed

Image: 
AWI/Bremen

Researchers have found unexpected fossil traces of a temperate rainforest near the South Pole 90 million years ago, suggesting the continent had an exceptionally warm climate in prehistoric times.

A team from the UK and Germany, which includes experts from Northumbria University's Department of Geography and Environmental Sciences, discovered a forest soil from the Cretaceous period in the seabed near the South Pole.

Their analysis of the pristinely preserved roots, pollen and spores shows that the world at that time was a lot warmer than previously thought, with rainforests in Antarctica similar to the forests we have in New Zealand today.

The international team's findings are published today (1 April) as the lead story in the scientific journal Nature.

The mid-Cretaceous period is considered the age of the dinosaurs and was the warmest period in the past 140 million years. Sea levels were 170 metres higher than today, and sea surface temperatures in the tropics are believed to have been as high as 35 degrees Celsius. Until now, little was known about the environmental conditions south of the Antarctic Circle.

The evidence of Antarctica's rainforest comes from a core of sediment taken from the seabed near West Antarctica's Pine Island Glacier in 2017.

"During the initial shipboard assessments, the unusual colouration of the sediment layer quickly caught our attention; it clearly differed from the layers above it," said first author Dr Johann Klages, a geologist at the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research in Germany.

The team CT-scanned the sediment core and found a fascinating dense network of roots spreading through the entire soil layer. The 90-million-year-old soil is so well preserved that it contains countless traces of pollen and spores as well as remnants of flowering plants; the researchers could even make out individual cell structures.

Co-author Professor Ulrich Salzmann, a palaeoecologist at Northumbria University, used the preserved pollen and spores to reconstruct the past vegetation and climate. He describes the process of reconstructing past environments and climates as similar to working on a huge jigsaw puzzle, which revealed an amazingly detailed picture of the past Antarctic landscape.

"It was particular fascinating to see the well-preserved diverse fossil pollen and other plant remains in a sediment deposited some 90 million years ago, near the South Pole," he said.

"The numerous plant remains indicate that the coast of West Antarctica was, back then, a dense temperate, swampy forest, similar to the forests found in New Zealand today."

When they pieced together their analyses, the international research team found evidence for a mild climate around 500 miles from the South Pole, with annual mean air temperatures of about 12 degrees Celsius. This is roughly the mean temperature of Hobart, Australia, today. Summer temperatures averaged 19 degrees Celsius and water temperatures in rivers and swamps reached up to 20 degrees. This was despite a four-month polar night, meaning for a third of every year there was no life-giving sunlight at all. They also found that the amount and intensity of rainfall in West Antarctica was similar to that in Wales today.

Such climate conditions could only be achieved with a dense vegetation cover on the Antarctic continent and the absence of any major ice-sheets in the South Pole region. Carbon dioxide concentration in the atmosphere was also far higher than previously assumed.

Co-author, climate modeller Professor Gerrit Lohmann, from Germany's Alfred Wegener Institute said: "Before our study, the general assumption was that the global carbon dioxide concentration in the Cretaceous was roughly 1000 ppm. But in our model-based experiments, it took concentration levels of 1120 to 1680 ppm to reach the average temperatures back then in the Antarctic."

As such, the study shows both the enormous potency of the greenhouse gas carbon dioxide, and how essential the cooling effects of today's ice sheets are.

Scientists are now working to understand what caused the climate to cool so dramatically, to form the ice sheets we see today.

Credit: 
Northumbria University

New 3D cultured cells mimic the progress of NASH

image: Left panel: Expression of Collagen I protein in green indicates that fibrosis, one of the hallmarks of NASH, has formed in the NASH organoids. DAPI staining in blue shows the nuclei of cells.
Right panel: Expression of α-smooth muscle actin protein in green indicates that hepatic stellate cells were activated in the NASH organoids. This is another hallmark of NASH.

Image: 
T. Usui / TUAT

A research team led by scientists from Tokyo University of Agriculture and Technology (TUAT), Japan, has successfully established a 3D cultured tissue that mimics liver fibrosis, a key characteristic of non-alcoholic steatohepatitis (NASH). To make the 3D culture, cells were collected from the liver tissues of NASH model mice. Their findings open up an alternative avenue for developing drugs for NASH patients, identifying new markers for early diagnosis, and better understanding disease progression.

Their findings were published in Biomaterials on Jan 27th, 2020.

In Japan, about 10 million people (8% of the population) are thought to have NASH or to be at high risk for it. In the USA, ballpark estimates indicate that about 3% to 12% of adults have NASH. NASH patients develop fatty liver regardless of alcohol intake and can progress to liver cirrhosis or, ultimately, liver cancer. NASH symptoms include liver tissue inflammation, fat deposition, and fibrosis. Unfortunately, no medications are currently available for treating NASH, making it a major public health problem.

In general, to test which drugs could treat NASH, researchers have fed experimental animals a NASH-inducing diet and then continuously administered candidate drugs to them. Relying on experimental animals to screen drugs, however, is clearly impractical. "To mimic NASH in a dish, we started three-dimensionally growing cells that were collected from liver tissues of NASH model mice", said Dr. Tatsuya Usui, corresponding author of the paper, Senior Assistant Professor, Laboratory of Veterinary Pharmacology, Department of Veterinary Medicine, Faculty of Agriculture, TUAT. "These cells successfully grew in a dish and formed mini-organs called organoids."

To produce NASH organoids in a dish, the team used cells from the livers of NASH mice at three different disease stages: the early stage (fatty liver), the middle stage, and the late stage (advanced fibrosis). The organoids were examined using standard histology methods (HE staining, oil red staining, and Masson's trichrome staining) to visualize cell shapes, lipid production in cells, and connective tissues, respectively. In addition, immunostaining, quantitative PCR, and RNA sequencing were performed to determine the localization and abundance of biomarkers.

"After careful scientific analyzes, we found that these cells' characters in the organoids were very similar to those of NASH liver tissues. Interestingly our NASH organoids also mimic characters of these stages. We therefore concluded that NASH was reproduced in a dish using the organoid culture method for the first time," Usui explained. "We expect that drug discovery targeted for each stage can be done using these organoids. In addition our RNA-sequencing analysis found that several genes were elevated at the all stages of organoids and some others were highly expressed at a specific state (patent application filed). So far, no effective diagnostic bio-markers have been found that accurately reflects the degree of progression of the NASH. Therefore, we expect that new reliable bio-markers for diagnosis can be identified using our NASH organoids", added Usui.

Credit: 
Tokyo University of Agriculture and Technology

The discovery of new compounds that act on the circadian clock

image: New small molecules KL101 and TH301 target CRY1 and CRY2, respectively, to control the circadian clock.

Image: 
Issey Takahashi | ITBM, Nagoya University

The circadian clock controls a variety of biological phenomena that occur during the course of the day, such as sleeping and waking. Perturbation of the circadian clock has been associated with many diseases such as sleep disorders, metabolic syndrome, and cancer. The development of small-molecule compounds to regulate specific components of the circadian clock facilitates the elucidation of the molecular basis of clock function, and provides a platform for the therapeutic treatment of clock-related diseases.

In this study, the research team discovered two small molecules, KL101 and TH301, that lengthen the period of the circadian clock. They found that KL101 and TH301 are the first compounds to selectively target the clock components CRY1 and CRY2, respectively. By using X-ray crystallography to determine the structures, they revealed how KL101 and TH301 bind to CRY1 and CRY2.

However, additional experiments were required to determine the mechanism of CRY1 and CRY2 selectivity. It was found that the disordered tail regions of CRY proteins impart compound selectivity. Additionally, in collaboration with Project Associate Professor Megumi Hatori and Postdoctoral Fellow You Lee Son of the Keio University School of Medicine, they found that CRY1 and CRY2 are required for the differentiation of brown adipocytes, and both KL101 and TH301 are expected to provide a promising foundation for the therapeutic treatment of obesity.

Credit: 
Institute of Transformative Bio-Molecules (ITbM), Nagoya University

Graphene-based actuator swarm enables programmable deformation

image: (a) Schematic illustration of the fabrication of patterned SU-8/GO bilayer film using UV lithography. (b) The paper model of patterned SU-8/GO ribbon and its predictable moisture-responsive deformation under humidity actuation.

Image: 
©Science China Press

Actuators that can convert various environmental stimuli into mechanical work have shown great potential for developing smart devices such as soft robots, micro-electromechanical systems (MEMS), and automatic lab-on-a-chip systems. Generally, bilayer structures are widely used for the design and fabrication of stimuli-responsive actuators. In the past decade, to pursue fast and large-scale deformation, great efforts have been devoted to the development of novel smart materials. To date, various stimuli-responsive materials and structures have been successfully developed and employed in bimorph actuators.

Recently, graphene and graphene oxide (GO), which possess a series of outstanding physical and chemical properties, have emerged as a new type of smart material for actuator design. Various graphene-based bimorph actuators have been successfully reported. However, these actuators are only capable of simple deformation, such as bending, and little attention has been paid to the refined control of their deformation. Although some previous works have reported that the bending direction can be controlled by a patterned constrained layer, the deformation there is passively restricted by anisotropic mechanical resistance. The development of bimorph actuators that enable active and programmable deformation remains a challenging task.

In a new paper published in the Beijing-based National Science Review, scientists at Jilin University and Tsinghua University present a self-healing graphene actuator swarm that enables programmable 3D deformation by integrating SU-8 pattern arrays with GO. Unlike previously published works, the actuator swarm can realize active and programmable deformation under moisture actuation. In this work, the SU-8 pattern arrays can be fabricated into any desired structures, in which an individual SU-8 pattern can be considered an inert layer. In combination with the bottom GO layer, each SU-8 structure forms an individual bimorph actuator and deforms actively under stimulation. In this regard, the SU-8/GO bilayer arrays can be considered a swarm of actuators (actuator-1, actuator-2, ..., actuator-n). Under external stimulation, each actuator deforms individually, and the deformation of the entire structure is the collective coupling and coordination of the actuator swarm. Therefore, by controlling the size, shape and orientation of the SU-8 patterns, more complex deformations can be programmed. This work proposes a new way to program the deformation of bilayer actuators, expanding the capabilities of existing bimorph actuators for applications in various smart devices.

Credit: 
Science China Press

Blocking the iron transport could stop tuberculosis

image: IrtAB (purple/turquoise/blue) sits in the inner membrane of M. tuberculosis and imports iron-loaded mycobactin (yellow/orange) from the host cell into the bacterial cell, where iron is released.

Image: 
Imre Gonda, University of Zurich

One of the most devastating pathogens that lives inside human cells is Mycobacterium tuberculosis, the bacillus that causes tuberculosis. According to the World Health Organization, 1.5 million people died in 2019 from this disease that generally affects the lungs. The rise of multidrug resistant M. tuberculosis strains, which are resistant to many of the most effective anti-tuberculosis drugs, is particularly worrying. In other words, novel drugs to treat tuberculosis are urgently needed.

Tuberculosis bacteria need iron to survive

All living organisms, including pathogens, need iron to survive. When a human cell is infected by pathogens like M. tuberculosis, it reduces the iron concentration to a minimum and thereby tries to starve the invader. The tuberculosis bacteria, in turn, start to release small molecules called mycobactins. These can bind free iron extremely well and thus steal it from the host cell. The iron captured by mycobactin is then transported into the bacteria by a protein named IrtAB.

A team of researchers led by Markus Seeger, professor at the Institute of Medical Microbiology of the University of Zurich (UZH), has now analyzed in detail the protein responsible for transporting iron from the infected host cell into the bacteria. "The transport protein, which is located in the bacterial membrane, is essential for the survival of the pathogens. If IrtAB is absent or not functioning, M. tuberculosis can no longer reproduce inside the human cell", says Seeger.

Iron transport protein works in the opposite direction

Using a combination of cryo-electron microscopy and X-ray crystallography, the researchers solved for the first time a high-resolution structure of the transport protein IrtAB. This analysis was done in collaboration with Ohad Medalia, professor at the Department of Biochemistry of UZH. According to its spatial structure, IrtAB belongs to the so-called ABC exporters, which are typically involved in the efflux of molecules out of the bacterial cell. "However, we were able to show that IrtAB in fact imports mycobactins into M. tuberculosis. It therefore transports molecules in the opposite direction than expected," says Markus Seeger.

Together with scientists from the University of Texas, USA, the research team identified an additional peculiarity of the transport protein IrtAB: It can modify the iron bound to mycobactin after it is imported into the bacteria. The iron is thus released inside the cell and the empty mycobactin can be recycled.

Inhibiting the iron transport could lead to new tuberculosis drugs

"IrtAB is a potential drug target, because its deletion renders M. tuberculosis inactive and incapable of infection. With our structural and functional elucidation of IrtAB, we opened avenues to develop novel tuberculosis drugs that inhibit the iron transport into the bacteria", Seeger concludes. "In view of Covid-19, a disease that also affects the lungs, tuberculosis will likely play a more important role again in the future. It is quite conceivable that patients weakened by Covid-19 will show increased infection rates with tuberculosis," he adds.

Credit: 
University of Zurich

First complete German shepherd DNA offers new tool to fight disease

Scientists have mapped the genome of the German shepherd, one of the world's most popular canine breeds, after using a blood sample from 'Nala,' a healthy five-year-old German shepherd living in Sydney.

In a paper published today in respected 'big data' journal GigaScience, a global team of researchers from institutions including UNSW Sydney detailed the mammoth task of unravelling the 38 pairs of dog chromosomes to decode the 19,000 genes and 2.8 billion base pairs of DNA, using advanced genetic sequencing technology.

The new genome not only provides science with a more complete biological snapshot of the dog species (Canis lupus familiaris) in general, but also offers a reference for future studies of the typical diseases that afflict this much-loved breed.

Popular choice

UNSW Science's Professor Bill Ballard, an evolutionary biologist who sequenced the genome of the Australian dingo in 2017, says German shepherds are popular choices in the home and the workplace because of their natural intelligence, balanced temperament and protective nature. But after more than a century of breeding for desired physical characteristics, they are particularly vulnerable to genetic diseases.

"One of the most common health problems affecting German shepherds is canine hip dysplasia, which is a painful condition that can restrict their mobility," says Professor Ballard.

"Because German shepherds make such good working dogs, there has been a lot of money spent looking into the causes and predictors of this problem. When working dogs - such as those trained to work with police or to help people with disabilities - end up getting hip dysplasia, then that's a lot of lost time and money that has gone into the training of that dog.

"Now that we have the genome, we can determine much earlier in life whether the dog is likely to develop the condition. And over time, it will enable us to develop a breeding program to reduce hip dysplasia in future generations."

Top dog

Nala, who was described in the paper as "an easy going and approachable 5.5 year old," was selected because she was free of all known genetic diseases, including no sign of hip dysplasia. She was located by well-known TV and radio vet Dr Robert Zammit - credited as an author of this paper - who Professor Ballard says has amassed X-rays and blood samples of more than 600 German shepherds.

"Now we'll be able to look at those hip x-rays and all the DNA of those dogs and compare them back to this healthy reference female," Professor Ballard says.

Nala isn't the first domestic dog to provide a sample for the mapping of the dog genome. In 2003 a poodle called Shadow provided a sample that resulted in a genome that was 80 per cent complete, followed two years later by the first complete mapping of the genome of 'Tasha' the Boxer.

Gene machines

But in the decade and a half since, technology has vastly improved to the point that the number of gaps - or regions of DNA bases that are unreadable - has fallen dramatically, making the mapping of Nala's genes the most complete yet.

"The biggest difference between the mapping today and in 2005 is that we now use long read sequencing," says Professor Ballard.

"The Boxer's genome was put together with 'Sanger' sequencing, which can read about 1000 bases in length at a time, while the technology that is available today - Next Generation sequencing - can read up to 15,000 bases.

"What this means is if you've got a region of genes that is duplicated and running more than 1000 bases, Sanger sequencing will not be able to tell you which part of the genes that particular sequence comes from. So whereas there were about 23,000 gaps in Sanger's Boxer genome, the Next Gen sequencer had just over 300."

Bred for success

The German shepherd genome is also an advance on 2005's Boxer genome because of the breed itself. Because Boxers are more specialised, with more inbreeding in their genetic history, the German shepherd's genome is more generic. The authors believe that this will provide a better understanding of the evolution of dog breeds in general.

Professor Ballard reckons this will not be the last time a domestic dog breed's genome is sequenced.

"I would expect that as the costs come down, all the major breeds will have a genome mapped within 10 years, because this will help identify specific diseases, and lots of breeds have known specific diseases."

Credit: 
University of New South Wales

Urban dogs are more fearful than their cousins from the country

Fearfulness is one of the most common behavioural disorders in dogs. As an emotion, fear is a normal and vital reaction that helps individuals survive in threatening circumstances. When the fearfulness is excessive and disturbs the dog's life, it is referred to as a behavioural problem. Excessive fearfulness can significantly impair the dog's welfare, and it is also known to weaken the relationship between dog and owner.

Social fearfulness in dogs is particularly associated with fearfulness related to unfamiliar human beings and dogs. At the University of Helsinki, risk factors predisposing dogs to social fearfulness were investigated with the help of a dataset pertaining to nearly 6,000 dogs. The dataset was selected from a larger set of data, a behavioural survey encompassing almost 14,000 dogs.

Based on the survey, inadequate socialisation of puppies to various situations and stimuli had the strongest link with social fearfulness. The living environment also appears to make a difference: dogs living in urban environments were observed to be more fearful than dogs living in rural areas.

"This has not actually been previously investigated in dogs. What we do know is that human mental health problems occur more frequently in the city than in rural areas. However, further studies are needed before any more can be said about causes pertaining to the living environment," says Jenni Puurunen, a postdoctoral researcher at the Faculty of Veterinary Medicine, University of Helsinki.

Supporting prior research evidence, social fearfulness was demonstrated to be more common among neutered females and small dogs.

Alongside size and gender, activity is another factor associated with fearfulness. Fearful dogs were less active than bolder ones, and their owners also involved them in training and other activities significantly less often. Professor Hannes Lohi from the University of Helsinki wonders whether this is a cause or a consequence.

"Activity and stimuli have already been found to have a positive effect on behaviour, in both dogs and humans. Of course, the lesser activity of fearful dogs can also be down to their owners wanting to avoid exposing their dogs to stressful situations. It may be that people just are not as active with fearful dogs," Lohi points out.

Furthermore, significant differences between breeds were identified in the study. Spanish Water Dogs and Shetland Sheepdogs expressed social fearfulness the most, while Wheaten Terriers were among the bravest breeds. The Cairn Terrier and the Pembroke Welsh Corgi expressed little fearfulness towards other dogs.

"Differences between breeds support the notion that genes have an effect on fearfulness, as well as on many other mental health problems. This encourages us to carry out further research especially in terms of heredity. All in all, this study provides us with tools to improve the welfare of our best friend: diverse socialisation in puppyhood, an active lifestyle and carefully made breeding choices can significantly decrease social fearfulness," Lohi sums up.

Professor Lohi's group investigates the epidemiology of canine behaviour, as well as related environmental and genetic factors and metabolic changes.

Credit: 
University of Helsinki

Consumption of 3-6 eggs/week lowers the risk of cardiovascular disease and death

image: Associations of egg consumption with risk of CVD endpoints and all-cause mortality

Image: 
©Science China Press

Eggs have been acknowledged as a good source of high-quality proteins and contain bioactive components beneficial for health, while they are also loaded with abundant cholesterol in the yolks, making the public hesitant about consuming whole eggs. Up to now, most studies exploring the association of egg consumption with incident CVD or total death were conducted in high-income countries and findings were inconsistent across populations and CVD subtypes. Accordingly, no consensus has been reached on the recommendation of egg consumption around the world.

The current study, conducted by Xia and her colleagues from Fuwai Hospital, Chinese Academy of Medical Sciences, suggested that there are U-shaped relationships between egg consumption and the risks of incident CVD and total death among the general Chinese population, with those consuming 3-6 eggs/week at the lowest risk. More specifically, consumption of ≥10 eggs/week was associated with 39% and 13% higher risk of incident CVD and total death, respectively.

In addition, the researchers pointed out that the influence of egg consumption seemed to differ across CVD subtypes. Individuals with higher egg consumption were more likely to have an increased risk of coronary heart disease (CHD) and ischemic stroke, while an elevated risk of hemorrhagic stroke was found only among those with lower consumption.
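The "U-shaped relationship" reported here means risk is elevated at both very low and very high intakes, with a minimum in between. A minimal sketch of how such a curve can be modelled, using made-up relative-risk values rather than the study's actual estimates:

```python
# Hypothetical, illustrative numbers only: NOT the China-PAR estimates.
# They are chosen merely to show what a U-shaped dose-response looks like.
import numpy as np

eggs_per_week = np.array([0, 2, 4, 6, 8, 10, 12])
relative_risk = np.array([1.15, 1.05, 1.00, 1.00, 1.08, 1.20, 1.39])

# Fit a quadratic dose-response curve; a positive leading coefficient
# gives the "U" shape, and the vertex marks the lowest-risk intake.
a, b, c = np.polyfit(eggs_per_week, relative_risk, 2)
lowest_risk_intake = -b / (2 * a)         # vertex of the parabola

print(a > 0)                              # True: the curve opens upward
print(round(lowest_risk_intake, 1))       # minimum within 3-6 eggs/week here
```

In practice, epidemiologists typically model such associations with Cox regression and spline terms rather than a plain quadratic; this toy fit only illustrates the shape of the association.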

The current study was conducted based on the project of Prediction for Atherosclerotic Cardiovascular Disease Risk in China (China-PAR), which was established to estimate the epidemic of CVD and identify the related risk factors in the general Chinese population. A total of 102,136 participants from 15 provinces across China were included, all of whom were free of CVD, cancer or end-stage renal disease at baseline. During up to 17 years of follow-up, 4,848 cases of incident CVD (including 1,273 of CHD and 2,919 of stroke) and 5,511 total deaths were identified, with a follow-up rate of over 90%.

Previous Chinese evidence from the China Kadoorie Biobank (CKB) study indicated that low to moderate intake of eggs (about 5 eggs/week) was significantly associated with a lower risk of CVD in comparison with never or rare consumption (about 2 eggs/week). However, the lack of participants consuming ≥1 egg/day prevented that study from assessing the influence of higher egg consumption. In the China-PAR project, about 25% of participants consumed 3-6 eggs/week, and the percentages of participants in the lowest and highest consumption categories were 12% and 24%, respectively. Benefiting from this wide range of egg consumption, the present study is the first to demonstrate the potential adverse effects of excessive egg intake in the Chinese population.

The removal of limits on dietary cholesterol in the most recent US and Chinese dietary guidelines has provoked considerable reaction. Both the American Heart Association and the Chinese Preventive Medicine Association subsequently released scientific reports emphasizing that "dietary cholesterol should not be given a free pass to be consumed in unlimited quantities". Considering the rapid increase of both cholesterol intake and hypercholesterolemia prevalence in China, measures should be taken to encourage the public to limit dietary cholesterol intake. Meanwhile, those with rare egg consumption could be encouraged to eat a little more in the future. This novel evidence should be considered in updates of guidelines on dietary cholesterol and CVD prevention for the general Chinese population, and probably for other populations in low- and middle-income countries.

Credit: 
Science China Press

Possible lives for food waste from restaurants

image: The researcher team at the University of Cordoba

Image: 
University of Cordoba

More than a third of the food produced ends up being wasted. This situation creates environmental, ethical and financial problems that also undermine food security. Negative effects of waste management, such as bad smells or the emission of greenhouse gases, make the bioeconomy one of the best options for reducing these problems.

Research into the field of the bioeconomy and the search for waste valorization strategies, such as those for agricultural by-products, is the focus of the BIOSAHE (a Spanish acronym for biofuels and energy-saving systems) research group at the University of Cordoba. Led by Professor Pilar Dorado, the group is now taking a step further: it aims to establish the best valorization paths for restaurant food waste. Among the possible lives for restaurant scraps, the researchers are looking to find which is most effective and which provides the most value.

Along these lines, researcher Miguel Carmona and the rest of the BIOSAHE group, including Javier Sáez, Sara Pinzi, Pilar Dorado and Isabel López García, developed a methodology that assesses food waste and selects the best valorization path.

After analyzing food waste from a variety of restaurants of different types and calibres, the main chemical components were characterized, namely starches, proteins, lipids and fibers. The aim of this process was to find out which compounds food waste contains, and in what amounts, in order to link each type of waste to the best option for its transformation.

Once the chemical compounds of the scraps were identified, a statistical study was performed to analyze their variability, that is, how the compounds and their amounts vary from one type of waste to another.

Identifying compound typology and variability makes it possible to predict the optimal valorization process for each type of waste, thus helping industries within the circular economy and the resource valorization sector to make decisions.

In this way, restaurant scraps can begin new lives as biodiesel, electricity or bioplastic. Specifically, the project that Pilar Dorado heads is developing a biorefinery that would, just as oil refineries do, generate biofuel, bioplastic, biolubricants and value-added products for the chemical, electrical and heat industries from restaurant food waste. In this project, in addition to the methodology that characterizes scraps and chooses the best valorization paths, the team has produced a bioplastic that can be used for surgical sutures.

Credit: 
University of Córdoba