Culture

Tel Aviv University researchers discover evidence of biblical kingdom in Arava Desert

image: More than 6 m of copper production waste were excavated at Khirbat en-Nahas, Jordan. The excavated materials from here and other sites were used to track more than four centuries of technological and social evolution in biblical Edom.

Image: 
T. Levy/American Friends of Tel Aviv University (AFTAU)

Genesis 36:31 describes an early, pre-10th century BCE Edomite kingdom: "... the kings who reigned in Edom before any Israelite king reigned." But the archaeological record has led to conflicting interpretations of this text.

Now a Tel Aviv University study published in PLOS ONE on September 18 finds that the kingdom of Edom flourished in the Arava Desert in today's Israel and Jordan during the 12th-11th centuries BCE.

Expert analysis of specimens found in copper production sites in the Arava, led by Prof. Erez Ben-Yosef of TAU's Department of Archaeology and Ancient Near Eastern Cultures and Prof. Tom Levy of the University of California, San Diego, reveals the untold story of an affluent, flourishing society led by a copper "high-tech network."

Copper, used in ancient times to produce tools and weapons, was the most valuable resource in the Ancient Near East. Copper production is a complex process, requiring different stages and levels of expertise.

Prof. Ben-Yosef's team analyzed hundreds of findings from ancient copper mines in Jordan (Faynan) and Israel (Timna) to reconstruct the evolution and refinement of the copper manufacturing industry over 500 years (1300-800 BCE), a span straddling the end of the second millennium and the beginning of the first millennium BCE. They identified dramatic changes in the copper slag discovered at the Arava sites.

"Using technological evolution as a proxy for social processes, we were able to identify and characterize the emergence of the biblical kingdom of Edom," explains Prof. Ben-Yosef. "Our results prove it happened earlier than previously thought and in accordance with the biblical description."

Prof. Ben-Yosef's analyses of copper slag, the waste of copper extraction by smelting, show a clear statistical fall in the amount of copper in the slag over time, indicating that production had become expertly streamlined for efficiency. The researchers attribute this sudden improvement to one of the most famous Egyptian invasions of the Holy Land: the military campaign of Pharaoh Shoshenq I (the biblical "Shishak"), who sacked Jerusalem in the 10th century BCE.
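For readers curious how such a trend is quantified, here is a minimal sketch, using made-up numbers rather than the study's measurements, of a regression test for declining copper content in slag over time:

```python
# Toy illustration (hypothetical data, not the study's measurements):
# test whether copper content in slag declines over time, as a proxy
# for increasing smelting efficiency.
import numpy as np
from scipy import stats

# Hypothetical slag samples: approximate year BCE and % copper left in slag
years_bce = np.array([1300, 1250, 1200, 1150, 1100, 1050, 1000, 950, 900, 850])
pct_copper = np.array([2.1, 2.0, 1.9, 1.8, 1.7, 1.5, 1.2, 0.9, 0.8, 0.7])

# Regress copper content on elapsed time (years since 1300 BCE)
elapsed = 1300 - years_bce
result = stats.linregress(elapsed, pct_copper)

print(f"slope: {result.slope:.4f} % copper per year (negative = improving)")
print(f"p-value for a nonzero trend: {result.pvalue:.4g}")
```

A significantly negative slope is what "a clear statistical fall in the amount of copper in the slag" amounts to in quantitative terms.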

The new research indicates that Egypt's intervention in the land of Edom was not accompanied by destruction. Instead, it triggered a "technological leap" that included more efficient copper production and trade.

"We demonstrated a sudden standardization of the slag in the second half of the 10th century BC, from the Faynan sites in Jordan to the Timna sites in Israel, an extensive area of some 2,000 square kilometers, which occurred just as the Egyptians entered the region," says Prof. Ben-Yosef. "The efficiency of the copper industry in the region was increasing. The Edomites developed precise working protocols that allowed them to produce a very large amount of copper with minimum energy."

But Egypt at this time was a weak power, according to Prof. Ben-Yosef. While its influence in the region is clear, it probably did not command the copper industry, which remained a local Edomite enterprise.

"As a consumer of imported copper, Egypt had a vested interest in streamlining the industry. It seems that, through their long-distance ties, they were a catalyst for technological innovations across the region. For example, the camel first appeared in the region immediately after the arrival of Shoshenq I," Prof. Ben-Yosef says.

"Our new findings contradict the view of many archaeologists that the Arava was populated by a loose alliance of tribes, and they're consistent with the biblical story that there was an Edomite kingdom here," Prof. Ben-Yosef concludes. "A flourishing copper industry in the Arava can only be attributed to a centralized and hierarchical polity, and this might fit the biblical description of the Edomite kingdom."

Credit: 
American Friends of Tel Aviv University

Coral reefs and squat lobsters flourished 150 million years ago

image: The shells, or carapaces, of a modern porcelain crab (left) and the oldest known fossil porcelain crab (right). The carapaces are 8 and 3 millimeters wide, respectively. Porcelain crabs are really members of the squat lobster family that became adapted to the same intertidal environment as true crabs.

Image: 
Cristina Robins & Adiel Klompmaker, UC Berkeley

Coral reefs and the abundant life they support are increasingly threatened today, but a snapshot of a coral reef that flourished 150 million years ago shows that many animals were then at their peak of diversity, just offshore of the land ruled by dinosaurs.

In a paper published this month in the Zoological Journal of the Linnean Society, University of California, Berkeley, paleontologists describe a rich reef life during the Jurassic Period in the shallow sea covering what today is central Europe. The reef teemed with animals that snorkelers would recognize today -- fish, crabs, sea urchins, snails, clams and oysters -- but also now-extinct ammonites, which are essentially squid with external shells.

Joining them were 53 distinct species of squat lobsters, the rather unappealing name for creatures that South Americans relish as langostinos.

At a time of peak biodiversity for reef life in what was then the Tethys Sea, parasites also flourished. Some 10 percent of all the squat lobster fossils had bumps on their gill region that betrayed the existence of a blood-sucking parasite that was probably kin to the isopods that parasitize squat lobsters today.

"The reef would have looked similar to coral reefs today, just in terms of diversity and the fact that the corals back then belonged to the same group as the ones we see today," said lead author Cristina Robins, a senior museum scientist at the University of California Museum of Paleontology at UC Berkeley. She personally has described more than 50 new fossil species of squat lobsters, or galatheoids, over the past decade: about 30 percent of known fossil species.

"Squat lobsters became very diverse for the first time in Earth's history at the end of the Jurassic, alongside true crabs, which means that the Late Jurassic coral-associated habitats were key ecosystems in the evolution of galatheoids and their parasites," she said.

The diversity of life among these reefs 150 million years ago contrasts with the situation some 50 million years later, revealed by 100 million-year-old Cretaceous fossils from Spain. The diversity of squat lobsters in reefs was lower in the Cretaceous and also, subsequently, in the Cenozoic Era.

"We know that reefs aren't doing well today because of coral bleaching and other factors," said co-author Adiel Klompmaker, a project scientist in the Department of Integrative Biology. "If these ecosystems continue to deteriorate, it is very likely that associated organisms, including the squat lobsters, are going to take a big hit, as well, in terms of their abundance and diversity."

Squat lobsters: not lobsters, not crabs

As one of just a few experts worldwide on squat lobster fossils, Robins admits that these animals -- living and extinct -- are underappreciated and understudied, even though they are one of the most diverse groups of macrocrustaceans living today, with about 1,250 known species.

On reefs, where most people might encounter them, squat lobsters tend to be about the size of the fingernail on your little finger and are easily missed. But some species living in the waters of Chile, Peru and Norway grow up to a half-foot long and are harvested and sold as langostinos, a marketing term for the meaty tails of these and other lobster-like marine creatures.

They are found worldwide in many ocean environments, including the continental shelf and the deep sea. Large ones often congregate by the hundreds at hydrothermal vents in the abyss miles under the water's surface.

If you did see a squat lobster, you could easily mistake it for either a crab or a true lobster, though it is neither. The often-colorful porcelain crabs, or Porcellanidae -- popular for salt-water aquariums -- are squat lobsters but look nearly identical to true crabs, distinguished primarily by an easily overlooked tail tucked underneath. On the other hand, many squat lobsters look like small-clawed versions of the Maine lobster and have a similarly meaty and tasty, though smaller, tail.

"Most of the squat lobsters look like a crayfish: a lobster, but somewhat squished," Robins said. "But the squat lobsters have secondarily evolved a crab-like body in the porcelain crabs. That body shape helps in rocky intertidal areas."

In her quest to understand the diversity of these animals since they first appeared about 180 million years ago, Robins has studied fossils in collections around the world, but was blown away by a collection of 150 million-year-old fossils she stumbled across at the Vienna Museum of Natural History in Austria.

"I opened up several of their storage cabinets, and they contained nearly 7,000 decapods, a diverse group within crustaceans," Robins said. "The squat lobsters were a pretty high percentage of the number of decapods there: about 2,350 fossils, or a third. This collection is really special, just because they had a very dedicated collector who collected everything, so we have a really great snapshot of what the squat lobster fauna looked like, as well as some of the associated reef fauna."

The collection, one of the most diverse coral reef decapod fossil collections in the world, contains the oldest specimens of four of the six known families of squat lobsters. In their paper, the team described two of these: a new species of porcelain crab that predates the previously oldest known porcelain crab by 50 million years, and the oldest member of the Galatheidae, extending that family's record back by 25 million years.

Klompmaker, who is interested in the interactions between parasites and their hosts, was amazed by the isopod parasites visible as swellings on the carapaces of 10 percent of the squat lobster fossils. Today, parasites on squat lobsters are usually much less prevalent.

"Parasites such as isopods capitalized on the high abundance of their host and infected squat lobsters at rates not seen previously," he said. "This swelling in the gill region of the lobster's shell is caused a blood-sucking parasitic isopod crustacean, distant relatives and look-alikes of modern pill bugs. These are some of the oldest records of parasitism in galatheoids."

Robins and Klompmaker plan to continue their exploration of fossil squat lobsters in museum collections around the world, knowing that there are many undescribed species that can tell us about reef life in the past and the ever-present warfare between parasites and their hosts.

Credit: 
University of California - Berkeley

Study gives the green light to the fruit fly's color preference

video: This time-lapse video shows the fruit flies' color choices during an eight-hour period, beginning four hours before lights turn on.

Image: 
Stanislav Lazopulo/University of Miami Department of Physics

For more than a century, the humble and ubiquitous fruit fly has helped scientists shed light on human genetics, disease, and behavior. Now a new study by University of Miami researchers reveals that the tiny, winged insects have an innate time- and color-dependent preference for light, raising the intriguing possibility that our own color choices depend on the time of day.

In a study published in the journal Nature on Wednesday, the researchers made two unexpected discoveries. First, they found that, given a choice, fruit flies are drawn to green light early in the morning and late in the afternoon, when they are most active, and to red, or dim light, in midday, when like many humans, they slow down to eat and perhaps take a siesta.

Much to the researchers' surprise, they also found that fruit flies, Drosophila melanogaster, demonstrate a "robust avoidance" for blue light throughout the day, a finding that turns a decades-long assumption on its head. Previous experiments dating back to the 1970s determined that fruit flies are attracted to blue light, the main driver for the circadian clock, or the genetic 24-hour timekeeper that controls the lives of humans and most other animals.

"If given a choice, the fact that flies would not choose blue is surprising, but the most surprising thing, which is relevant not just to flies, but to color preference in general, is the fact that color preference changes with time of day," said senior author Sheyum Syed, assistant professor of physics, who conceived and designed the study with post-doctoral student Stanislav Lazopulo. "This finding opens the possibility that human color preference also changes with the time of day, which may explain why it's been so difficult to nail down how color guides our choices."

Added study coauthor James D. Baker, research assistant professor of biology who helped supervise the study, "Stan has shown that these animals have a very clear preference for different colors of light at different times of day that's repeatable day to day, individual to individual, genotype to genotype. Our research community didn't have any idea that was happening."

Four years ago, while a graduate student in Syed's lab, Lazopulo set out to determine how Drosophila would respond to the colored light they would experience at their leisure in nature. If given a choice, what light would they choose? Would there be a pattern? Would their choices be guided by the circadian clock that guides all organisms?

With assistance from his brother, Andrey, he created an elaborate set of behavioral experiments that involved placing hundreds of single flies into tiny multicolored tubes that had a stopper on one end, food at the other end, and three distinct "rooms"--one green, one red, and one blue--that the insects could freely navigate.

Then he recorded their movements around the clock, through 12 hours of constant light and 12 hours of complete darkness, for as many as two weeks at a time. When Lazopulo reviewed the initial computer analysis of the recordings, he thought he had miscoded the computer program.

"They actually don't like blue light. They run away from blue light," he said. "It was absolutely an unexpected result. Based on all the previous knowledge we were not expecting to have such a preference for green, an avoidance for blue, and such robust patterns in this behavior."

But neither the computer program, nor the video, nor his eyes were flawed. During the day, the flies consistently avoided the blue zones, even when their food was placed in one. Under those circumstances, they would make brief incursions into a blue zone, but only to feed.

In contrast, the flies began to occupy the green zones about two hours after the lights came on in the morning. By midday, their preference for green and their activity diminished, with about half the population split between the green and red, or dim, zones. Then, about an hour before the lights turned off, the flies returned to the green zones and to their more active state. Later, during the lights-off phase, the flies randomly distributed themselves across the three zones, indicating, the researchers said, "that light is essential for generating the observed pattern."
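As a rough illustration of how such round-the-clock tracking data can be summarized (a sketch with placeholder data, not the authors' analysis code), one can bin each fly's zone occupancy by hour and compute the fraction of the population in each colored zone:

```python
# Minimal sketch (hypothetical data layout, not the study's code): summarize
# tracked fly positions as the fraction of flies occupying each colored zone
# per hour of the 24-hour light/dark cycle.
import numpy as np

HOURS = 24
FLIES = 100
rng = np.random.default_rng(0)

# zone[f, h] = 0 (green), 1 (red/dim) or 2 (blue) for fly f during hour h;
# filled with random values here purely as a placeholder.
zone = rng.integers(0, 3, size=(FLIES, HOURS))

# Fraction of flies in each zone for every hour of the day
occupancy = np.stack([(zone == z).mean(axis=0) for z in range(3)])

for h in range(HOURS):
    print(f"h{h:02d}  green={occupancy[0, h]:.2f}  "
          f"dim={occupancy[1, h]:.2f}  blue={occupancy[2, h]:.2f}")
```

In real data of the kind described above, the green fraction would peak in the morning and evening hours, while the blue fraction would stay near zero throughout the lights-on period.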

Lazopulo and Syed, whose lab studies fruit fly behavior to better understand animal sleep, grooming, and color preferences, attributed the disparate results from earlier studies to improvements in long-term tracking methods and to the differences in the design of the experiments, particularly the difference in the time and conditions that the flies had to choose their color preference.

Past researchers, Syed said, tested Drosophila's color preference by releasing the flies into the bottom of a T-shaped vial and giving them 30 seconds to decide which arm of the T to exit--one with a green light and the other with a blue. The UM researchers suspect the flies chose blue under duress, as "an avoidance response to a noxious stimulus."

But now they know that, at their leisure and under more natural conditions, Drosophila prefer green, like the leaves of the fruit trees where, to the frustration of many a farmer, they like to lay their eggs.

Through what Baker called a "tour de force set of experiments" that included a series of genetic manipulations, the researchers also discovered that the fruit fly's color-driven behavior doesn't depend just on its visual system, as previously documented, but also on light-sensitive cells in the insect's abdomen that sense blue. Their internal clock guides the decision to stay in green or choose dim light in the middle of the day. Delete the clock genes, and the fruit flies always stay in green, never switching to dim light in midday.

But even without the clock, they still avoid blue, thanks to those abdominal cells that signal independently of the clock genes.

What this means for humans remains to be seen. But 110 years after embryologist Thomas Hunt Morgan began breeding fruit flies to confirm how genetic traits are inherited, UM researchers have shown there's still much to learn from the common pest that has evolved into the most studied and written-about animal on the planet. In all, 10 scientists have earned six Nobel Prizes for their groundbreaking biological discoveries using fruit flies, whose genetic and physiological makeup is far simpler than that of humans, yet very similar.

In 1933, Morgan earned the first Nobel for discovering the role chromosomes play in heredity; in 2017, a trio of scientists earned the latest one, for isolating the circadian clock genes that control the rhythm of nearly every organism's daily life--not just in the brain, but in almost every cell in the body.

Credit: 
University of Miami

WSU grizzly research reveals remarkable genetic regulation during hibernation

image: Dr. Lynne Nelson (left foreground), and Dr. Charlie Robbins (left side back row) conduct a cardiac ultrasound on a groggy bear during hibernation in past research done at Washington State University's Bear Research, Education, and Conservation Center. Nelson's work contributed to the understanding of a grizzly's unique physiology.

Image: 
Photo by Henry Moore Jr. BCU/WSU

PULLMAN, Wash.--Being a human couch potato can greatly increase fat accumulation, hasten the onset of Type II diabetes symptoms, result in detrimental blood chemistry and cardiovascular changes, and eventually, bring about one's death.

Large hibernators such as bears, however, have evolved to adapt to, and reverse, similar metabolic stressors they face each year before and during hibernation, essentially becoming immune to these ill effects.

New RNA sequencing-based genetic research conducted at Washington State University's Bear Research, Education, and Conservation Center shows that grizzlies express more genes in preparation for, and during, hibernation to cope with such stressors than any other species studied.

This gene-switching superlative holds true even when one corrects for the different sample sizes used in other hibernation studies.

The work was conducted in Pullman, Washington, home of the only university-based captive grizzly bear population in the world. It was published Sept. 13 in Communications Biology, a Springer Nature publication. The WSU scientists biopsied muscle, liver, and fat tissues for the study.

It begins with the wonders of hibernation

For centuries, people have been fascinated with the various species known to hibernate. Science fiction writers describe fantastic space journeys in which humans are placed in hibernation-like states. Medically induced comas get humans past extraordinary trauma or disease states, organs are cooled for storage and transport, and scientists continue to wonder whether hibernation could be induced as a therapeutic tool. A wide variety of species have been studied, including some that 'hibernate' in the warmer months and others that hibernate in winter, whose body temperatures can sometimes drop to near freezing. But not bears. Bears appear to be more like humans.

Hibernation is not, as some assume, simply the equivalent of sleep; it is a very specialized metabolic state that varies by species and by the environment where they hibernate. In all species studied, the mechanism for hibernation is controlled by gene expression.

"Bears and other hibernators have sleep and wake cycles, but these differ in both the type of sleep and the frequency with which they occur," said Professor Heiko Jansen, lead author of the paper.

With grizzlies, the observable details of hibernation are astounding. For nearly five months of hibernation, grizzlies maintain only a slightly lowered body temperature and essentially do not eat, urinate, or defecate. They do, however, give birth and produce milk, and all the while they lose no significant bone or muscle mass. In metabolic terms, during hibernation they are the ultimate recyclers of the waste products that mammals usually have to eliminate or else suffer toxicity from their buildup.

Studies done with humans show that even as little as 24 hours of fasting and confinement to a bed can lead to measurable blood glucose and chemistry changes and bone and muscle loss. Grizzlies though maintain a near normal blood glucose level throughout hibernation. By turning down their sensitivity to insulin, bears can conserve the glucose they produce.

Grizzly cardiac studies conducted previously at WSU show that during hibernation bears' heart rates can slow down to as low as 5 beats per minute, with 12 to 15 being average, while the consistency of their blood resembles thick gravy. Their heart saves energy by essentially confining its pumping to two of its four chambers. Yet, startle a hibernating grizzly, and it can rev its heart up to 100 beats per minute in mere seconds.

Grizzlies have developed this unique set of adaptations in order to survive harsh winters when food is scarce. By hibernating through that period, they expend little energy and survive, give birth, and nurse until food becomes abundant again.

Gene expression in hibernating bears is remarkable

While gene expression in tissues before and during hibernation has been examined in other species, such work had never before been done with grizzlies. The WSU results, while somewhat expected, far exceeded the levels of differential genetic expression seen before.

"The number of differentially expressed genes is striking," said WSU Associate Professor, Joanna Kelley.

Through sequencing RNAs, the team looked at hyperphagia [pronounced HY-per-fay-gee-uh] and subsequent hibernation across six bears. Hyperphagia is the period right before hibernation when bears eat to excess in order to store energy as fat. Bears at this time of year would be considered morbidly obese by human standards.

Among the discoveries was that all three tissues studied had dynamic gene expression changes occurring during hibernation. Perhaps more importantly, they discovered there was a subset of the same genes in all three tissues making the same changes at the same time.

Fat is the tissue that fuels hibernation and probably orchestrates the sparing of other tissues. But, despite the calorie intake and fat accumulation, bears do not suffer the same negative effects people do. Furthermore, they reverse the process by switching genes on and off based upon the season.

Fasting a bear during the active season as if it is time to hibernate does not make the same genes switch on and off like it would in late fall. Feed a bear in hibernation, and the genes can't be fooled then either; hibernation continues.

"Many people assumed that as bears go through hyperphagia, fat just sort of accumulates and sits there as a fuel reservoir," explained Jansen. "In fact, our studies have shown that the fatty tissue is far from inert. Fat is actually very metabolically active being driven by the expression of over 1000 unique genes in fat during hibernation as compared to the level of expression seen during the normal seasonal activities.

"What fat's entire role is in the overall hyperphagia and hibernation process remains an exciting area to continue to explore."

Jansen went on to explain that during the active period and subsequent hyperphagia, the genetic expression varied among the tissues studied. While many genes in fat were being differentially expressed, there were no genes being expressed like that in muscle tissue and only three were expressed differentially in liver tissue.

Differential gene expression also means genes may be upregulated or downregulated depending on the gene, sort of like a panel of light switches being on or off. Of the genes expressed in fatty tissue, more than 2000 were upregulated and about 1800 were downregulated in hibernation compared to the active season.
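As a rough illustration of what "differentially expressed" means in practice (hypothetical thresholds and placeholder data, not the WSU pipeline), genes are typically classified by fold change and statistical significance:

```python
# Illustrative sketch (hypothetical thresholds and data, not the WSU pipeline):
# classify genes as up- or downregulated in hibernation vs. the active season
# from log2 fold changes and adjusted p-values, as RNA-seq tools report them.
import numpy as np

rng = np.random.default_rng(1)
n_genes = 10_000
log2_fc = rng.normal(0, 1.5, n_genes)   # hibernation vs. active season
adj_p = rng.uniform(0, 1, n_genes)      # placeholder adjusted p-values

significant = adj_p < 0.05
upregulated = significant & (log2_fc > 1)     # >2-fold increase
downregulated = significant & (log2_fc < -1)  # >2-fold decrease

print(f"upregulated:   {upregulated.sum()}")
print(f"downregulated: {downregulated.sum()}")
```

The counts the article reports for fatty tissue (more than 2,000 genes up, about 1,800 down) are the outputs of exactly this kind of tally, though the study's actual cutoffs may differ.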

"Seeing the same sets of genes in different tissues being upregulated or downregulated that best serves the animal at the same time suggests there may be a common control mechanism for the process making this more one of an 'on-demand' regulation," Kelley said.

Credit: 
Washington State University

Shape-shifting robots built from smarticles could navigate Army operations

image: Five identical "smarticles" -- smart active particles -- interact with one another in an enclosure. By nudging each other, the group -- dubbed a "supersmarticle" -- can move in random ways. The research could lead to robotic systems capable of changing their shapes, modalities and functions.

Image: 
Rob Felt, Georgia Tech

RESEARCH TRIANGLE PARK, N.C. -- A U.S. Army project took a new approach to developing robots -- researchers built robots entirely from smaller robots known as smarticles, unlocking the principles of a potentially new locomotion technique.

Researchers at Georgia Institute of Technology and Northwestern University published their findings in the journal Science Robotics.

The research could lead to robotic systems capable of changing their shapes, modalities and functions, said Sam Stanton, program manager, complex dynamics and systems at the Army Research Office, an element of U.S. Army Combat Capabilities Development Command's Army Research Laboratory, the Army's corporate research laboratory.

"For example, as envisioned by the Army Functional Concept for Maneuver, a robotic swarm may someday be capable of moving to a river and then autonomously forming a structure to span the gap," he said.

The 3D-printed smarticles -- short for smart active particles -- can do just one thing: flap their two arms. But when five of these smarticles are confined in a circle, they begin to nudge one another, forming a robophysical system known as a "supersmarticle" that can move by itself. Adding a light or sound sensor allows the supersmarticle to move in response to the stimulus -- and even be controlled well enough to navigate a maze.

The notion of making robots from smaller robots -- and taking advantage of the group capabilities that arise by combining individuals -- could provide mechanically based control over very small robots. Ultimately, the emergent behavior of the group could provide a new locomotion and control approach for small robots that could potentially change shapes.

"These are very rudimentary robots whose behavior is dominated by mechanics and the laws of physics," said Dan Goldman, a Dunn Family Professor in the School of Physics at the Georgia Institute of Technology and the project's principal investigator. "We are not looking to put sophisticated control, sensing and computation on them all. As robots become smaller and smaller, we'll have to use mechanics and physics principles to control them because they won't have the level of computation and sensing we would need for conventional control."

The foundation for the research came from an unlikely source: a study of construction staples. By pouring these heavy-duty staples into a container with removable sides, former doctoral student Nick Gravish -- now a faculty member at the University of California San Diego -- created structures that would stand by themselves after the container's walls were removed.

Shaking the staple towers eventually caused them to collapse, but the observations led to a realization that simple entangling of mechanical objects could create structures with capabilities well beyond those of the individual components.

"Dan Goldman's research is identifying physical principles that may prove essential for engineering emergent behavior in future robot collectives as well as new understanding of fundamental tradeoffs in system performance, responsiveness, uncertainty, resiliency and adaptivity," Stanton said.

The researchers used a 3D printer to create battery-powered smarticles, which have motors, simple sensors and limited computing power. The devices can change their location only when they interact with other devices while enclosed by a ring.

"Even though no individual robot could move on its own, the cloud composed of multiple robots could move as it pushed itself apart and shrink as it pulled itself together," Goldman said. "If you put a ring around the cloud of little robots, they start kicking each other around and the larger ring -- what we call a supersmarticle -- moves around randomly."

The researchers noticed that if one small robot stopped moving, perhaps because its battery died, the group of smarticles would begin moving in the direction of that stalled robot. The researchers learned they could control the movement by adding photo sensors that halt the arm flapping when a strong beam of light hits one of the robots.

"If you angle the flashlight just right, you can highlight the robot you want to be inactive, and that causes the ring to lurch toward or away from it, even though no robots are programmed to move toward the light," Goldman said. "That allowed steering of the ensemble in a very rudimentary, stochastic way."

In future work, Goldman envisions more complex interactions that use the simple sensing and movement capabilities of the smarticles. "People have been interested in making a certain kind of swarm robots that are composed of other robots," he said. "These structures could be reconfigured on demand to meet specific needs by tweaking their geometry."

Swarming formations of robotic systems could be used to enhance situational awareness and mission-command capabilities for small Army units in difficult-to-maneuver environments like cities, forests, caves or other rugged terrain.

Credit: 
U.S. Army Research Laboratory

Study shows pre-disaster collaboration key to community resilience

New Orleans, LA - LSU Health New Orleans-led research reports that the key to improving community resiliency following disasters is a dynamic partnership between community-based organizations and public health agencies established pre-disaster. The results are published in the American Journal of Public Health.

"Promoting community resilience to disasters has recently become a national public health priority," notes Benjamin Springgate, MD, MPH, Chief of Community and Population Medicine at LSU Health New Orleans. "This is especially important in South Louisiana, at risk for both natural and man-made disasters, and its vulnerable populations."

The Community Resilience Learning Collaborative and Research Network (C-LEARN) is a multiphase study examining opportunities to improve community resilience to the threats of disaster in South Louisiana. Although community and faith-based organizations are trusted and often fill vital roles when local, state or federal response to disasters is delayed or inadequate, members of these organizations feel that local health authorities do not include them in pre-disaster planning.

During Phase I of the study, the researchers interviewed 48 participants from 12 parishes who were employees or volunteers at community-based organizations focused on health, social services or community development. Participants represented 47 agencies that provide primary care, housing and homelessness services, social services and advocacy, faith-based services including spiritual, social and cultural needs, consulting, funding and education. Key themes included maintaining continuous, effective communication and year-round network building; forging pre-disaster strategic partnerships; providing appropriate education and training; and building an integrated system that enables rapid disaster response.

"Many of those we interviewed do not specialize in disaster management, yet their firsthand experience in disaster response after hurricanes Katrina and Rita, the BP Oil Disaster and the 2016 Great Flood in Baton Rouge offered invaluable insights," adds Dr. Springgate, who is also the principle investigator of C-LEARN.

One of the new insights participants revealed was that preventive coordination of community members, faith-based organizations, nonprofits, academic institutions, hospitals, police, public health services, neighborhood associations and government agencies contributes to planning and response systems that react to disasters quickly, equitably and effectively.

The authors conclude, "Results of this study indicate that to most effectively bolster community resilience in disaster-prone areas, community-based organizations and public health agencies must maintain continuous, effective communication and year-round network building, participate in partnerships before a disaster strikes, provide appropriate education and training, and contribute to building an integrated system that enables rapid disaster response."

"By strengthening interagency relationships between sectors, we are now conducting Phase II of our Community Resilience Learning Collaborative and Research Network study testing whether agencies are better equipped to support each other and address their communities' diverse needs," says Springgate.

Credit: 
Louisiana State University Health Sciences Center

Unlock your smartphone with earbuds

image: A University at Buffalo-led research team is developing EarEcho, a biometric tool that uses modified wireless earbuds to authenticate smartphone users via the unique geometry of their ear canal.

Image: 
University at Buffalo

BUFFALO, N.Y. -- Visit a public space. Chances are you'll see people wearing earbuds or earphones.

The pervasiveness of this old-meets-new technology, especially on college campuses, intrigued University at Buffalo computer scientist Zhanpeng Jin.

"We have so many students walking around with speakers in their ears. It led me to wonder what else we could do with them," says Jin, PhD, associate professor in the Department of Computer Science and Engineering in the UB School of Engineering and Applied Sciences.

That curiosity has led to EarEcho, a biometric tool under development by a research team led by Jin that uses modified wireless earbuds to authenticate smartphone users via the unique geometry of their ear canal.

A prototype of the system, described in this month's Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, a journal published quarterly by the Association for Computing Machinery, proved roughly 95% effective.

UB's Technology Transfer office has filed a provisional patent application for the technology.

How EarEcho works

The team built the prototype with off-the-shelf products, including a pair of in-ear earphones and a tiny microphone. Researchers developed acoustic signal processing techniques to limit noise interference, and models to share information between EarEcho's components.

When a sound is played into someone's ear, the sound propagates through and is reflected and absorbed by the ear canal -- all of which produce a unique signature that can be recorded by the microphone.

"It doesn't matter what the sound is, everyone's ears are different and we can show that in the audio recording," says Jin. "This uniqueness can lead to a new way of confirming the identity of the user, equivalent to fingerprinting."

The information gathered by the microphone is sent by the earbuds' Bluetooth connection to the smartphone where it is analyzed.

To test the device, 20 subjects listened to audio samples that included a variety of speech, music and other content. The team conducted tests in different environmental settings (on the street, in a shopping mall, etc.) and with the subjects in different positions (sitting, standing, head tilted, etc.).

EarEcho proved roughly 95 percent effective when given 1 second to authenticate the subjects. The score improved to 97.5 percent when it continued to monitor the subject in 3-second windows.
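While the paper's exact signal processing is not described in this release, the general idea can be sketched as follows: treat the ear canal's acoustic response as a spectral fingerprint and compare it against an enrolled template. All function names, features and thresholds here are hypothetical:

```python
# Conceptual sketch (hypothetical features and threshold; EarEcho's actual
# signal processing is not public in this release): turn the in-ear echo of a
# known probe sound into a spectral fingerprint and compare it with an
# enrolled template using cosine similarity.
import numpy as np

def ear_print(probe: np.ndarray, echo: np.ndarray) -> np.ndarray:
    """Magnitude of the ear canal's transfer function, probe -> echo."""
    transfer = np.fft.rfft(echo) / (np.fft.rfft(probe) + 1e-9)
    mag = np.abs(transfer)
    return mag / np.linalg.norm(mag)

def authenticate(template: np.ndarray, candidate: np.ndarray,
                 threshold: float = 0.95) -> bool:
    """Accept when the candidate print closely matches the enrolled one."""
    return float(template @ candidate) >= threshold

# Usage with placeholder 1-second recordings at 16 kHz
fs = 16_000
rng = np.random.default_rng(7)
probe = rng.standard_normal(fs)
enrolled_echo = np.convolve(probe, [0.6, 0.3, 0.1])[:fs]  # stand-in ear response
template = ear_print(probe, enrolled_echo)
print(authenticate(template, ear_print(probe, enrolled_echo)))  # True
```

Longer monitoring windows, as in the 3-second case above, would let such a system average several prints before deciding, which is one plausible reason accuracy improved.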

How EarEcho can be used

Theoretically, users could rely on EarEcho to unlock their smartphones, thereby reducing the need for passcodes, fingerprints, facial recognition and other biometrics.

But Jin sees its greatest potential use in continuously monitoring a smartphone user. EarEcho, which works when users are listening to their earbuds, is a passive system, meaning users need not take any action, such as submitting a fingerprint or voice command, for it to work, he says.

Such a system, he argues, is ideal for situations where users are required to verify their identity, such as making mobile payments. It also could eliminate the need to re-enter passcodes or fingerprints when a phone locks after going unused.

"Think about that," says Jin, "just by wearing the earphones, which many people already do, you wouldn't have to do anything to unlock your phone."

Credit: 
University at Buffalo

Learning to read boosts the visual brain

image: Illiterate people in India learning to read

Image: 
Falk Huettig

Reading is a recent invention in the history of human culture--too recent for dedicated brain networks to have evolved specifically for it. How, then, do we accomplish this remarkable feat? As we learn to read, a brain region known as the 'visual word form area' (VWFA) becomes sensitive to script (letters or characters). However, some have claimed that the development of this area takes up (and thus detrimentally affects) space that is otherwise available for processing culturally relevant objects such as faces, houses or tools.

An international research team led by Falk Huettig (MPI and Radboud University Nijmegen) and Alexis Hervais-Adelman (MPI and University of Zurich) set out to test the effect of reading on the brain's visual system. The team scanned the brains of over ninety adults living in a remote part of Northern India with varying degrees of literacy (from people unable to read to skilled readers), using functional Magnetic Resonance Imaging (fMRI). While in the scanner, participants saw sentences, letters, and other visual categories such as faces.

If learning to read leads to 'competition' with other visual areas in the brain, readers should have different brain activation patterns from non-readers--and not just for letters, but also for faces, tools, or houses. 'Recycling' of brain networks when learning to read has previously been thought to negatively affect evolutionarily old functions such as face processing. Huettig and Hervais-Adelman, however, hypothesised that reading, rather than negatively affecting brain responses to non-orthographic (non-letter) objects, may, conversely, result in increased brain responses to visual stimuli in general.

"When we learn to read, we exploit the brain's capacity to form category-selective patches in visual brain areas. These arise in the same cortical territory as specialisations for other categories that are important to people, such as faces and houses. A long-standing question has been whether learning to read is detrimental to those other categories, given that there is limited space in the brain", explains Alexis Hervais-Adelman.

Reading-induced recycling did not detrimentally affect brain areas for faces, houses, or tools--neither in location nor size. Strikingly, the brain activation for letters and faces was more similar in readers than in non-readers, particularly in the left hemisphere (the left ventral temporal lobe).

"Far from cannibalising the territory of its neighbours, the visual word form area (VWFA) is rather overlaid upon these, remaining responsive to other visual categories", explains Falk Huettig. "Thus learning to read is good for you", he concludes. "It sharpens visual brain responses beyond reading and has a general positive impact on your visual system".

Credit: 
Max Planck Institute for Psycholinguistics

How sleepless nights compromise the health of your gut

It is well known that individuals who work night-shifts, or travel often across different time zones, have a higher tendency to become overweight and suffer from gut inflammation. The underlying cause for this robust phenomenon has been the subject of many studies that tried to relate physiological processes with the activity of the brain's circadian clock, which is generated in response to the daylight cycle.

Now, the group of Henrique Veiga-Fernandes, at the Champalimaud Centre for the Unknown in Lisbon, Portugal, discovered that the function of a group of immune cells, which are known to be strong contributors to gut health, is directly controlled by the brain's circadian clock. Their findings were published today in the scientific journal Nature.

"Sleep deprivation, or altered sleep habits, can have dramatic health consequences, resulting in a range of diseases that frequently have an immune component, such as bowel inflammatory conditions", says Veiga-Fernandes, the principal investigator who led the study. "To understand why this happens, we started by asking whether immune cells in the gut are influenced by the circadian clock."

The big clock and the little clock

Almost all cells in the body have an internal genetic machinery that follows the circadian rhythm through the expression of what are commonly known as "clock genes". The clock genes work like little clocks that inform cells of the time of day, thereby helping the organs and systems that the cells make up anticipate what is going to happen -- for instance, whether it's time to eat or sleep.

Even though these cell clocks are autonomous, they still need to be synchronised in order to make sure that "everyone is on the same page". "The cells inside the body don't have direct information about external light, which means that individual cell clocks can be off", Veiga-Fernandes explains. "The job of the brain's clock, which receives direct information about daylight, is to synchronise all of these little clocks inside the body so that all systems are in synch, which is absolutely crucial for our wellbeing".

Among the variety of immune cells that are present in the intestine, the team discovered that Type 3 Innate Lymphoid Cells (ILC3s) were particularly susceptible to perturbations of their clock genes. "These cells fulfill important functions in the gut: they fight infection, control the integrity of the gut epithelium and instruct lipid absorption", explains Veiga-Fernandes. "When we disrupted their clocks, we found that the number of ILC3s in the gut was significantly reduced. This resulted in severe inflammation, breaching of the gut barrier, and increased fat accumulation."

These robust results drove the team to investigate why the number of ILC3s in the gut is so strongly affected by the brain's circadian clock. The answer to this question ended up being the missing link they were searching for.

It's all about being in the right place at the right time

When the team analysed how disrupting the brain's circadian clock influenced the expression of different genes in ILC3s, they found that it resulted in a very specific problem: the molecular zip-code was missing! It so happens that in order to localise to the intestine, ILC3s need to express a protein on their membrane that works as a molecular zip-code. This 'tag' instructs ILC3s, which are transient residents in the gut, where to migrate. In the absence of the brain's circadian inputs, ILC3s failed to express this tag, which meant they were unable to reach their destination.

According to Veiga-Fernandes, these results are very exciting, because they clarify why gut health becomes compromised in individuals who are routinely active during the night. "This mechanism is a beautiful example of evolutionary adaptation", says Veiga-Fernandes. "During the day's active period, which is when you feed, the brain's circadian clock reduces the activity of ILC3s in order to promote healthy lipid metabolism. But then, the gut could be damaged during feeding. So after the feeding period is over, the brain's circadian clock instructs ILC3s to come back into the gut, where they are now needed to fight against invaders and promote regeneration of the epithelium."

"It comes as no surprise then", he continues, "that people who work at night can suffer from inflammatory intestinal disorders. It has all to do with the fact that this specific neuro-immune axis is so well-regulated by the brain's clock that any changes in our habits have an immediate impact on these important, ancient immune cells."

This study joins a series of groundbreaking discoveries produced by Veiga-Fernandes and his team, all drawing new links between the immune and nervous systems. "The concept that the nervous system can coordinate the function of the immune system is entirely novel. It has been a very inspiring journey; the more we learn about this link, the more we understand how important it is for our wellbeing and we are looking forward to seeing what we will find next", he concludes.

Credit: 
Champalimaud Centre for the Unknown

A promising HIV vaccine shows signs of cross-protective benefits

video: The journey towards an effective HIV vaccine. This material relates to a paper that appeared in the Sep. 18, 2019, issue of Science Translational Medicine, published by AAAS. The paper, by G.E. Gray at the University of the Witwatersrand in Johannesburg, South Africa, and colleagues, was titled, "Immune correlates of the Thai RV144 HIV vaccine regimen in South Africa."

Image: 
©South African Medical Research Council, Produced by JP Crouch for Blue Pear Visuals

One of the most successful candidate HIV vaccines to date - initially tested in Thailand, where it had modest effects - showed surprisingly strong efficacy when evaluated in a South African cohort, where a different strain of HIV is known to circulate. The research hints the RV144 vaccine regimen could provide protection against multiple strains of HIV, whose genetic diversity is a challenge for vaccine strategies. Scientists have attempted to create a vaccine for HIV that can provide long-lasting protection, but most candidates have failed to provide substantial benefits in early clinical trials. The RV144 vaccine granted modest protection against the clade B HIV subtype in a clinical trial in Thailand. However, it is unclear whether RV144 could provide benefits to people living in regions such as South Africa that are dominated by different clades of the virus. To investigate, Glenda Gray and colleagues compared immunological data from the RV144 study to an analysis of 100 HIV-negative South Africans who were given the same vaccine in a phase 1b trial. Surprisingly, they discovered that the RV144 regimen stimulated even stronger immune responses in the South Africans while being well-tolerated. Specifically, the vaccine elicited CD4+ T cell and anti-HIV antibody responses that are associated with protection against HIV, irrespective of sex, age or locale. The authors also observed that immune responses waned over time in both studies, suggesting that additional booster doses could help maintain the vaccine's efficacy. More research is needed to see if the vaccine grants protection from infection, but the findings indicate that RV144 could be more adaptable across endemic regions than previously thought.

Credit: 
American Association for the Advancement of Science (AAAS)

Wilderness areas halve extinction risk

image: Areas surrounding the Madidi National Park in the Bolivian Amazon have been identified as a vital 'at risk' wilderness area.


The global conservation community has been urged to adopt a specific target to protect the world's remaining wilderness areas to prevent large scale loss of at-risk species.

A University of Queensland and CSIRO study has found that wilderness areas - where human impact is minimal or absent - halve the global risk of species extinction.

UQ Centre for Biodiversity and Conservation Science Director Professor James Watson said vital wilderness areas could not be restored, so urgent action was needed to ensure these areas were marked for conservation and remained protected.

"Wilderness areas have decreased by more than three million square kilometres - half the size of Australia - since the 1990s," Professor Watson said.

"Once these wilderness areas are gone, they are lost forever."

CSIRO researcher and UQ Adjunct Fellow Dr Moreno Di Marco said wilderness areas acted as a buffer against extinction risk, and the risk of species loss was more than twice as high for biological communities found outside wilderness areas.

"This new research has identified the importance of wilderness areas in hosting highly unique biological communities and representing the only remaining natural habitats for species that have suffered losses elsewhere," he said.

Vital 'at risk' wilderness areas include parts of Arnhem Land, areas surrounding the Madidi National Park in the Bolivian Amazon, partially protected forests in Southern British Columbia, and surrounding savannah areas within the Zemongo Reserve in the Central African Republic.

The researchers used new global biodiversity modelling infrastructure developed at CSIRO, integrated with the latest wilderness map developed by UQ, the University of Northern British Columbia and the Wildlife Conservation Society.

The study provided fine-scale estimates of the probability of species loss around the globe.

Professor Watson said that beyond saving biodiversity, Earth's remaining intact ecosystems are also critical for abating climate change, regulating essential biogeochemical and water cycles, and ensuring the retention of the long-term bio-cultural connections of indigenous communities.

Credit: 
University of Queensland

Brain tumors form synapses with healthy neurons, Stanford-led study finds

Scientists at the Stanford University School of Medicine have shown for the first time that severe brain cancers integrate into the brain's wiring.

The tumors, called high-grade gliomas, form synapses that hijack electrical signals from healthy nerve cells to drive their own growth. Experiments demonstrated that interrupting these signals with an existing anti-epilepsy drug greatly reduced the cancers' growth in human tumors in mice, providing the first evidence for a possible new way to treat gliomas.

A paper describing the findings will be published online Sept. 18 in Nature.

"One of the most lethal aspects of high-grade gliomas is that the cancer cells diffusely invade normal brain tissue so that the tumor and the healthy brain tissue are knitted together," said senior author Michelle Monje, MD, PhD, associate professor of neurology and neurological sciences. The discovery helps explain why gliomas are so intractable, she added. "This is such an insidious group of tumors. They're actually integrating into the brain."

The study's lead author is postdoctoral scholar Humsa Venkatesh, PhD.

Discovering that tumors wire themselves into the brain was "unsettling," Monje said. Still, she said she is optimistic about what the knowledge means for glioma patients. Several drugs already exist for treating electrical-signaling disorders such as epilepsy, and these may prove useful for gliomas, she said. "There is real hopefulness to this discovery," she said. "We've been missing this entire aspect of the disease. Now we have a whole new avenue to explore, one that could complement existing therapeutic approaches."

How the tumors grow

High-grade gliomas form synapses with healthy neurons that transmit electrical signals to the cancerous tissue, the study found. The tumors also contain cell-to-cell electrical connections known as gap junctions. Together, the two types of connections allow electrical signals from healthy nerve cells to be conducted into and amplified within the tumors.

High-grade gliomas include glioblastoma, a brain tumor seen in adults that has a five-year survival rate of 5%; diffuse intrinsic pontine glioma, a pediatric brain tumor with a five-year survival rate below 1%; and other diagnoses such as pediatric glioblastoma and diffuse midline gliomas occurring in the spinal cord and thalamus. Studies published by Monje's team in 2015 and 2017 indicated that high-grade gliomas use normal brain activity to drive their growth.

To learn how this worked, the scientists first analyzed the gene expression of thousands of individual cancer cells biopsied from newly diagnosed glioma patients. The cancer cells strongly increased the expression of genes involved in forming synapses.

The researchers then used electron microscopy, a technique that can reveal tiny details of cell anatomy, to show that structures that look like synapses exist between neurons and glioma cells. To confirm that these synapses indeed connect healthy neurons and malignant glioma cells, the scientists studied mice with cells from human gliomas implanted in their brains. After the glioma tumors had become established, the researchers used antibodies that bound to fluorescent markers expressed by the cancer cells to confirm that synapses go into malignant cells. "We saw very clear neuron-to-glioma synaptic structures," Monje said.

Using brain tissue from mice with human gliomas, the researchers measured the transmission of electrical signals into and through the tumors. They recorded two types of electrical signals: brief signals lasting four to five milliseconds, which are transmitted across a synaptic junction from a healthy neuron to a cancer cell by way of neurotransmitter molecules; and sustained electrical signals lasting one to two seconds that reflect electrical current propagated by a flux of potassium ions across the tumor cells' membranes. The potassium currents are caused by signals from neurons and are amplified by gap junctions that connect the cancer cells in an electrically coupled network.
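Since the two signal classes differ mainly in duration (milliseconds versus seconds), the distinction can be illustrated with a simple duration-based event classifier (a hypothetical sketch, not the lab's analysis code):

```python
# Simplified sketch (hypothetical detection scheme, not the lab's analysis):
# separate the two reported signal classes by duration -- millisecond-scale
# synaptic currents vs. second-scale potassium currents.
import numpy as np

def classify_events(trace: np.ndarray, fs: float, thresh: float):
    """Label each above-threshold event by duration.

    Assumes the trace starts and ends below the threshold.
    """
    above = trace > thresh
    # event boundaries: indices where the signal crosses the threshold
    edges = np.flatnonzero(np.diff(above.astype(int)))
    labels = []
    for start, stop in zip(edges[::2], edges[1::2]):
        dur = (stop - start) / fs
        labels.append("synaptic" if dur < 0.05 else "potassium")
    return labels

# Placeholder trace sampled at 10 kHz: one 5 ms event, one 1.5 s event
fs = 10_000
trace = np.zeros(3 * fs)
trace[1000:1050] = 1.0          # 5 ms  -> "synaptic"
trace[fs:fs + 15_000] = 1.0     # 1.5 s -> "potassium"
print(classify_events(trace, fs, thresh=0.5))  # ['synaptic', 'potassium']
```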

The scientists also conducted experiments using a dye to visualize the gap-junction-connected cells, and used drugs capable of blocking gap junctions to confirm that this type of junction existed between the tumor cells and mediated their electrical coupling. Further experiments measuring changes in calcium levels confirmed that the tumor cells are electrically coupled via gap junctions.

"The live calcium imaging made it strikingly clear that this cancer is an electrically active tissue," said Venkatesh, the lead author. "It was startling to see that in cancer tissue."

The researchers showed that about 5-10% of glioma cells receive synaptic signals, and about 40% exhibit prolonged potassium currents that are amplified by gap junction interconnections such that half of all tumor cells have some type of electrical response to signals from healthy neurons.

Possible drug therapies

In humans whose brain electrical activity was measured before surgery to remove glioblastoma tumors, and in mice with human gliomas, the researchers saw hyper-excitability of healthy neurons near the tumors, a finding that could help explain why human glioma patients are prone to seizures.

Using optogenetic techniques, which relied on laser light to activate the cancer cells in mice implanted with human gliomas, the researchers demonstrated that increasing electrical signals into the tumors caused more tumor growth. Proliferation of the tumors was largely prevented when glioma cells expressed a gene that blocked transmission of the electrical signals.

Existing drugs that block electrical currents also reduced growth of high-grade gliomas, the research found. A seizure medication called perampanel, which blocks activity of neurotransmitter receptors on the receiving end of a synapse, reduced proliferation of pediatric gliomas implanted into mice by 50%. Meclofenamate, a drug that blocks the action of gap junctions, resulted in a similar decrease in tumor proliferation.

Monje's team plans to continue investigating whether blocking electrical signaling within tumors could help people with high-grade gliomas. "It's a really hopeful new direction, and as a clinician I'm quite excited about it," she said.

Credit: 
Stanford Medicine

Gigantic asteroid collision boosted biodiversity on Earth

image: This is an illustration of an asteroid collision.

Image: 
Don Davis

An international study led by researchers from Lund University in Sweden has found that a collision in the asteroid belt 470 million years ago drastically changed life on Earth. The breakup of a major asteroid filled the entire inner solar system with enormous amounts of dust, leading to a unique ice age and, subsequently, to higher levels of biodiversity. The unexpected discovery could be relevant for tackling global warming if we fail to reduce carbon dioxide emissions.

In the last few decades, researchers have come to understand that the evolution of life on Earth also depends on astronomical events. One example is the Cretaceous-Paleogene impact of a 10 km asteroid, which wiped out the dinosaurs almost instantaneously.

For the first time, scientists can now present another example of how an extraterrestrial event shaped life on Earth: 470 million years ago, a 150 km asteroid in the belt between Mars and Jupiter broke apart, and the dust spread through the solar system.

The dust partially blocked sunlight from reaching Earth, and an ice age began. The climate changed from being more or less homogeneous to being divided into climate zones, from Arctic conditions at the poles to tropical conditions at the equator.

The high diversity among invertebrates emerged as an adaptation to the new climate zones, triggered by the asteroid breakup.

"It is analogous to standing the middle of your living room and smashing a vacuum cleaner bag, only at a much larger scale", explains Birger Schmitz, professor of geology at Lund University and the leader of the study.

An important method behind the discovery was the measurement of extraterrestrial helium incorporated in the petrified sea floor sediments at Kinnekulle in southern Sweden. On its way to Earth, the dust was enriched with helium as it was bombarded by the solar wind.
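A hedged sketch of the logic behind such measurements (the endmember ratios and the measured value below are illustrative assumptions, not the study's data): solar-wind-irradiated dust carries far more helium-3 per helium-4 than terrestrial material, so a measured 3He/4He ratio can be unmixed into its two sources.

# Two-endmember mixing model on the 3He/4He ratio. All numbers are
# order-of-magnitude illustrations, not measurements from the study.

R_TERRESTRIAL = 2e-8   # typical crustal 3He/4He (illustrative)
R_COSMIC = 2.4e-4      # solar-wind-implanted dust 3He/4He (illustrative)

def extraterrestrial_fraction(r_measured):
    """Fraction of a sample's helium-4 contributed by cosmic dust."""
    return (r_measured - R_TERRESTRIAL) / (R_COSMIC - R_TERRESTRIAL)

# A sediment layer with an elevated ratio points to an influx of cosmic dust:
print(f"{extraterrestrial_fraction(1.0e-5):.1%} of the helium is extraterrestrial")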

"This result was completely unexpected. We have during the last 25 years leaned against very different hypotheses in terms of what happened. It wasn't until we got the last helium measurements that everything fell into place" says Birger Schmitz.

Global warming continues as a consequence of carbon dioxide emissions and the temperature rise is greatest at high latitudes. According to the Intergovernmental Panel on Climate Change, we are approaching a situation that is reminiscent of the conditions that prevailed prior to the asteroid collision 470 million years ago.

Over the last decade or so, researchers have discussed different artificial methods of cooling the Earth in case of a major climate catastrophe. Modelers have shown that it would be possible to place asteroids, much like satellites, in orbits around Earth in such a way that they continuously liberate fine dust and hence partly block the warming sunlight.

"Our results show for the first time that such dust at times has cooled Earth dramatically. Our studies can give a more detailed, empirical based understanding of how this works, and this in turn can be used to evaluate if model simulations are realistic", concludes Birger Schmitz.

Credit: 
Lund University

Babies' gut bacteria affected by delivery method, Baby Biome project shows

Babies born vaginally have different gut bacteria - their microbiome - than those delivered by Caesarean, research has shown. Scientists from the Wellcome Sanger Institute, UCL, the University of Birmingham and their collaborators discovered that whereas vaginally born babies got most of their gut bacteria from their mother, babies born via caesarean did not, and instead had more bacteria associated with hospital environments in their guts.

The exact role of the baby's gut bacteria is unclear, and it isn't known whether these differences at birth will have any effect on later health. The researchers found that the differences in gut bacteria between vaginally born and caesarean-delivered babies had largely evened out by one year of age, but large follow-up studies are needed to determine whether the early differences influence health outcomes. Experts from the Royal College of Obstetricians and Gynaecologists say that these findings should not deter women from having a caesarean birth.

Published in Nature today (18th Sept), this largest-ever study of neonatal microbiomes also revealed that the microbiome of vaginally delivered newborns came not from the mother's vaginal bacteria but from her gut bacteria. This calls into question the controversial practice of swabbing babies born via caesarean with the mother's vaginal bacteria. Understanding how the birth process shapes the baby's microbiome will enable future research into bacterial therapies.

The gut microbiome is a complex ecosystem of millions of microbes, and is thought to be important for the development of the immune system. Lack of exposure to the right microbes in early childhood has been implicated in immune-related conditions such as asthma, allergies and diabetes. However, it is not fully understood how important the initial gut microbiome is to the baby's immune system development and health, how a baby's microbiome develops, or what happens to it with different modes of birth.

To understand more about the development of the microbiome, and if the delivery method affected this, researchers studied 1,679 samples of gut bacteria from nearly 600 healthy babies and 175 mothers. Faecal samples were taken from babies aged four, seven or 21 days old, who had been born in UK hospitals by vaginal delivery or caesarean. Some babies were also followed up later, up to one year of age.

Using DNA sequencing and genomics analysis, the researchers could see which bacteria were present and found a significant difference between the two delivery methods. They discovered that vaginally delivered babies had many more health-associated (commensal) bacteria from their mothers than babies who were born by caesarean.

Dr Trevor Lawley, a senior author on the paper from the Wellcome Sanger Institute, said: "This is the largest genomic investigation of newborn babies' microbiomes to date. We discovered that the mode of delivery had a great impact on the gut bacteria of newborn babies, with transmission of bacteria from mother to baby occurring during vaginal birth. Further understanding of which species of bacteria help create a healthy baby microbiome could enable us to create bacterial therapies."

Previous limited studies had suggested that vaginal bacteria were swallowed by the baby on its way down the birth canal. However, this large-scale study found babies had very few of their mother's vaginal bacteria in their guts, with no difference between babies born vaginally or by caesarean.

During birth, the baby will come into contact with bacteria from the mother's gut. The study discovered it was the mother's gut bacteria that made up much of the microbiome in the vaginally delivered babies. Babies born via caesarean had many fewer of these bacteria. This study therefore found no evidence to support controversial 'vaginal swabbing' practices, which could transfer dangerous bacteria to the baby.

In place of some of the mother's bacteria, the babies born via caesarean had more bacteria that are typically acquired in hospitals, and were more likely to have antimicrobial resistance. The researchers isolated, grew and sequenced the genomes of more than 800 of these potentially pathogenic bacteria, confirming that they were the same as strains causing bloodstream infections in UK hospitals. Although these bacteria don't usually cause disease while in the gut, they can cause infections if they get into the wrong place or if the immune system fails.

Dr Nigel Field, a senior author on the paper from UCL, said: "Our study showed that as the babies grow and take in bacteria when they feed and from everything around them, their gut microbiomes become more similar to each other. After they have been weaned, the microbiome differences between babies born via caesarean and delivered vaginally have mainly evened out. We don't yet know whether the initial differences we found will have any health implications."

Dr Alison Wright, Consultant Obstetrician and Vice President of The Royal College of Obstetricians and Gynaecologists said: "In many cases, a Caesarean is a life-saving procedure, and can be the right choice for a woman and her baby. The exact role of the microbiome in the newborn and what factors can change it are still uncertain, so we don't think this study should deter women from having a caesarean. This study shows that more research is required to improve our understanding of this important area."

All women who have a caesarean are now offered antibiotics before the delivery to help prevent the mother developing postoperative infections, meaning that the baby also receives a dose of antibiotics via the placenta. This could also cause some of the microbiome differences seen between the two birth methods.

Principal Investigator of the Baby Biome Study, Professor Peter Brocklehurst, of the University of Birmingham, said: "The first weeks of life are a critical window of development of the baby's immune system, but we know very little about it. We urgently need to follow up this study, looking at these babies as they grow to see if early differences in the microbiome lead to any health issues. Further studies will help us understand the role of gut bacteria in early life and could help us develop therapeutics to create a healthy microbiome."

Credit: 
Wellcome Trust Sanger Institute

Shifting the focus of climate-change strategies may benefit younger generations

Strategies to limit climate change that focus on warming in the next couple of decades would leave less of a burden for future generations.

Research led by Imperial College London and the International Institute for Applied Systems Analysis (IIASA), Austria, suggests a new underpinning logic for strategies that seek to limit climate change. Their new proposal is published today in Nature.

Most strategies seek to limit climate change by the year 2100. The strategies may include tactics such as deployment of new renewable technologies, removing carbon from the atmosphere (through planting trees or new technologies), or mandating energy efficiency targets.

However, by focusing on the year 2100, these strategies are inconsistent with the Paris Agreement climate goal - to keep warming below 2°C, and ideally below 1.5°C, at any time in the future.

Strategies that focus on the year 2100 could allow potentially dangerous warming to happen in the short term - in the next couple of decades - and then rely on removing carbon dioxide from the atmosphere in later decades to reach the overall targets by 2100.

These strategies place a burden of investment on later generations, and they also rely on carbon removal technologies being widely available, which is far from certain, making this a risky approach.

Instead, the team suggests climate change strategies should consider when maximum warming will occur, what that level of warming should be, and whether warming is stabilised afterwards, or efforts are made to slowly reverse it.

The researchers suggest it is more sensible, and fairer, to limit warming faster before 2050 and rely less on unproven technologies and investment by future generations - or at least make these intergenerational value judgments explicit when designing climate change strategies.

Lead researcher Dr Joeri Rogelj, from the Grantham Institute at Imperial and the IIASA, said: "When climate-change strategies were first proposed, more than 20 years ago, the planet had only warmed about 0.5°C, so there was time for a long, smooth transition to energy systems and economies that kept warming below 2°C by 2100.

"Now, however, we are at around 1°C warming and science of the last decade has shown that 2°C cannot be considered a safe limit. The need to stabilise warming more quickly is paramount, and therefore we suggest a focus on reaching net zero carbon emissions as a key milestone of any climate strategy.

"Turning the focus from the far future to the next decades, where push will come to shove in terms of adequate climate action, will help us reach the Paris Agreement goals without placing undue burden on future generations."

Net zero carbon emissions is when a region (such as a city or country) balances the carbon it emits with the carbon it removes, often by methods such as planting trees or deploying technologies that capture and store carbon underground.
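As a minimal sketch of this bookkeeping (the figures are invented for illustration; units could be, say, MtCO2 per year):

# Net emissions: carbon released minus carbon removed, e.g. by planting
# trees or by capture-and-storage technologies. Net zero is reached when
# the balance is (approximately) zero.

def net_emissions(emitted, removed):
    return emitted - removed

def is_net_zero(emitted, removed, tolerance=1e-9):
    return abs(net_emissions(emitted, removed)) <= tolerance

print(net_emissions(emitted=120.0, removed=45.0))  # 75.0 -> still a net emitter
print(is_net_zero(emitted=80.0, removed=80.0))     # True -> net zero reached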

The research team suggests this benchmark should be the focus of climate change efforts in the short term, to limit warming that occurs in the next couple of decades and until it is stabilised.

From net zero carbon, countries could then decide their strategy based on how much they need to further reduce their global warming contributions through added carbon removal.

Dr Rogelj said: "Shifting the focus to more short-term warming will underpin the next assessments by the Intergovernmental Panel on Climate Change (IPCC), and we hope it will also help policymakers formulate realistic strategies.

"Policymakers want to know how and when we can reach net zero carbon, and our new logic for strategies could make these questions answerable."

Credit: 
Imperial College London