
How the brain distinguishes between voice and sound

image: Top: Analysis of the main acoustic parameters underlying differences between the voices (speakers) and between the speech sounds (phonemes) in the pseudo-words themselves: high spectral modulations best differentiate the voices (blue spectral profile), while fast temporal modulations (red temporal profile) together with low spectral modulations (red spectral profile) best differentiate the speech sounds. Bottom: Analysis of the neural (fMRI) data: during the voice task, the auditory cortex amplifies higher spectral modulations (blue spectral profile), and during the phoneme task it amplifies fast temporal modulations (red temporal profile) and low spectral modulations (red spectral profile). These amplification profiles closely match the acoustic profiles that differentiate the voices and the phonemes.

Image: 
© UNIGE

Is the brain capable of distinguishing a voice from the specific sounds it utters? In an attempt to answer this question, researchers from the University of Geneva (UNIGE), Switzerland, in collaboration with the University of Maastricht, the Netherlands, devised pseudo-words (words without meaning) spoken by three voices with different pitches. Their aim? To observe how the brain processes this information when it focuses either on the voice or on the speech sounds (i.e. phonemes). The scientists discovered that the auditory cortex amplifies different aspects of the sounds depending on the task being performed: voice-specific information is prioritised for voice differentiation, while phoneme-specific information is prioritised for the differentiation of speech sounds. The results, published in the journal Nature Human Behaviour, shed light on the cerebral mechanisms involved in speech processing.

Speech has two distinguishing characteristics: the voice of the speaker and the linguistic content itself, including speech sounds. Does the brain process these two types of information in the same way? "We created 120 pseudo-words that comply with the phonology of the French language but that make no sense, to make sure that semantic processing would not interfere with the pure perception of the phonemes," explains Narly Golestani, professor in the Psychology Section at UNIGE's Faculty of Psychology and Educational Sciences (FPSE). These pseudo-words all contained phonemes such as /p/, /t/ or /k/, as in /preperibion/, /gabratade/ and /ecalimacre/.

The UNIGE team recorded the voice of a female phonetician articulating the pseudo-words, which they then converted into different, lower to higher pitched voices. "To make the differentiation of the voices as difficult as the differentiation of the speech sounds, we created the percept of three different voices from the recorded stimuli, rather than recording three actual different people," continues Sanne Rutten, researcher at the Psychology Section of the FPSE of the UNIGE.

How the brain distinguishes different aspects of speech

The scientists scanned their participants using functional magnetic resonance imaging (fMRI) at high magnetic field (7 Tesla). This method makes it possible to observe brain activity by measuring blood oxygenation: the more oxygen a given brain area needs, the more that area is being used. While being scanned, the participants listened to the pseudo-words: in one session they had to identify the phonemes /p/, /t/ or /k/, and in another they had to say whether the pseudo-words had been read by voice 1, 2 or 3.

The teams from Geneva and the Netherlands first analysed the pseudo-words to better understand the main acoustic parameters underlying the differences in the voices versus the speech sounds. They examined differences in frequency (high / low), temporal modulation (how quickly the sounds change over time) and spectral modulation (how the energy is spread across different frequencies). They found that high spectral modulations best differentiated the voices, and that fast temporal modulations along with low spectral modulations best differentiated the phonemes.
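
For readers who want to see what this kind of modulation analysis involves in practice, here is a minimal sketch under assumed parameters; it is not the authors' actual pipeline, and the file name, window settings and units are placeholders. A 2-D Fourier transform of a log-spectrogram separates slow versus fast temporal modulations from low versus high spectral modulations.

```python
# A minimal sketch (not the authors' pipeline) of a spectro-temporal modulation
# analysis: take the 2-D Fourier transform of a log-spectrogram of a recording.
# The file name and analysis parameters are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("pseudoword.wav")   # hypothetical mono recording
freqs, times, S = spectrogram(audio, fs=rate, nperseg=512, noverlap=384)
log_S = np.log(S + 1e-10)                      # log power spectrogram

# 2-D FFT of the spectrogram: axis 0 corresponds to spectral modulations
# (cycles per Hz here), axis 1 to temporal modulations (Hz).
M = np.abs(np.fft.fftshift(np.fft.fft2(log_S)))
spectral_mod = np.fft.fftshift(np.fft.fftfreq(len(freqs), d=freqs[1] - freqs[0]))
temporal_mod = np.fft.fftshift(np.fft.fftfreq(len(times), d=times[1] - times[0]))

# Comparing modulation energy between stimuli that differ in voice versus in
# phoneme would show which modulation ranges carry each kind of information.
```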

The researchers subsequently used computational modelling to analyse the fMRI responses, namely the brain activation in the auditory cortex when processing the sounds during the two tasks. When the participants had to focus on the voices, the auditory cortex amplified the higher spectral modulations. For the phonemes, the cortex responded more to the fast temporal modulations and to the low spectral modulations. "The results show large similarities between the task information in the sounds themselves and the neural, fMRI data," says Golestani.

This study shows that the auditory cortex adapts to a specific listening mode. It amplifies the acoustic aspects of the sounds that are critical for the current goal. "This is the first time that it's been shown, in humans and using non-invasive methods, that the brain adapts to the task at hand in a manner that's consistent with the acoustic information that is attended to in speech sounds," points out Rutten. The study advances our understanding of the mechanisms underlying speech and speech sound processing by the brain. "This will be useful in our future research, especially on processing other levels of language - including semantics, syntax and prosody, topics that we plan to explore in the context of a National Centre of Competence in Research on the origin and future of language that we have applied for in collaboration with researchers throughout Switzerland," concludes Golestani.

Credit: 
Université de Genève

A study analyzes the influence of political affinities in the processes of socialization

image: A study in which the Universidad Carlos III de Madrid (UC3M) participated has concluded that most people prefer not to have much to do with those whose political sympathies differ from their own. Moreover, a substantial proportion of Spaniards are hostile towards those who do not share their political preferences.

Image: 
UC3M

Hugo Viciana, the study's main author, was a researcher at the Social Sciences Research Institute (Spanish acronym: IESA), a joint centre of the Spanish National Research Council (CSIC) and the Junta de Andalucía (Regional Government of Andalusia), at the time of the study and is now associated with the Universidad de Málaga. In his opinion, "the partisanship of political life permeates everyday life and encourages discrimination based on political sympathies."

The research is based on the hypothesis that everyday moral beliefs are used in a tribal manner to define "what our group is and with what individuals we do not wish to join up". The study, which also involved the researchers Antonio Gaitán Torres, from the UC3M, and Ivar Rodríguez-Hannikainen, from the Pontifical University of Rio de Janeiro (Brazil), was based on a survey conducted in Spain between the 23rd of October and the 13th of November, 2018. By the end of that period, 1055 panellists had responded to the survey.

The survey included questions concerning the participants' identification with the main political parties, as well as blocks of questions relating to various issues on the public agenda. It also included a series of questions concerning the extent to which they would like to have someone who sympathised with parties they felt more or less of a kinship with as a neighbour, as a teacher of their children, as the spouse of a relative or as a boss in their workplace.

According to the results of the survey, those who believe that their moral opinions are objectively correct tend to discriminate more against those who have different political sympathies. This "moral absolutism", as defined by the authors, causes a significant sector of the population to assume that in the matter of moral or political disagreements only one of the parties can be right. "There is a significant correlation between those who believe that their moral opinions are objective or absolute and those who are most intolerant of the members of the political party with which they sympathise the least. It would be desirable to promote activities that would help to minimise these trends, although this is an area which still needs to be explored," explains Antonio Gaitán, lecturer at the Department of Humanities: Philosophy, Language and Literature at the UC3M.

The study also found a disconnect between how we perceive those with different political sympathies and how they actually are. "We imagine our political opponents to be more radical and dogmatic than they are. Perhaps, by combating this exaggerated perception of our differences, we can alleviate the tension in which we live," says Rodríguez-Hannikainen, of the Pontifical University of Rio de Janeiro.

Credit: 
Universidad Carlos III de Madrid

Community size matters when people create a new language

image: Community size matters when people create a new language

Why do some languages have simpler grammars than others? Researchers from the Netherlands and the UK propose that the size of the community influences the complexity of the language that evolves in it. When small and large groups of participants played a 'communication game' using only gibberish words they had to invent, the languages invented by larger groups were more systematic than languages of smaller groups, showing that community size is important for shaping grammar.

Publication
Raviv, L., Meyer, A., & Lev-Ari, S. (2019). Larger communities create more systematic languages. Proceedings of the Royal Society B: Biological Sciences. DOI: 10.1098/rspb.2019.1262

Questions? Contact:
Limor Raviv
Phone: +31 24 751126
Email: Limor.Raviv@mpi.nl

Marjolein Scherphuis (press officer)
Phone: +31 24 3521947
Email: Marjolein.Scherphuis@mpi.nl

Image: 
Limor Raviv

Why are languages so different from each other? After comparing more than two thousand languages, scientists noticed that languages with more speakers usually have simpler grammars than languages with fewer speakers. For instance, most English nouns can be turned into plurals by simply adding -s, whereas the German plural system is notoriously irregular. Linguists have proposed that languages adapt to fit different social structures. "But we actually don't know whether it is the size of the community that drives the difference in complexity," says lead author Limor Raviv from the Max Planck Institute for Psycholinguistics. Perhaps bigger languages have simpler grammars because they cover a larger geographical area and their speakers are far apart, or because large communities have more contact with outsiders. Together with her colleagues Antje Meyer, from the Max Planck Institute for Psycholinguistics and Radboud University, and Shiri Lev-Ari, from Royal Holloway, University of London, Raviv set out to test whether community size alone plays a role in shaping grammar.

'Wowo-ik' and 'wowo-ii'

To test the role of group size experimentally, the psycholinguists used a communication game. In this game, participants had to communicate without using any language they know, leading them to create a new language. The goal of the game was to communicate successfully about different novel scenes, using only invented nonsense words. A 'speaker' would see one of four shapes moving in some direction on a screen and type in nonsense words to describe the scene (its shape and direction). The 'listener' would then guess which scene the other person was referring to, by selecting one of eight scenes on their own screen. Participants received points for every successful interaction (correct guesses). Participants paired up with a different person from their group at every new round, taking turns producing and guessing words.

At the start of the game, people would randomly guess meanings and make up new names. Over the course of several hours, participants started to combine words or part-words systematically, creating an actual mini-language. For instance, in one group, 'wowo-ik' meant that a specific shape was going up and right, whereas 'wowo-ii' meant that the same shape was going straight up. With such a 'regular' system, it becomes easier to predict the meaning of new labels ('mop-ik' meant a different shape going up and right). Participants played in either 'small' groups of four participants or 'large' groups of eight participants. Would the large groups invent 'simpler' (more systematic) languages than the small groups?
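
A toy sketch can make "systematic" concrete; the stems and suffixes below are invented for illustration and are not the participants' actual languages. Once labels decompose into a shape part and a direction part, the meaning of a new combination becomes predictable.

```python
# Toy illustration (invented stems and suffixes, not the experiment's software):
# a "systematic" mini-language builds each label from a shape stem plus a
# direction suffix, so the meaning of an unseen label is predictable.
shape_stems = {"shape1": "wowo", "shape2": "mop"}
direction_suffixes = {"up": "ii", "up-right": "ik"}

def label(shape: str, direction: str) -> str:
    """Compose a label from its parts, as the larger groups tended to do."""
    return f"{shape_stems[shape]}-{direction_suffixes[direction]}"

print(label("shape1", "up-right"))  # wowo-ik
print(label("shape1", "up"))        # wowo-ii
print(label("shape2", "up-right"))  # mop-ik: guessable from its parts
```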

Variability promotes structure

By the end of the experiment, the large groups had indeed created languages with more systematic grammars. "The pressure to create systematic languages is much higher in larger groups," explains Raviv. This is because larger groups have more word versions in circulation: to understand each other, members of a large group must overcome this variability by developing systematic structure. The more variability there was, the harder it pushed people to make their language more systematic. The researchers also found that group size predicted how similar a group was to the other groups. All large groups reached similar levels of structure and communicative success, whereas the small groups differed a lot from each other: some never developed systematic grammars, while others were very successful. "This might mean that in the real world, larger languages are potentially more similar to each other than smaller languages," Raviv says.

"Our study shows that the social environment in which languages evolve, and specifically the number of people in the community, can affect the grammar of languages", concludes Raviv. "This could possibly explain why some languages have more complex grammar than others. The results also support the idea that an increase in human population size was one of the main drivers for the evolution of natural languages."

Credit: 
Max Planck Institute for Psycholinguistics

Modeling predicts blue whales' foraging behavior, aiding population management efforts

image: Blue whale tail.

Image: 
Craig Hayslip, Marine Mammal Institute, Oregon State University

NEWPORT, Ore. - Scientists can predict where and when blue whales are most likely to be foraging for food in the California Current Ecosystem, providing new insight that could aid in the management of the endangered population in light of climate change and blue whale mortality due to ship strikes, a new study shows.

The statistical model used for the predictions combines long-term satellite tracking data of the whales' movement patterns with environmental data such as ocean temperatures and depth, which helps researchers understand how climate variations might impact blue whales over time from a larger "ecosystem" view of the population.

"Most management decisions up to now have been based on locations where the whales tend to be found," said Daniel Palacios, who holds the Endowed Faculty in Whale Habitats position at Oregon State University's Marine Mammal Institute, and is lead author of the study.

"But it's not just where the whales are, but also the activity - are they actually eating there or simply moving through - that matters. This model can tell us which areas are the most important for actual foraging."

The findings were published today in the journal Movement Ecology.

Blue whales can grow 70 to 90 feet long and weigh 200,000 to 300,000 pounds, though their diet is primarily krill - tiny shrimp-like creatures less than two inches in length. They are listed as endangered under the U.S. Endangered Species Act and the International Union for Conservation of Nature's Red List.

An estimated 1,600 of the world's 10,000 blue whales, a group known as the North Pacific population, spend time in the waters off the West Coast of the Americas. The North Pacific blue whale population can travel from the Gulf of Alaska to an area near the equator known as the Costa Rica Dome. The majority spend the summer and fall in the waters off the U.S. West Coast.

The California Current Ecosystem is the span of waters off the West Coast of North America extending roughly from the border with Canada at the north to Baja California, Mexico, at the south. Steady winds during spring and summer fuel a rich and biologically productive ecosystem. The study focused only on U.S. waters within this ecosystem.

The researchers' goal for the study was to better understand blue whale behavior in the context of this ecosystem by examining the relationship between feeding behavior and ocean conditions during the feeding season.

Palacios and colleagues used long-term satellite tracking data from 72 tagged blue whales as well as ocean condition data from remote sensing during the same period, 1998 to 2008. Little was known about the blue whale until the 1990s, when Bruce Mate, director of OSU's Marine Mammal Institute, pioneered the satellite tracking studies that produced a wealth of data not previously available.

The data used in the foraging study included several years of cool, productive ocean conditions as well as a couple of warm, low-productivity years, during which food was less likely to be plentiful.

The researchers found that blue whales were more likely to exhibit foraging behavior in areas known from earlier studies to host large aggregations of whales, providing an improved understanding of the relationship between environmental conditions and whale habitat use.

"The same environmental parameters - water temperature, depth, abundance of phytoplankton - that drive hotspots of whale aggregation also drive where foraging behavior is more likely to occur," Palacios said. "While this was not necessarily surprising, it was good to be able to demonstrate that we can predict whale behavioral states, as this helps inform management in terms of not just what areas are used more often by the whales, but also what they do once they get there."

They also found that whales were less likely to exhibit foraging behavior when they were further away from the coastline. The primary foraging hotspots are found in a few locations along the coast, where krill aggregations are typically most dense and persistent.

The study also supported findings from another recent paper by Palacios and colleagues that showed that blue whales rely on long-term memory to find the best places to forage, and return to them year after year. That makes them susceptible to climatic disruptions to their prey base, as the whales may take some time before they abandon their traditional foraging sites.

Improved understanding of the species-environment relationship, through an ecosystem view of whale behavior, can give researchers a better understanding of how climate change might impact whale feeding, Palacios said.

The study also could help with population management decisions. North Pacific blue whales tend to aggregate in three primary areas: Point Conception and the Santa Barbara Channel in southern California; around the Gulf of the Farallones in central California; and between Cape Mendocino and Cape Blanco in northern California and southern Oregon.

Two of those hotspots are in areas of intense commercial shipping traffic near Los Angeles and San Francisco. Blue whale mortality due to ship strikes is a growing concern.

"If there are some areas along the coast that are more biologically important for the whales, based on intensified foraging activity, that's important for management agencies to know, compared to areas where the whales are just passing through," Palacios said.

The researchers' next step is to look more closely at how whales' behavior shifts during years when ocean conditions are unfavorable for food production.

"How long will it take blue whales to abandon their historic feeding grounds if the food is no longer there?" Palacios said. "If they do respond to environmental changes, how do we predict that change long-term?"

Credit: 
Oregon State University

Ants that defend plants receive sugar and protein

image: The aggressiveness of ants in arid environments with scarce food supply helps protect plants against herbivorous arthropods.

Image: 
Laura Leal

Biologists Laura Carolina Leal and Felipe Passos performed a series of experiments in Brazil's Northeast region - specifically in the interior of Bahia State, where the semiarid Caatinga biome predominates - to determine how plants with extrafloral nectaries interact with ants.

Extrafloral nectaries are nectar-secreting glands not involved in pollination; they supply insects with carbohydrates in exchange for defense against herbivores. The nectar attracts predatory insects that consume both the nectar and plant-eating arthropods, in effect serving as the plant's bodyguards.

"In contrast with the previous belief, we discovered that carbohydrate is only one of the forms of payment offered by plants to the ants that protect them. Another is protein, which ants obtain by consuming the herbivorous arthropods available on or around the plants they visit," said Leal, a professor in the Federal University of São Paulo's Institute of Environmental, Chemical and Pharmaceutical Sciences (ICAQF-UNIFESP) in Brazil.

"This finding contradicts the idea that payment is in sugar only," Leal told. "It shows that what ants gain from herbivores also matters. We discovered that ants may be more aggressive in environments where arthropods and other sources of protein are scarce, defending their food sources and hence protecting plants."

The study was supported by São Paulo Research Foundation - FAPESP and by the Brazilian education ministry's graduate research council (CAPES). A paper on it has recently been published in Biological Journal of the Linnean Society.

The research interests pursued by Leal and Passos focus on the various forms of insect-plant mutualism. "Mutualism is a form of interaction between two species in which each benefits from the interaction in some way. If it isn't advantageous for both species, but only for one, it's parasitism," Leal said.

"Several studies have shown that nectarivorous ants expel herbivores and enhance the reproductive success of plants with extrafloral nectaries. The greater the importance of extrafloral nectar to the ants, the better for the plants, as this increases the ants' aggressiveness toward herbivores. We decided to find out whether nectar is the only payment by plants for the ants' protection or whether eating herbivores might also be advantageous to the ants."

Leal and Passos confirmed the hypothesis that plant attendance by more aggressive ants and the efficiency of their defense increase when the availability of carbohydrates and/or proteins to the ants is low, enhancing the relative value of both extrafloral nectaries and protein-rich herbivores to these insects.

The study was conducted on the campus of the University of Feira de Santana in Bahia. The region has a semiarid climate with an annual average temperature of 25.2 °C and rainfall averaging 848 mm per year. The vegetation in the Caatinga is xerophytic (adapted to life in a dry habitat), consisting of a mosaic of thorny shrubs and seasonally dry forests.

The researchers established 19 study plots measuring 16 square meters each, in early 2017. The plots were at least 30 m apart and mainly contained Turnera subulata, a clumping plant in the passionflower family known as white alder. This was the only plant with extrafloral nectaries in the plots. Its density varied from five to 218 individuals per plot.

"T. subulata has a pair of extrafloral nectaries on each petiole [the stalk that attaches the leaf blade to the stem] and inflorescence base," Leal said. The extrafloral nectaries are constantly visited by different ant species that can defend the plant against herbivores.

"The relative importance of any resource for animals is influenced not only by its abundance in the habitat but also by the number of individuals sharing it. Our first step was therefore to count the nests of ants that foraged in our study plots."

The researchers placed five baits, each a mixture of carbohydrate and protein (honey and sardine), in the soil of each plot between 7 and 11 a.m., when ants were most active at the site. One bait was placed at the center of the plot and the other four approximately three meters away at the corners.

"We waited until the ants located the bait and followed them back to their nests, even when these were located outside our study site," Leal said.

They counted the ant nests and estimated the abundance of protein and carbohydrate resources for ants in each plot. Because T. subulata is a prostrate herbaceous plant occurring in open habitats, it is visited mainly by soil-foraging ant species.

"We recorded 312 occurrences of 13 ant species on these plants. Most were visited by two or more ant species simultaneously," Leal said.

The most frequent species was Camponotus blandus (42% of occurrences), followed by Dorymyrmex piramicus (25.6%). Dead arthropods in the soil are the main source of protein for these ants.

The researchers used soil arthropod biomass as a proxy for protein availability to the ants that visited extrafloral nectaries in each plot. To obtain this metric, they installed five pitfall traps in each plot, again one at the center and one at each corner.

"The pitfall traps remained active for 24 hours. We filtered their content and dried it in an oven at 60 °C for 24 hours. The lower the average dry arthropod biomass collected from each plot was, the lower the local availability of protein to ants," Leal said.

Less protein, more aggressiveness

The researchers also observed the behavior of ants visiting extrafloral nectaries with regard to a simulated herbivore to determine whether the availability of carbohydrate and/or protein in the habitat affected the efficiency of the ants' defense.

"We simulated the presence of herbivores on the plants using the larvae of Ulomoides dermestoides, a common predator of peanut seeds known as the peanut beetle or Chinese weevil. On the most apical branch of each focal plant, we placed one larva on the leaf that offered the best horizontal or near-horizontal platform for the insect. We allowed the larva to move freely on the leaf and waited for it to be found by the ants," Leal said.

The biologists identified the ants present on five plants in each plot and measured their efficiency in removing the simulated herbivores from the plants.

"When a larva was located, we observed the behavior of the ants with regard to the larva. We observed whether the larva was removed from the plant, whether the ants took the larva to the ground, pushed it off the plant, or consumed it where it was," Leal said.

According to her, the probability of interaction between the plants and more aggressive ant species was not influenced by the number of active extrafloral nectaries or the arthropod biomass found in the plots.

"However, the simulated herbivores were removed more frequently in plots with less arthropod biomass. This suggests that ants, regardless of species, become more aggressive toward other arthropods in protein-poor habitats. This increase in aggressiveness potentially increases the efficiency with which plants that have extrafloral nectaries are defended against herbivores," Leal said.

Unlike carbohydrates, protein resources are not renewable and are randomly distributed in the environment. Dead insects, for example, have no predictable pattern of distribution and may be found almost anywhere. Once consumed, these dead insects are unavailable to other species of ants in the community.

"This led us to propose that plants with extrafloral nectaries may be more efficiently defended in protein-poor habitats regardless of how much they invest in interaction via nectar secretion," Leal said.

If so, even plants that secrete low-quality extrafloral nectar may be efficiently defended because the ants' behavior toward herbivores will be driven by their demand for protein and not for carbohydrates.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Timing of spay, neuter tied to higher risk of obesity and orthopedic injuries in dogs

image: Dogs like Dusty Bottoms helped inform Morris Animal Foundation researchers that spaying or neutering large-breed dogs can put them at a higher risk for obesity and nontraumatic orthopedic injuries.

Image: 
Sean Andersen-Vie, Morris Animal Foundation

Spaying or neutering large-breed dogs can put them at a higher risk for obesity and, if done when the dog is young, nontraumatic orthopedic injuries, reports a new study based on data from the Morris Animal Foundation Golden Retriever Lifetime Study. The spay/neuter study was published today in the journal PLOS ONE.

"For years, we've been taught that spaying or neutering your dog is part of being a responsible pet owner, but there really are advantages and disadvantages to consider when making that decision," said Dr. Missy Simpson, Morris Animal Foundation epidemiologist and lead author on the paper. "Our study results give dog owners and veterinarians new information to consider when deciding on when to spay or neuter their dog, especially when considering the long-term health of their pet."

In the general canine population, estimates are that one-third to one-half of all large-breed dogs are either overweight or obese. Roughly 2% of the same population suffer nontraumatic orthopedic injuries, such as cruciate ligament ruptures.

Dr. Simpson studied health data, collected over six years, from the entire Golden Retriever Lifetime Study cohort of more than 3,000 golden retrievers. Approximately one-half had undergone spay or neuter surgery.

She found that dogs that were spayed or neutered were 50% to 100% more likely to become overweight or obese, and the risk didn't appear to be affected by the dog's age at the time of surgery. Whether the dog had the procedure at 6 months or 6 years, the risk of weight gain remained relatively constant.

However, age at surgery does appear to be a significant factor regarding nontraumatic orthopedic injuries. Dr. Simpson found that dogs spayed or neutered before 6 months of age were at a 300% greater risk of sustaining those injuries.

While the paper focused on golden retrievers, Dr. Simpson noted the results likely can be applied to other breeds, particularly other large- and giant-breed dogs.

"Different owners have different concerns for their dogs and the decision to spay or neuter your dog is a very complex one," said Dr. Janet Patterson-Kane, Morris Animal Foundation Chief Scientific Officer. "It's a balance in managing the risks of neutering or not neutering for owners committed to their dog's health."

The Morris Animal Foundation Golden Retriever Lifetime Study is the most extensive prospective study ever undertaken in veterinary medicine. Launched in 2012 and reaching full enrollment in 2015, the study gathers information on more than 3,000 golden retrievers from around the United States, throughout their lives, to identify the nutritional, environmental, lifestyle and genetic risk factors for cancer and other diseases in dogs.

Owners and veterinarians complete yearly online questionnaires about the health status and lifestyle of the dogs. Biological samples are also collected, and each dog receives an annual physical examination as part of the study.

Credit: 
Morris Animal Foundation

Health insurance idea born at U-M could help millions of Americans spend less

image: Mark Fendrick, a professor at the U-M Medical School and School of Public Health.

Image: 
Michigan Medicine

ANN ARBOR--Millions of Americans with chronic conditions could save money on the drugs and medical services they need the most, if their health insurance plans decide to take advantage of a new federal rule issued today.

And the idea behind that rule was born at the University of Michigan.

Today, the U.S. Department of the Treasury gave health insurers more flexibility to cover the cost of certain medications and tests for people with common chronic conditions who are enrolled in many high-deductible health plans.

The rule change came about in part because of research and over a decade of policy engagement by U-M professor A. Mark Fendrick and colleagues at the U-M Center for Value-Based Insurance Design.

About 43% of adults who get health insurance through their jobs have a high-deductible plan, which requires them to spend at least $1,300 out of their own pockets before their insurance starts covering their care, or $2,600 if they cover family members.

People with high-deductible health plans typically have to pay the entire cost for services used to manage chronic conditions--such as inhalers for asthma, blood sugar testing and insulin for diabetes, and medicines to treat depression and high cholesterol--until they've reached their plan deductible.

More than half of them have access to a special kind of tax-advantaged health savings account to save money for their health costs, and some employers contribute to those accounts.

But until today, the federal tax code specifically barred high-deductible plans with health savings accounts, or HSA-HDHPs, from covering drugs and services for common chronic conditions until enrollees met their deductibles. Such coverage could reduce the chance that people with chronic conditions will skip preventive care because of cost, and improve their longer-term outcomes.

Meanwhile, the bipartisan Chronic Disease Management Act of 2019 was introduced in the Senate and House of Representatives last month with the same goal of lowering out-of-pocket costs for Americans with chronic conditions confronting high plan deductibles.

"As more and more Americans are facing high deductibles, they are struggling to pay for their essential medical care," said Fendrick, a professor at the U-M Medical School and School of Public Health. "Our research has shown that this policy has the potential to lower out-of-pocket costs, reduce federal health care spending, and ultimately improve the health of millions diagnosed with chronic medical conditions. We have actively advocated for this policy change for over a decade."

Fendrick is an internal medicine physician at Michigan Medicine and a member of the U-M Institute for Healthcare Policy and Innovation.

Specific coverage for specific enrollees

The new rule designates 14 services for people with certain conditions that high-deductible health plans can now cover on a pre-deductible basis.

The list closely aligns with the one laid out by the V-BID Center in a 2014 analysis. That report, based on clinical evidence available at the time, shows that these tests and treatments could help people with chronic diseases manage their health, and detect or prevent worsening of their conditions, at low cost.

The list includes:

ACE inhibitor drugs for people with heart failure, diabetes and/or coronary artery disease

Bone-strengthening medications for people with osteoporosis or osteopenia

Beta-blocker drugs for people with heart failure and/or coronary artery disease

Blood pressure monitors for people with hypertension

Inhalers and peak flow meters for people with asthma

Insulin and other medicines to lower the blood sugar of people with diabetes

Eye screening, blood sugar monitors and long-term blood sugar testing for people with diabetes

Tests for blood clotting ability in people with liver disease or bleeding disorders

Tests of LDL cholesterol levels in people with heart disease

Antidepressants called SSRIs for people with depression

Statin medications for people with heart disease and/or diabetes

The new Treasury guidance also leaves the door open to allow high-deductible plans more flexibility in the future for coverage of other preventive services for people with these and other chronic conditions.

Fendrick and Harvard University professor Michael Chernew articulated the need for regulatory changes to level the playing field for people with chronic conditions in high deductible health plans in the Journal of General Internal Medicine in 2007.

V-BID principles--based on the idea that the highest-value clinical services should cost the least to people who need them most--have also made their way into other kinds of health insurance plans.

For instance, Medicare Advantage plans, offered by private insurers to people over age 65 and with disabilities, are now able to offer plans with value-based co-pays. So are plans offered under TRICARE, the insurance program for military families, and private employer-sponsored plans without high deductibles.

The V-BID team has recently introduced V-BID X, a novel benefit design to expand options in the individual market by enhancing coverage of essential medical services and drugs, without increasing premiums or deductibles.

Credit: 
University of Michigan

NASA tracking post-tropical cyclone Barry to Indiana

image: On July 16, 2019, the MODIS instrument aboard NASA's Aqua satellite provided a visible image of Post-Tropical Cyclone Barry in the Mississippi Valley, moving toward the Ohio Valley.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

NASA's Aqua satellite provided a visible image of the clouds associated with Post-Tropical Cyclone Barry as it moved through the mid-Mississippi Valley on July 16 and headed toward the Ohio Valley.

On July 16, the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite captured a visible look at Barry. The strongest thunderstorms appeared over northwestern Arkansas, western Tennessee and southwestern Kentucky at the time of the image. Barry's remnant clouds were also spreading into southern Indiana.

By 5 a.m. EDT (0900 UTC) on July 17, NOAA's National Weather Service Weather Prediction Center in College Park, Maryland noted that Barry's center of circulation had moved to about 90 miles (150 km) northeast of Indianapolis, Indiana. The center of Post-Tropical Cyclone Barry was located near latitude 40.8 degrees north and longitude 85.3 degrees west. The post-tropical cyclone is moving toward the east-northeast near 22 mph (35 kph) and this motion is expected to continue through tonight. Maximum sustained winds are near 15 mph (30 kph) with higher gusts. Little change in strength is forecast during the next 48 hours. The estimated minimum central pressure is 1010 millibars (29.83 inches).

Flash flood watches are in effect across portions of the northern Mid-Atlantic. Barry is expected to produce additional rain accumulations of 1 to 3 inches from portions of the Upper Ohio and Upper Tennessee Valleys into the northern Mid-Atlantic and southern New England.

Credit: 
NASA/Goddard Space Flight Center

NASA finds tropical storm Danas northeast of the Philippines

image: On July 17, 2019, the MODIS instrument aboard NASA's Aqua satellite provided a visible image of Tropical Storm Danas in the Northwestern Pacific Ocean, located just northeast of the Philippines.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

NASA's Aqua satellite provided a visible image of Tropical Storm Danas as it continued to move north and away from the Philippines.

On July 17 at 12:40 a.m. EDT (0440 UTC), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite captured a visible look at Danas. The strongest thunderstorms appeared southeast of the center of circulation in the MODIS image. Danas was located northeast of Luzon, Philippines, in the Philippine Sea.

At 11 a.m. EDT (1500 UTC) on July 17, the center of Tropical Storm Danas was located near latitude 21.1 degrees north and longitude 124.0 degrees east, about 421 nautical miles south-southwest of Kadena Air Base, Okinawa Island, Japan. Danas was moving to the north-northeast and had maximum sustained winds near 35 knots (40 mph/65 kph).

Danas is forecast to move north over the next couple of days and strengthen. Its center is expected to pass near Ishigakijima island on July 18.

Credit: 
NASA/Goddard Space Flight Center

Correcting historic sea surface temperature measurements

image: This chart shows annual sea surface temperature changes from different datasets in the North Pacific (top) and North Atlantic (bottom). The blue line indicates the corrected data from this research. It shows greater warming in the North Pacific and less warming in the North Atlantic relative to previous estimates.

Image: 
Duo Chan/Harvard

Something odd happened in the oceans in the early 20th century. The North Atlantic and Northeast Pacific appeared to warm twice as much as the global average while the Northwest Pacific cooled over several decades.

Atmospheric and oceanic models have had trouble accounting for these differences in temperature changes, leading to a mystery in climate science: why did the oceans warm and cool at such different rates in the early 20th century?

Now, research from Harvard University and the UK's National Oceanography Centre points to an answer both as mundane as a decimal point truncation and as complicated as global politics. Part history, part climate science, this research corrects decades of data and suggests that ocean warming occurred in a much more homogenous way.

The research is published in Nature.

Humans have been measuring and recording the sea surface temperature for centuries. Sea surface temperatures helped sailors verify their course, find their bearings, and predict stormy weather.

Until the 1960s, most sea surface temperature measurements were taken by dropping a bucket into the ocean and measuring the temperature of the water inside.

The National Oceanic and Atmospheric Administration (NOAA) and the National Science Foundation's National Center for Atmospheric Research (NCAR) maintain a collection of sea surface temperature readings dating back to the early 19th century. The database contains more than 155 million observations from fishing, merchant, research and navy ships from all over the world. These observations are vital to understanding changes in ocean surface temperature over time, both natural and anthropogenic.

They are also a statistical nightmare.

How do you compare, for example, the measurements of a British Man-of-War from 1820 to a Japanese fishing vessel from 1920 to a U.S. Navy ship from 1950? How do you know what kind of buckets were used, and how much they were warmed by sunshine or cooled by evaporation while being sampled?

For example, a canvas bucket left on a deck for three minutes under typical weather conditions can cool by 0.5 degrees Celsius more than a wooden bucket measured under the same conditions. Given that global warming during the 20th century was about 1 degree Celsius, the biases associated with different measurement protocols require careful accounting.

"There are gigabytes of data in this database and every piece has a quirky story," said Peter Huybers, Professor of Earth and Planetary Sciences and of Environmental Science and Engineering at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and senior author of the paper. "The data is rife with peculiarities."

A lot of research has been done to identify and adjust for these peculiarities. In 2008, for example, researchers found that a 0.3-degree Celsius jump in sea surface temperatures in 1945 was the result of measurements taken from engine room intakes. Even with these corrections, however, the data is far from perfect and there are still unexplained changes in sea surface temperature.

In this research, Huybers and his colleagues proposed a comprehensive approach to correcting the data, using a new statistical technique that compares measurements taken by nearby ships.

"Our approach looks at the differences in sea surface temperature measurements from distinct groups of ships when they pass nearby, within 300 kilometers and two days of one another," said Duo Chan, a graduate student in the Harvard Graduate School of Arts and Sciences and first author of the paper. "Using this approach, we found 17.8 million near crossings and identified some big biases in some groups."

The researchers focused on data from 1908 to 1941, broken down by the country of origin of the ship and the "decks," a term stemming from the fact that marine observations were stored using decks of punch cards. One deck includes observations from both Robert Falcon Scott's and Ernest Shackleton's voyages to the Antarctic.

"These data have made a long journey from the original logbooks to the modern archive and difficult choices were made to fit the available information onto punch cards or a manageable number of magnetic tape reels," said Elizabeth Kent, a co-author from the UK National Oceanography Centre. "We now have both the methods and the computer power to reveal how those choices have affected the data, and also pick out biases due to variations in observing practice by different nations, bringing us closer to the real historical temperatures."

The researchers found two new key causes of the warming discrepancies in the North Pacific and North Atlantic.

The first had to do with changes in Japanese records. Prior to 1932, most records of sea surface temperature from Japanese vessels in the North Pacific came from fishing vessels. This data, spread across several different decks, was originally recorded in whole-degrees Fahrenheit, then converted to Celsius, and finally rounded to tenths-of-a-degree.

However, in the lead-up to World War II, more and more Japanese readings came from naval ships. These data were stored in a different deck and when the U.S. Air Force digitized the collection, they truncated the data, chopping off the tenths-of-a-degree digits and recording the information in whole-degree Celsius.

Unrecognized effects of this truncation largely explain the rapid cooling apparent in previous estimates of Pacific sea surface temperatures between 1935 and 1941, said Huybers. After correcting for the bias introduced by truncation, the warming in the Pacific is much more uniform.
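
A toy calculation shows why chopping off the tenths digit matters. The temperatures below are synthetic, not the archive data; the point is simply that truncation of positive temperatures is systematically cold, while rounding is not.

```python
# Toy demonstration of the truncation bias: chopping off the tenths digit
# (truncation) systematically cools a record of positive temperatures, while
# rounding does not. The temperatures here are synthetic, not the archive data.
import numpy as np

rng = np.random.default_rng(0)
true_sst = np.round(rng.uniform(10.0, 25.0, size=100_000), 1)  # stored to tenths

truncated = np.trunc(true_sst)   # tenths digit chopped off (e.g. 17.9 -> 17.0)
rounded = np.round(true_sst)     # what rounding to whole degrees would give

print("truncation bias:", round((truncated - true_sst).mean(), 3))  # about -0.45 C
print("rounding bias:  ", round((rounded - true_sst).mean(), 3))    # about  0.00 C
```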

While Japanese data holds the key to warming in the Pacific in the early 20th century, it's German data that plays the most important role in understanding sea surface temperatures in the North Atlantic during the same time.

In the late 1920s, German ships began providing a majority of data in the North Atlantic. Most of these measurements are collected in one deck, which, when compared to nearby measurements, is significantly warmer. When adjusted, the warming in the North Atlantic becomes more gradual.

With these adjustments, the researchers found that rates of warming across the North Pacific and North Atlantic become much more similar and have a warming pattern closer to what would be expected from rising greenhouse gas concentrations. However, discrepancies still remain and the overall rate of warming found in the measurements is still faster than predicted by model simulations.

"Remaining mismatches highlight the importance of continuing to explore how the climate has been radiatively forced, the sensitivity of the climate, and its intrinsic variability. At the same time, we need to continue combing through the data---through data science, historical sleuthing, and a good physical understanding of the problem, I bet that additional interesting features will be uncovered," said Huybers.

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Rare inherited enzyme disorder yields insight into fibrosis

image: NEU1 mutations and lysosomal storage disorders are a long-standing research interest of Alessandra d'Azzo, Ph.D., a member of the St. Jude Department of Genetics (right)

Image: 
St. Jude Children's Research Hospital

(MEMPHIS, Tenn.--July 17, 2019) What can a family of rare inherited disorders teach scientists about more common health problems like fibrosis? Plenty, based on research led by St. Jude Children's Research Hospital scientists that appears today in the journal Science Advances.

The investigators have discovered an association between a deficiency in the enzyme neuraminidase 1 (NEU1) and the build-up of connective tissue (fibrosis) in organs such as the muscle, kidney, liver, heart and lungs. Fibrosis includes life-threatening conditions such as idiopathic pulmonary fibrosis.

Mutations in the NEU1 gene cause the lysosomal storage disease sialidosis, which belongs to a large group of related pediatric diseases. "This is the first time NEU1 has been associated with fibrotic conditions," said corresponding author Alessandra d'Azzo, Ph.D., a member and endowed chair of the St. Jude Department of Genetics. "NEU1 is an important enzyme that breaks down sugar-containing molecules in many cells of the body, but it has not really been on the radar for adult health problems.

"Based on these findings, it is tantalizing to hypothesize that NEU1 expression levels may help identify individuals at risk for fibrosis or inform their prognosis, particularly when information about the cause or possible treatment is lacking," she said. Fibrosis results when excess connective tissue is produced and accumulates and disrupts normal function of the muscle, lungs, liver, heart and other tissues. Connective tissue is produced in part by fibroblast cells.

Early evidence

The findings build on earlier research from d'Azzo's laboratory. That work centered on mice that lacked the Neu1 gene. The mice developed muscle atrophy when connective tissue proliferated and invaded muscle.

In this study, researchers unraveled the underlying mechanism by showing that mouse fibroblasts lacking Neu1 proliferated, migrated and released large numbers of molecules that promoted the relentless expansion of connective tissue. "The cells started acting more like cancer cells," d'Azzo said. She previously reported evidence about the potential role of NEU1 deficiency in cancer, particularly sarcomas, which are cancers of the connective tissue.

The researchers found that mouse fibroblasts lacking Neu1 release excessive numbers of molecules that degrade the extracellular matrix, as well as tiny vesicles called exosomes. The exosomes are loaded with factors that promote fibrosis, including the growth factor TGF-β and the signaling molecule WNT. Normal mouse and human fibroblasts were activated to proliferate and migrate when exposed to exosomes containing TGF-β, WNT and related molecules released by the Neu1-deficient fibroblasts.

Human disease

Researchers checked tissue from adults with idiopathic pulmonary fibrosis and found NEU1 production was significantly reduced (down-regulated) as compared to adults without the diagnosis. The investigators checked an RNA sequencing database of 89 idiopathic pulmonary fibrosis patients and found NEU1 was among the most down-regulated of 66 genes included in the database.

The research puts NEU1 deficiency, which is associated with a pediatric catastrophic disease, squarely on the radar as a possible risk factor for development and progression of fibrotic diseases in adults for which the primary cause is unknown.

The findings also confirm d'Azzo's belief that in-depth studies of rare pediatric diseases, such as the lysosomal storage disorders, often help to explain the disease mechanisms underlying disorders common in the aging population.

Credit: 
St. Jude Children's Research Hospital

Massive potential health gains in switching to active transport -- Otago study

Swapping short car trips for walking or biking could achieve as much health gain as ongoing tobacco tax increases, according to a study from the University of Otago, New Zealand.

Lead author Dr Anja Mizdrak, of Otago's Burden of Disease Epidemiology, Equity, and Cost-Effectiveness Programme (Department of Public Health), says transport has a major impact on population health.

"New Zealand is highly car dependent - 79 per cent of all self-reported trips are made by car and ownership rates are among the highest in the world - and only half of New Zealand adults meet national physical activity recommendations.

"Road transport also makes up 17.3 per cent of the nation's gross greenhouse gas emissions, so it directly affects injury rates, physical activity and air pollution, and indirectly affects health through climate change.

"Switching short trips to walking and cycling is a good way to incorporate physical activity into daily life and reduce carbon emissions associated with vehicle use," Dr Mizdrak says.

The study, just published in PLOS ONE, is the first to estimate the health impact, and the changes in health system costs and greenhouse gas emissions, associated with increasing active transport in New Zealand.

The researchers estimated changes in physical activity, injury risk, and air pollution for switching car trips under 1km to walking, and switching car trips under 5km to a mix of walking and cycling.

They used modelling to perform a "what if" analysis of uptake levels of 25, 50, and 100 per cent. From this, they estimated health gains and health system cost impacts of changes in injury risk, air pollution exposure and physical activity levels.

Health impacts across these different risks were combined into a common metric, quality-adjusted life years (QALYs), where one QALY represents a year lived in full health. These QALYs were calculated over the rest of the life course of the New Zealand population alive in 2011 (4.4 million people).

Depending on uptake levels, health gains ranged between 1.61 and 25.43 QALYs per 1000 people, with total QALYs of up to 112,000 over the remaining lifespan of that population. Healthcare cost savings ranged from $127 million to $2.1 billion over the remaining lifespan, with around 4 per cent of this total saved in the next ten years.
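
As a quick back-of-the-envelope check of how the headline figure scales (an illustrative calculation, not part of the study's model), the largest per-1000 gain applied to the modelled 2011 population reproduces the total:

```python
# Quick arithmetic check (illustrative): scaling the largest per-1000 QALY gain
# to the 2011 New Zealand population used in the model.
population = 4_400_000
qalys_per_1000 = 25.43
print(qalys_per_1000 * population / 1000)  # about 111,900, i.e. roughly 112,000
```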

Greenhouse gas emissions were reduced by up to 194 ktCO2e per year - the equivalent of 64,000 people flying from London to Auckland.

"The overall greenhouse gas emissions reductions we observed for 100 per cent uptake of the walking and cycling scenario were equivalent to up to 1.4 per cent of total emissions from road transport in New Zealand," Dr Mizdrak says.

Co-author Professor Tony Blakely, also of the Department of Public Health, puts it this way: "If people swapped their car for walking or biking for just one quarter of short trips, the health gains would be comparable to the health gain we have estimated previously for 10 percent per annum tobacco tax increases from 2011 to 2025."

Thus, he says, the health gains are substantial.

"Predicting the future is never easy, but that is implicitly what we do as a society when we make policy decisions that change our cities and lifestyles for decades into the future.

"Our research suggests that making walking and cycling easier and preferred over cars for short trips is likely to be beneficial on all three counts of health gain, health system cost savings and greenhouse gas emissions. This evidence needs consideration in future policy making and urban design."

Background facts on New Zealand:

79 per cent of all self-reported trips are made by car.

56 per cent of car trips are under 5km (12 per cent are under 1km).

New Zealand has among the highest car ownership rates in the world.

17.3 per cent of gross greenhouse gas emissions are related to road transport.

Only half of adults meet the national physical activity recommendations.

Credit: 
University of Otago

First-ever visualizations of electrical gating effects on electronic structure

image: Electrons ejected by a beam of light focused on a two-dimensional semiconductor device are collected and analyzed to determine how the electronic structure in the material changes as a voltage is applied between the electrodes.

Image: 
Nelson Yeung/Nick Hine/Paul Nguyen/David Cobden

Scientists have visualised the electronic structure in a microelectronic device for the first time, opening up opportunities for finely-tuned high performance electronic devices.

Physicists from the University of Warwick and the University of Washington have developed a technique to measure the energy and momentum of electrons in operating microelectronic devices made of atomically thin, so-called two-dimensional, materials.

Using this information, they can create visual representations of the electrical and optical properties of the materials to guide engineers in maximising their potential in electronic components.

The experimentally-led study is published in Nature today (17 July) and could also help pave the way for the two dimensional semiconductors that are likely to play a role in the next generation of electronics, in applications such as photovoltaics, mobile devices and quantum computers.

The electronic structure of a material describes how electrons behave within that material, and therefore the nature of the current flowing through it. That behaviour can vary depending upon the voltage - the amount of 'pressure' on its electrons - applied to the material, and so changes to the electronic structure with voltage determine the efficiency of microelectronic circuits.

These changes in electronic structure in operating devices are what underpin all of modern electronics. Until now, however, there has been no way to directly see these changes to help us understand how they affect the behaviour of electrons.

By applying this technique, scientists will have the information they need to develop 'fine-tuned' electronic components that work more efficiently and operate at high performance with lower power consumption. It will also help in the development of two-dimensional semiconductors that are seen as potential components for the next generation of electronics, with applications in flexible electronics, photovoltaics, and spintronics. Unlike today's three-dimensional semiconductors, two-dimensional semiconductors consist of just a few layers of atoms.

Dr Neil Wilson from the University of Warwick's Department of Physics said: "How the electronic structure changes with voltage is what determines how a transistor in your computer or television works. For the first time we are directly visualising those changes. Not being able to see how that changes with voltages was a big missing link. This work is at the fundamental level and is a big step in understanding materials and the science behind them.

"The new insight into the materials has helped us to understand the band gaps of these semiconductors, which is the most important parameter that affects their behaviour, from what wavelength of light they emit, to how they switch current in a transistor."

The technique uses angle-resolved photoemission spectroscopy (ARPES) to 'excite' electrons in the chosen material: a beam of ultraviolet or X-ray light focused on a localised area knocks the excited electrons out of their atoms. Scientists then measure the energy and direction of travel of the emitted electrons and, using the laws of conservation of energy and momentum, work out the energy and momentum those electrons had within the material. This determines the electronic structure of the material, which can then be compared against theoretical predictions based on state-of-the-art electronic structure calculations, performed in this case by the research group of co-author Dr Nicholas Hine.
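
The conservation laws referred to here are the standard photoemission relations (textbook physics rather than anything specific to this study). For a photon of energy hν striking a surface with work function φ, an electron bound at energy E_B is emitted with

    E_{\mathrm{kin}} = h\nu - \phi - |E_B|
    \qquad\text{and}\qquad
    \hbar k_{\parallel} = \sqrt{2 m_e E_{\mathrm{kin}}}\,\sin\theta

where m_e is the electron mass and θ the measured emission angle. Because the in-plane momentum k_∥ is conserved as the electron crosses the surface, maps of photoemission intensity versus E_B and k_∥ trace out the material's band structure.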

The team first tested the technique using graphene before applying it to two dimensional transition metal dichalcogenide (TMD) semiconductors. The measurements were taken at the Spectromicroscopy beamline at the ELETTRA synchrotron in Italy, in collaboration with Dr Alexei Barinov and his group there.

Dr David Cobden, professor in the Department of Physics at the University of Washington, said: "It used to be that the only way to learn about what the electrons are doing in an operating semiconductor device was to compare its current-voltage characteristics with complicated models. Now, thanks to recent advances which allow the ARPES technique to be applied to tiny spots, combined with the advent of two-dimensional materials where the electronic action can be right on the very surface, we can directly measure the electronic spectrum in detail and see how it changes in real time. This changes the game."

Dr Xiaodong Xu, from the Department of Physics and the Department of Materials Science & Engineering at the University of Washington, said: "This powerful spectroscopy technique will open new opportunities to study fundamental phenomena, such as visualisation of electrically tunable topological phase transition and doping effects on correlated electronic phases, which are otherwise challenging."

Credit: 
University of Warwick

A new spin on DNA

video: The molecular motor RNA polymerase rotates around DNA, shifting from one base pair to another.

Image: 
Pallav Kosuri/Zhuang Lab/Harvard University

Every year, robots get more and more life-like. Solar-powered bees fly on lithe wings, humanoids stick backflips, and teams of soccer bots strategize how to dribble, pass, and score. And, the more researchers discover about how living creatures move, the more machines can imitate them all the way down to their smallest molecules.

"We have these amazing machines already in our bodies, and they work so well," said Pallav Kosuri. "We just don't know exactly how they work."

For decades, researchers have chased ways to study how biological machines power living things. Every mechanical movement--from contracting a muscle to replicating DNA--relies on molecular motors that take tiny, near-undetectable steps.

Trying to see them move is like trying to watch a soccer game taking place on the moon.

Now, in a recent study published in Nature, a team of researchers including Xiaowei Zhuang, the David B. Arnold Professor of Science at Harvard University and a Howard Hughes Medical Institute Investigator, and Zhuang Lab postdoctoral scholar Pallav Kosuri and Benjamin Altheimer, a Ph.D. student in the Graduate School of Arts and Sciences, captured the first recorded rotational steps of a molecular motor as it moved from one DNA base pair to another.

In collaboration with Peng Yin, a professor at the Wyss Institute and Harvard Medical School, and his graduate student Mingjie Dai, the team combined DNA origami with high-precision single-molecule tracking, creating a new technique called ORBIT--origami-rotor-based imaging and tracking--to look at molecular machines in motion.

In our bodies, some molecular motors march straight across muscle cells, causing them to contract. Others repair, replicate or transcribe DNA: These DNA-interacting motors can grab onto a double-stranded helix and climb from one base to the next, like walking up a spiral staircase.

To see these mini machines in motion, the team wanted to take advantage of the twisting movement: First, they glued the DNA-interacting motor to a rigid support. Once pinned, the motor had to rotate the helix to get from one base to the next. So, if they could measure how the helix rotated, they could determine how the motor moved.

But there was still one problem: Every time one motor moves across one base pair, the rotation shifts the DNA by a fraction of a nanometer. That shift is too small to resolve with even the most advanced light microscopes.

Two pens lying in the shape of helicopter propellers sparked an idea to solve this problem: A propeller fastened to the spinning DNA would move at the same speed as the helix and, therefore, the molecular motor. If they could build a DNA helicopter, just large enough to allow the swinging rotor blades to be visualized, they could capture the motor's elusive movement on camera.
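
The geometry behind that idea can be sketched with round numbers: the DNA radius of about 1 nm and the rotor arm of about 80 nm (half the length of the origami propeller described below) are assumptions for illustration, and ~35 degrees per base pair is the standard twist of B-DNA.

    # Rough geometry of the rotor-amplification idea (assumed round numbers).
    import math

    twist_deg = 35.0              # standard B-DNA twist per base pair (~360 deg / ~10.4 bp per turn)
    twist_rad = math.radians(twist_deg)

    dna_radius_nm = 1.0           # assumed: B-DNA is roughly 2 nm in diameter
    rotor_arm_nm = 80.0           # assumed: half of the 160 nm origami propeller

    print(f"Arc at the DNA surface per base pair:   ~{dna_radius_nm * twist_rad:.2f} nm")  # ~0.6 nm
    print(f"Arc at the propeller tip per base pair: ~{rotor_arm_nm * twist_rad:.0f} nm")   # ~49 nm

A sub-nanometre arc at the DNA surface is far below what light microscopy can resolve, while a swing of roughly 50 nm at the propeller tip is large enough for single-molecule tracking to follow.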

To build molecule-sized propellers, Kosuri, Altheimer and Zhuang decided to use DNA origami. Used to create art, deliver drugs to cells, study the immune system, and more, DNA origami involves manipulating strands to bind into beautiful, complicated shapes outside the traditional double-helix.

"If you have two complementary strands of DNA, they zip up," Kosuri said. "That's what they do." But, if one strand is altered to complement a strand in a different helix, they can find each other and zip up instead, weaving new structures.

To construct their origami propellers, the team turned to Peng Yin, a pioneer of origami technology. With guidance from Yin and his graduate student Dai, the team wove almost 200 individual DNA snippets into a propeller-like shape 160 nanometers in length. They then attached the propellers to a regular double helix and fed the other end to RecBCD, a molecular motor that unzips DNA. When the motor got to work, it spun the DNA, twisting the propeller like a corkscrew.

"No one had seen this protein actually rotate the DNA because it moves super-fast," Kosuri said.

The motor can move across hundreds of bases in less than a second. But, with their origami propellers and a high-speed camera running at a thousand frames per second, the team could finally record the motor's fast rotational movements.
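
How fast the camera needs to be follows from the same numbers; in the sketch below, the motor speed is an assumed round value standing in for "hundreds of bases in less than a second".

    # Temporal sampling check (motor speed is an assumed round number).
    bases_per_second = 500        # stands in for "hundreds of bases in less than a second"
    twist_deg = 35.0              # rotation per base pair
    fps = 1_000                   # high-speed camera frame rate

    turns_per_second = bases_per_second * twist_deg / 360
    deg_per_frame = bases_per_second * twist_deg / fps
    print(f"~{turns_per_second:.0f} full turns per second, ~{deg_per_frame:.0f} degrees per frame")
    # Roughly 49 turns per second overall, but only ~18 degrees of rotation between consecutive frames.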

"So many critical processes in the body involve interactions between proteins and DNA," said Altheimer. Understanding how these proteins work--or fail to work--could help answer fundamental biological questions about human health and disease.

The team started to explore other types of DNA motors. One, RNA polymerase, moves along DNA to read and transcribe the genetic code into RNA. Inspired by previous research, the team theorized this motor might rotate DNA in 35-degree steps, corresponding to the angle between two neighboring nucleotide bases.

ORBIT proved them right: "For the first time, we've been able to see the single base pair rotations that underlie DNA transcription," Kosuri said. Those rotational steps are, as predicted, around 35 degrees.

Millions of self-assembling DNA propellers can fit into just one microscope slide, which means the team can study hundreds or even thousands of them at once, using just one camera attached to one microscope. That way, they can compare and contrast how individual motors perform their work.

"There are no two enzymes that are identical," Kosuri said. "It's like a zoo."

One motor protein might leap ahead while another momentarily scrambles backwards. Yet another might pause on one base for longer than any other. The team doesn't yet know exactly why they move like they do. Armed with ORBIT, they soon might.

ORBIT could also inspire new nanotechnology designs powered with biological energy sources like ATP. "What we've made is a hybrid nanomachine that uses both designed components and natural biological motors," Kosuri said. One day, such hybrid technology could be the literal foundation for biologically-inspired robots.

Credit: 
Harvard University

Protecting a forgotten treasure trove of biodiversity

The Cerrado is the largest savanna region in South America but, compared to the Amazon Forest to the north, it attracts far less attention. It is home to an incredible diversity of large mammal species, including the jaguar, the endangered maned wolf, the giant anteater, the giant armadillo, and the marsh deer, as well as more than 10,000 species of plants, almost half of which are found nowhere else on Earth. Despite its importance as a global biodiversity hotspot, it is one of the most threatened and over-exploited regions in Brazil. Today, less than 20% of the Cerrado's original area remains undisturbed, and this remaining habitat is at risk of conversion to agriculture, especially for soybean cultivation.

The region has been at the center of the country's recent agricultural boom, with 48% of Brazil's soybean production harvested in the Cerrado in 2015. Unlike the Amazon, where almost half of the area is under some form of conservation protection, only 13% of the Cerrado is protected. Under Brazil's Forest Code - an environmental law designed to protect the country's native vegetation and regulate land use - 80% of the native vegetation on private lands in the Amazon biome has to be protected, but only 20% is required in the majority of the Cerrado. Between 2000 and 2014, almost 30% of soy expansion in the Cerrado occurred at the expense of native vegetation. A similar proportion of soy expansion in the Amazon between 2004 and 2005 led to the implementation of the Amazon Soy Moratorium, a zero-deforestation agreement between civil society, industry, and government that bans the purchase of soy grown on recently deforested land.

In their study, the team led by IIASA researcher Aline Soterroni and Fernando Ramos from Brazil's National Institute for Space Research (INPE) endeavored to quantify the direct and indirect impacts of expanding the Amazon Soy Moratorium to the Cerrado biome, in terms of avoided native vegetation conversion and the consequent loss of soybean production. Their findings indicate that expanding the moratorium to the Cerrado would prevent the direct conversion of 3.6 million hectares of native vegetation to soybeans between 2020 and 2050. Accounting for leakage effects - in other words, the increase in native vegetation loss to other agricultural activities as soybeans expand over already cleared areas - the expanded moratorium would save 2.3 million hectares of the Cerrado. Nationally, this would require a reduction in soybean cultivation area of only around 2% (or 1 million hectares), as there are at least 25.4 million hectares of already cleared land in the region (mainly low-productivity cattle pasture) that would be suitable for agricultural expansion. This suggests that agricultural expansion and conservation of the remaining habitat may both be possible.
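
The leakage accounting in the paragraph above can be made explicit using the figures quoted there; the projected national soybean area at the end is derived from those figures rather than stated in the study.

    # Making the leakage accounting explicit (all inputs are quoted above).
    direct_avoided_mha = 3.6      # direct soy conversion of native vegetation avoided, 2020-2050
    net_saved_mha = 2.3           # net Cerrado vegetation saved once leakage is included

    leakage_mha = direct_avoided_mha - net_saved_mha
    print(f"Loss displaced to other activities (leakage): ~{leakage_mha:.1f} Mha")             # ~1.3 Mha

    soy_cut_mha = 1.0             # required reduction in national soybean area
    soy_cut_share = 0.02          # "around 2%" of the national total
    print(f"Implied projected national soybean area: ~{soy_cut_mha / soy_cut_share:.0f} Mha")  # ~50 Mha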

"According to our model, expanding the Amazon Soy Moratorium to the Cerrado can avoid the loss of a significant amount of native vegetation while simultaneously achieving soybean production goals. We also show that the Forest Code is not enough to protect the area given its low level of legal reserve requirements and its historical lack of enforcement," explains Soterroni. "Our study presents the first quantitative analysis of expanding the soy moratorium from the Amazon to the Cerrado and could be used by traders and consumer markets to adjust their supply chains".

According to the researchers, a growing number of private sector actors are already voluntarily pledging to eliminate deforestation from their supply chains. Furthermore, consumer awareness of deforestation is increasing, giving companies an incentive to source commodities responsibly. Soterroni also points out that the similar relative risks of future native vegetation conversion to soy estimated for China and the EU (37.52 and 37.06 hectares per 1,000 tons annually, respectively) show that both of these entities can play an important role in the responsible sourcing of soy.

To preserve the biodiversity and ecosystem services provided by the remaining parts of the Cerrado, urgent action is needed. Soterroni says that, to this end, a public-private policy mix would be essential to preserve the last remnants of the region, and that in light of the recent absence of strong environmental governance in Brazil, expanding the Soy Moratorium beyond the Amazon to the Cerrado might be more urgent than previously thought. The researchers urge the EU and stakeholders from other regions to encourage the expansion of conservation measures to the Cerrado and to support the call for making soy trade with Brazil more sustainable.

Credit: 
International Institute for Applied Systems Analysis