Tech

Information recorded over time in medical records reveals more about diseases

Electronic health records (EHRs) contain important information about patients' health outlook and the care they receive, but the records are not always precise. A new study describes an approach that uses machine learning, a type of artificial intelligence, to carefully track patients' medical records over time in EHRs to predict their likelihood of having or developing different diseases. The study was led by researchers at Massachusetts General Hospital (MGH) and is published in Cell Patterns.

"Over the past decade, billions of dollars have been spent to institute meaningful use of EHR systems. For a multitude of reasons, however, EHR data are still complex and have ample quality issues, which make it difficult to leverage these data to address pressing health issues, especially during pandemics such as COVID-19, when rapid responses are needed," said lead author Hossein Estiri, PhD, of the MGH Laboratory of Computer Science. "In this paper, we propose an algorithm for exploiting the temporal information in the EHRs that is distorted by layers of administrative and healthcare system processes."

The strategy connects information from EHRs on patients' medications and diagnoses over time, rather than from independent health records. Analyses revealed that this sequential approach can accurately compute the likelihood that a patient may actually have an underlying disease.

"Our study doesn't rely on single diagnostic codes but instead relies on sequences of codes with the expectation that a sequence of relevant characteristics over time is more likely to represent reality than a single element," Dr. Estiri said. "Additionally, the computer sorts through thousands of patients and can find sequences that a physician would likely never identify on their own as relevant, but actually are associated with the disease."

As an example, coronary artery disease followed by chest pain in the medical record was more useful for predicting the development of heart failure than either of the factors on their own or in a different order.
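
As a rough, purely illustrative sketch of what a sequence-based feature looks like in code, the snippet below counts ordered pairs of diagnosis codes in a toy patient timeline and feeds them to a standard classifier. The code names, the tiny cohort, and the choice of scikit-learn's logistic regression are assumptions for demonstration; this is not the MGH algorithm itself.

```python
# Illustrative sketch only: build ordered-pair "sequence" features from a
# patient's coded timeline rather than treating each code independently.
# The code labels and the toy cohort are invented for demonstration.
from itertools import combinations
from collections import Counter

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def sequence_features(timeline):
    """Count ordered pairs (earlier_code -> later_code) in a visit timeline.

    `timeline` is a list of (date, code) tuples; dates sort lexicographically.
    """
    codes = [code for _, code in sorted(timeline)]
    return Counter(f"{a}->{b}" for a, b in combinations(codes, 2))

# Toy cohort: two patients who later developed heart failure (label 1), two who did not (label 0).
patients = [
    ([("2015-01", "CAD"), ("2015-06", "chest_pain")], 1),
    ([("2014-03", "CAD"), ("2015-09", "chest_pain")], 1),
    ([("2015-02", "chest_pain"), ("2015-08", "CAD")], 0),
    ([("2016-01", "hypertension"), ("2016-07", "chest_pain")], 0),
]

X = [sequence_features(timeline) for timeline, _ in patients]
y = [label for _, label in patients]

model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(X, y)

new_patient = [("2017-01", "CAD"), ("2017-05", "chest_pain")]
print(model.predict_proba([sequence_features(new_patient)]))
```

In this toy example the ordered pair "CAD followed by chest pain" appears only in the positive cases, so the model learns to weight that sequence, mirroring the heart-failure example above.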

The method can therefore identify disease markers that are interpretable by clinicians. This could lead to new computational models for identifying and validating new disease markers and for advancing medical discoveries. The proposed way of thinking about medical records could also help identify patients in a community who are at risk of developing a variety of other diseases and recommend their evaluation by healthcare providers.

Credit: 
Massachusetts General Hospital

Fighting the COVID-19 pandemic through testing

image: Testing for the novel coronavirus, SARS-CoV-2 (shown here in an electron microscopy image), can help scientists trace the pathogen's spread and stop the chain of transmission.

Image: 
National Institute of Allergy and Infectious Diseases, NIH

The world is now in the grips of a historic pandemic. The death toll from the novel coronavirus has climbed to more than 117,000 in the United States and 448,000 around the world. Total cases of the disease, called COVID-19, have soared past 2 million in the US and 8.3 million globally. Debates are now raging about whether US states have begun to move too quickly to reopen restaurants, stores, barbershops, and the myriad other engines of life and commerce after weeks of lockdown.

But there is one area of widespread agreement, says Robert Tjian, a Howard Hughes Medical Institute Investigator at the University of California, Berkeley: the safe path out of the pandemic requires enormous amounts of testing. In the May 1, 2020, issue of the journal RNA, Tjian, study coauthor Meagan Esbin, and their colleagues reviewed recent advances in COVID-19 testing techniques and highlighted barriers facing widespread testing. To trace the pathogen's spread and stop the chain of transmission, it's crucial to test both for the SARS-CoV-2 virus itself and for evidence that people have previously been infected, Tjian explains.

The countries that have so far successfully quashed their outbreaks, such as New Zealand, Taiwan, South Korea, and Iceland, have done the best job of identifying cases. In contrast, "the United States has done quite poorly," says Lawrence Gostin, professor of medicine and public health expert at Georgetown University.

That failing is not for lack of effort in the scientific community. Scores of researchers around the country have dropped what they were doing to tackle the challenge in the US, Tjian says. In fact, he adds, in compiling the many studies described in his group's paper, he was "surprised at how quickly so many labs have converted to working on COVID-19."

These labs have devised innovative new approaches for testing, as well as for overcoming the bottlenecks that hampered testing efforts early in the pandemic. Some labs, like at Berkeley, have set up their own rapid testing operations to serve local communities, quickly publishing their methods "so that everyone doesn't have to reinvent the wheel," says Tjian. These and many other efforts are helping to answer some of the basic questions about fighting the pandemic.

Why is testing so important?

SARS-CoV-2 is an especially pernicious virus. It is both highly contagious and relatively lethal, with a mortality rate that's still uncertain but higher than that of flu - 10 times or more higher, some data suggest. But the virus's wiliest feature is that it can be spread by people who don't even know they are infected. In contrast, victims of the original SARS virus in 2003 weren't contagious until severe symptoms struck, making it easy to isolate those people and cut the chain of transmission.

In the United States, the number of confirmed coronavirus cases has surpassed two million; case density is shown in red in Johns Hopkins University's dashboard of cases by country. Credit: Johns Hopkins University

"That people can have COVID-19 without symptoms is one of the most challenging aspects of preventing spread," explains Eric Topol, founder and director of the Scripps Research Translational Institute. One unknowingly infected person can infect dozens of others, as shown by "superspreading" events like a choir practice in Washington state, with 32 confirmed cases, and a man who visited several South Korean nightclubs, infecting more than 100 people.

In addition, testing may spot SARS-CoV-2 only when an infected person is actively producing lots of the virus, says Tjian. That's why three types of testing are vital, he says. People with any COVID-19 symptoms should be tested, to spot new cases as soon as possible. People who have been in contact with an infected person also should be tested, even if they have no symptoms. And finally, he says, health care providers should test people for antibodies to the virus, to identify those who may have already been infected.

How do scientists test for the new coronavirus?

SARS-CoV-2 reproduces by getting into human cells, then hijacking the cells' machinery to make many copies of its genetic material, called RNA. Scientists have designed several testing methods to spot this distinctive viral RNA. The method used in almost all testing to date and considered the "gold standard" relies on a technique for amplifying tiny amounts of viral genes. First, a swab collects infected cells from a person's throat, gathering bits of viral RNA. That genetic material is typically purified and then copied from RNA into complementary DNA. The DNA is then copied millions of times using a standard method known as polymerase chain reaction (PCR). Finally, a fluorescent probe is added that emits a telltale glow when DNA copies of the viral RNA are present.

PCR isn't the only viable approach. Scientists at MIT and other universities have also repurposed the gene editing technique called CRISPR to quickly detect SARS-CoV-2. CRISPR uses engineered enzymes to cut DNA at precise spots. The testing approach harnesses that ability to hunt for a specific bit of genetic code, in this case a viral RNA, using an enzyme that fluoresces when it finds the distinctive SARS-CoV-2 target. In early May, the Food and Drug Administration gave emergency authorization to the test developed by the MIT team, which is led by HHMI Investigator Feng Zhang.

Another testing technique quickly reads each RNA "letter" of the viral genome, using a process called genetic sequencing. That's overkill for detecting the virus, but it has been particularly helpful at charting the virus's relentless march around the globe. And some researchers are experimenting with clever DNA "nanoswitches" that can flip from one shape to another and generate a fluorescent glow when they spot a piece of viral RNA.

Scientists can also see telltale signs of infection in the blood. Once people have been infected, their immune systems respond by creating antibodies designed to neutralize the virus. Antibody tests detect that immune response in blood samples using a protein engineered to bind to SARS-CoV-2 antibodies. Creating an antibody test that's both sensitive and accurate can be tricky, however.

Though coronavirus testing in the US has struggled to reach the levels needed, "the science is not the complicated part," says Tjian. "Like anything else in research, there is more than one solution." Instead, the real problem has been accelerating the pace of testing.

What is the US's track record on testing?

Even as the virus rampaged through Wuhan, China, in January 2020 and started to kill Americans in February (or perhaps even earlier), the US government failed to prepare for the spreading pandemic. There was essentially "no response" from the federal government, Tjian says. "You could not have imagined a worse leadership team to be dealing with this worldwide pandemic."

The Trump Administration declined to use a PCR-based test developed by the World Health Organization (WHO), for example, and a test produced by the U.S. Centers for Disease Control and Prevention (CDC) turned out to be faulty. The lack of a coordinated national effort left states, companies, and university labs scrambling to fill the gap.

As labs and states in the US raced to boost their testing capabilities, they ran into bottlenecks and roadblocks. For example, "only a few supply houses were providing the reagents [needed for the PCR reactions] and supplies were woefully inadequate," says Tjian. Even basic equipment, like the swabs used for collecting samples, was hard to find. "That was one thing that caught us by surprise," recalls Tjian. "Who would have imagined that the most rate-limiting piece of this whole puzzle was the swab?" It turned out that the major producer of swabs approved by the CDC was a factory in northern Italy, a region among those hardest hit by the virus.

Without sufficient testing, there was a "tragic data gap undermining the U.S. pandemic response," writes health service researcher Eric C. Schneider in a commentary in the May 15 issue of the New England Journal of Medicine. Instead of being able to test every person with symptoms and all those they had been in contact with, as countries like South Korea did, the shortage meant reserving tests for hospitalized patients and for helping prevent health care workers from transmitting COVID-19, he explains.

The lack of data on case numbers has made it challenging to model the path of the pandemic, writes Schneider, of the Commonwealth Fund, a private foundation aimed at improving the health care system. As a result, it has been difficult to anticipate where emergency medical services, hospital beds, and ventilators are most needed.

By mid-May, the testing capacity in the US had finally risen from a few thousand a day to about 300,000 a day. Still, that's far short of what's needed. The Harvard Roadmap to Pandemic Resilience estimates, for example, that the country will require testing at a rate of "20 million a day to fully remobilize the economy." To safely reopen, "we need massive testing capacities that don't currently exist," says Georgetown's Gostin, one of the authors of the report.

How can scientists overcome testing bottlenecks?

Scientists around the world have responded to the challenges posed by the novel coronavirus. The Berkeley group, for example, dramatically boosted its testing capacity and reduced costs to near $1 per test with improvements such as skipping one step - RNA purification - and making their own reagents. "It's not rocket science, but it took us five weeks to figure out the details because commercial companies don't tell you what's in their reagents," explains Tjian. The research team has made their home-brewed test freely available to any lab that wants to replicate it.

Meanwhile, groups at Rutgers, Yale (including HHMI Investigator Akiko Iwasaki), and other centers have eliminated the need for throat swabs by showing that saliva samples work just as well. That opens the door wider to home testing, since spitting into a tube and mailing it to a lab is far easier than swabbing.

Progress is also being made in testing for antibodies. Most of the dozens of so-called serology tests initially on the market didn't have the sensitivity and specificity to pick out only those antibodies directed at SARS-CoV-2. The challenge is that the tests require using copies of a viral protein that binds to the antibodies. One key to solving that problem, it turns out, is using mammalian cells to make the viral protein with the precise shape needed to home in on just the SARS-CoV-2 antibodies.

How will testing help tame the pandemic?

The basic strategy for overcoming COVID-19 is identifying infected people, finding and testing anyone they came in contact with, and quarantining infected individuals. That's not practical for big cities or entire countries, given the staggering numbers of needed tests, logistical challenges, and thorny privacy issues. But there are clever ways to cast a wider net without so many individual tests.

One is lumping together many samples in a pool, so that large groups of people can be monitored with only one test. Then, if the virus does show up in the pool, public health officials can test the individuals in that group to pinpoint the infections.
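
A quick back-of-the-envelope calculation shows why pooling pays off. The sketch below computes the expected number of tests per person under a simple two-stage (Dorfman-style) pooling scheme; the 1% prevalence figure and the pool sizes are illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope sketch of why pooling saves tests (two-stage,
# Dorfman-style pooling). Prevalence and pool sizes are illustrative.
def expected_tests_per_person(prevalence, pool_size):
    """Expected tests per person: one pooled test per group, plus individual
    retests of every member only if the pooled test comes back positive."""
    p_pool_negative = (1.0 - prevalence) ** pool_size
    expected_tests_per_group = 1.0 + (1.0 - p_pool_negative) * pool_size
    return expected_tests_per_group / pool_size

for n in (5, 10, 20):
    print(f"pool of {n:2d}: {expected_tests_per_person(0.01, n):.2f} tests per person")
# With 1% prevalence, pools of 10 need roughly 0.2 tests per person instead of 1.
```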

Perhaps even more powerful is monitoring sewage. The virus can appear in a person's feces within three days of infection - far earlier than the onset of first symptoms. Scientists could use the standard PCR test on sewage samples to detect the virus. And by collecting samples from specific locations, such as manholes, scattered throughout a community, it would be possible to narrow down the location of any infections to a few blocks or even individual buildings, like an apartment complex or a college dorm. "You can determine the viral load and how it is changing over time with one test a day," says Tjian. "That would be amazing."

Tjian and many others are now figuring out how these approaches might be used to safely reopen a university or a business. Large-scale testing efforts would be labor-intensive and not inexpensive, he says, but far cheaper than locking down a whole economy - and far safer than reopening without adequate testing, as some states are now doing. And as scientists continue to increase testing capacities and create cheaper and better tests, this strategy should soon be within reach.

Credit: 
Howard Hughes Medical Institute

22,000 tiny temblors illustrate 3D fault geometry and earthquake swarm evolution

By mapping the more than 22,000 temblors, researchers composed a detailed, three-dimensional image of the complex fault structure below southern California's Cahuilla Valley. According to a new study, the four-year-long earthquake swarm that rocked the region was likely triggered by the dynamic interaction between the fault's intricate architecture and natural subterranean fluids, revealing new insights into how these enigmatic seismic events evolve.

Despite being known as highly complex three-dimensional structures, earthquake faults are often simplified into two-dimensional features in most standard models of general fault architecture. However, these idealized representations generally fail to explain the dynamic seismicity of earthquake swarms - prolonged periods of localized seismic activity that can occasionally persist over several years. While many of a swarm's temblors are small, the overall length of the phenomenon and the potential severity of individual seismic events cannot be predicted, making swarms a public safety concern. According to the authors, understanding 3D fault geometry is essential to understanding the complex seismic evolution of earthquake swarms.

Zachary Ross and colleagues used advanced earthquake detection algorithms to catalog more than 22,000 individual seismic events during the 2016-2019 Cahuilla swarm in southern California. Using machine learning to plot the location, depth and size of the temblors, Ross et al. generated a high-resolution, 3D representation of the underlying fault zone structure. The results reveal a complex yet permeable fault structure and suggest that dynamic pressure changes from natural fluid injections from below largely controlled the evolution of the Cahuilla swarm. The methods offer a new way to characterize other similar faults and seismic events around the world.

Credit: 
American Association for the Advancement of Science (AAAS)

Forest loss escalates biodiversity change

image: A red squirrel in the Caledonian forests, Scotland

Image: 
Gergana Daskalova

New international research reveals the far-reaching impacts of forest cover loss on global biodiversity.

The research, led by the University of Edinburgh and the University of St Andrews, investigated the impacts of forest loss on species and biodiversity over time and around the world, revealing both losses and gains in species.

Focussing on biodiversity data spanning 150 years and over 6,000 locations, the study, published in the journal Science (18th June), reveals that as tree cover is lost across the world's forests, plants and animals are responding to the transformation of their natural habitats.

Forest loss amplifies the gains and losses of biodiversity - the numbers of individual plant and animal species, as well as the wider diversity and composition of ecosystems around the planet.

Forests support around 80% of all species living on land, including eagles, bluebells, beetles and many more. This biodiversity provides important ecosystem services and some species, such as the rosalia longicorn beetle, survive best in intact old forests. However, forests are being altered by human activities, for example deforestation for the cultivation of agricultural crops or the conversion to rangeland for grazing cattle. The research reveals that forest loss amplified both gains and losses in the abundance of different species as well as in the overall biodiversity.

This study used the BioTIME and Living Planet biodiversity databases - that contain data collected by researchers working at sites around the world. Bringing together over 5 million records of the numbers of different plants and animals with information on both historic and contemporary peaks in forest loss, the researchers analysed the worldwide impacts of forest loss on biodiversity.

The international research team discovered both immediate and delayed effects of forest loss on ecosystems, indicating that biodiversity responses to human impacts are diverse and play out across decades.

Findings also reveal that some tropical areas experience more forest loss now than they have ever seen in the past, resulting in declining numbers of different animal species. In North America and Europe, the greatest loss of forests often occurred centuries ago, however even the smaller amounts of forest loss in the present day led to different biodiversity responses, escalating gains in certain species and losses in others.

The pace at which biodiversity responds to forest loss varies from a few years, as is the case for many short-lived grasses, light-loving plants and insects, to decades for long-living trees and larger birds and mammals.

For long-lived species, the effects of forest loss do not happen right away and could take decades to become apparent in the biodiversity data that scientists collect.

Gergana Daskalova, PhD student in the School of GeoSciences at the University of Edinburgh and lead author of the study, said: "Biodiversity, the types of species like different plants and animals around the world, is always changing and the species we see on our forest walks today are likely different from the ones we saw growing up.

"We're harnessing the power of generations of scientists recording data as they walk through forests. This allowed us to find signals amidst the noise and pick apart the influence of forest loss from the natural variation in biodiversity over time.

"Surprisingly, we found that forest loss doesn't always lead to biodiversity declines. Instead, when we lose forest cover, this can amplify the ongoing biodiversity change. For example, if a plant or animal species was declining before forest loss, its decline becomes even more severe after forest loss. That same intensification of the signal was also true for increasing species.

"Changes in the biodiversity of the planet's forests matter because they will echo through how these landscapes look, the types of species they support and the benefits that forests provide for society like clean air and water."

Dr Isla Myers-Smith, co-senior author, from the School of GeoSciences at the University of Edinburgh, continued: "To get a global picture of how the planet is changing we need to combine different types of information from observations of plants and animals on the ground through to satellite records of ecosystem change from space. Our study brings together these two perspectives to make new insights into how biodiversity responds when forests are lost around the world.

"Ecology is being reshaped by the new tools available to us as researchers. From satellite observations through to high-performance computers, we ecologists can now ask questions with larger and more complex datasets. We are now coming to a new understanding of how ecosystems are responding to human impacts around the planet."

Dr Maria Dornelas, co-senior author from the School of Biology at the University of St Andrews, added: "Humans are undoubtedly changing the planet. Yet, global analyses of how biodiversity is changing over time, like our study, are revealing biodiversity changes are nuanced and variable.

"With a better understanding of the different ways, both positive and negative, in which forest loss influences biodiversity, we can improve future conservation and restoration of global ecosystems.

"Only with collaborative science combining datasets from around the world can we assess both the state of the world's forests and the millions of plants and animals they support."

Credit: 
University of Edinburgh

Forests can be risky climate investments to offset greenhouse gas emissions

image: Science/Policy Nexus

Image: 
David Meikle, The University of Utah

Given the tremendous ability of forests to absorb carbon dioxide from the atmosphere, some governments are counting on planted forests as offsets for greenhouse gas emissions--a sort of climate investment. As with any investment, however, it's important to understand the risks. If a forest goes bust--through severe droughts or wildfires, researchers say--much of that stored carbon could go up in smoke.

Professor Scott Goetz of Northern Arizona University's School of Informatics, Computing, and Cyber Systems and associate professor Deborah Huntzinger of NAU's School of Earth and Sustainability co-authored a paper published in Science finding that forests can be best deployed in the fight against climate change with a proper understanding of the risks to forests that climate change itself imposes.

"There have been optimistic assessments of how valuable forests could be in mitigating climate change over coming decades," said Goetz, "but all of those have somewhat surprisingly overlooked or underestimated the factors that constrain forest carbon sequestration in the face of extreme temperatures, drought, fire and insect disturbance. This paper tempers that enthusiasm while also more realistically recognizing the potential of forests to remove massive amounts of heat trapping carbon dioxide from the atmosphere."

A workshop held in 2019 gathered some of the foremost experts on climate change risks to forests. The diverse group, including Goetz and Huntzinger, represented various disciplines, including law, economics, science and public policy, and the workshop enabled the participants to start talking and come up with a roadmap. This paper, part of that roadmap, calls attention to the risks forests face from myriad consequences of rising global temperatures, including fire, drought, insect damage and human disturbance - a call to action to bridge the divide between the data and models produced by scientists and the actions taken by policymakers.

"Terrestrial ecosystems absorb and store about a third of the carbon emissions human activities release each year," said Huntzinger, "reducing the amount of carbon dioxide that accumulates in the atmosphere year after year. As a result, land ecosystems serve as a thermostat of sorts, regulating climate by helping to control carbon dioxide levels."

Because of this, governments in many countries are looking to "forest-based natural climate solutions" that include preventing deforestation, managing natural forests and reforesting. Forests could be some of the more cost-effective climate mitigation strategies, with co-benefits for biodiversity, conservation and local communities.

But built into this strategy is the idea that forests are able to store carbon for at least 50 to 100 years. Such permanence is not always a given, with the very real chance that the carbon stored in forest mitigation projects could go up in flames or be lost due to insect infestations, severe drought or hurricanes in the coming decades.

Forests have long been vulnerable to all of these factors, and have been able to recover from them when they are episodic or come one at a time. However, the risks connected with climate change, including drought and fire, increase over time. Multiple threats at once, or insufficient time for forests to recover from those threats, can kill the trees, release carbon and undermine the entire premise of forest-based natural climate solutions.

"Not fully accounting for the range of climate- and human-driven risks to forests can result in an overestimation of the carbon storage potential of forest-based mitigation projects," said Huntzinger. "Good science can better help identify and quantify risks to forest carbon stocks and lead to better policy decisions."

The paper's authors encourage scientists to focus increased attention on assessing forest climate risks and share the best of their data and predictive models with policymakers so that climate strategies including forests can have the best long-term impact. For example, the climate models that scientists use are detailed and cutting-edge, but aren't widely used outside the scientific community--so policymakers might be relying on science that is decades old.

"There are at least two key things you can do with this information," said lead author William Anderegg of The University of Utah. "The first is to optimize investment in forests and minimize risks. Science can guide and inform where we ought to be investing to achieve different climate aims and avoid risks."

The second, he said, is to mitigate risks through forest management. "If we're worried about fire as a major risk in a certain area, we can start to think about what are the management tools that make a forest more resilient to that disturbance." More research, he said, is needed in this field, and the collaborators plan to work toward answering those questions.

"We view this paper as an urgent call to both policymakers and the scientific community," Anderegg said, "to study this more, and improve in sharing tools and information across different groups."

Credit: 
Northern Arizona University

Natural fluid injections triggered Cahuilla earthquake swarm

A naturally occurring injection of underground fluids drove a four-year-long earthquake swarm near Cahuilla, California, according to a new seismological study that utilizes advances in earthquake monitoring with a machine-learning algorithm. In contrast to mainshock/aftershock sequences, where a large earthquake is followed by many smaller aftershocks, swarms typically do not have a single standout event.

The study, which will be published on June 19 in the journal Science, illustrates an evolving understanding of how fault architecture governs earthquake patterns. "We used to think of faults more in terms of two dimensions: like giant cracks extending into the earth," says Zachary Ross, assistant professor of geophysics and lead author of the Science paper. "What we're learning is that you really need to understand the fault in three dimensions to get a clear picture of why earthquake swarms occur."

The Cahuilla swarm, as it is known, is a series of small temblors that occurred between 2016 and 2019 near Mt. San Jacinto in Southern California. To better understand what was causing the shaking, Ross and colleagues from Caltech, the United States Geological Survey (USGS), and the University of Texas at Austin used earthquake-detection algorithms with deep neural networks to produce a highly detailed catalog of more than 22,000 seismic events in the area ranging in magnitude from 0.7 to 4.4.

When compiled, the catalog revealed a complex but narrow fault zone, just 50 meters wide with steep curves when viewed in profile. Plotting those curves, Ross says, was crucial to understanding the reason for the years of regular seismic activity.

Typically, faults are thought to either act as conduits for or barriers to the flow of underground fluids, depending on their orientation to the direction of the flow. While Ross's research supports that generally, he and his colleagues found that the architecture of the fault created complex conditions for underground fluids flowing within it.

The researchers noted the fault zone contained undulating subterranean channels that connected with an underground reservoir of fluid that was initially sealed off from the fault. When that seal broke, fluids were injected into the fault zone and diffused through the channels, triggering earthquakes. This natural injection process was sustained over about four years, the team found.

"These observations bring us closer to providing concrete explanations for how and why earthquake swarms start, grow, and terminate," Ross says.

Next, the team plans to build off these new insights and characterize the role of this type of process throughout the whole of Southern California.

Credit: 
California Institute of Technology

Achievement isn't why more men are majoring in physics, engineering and computer science

video: Researchers at New York University's Steinhardt School found that the reason there are more undergraduate men than women majoring in physics, engineering and computer science is not because men are higher achievers. On the contrary, the scholars found that men with very low high-school GPAs in math and science and very low SAT math scores were choosing these math-intensive majors just as often as women with much higher math and science achievement. This video offers additional insight.

Image: 
New York University

While some STEM majors have a one-to-one male-to-female ratio, physics, engineering and computer science (PECS) majors consistently have some of the largest gender imbalances among U.S. college majors - with about four men to every woman in the major. In a new study published today in the peer-reviewed journal Science, NYU researchers find that this disparity is not caused by higher math or science achievement among men. On the contrary, the scholars found that men with very low high-school GPAs in math and science and very low SAT math scores were choosing these math-intensive majors just as often as women with much higher math and science achievement.

"Physics, engineering and computer science fields are differentially attracting and retaining lower-achieving males, resulting in women being underrepresented in these majors but having higher demonstrated STEM competence and academic achievement," said Joseph R. Cimpian, lead researcher and associate professor of economics and education policy at NYU Steinhardt.

Cimpian and his colleagues analyzed data from almost 6,000 U.S. high school students over seven years - from the start of high school into the students' junior year of college. When the researchers ranked students by their high-school math and science achievement, they noticed that male students in the 1st percentile were majoring in PECS at the same rate as females in the 80th percentile, demonstrating a stark contrast between the high academic achievement of the female students majoring in PECS compared to their male peers.

The researchers also reviewed the data for students who did not intend to major in PECS fields but later decided to. They found that the lowest-achieving male student was at least as likely to join one of these majors as the highest-achieving female student.

The rich dataset the researchers used was collected by the U.S. Department of Education, and it contained measures of many factors previously linked to the gender gap in STEM. The NYU team tested whether an extensive set of factors could explain the gender gap equally well among high, average, and low achieving students. While the gender gap in PECS among the highest achievers could be explained by other factors in the data, such as a student's prior career aspirations and confidence in their science abilities, these same factors could not explain the higher rates of low-achieving men in these fields.

This new work suggests that interventions to improve gender equity need to become more nuanced with respect to student achievement.

"Our results suggest that boosting STEM confidence and earlier career aspirations might raise the numbers of high-achieving women in PECS, but the same kinds of interventions are less likely to work for average and lower achieving girls, and that something beyond all these student factors is drawing low-achieving men to these fields," said Cimpian.

"This new evidence, combined with emerging literature on male-favoring cultures that deter women in PECS, suggests that efforts to dismantle barriers to women in these fields would raise overall quality of students," continued Cimpian.

Credit: 
New York University

A deep-learned E-skin decodes complex human motion

video: A deep-learning powered single-strained electronic skin sensor can capture human motion from a distance. The single strain sensor placed on the wrist decodes complex five-finger motions in real time with a virtual 3D hand that mirrors the original motions. The deep neural network boosted by rapid situation learning (RSL) ensures stable operation regardless of its position on the surface of the skin.

Image: 
Professor Sungho Jo, KAIST

A deep-learning powered single-strained electronic skin sensor can capture human motion from a distance. The single strain sensor placed on the wrist decodes complex five-finger motions in real time with a virtual 3D hand that mirrors the original motions. The deep neural network boosted by rapid situation learning (RSL) ensures stable operation regardless of its position on the surface of the skin.

Conventional approaches require many sensor networks that cover the entire curvilinear surfaces of the target area. Unlike conventional wafer-based fabrication, this laser fabrication provides a new sensing paradigm for motion tracking.

The research team, led by Professor Sungho Jo from the School of Computing, collaborated with Professor Seunghwan Ko from Seoul National University to design this new measuring system that extracts signals corresponding to multiple finger motions by generating cracks in metal nanoparticle films using laser technology. The sensor patch was then attached to a user's wrist to detect the movement of the fingers.

The concept of this research started from the idea that pinpointing a single area would be more efficient for identifying movements than affixing sensors to every joint and muscle. To make this targeting strategy work, the system needs to accurately capture the signals from different areas at the point where they all converge, and then decouple the information entangled in the converged signals. To maximize usability and mobility for users, the research team used a single-channel sensor to generate the signals corresponding to complex hand motions.

The rapid situation learning (RSL) system collects data from arbitrary parts on the wrist and automatically trains the model in a real-time demonstration with a virtual 3D hand that mirrors the original motions. To enhance the sensitivity of the sensor, researchers used laser-induced nanoscale cracking.

This sensory system can track the motion of the entire body with a small sensory network and facilitate the indirect remote measurement of human motions, which is applicable for wearable VR/AR systems.

The research team said they focused on two tasks while developing the sensor. First, they encoded the sensor signal patterns into a latent space encapsulating temporal sensor behavior, and then they mapped the latent vectors to finger motion metric spaces.
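
A minimal sketch of that two-stage idea is shown below, under the assumption of a recurrent encoder followed by a small regression head; the layer sizes and the GRU/MLP choices are illustrative, not the published KAIST architecture.

```python
# Illustrative two-stage sketch: encode a window of the single strain signal
# into a latent vector, then map that vector to per-finger motion metrics.
# Architecture choices here are assumptions for demonstration only.
import torch
import torch.nn as nn

class StrainToFingers(nn.Module):
    def __init__(self, latent_dim=32, n_fingers=5):
        super().__init__()
        # Stage 1: temporal encoder -> latent vector summarizing the signal window.
        self.encoder = nn.GRU(input_size=1, hidden_size=latent_dim, batch_first=True)
        # Stage 2: map the latent vector to one motion metric per finger.
        self.head = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                  nn.Linear(64, n_fingers))

    def forward(self, x):           # x: (batch, time, 1) strain samples
        _, h = self.encoder(x)      # h: (1, batch, latent_dim)
        return self.head(h[-1])     # (batch, n_fingers)

model = StrainToFingers()
window = torch.randn(8, 200, 1)     # 8 windows of 200 strain samples each
print(model(window).shape)          # torch.Size([8, 5])
```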

Professor Jo said, "Our system is expandable to other body parts. We already confirmed that the sensor is also capable of extracting gait motions from a pelvis. This technology is expected to provide a turning point in health-monitoring, motion tracking, and soft robotics."

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Researchers design a system to reduce the noise of space rockets in the launch phase

image: Researchers at the Gandia campus of the Universitat Politècnica de València (UPV) have developed a new system to reduce the noise of space rockets during the first phases of launching. The prototype was presented in the doctoral thesis of Iván Herrero, who holds a PhD in Mathematics from UPV, and will increase the safety of launching space vehicles

Image: 
ESA

The thesis focuses on methods that reduce the noise level of space rockets during the first phases of launch (engine ignition and takeoff). According to Iván Herrero, at those moments the acoustic pressure levels experienced by space vehicles are extremely high and could seriously affect the lightweight structures onboard, such as solar panels and antennas, making it necessary to reduce the noise levels.

"During the launch of space rockets, over 150 dB of sound pressure level are reached at certain frequencies. It is the highest level sound event produced by a human being, only behind some natural events like an earthquake," explains Iván Herrero.

In addition, the intense sound generated by the primary sources - the engine and the exhaust jet - is amplified by reflection at the base of the launch site, which acts like a mirror from the acoustic point of view and returns the released energy to the rocket and the structures onboard, with economic and safety consequences.

Prototype

The prototype designed by Iván Herrero, under the supervision of his thesis directors, is based on an array of Helmholtz resonators that maximizes sound absorption and diffusion in order to mitigate the sound pressure levels generated during launch.

"The presence of Helmholtz resonators, as well as their specific distribution, produces a reduction of the speed of sound diffusion. This is due to the friction of acoustic waves with the resonator walls, which produces a deceleration. The design of this system was done by optimizing a specific frequency range, which they have been able to reduce by an average of 20 decibels," explains Iván Herrero.

Despite the importance of this problem, knowledge about the characteristics of the sources, the behavior of the ground installations with regard to sound diffusion and absorption, and the possible measures to mitigate the impact is still limited. The research work developed at the Gandia campus of UPV and the European Space Agency responds to this need.

Iván Herrero's thesis was directed by Rubén Picó Vila, Víctor Sánchez Morcillo and Lluís García Raffi, from the Universitat Politècnica de València, and by Vicente Romero García, from the Laboratoire d'Acoustique de l'Université du Maine (France).

NPI program

Iván Herrero developed part of his research work at the technical centre of the European Space Agency (ESA) in the Netherlands, thanks to the agreement signed between UPV and ESA within the Networking/Partnering Initiative (NPI) program, in which several universities and research institutes participate to develop advanced technologies with potential space applications.

During the thesis work, Iván did two research stays in the European Space Research and Technology Centre (ESTEC) of ESA.

Credit: 
Universitat Politècnica de València

No disadvantages to having kids early

When some species are heavily hunted, animal mortality increases and they have fewer offspring in the course of their lives.

To compensate for this, animals that are hunted often respond by becoming sexually mature and bearing young earlier than species that are not hunted. In other words, animals being hunted have a "faster" life history.

Until now we've believed that animals that grew fast did so at a cost. The idea has been that rapid physical maturity happens at the expense of a long-lived body. The early-growth individuals should therefore be more prone to disease and earlier natural death, partly due to a poorer immune system and increased physiological stress.

This pattern may indeed apply to some species. But an NTNU study of wild boar recently published in Oecologia shows that this is not always the case.

Wild boar are starting to establish themselves in Norway after spreading via Sweden. But this research team examined around 1,000 wild boars in the Châteauvillain-Arc-en-Barrois forest in France.

The most common cause of death for this wild boar population is hunting by humans.

"Evidence indicates that hunting by humans affects the population. The hunting pressure means it's advantageous for wild swine to become sexually mature and have young earlier," says Lara Veylit, first author of the study and a PhD candidate in the Department of Biology at the Norwegian University of Science and Technology (NTNU).

Normally, researchers would expect wild boars in this population to reproduce earlier, but also to die earlier as a result of higher stress on the body. What they observed went against the conventional knowledge in the field.

"We found that males that grew rapidly actually had lower mortality due to both hunting and natural causes," says Veylit.

The faster-growing males also lived longer on average, whether taken by hunters or dying from natural causes.

This finding may indicate that the healthiest and most sexually attractive males are the early-maturing ones. They are also the best at hiding from hunters.

The mortality for the females was not affected by whether they developed early or late.

Females that reproduce early can be much smaller when they are young. The females only needed to reach 35 to 40 per cent of their adult weight when having their first litter.

But the strain this places on the body does not seem to make them more vulnerable or prone to early mortality.

Credit: 
Norwegian University of Science and Technology

Cyclosporin study may lead to novel ways of approaching mitochondrial dysfunction

image: Long-range NOEs commonly observed in cyclosporins B-E in apolar media (solution in chloroform or in the complex with DPC micelles in water).

Image: 
Kazan Federal University

Cyclic peptide molecules of fungal origin called cyclosporins were discovered in the 1970s, and cyclosporin A soon became an important drug due to its immunosuppressive activity. The details of the biochemical reactions involving cyclosporin were elucidated by the beginning of the 1990s, but some aspects of the behavior of this molecule still raise questions. The investigation started in the Nuclear Magnetic Resonance Lab (guided by Professor Vladimir Klochkov) at Kazan Federal University in 2008 and was dedicated mainly to the physico-chemical properties of cyclosporin A (CsA). Recently, we extended the study to cyclosporin variants with different compositions.

Fungi producing cyclosporins exist in two reproductive stages: asexual - the soil fungi from which cyclosporin was initially extracted - and sexual - parasitic fungi close to the popularly known genus Cordyceps. Unlike most polypeptides, which are synthesized on ribosomes following information directly encoded in nucleic acids, cyclosporins are produced on a special enzyme, cyclosporin synthetase. This process is less accurate, hence the final product is usually a mix of several compounds with slightly different chemical compositions.

These small variations, however, have a dramatic effect on the behavior of the peptide used as a drug. Only CsA was found to be an effective immunosuppressant helping people with transplants. Two questions arise immediately: what is the key peculiarity of CsA, and is immunosuppression the only use which we can find in cyclosporins?

It is a well-established fact that the activity of a biochemical compound depends strongly on its three-dimensional molecular structure. Nuclear magnetic resonance (NMR) spectroscopy offers a possibility to reconstruct the structure of a molecule as it exists in a solution or in a membrane-mimicking medium. We found that different cyclosporins have similar structures in similar environments such as solution in chloroform or complex with phospholipid micelles serving as model membranes.

Cyclization of peptides gives them unique properties. In the case of cyclosporin, we have an additional reason for unusual molecular behavior: the absence of several amide protons, which in typical peptides can form hydrogen bonds and thus stabilize the structure. The shape of the cyclosporin molecule is more rigid compared to linear oligopeptides due to cyclization, but at the same time it still remains relatively flexible. This flexibility is observed in NMR studies as the coexistence of multiple molecular forms of cyclosporin in polar solvents. Among all these conformations, only one may be active as a drug, and thus conformational equilibrium influences the medical efficacy of a compound. We estimated the energy barrier dividing the conformers to be on the order of 70-80 kJ/mol, which reveals the nature of the transformation: rotation of peptide bonds by 180°. However, some cyclosporin variants show unexpected behavior: cyclosporin E (CsE) was found to be rigid in the polar solvent DMF, while CsH exists as multiple conformers in apolar chloroform, unlike all other studied variants.

Molecular dynamics simulation reveals peptide chain flexibility on a nanosecond time scale, which is not observed directly by NMR. Simulation of cyclosporin molecules confirmed that the CsE molecule is more rigid than the other studied variants (A, B, C, D, H). The most interesting result obtained so far is that these peculiarities correlate with the biochemical action of cyclosporins in living organisms. Experiments carried out by a co-author from Mari State University (Mikhail Dubinin) revealed that cyclosporins B, C, and D inhibited pore opening in rat liver mitochondria - an effect observed earlier in the presence of CsA. However, CsE did not show this biological activity.

First, this research sheds light on the structure-activity relationship, or, in a wider sense, on the structure-dynamics-activity correlation. Conformation of the molecule can be a crucial element in some interactions, which is proved by the inability of most cyclosporins other than CsA to participate in the immune response: substitution of a single amino acid prevents needed interaction with target protein, cyclophilin. On the other hand, this substitution turned out to be unimportant for the interaction with the mitochondrial pore complex, in which other factors should be considered: general chain flexibility and the ability of the molecule to penetrate through phospholipid cell membrane.

Second, the possibility of regulating the mitochondrial activity is of great interest itself, because it allows finding a way to treat diseases related to mitochondrial dysfunction. Cyclosporin A does not suit this role due to its effect on the immune system, but maybe its congeners of the wide cyclosporin family could be useful.

In view of the recent findings, the research can be continued in several directions. More variants of cyclosporin can be characterized by NMR and molecular dynamics to reveal elements of composition (e.g., presence of additional amide protons in the peptide chain) responsible for different properties. Structures of the peptides buried in the interior of a model membrane are of special interest, since many biochemical phenomena occur in this environment. Finally, studies of the influence of cyclosporins on living systems such as mitochondria should be continued, too.

Credit: 
Kazan Federal University

Spacecraft get a boost in 'aerogravity assisted' interactions

In a recent paper published in EPJ Special Topics, Jhonathan O. Murcia Piñeros, a post-doctoral researcher at the Space Electronics Division, Instituto Nacional de Pesquisas Espaciais, São José dos Campos, Brazil, and his co-authors map the energy variations of spacecraft orbits during 'aerogravity assisted' (AGA) manoeuvres - a technique in which a spacecraft gains energy from a close encounter with a planet or other celestial body, via that body's atmosphere and gravity.

In 2019, Voyager 2 became the second man-made object to leave the solar system, following its counterpart Voyager 1. The energy to carry these probes was obtained via interactions with the solar system's giant planets - an example of a pure gravity assisted manoeuvre.

The topic approached by the paper is one that has been tackled from a number of different angles before, but the team took the novel approach of considering a passage inside the atmosphere of a planet and the effects of the spacecraft's rotation as it performs such a manoeuvre. During the course of simulating over 160,000 AGA manoeuvres around the Earth, the team adjusted parameters such as masses, sizes and angular momentum, to see how this would affect the 'drag' on the spacecraft, thus changing the amount of energy imparted.

The researchers discovered that the larger the area-to-mass ratio (A/m, the inverse of area density) they employed in their models, the greater the drag on the probe - and thus the greater the energy loss due to drag and the lower the resulting velocity. At the same time, a larger A/m may increase the energy gained from gravity, because the spacecraft's velocity vector is rotated more strongly during the encounter. The same effect also increased the region in which energy losses occurred whilst simultaneously reducing the area in which maximum velocity can be achieved.

Their results also indicate that, because atmospheric density falls off at greater altitudes, drag can be reduced by a trajectory that brings a craft through the atmosphere at higher altitudes. Such a passage can eventually approach the trajectory given by a pure gravity assist.
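
The role of the area-to-mass ratio can be seen in the standard expression for drag deceleration, given here as general background rather than as an equation reproduced from the paper: larger A/m, or denser air at lower altitude, means stronger deceleration.

```latex
% Standard aerodynamic drag deceleration (general background, not quoted from
% the paper): rho is local atmospheric density, v the speed relative to the
% atmosphere, C_D the drag coefficient, and A/m the area-to-mass ratio.
\[
  a_{\mathrm{drag}} = \tfrac{1}{2}\,\rho\,v^{2}\,C_{D}\,\frac{A}{m}
\]
```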

As the Voyager missions show, when performed at maximum efficiency, AGA manoeuvres have the potential to send mankind beyond the reaches of our solar system into the wider galaxy.

Credit: 
Springer

Popular doesn't mean influential among Cambodian farmers

image: Junjian Zhang led the social network survey in the field and collected data from over 120 farmers.

Image: 
Junjian Zhang, University of Sydney

It's become common practice for NGOs and environmental development agencies to use 'influencers' for the roll out of environmentally sustainable farming practices, but this isn't always the most effective method, say social network analysts from the University of Sydney.

Published in the International Journal of Agricultural Sustainability, their research examined the role of social network brokers - well-connected individuals within a community - in the adoption of innovative farming practices in Battambang Province in North-Western Cambodia. The authors, Dr Petr Matous, Junjian Zhang and Associate Professor Daniel Tan found that less popular farmers were better influencers, compared to their more popular peers.

"Similar to marketers on social media, the international development industry and environment conservation organisations have become enamoured with the idea of leveraging local 'influencers' to deliver programs ranging from behavioural interventions, to the promotion of new technologies," said Faculty of Engineering academic and environmental and humanitarian engineer, Dr Petr Matous.

"External organisations often don't have the capacity to support every single farmer in a village and show them how a new technology works. Instead, they often select several 'model farmers', who they choose based on whether they are community leaders or regularly offer advice," he said.

"They then give these 'popular' and seemingly influential farmers new technologies in the hope they will adopt them and disseminate the knowledge or technology around the village using their social networks.

The researchers found that providing less popular farmers with new information and technologies was more likely to result in a wider community adoption of sustainable farming practices.

"Farmers who move between diverse sub-communities and were more open-minded were the most receptive to the early adoption of the recommended farming practices such as crop rotation or drip irrigation and they are not the same group as the most 'popular' farmers," said Dr Matous.

"This might be the case because popular farmers may be reluctant or tired of being repeatedly used by external agencies. Whether in Cambodia or anywhere else, the fact that someone is locally prominent does not necessarily mean that they are interested in new environmental or resource-conserving practices.

"The findings suggest that we should not excessively rely only on the handful of prominent farmers in the hope that new technologies will magically trickle down from them to others, who are often their competitors. To tackle environmental degradation and looming food insecurity, we need to better engage larger sections of the communities."

Implementing sustainable practices

Rice farming is Battambang's main agricultural activity, although many farmers apply practices that degrade soil health and water resources, often leading to insufficient yields. Coupled with environmental degradation and the current COVID-19 pandemic, the region's food security has deteriorated.

To combat this, since 2017, University of Sydney researchers have been working with Battambang farmers to diversify their crops and adopt practices that will better sustain their livelihoods and the local environment.

"One practice we have worked to implement is crop rotation: alternately planting different crops on the same land in between rice growing seasons, for example, mungbean, watermelon, rice and cucumber," said Associate Professor Daniel Tan from Sydney Institute of Agriculture and the Faculty of Science.

"This practice ensures that organic matter in the soil is preserved, which improves soil structure and nutrient content, and prevents soil erosion. It also allows the producers to gain additional income in between rice harvest when their fields would be otherwise unused," said PhD student and the study's lead author, Junjian Zhang.

"Another practice that we studied and promoted was drip irrigation: a low-cost system of small perforated hoses laid between crops that bring water to the root zone, with minimal loss by evaporation and surface run off," he said.

Credit: 
University of Sydney

Decide now or wait for something better?

Be it booking flight tickets, buying a car or finding a new apartment, we always come up against the same question: Should I strike while the iron's hot, or wait until a better offer comes along? People often find it difficult to make decisions when options are presented not simultaneously but one after another. This becomes even more difficult when time is limited and an offer that you turn down now may no longer be available later.

"We have to make decisions like this countless times every day, from the small ones like looking for a parking space to the big ones like buying a house or even choosing a partner," says Christiane Baumann, a doctoral candidate in the Department of Psychology of the University of Zurich. "However, until now, the way we behave in such situations has never been thoroughly examined." Under the leadership of cognitive psychologist Bettina von Helversen (previously UZH, now University of Bremen) and in collaboration with Professor Sam Gershman (Harvard University), Baumann carried out numerous experiments to investigate this issue. Using the results, she then developed a simple mathematical model for the strategy that people use when they make decisions.

Is there an optimal process?

It is easy, using a computer, to find the best-possible process for making decisions of this type. "But the human brain is not capable of carrying out the complex calculations that are required, so humans use a rather simplified strategy," says Baumann.

Baumann simulated purchasing situations with up to 200 participants in each test in order to find out what strategies people use. In one test, the participants were told to try to get a flight ticket as cheaply as possible - they were given 10 offers one after the other in which the price fluctuated; meanwhile the fictional departure date was getting nearer and nearer. In another test, people had to get the best possible deal on products such as groceries or kitchen appliances, with the fluctuating prices taken from an online shop.

Expectations driven down

The evaluation of the experiments confirmed that the test participants did not use the optimal, yet complex, strategy calculated by the computer. Instead, Baumann discovered that they use a "linear threshold model": "The price that I am prepared to pay increases every day by the same amount. That is, the further along I am in the process, the higher the price I will accept," explains Baumann.

This principle can be applied not only to purchasing decisions, but also to situations such as the choice of an employer or a life partner: "At the beginning perhaps my standards are high. But over time they may lower so that in the end I may settle for someone I would have rejected in the beginning."
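
The rule itself is easy to state in code. The sketch below accepts the first offer that falls below a threshold rising by a fixed step each period; the starting threshold, step size, and simulated prices are illustrative assumptions, not the parameters estimated from the experiments.

```python
# Minimal sketch of the "linear threshold" rule described above: accept the
# first price below a threshold that rises by a fixed amount each period.
# All numbers are illustrative, not fitted values from the study.
import random

def linear_threshold_choice(prices, start_threshold, step):
    """Return (period, price) of the accepted offer, falling back to the
    last offer if none ever dips below the threshold."""
    for t, price in enumerate(prices):
        threshold = start_threshold + step * t
        if price <= threshold:
            return t, price
    return len(prices) - 1, prices[-1]

random.seed(0)
offers = [round(random.uniform(80, 120), 2) for _ in range(10)]  # 10 ticket offers
period, paid = linear_threshold_choice(offers, start_threshold=85, step=3)
print(f"offers: {offers}")
print(f"accepted offer {period + 1} at price {paid}")
```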

A model to simulate the human strategy

Baumann analyzed the experimental data and developed a mathematical model that describes human behavior in various scenarios. "That helps us to better understand decision-making," says Baumann. The model also allows us to predict the circumstances in which we tend to buy a product too early - or when we delay too long and then have to take whatever is left in the end.

Baumann thinks these findings could help people make difficult decisions in future: "In the current digital world the amount of information available for decision-making can be overwhelming. Our work provides a starting point for a better understanding of when people succeed or fail in such tasks. That could enable us to structure decision-making problems, for example in online shopping, in such a way that people are supported in navigating the flood of data."

Credit: 
University of Zurich

Simulating cooperation in local communities

Many goods and service providers in China rely on supplies from local governments, but these are often limited by financial budgets - especially in rural villages. Members of the public must cooperate with their governments and each other in order for this system to run smoothly, but unfortunately, this balance is threatened by a small proportion of individuals who take in welfare without contributing fairly to their communities. In new research published in EPJ B, Ran Yang and colleagues at Tianjin University, China, introduce a new simulation-based approach which could help to solve this issue, through a cost-effective system which rewards individuals who use welfare systems responsibly.

The team's work could help to improve the efficiency and fairness of goods and service operations in China, without requiring external funds for reward and punishment systems. Their system works by assigning ranked reputation scores to individuals, which are quantified by their previous levels of cooperation and made known to the public. When payoffs are made by local governments, lower-reputation individuals will be required to transfer some of this welfare directly to those with higher reputations. This provides a significant incentive for people to improve their reputations.

Yang and colleagues designed the system using computer simulations of a 'public goods game.' By tuning the parameters of the simulation, they explored how various mechanisms of payment transfer between 'players' of differing reputations would cause public cooperation as a whole to evolve over time. This allowed them to determine how these transfers could be optimised to ensure as many players as possible came to improve their reputations, without incurring any significant costs. The study could ultimately provide useful insights for local governments and organisers as to how they can ensure that their supplies to public goods and services providers can benefit their communities the most.
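
The flavour of such a simulation can be conveyed with a toy public goods game in which reputations track past cooperation and a fraction of each round's payoff is shifted from low-reputation to high-reputation players. Everything in the sketch below - group size, transfer rate, reputation update, and imitation rule - is an illustrative assumption rather than the model published in EPJ B.

```python
# Toy public-goods-game sketch of the reputation-based transfer idea described
# above. Parameters and update rules are illustrative assumptions only.
import random

N, ROUNDS = 50, 200
ENDOWMENT, MULTIPLIER, TRANSFER = 1.0, 3.0, 0.2
random.seed(1)

cooperate = [random.random() < 0.5 for _ in range(N)]   # initial strategies
reputation = [0.0] * N

for _ in range(ROUNDS):
    # Cooperators contribute their endowment; the pot is multiplied and shared.
    pot = MULTIPLIER * ENDOWMENT * sum(cooperate)
    share = pot / N
    payoff = [share + (0.0 if c else ENDOWMENT) for c in cooperate]

    # Reputation grows with cooperation and decays otherwise.
    reputation = [0.9 * r + (1.0 if c else 0.0) for r, c in zip(reputation, cooperate)]

    # Players below the median reputation transfer part of their payoff
    # to those at or above it.
    median_rep = sorted(reputation)[N // 2]
    low = [i for i in range(N) if reputation[i] < median_rep]
    high = [i for i in range(N) if reputation[i] >= median_rep]
    moved = sum(TRANSFER * payoff[i] for i in low)
    for i in low:
        payoff[i] *= (1.0 - TRANSFER)
    for i in high:
        payoff[i] += moved / len(high)

    # Each player imitates a randomly chosen player who earned more.
    new = cooperate[:]
    for i in range(N):
        j = random.randrange(N)
        if payoff[j] > payoff[i]:
            new[i] = cooperate[j]
    cooperate = new

print(f"final cooperation rate: {sum(cooperate) / N:.2f}")
```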

Credit: 
Springer