First ever detailed description of a volcanic eruption from Sierra Negra

A volcanic eruption in the Galápagos Islands has given scientists a fresh insight into how volcanoes behave and provided vital information that will help to predict future hazards on the islands.

Irish scientists based at the Dublin Institute for Advanced Studies (DIAS) and Trinity College Dublin were members of an international research team from Ireland, the United Kingdom, the United States, France and Ecuador that made the discovery.

The research, published today (02.03.21) in Nature Communications, provides the first ever detailed description of a volcanic eruption from Sierra Negra - one of the world's most active volcanoes - located on Isabela Island, the largest of the Galápagos archipelago and home to nearly 2,000 people.

The new understanding developed from the research will allow Ecuadorian volcanologists to track the evolution of unrest before future eruptions in the Galápagos Islands and communicate it to local authorities and the public.

The research process

The eruption in June 2018 began after 13 years of earthquakes and surface uplift marked the gradual accumulation of molten rock (magma) under the volcano. These signals were amongst the largest ever recorded before an eruption. Strong earthquakes allowed new fissures to open in the shield volcano, feeding lava flows that extended 16 km to the coast and remained active for nearly two months.

When the eruption finished, the hills within the 10 km wide caldera of the volcano were nearly two metres higher than at the start. This phenomenon, known as 'caldera resurgence', is important for understanding when and where eruptions happen, but is rare and had never been observed in such detail.

Despite the volcanoes' significance, the Galápagos Islands' remote location means that this is the first eruption there to have been recorded by modern monitoring instruments, including seismometers and GPS. Consequently, there have been no previous multidisciplinary studies of the processes driving Galápagos volcanism.

The international research team combined the latest data recorded by instruments on the ground, by satellites, and by analysis of the chemical composition of the erupted lava. They showed how ascending magma permanently uplifted a 'trapdoor' in the floor of the caldera, raising the surface, and triggering large earthquakes.

Comments from the research team

Professor Chris Bean, Head of the Geophysics Section and Director of the School of Cosmic Physics at DIAS, who was a member of the research team, said:

"It was fantastic to represent DIAS on this international research team, we managed to examine the Sierra Negra volcano with an unprecedented level of detail which has produced some ground-breaking results. Although the volcano had been slowly inflating for over a decade, the final trigger to the eruption was a violent earthquake strong enough to make anything that wasn't tied down hop clear off the ground. Stress changes related to this event unzip subterranean fractures through which magma flowed to the surface in a spectacular eruption."

Dr Michael Stock, Assistant Professor of Geology at Trinity College Dublin, who was also a member of the research team, said:

"This is a genuinely multidisciplinary study which brought together a diverse team of international scientists to produce one of the most detailed records of pre-eruptive processes at an active volcano to date. The data will be invaluable in improving volcano monitoring in Galapagos, where eruptions pose a risk to the unique and fragile ecosystem. However, it also has far-reaching global implications, demonstrating that not all volcanoes are created equally - our current understanding of volcano monitoring data is largely based on well-studied eruptions in Iceland and Hawaii and may need to be urgently reassessed to effectively manage volcanic hazards in other locations."

Credit: 
Trinity College Dublin

UNESCO reveals largest carbon stores found in Australian World Heritage Sites

Australia's marine World Heritage Sites are among the world's largest stores of carbon dioxide, according to a new report from the United Nations co-authored by an ECU marine science expert.

The UNESCO report found Australia's six marine World Heritage Sites hold 40 per cent of the estimated 5 billion tons of carbon dioxide stored in mangrove, seagrass and tidal marsh ecosystems within UNESCO sites.

The report quantifies the enormous amounts of so-called blue carbon absorbed and stored by those ecosystems across the world's 50 UNESCO marine World Heritage Sites.

Despite covering less than 1 per cent of the world's surface, blue carbon ecosystems are responsible for around half of the carbon dioxide absorbed by the world's oceans, and it is estimated they absorb carbon dioxide at a rate about 30 times faster than rainforests.

Australia a 'Blue Carbon' hotspot

Report author and ECU Research Fellow Dr Oscar Serrano said Australia's Great Barrier Reef, Ningaloo Coast and Shark Bay World Heritage areas contained the vast majority of Australia's blue carbon ecosystems.

"We know Australia contains some of the world's largest stores of blue carbon due to the enormous size and diversity of our marine ecosystems," he said.

"However here in Australia and around the world, these ecosystems are under threat from human development and climate change.

"While they're healthy, blue carbon ecosystems are excellent stores of carbon dioxide, but if they are damaged, they can release huge amounts of carbon dioxide stored over millennia back into the atmosphere."

Climate change turns up the heat on seagrass

In 2011, seagrass meadows in the Shark Bay World Heritage Site in Western Australia released up to nine million tons of stored carbon dioxide after a marine heatwave devastated more than 1,000 square kilometres of seagrass.

The UNESCO report's authors have outlined the potential for countries, including Australia, to use the global carbon trading market to fund conservation and restoration efforts at marine World Heritage Sites.

Dr Serrano said both Shark Bay and the Great Barrier Reef ecosystems are at risk due to climate change and human development.

"There are significant opportunities for both the Great Barrier Reef and Shark Bay to be protected and restored to ensure they survive and thrive in the future," he said.

"Australia also has plenty of marine ecosystems in need of protection not contained within a World Heritage Site which are worthy of our attention.

Money to be made in carbon market

Dr Serrano's previous research has highlighted the potential for millions of dollars' worth of conservation and restoration projects in blue carbon ecosystems, which would also help Australia and other countries meet their commitments under the Paris Climate Agreement.

Credit: 
Edith Cowan University

Education, interest in alternative medicine associated with believing misinformation

While many people believe misinformation on Facebook and Twitter from time to time, people with lower education or health literacy levels, a tendency to use alternative medicine or a distrust of the health care system are more likely to believe inaccurate medical postings than others, according to research published by the American Psychological Association.

"Inaccurate information is a barrier to good health care because it can discourage people from taking preventive measures to head off illness and make them hesitant to seek care when they get sick," said lead author Laura D. Scherer, PhD, with the University of Colorado School of Medicine. "Identifying who is most susceptible to misinformation might lend considerable insight into how such information spreads and provide us with new avenues for intervention."

In the study, published in the journal Health Psychology, researchers surveyed 1,020 people in the U.S. between the ages of 40 and 80 about the accuracy of 24 recent Facebook and Twitter postings on HPV vaccines, statin medications and cancer treatment. Researchers shared with participants an equal number of true and false postings for all three medical issues. False claims included asserting that red yeast rice is more effective at lowering cholesterol than statins, that marijuana, ginger and dandelion roots can cure cancer and that HPV vaccines are dangerous.

Participants were asked to evaluate whether the postings were completely false, mostly false, mostly true or completely true. Researchers asked follow-up questions, including participants' education level, interest in alternative treatments, understanding of health care issues, income and age.
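
For illustration, here is a minimal Python sketch of how a susceptibility score of this kind might be computed from the four-point ratings and related to health literacy. The scoring scheme and data below are hypothetical and are not the authors' analysis.

```python
# Hypothetical sketch (not the authors' analysis pipeline): score each
# participant's susceptibility as the mean rating they gave to the false
# posts on a 1-4 scale (1 = completely false, 4 = completely true), then
# correlate that score with a health-literacy measure.
import numpy as np

RATING = {"completely false": 1, "mostly false": 2,
          "mostly true": 3, "completely true": 4}

def susceptibility(ratings_of_false_posts):
    """Mean 1-4 rating given to posts known to be false; higher = more susceptible."""
    return np.mean([RATING[r] for r in ratings_of_false_posts])

# Toy data for two participants (the study used 12 false posts per participant).
p1 = susceptibility(["completely false"] * 10 + ["mostly false"] * 2)
p2 = susceptibility(["mostly true"] * 8 + ["completely true"] * 4)

# With scores and a health-literacy measure for all participants, a simple
# Pearson correlation gives the direction of the association.
scores = np.array([p1, p2, 1.5, 3.2, 2.8])
health_literacy = np.array([0.9, 0.3, 0.8, 0.2, 0.4])
print(np.corrcoef(scores, health_literacy)[0, 1])  # negative => lower literacy, higher susceptibility
```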

Participants with lower education and health literacy levels were more likely to believe the misinformation, the researchers found. Those with a distrust of the health care system or who had positive attitudes toward alternative medicine also tended to believe the misinformation on the three health topics more often than others in the study. Also, participants who fell for misinformation on one health issue tended to be more susceptible to misinformation on the other two health topics.

The findings could help public health officials develop more targeted messaging and outreach for a range of health care issues, according to the researchers.

"People who were susceptible to misinformation tended to be susceptible to all three types we showed them, about a vaccine, statin medications and cancer treatment," Scherer said. "One possible implication is that these individuals are susceptible to many different types of health misinformation, making these findings potentially relevant to other health care issues beyond the ones we studied here. This information could have implications for other public information efforts, such as those currently underway to address COVID-19."

Still, more research needs to be done to fully understand how to interrupt misinformation cycles, Scherer said.

"We hope that researchers can build on these findings and develop novel and evidence-based interventions to reduce the influence and spread of health misinformation online. Such steps could save countless lives," she said.

Credit: 
American Psychological Association

Root cause: Plant root tips are constrained to a dome shape common to arch bridges

image: The common curve of root tip outlines found in 10 plant species.

Image: 
Osaka University and NAIST

Osaka, Japan - Nature is full of diversity, but underneath the differences are often shared features. Researchers from Japan investigating diversity in plant features have discovered that plant root tips commonly converged to a particular shape because of physical restrictions on their growth.

In a study published in February in Development, researchers from Osaka University, Nara Institute of Science and Technology, and Kobe University have revealed that plant root tips are constrained to a dome-shaped outline because of restrictions on their tissue growth. The study is one of a set of papers, selected as a Research Highlight in this issue of Development, that looked at how the geometric and mechanical properties of plant tissues are regulated during development and how they contribute to growth.

In plants and animals, the outlines of organs are defined by shape and size. But despite species differences, these outlines retain a basic similarity (e.g., songbird beaks). Plant root tips are no exception, sharing a domed shape. Root tips need to be able to push through soil effectively without disintegrating, and the similarity of their shape between plant species suggests that it may be constrained by evolution.

To investigate how the shape of root tips is defined, the team used morphometric (i.e., measurements of shape and form) analysis and mathematical modeling. They looked at the shape of primary and lateral root tips in Arabidopsis - small flowering plants similar to mustard and cabbage - and other flowering plant species.

"We found that the shape of the root tips in these species commonly converged to a unique curve by rescaling their size," says lead author of the study Tatsuaki Goh (Figure 1). "This curve can be described as a catenary curve - like that of arch bridges, or of a chain hanging between two points." (Figure 2)

The team also revealed with simulations that, with this shape, mechanical force is evenly spread over the surface of a root tip, and propose that this may help the tip to efficiently push through the soil. Mechanically, the formation of a curve like this in a growing structure needs a distinct boundary between a growing and non-growing region at the lateral edge of the young root, as well as spatially even, one-directional (oriented) tissue growth in the growing root tip (Figure 3).

"The very young roots of Arabidopsis show both of these characteristics, and mutant strains of these plants that disrupt either of these requirements lead to a departure from the dome shape," explains senior author Koichi Fujimoto.

Future studies could examine whether a localized growth boundary and spatially even, one-directional tissue growth are shared constraints for maintaining the dome shape across different species and classes of roots, with potential applications in plant conservation and plant biotechnology.

Credit: 
Osaka University

Will we enjoy work more once routine tasks are automated? - Not necessarily, a study shows

Research conducted at Åbo Akademi University suggests that when routine work tasks are being replaced with intelligent technologies, the result may be that employees no longer experience their work as meaningful.

Advances in new technologies such as artificial intelligence, robotics and digital applications have recently resurrected discussions and speculations about the future of working life. Researchers predict that new technologies will affect, in particular, routine and structured work tasks. According to estimates, 7-35 percent of work tasks in Finland will be automated over the next 10-20 years. Globally, it is anticipated that up to 60 percent of all work tasks will be affected by new technologies.

The discussion has thus far centred around which skills are required in the future working life, or if work as we know it will vanish altogether. A recent study conducted at Åbo Akademi University brings a new perspective to the debate.

"Our values guide many of the selections we make during our lives, including career or occupational choices and the type of competences we value. That's why it is important to understand how the changes brought to work by novel technologies affect future work and if work will correspond to what we today view as meaningful", says Johnny Långstedt, who is a doctoral student in Comparative Religion and a project researcher in Industrial Management at Åbo Akademi University.

Långstedt's study indicates that there is a systematic association between automatability and the prominent values in various occupations. When structured work is automated and replaced by other tasks - mainly creative, social and non-routine tasks - the content of the work may not necessarily fit the values that have been characteristic of automatable occupations. This could result in widespread decreases in commitment and job satisfaction if the changes at work are as comprehensive as researchers have estimated.

"Up to date, we have mostly talked about how nice it is that routine work is being reduced. But what about those who enjoy such work? This is the first study aimed at understanding the ways our values are linked to the work we are expected to carry out in the future", says Långstedt.

Credit: 
Åbo Akademi University

Hot electrons send CO2 back to the future

video: Researchers at KAUST have developed an efficient catalyst that uses light energy to convert carbon dioxide and hydrogen into methane, which counteracts the release of carbon dioxide when methane is burned as a fuel.

Image: 
© 2021 KAUST; Anastasia Serin.

Atmospheric carbon dioxide (CO2) is a major driver of global warming, but this gas could also serve as a valuable resource. Researchers at KAUST have developed an efficient catalyst that uses light energy to convert CO2 and hydrogen into methane (CH4). This counteracts the release of CO2 when methane is burned as a fuel.
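
The CO2-to-methane conversion described here corresponds to the well-known methanation (Sabatier) reaction; the balanced equation below is standard chemistry rather than a detail taken from the KAUST study.

```latex
% Methanation (Sabatier) reaction: one CO2 and four H2 yield methane and water.
\mathrm{CO_2 + 4\,H_2 \;\longrightarrow\; CH_4 + 2\,H_2O}
```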

Many researchers worldwide are exploring ways to convert CO2 into useful carbon-based chemicals, but their efforts have been limited by low efficiencies that restrict the potential for large-scale application.

"Our approach is based on the synergistic combination of light and heat, known as the photothermal effect," says postdoc Diego Mateo. He explains that the heat is generated by the interaction of light with the catalyst, so the two forms of energy come from absorbed light.

Some other industrial approaches require heating from external sources to attain temperatures as high as 500 degrees Celsius. The KAUST research demonstrates that the reaction can be achieved using just the photothermal effect of daylight.

The catalyst is built from nickel nanoparticles on a layer of barium titanate. It captures the light in a way that kicks electrons into high energy states, known as "hot electrons". These electrons then initiate the chemical reaction that sends CO2 back into methane. Under optimum conditions, the catalyst generates methane with nearly 100 percent selectivity and with impressive efficiency.

A major advantage is the wide range of the spectrum of light harnessed, including all visible wavelengths, in addition to the ultraviolet rays that many catalysts are restricted to. This is hugely significant since ultraviolet light comprises only 4 to 5 percent of the energy available in sunlight.

"We strongly believe that our strategy, in combination with other existing CO2 capture techniques, could be a sustainable way to convert this harmful greenhouse gas into valuable fuel," says Mateo.

Any fuels made from CO2 would still release that gas when they are burned, but the CO2 could be repeatedly recycled from the atmosphere to fuel and back again, rather than being continually released by burning fossil fuels.

The researchers are also looking to widen the applications of their approach. "One strategy for our future research is to move towards producing other valuable chemicals, such as methanol," says Jorge Gascon, who led the research team. The researchers also see potential for using light energy to power the production of chemicals that don't contain carbon, such as ammonia (NH3).

Credit: 
King Abdullah University of Science & Technology (KAUST)

Genetic study uncovers hidden pieces of eye disease puzzle

image: The cornea of a keratoconus patient, which is cone-shaped rather than curved.

Image: 
Community Eye Health (Flickr)

Scientists have taken a significant step forward in their search for the origin of a progressive eye condition which causes sight loss and can lead to corneal transplant.

A new study into keratoconus by an international team of researchers, including a University of Leeds group led by Chris Inglehearn, Professor of Molecular Ophthalmology in the School of Medicine, has for the first time detected DNA variations which could provide clues as to how the disease develops.

Keratoconus causes the cornea, which is the clear outer layer at the front of the eye, to thin and bulge outwards into a cone shape over time, resulting in blurred vision and sometimes blindness. It usually emerges in young adulthood, often with lifelong consequences, and affects 1 in 375 people on average, though in some populations this figure is much higher.

It is more common in people with an affected relative, leading scientists to believe there could be a genetic link.

Glasses or contact lenses can be used to correct vision in the early stages. The only treatment is 'cornea cross-linking', a procedure where targeted UV light is used to strengthen the corneal tissue. In very advanced cases a corneal transplant may be needed.

Professor Inglehearn said: "This multinational, multicenter study gives us the first real insights into the cause of this potentially blinding condition and opens the way for genetic testing in individuals at risk."

The team, led by Alison Hardcastle, Professor of Molecular Genetics at UCL Institute of Ophthalmology, and Dr Pirro Hysi at King's College London, and including researchers from the UK, US, Czech Republic, Australia, the Netherlands, Austria and Singapore, compared the full genetic code of 4,669 people with keratoconus to that of 116,547 people without the condition.

The team pinpointed short sequences of DNA that were significantly altered in genomes of people with keratoconus, offering clues about how it develops.
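
To illustrate the general approach of such genome-wide case-control comparisons, here is a minimal Python sketch of a per-variant association test. The counts are invented for demonstration, and this is not the authors' analysis pipeline.

```python
# Illustrative per-variant case/control association test of the kind used in
# genome-wide association studies; NOT the authors' pipeline, and the allele
# counts below are invented.
from scipy.stats import chi2_contingency

def variant_association(case_alt, case_ref, control_alt, control_ref):
    """Return odds ratio and chi-square p-value for one variant's 2x2 allele table."""
    table = [[case_alt, case_ref], [control_alt, control_ref]]
    chi2, p, _, _ = chi2_contingency(table)
    odds_ratio = (case_alt * control_ref) / (case_ref * control_alt)
    return odds_ratio, p

# Hypothetical variant: alternative allele seen more often in cases
# (4,669 cases -> 9,338 alleles; 116,547 controls -> 233,094 alleles).
or_, p = variant_association(case_alt=1200, case_ref=8138,
                             control_alt=18000, control_ref=215094)
print(f"OR={or_:.2f}, p={p:.2e}")
# In a real GWAS, p-values are compared against a genome-wide threshold (~5e-8)
# to account for the millions of variants tested.
```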

The findings indicate that people with keratoconus tend to have faulty collagen networks in their corneas, and that there may be abnormalities in the cells' programming which affect their development. These promising insights were not possible in previous studies due to insufficient sample sizes.

Future work will now aim to understand the precise effects of these DNA variations on corneal biology and pinpoint the mechanism by which keratoconus then develops. It will also be crucial to identify any remaining genetic variations among keratoconus patients that were not picked up in this study.

The work has brought science a step closer to earlier diagnosis and potentially even new therapeutic targets, offering hope to current and future keratoconus patients.

The study, A multi-ethnic genome-wide association study implicates collagen matrix integrity and cell differentiation pathways in keratoconus, was funded by Moorfields Eye Charity, and is published in Communications Biology today.

Dr Hysi said: "The results of this work will enable us to diagnose keratoconus even before it manifests; this is great news because early intervention can avoid blinding consequences."

Professor Hardcastle said: "This study represents a substantial advance of our understanding of keratoconus. We can now use this new knowledge as the basis for developing a genetic test to identify individuals at risk of keratoconus, at a stage when vision can be preserved, and in the future develop more effective treatments."

Professor Stephen Tuft, from Moorfields Eye Hospital, said: "If we can find ways to identify keratoconus early, corneal collagen cross linking can prevent progression of the disease in the great majority of cases.

"We would like to thank the thousands of individuals who attend our Moorfields Eye Hospital cornea clinic and donated a DNA sample, without whom this important study would not have been possible."

Credit: 
University of Leeds

Reinforced by policies, charters segregate schools

ITHACA, N.Y. - The expansion of charter schools in the 2000s led to an increase in school segregation and a slight decline in residential segregation, according to new research from Cornell University providing the first national estimates of the diverging trends.

According to the study, the average district that expanded charter school enrollment between 2000 and 2010 experienced a 12% increase in white-Black school segregation and a 2% decrease in white-Black residential segregation.

The patterns moved in opposite directions, the research found, because charter schools - which receive public funds but operate independently - weaken the traditional link between neighborhood and school assignment, allowing families to choose more racially homogenous schools regardless of where they live.
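
Studies of this kind typically summarise segregation with an index. As an illustration only (the paper's exact measure may differ), the Python sketch below computes the widely used white-Black dissimilarity index across a hypothetical district's schools.

```python
# Illustrative white-Black dissimilarity index across a district's schools.
# D ranges from 0 to 1 and gives the share of either group that would have to
# change schools for every school to mirror the district's overall racial mix.
# This is a standard segregation measure; the study's exact metric may differ.

def dissimilarity(white_counts, black_counts):
    W, B = sum(white_counts), sum(black_counts)
    return 0.5 * sum(abs(w / W - b / B) for w, b in zip(white_counts, black_counts))

# Hypothetical district with four schools (counts are invented).
white = [300, 250, 40, 10]
black = [20, 60, 260, 260]
print(f"D = {dissimilarity(white, black):.2f}")  # closer to 1 = more segregated
```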

The findings highlight education policy's influence beyond schools and offer a "cautionary lesson" about continued charter expansion without efforts to limit racial sorting by families, according to lead author Peter Rich.

Understanding charter schools' effects on segregation is critical, because they represent an increasingly popular educational reform, the researchers said. Charter school enrollment has quadrupled since 2000, serving nearly 6% of students in 2015-2016, and is expected to continue growing and gaining influence.

The researchers analyzed more than 1,500 metropolitan school districts to examine what happened when school choice decoupled neighborhood and school options, using data from the census and the National Center for Education Statistics' Common Core of Data.

The researchers said their findings reveal school and residential segregation as "more like eddies in a stream, circling and reinforcing each other via policies and preferences."

The analysis did not find an effect of charter schools on white-Hispanic school segregation, because Hispanic students on average attend more diverse charter schools. White-Hispanic residential segregation did fall as charter enrollment grew.

Though the reductions in residential segregation were "nontrivial," the researchers said, policy makers should not see school choice as a tool for achieving resident diversity, given how it exacerbated school segregation.

Credit: 
Cornell University

Sensing suns

image: The red supergiant appears as a red starburst between two orange clouds.

Image: 
© 2021 Andrew Klinger

Red supergiants are a class of star that end their lives in supernova explosions. Their lifecycles are not fully understood, partly due to difficulties in measuring their temperatures. For the first time, astronomers have developed an accurate method to determine the surface temperatures of red supergiants.

Stars come in a wide range of sizes, masses and compositions. Our sun is considered a relatively small specimen, especially when compared to something like Betelgeuse, which is known as a red supergiant. Red supergiants are stars over nine times the mass of our sun, and all this mass means that when they die, they do so with extreme ferocity in an enormous explosion known as a supernova - in particular, a Type II supernova.

Type II supernovae seed the cosmos with elements essential for life; therefore, researchers are keen to know more about them. At present there is no way to accurately predict supernova explosions. One piece of this puzzle lies in understanding the nature of the red supergiants that precede supernovae.

Despite the fact that red supergiants are extremely bright and visible at great distances, it is difficult to ascertain important properties about them, including their temperatures. This is due to the complicated structures of their upper atmospheres, which lead to inconsistencies in temperature measurements made with methods that work for other kinds of stars.

"In order to measure the temperature of red supergiants, we needed to find a visible, or spectral, property that was not affected by their complex upper atmospheres," said graduate student Daisuke Taniguchi from the Department of Astronomy at the University of Tokyo. "Chemical signatures known as absorption lines were the ideal candidates, but there was no single line that revealed the temperature alone. However, by looking at the ratio of two different but related lines -- those of iron -- we found the ratio itself related to temperature. And it did so in a consistent and predictable way."

Taniguchi and his team observed candidate stars with an instrument called WINERED, which attaches to telescopes in order to measure spectral properties of distant objects. They measured the iron absorption lines and calculated the ratios to estimate the stars' respective temperatures. By combining these temperatures with accurate distance measurements obtained by the European Space Agency's Gaia space observatory, the researchers calculated the stars' luminosity, or power, and found their results consistent with theory.
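
As a rough illustration of the quantities involved (not the authors' code), the Python sketch below maps a line-depth ratio to a temperature through a purely hypothetical linear calibration and then applies the Stefan-Boltzmann law with an assumed stellar radius. The physical constants are standard; the calibration coefficients and stellar values are invented.

```python
# Hedged sketch: a hypothetical line-ratio-to-temperature calibration followed
# by a Stefan-Boltzmann luminosity estimate. Illustrative physics only; the
# real calibration and stellar parameters come from the paper and Gaia data.
import math

SIGMA_SB = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8                # solar radius, m
L_SUN = 3.828e26               # solar luminosity, W

def teff_from_line_ratio(ratio, a=-2000.0, b=5500.0):
    """Hypothetical linear calibration: Teff [K] = a * ratio + b."""
    return a * ratio + b

def luminosity(radius_m, teff_k):
    """Stefan-Boltzmann law: L = 4 * pi * R^2 * sigma * T^4."""
    return 4 * math.pi * radius_m**2 * SIGMA_SB * teff_k**4

teff = teff_from_line_ratio(0.8)      # ~3900 K, typical of a red supergiant
radius = 800 * R_SUN                  # assumed radius (e.g. from angular size x distance)
print(f"Teff ~ {teff:.0f} K, L ~ {luminosity(radius, teff) / L_SUN:.2e} L_sun")
```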

"We still have much to learn about supernovae and related objects and phenomena, but I think this research will help astronomers fill in some of the blanks," said Taniguchi. "The giant star Betelgeuse (on Orion's shoulder) could go supernova in our lifetimes; in 2019 and 2020 it dimmed unexpectedly. It would be fascinating if we were able to predict if and when it might go supernova. I hope our new technique contributes to this endeavor and more."

Credit: 
University of Tokyo

New catalyst makes styrene manufacturing cheaper, greener

Chemical engineering researchers have developed a new catalyst that significantly increases yield in styrene manufacturing, while simultaneously reducing energy use and greenhouse gas emissions.

"Styrene is a synthetic chemical that is used to make a variety of plastics, resins and other materials," says Fanxing Li, corresponding author of the work and Alcoa Professor of Chemical Engineering at North Carolina State University. "Because it is in such widespread use, we are pleased that we could develop a technology that is cost effective and will reduce the environmental impact of styrene manufacturing." Industry estimates predict that manufacturers will be producing more than 33 million tons of styrene each year by 2023.

Conventional styrene production technologies have a single-pass yield of about 54%. In other words, for every 100 units of feedstock you put into the process, you would get 54 units of styrene out of each pass. Using their new catalyst, the researchers were able to achieve a single-pass yield of 91%.
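
A quick back-of-the-envelope comparison of those single-pass yields, using only the figures quoted above and ignoring the recycle streams used in real plants, is sketched below in Python.

```python
# Back-of-the-envelope comparison using the single-pass yields quoted in the
# article: feedstock needed per 100 units of styrene, conventional vs new
# catalyst. Single pass only; real plants recycle unconverted feedstock.
conventional_yield = 0.54
new_yield = 0.91

feed_conventional = 100 / conventional_yield   # ~185 units of feedstock per pass
feed_new = 100 / new_yield                     # ~110 units of feedstock per pass

print(f"Feedstock per 100 units of styrene: {feed_conventional:.0f} (conventional) "
      f"vs {feed_new:.0f} (new catalyst), a {1 - feed_new / feed_conventional:.0%} reduction")
```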

The conversion process takes place at 500-600 degrees Celsius - the same temperature range as conventional styrene manufacturing processes. However, there is a big difference.

"Current techniques require injecting very large volumes of steam into the reactor where the conversion takes place," says Yunfei Gao, a postdoctoral scholar at NC State and co-lead author of a paper on the work. "Our technique requires no steam. In practical terms, this drastically reduces the amount of energy needed to perform the conversion."

Specifically, the conversion process that incorporates the new catalyst uses 82% less energy - and reduces carbon dioxide emissions by 79%.

"These advances are made possible by the engineered design of the catalyst itself," says Xing Zhu, co-lead author of the paper and a researcher at the Kunming University of Science and Technology (KUST). "The new redox catalyst has a potassium ferrite surface for the catalytic phase and a mixed calcium manganese oxide core for lattice oxygen storage." Zhu worked on the project as a visiting scholar at NC State.

"In order to adopt the new catalyst, styrene manufacturers would need to adopt a different style of reactor than they are currently using," Li says. "They would need something similar to a CATOFIN® reactor. But those are already in widespread use for other industrial applications. And the cost savings from the new process should be significant."

Credit: 
North Carolina State University

New open-source platform accelerates research into the treatment of heart arrhythmias

An open-source platform, OpenEP, co-developed by researchers from the School of Biomedical Engineering & Imaging Sciences at King's College London, has been made available to advance research on atrial fibrillation, a condition characterised by an irregular and often fast heartbeat. It can cause significant symptoms such as breathlessness, palpitations and fatigue, as well as being a major contributor to stroke and heart failure.

Current research into the condition involves the interpretation of large amounts of clinical patient data using software written by individual research groups.

But a new study recently published in Frontiers in Physiology shows that the OpenEP platform, developed in collaboration between King's College London, the University of Edinburgh, Invicro, a Konica Minolta Company, Guy's and St Thomas' NHS Foundation Trust and Imperial College London, is capable of doing close to 90 per cent of the types of analyses that are performed in contemporary electrophysiology studies, enabling researchers to focus on their specific hypothesis or research question.

Having a standardised way of using data processing techniques can also help to make them reproducible for other scientists.

Lead author, Dr Steven Williams, Honorary Senior Lecturer at King's School of Biomedical Engineering & Imaging Sciences said the platform lowers barriers of entry to electrophysiology research.

"For clinicians who may wish to do this sort of research but have not been able to before because of the significant barriers, many of these are now overcome. It is now possible to get the clinical data into a standardised format using the OpenEP and analyse it without writing specialised programmes," Dr Williams said.

Dr Williams said as the code is open source, the research community can verify that the methods are implemented correctly and update them, if required.

The software contained in the platform has been under development for ten years and has been used in a number of electrophysiology research projects at King's College London.

In addition to its impact on atrial fibrillation research, OpenEP is already being used for research into other arrhythmias by collaborating institutions.

Dr Nick Linton, Consultant Cardiologist & Senior Lecturer at Imperial College London and a senior author of the study said: "We hope that OpenEP will foster collaboration with new and existing researchers in this exciting area of cardiology. Arrhythmias are a leading cause of morbidity in the UK, and we are confident that OpenEP will help to accelerate progress towards innovative treatments."

Credit: 
King's College London

Exposure to diverse career paths can help fill labor market 'skills gap'

image: Patrick Rottinghaus is an associate professor in the University of Missouri College of Education.

Image: 
University of Missouri College of Education

COLUMBIA, Mo. -- When Patrick Rottinghaus began college, he had no idea what he wanted to do with his career. He started out as an "Open" major while he explored possibilities.

Today, Rottinghaus, an associate professor in the University of Missouri College of Education, is helping young people eager to find their place in the world by identifying their strengths and connecting them with careers that match their skillset, interests and personality. As the father of three children, including a daughter soon to enter high school, he wants to ensure they are equipped with the knowledge and skills to succeed as they prepare to enter the modern workforce.

In an effort to fill the United States labor market's current "skills gap," Rottinghaus and graduate student Chan Jeong Park collaborated to study how certain tools could better help students to know their strengths. In his study, Rottinghaus partnered with YouScience, a web-based aptitude assessment system, to distribute an online career aptitude test to more than 7,000 high school students across 14 states. The skills gap is defined as the disconnect between the skills employers look for when recruiting potential employees and the number of job-seekers with those skills.

While most career exploration surveys focus mainly on students' interests, the aptitude test Rottinghaus distributed also inquired about the strengths and skills the students possess, which enabled them to explore more potential careers than they had originally considered. The aptitude assessments helped identify female students with the talent for careers in construction, technical health care, manufacturing and computer technology, areas they may not have previously considered based solely on their interests. The assessments also helped identify males with the talent to pursue jobs in patient-centered health care.

"When you look at rapidly growing employment sectors like manufacturing, computer technology, health care and construction, there is a pipeline concern, as we need more young people equipped with the skills to enter these fields," Rottinghaus said. "Not only does the aptitude test help high school students identify potential career paths, but it also helps them identify classes they can take now or in college that will strengthen their skillset and potentially open up doors for their future."

According to Rottinghaus, one approach to mitigating the "skills gap" is to encourage more women and underrepresented groups to pursue high-demand careers in science, technology, engineering and mathematics, or STEM.

"Due to traditional societal norms and gender stereotypes, young women have historically not been as likely to pursue STEM careers," Rottinghaus said. "But when we looked at their aptitude scores, the system would often indicate many of the young women surveyed have the aptitude to be successful in these areas. We can also help men consider more nontraditional fields, too, such as nursing or health care, which tend to be predominantly female."

Rottinghaus believes providing aptitude tests to young students and having a trained counselor review the results with them can help in overcoming gender stereotypes. He encourages his own children to expand their career possibilities by exploring fields not traditionally considered in the past.

"I don't want my daughter to feel constricted in her career exploration by only considering fields traditionally held by women. I want her to consider a full array of opportunities," Rottinghaus said. "My overall goal is to help people intentionally identify aptitudes and interests to find their fit with educational pathways and labor market needs so they can be happy and productive members of society. There are also implications for institutions of higher education, as students who don't know what they want to major in are more likely to drop out before graduation."

Credit: 
University of Missouri-Columbia

OHSU study advances field of precision medicine

Researchers at Oregon Health & Science University have demonstrated a new method of quickly mapping the genome of single cells, while also clarifying the spatial position of the cells within the body.

The discovery, published in the journal Nature Communications, builds upon previous advances by OHSU scientists in single-cell genome sequencing. The study represents another milestone in the field of precision medicine.

"It gives us a lot more precision," said senior author Andrew Adey, Ph.D., associate professor of molecular and medical genetics in the OHSU School of Medicine. "The single-cell aspect gives us the ability to track the molecular changes within each cell type. Our new study also allows the capture of where those cells were positioned within complex tissues, as opposed to a slurry of cells from the entire sample."

Scientists applied a method of indexing large numbers of single cells in hundreds of microbiopsies taken from a portion of the brain in mice and from human brain tissue kept in the OHSU Brain Bank.

Researchers isolated tiny pieces in cross sections of tissues and then used an existing technique of single-cell profiling, previously developed in the Adey lab, to reveal differences in the epigenetic profiles of the cells with respect to their position in the tissue.

Casey Thornton, a graduate student in the Adey lab, led the work and is the study's lead author.

The technique could be especially useful where it is necessary to precisely identify and target cells from specific structures within a tissue, such as in cancer or in cases of stroke, which the authors explored in the study.

"Tracking where cells come from allows us to uncover how diseases progress and alter healthy tissues," Thornton said. "We are excited to see this approach applied to find novel features that define disease progression and can be used for targeted therapies."

Credit: 
Oregon Health & Science University

Pushing computing to the edge by rethinking microchips' design

image: Princeton researchers have created a new chip that speeds artificial intelligence systems called neural nets while slashing power use. The chips could help bring advanced applications to remote devices such as cars and smartphones.

Image: 
Hongyang Jia/Princeton University

Responding to artificial intelligence's exploding demands on computer networks, Princeton University researchers in recent years have radically increased the speed and slashed the energy use of specialized AI systems. Now, the researchers have moved their innovation closer to widespread use by creating co-designed hardware and software that will allow designers to blend these new types of systems into their applications.

"Software is a critical part of enabling new hardware," said Naveen Verma, a professor of electrical and computer engineering at Princeton and a leader of the research team. "The hope is that designers can keep using the same software system - and just have it work ten times faster or more efficiently."

By cutting both power demand and the need to exchange data from remote servers, systems made with the Princeton technology will be able to bring artificial intelligence applications, such as piloting software for drones or advanced language translators, to the very edge of computing infrastructure.

"To make AI accessible to the real-time and often personal process all around us, we need to address latency and privacy by moving the computation itself to the edge," said Verma, who is the director of the University's Keller Center for Innovation in Engineering Education. "And that requires both energy efficiency and performance."

Two years ago, the Princeton research team fabricated a new chip designed to improve the performance of neural networks, which are the essence behind today's artificial intelligence. The chip, which performed tens to hundreds of times better than other advanced microchips, marked a revolutionary approach in several measures. In fact, the chip was so different than anything being used for neural nets that it posed a challenge for the developers.

"The chip's major drawback is that it uses a very unusual and disruptive architecture," Verma said in a 2018 interview. "That needs to be reconciled with the massive amount of infrastructure and design methodology that we have and use today."

Over the next two years, the researchers worked to refine the chip and to create a software system that would allow artificial intelligence systems to take advantage of the new chip's speed and efficiency. In a presentation to the International Solid-State Circuits Virtual Conference on Feb. 22, lead author Hongyang Jia, a graduate student in Verma's research lab, described how the new software would allow the new chips to work with different types of networks and allow the systems to be scalable both in hardware and execution of software.

"It is programmable across all these networks," Verma said. "The networks can be very big, and they can be very small."

Verma's team developed the new chip in response to growing demand for artificial intelligence and to the burden AI places on computer networks. Artificial intelligence, which allows machines to mimic cognitive functions such as learning and judgement, plays a critical role in new technologies such as image recognition, translation, and self-driving vehicles. Ideally, the computation for technology such as drone navigation would be based on the drone itself, rather than in a remote network computer. But digital microchips' power demand and need for memory storage can make designing such a system difficult. Typically, the solution places much of the computation and memory on a remote server, which communicates wirelessly with the drone. But this adds to the demands on the communications system, and it introduces security problems and delays in sending instructions to the drone.

To approach the problem, the Princeton researchers rethought computing in several ways. First, they designed a chip that conducts computation and stores data in the same place. This technique, called in-memory computing, slashes the energy and time used to exchange information with dedicated memory. The technique boosts efficiency, but it introduces new problems: because it crams the two functions into a small area, in-memory computing relies on analog operation, which is sensitive to corruption by sources such as voltage fluctuation and temperature spikes. To solve this problem, the Princeton team designed their chips using capacitors rather than transistors. The capacitors, devices that store an electrical charge, can be manufactured with greater precision and are not highly affected by shifts in voltage. Capacitors can also be very small and placed on top of memory cells, increasing processing density and cutting energy needs.
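
As an idealised illustration of why analog precision matters (this is not a model of the Princeton chip), the Python sketch below performs a neural-network layer's matrix-vector multiply as in-memory-style multiply-accumulate operations, with random perturbations standing in for analog non-idealities such as voltage fluctuation.

```python
# Idealized illustration (not the Princeton chip): a layer's matrix-vector
# multiply performed as analog multiply-accumulate operations, with random
# perturbations standing in for analog non-idealities. More precise elements
# (smaller sigma) give results closer to the exact digital computation.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))     # layer weights stored "in memory"
x = rng.standard_normal(128)           # input activations

def analog_matvec(W, x, sigma):
    """Each output is an accumulation of W[i, j] * x[j], corrupted by element noise."""
    noise = sigma * rng.standard_normal(W.shape)
    return ((W + noise) * x).sum(axis=1)

exact = W @ x
for sigma in (0.001, 0.01, 0.1):
    err = np.abs(analog_matvec(W, x, sigma) - exact).mean()
    print(f"element noise sigma={sigma}: mean output error {err:.4f}")
```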

But even after making analog operation robust, many challenges remained. The analog core needed to be efficiently integrated in a mostly digital architecture, so that it could be combined with the other functions and software needed to actually make practical systems work. A digital system uses off-and-on switches to represent ones and zeros that computer engineers use to write the algorithms that make up computer programming. An analog computer takes a completely different approach. In an article in the IEEE Spectrum, Columbia University Professor Yannis Tsividis described an analog computer as a physical system designed to be governed by equations identical to those the programmer wants to solve. An abacus, for example, is a very simple analog computer. Tsividis says that a bucket and a hose can serve as an analog computer for certain calculus problems: to solve an integration function, you could do the math, or you could just measure the water in the bucket.

Analog computing was the dominant technology through the Second World War. It was used to perform functions from predicting tides to directing naval guns. But analog systems were cumbersome to build and usually required highly trained operators. After the emergence of the transistor, digital systems proved more efficient and adaptable. But new technologies and new circuit designs have allowed engineers to eliminate many shortcomings of the analog systems. For applications such as neural networks, the analog systems offer real advantages. Now, the question is how to combine the best of both worlds.

Verma points out that the two types of systems are complementary. Digital systems play a central role while neural networks using analog chips can run specialized operations extremely fast and efficiently. That is why developing a software system that can integrate the two technologies seamlessly and efficiently is such a critical step.

"The idea is not to put the entire network into in-memory computing," he said. "You need to integrate the capability to do all the other stuff and to do it in a programmable way."

Credit: 
Princeton University, Engineering School

Record-high Arctic freshwater will flow to Labrador Sea, affecting local and global oceans

image: A simulated red dye tracer released from the Beaufort Gyre in the Arctic Ocean (center top) shows freshwater transport through the Canadian Arctic Archipelago, along Baffin Island to the western Labrador Sea, off the coast of Newfoundland and Labrador, where it reduces surface salinity. At the lower left is Newfoundland (triangular land mass) surrounded by orange for fresher water, with Canada's Gulf of St. Lawrence above colored yellow.

Image: 
Francesca Samsel and Greg Abram

Freshwater is accumulating in the Arctic Ocean. The Beaufort Sea, which is the largest Arctic Ocean freshwater reservoir, has increased its freshwater content by 40% over the past two decades. How and where this water will flow into the Atlantic Ocean is important for local and global ocean conditions.

A study from the University of Washington, Los Alamos National Laboratory and the National Oceanic and Atmospheric Administration shows that this freshwater travels through the Canadian Archipelago to reach the Labrador Sea, rather than through the wider marine passageways that connect to seas in Northern Europe. The open-access study was published Feb. 23 in Nature Communications.

"The Canadian Archipelago is a major conduit between the Arctic and the North Atlantic," said lead author Jiaxu Zhang, a UW postdoctoral researcher at the Cooperative Institute for Climate, Ocean and Ecosystem Studies. "In the future, if the winds get weaker and the freshwater gets released, there is a potential for this high amount of water to have a big influence in the Labrador Sea region."

The finding has implications for the Labrador Sea marine environment, since Arctic water tends to be fresher but also rich in nutrients. This pathway also affects larger oceanic currents, namely a conveyor-belt circulation in the Atlantic Ocean in which colder, heavier water sinks in the North Atlantic and comes back along the surface as the Gulf Stream. Fresher, lighter water entering the Labrador Sea could slow that overturning circulation.

"We know that the Arctic Ocean has one of the biggest climate change signals," said co-author Wei Cheng at the UW-based Cooperative Institute for Climate, Ocean and Atmosphere Studies. "Right now this freshwater is still trapped in the Arctic. But once it gets out, it can have a very large impact."

Fresher water reaches the Arctic Ocean through rain, snow, rivers, inflows from the relatively fresher Pacific Ocean, as well as the recent melting of Arctic Ocean sea ice. Fresher, lighter water floats at the top, and clockwise winds in the Beaufort Sea push that lighter water together to create a dome.

When those winds relax, the dome flattens and the freshwater is released into the North Atlantic.

"People have already spent a lot of time studying why the Beaufort Sea freshwater has gotten so high in the past few decades," said Zhang, who began the work at Los Alamos National Laboratory. "But they rarely care where the freshwater goes, and we think that's a much more important problem."

Using a technique Zhang developed to track ocean salinity, the researchers simulated the ocean circulation and followed the Beaufort Sea freshwater's spread in a past event that occurred from 1983 to 1995.
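
To convey the general idea of releasing a dye-like tracer and following it in a flow field (the study itself used a full ocean-circulation model with Zhang's salinity-tracking technique), here is a toy two-dimensional advection sketch in Python with an assumed uniform flow.

```python
# Toy illustration of following a dye-like tracer in a prescribed 2D flow field
# using a simple first-order upwind advection step. The flow field, grid and
# release region are all assumptions for demonstration; this is not the
# ocean-circulation model used in the study.
import numpy as np

nx, ny, dx, dt = 100, 100, 1.0, 0.4
u = np.full((ny, nx), 0.5)          # assumed uniform eastward flow component
v = np.full((ny, nx), -0.3)         # assumed uniform southward flow component

tracer = np.zeros((ny, nx))
tracer[70:80, 10:20] = 1.0          # "dye" released in one corner of the domain

def upwind_step(c, u, v, dx, dt):
    """One first-order upwind advection step for tracer field c (periodic edges)."""
    cx = np.where(u > 0, c - np.roll(c, 1, axis=1), np.roll(c, -1, axis=1) - c)
    cy = np.where(v > 0, c - np.roll(c, 1, axis=0), np.roll(c, -1, axis=0) - c)
    return c - dt / dx * (u * cx + v * cy)

for _ in range(100):
    tracer = upwind_step(tracer, u, v, dx, dt)

# Centre of mass of the tracer shows where the flow has carried the freshwater proxy.
ys, xs = np.nonzero(tracer > 0.01)
print(f"tracer centre of mass: x={xs.mean():.1f}, y={ys.mean():.1f}")
```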

Their experiment showed that most of the freshwater reached the Labrador Sea through the Canadian Archipelago, a complex set of narrow passages between Canada and Greenland. This region is poorly studied and was thought to be less important for freshwater flow than the much wider Fram Strait, which connects to the Northern European seas.

In the model, the 1983-1995 freshwater release traveled mostly along the North American route and significantly reduced the salinities in the Labrador Sea -- a freshening of 0.2 parts per thousand on its shallower western edge, off the coast of Newfoundland and Labrador, and of 0.4 parts per thousand inside the Labrador Current.

The volume of freshwater now in the Beaufort Sea is about twice the size of the case studied, at more than 23,300 cubic kilometers, or more than 5,500 cubic miles. A release of this volume into the North Atlantic could have significant effects, though the exact impact is unknown. The study focused on past events, and current research is looking at where today's freshwater buildup might end up and what changes it could trigger.

"A freshwater release of this size into the subpolar North Atlantic could impact a critical circulation pattern, called the Atlantic Meridional Overturning Circulation, which has a significant influence on Northern Hemisphere climate," said co-author Wilbert Weijer at Los Alamos National Lab.

Credit: 
University of Washington