Migraine linked to increased risk of high blood pressure after menopause

MINNEAPOLIS - Women who have migraine before menopause may have an increased risk of developing high blood pressure after menopause, according to a study published in the April 21, 2021, online issue of Neurology®, the medical journal of the American Academy of Neurology.

"Migraine is a debilitating disorder, often resulting in multiple severe headaches a month, and typically experienced more often by women than men," said study author Gianluca Severi, Ph.D. of the French National Institute of Health and Medical Research in Paris. "Migraine is most prevalent in women in the years before menopause. After menopause, fewer women experience migraines, however this is when the prevalence of high blood pressure in women increases. Migraine is a risk factor for cardiovascular disease. Therefore, we wanted to determine if a history of migraine is linked to an increased risk of high blood pressure after menopause."

The study involved 56,202 women who did not have high blood pressure or cardiovascular disease at the onset of menopause. Of this group, 46,659 women had never had migraine and 9,543 had experienced migraine. Women were followed for up to 20 years and completed health surveys every two to three years. By the end of the study, 11,030 women reported experiencing migraine.

A total of 12,501 women developed high blood pressure during the study. This included 9,401 of the women with no migraine and 3,100 of the women with migraine. Women with migraine also developed high blood pressure at a younger age than women without migraine. The average age of diagnosis for women without migraine was 65 and for women with migraine was 63.

Researchers calculated the risk of developing high blood pressure using person-years, which represent both the number of people in the study and the amount of time each person spends in the study.

During the 826,419 person-years in the study, there was an overall rate of 15 cases of high blood pressure diagnosed for every 1,000 person-years. For women without migraine, the rate was 14 cases per 1,000 person-years, compared with 19 cases per 1,000 person-years for women with migraine.
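As a rough illustration of how these rates follow from the reported counts, the arithmetic can be reproduced in a few lines of Python. The totals are the ones quoted above; group-specific person-years are not reported, so the helper function is only a sketch for computing group rates once that information is available.

```python
# Back-of-envelope check of the incidence rates quoted above.
cases_total = 12_501          # women who developed high blood pressure
person_years_total = 826_419  # total follow-up time in the study

def incidence_rate(cases: int, person_years: float) -> float:
    """Cases per 1,000 person-years of follow-up."""
    return cases / person_years * 1_000

print(f"Overall rate: {incidence_rate(cases_total, person_years_total):.1f} "
      "cases per 1,000 person-years")   # roughly 15, matching the article
```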

After adjusting for factors such as body mass index, physical activity levels, and family history of cardiovascular disease, researchers found that women who had migraine before menopause had a 29% increased risk of developing high blood pressure after menopause.

Researchers found the risk of developing high blood pressure was similar in women who had migraine with aura and in those who had migraine without aura.

"There are multiple ways in which migraine may be linked to high blood pressure," said Severi. "People with migraine have been shown to have early signs of arterial stiffness. Stiffer, smaller vessels are not as capable of accommodating blood flow, resulting in pressure increases. It is also possible that associations could be due to genetics. Since previous research shows migraine increases the likelihood of cardiovascular events, identification of additional risk factors such as the higher likelihood of high blood pressure among people with migraine could aid in individualized treatment or prevention. Doctors may want to consider women with a history of migraine at a higher risk of high blood pressure."

The study does not show that migraine causes high blood pressure after menopause. It only shows an association between the two.

A limitation of the study was that migraine was self-reported by the women and may have been misclassified. High blood pressure was also self-reported, meaning some cases may have been missed.

Credit: 
American Academy of Neurology

Bistable pop-up structures inspired by origami

image: This inflatable shelter is made of thick plastic sheets and can pop up or fold flat.

Image: 
(Image courtesy of Benjamin Gorissen/David Melancon/Harvard SEAS)

In 2016, an inflatable arch wreaked havoc at the Tour de France bicycle race when it deflated and collapsed on a cyclist, throwing him from his bike and delaying the race while officials scrambled to clear the debris from the road. Officials blamed a passing spectator's wayward belt buckle for the arch's collapse, but the real culprit was physics.

Today's inflatable structures, used for everything from field hospitals to sporting complexes, are monostable, meaning they need a constant input of pressure in order to maintain their inflated state. Lose that pressure and the structure returns to its only stable form -- flat.

But what if these structures had more than one stable state? What if the arch was just as stable inflated as it is flat on the ground?

Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed bistable inflatable structures inspired by origami.

The research is published in Nature.

"This research provides a direct pathway for a new generation of robust, large-scale inflatable systems that lock in place after deployment and don't require continuous pressure," said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS and senior author of the paper.

Inspired by origami and guided by geometry, the research team developed a library of triangular building blocks that can pop up or fold flat and be combined in different configurations to build closed, multistable shapes.

"We are relying on the geometry of these building blocks, not the material characteristics, which means we can make these building blocks out of almost any materials, including inexpensive recyclable materials," said Benjamin Gorissen, an associate in Materials Science and Mechanical Engineering at SEAS and co-first author of the paper.

Taking their design process to the real world, the researchers designed and built an 8-foot-by-4-foot inflatable shelter out of thick plastic sheets.

"You can imagine these shelters being deployed as part of the emergency response in disaster zone," said David Melancon, a PhD student at SEAS and co-first author of the paper. "They can be stacked flat on a truck and you only need one pressure source to inflate them. Once they are inflated, you can remove the pressure source and move onto the next tent."

The shelter can be set up by one or two people, as opposed to the dozen or so it takes to deploy today's military field hospitals.

The building blocks of these origami structures can be mixed and matched to create a structure of any shape or size. The researchers built a range of other structures, including an archway, an extendable boom and a pagoda-style structure. The researchers also designed shapes with more than two stable forms.

"We've unlocked an unprecedented design space of large-scale inflatable structures that can fold flat and maintain their deployed shape without the risk of catastrophic rupture," said Chuck Hoberman, the Pierce Anderson Lecturer in Design Engineering at the Graduate School of Design and co-author of the paper. "By using inflatable, reversible actuation to achieve hard-walled structural enclosures, we see important applications, not only here on Earth, but potentially as habitats for lunar or Mars exploration."

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

To design truly compostable plastic, scientists take cues from nature

image: A modified plastic (left) begins to break down after just three days in standard compost (right) and degrades entirely within two weeks.

Image: 
UC Berkeley photo by Ting Xu

Despite our efforts to sort and recycle, less than 9% of plastic gets recycled in the U.S., and most ends up in landfill or the environment.

Biodegradable plastic bags and containers could help, but if they’re not properly sorted, they can contaminate otherwise recyclable #1 and #2 plastics. What’s worse, most biodegradable plastics take months to break down, and when they finally do, they form microplastics – tiny bits of plastic that can end up in oceans and animals’ bodies – including our own.

Now, as reported in the journal Nature, scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have designed an enzyme-activated compostable plastic that could diminish microplastics pollution, and holds great promise for plastics upcycling. The material can be broken down to its building blocks – small individual molecules called monomers – and then reformed into a new compostable plastic product.

"In the wild, enzymes are what nature uses to break things down - and even when we die, enzymes cause our bodies to decompose naturally. So for this study, we asked ourselves, 'How can enzymes biodegrade plastic so it's part of nature?" said senior author Ting Xu , who holds titles of faculty senior scientist in Berkeley Lab's Materials Sciences Division, and professor of chemistry and materials science and engineering at UC Berkeley.

At Berkeley Lab, Xu - who for nearly 15 years has dedicated her career to the development of functional polymer materials inspired by nature - is leading an interdisciplinary team of scientists and engineers from universities and national labs around the country to tackle the mounting problem of landfilled plastic posed by both single-use and so-called biodegradable plastics.

Most biodegradable plastics in use today are made of polylactic acid (PLA), a vegetable-based plastic material blended with cornstarch. There is also polycaprolactone (PCL), a biodegradable polyester that is widely used for biomedical applications such as tissue engineering.

But the problem with conventional biodegradable plastics is that they're indistinguishable from single-use plastics such as plastic film - so a good chunk of these materials ends up in landfills. And even if a biodegradable plastic container gets deposited at an organic waste facility, it can't break down as fast as the lunch salad it once contained, so it ends up contaminating organic waste, said co-author Corinne Scown, a staff scientist and deputy director for the Research, Energy Analysis & Environmental Impacts Division in Berkeley Lab's Energy Technologies Area.

Another problem with biodegradable plastics is that they aren't as strong as regular plastic - that's why you can't carry heavy items in a standard green compost bag. The tradeoff is that biodegradable plastics can break down over time - but still, Xu said, they only break down into microplastics, which are still plastic, just a lot smaller.

So Xu and her team decided to take a different approach - by "nanoconfining" enzymes into plastics.

Putting enzymes to work

Because enzymes are part of living systems, the trick would be carving out a safe place in the plastic for enzymes to lie dormant until they're called to action.

In a series of experiments, Xu and co-authors embedded trace amounts of the commercial enzymes Burkholderia cepacia lipase (BC-lipase) and proteinase K within the PLA and PCL plastic materials. The scientists also added an enzyme protectant called four-monomer random heteropolymer, or RHP, to help disperse the enzymes a few nanometers (billionths of a meter) apart.

In a stunning result, the scientists discovered that ordinary household tap water or standard soil composts converted the enzyme-embedded plastic material into its small-molecule building blocks called monomers, and eliminated microplastics in just a few days or weeks.

They also learned that BC-lipase is something of a finicky "eater." Before a lipase can convert a polymer chain into monomers, it must first catch the end of a polymer chain. By controlling when the lipase finds the chain end, it is possible to ensure the materials don't degrade until being triggered by hot water or compost soil, Xu explained.

In addition, they found that this strategy only works when BC-lipase is nanodispersed - in this case, just 0.02 percent by weight in the PCL block - rather than randomly tossed in and blended.

"Nanodispersion puts each enzyme molecule to work - nothing goes to waste," Xu said.

And that matters when factoring in costs. Industrial enzymes can cost around $10 per kilogram, but this new approach would only add a few cents to the production cost of a kilogram of resin because the amount of enzymes required is so low - and the material has a shelf life of more than 7 months, Scown added.

The proof is in the compost

X-ray scattering studies performed at Berkeley Lab's Advanced Light Source characterized the nanodispersion of enzymes in the PCL and PLA plastic materials.

Interfacial-tension experiments conducted by co-author Tom Russell revealed in real time how the size and shape of droplets changed as the plastic material decomposed into distinct molecules. The lab results also differentiated between enzyme and RHP molecules.

"The interfacial test gives you information about how the degradation is proceeding," he said. "But the proof is in the composting - Ting and her team successfully recovered plastic monomers from biodegradable plastic simply by using RHPs, water, and compost soil."

Russell is a visiting faculty scientist and professor of polymer science and engineering from the University of Massachusetts who leads the Adaptive Interfacial Assemblies Towards Structuring Liquids program in Berkeley Lab's Materials Sciences Division.

Developing a very affordable and easily compostable plastic film could incentivize produce manufacturers to package fresh fruits and vegetables with compostable plastic instead of single-use plastic wrap - and as a result, save organic waste facilities the extra expense of investing in expensive plastic-depackaging machines when they want to accept food waste for anaerobic digestion or composting, Scown said.

Since their approach could potentially work well with both hard, rigid plastics and soft, flexible plastics, Xu would like to broaden their study to polyolefins, a ubiquitous family of plastics commonly used to manufacture toys and electronic parts.

The team's truly compostable plastic could be on the shelves soon. They recently filed a patent application through UC Berkeley's patent office. And co-author Aaron Hall, who was a Ph.D. student in materials science and engineering at UC Berkeley at the time of the study, founded UC Berkeley startup Intropic Materials to further develop the new technology.

"When it comes to solving the plastics problem, it's our environmental responsibility to take up nature on its path. By prescribing a molecular map with enzymes behind the wheel, our study is a good start," Xu said.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Swing vote 'trumped' turnout in 2016 election

Swing voters in battleground states delivered Donald Trump his unexpected victory in the 2016 presidential election, suggests a new study coauthored by Yale political scientist Gregory A. Huber.

The study, published on April 21 in the journal Science Advances, compares the outcomes of the 2012 and 2016 presidential elections in six key states: Florida, Georgia, Michigan, Nevada, Ohio, and Pennsylvania. The analysis merged voter turnout records of 37 million individuals with precinct-level election returns to determine the sources of Trump's electoral success. It examined the relative roles of conversion -- voters switching their support from one party to the other between elections -- and changes in the electorate's composition, which are driven by mobilization and variations in voter turnout.

The researchers found that conversion was the greater factor in four of the six states, including Florida and the pivotal Rust Belt states of Ohio, Michigan, and Pennsylvania. Overall, people switching their votes from Democrat to Republican more consistently explained the GOP's success in 2016 than did increased turnout by the party's base, they concluded.

"Despite increasing political polarization, a lot of voters aren't committed partisans and will cast ballots for a Democrat in one election and a Republican in the next," said Huber, the Forst Family Professor of Political Science in Yale's Faculty of Arts and Sciences. "Turnout certainly matters -- the parties benefit from mobilizing their bases -- but our study suggests that swing voters were a bigger factor in 2016."

Studying the sources of electoral change is challenging. The secret ballot prevents researchers from observing individuals' vote choices. At the same time, the composition of the electorate constantly changes as people move, become eligible to vote, or fall off the voter rolls. The absence of a centralized election administration in the United States presents another obstacle.

Survey data can offer some insight into voters' choices, but its reach is limited, Huber said.

"It's fairly easy to get committed Republicans or Democrats to tell you that they support their teams, but it's much harder to reach the people who aren't partisan or don't vote consistently," he said. "Those kinds of voters are important drivers of electoral change."

Huber and coauthors Seth J. Hill of the University of California, San Diego, and Daniel J. Hopkins of the University of Pennsylvania relied on public records to avoid the recruitment bias and other shortcomings of survey data. They gathered comprehensive lists of eligible voters from each state, allowing them to identify changes in voter turnout between 2012 and 2016. They matched the voter lists to election returns at the precinct level -- the smallest geographical unit for measuring vote counts -- to estimate the extent of conversion that occurred between 2012 and 2016 in each precinct.

Trump improved on Mitt Romney's 2012 performance in each state but Georgia. The researchers found that the balance between conversion and the electorate's composition varied by state, but their analysis clearly indicated that conversion more consistently explained the pro-GOP electoral change between the two elections. Trump outperformed Romney in precincts where the electorate's composition, or turnout, remained stable between 2012 and 2016 as well as in precincts where shifts in party registrations had favored the Democrats, according to the study.

The researchers found that conversion was especially relevant in Michigan, Ohio, and Pennsylvania -- the states with the largest swings in party margin between the two elections. For example, in the average Michigan precinct, Trump netted 101 votes over Romney's 2012 total. Changes in electorate composition increased the Democratic vote total by an estimated 102 votes. To net those 101 votes, Trump gained an estimated 203 votes from voters who had cast ballots for Barack Obama in 2012, the study found. In all, the composition effect in Michigan was estimated to be only half the conversion effect.

"In a sense, the difference between composition and conversion comes down to simple math," said Huber. "Mobilizing one additional voter adds a single vote to your margin but converting a swing voter adds one to your candidate's tally while subtracting another from your opponent's," Huber noted.

In Nevada and Georgia, the estimated compositional effects were 3 and 1.4 times larger, respectively, than the conversion effects. The Democrats' enhanced voter mobilization efforts in Georgia, which are credited with enabling Joe Biden's 2020 victory in the state, were already producing results in 2016, Huber explained.

"Georgia demonstrates the importance of voter mobilization," Huber said. "The Democrats had a massive expansion in registration and an enormous number of new voters entering the political system, which resulted in Trump losing votes relative to Mitt Romney. It set the stage for a surprising win in 2020."

Credit: 
Yale University

Improved management of farmed peatlands could cut 500 million tonnes of CO2

image: An eddy covariance tower measures CO2, water and energy fluxes at a drained grassland on lowland peat soil in East Anglia

Image: 
Picture: Alex Cumming

Substantial cuts in global greenhouse gas emissions could be achieved by raising water levels in agricultural peatlands, according to a new study in the journal Nature.

Peatlands occupy just three per cent of the world's land surface area but store a similar amount of carbon to all terrestrial vegetation, as well as supporting unique biodiversity.

In their natural state, they can mitigate climate change by continuously removing CO2 from the atmosphere and storing it securely under waterlogged conditions for thousands of years.

But many peatland areas have been substantially modified by human activity, including drainage for agriculture and forest plantations. This results in the release, from drained peatlands, of the equivalent of around 1.5 billion tonnes of carbon dioxide (CO2) into the atmosphere each year - which equates to three per cent of all global greenhouse gas (GHG) emissions caused by human activities.

A team of scientists, led by the UK Centre for Ecology and Hydrology (UKCEH), estimated the potential reduction in emissions by restoring all global agricultural peatlands. However, because large populations rely on these areas for their livelihoods, it may not be realistic to expect all agricultural peatlands to be fully rewetted and returned to their natural condition in the near future.

The team therefore also analysed the impact of halving current drainage depths in croplands and grasslands on peat - which cover over 250,000 km2 globally - and showed that this could still bring significant benefits for climate change mitigation. The study estimates this could cut emissions by around 500 million tonnes of CO2 a year, which equates to 1 per cent of all global GHG emissions caused by human activities.
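As a quick sanity check of these percentages (a rough sketch using only the figures quoted in this article), emissions of about 1.5 billion tonnes of CO2 representing roughly three per cent of anthropogenic greenhouse gas emissions imply a global total of around 50 billion tonnes of CO2-equivalent a year, so a 500-million-tonne saving is indeed about one per cent.

```python
# Rough arithmetic behind the percentages quoted above (CO2-equivalent, Gt/yr).
drained_peat_emissions = 1.5   # Gt CO2 per year from drained peatlands
share_of_global = 0.03         # stated as ~3% of anthropogenic GHG emissions

implied_global_emissions = drained_peat_emissions / share_of_global   # ~50 Gt/yr
saving = 0.5                   # Gt CO2 per year from halving drainage depths

print(f"Implied global GHG emissions: ~{implied_global_emissions:.0f} Gt CO2e/yr")
print(f"Saving as a share of global emissions: ~{saving / implied_global_emissions:.0%}")
```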

A large proportion of the global greenhouse gases from peatlands are produced in Europe and Southeast Asia, with the total land area of many countries including the UK now a net source, not a sink, of GHGs due to emissions from degraded peat.

The study's authors say there is a growing recognition of the significance of peatlands for the global climate system, with efforts to curb emissions by conservation of undrained peatlands and rewetting of drained sites intensifying.

Professor Chris Evans of UKCEH, who led the research, says: "Widespread peatland degradation will need to be addressed if the UK and other countries are to achieve their goal of net zero greenhouse gas emissions by 2050, as part of their contribution to the Paris climate agreement targets.

"Concerns over the economic and social consequences of rewetting agricultural peatlands have prevented large-scale restoration, but our study shows the development of locally appropriate mitigation measures could still deliver substantial reductions in emissions."

Professor Evans and his fellow authors recognise the practical challenges, for example controlling water levels and storage, as well as cultivating crops suited to the waterlogged conditions of peatlands, known as 'paludiculture'. Research into wetland-adapted crops is under way but does not yet provide commercially viable large-scale alternatives to conventional farming.

However, the scientists point out there is plenty of scope to partially rewet agricultural peatlands without severely affecting production because many sites are over-drained - sometimes to over two metres - and often when no crop is present.

In addition to increased emissions, drainage of peatlands causes land subsidence and soil compaction, which affects soil health and exposes low-lying areas to increasing flood risk. It also deprives rare wetland-adapted plants, insects and mammals of important habitats.

Professor Sue Page of the University of Leicester, a co-author of the study, says: "Our results present a challenge but also a great opportunity. Better water management in peatlands offers a potential 'win-win' - lower greenhouse gas emissions, improved soil health, extended agricultural lifetimes and reduced flood risk."

The scientists say potential reductions in greenhouse gases from halving the drainage depth in agricultural peatlands are likely to be greater than estimated, given they did not include changes in emissions of the GHG nitrous oxide (N2O) which, like levels of CO2, are also likely to be higher in deep-drained agricultural peatlands.

The study in Nature involved authors from UKCEH, the Swedish University of Agricultural Sciences, the University of Leeds, The James Hutton Institute, Bangor University, Durham University, Queen Mary University of London, University of Birmingham, University of Leicester, Rothamsted Research and Frankfurt University.

Credit: 
UK Centre for Ecology & Hydrology

Age at the menopause can be assessed using predictive modeling

image: A recent study revealed that higher estradiol and follicle-stimulating hormone levels, irregular menstrual cycles, and menopausal symptoms are strong indicators of approaching menopause in middle-aged women.

Image: 
University of Jyväskylä

The natural menopause occurs when the menstrual periods cease due to the naturally decreased ovary function. There is a significant interindividual variation in the age at natural menopause but, on average, women undergo it around the age of 51 in Western countries. Furthermore, the length of the preceding menopausal transition, characterized by irregular menstrual cycles and menopausal symptoms, is also known to vary between individuals.

The study revealed that higher estradiol and follicle-stimulating hormone levels, irregular menstrual cycles, and menopausal symptoms are strong indicators of approaching menopause in middle-aged women. Additionally, information related to life habits, such as physical activity, alcohol consumption, and smoking, may provide useful information for assessing the time to natural menopause.

After the menopause, women can no longer get pregnant naturally, but the age at menopause has also been associated with the risk of several public health concerns such as cardiovascular disease, cancer, and osteoporosis.

"The prediction of the age at natural menopause is beneficial for health promotion with middle-aged and elderly women but it could also be useful for women making decisions related to family planning and treatments for menopausal symptoms," explains doctoral researcher Matti Hyvärinen from the Gerontology Research Center and Faculty of Sport and Health Sciences, University of Jyväskylä.

The predictive models constructed in the study demonstrated adequate prediction accuracy with absolute errors of slightly over 6 months between the predicted and observed age at natural menopause in middle-aged women.
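For readers unfamiliar with the metric, the accuracy figure refers to the absolute difference between predicted and observed ages at menopause. The short sketch below, using entirely made-up numbers, shows how a mean absolute error of about half a year is computed.

```python
# Minimal sketch of the accuracy metric described above. The ages are
# hypothetical and serve only to illustrate the calculation.
observed  = [51.2, 49.8, 52.5, 50.1]   # observed ages at natural menopause (years)
predicted = [50.9, 50.4, 52.0, 50.7]   # model predictions (years)

errors = [abs(p - o) for p, o in zip(predicted, observed)]
mae_years = sum(errors) / len(errors)
print(f"Mean absolute error: {mae_years:.2f} years (~{mae_years * 12:.0f} months)")
```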

According to Hyvärinen, the results indicate that the approach used in the study may be used to develop a tool for assessing the age at natural menopause also for women in their 30s or early 40s in the future.

The study was carried out in the Gerontology Research Center and Faculty of Sport and Health Sciences at the University of Jyväskylä, Finland. It was part of the Estrogenic Regulation of Muscle Apoptosis (ERMA) study led by Academy Research Fellow Eija Laakkonen. The participants in the ERMA study were women between the ages of 47 and 55 living in the Jyväskylä region at baseline. Participants who were perimenopausal during the baseline measurements were invited to the follow-up study, which included laboratory visits every 3 to 6 months until the participant was categorized as postmenopausal. The research has been funded by the Academy of Finland, the European Commission, and the Juho Vainio Foundation.

Credit: 
University of Jyväskylä - Jyväskylän yliopisto

Great white feeding ground

image: Aaron Carlisle tags a salmon shark in a previous study examining shark travel patterns

Image: 
Photos courtesy of Jack Musick and Scot Anderson

Perhaps no other ocean creature lives in the human imagination like the great white shark. But while great white sharks might be plentiful in the minds of beachgoers across the country, there are only a handful of places in the world where white sharks can be consistently found. In those areas -- such as Central California, Guadalupe Island in Mexico, South Australia and South Africa -- they tend to be found aggregated in small hotspots, often located around seal colonies.

Researchers have estimated that white shark populations are incredibly small, with only hundreds of large adults and a few thousand white sharks total in any of their global populations. This has made protecting white sharks a conservation priority, with many countries, including the United States and Mexico, enacting laws to prevent the catching and killing of the species.

After uncovering a previously unknown white shark hotspot in the central Gulf of California, however, a new study involving University of Delaware assistant professor Aaron Carlisle suggests that these low numbers for eastern North Pacific white sharks, especially those in the Gulf of California, might be underestimates. In addition, the researchers found that the mortality rates for these white sharks might also be underestimated: an illicit fishery for the species was uncovered in the Gulf of California, suggesting that fishers were killing many more white sharks than previously understood.

The research findings were published in Conservation Letters. Daniel J. Madigan, of the Department of Integrative Biology at the University of Windsor in Ontario, Canada, served as the lead author on the paper and Carlisle, assistant professor in UD's School of Marine Science and Policy in the College of Earth, Ocean and Environment, served as a co-author on the paper.

Underestimated Mortality

For this study, Madigan interacted with a small group of local fishermen, and over several months that group killed about 14 large white sharks. Of these, almost half could have been mature females. This was a conservative estimate, as other groups reportedly killed additional sharks during this time.

To show just how significant this new source of mortality might be, Carlisle pointed to a National Oceanic and Atmospheric Administration (NOAA) endangered species act status review on white shark populations from 2012. Using the best available information at the time, the NOAA report estimated that the adult female mortality rate for the entire eastern Pacific was likely around two annually.

"He found, in just a two-week time period, more mortality in this one location than what we thought for the whole ocean," said Carlisle. "It was pretty clear then that, well, something kind of important is happening here."

Carlisle explained that the mortality estimate of the earlier NOAA study could have been off because calculating mortality for animals in the ocean -- figuring out how many die naturally or unnaturally -- is one of the most difficult population metrics to quantify.

What makes this finding particularly interesting is that this population of white sharks -- the eastern Pacific population of white sharks -- is perhaps the most well-studied group of sharks on the planet. Here, in the midst of all this scientific research, was a seemingly robust population of white sharks that had eluded scientific study.

"It's been about 20 years since a new 'population' of white sharks has been discovered," said Carlisle. "The fact that the eastern Pacific has so much infrastructure focused on white sharks and we didn't know that there were these sites in the Gulf of California was kind of mind-blowing."

Future steps

Now that the aggregation has been identified, Carlisle said that there are many more scientific questions that need to be answered.

There is a pressing need to study and quantify the population of sharks in the new aggregation site. In particular, it is unknown whether these sharks are part of the other known populations of white sharks in the eastern Pacific, which occur in Central California and at Guadalupe Island in Mexico, or whether they belong to a third, unknown population.

They are also interested in finding out how long the aggregation sites have been there and how long people have been fishing at the sites.

"One of the big points of this paper was to raise the red flag and let managers and scientists know that this is going on and this population is here and needs to be studied," said Carlisle. "Hopefully, it will be studied by some local researchers who are invested and working with the local fishing communities because these fishing communities are all heavily dependent on marine resources and fisheries."

Carlisle stressed that the researchers are not looking to cause problems for the local fishing communities that they worked with for the study.

Instead, perhaps there is an opportunity for these communities to benefit from these animals in other ways, through avenues like ecotourism, by showing them that these sharks are worth more and could provide a steadier stream of revenue alive than dead.

"This seems like it would be a perfect situation for ecotourism, much like there is at Guadalupe Island," said Carlisle. "There could be huge opportunities to build businesses around these populations of sharks, and that's just from a management point of view. From a science point of view, there's all sorts of fun things you could do."

Still, Carlisle said that more than anything, this paper highlights just how little we know about what is going on with sharks in the ocean.

"Even though we've studied these animals so much, we still know so little," said Carlisle. "How many fish are in the ocean is a very old but very hard question to answer."

Credit: 
University of Delaware

Brushing away oral health disparities in America's rural children

image: A little girl has her first visit to the dentist.

Image: 
Dave Buchwald, CC BY-SA 3.0 license, available at https://creativecommons.org/licenses/by-sa/3.0.

Meaningful legislation addressing health care inequities in the U.S. will require studies examining potential health disparities due to geographic location or economic status.

An interdisciplinary team at the Medical University of South Carolina (MUSC) and the University of South Carolina (UofSC) report in the Journal of Public Health Dentistry that rural children are less likely to receive preventive dental care than urban children. Using data from 20,842 respondents to the 2017 National Survey of Children's Health, the team determined that an urban-rural disparity exists in U.S. children's oral health, a disparity that can have serious consequences for children.

"When preventive care is delayed, the persistent exposure to things like refined sugars can promote bacterial growth," said Amy Martin, DPH, a professor in the College of Dental Medicine at MUSC and senior author of the article. "When that bacteria grows, it can exacerbate the cycle of decay, which can only be disrupted by visiting the dentist for regular cleanings and parent education."

The findings of this particular study are important to informing legislative change. Preparations for the 2020 Surgeon General's report on oral health are currently underway. The original report, released in 2000, was a pivotal call to action to prioritize oral health in the U.S. As a result of this report, significant pieces of legislation were passed that addressed the nation's inequities.

However, work still needs to be done to address inequities in oral health care for children. The National Advisory Committee on Rural Health and Human Services has also released oral health reports, either in response to or as a prelude to the Surgeon General's reports. In 2003, its report found that fewer rural children visited dentists annually than urban children. This statistic motivated the MUSC-UofSC team to investigate whether health disparities still existed between rural and urban children.

"There has not really been a comprehensive study on what's happening among rural children, compared to their urban counterparts, in the last five years," commented lead author Elizabeth Crouch, MD., Ph.D. , assistant professor and deputy director of the Rural and Minority Health Research Center in the Arnold School of Public Health at UofSC. "This article may influence policy decisions in the near future."

To understand oral health discrepancies between rural and urban children, the investigators used data from the 2017-2018 National Survey of Children's Health, a nationally representative sample of children from the U.S. Along with collecting information about geographic location, family income, oral health condition, family structure and race, the survey asked about access to oral health care: Had a participant's child received oral health care? Did the child regularly see a health care provider for preventive care such as checkups and dental cleanings?

The investigators found that rural children were less likely to have had a preventive dental visit than urban children. Furthermore, compared with urban kids, they were less likely to receive fluoride treatments or dental sealants, and their teeth were in poorer condition.

"It's really important for researchers to examine access, utilization and outcomes among racial ethnic minorities and vulnerable populations living in rural areas," said Crouch. "Unless we quantify these differences, there's no way to know what kinds of policies that we need to be advocating for."

Additionally, children who were uninsured or had caregivers with a high school education or less were less likely to have had preventive dental visits. These findings suggest that to improve preventive care in kids, policies should focus on increasing family income or dental insurance coverage.

Importantly, this study confirmed previous studies that found racial/ethnic minorities were less likely to receive preventive dental care than nonminorities.

"Ultimately, illuminating these disparities, both quantitatively and qualitatively, will inform policy that can eliminate these disparities in oral health care in rural children," said Crouch.

Credit: 
Medical University of South Carolina

Illuminating invisible bloody fingerprints with a fluorescent polymer

image: Fingerprint patterns made in blood are clearly visible on aluminum foil (left) and painted wood (right) when developed with a fluorescent polymer.

Image: 
Adapted from ACS Applied Materials & Interfaces 2021, DOI: 10.1021/acsami.1c00710

Careful criminals usually clean a scene, wiping away visible blood and fingerprints. However, prints made with trace amounts of blood, invisible to the naked eye, could remain. Dyes can detect these hidden prints, but the dyes don't work well on certain surfaces. Now, researchers reporting in ACS Applied Materials & Interfaces have developed a fluorescent polymer that binds to blood in a fingerprint -- without damaging any DNA also on the surface -- to create high-contrast images.

Fingerprints are critical pieces of forensic evidence because their whorls, loops and arches are unique to each person, and these patterns don't change as people age. When violent crimes are committed, a culprit's fingerprints inked in blood can be hard to see, especially if they tried to clean the scene. So, scientists usually use dyes to reveal this type of evidence, but some of them require complex techniques to develop the images, and busy backgrounds can complicate the analysis. In addition, some textured surfaces, such as wood, pose challenges for an identification. Fluorescent compounds can enhance the contrast between fingerprints and the surface on which they are deposited. However, to get a good and stable image, these molecules need to form strong bonds with molecules in the blood. So, Li-Juan Fan, Rongliang Ma and colleagues wanted to find a simple way to bind a fluorescent polymer to blood proteins so that they could detect clear fingerprints on many different surfaces.

The researchers modified a yellow-green fluorescent polymer they had previously developed by adding a second amino group, which allowed stable bonds to form between the polymer and blood serum albumin proteins. They dissolved the polymer and absorbed it into a cotton pad, which was placed on top of prints made with chicken blood on various surfaces, such as aluminum foil, multicolored plastic and painted wood. After a few minutes, they peeled off the pad, and then let it air-dry. All of the surfaces showed high contrast between the blood and background under blue-violet light and revealed details, including ridge endings, short ridges, whorls and sweat pores. These intricate patterns remained distinguishable even when the researchers contaminated the prints with mold and dust, and they lasted for at least 600 days in storage. In another set of experiments, a piece of human DNA remained intact after being mixed with the polymer, suggesting that any genetic material found after processing a print could still be analyzed to further identify a suspect, the researchers say.

Credit: 
American Chemical Society

A growing problem of 'deepfake geography': How AI falsifies satellite images

image: What may appear to be an image of Tacoma is, in fact, a simulated one, created by transferring visual patterns of Beijing onto a map of a real Tacoma neighborhood.

Image: 
Zhao et al., Cartography and Geographic Information Science

A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity.

Both images exemplify what a new University of Washington-led study calls “location spoofing.” The photos — created by different people, for different purposes — are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such “deepfake geography” could become a growing problem.

So, using satellite photos of three cities and drawing upon methods used to manipulate video and audio files, a team of researchers set out to identify new ways of detecting fake satellite photos, warn of the dangers of falsified geospatial data and call for a system of geographic fact-checking.

“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” said Bo Zhao, assistant professor of geography at the UW and lead author of the study, which published April 21 in the journal Cartography and Geographic Information Science. “The techniques are already there. We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it.”

As Zhao and his co-authors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times. That’s due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is. But some inaccuracies in maps are spoofs created by the mapmakers. The term “paper towns” describes discreetly placed fake cities, mountains, rivers or other features on a map to prevent copyright infringement. On the more lighthearted end of the spectrum, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of “Beatosu” and “Goblu,” a play on “Beat OSU” and “Go Blue,” because the then-head of the department wanted to give a shoutout to his alma mater while protecting the copyright of the map.

But with the prevalence of geographic information systems, Google Earth and other satellite imaging systems, location spoofing involves far greater sophistication, researchers say, and carries with it more risks. In 2019, the director of the National Geospatial-Intelligence Agency, the organization charged with supplying maps and analyzing satellite images for the U.S. Department of Defense, implied that AI-manipulated satellite images can be a severe national security threat.

To study how satellite images can be faked, Zhao and his team turned to an AI framework that has been used in manipulating other types of digital files. When applied to the field of mapping, the algorithm essentially learns the characteristics of satellite images from an urban area, then generates a deepfake image by feeding the learned characteristics onto a different base map — similar to how popular image filters can map the features of a human face onto a cat.

Next, the researchers combined maps and satellite images from three cities — Tacoma, Seattle and Beijing — to compare features and create new images of one city, drawn from the characteristics of the other two. They designated Tacoma their “base map” city and then explored how geographic features and urban structures of Seattle (similar in topography and land use) and Beijing (different in both) could be incorporated to produce deepfake images of Tacoma.

In the example below, a Tacoma neighborhood is shown in mapping software (top left) and in a satellite image (top right). The subsequent deepfake satellite images of the same neighborhood reflect the visual patterns of Seattle and Beijing. Low-rise buildings and greenery mark the “Seattle-ized” version of Tacoma on the bottom left, while Beijing’s taller buildings, which AI matched to the building structures in the Tacoma image, cast shadows — hence the dark appearance of the structures in the image on the bottom right. Yet in both, the road networks and building locations are similar.

 

The untrained eye may have difficulty detecting the differences between real and fake, the researchers point out. A casual viewer might attribute the colors and shadows simply to poor image quality. To try to identify a “fake,” researchers homed in on more technical aspects of image processing, such as color histograms and frequency and spatial domains.
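As a hedged illustration of one of those cues, the color histogram, the sketch below compares per-channel histograms of two images. The study's actual detection methods are more involved and also examine frequency- and spatial-domain statistics; the random arrays here merely stand in for real satellite imagery.

```python
# Sketch of a color-histogram comparison between two images (random stand-ins).
import numpy as np

def color_histogram(img: np.ndarray, bins: int = 32) -> np.ndarray:
    """Concatenated, normalized per-channel histograms of an HxWx3 uint8 image."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256), density=True)[0]
             for c in range(3)]
    return np.concatenate(hists)

def histogram_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """L1 distance between color histograms; larger values suggest a mismatch."""
    return float(np.abs(color_histogram(img_a) - color_histogram(img_b)).sum())

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
candidate = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
print(f"Histogram distance: {histogram_distance(reference, candidate):.4f}")
```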

Some simulated satellite imagery can serve a purpose, Zhao said, especially when representing geographic areas over periods of time to, say, understand urban sprawl or climate change. There may be a location for which there are no images for a certain period of time in the past, or in forecasting the future, so creating new images based on existing ones — and clearly identifying them as simulations — could fill in the gaps and help provide perspective.

The study’s goal was not to show that geospatial data can be falsified, Zhao said. Rather, the authors hope to learn how to detect fake images so that geographers can begin to develop the data literacy tools, similar to today’s fact-checking services, for public benefit.

“As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data,” Zhao said. “We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary,” he said.

Co-authors on the study were Yifan Sun, a graduate student in the UW Department of Geography; Shaozeng Zhang and Chunxue Xu of Oregon State University; and Chengbin Deng of Binghamton University.

For more information, contact Zhao at zhaobo@uw.edu.

Journal

Cartography and Geographic Information Science

DOI

10.1080/15230406.2021.1910075

Credit: 
University of Washington

Stem cell therapy promotes recovery from stroke and dementia in mice

image: Microscope images showing brain tissue that has been damaged by white matter stroke (left) and then repaired by the new glial cell therapy (right). Myelin (seen in red), is a substance that protects the connections between neurons and is lost due to white matter stroke. As seen at right, the glial cell therapy (green) restores lost myelin and improves connections in the brain.

Image: 
UCLA Broad Stem Cell Research Center/Science Translational Medicine

A one-time injection of an experimental stem cell therapy can repair brain damage and improve memory function in mice with conditions that replicate human strokes and dementia, a new UCLA study finds.

Dementia can arise from multiple conditions, and it is characterized by an array of symptoms including problems with memory, attention, communication and physical coordination. The two most common causes of dementia are Alzheimer's disease and white matter strokes -- small strokes that accumulate in the connecting areas of the brain.

"It's a vicious cycle: The two leading causes of dementia are almost always seen together and each one accelerates the other," said Dr. S. Thomas Carmichael, senior author of the study and interim director of the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA.

An estimated 5 million Americans have dementia. "And with the aging population, that number is going to skyrocket," Carmichael said.

Currently, there are no therapies capable of stopping the progression of white matter strokes or enhancing the brain's limited ability to repair itself after they occur. The new study, published in Science Translational Medicine, identifies a cell therapy that can stop the progressive damage caused by the disease and stimulate the brain's own repair processes.

The cells used in the therapy are a specialized type of glial cells, which are cells that surround and support neurons in the central nervous system. Carmichael and his collaborators evaluated the effects of their glial cell therapy by injecting it into the brains of mice with brain damage similar to that seen in humans in the early to middle stages of dementia.

"Upon injection, our cell therapy traveled to damaged areas of the brain and secreted chemicals called growth factors that stimulated the brain's stem cells to launch a repair response," said Dr. Irene Llorente, the paper's first author and an assistant research professor of neurology at the David Geffen School of Medicine at UCLA.

Activating that repair process not only limited the progression of damage, but it also enhanced the formation of new neural connections and increased the production of myelin -- a fatty substance that covers and protects the connections.

"Understanding the role that glia play in repairing white matter damage is a critically important area of research that needs to be explored," said Francesca Bosetti, a program director at the National Institutes of Health's National Institute of Neurological Disorders and Stroke, which supported the study. "These preliminary results suggest that glial cell-based therapies may one day help combat the white matter damage that many stroke and vascular dementia patients suffer every year."

The therapy was developed in collaboration with Bill Lowry, a UCLA professor of molecular, cell and developmental biology. The team used a method, previously discovered by Lowry, for quickly producing large numbers of glial cells by treating human induced pluripotent stem cells with a drug called deferoxamine. Induced pluripotent stem cells are derived from skin or blood cells that have been reprogrammed back to an embryonic stem cell-like state from which scientists can create an unlimited supply of any cell type.

In the future, if the therapy is shown to be safe and effective through clinical trials in humans, the researchers envision it becoming an "off-the-shelf" product, meaning that the cells would be mass manufactured, frozen and shipped to hospitals, where they could be used as a one-time therapy for people with early signs of white matter stroke.

That would set the treatment apart from patient-specific cell therapies, which are created using each individual patient's own cells. While patient-specific cell therapies are appealing because they do not require patients to take drugs to prevent their immune systems from rejecting the transplanted cells, they are also expensive and can take weeks or months to produce.

"The damage from white matter strokes is progressive, so you don't have months to spend producing a treatment for each patient," said Carmichael, who is also chair of neurology at the medical school. "If you can have a treatment that's already in the freezer ready to go during the window of time when it could be most effective, that's a much better option."

The brain is a particularly good target for off-the-shelf cell therapies because immune activity in the brain is highly controlled. That feature, known as immune privilege, allows donor cells or tissues that would be rejected by other parts of the body to survive for prolonged, even indefinite, periods.

Interestingly, the researchers found that even if they eliminated the injected cells a few months after they had been transplanted, the mice's recovery was unaffected. That's because the therapy primarily serves as a wake-up call to stimulate the brain's own repair processes.

"Because the cell therapy is not directly repairing the brain, you don't need to rely on the transplanted cells to persist in order for the treatment to be successful," Carmichael said.

The team is now conducting the additional studies necessary to apply to the Food and Drug Administration for permission to test the therapy in a clinical trial in humans.

Credit: 
University of California - Los Angeles Health Sciences

Flowering rooted in embryonic gene-regulation

Researchers at GMI - Gregor Mendel Institute of Molecular Plant Biology of the Austrian Academy of Sciences - and the John Innes Centre, Norwich, United Kingdom, determine that gene-regulatory mechanisms at an early embryonic stage govern the flowering behavior of Arabidopsis later in development. The paper is published in the journal PNAS.

How do early life events shape the ability of organisms to respond to environmental cues later in their life? Can such phenomena be explained at the mechanistic level? GMI group leader and co-corresponding author Michael Nodine counters these questions with a clear statement: "Our research demonstrates that gene-regulatory mechanisms established in early embryos forecast events that have major physiological consequences long after they are initiated."

What if springtime could last longer?

Developmental phase transitions are controlled by precise quantitative regulation of gene expression. Decades of research on the Arabidopsis floral repressor FLC (Flowering Locus C), which is produced by default in a plant embryo following fertilization, has revealed the involvement of multiple molecular pathways that regulate its expression levels. Ultimately, these pathways converge to set FLC expression levels such that flowering only occurs in response to favorable environmental cues. In other words, the regulation mechanisms ensure that plants overwinter before flowering, a process called "vernalization", as opposed to flowering multiple times a year (rapid cycle habit). However, the exact molecular interactions regulating FLC expression at specific developmental stages have remained poorly understood.

Early life decisions impact "flourishing" in adulthood

The team around GMI group leader Michael Nodine and Professor Dame Caroline Dean, group leader at the John Innes Centre, Norwich, UK, investigated the antagonistic functions of the FLC activator FRIGIDA (FRI) and repressor FCA (Flowering time control protein) at specific stages of Arabidopsis embryonic development. The researchers, including first author Michael Schon and co-author Balaji Enugutti, PhD student and post-doctoral researcher in the Nodine group at GMI, respectively, uncovered the embryonic mechanisms that determine the plant's flowering behavior. Namely, they found that FCA promotes the attachment of a poly-adenine (poly-A) tail near the transcription start site of the FLC mRNA, which produces the shorter and non-functional FLC protein. On the other hand, FRI promotes the attachment of the poly-A tail further downstream in the FLC mRNA, thus resulting in the longer and functional version of FLC. In addition, the team found that the maximal antagonistic effect of FRI against FCA takes place at the early heart stage of embryo development. FRI thus leads to increased FLC expression levels and, ultimately, ensures vernalization-dependent (delayed) flowering.

Setting the stage for blooming

With these findings, the researchers show that the FLC transcript is antagonistically regulated in a co-transcriptional manner (as the mRNA is being transcribed), and that these effects take place within an early developmental stage in the plant embryo. Additionally, they propose that the FLC antagonist FCA acts by establishing a specific chromatin state in the early embryonic developmental stages which later induces a rapid cycle flowering habit without vernalization. On the other hand, this repressed chromatin state is prevented by the FLC activator FRI within the early heart stage, thus maintaining an FLC high transcriptional state that persists in later developmental stages and leads to overwintering behavior. "Our findings demonstrate that opposing functions of co-transcriptional regulators at a very specific developmental stage set the quantitative expression state of FLC," states GMI group leader Michael Nodine before concluding: "Understanding how gene regulatory mechanisms established early in life can influence processes that occur much later is of general interest in animals and plants. Our findings will be of interest to researchers investigating RNA-mediated and epigenetic regulation of gene expression, as well as mechanisms controlling developmental phase transitions including flowering time."

Credit: 
Gregor Mendel Institute of Molecular Plant Biology

Outback radio telescope discovers dense, spinning, dead star

image: An artist's impression of a pulsar -- a dense and rapidly spinning neutron star sending radio waves into the cosmos.

Image: 
ICRAR / Curtin University.

Astronomers have discovered a pulsar--a dense and rapidly spinning neutron star sending radio waves into the cosmos--using a low-frequency radio telescope in outback Australia.

The pulsar was detected with the Murchison Widefield Array (MWA) telescope, in Western Australia's remote Mid West region.

It's the first time scientists have discovered a pulsar with the MWA, but they believe it will be the first of many.

The finding is a sign of things to come from the multi-billion-dollar Square Kilometre Array (SKA) telescope. The MWA is a precursor telescope for the SKA.

Nick Swainston, a PhD student at the Curtin University node of the International Centre for Radio Astronomy Research (ICRAR), made the discovery while processing data collected as part of an ongoing pulsar survey.

"Pulsars are born as a result of supernovae--when a massive star explodes and dies, it can leave behind a collapsed core known as a neutron star," he said.

"They're about one and a half times the mass of the Sun, but all squeezed within only 20 kilometres, and they have ultra-strong magnetic fields."

Mr Swainston said pulsars spin rapidly and emit electromagnetic radiation from their magnetic poles.

"Every time that emission sweeps across our line of sight, we see a pulse--that's why we call them pulsars," he said. "You can imagine it like a giant cosmic lighthouse."

ICRAR-Curtin astronomer Dr Ramesh Bhat said the newly discovered pulsar is located more than 3000 light-years from Earth and spins about once every second.

"That's incredibly fast compared to regular stars and planets," he said. "But in the world of pulsars, it's pretty normal."

Dr Bhat said the finding was made using about one per cent of the large volume of data collected for the pulsar survey.

"We've only scratched the surface," he said. "When we do this project at full-scale, we should find hundreds of pulsars in the coming years."

Pulsars are used by astronomers for several applications including testing the laws of physics under extreme conditions.

"A spoonful of material from a neutron star would weigh millions of tonnes," Dr Bhat said.

"Their magnetic fields are some of the strongest in the Universe--about 1000 billion times stronger than that we have on Earth."

"So we can use them to do physics that we can't do in any of the Earth-based laboratories."

Finding pulsars and using them for extreme physics is also a key science driver for the SKA telescope.

MWA Director Professor Steven Tingay said the discovery hints at a large population of pulsars awaiting discovery in the Southern Hemisphere.

"This finding is really exciting because the data processing is incredibly challenging, and the results show the potential for us to discover many more pulsars with the MWA and the low-frequency part of the SKA." 

"The study of pulsars is one of the headline areas of science for the multi-billion-dollar SKA, so it is great that our team is at the forefront of this work," he said.

Credit: 
International Centre for Radio Astronomy Research

Identification of the wettability of graphene layers at the molecular level

image: Macroscopic observation of the water contact angle (WCA) shows that increasing the number of graphene layers results in a higher WCA, hinting at the hydrophobicity of multilayer graphene.

Image: 
Institute for Basic Science

Graphene is a two-dimensional material in which carbon atoms are arranged in hexagonal structures, and it has unique physical and chemical properties such as sub-nanometer thickness, chemical stability, mechanical flexibility, electrical and thermal conductivity, optical transparency, and selective permeability to water. Due to these properties, various applications of graphene in transparent electrodes, desalination, electrical energy storage, and catalysts have been vigorously studied.

Because graphene is an extremely thin material, for practical uses it has to be deposited on top of other materials that serve as a substrate. One question of great scientific interest is how graphene on a substrate interacts with water. Wettability is the ability of interfacial water to maintain contact with a solid surface, and it depends on the material's hydrophobicity. Unlike most materials, the wettability of graphene varies depending on the type of substrate. More specifically, the wettability of the substrate is only weakly affected by the presence of a single graphene layer on its surface. This peculiar behavior has been described as "wetting transparency", because the thin graphene layer has little effect on the substrate-water interaction: the substrate's wetting properties show through the graphene.

There have been numerous water contact angle (WCA) measurements to study the wettability of graphene on various types of substrates. WCA is a commonly used measure of a material's hydrophobicity, since the contact angle between a water droplet and the material increases as the material becomes more hydrophobic. These studies have hinted that while a graphene monolayer is largely wetting-transparent, graphene becomes increasingly hydrophobic as the number of layers increases. However, WCA measurements can only provide information on the macroscopic properties of the graphene-water interface; they cannot give a detailed picture of the interfacial water itself. Furthermore, techniques such as Raman spectroscopy or reflection-based infrared spectroscopy, which are commonly used to measure microscopic properties, cannot selectively observe the interfacial water molecules, because their vibrational spectroscopic signal is completely masked by the huge signal from bulk water. As a result, it is not entirely surprising that there has been a dearth of molecular-level studies in this area of graphene research.

Recently, a research team at the Center for Molecular Spectroscopy and Dynamics (CMSD) within the Institute for Basic Science (IBS) in Seoul, South Korea, and Korea University revealed the origin of the wettability of graphene. The team succeeded in observing the hydrogen-bond structure of water molecules at graphene-water interfaces using a technique called vibrational sum-frequency generation spectroscopy (VSFG). VSFG is a second-order nonlinear spectroscopy that can be used to selectively analyze molecules with broken centrosymmetry. It is an ideal method for studying the behavior and structure of water molecules at the graphene interface, since water molecules in the bulk liquid are not visible due to the isotropic distribution of their molecular orientations.

The research team observed the VSFG spectra of water molecules on multilayer graphene covering a calcium fluoride (CaF2) substrate, which allowed them to track changes in the hydrogen-bond structure of the water. When there were four or more layers of graphene, a characteristic peak at ~3,600 cm-1 started to appear in the VSFG spectra. This peak corresponds to water molecules with dangling -OH groups that do not form hydrogen bonds with neighboring water molecules, a feature commonly found for water at hydrophobic interfaces. This result is the first observation of the molecular-level structure of water at the water-graphene interface.

In addition, the researchers compared a wettability value calculated from the measured VSFG spectra with the adhesion energy estimated from the measured WCAs, and found that the two quantities are highly correlated. This suggests that VSFG could be an incisive tool for studying the wettability of two-dimensional materials at the molecular level. It also points to the possibility of using VSFG to estimate the adhesion energy of water on buried surfaces, where measuring the water contact angle is difficult or even impossible.
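The press release does not spell out how the adhesion energy was obtained from the contact angles, but a standard textbook route (an assumption here, not a detail from the study) is the Young-Dupre relation, in which the work of adhesion per unit area is W = γ(1 + cos θ), with γ the surface tension of water and θ the contact angle. A minimal sketch:

    # Young-Dupre relation: adhesion energy per unit area from a contact angle.
    # The contact angles below are hypothetical, not values from the study.
    import math

    GAMMA_WATER = 72.8e-3   # N/m, surface tension of water near room temperature

    def work_of_adhesion(theta_deg: float) -> float:
        """Adhesion energy in mJ/m^2 for a given water contact angle in degrees."""
        return GAMMA_WATER * (1.0 + math.cos(math.radians(theta_deg))) * 1e3

    for theta in (60, 75, 90):   # hypothetical WCAs for increasing layer counts
        print(f"WCA {theta:3d} deg -> adhesion energy {work_of_adhesion(theta):5.1f} mJ/m^2")

The larger the contact angle (the more hydrophobic the surface), the smaller the adhesion energy, which is why a macroscopic WCA series can be compared directly against the spectroscopically derived wettability.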

The first and second authors, KIM Donghwan and KIM Eunchan, note: "This study is the first case describing the increasing hydrophobicity of the graphene surface at a molecular level depending on the number of graphene layers," and "Vibrational sum-frequency generation spectroscopy could be used as a versatile tool for understanding the properties of any functional two-dimensional materials."

Prof. CHO Minhaeng, the Director of CMSD, notes: "For applications where graphene is utilized in aqueous solution, the hydrophobicity of the interface is one of the key factors determining the efficiency of the graphene layers. This research is expected to provide basic scientific knowledge for the optimal design of graphene-based devices in the future."

Credit: 
Institute for Basic Science

Environmental DNA and RNA may be key in monitoring pathogens such as SARS-CoV-2

Real-world disease and parasite monitoring is often hampered by the inability of traditional approaches to easily sample broad geographical areas and large numbers of individuals. This can result in patchy data that fall short of what researchers need to anticipate and address outbreaks. Writing in BioScience, Jessica Farrell (University of Florida), Liam Whitmore (University of Limerick), and David Duffy (University of Florida) describe the promise of novel molecular techniques to overcome these shortcomings.

By sampling environmental DNA and RNA (eDNA and eRNA), the authors say, researchers will be better able to determine the presence of both human and wildlife pathogens. The eDNA and eRNA approach works through the collection of a sample (often from an aquatic source), whose genetic contents are then sequenced to reveal the presence and prevalence of pathogens. This eDNA or eRNA gives researchers a timely view into disease spread, which "can help predict the spread of pathogens to nearby new and susceptible geographic locations and populations in advance, providing opportunities to implement prevention and mitigation strategies," say the authors.

For instance, during the COVID-19 pandemic, researchers have used eRNA analysis of wastewater to track large-scale outbreaks of disease, finding that "wastewater detection of SARS-CoV-2 eRNA increased rapidly prior to medical detection of human outbreaks in those regions, with environmental virus concentration peaking at the same time or before the number of human-detected cases, providing advanced warning of a surge in infected individuals." With this advance knowledge, crucial and limited medical resources can be provisioned where they will be most needed.
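The "advanced warning" described here boils down to a lead-lag relationship between two time series: the wastewater virus concentration and the clinically detected case counts. One hypothetical way to quantify that lead time is to find the shift at which the two series correlate most strongly; the sketch below uses invented synthetic data and is not an analysis from the article.

    # Hypothetical lead-time estimate: find the shift (in days) at which a
    # wastewater eRNA signal best predicts later case counts.
    # Both series are synthetic and purely illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    days = np.arange(60)

    # Synthetic outbreak in which the wastewater signal peaks ~7 days earlier.
    wastewater = np.exp(-((days - 25) ** 2) / 50) + 0.05 * rng.normal(size=days.size)
    cases = np.exp(-((days - 32) ** 2) / 50) + 0.05 * rng.normal(size=days.size)

    def lagged_corr(a, b, lag):
        """Correlation of a(t) with b(t + lag)."""
        if lag == 0:
            return np.corrcoef(a, b)[0, 1]
        return np.corrcoef(a[:-lag], b[lag:])[0, 1]

    best_lag = max(range(0, 15), key=lambda k: lagged_corr(wastewater, cases, k))
    print(f"wastewater signal leads reported cases by about {best_lag} days")

In practice the same idea is applied to real surveillance data, with appropriate normalization (for example for sewage flow and catchment population) before any lead time is read off.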

The benefits of eDNA and eRNA analysis are not limited to the detection of human pathogens; the authors describe the ways in which these tools also help in understanding the presence and transmission of pathogens that hamper wildlife conservation efforts, such as the turtle-specific DNA virus, chelonid herpesvirus 5. eDNA monitoring of this pathogen may help researchers evaluate the disease's spread--in particular, the idea that the virus is most frequently transmitted by "superspreader" individuals.

The future for these technologies is bright, say Farrell, Whitmore, and Duffy, "with the potential to vastly exceed traditional detection methods and the capacity to improve the detection and monitoring of aquatic pathogens and their vulnerable host species, including humans."

Credit: 
American Institute of Biological Sciences