Culture

Feeling hot and bothered? It's complicated

image: Long-term trends of thermal discomfort, a measure of stress and discomfort induced by high temperatures and humidity, indicate that discomfort levels are actually decreasing in Saudi Arabia, despite rising global temperatures.

Image: 
© 2021 Morgan Bennett Smith

Rising temperatures are increasingly affecting the quality of life in many regions, setting new challenges for architects, urban planners and healthcare systems. Researchers at KAUST have analyzed discomfort due to outdoor heat across Saudi Arabia and neighboring regions to help understand and combat the problem.

"Living conditions in the Kingdom have been particularly affected by the changing climate," says Hari Dasari, first author of the paper. He also emphasizes the unique challenges facing the Hajj pilgrimage visits by several million people each year. Between 2014 and 2018, the Hajj occurred in summer months when the average temperature often exceeded 40 degrees Celsius with 80 percent humidity.

The team examined the variability and trends in a measure called the thermal discomfort index (DI), computed from temperature and humidity records collected from 1980 to 2018. The DI evaluates how these two factors combine to cause heat stress and discomfort.
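The paper's exact formula is not reproduced here, but a widely used formulation is Thom's discomfort index, which combines air temperature and relative humidity. The short Python sketch below illustrates that common version of the index; the KAUST team's implementation may differ.

```python
def thom_discomfort_index(temp_c: float, rel_humidity_pct: float) -> float:
    """Thom's discomfort index (DI) from air temperature (deg C) and relative
    humidity (%). A common formulation, shown for illustration; the study may
    use a different variant."""
    return temp_c - 0.55 * (1.0 - 0.01 * rel_humidity_pct) * (temp_c - 14.5)

# Example: the Hajj-season conditions cited in the article (40 deg C, 80% relative humidity)
print(round(thom_discomfort_index(40.0, 80.0), 1))  # ~37.2, far above commonly cited discomfort thresholds
```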

Surprisingly, most cities in Saudi Arabia recorded an improvement in DI levels during the past 20 years, but significant exceptions were Yanbu, Makkah, Medina and Taif. The danger of increasing DI values in the Makkah region was confirmed by examining clinical records, which suggest a correlation with heat-related deaths during the Hajj pilgrimage.

"Many of us expected that the rising temperatures due to global warming and rapid growth in urbanization during recent decades should have reduced human comfort levels over KSA," says Dasari. A valuable insight from the research is the discovery that the situation is more complicated, with many regional variations worthy of further exploration.

Increased heat discomfort levels were concentrated mainly in neighboring regions of the Arabian Gulf, including the United Arab Emirates, Oman and Qatar.

Ibrahim Hoteit, leader of the research group, says the findings will help regional authorities, engineers and architects to plan the most effective developments in infrastructure, building design and healthcare interventions to improve safety and comfort across the region.

"We now plan to develop an atlas of DI values with risk maps indicating the trends through an online interactive visualization and analysis interface that provides real-time access for nonexpert users, and also a forecasting system to support the management of various outdoor activities and minimize health-related chronic symptoms." says Hoteit.

The team expects that further development of this research will also be critically important for supporting several large-scale projects currently being developed in Saudi Arabia, including the city of NEOM and the Red Sea Project and AMAALA tourism initiatives.

Credit: 
King Abdullah University of Science & Technology (KAUST)

Tracking RNA through space and time

image: One-cell zebrafish embryo: The MDC research lab found numerous localized genes at this early stage of development. Much of their genetic information flows into the precursor cells of the later germ cells.

Image: 
AG Junker, MDC

The "miracle of life" is most obvious at the very beginning: When the fertilized egg cell divides by means of furrows into blastomeres, envelops itself in an amniotic sac, and unfolds to form germ layers. When the blastomeres begin to differentiate into different cells - and when they eventually develop into a complete organism.

"We wanted to find out whether the later differences between the various cells are already partly hard-wired into the fertilized egg cell," says Dr. Jan Philipp Junker, who heads the Quantitative Developmental Biology Lab at the Berlin Institute for Systems Biology (BIMSB) of the Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC). Junker and his team are investigating how cells make decisions and what dictates whether they become nerve, muscle, or skin cells. This involves creating cell lineage trees that allow them to determine the lineage and cell type of thousands of individual cells from an organism. Using these lineage trees, they can understand how and by what mechanisms cells come together to form a functioning organism or how they respond to perturbations.

Blueprints for different cell types already exist in the one-cell embryo

Yet this search for clues by means of cell lineage trees begins at a later stage - namely, when cell division and differentiation are already under way. What's more, the observations cover long time periods. In their current study, which has just been published in the journal "Nature Communications", Junker and his team focus on a very short time period: the first hours after fertilization, from the one-cell stage to gastrulation, the formation of the embryo's germ layers. The scientists wanted to know whether the one-cell embryo already contains parts of the blueprint for the multitude of different cell types that later develop from it. To do this, they studied zebrafish and clawed frog embryos. Researchers had previously succeeded in finding individual genes whose RNA is localized at specific sites within one-cell zebrafish embryos. The Berlin scientists have now shown that there are many more such genes. "We have discovered ten times more genes whose RNA is spatially localized in the fertilized egg cell than previously known," explains Karoline Holler, lead author of the study. "Many of these RNA molecules are later transported into the primordial germ cells. This means that the program for subsequent cell differentiation is hard-wired into the fertilized egg cell."

New approaches in transcriptomics

State-of-the-art methods of single-cell transcriptomics provide a good understanding of cell differentiation. Scientists order individual cells according to the similarity of their transcriptome - the complete collection of RNA molecules present in a cell - and can use the patterns that emerge to decipher how the cells became what they are. However, they cannot use this method to reconstruct the earliest stages of embryonic development, because here the spatial arrangement of RNA molecules is crucial. Junker's team instead used a specialized technique called tomo-seq, which Junker developed at the Hubrecht Institute in the Netherlands in 2014. It enables scientists to spatially track RNA molecules within the cell. This is achieved by cutting embryos of the model organisms into thin slices. It is then possible to read the RNA profiles on the cut surfaces and convert them into spatial expression patterns. Holler refined the tomo-seq technique so that it can now measure the spatial distribution of the transcriptome within the fertilized egg cell.
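As a rough illustration of the logic behind tomo-seq (and not the lab's actual analysis pipeline), the data can be thought of as a slices-by-genes count matrix, and a gene counts as spatially localized when its reads concentrate in a few adjacent slices. The hypothetical NumPy sketch below, with made-up numbers, scores that concentration with a simple entropy measure.

```python
import numpy as np

# Hypothetical tomo-seq-style data: rows = consecutive slices along the embryo,
# columns = genes, values = read counts (illustrative numbers only).
counts = np.array([
    [120,  5, 30],
    [110,  4, 28],
    [ 90, 80, 31],
    [ 95, 85, 29],
    [100,  6, 30],
], dtype=float)

# Correct for sequencing depth: divide each slice (row) by its total reads.
depth_norm = counts / counts.sum(axis=1, keepdims=True)

# Per-gene spatial profile: normalize each gene (column) to sum to 1 across slices.
profiles = depth_norm / depth_norm.sum(axis=0, keepdims=True)

# A gene whose reads pile up in a few slices has low entropy across slices;
# subtracting from the maximum entropy gives a simple localization score.
entropy = -(profiles * np.log(profiles + 1e-12)).sum(axis=0)
localization = np.log(counts.shape[0]) - entropy  # 0 = uniform, larger = more localized
print(localization.round(3))  # the second gene, concentrated in slices 3-4, scores highest
```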

The scientists used another new technique to study which localized genes later contribute to which cells. "We labeled the RNA molecules so as to be able to track them over different developmental stages. This allows us to observe the RNA not only in space but also over time," explains Junker. In this way, the scientists can distinguish the RNA transferred to the embryo by the mother from the RNA produced by the embryo itself. This RNA labeling method, called scSLAM-seq, was fine-tuned at BIMSB in the labs of Professor Markus Landthaler and Professor Nikolaus Rajewsky, enabling it to be applied in living zebrafish. "Labeling RNA molecules allows us to measure with high precision how gene expression changes in individual cells, for example, after an experimental intervention," explains Junker.

How do drugs affect cell differentiation?

RNA labeling opens up completely new avenues for studying such things as the mechanism of action of drug therapies. "We can use it in organoids to investigate how different cell types respond to substances," explains the physicist. The method, Junker says, is not suitable for long-term processes of change. "But we can see which genes change within five to six hours after treatment, providing a pathway to understanding how we might influence cell differentiation."

Spatial analysis also has medical relevance: Looking further into the future, it could be useful for studying those diseases that result from mislocalized RNA, such as cancer or neurodegenerative diseases. In such diseases a large number of molecules are transported through the cell. "If we understand these transport processes, then we may be able to identify risk factors for these diseases," explains Holler. But, for now, that is a long way off. "There is still much work to be done before the one-cell zebrafish embryo can be used as a model system for studying human neurodegenerative diseases," stresses Junker.

The scientists next want to uncover the mechanisms involved in RNA localization: How does the detected RNA differ from other transcripts in the cell? Junker's team plans to work with Professor Irmtraud Meyer's lab at BIMSB to characterize the sequence features of the localized RNA. With the help of algorithms, they hope to predict whether the localized genes share a two- or three-dimensional fold. They are also working on further developing their method so that it can be used in systems other than the one-cell zebrafish embryo.

Credit: 
Max Delbrück Center for Molecular Medicine in the Helmholtz Association

Food systems offer huge opportunities to cut emissions, study finds

image: Greenhouse-gas contributions from various parts of the global food system.

Image: 
From Tubiello et al., Environmental Research Letters 2021

A new global analysis says that greenhouse-gas emissions from food systems have long been systematically underestimated--and points to major opportunities to cut them. The authors estimate that activities connected to food production and consumption produced the equivalent of 16 billion metric tons of carbon dioxide in 2018--one third of the human-produced total, and an 8 percent increase since 1990. A companion policy paper highlights the need to integrate research with efforts to reduce emissions. The papers, developed jointly by the UN Food and Agriculture Organization, NASA, New York University and experts at Columbia University, are part of a special issue of Environmental Research Letters on sustainable food systems.

The Center on Global Energy Policy has also produced a detailed guide to food systems and climate, and a related video, both geared to the general public.

The lead author of the analysis, Francesco Tubiello, heads the environment statistics unit at FAO. He said the study shows that food production represents a "larger greenhouse-gas mitigation opportunity than previously estimated, and one that cannot be ignored in efforts to achieve the Paris Agreement goals." He said emissions inventories that countries currently report to the United Nations Framework Convention on Climate Change poorly characterize food systems, and underestimate their contribution to climate change.

The study provides country-level datasets that are being refined ahead of the UN's Food Systems Summit, to be held in July. It considers emissions linked not just to production of livestock and crops, but from land-use changes at the boundary between farms and natural ecosystems, and from related manufacturing, processing, storage, transport and waste disposal.

The companion policy piece calls for better scientific understanding of the processes through which greenhouse gases are emitted from all phases of food production and consumption. It says that the food system has a major role to play in mitigating climate change. The lead author of that paper, Cynthia Rosenzweig of Columbia University's Earth Institute and the NASA Goddard Institute for Space Studies, said, "Science and policy domains have often been siloed in academia. We propose a 'double helix' of interactive research by scientists and policy experts that can deliver significant benefits for both climate change and the food system."

"The food system and the climate system are deeply intertwined," said coauthor David Sandalow, a fellow at Columbia's Center on Global Energy Policy. "Better data can help lead to better policies for cutting emissions and protecting the food system from a changing climate."

Programs and policies to mitigate climate change must consider the impact on the more than 500 million smallholder households around the world, say the authors. This issue is particularly acute in the least-developed countries, where relatively larger shares of the population rely on agriculture for their livelihoods, they say.

"To achieve a net-zero future, we need to understand better the interplay between the food system and emissions in developing countries where populations are growing, poverty is diminishing, and incomes are rising," said Philippe Benoit, an adjunct senior research scholar at the Center on Global Energy Policy.

One emergent theme: optimal mitigation strategies will require a focus on activities both before and after farm production, ranging from the industrial production of fertilizers to refrigeration at the retail level. Emissions from these activities are growing fast.

"Agriculture in developed countries emits large quantities of greenhouse gases, but their share can be obscured by large emissions from other sectors like electricity, transportation and buildings," said Matthew Hayek, an assistant professor in environmental studies at New York University and coauthor of both pieces. "Looking at the entire food system can not only illuminate opportunities to reduce emissions from agriculture, but also improve efficiency across the whole supply chain with technologies such as refrigeration and storage."

The study found that while total food-systems emissions rose from 1990 to 2018, growing populations and changing technologies meant that per capita emissions actually decreased, from the equivalent of 2.9 metric tons to 2.2 metric tons per person. But per capita emissions in developed countries, at 3.6 metric tons per person in 2018, were nearly twice those in developing countries.
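As a rough sanity check of those figures (using an assumed 2018 world population of about 7.6 billion, which is not a number from the study), the totals are consistent:

```python
total_co2e_tons = 16e9      # 2018 food-system emissions reported in the study, tons CO2-equivalent
world_population = 7.6e9    # assumed 2018 world population (approximate; not from the study)
print(total_co2e_tons / world_population)  # about 2.1 tons per person, close to the reported 2.2
```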

The conversion of natural ecosystems to agricultural croplands or pastures remained the largest single emissions source over the study period, at nearly 3 billion metric tons per year. But it declined significantly over time, by over 30 percent, possibly in part because we are running out of land to convert.

On the other hand, global emissions from domestic food transportation have increased by nearly 80 percent since 1990, to 500 million tons in 2018. Those emissions have nearly tripled in developing countries. And emissions generated by food system energy use, largely carbon dioxide from fossil fuels along the supply chain, amounted to over 4 billion tons in 2018, an increase of 50 percent since 1990.

Credit: 
Columbia Climate School

'Camouflage breakers' can find a target in less than a second

image: Dr. Jay Hegdé and first author Fallon Branch.

Image: 
Michael Holahan, Augusta University

After looking for just one-twentieth of a second, experts in camouflage breaking can accurately detect not only that something is hidden in a scene, but precisely identify the camouflaged target, a skill set that can mean the difference between life and death in warfare and the wild, investigators report.

They can identify a camouflaged target as fast and as accurately as individuals identifying far more obvious "pop-out" targets, similar to the concept used at a shooting range, but in this case with easy-to-spot scenarios such as a black O-shaped target among a crowd of black C shapes.

In fact, the relatively rapid method developed by Medical College of Georgia neuroscientist Dr. Jay Hegdé and his colleagues for training civilian novices to become expert camouflage breakers also enabled the trainees to sense that something was amiss even when there was no specific target to identify.

This intuitive sense that something is not quite right has also been found in experienced radiologists finding subtle changes in mammograms, sometimes years before there is a detectable lesion.

The MCG investigators who developed the camouflage breaking technique wanted to know if trainees could detect the actual camouflaged target or just sense that something is different, an issue that is highly significant in real world circumstances, where a sniper might be hiding in the desert sand or a dense forest landscape.

"Merely being able to judge, no matter how accurately, that the given combat scene contains a target is not very useful to a sniper under real-world combat conditions if he/she is unable to tell where the target is," Hegdé and his colleagues write in the journal Cognitive Research: Principles and Implications.

They already knew that they could train most nonmilitary individuals off the street to break camouflage with as little as an hour of training daily for two weeks, as long as their vision is good, a finding they hope will benefit military personnel.

"We want to hide our own personnel and military material from the enemy and we want to break the enemy's camouflage," says Hegdé, goals that summarize his research, which has been funded by the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command Army Research Laboratory, for nearly a decade. "What are the things we can tweak? What are the things we can do to make our snipers better at recognizing camouflage?"

A missed shot also reveals the sniper's location to the enemy. "You can't take shots at things that are not the target," Hegdé says.

"The potential for rapid training of novices in the camouflage-breaking paradigm is very promising as it highlights the potential for application to a wide variety of detection and localization tasks," says Dr. Frederick Gregory, program manager, U.S. Army Combat Capabilities Development Command Army Research Laboratory. "Results in experts highlight an opportunity to extend the training to real world visual search and visualization problems that would be of prime importance for the Army to solve."

For this newly published work, six adult volunteers with normal or corrected-to-normal vision were trained to break camouflage using Hegdé's deep-learning method, but received no specific training about how to pinpoint the target. Participants looked at digitally synthesized camouflage scenes, like foliage or fruit, and each scene had a 50-50 chance of containing either no target or a camouflaged target such as a human head or a novel 3D digital object. Much as computer scientists train self-driving cars, the idea is to have viewers learn the lay of the land that is their focus. "If it turns out there is something that doesn't belong there, you can tell," he says.

Trainees could then either look at the image for 50 milliseconds (0.05 seconds) or for as long as they wanted, then proceed to the next step, where they briefly viewed a random field of pixels that works like a visual palate cleanser, before indicating whether the camouflage image contained a target and then using a mouse to show where the target was. "You have to work from memory to say where it was," he notes.

When the participants could look at the image for as long as they wanted, the reported location was essentially indistinguishable from the actual target location, and accuracy did not drop much when the viewing time was just 50 milliseconds, which leaves little time even to move your eyes around, Hegdé says.

The subjects had received no specific training on identifying precisely where the target was, yet the researchers found that, even without it, the subjects could detect and localize targets equally well. "This was not a given," Hegdé notes.

In a second experiment with seven different individuals, the researchers used a much-abbreviated training process, which basically ensured participants knew which buttons to push when, and replaced the traditional camouflage background with clearly more pronounced "pop-out" targets, such as that black O-shaped target among a crowd of black C shapes or a blue S shape among a sea of green H shapes. At both the longer and shorter viewing times, results were essentially identical to those of the more extensively trained camouflage breakers, in both localization accuracy and reaction time.

Camouflage is used extensively by the military, from the deserts of the Middle East to the dense jungles of South America with the visual texture changing to blend with the natural environment. "You often are recognized by your outline, and you use these patterns to break up your outline, so the person trying to break your camouflage doesn't know where you leave off and the background begins," he says.

He notes that context is another important factor for recognition, referencing how you may not recognize a person whose face you have seen multiple times when you encounter them in a different environment. His current Army-funded studies include exploring more about the importance of context, and further exploring the ramifications of "camouflage breaking" in identifying medical problems.

He notes that even with his training, some people are better at breaking camouflage than others -- he says he is really bad at it -- and why remains mostly a mystery and another learning point for Hegdé and his colleagues.

Credit: 
Medical College of Georgia at Augusta University

Study identifies major barriers to financing a sustainable ocean economy

image: Summary of major capital types, level of risk vs. return for each capital type and the key providers of these types. (Fig. 1)

Image: 
Patricia Tiffany Angkiriwang

Financing a sustainable global ocean economy may require a Paris Agreement type effort, according to a new report from an international team of researchers led by the University of British Columbia.

That's because a significant increase in sustainable ocean finance will be required to ensure a sustainable ocean economy that benefits society and businesses in both developing and developed countries.

The report, published today - on World Ocean Day - identifies major barriers to financing such a sustainable ocean economy. This includes all ocean-based industries, like seafood production, shipping and renewable energy, and ecosystem goods and services, such as climate regulation and coastal protection.

"The size of the ocean economy was estimated at around USD $1.5 trillion in 2010, and prior to the COVID-19 pandemic, was projected to increase to USD $3 trillion in 2030," said lead author Dr. Rashid Sumaila, a professor at UBC's Institute for the Oceans and Fisheries and the School of Public Policy and Global Affairs and Canada Research Chair in Interdisciplinary Ocean and Fisheries Economics.

"But a sustainable ocean economy requires healthy and resilient marine ecosystems, which are being severely threatened by anthropogenic and climate pressures," said Dr. Sumaila. "There are many opportunities for governments, financial institutions and other players to make financial gains in this type of sustainable economy -- but there are also many barriers that need to be overcome."

Four major barriers identified in the study include:

1. A weak enabling environment for attracting sustainable ocean finance;

2. Insufficient public and private investment in the ocean economy due to a lack of high quality, investible projects with appropriate deal size and risk-return ratios to match available capital;

3. A limited ability of people to visualize and develop projects that are attractive to investors; and

4. The higher relative risk profile of ocean investments, where the enabling environment for insurance and risk mitigation is also not in place.

Currently, there is a shortfall in financing a sustainable ocean economy. According to the researchers, governments and public institutions may be a good place to start, in order to close this gap.

"There is scope for raising money from the uses of the ocean, and for part of this to be used to improve its management," said Dr. Sumaila. "The gap in conservation financing for all ecosystems, which includes funds for a sustainable ocean economy, was estimated at USD $300 billion globally. That is less than one per cent of the global GDP. Can you imagine what we would have available if governments made two or three per cent available?"

This would lead to financial institutions being incentivised to invest, and to the development of an enabling environment with private sector actors who are interested in encouraging green ventures that foster ocean development.

"And then you get insurance companies involved, because working in the ocean is generally riskier than working on land," said Dr. Louise Teh, a research associate at the Institute for the Oceans and Fisheries.

The report's authors point to public-private partnerships that have had significant results, including special green investments funds offered by the Netherlands that are exempt from income tax, thus allowing investors in green projects - such as green shipping - to contract loans at reduced interest rates.

"These Dutch green funds have already attracted more investment than can be utilized in the available schemes - an encouraging sign for the future prospects of such instruments," said Dr. Sumaila.

The cost of inaction regarding the conservation and sustainable use of the ocean is high.

"If we carry on with 'business as usual' we still face the cost of coastal protection, relocation of people and loss of land to sea level rise - a cost that is projected to rise from USD $200 billion to a trillion USD annually by 2100," said Dr. Sumaila. "The centrality of adequate finance to ensure a sustainable ocean economy is such that the world may need a Paris Agreement type effort to meet the needs."

"Financing a sustainable ocean economy" was published in Nature Communications.

Credit: 
University of British Columbia

Monarchs raised in captivity can orient themselves for migration, U of G study reveals

image: A monarch is fitted with a radio tracker

Image: 
Alana Wilcox

Monarch butterflies raised indoors still know how to fly south if given enough time to orient themselves, according to new University of Guelph research.

The finding is good news for the many nature lovers and school students who raise monarchs and then set them free to help boost struggling numbers.

Monarchs are the only butterfly known to make a long-distance migration to warmer wintering grounds. While those born in the spring and early summer live only from two to six weeks, those that emerge in the late summer sense environmental signals that tell them to fly thousands of kilometres south, to central Mexico.

Recent U.S. studies have suggested that captive-raised monarchs become disoriented when they emerge from their chrysalises and cannot fly south. But this new research, led by U of G PhD student Alana Wilcox and integrative biology professor Dr. Ryan Norris, finds that may not be true.

Wilcox said previous research was conducted only in a "flight simulator," involving placing the butterflies into an open vessel and then gauging which direction they try to fly. The U of G team used a flight simulator but also tracked a second group of monarchs that were released in the wild after being equipped with tiny radio transmitters.

Those butterflies showed proper southward orientation, if given enough time to get their bearings.

"We believe the reason why the monarchs released in the wild flew in the proper direction is likely because they had time to calibrate their internal compasses after being released, which ensured they flew in a southerly direction," said Norris.

The new study appears in the journal Conservation Physiology. Integrative biology professor Dr. Amy Newman and Dr. Nigel Raine, a professor in U of G's School of Environmental Sciences, contributed to the research.

The team came upon the finding almost by accident. They had been investigating whether monarch caterpillars raised on milkweed grown in soil containing a neonicotinoid pesticide would have trouble migrating once they metamorphosed into butterflies.

They found the pesticide did not appear to affect the butterflies' migration. But they noticed differences between the monarchs tested in the flight simulator and those that had been raised in the same conditions and released in the wild with radio transmitters.

Only 26 per cent of monarchs tested in the flight simulator (10 of 39 butterflies) showed a weak southward orientation after several minutes of testing; the rest flew in all directions. But almost all the radio-tracked butterflies (28 of 29, or 97 per cent) flew in a south-to-southeast direction from the release site and were detected at distances of up to 200 kilometres away.

"Our results suggest that although captive rearing of monarch butterflies may cause temporary disorientation for monarchs, once butterflies have been exposed to sunlight and natural skylight cues, they can establish proper orientation using their sense of proper flight direction," Wilcox said.

This process of orientation after release from captivity can take between 24 and 48 hours, added Norris.

The findings are good news for the thousands of butterfly enthusiasts and educators concerned that raising endangered monarchs in captivity might hamper their instinct to fly to their wintering grounds in Mexico.

The team notes the findings apply only to late-summer monarchs, which perceive environmental cues such as shorter days as signs that it's time to migrate to Mexico.

"Though the environmental conditions in our experiment might differ for monarchs reared by hobbyists, our results suggest that captive rearing remains a valuable educational tool for highlighting the natural history and biology of butterflies," said Norris.

Credit: 
University of Guelph

Mechanochemical peptide bond formation behind the origins of life

image: T. Stolar, K. Užarević, José G. Hernández.

Image: 
RBI

The presence of amino acids on the prebiotic Earth is widely accepted, either coming from endogenous chemical processes or being delivered by extraterrestrial material. On the other hand, plausibly prebiotic pathways to peptides often rely on different aqueous approaches where condensation of amino acids is thermodynamically unfavorable. Now, chemists from the Ruđer Bošković Institute (RBI), in collaboration with colleagues from Xellia Pharmaceuticals, have shown that solid-state mechanochemical activation of glycine and alanine in combination with mineral surfaces leads to the formation of peptides.

This research shows for the first time the usefulness of mechanochemical activation for the prebiotic synthesis of larger biomolecules such as peptides. The results of the research have been published in the prestigious scientific journal Angewandte Chemie.

Prebiotic chemistry studies chemical transformations, under conditions plausible for the early Earth (approximately 4.3 to 3.7 billion years ago), that could have led to life. Since the surface of the Earth has been reshaped by different geological processes over time, there is no historical record that unambiguously explains how life appeared.

It is generally considered that from the primordial chemical inventory, more complex molecules emerged by chemical evolution which subsequently led to life.

Reaction conditions that are accepted as plausible include aqueous media, water/rock interfacial interactions, and solid-state environments free of water.

Sources of mechanical energy on the prebiotic Earth likely included impacts, erosion, weathering, tectonics, and earthquakes, whereas geothermal settings provided local inputs of thermal energy.

Peptide bond formation is one of the key chemical transformations in the field of prebiotic chemistry. Peptides are thought to have played an important catalytic role in the formation of other biomolecules and to have been involved in a primordial molecular symbiosis with nucleic acids. Current strategies for prebiotic peptide bond synthesis rely on α-aminonitrile ligation in water and on the use of wet/dry cycles for the condensation of amino acids.

Researchers from the RBI (Dr. José G. Hernández, Dr. Krunoslav Užarević, and PhD student Tomislav Stolar), in collaboration with Dr. Ernest Meštrović and PhD student Saša Grubešić from Xellia and Dr. Nikola Cindro from the Chemistry Department of the Faculty of Science (University of Zagreb), have shown that mechanochemical prebiotic peptide bond formation occurs in the absence of water.

The team discovered that mechanochemical ball-milling of glycine in the presence of minerals such as TiO2 and SiO2 leads to the formation of glycine oligomers. If the reaction mixture is simultaneously heated using thermally controlled ball-milling, glycine oligomers up to Gly11 (11 glycine residues) are obtained.

Experiments with diketopiperazine (DKP), diglycine, and triglycine showed that mechanochemical peptide bond formation is a dynamic and reversible process with simultaneous making and breaking of peptide bonds.

Notably, ball-milling of a glycine and L-alanine mixture results in the formation of their hetero-oligopeptides. High-performance liquid chromatography (HPLC) and mass spectrometry (MS) were used to analyze the reaction products.

Long oligomers of glycine obtained through a mechanochemical pathway might have offered access to a more diverse library of peptides on the prebiotic Earth through chemical modifications such as α-alkylation. The results of this study complement existing experimental procedures in prebiotic chemistry and offer an alternative synthetic pathway to peptides that proceeds in the absence of water.

"The origin of life question is one of the most important ones in science and requires an interdisciplinary approach to study it. Therefore, space agencies such as NASA and JAXA invest great resources to acquire new fundamental insights. For example, recent Hayabusa2 and OSIRIS-REx asteroid sampling missions will offer clues into the chemical inventory available during the time when life emerged on Earth.

"First-ever samples of the asteroid were brought back to Earth in December of 2020 and more are expected in 2023. Together with the identification of extraterrestrial materials in those samples, it is important to conduct laboratory experiments that would explain their presence and formation mechanism. Such fundamental studies can then be applied in modern synthetic chemistry," says Tomislav Stolar, first author of the publication.

Credit: 
Ruđer Bošković Institute

Preventing plant disease pandemics

image: David Schmale co-authored a recent paper examining efforts to prevent plant disease pandemics.

Image: 
Virginia Tech

During the COVID-19 pandemic, food systems faced disruptions from staff shortages and supply chain issues. Now, a Virginia Tech researcher is assisting with efforts to keep plants from facing a pandemic of their own.

Just like human diseases, plant diseases don't have arbitrary boundaries. These diseases don't stop at a border crossing or a port of entry. That's why plant disease surveillance, improved plant disease detection systems, and predictive plant disease modeling - integrated at the global scale - are necessary to mitigate future plant disease outbreaks and protect the global food supply, according to a team of researchers in a new commentary published in "Proceedings of the National Academy of Sciences."

"The manuscript offers a unique and timely perspective on plant diseases, particularly in the context of the COVID-19 pandemic," said David Schmale, a co-author on the paper and a professor in the Virginia Tech School of Plant and Environmental Sciences in the College of Agriculture and Life Sciences. "What would happen if the world lost a staple crop, such as wheat, to a plant disease pandemic? The manuscript considers current tools and capabilities in the context of climate change and growing human populations. There is a clear opportunity to bring researchers together that work on the epidemiology of human diseases and plant diseases."

Schmale was a part of a team of experts, led by North Carolina State University, that met in Raleigh, North Carolina, a few years ago to discuss plant diseases and their impacts on food security. This manuscript is the result of that meeting, and many of the experts that were there in Raleigh are co-authors on the paper.

The idea is to "detect these plant disease outbreak sources early and stop the spread before it becomes a pandemic," said Jean Ristaino, William Neal Reynolds Distinguished Professor of Plant Pathology at North Carolina State University and the paper's lead author. Once an epidemic occurs, it is difficult to control, Ristaino said, likening the effort to the one undertaken to stop the spread of COVID-19.

Ristaino said that the efforts from a wide range of scholars - so-called convergence science - are needed to prevent plant disease pandemics. That means economists, engineers, crop scientists, crop disease specialists, geneticists, geographers, data analysts, statisticians, and others working together to protect crops, the farmers growing crops, and the people fed by those crops.

While some diseases are already under some sort of global surveillance - such as wheat rust and late blight, an important pathogen that affects potatoes and caused the Irish Potato Famine - other crop diseases are not routinely monitored.

A new strain of the fungal pathogen that causes wheat rust, known as Ug99, turned up in 1999 and has moved quickly throughout Africa, recently crossing the Red Sea into Yemen.

Globalization of food trade is another factor in the spread of plant pathogens between regions, and it further threatens a food supply already strained by a growing world population and human diseases.

Research is underway to model the risk of plant-pathogen spread and help predict and then prevent outbreaks, the researchers report in the paper. Modeling and forecasting disease spread can help mobilize mitigation strategies more precisely to stop these plant pandemics.

"Our work extends the concept of One Health to plant disease pandemics and the threat they pose to global food security," Schmale said. "People, domestic animals, and plants are all tightly connected through each other and the environments that they share. These connections are changing, and we must address the threat of high-risk plant pathogens to staple food crops to safeguard global food security."

Credit: 
Virginia Tech

Discovery of circadian rhythm gene in mice could lead to breakthroughs

image: Shihoko Kojima. Photo by Steven Mackay.

Image: 
Virginia Tech

That internal nagging feeling that drives you to seek sleep at night and wake in the morning to eat, work, and play, is, it turns out, genetic, and it's not just in people. Nearly every living organism - from animals to plants as well as several microorganisms and fungi - has an internal body clock, or a circadian rhythm.

Yet scientists have been puzzled over how these genes operate. Now, Virginia Tech scientists have taken a step closer to an answer thanks to the DNA of a mouse, a petri dish, and much patience. In a new study published in the journal Genes & Development, Shihoko Kojima, an assistant professor in the Department of Biological Sciences in the Virginia Tech College of Science and a researcher with the Fralin Life Sciences Institute, and her team have identified a novel gene, Per2AS, that controls the sleep/wake cycle in mice. Per2AS appears to be a new type of gene, known as a noncoding gene. Unlike most other genes, its RNA is not translated into a protein, which had made its function unclear until now.
(The term "circadian" derives from the Latin circa diem, or "around a day.")

The study has been in the works for several years. Nine, exactly. Why so long? Well, it's complicated. Literally. "It was difficult to find out what its job is because Per2AS was a noncoding gene," Kojima said. "Scientists have accumulated a lot of knowledge and tools to figure out the function of traditional genes. However, these tools cannot be readily applied to nontraditional genes, such as Per2AS, because most tools are made based on the unique characteristics common to traditional genes."

In addition to Kojima, the study has 13 authors from the Virginia Tech community, including faculty, former staff, and alumni. They are University Distinguished Professor John Tyson of the Department of Biological Sciences, former director of systems biology in the Academy of Integrated Science; research specialist Rebecca Mosig; undergraduate alumni Allison Castaneda, Jacob Deslauriers, Landon Frazier, Kevin He, Naseem Maghzian, and Camille Schrier, most of whom are seeking advanced degrees in health care or research; and Blacksburg High School alumni Aarati Pokharel, now at the University of Virginia, and Lily Zhu, now at Johns Hopkins University.

According to Kojima, when the Human Genome Project started some 30 years ago, scientists thought most of our genome was made up of traditional genes, because these genes were believed to control the unique traits that we all have - eye and hair color, height and weight, personality. That didn't turn out to be true.

"It turned out that only 2 percent of our genome is used for traditional genes and the rest appears to be nontraditional genes. There has been a hot debate whether these nontraditional genes are also important for our traits - some say it is DNA junk, while others say they have important functions," she said.

"Growing evidence suggests that at least some nontraditional genes are important for various biological processes, such as neuronal activities, immune functions, and cell differentiation, as well as disease development including cancer, neurodegeneration, and congenital genetic diseases."

The big takeaway: A nontraditional gene can have functions to control our body clock and therefore is important for our genome to have. In other words, nontraditional genes are as vital as their more basic counterparts.

"People also have an equivalent gene," Kojima said. "However, it is unclear at this point whether the human version has the same function(s) as the mouse version. Most organisms living on the Earth have a circadian clock because this is an internal timing system important to adapt to the daily environmental changes caused by the Earth's rotation. The circadian clock of human is not much different from that of rodents or insects."

What's next? Kojima wants to study the gene in a live mouse model, not just in a petri dish. "We also want to know if this gene is in many other organisms. If so, that would mean this gene is very important."

Credit: 
Virginia Tech

Radicalized and believing in conspiracies: Can the cycle be broken?

If your idea of conspiracy theories entails aliens, UFOs, governmental cover-ups at Roswell Air Force base, and the theme music of The X-Files--you're not alone. That was, indeed, the classic notion, says Scott Tyson, an assistant professor of political science at the University of Rochester.

But over the course of the last five years, he noticed a watershed. For starters, the term "theory" no longer applied to the convoluted ideas spouted by today's conspiracist groups such as QAnon, the Proud Boys, and the Oath Keepers, all of whom Tyson calls largely "theoryless."

For example, Tyson, a game theorist whose research focuses on authoritarian politics, conspiracies, and radicalization, points out that those who believe erroneously that former President Donald Trump's "victory was stolen," usually do not believe that votes cast on that same ballot for successful Republican congressional candidates have been tampered with.

Yet, these conspiracies have entered the mainstream discourse and are driving the growing radicalization of average Americans that manifested itself most visibly in the storming of the US Capitol on January 6, he says.

In a recent study, "Sowing the Seeds: Radicalization as a Political Tool" published in the American Journal of Political Science, Tyson--together with University of Michigan coauthor Todd Lehmann--looks at two common policy interventions--economic and psychological--designed to counter the growing radicalization among the US population.

The duo finds that improving economic conditions reduces both radicalization efforts and dissent. However, trying to render people psychologically less susceptible to radicalization attempts can backfire and instead increase the efforts by radical leaders to influence and radicalize more followers.

While radical assertions of a "deep state" and "stolen elections" have long bubbled quietly beneath the public discourse, Tyson says those ideas have now moved into the mainstream. That shift--from fringe to center stage--Tyson argues, happened during the Trump presidency.

Q&A

What's the nutshell definition of "radicalization"?

"Radicalization" is used interchangeably with "indoctrination." Essentially, it's creating self-motivation among people to do certain things. You would call someone radicalized when those things that you would normally have to motivate someone to do--you don't have to do anymore because they've become self-motivated. That's where conspiracism comes in--it restructures the way that people perceive the social world around them. Radicalization involves an element of extremism and is fundamentally a political thought with an ecosystem to it: there needs to be a political group, or a set of political leaders who are trying to restructure people's beliefs or their values in such a way that it helps their own political goals or causes.

How can radicalization be countered?

The way to combat it is not to hope for an easy solution. It's a false idea that we can just take out the leaders and it'll all go away, akin to simply cutting the head off the snake. That doesn't actually work. You have to go from the bottom up to start trying to siphon off radicalized people, and treat the organization more like a terrorist group in terms of any hearts-and-minds policies.

Does leadership "decapitation" work against a radical group such as QAnon?

We looked in our research at what happens when you threaten leadership decapitation and found that you actually provide an incentive for leaders to increase their efforts to radicalize others. The reason is very simple: if we think of radicalized people as having the self-motivation to do things against the government--whether it's protests, attacks, or bombings--then as more people become radicalized, the actual leaders become less important in these kinds of antigovernment actions. Our theory suggests that leaders are less important in the actual production of antigovernment actions, so that the government is essentially forced to divert attention from the leaders and toward these other threats. In effect, the leaders intentionally give up some of their own control.

Why were conspiracies able to enter the American mainstream so pervasively?

Trump was incredibly important in giving a megaphone to conspiracists who had been on the fringe beforehand until he became a political force and essentially weaponized a lot of those ideas. When Trump unleashed all these conspiracies on the public--many people didn't know that they were really fringe ideas. One other reason they were able to spread so quickly is our so-called "media ecosystem." We have media outlets like Fox News, OAN, and Newsmax who are perfectly willing to spout conspiracies. When it all started back in 2015, the mainstream media wasn't ready to deal with this kind of weaponization. That's why conspiracists were able to misuse the mainstream media to essentially launder their claims: the conspiracists would make a bunch of unfounded assertions and accusations, which the mainstream media would pick up in turn to report on. Part of the debunking, however, was retelling the untrue story. That way a lot of these conspiracy narratives ended up reaching a much larger audience.

What role did the pandemic play in the spread of conspiracies and the radicalization of US citizens?

QAnon was around before the pandemic, and the radicalization campaigns of far-right groups were already under way beforehand. But it certainly accelerated these efforts and made them more effective. Because of the pandemic people were more isolated, which means they were talking to fewer people, and the echo chamber became narrower. That in turn, made people more susceptible to becoming radicalized. It's very similar to how cults recruit people: they isolate them from their family and friends who are not involved in the cult. They keep new recruits in that echo chamber long enough until they've been able to radicalize them. The number of QAnon members and radicalized people through other far-right groups today would be much, much lower if the pandemic hadn't forced us all to isolate in the way that it did.

Credit: 
University of Rochester

Sleep characteristics predict cannabis use, binge drinking in teens and young adults

DARIEN, IL - A recent study of teens and young adults found that several factors related to sleep timing and sleep duration are associated with an increased risk of cannabis use and binge drinking of alcohol during the following year.

Results show that a greater late-night preference predicted a greater likelihood of any cannabis use the following year. Greater late-night preference, greater daytime sleepiness, later sleep timing on the weekend, and shorter sleep duration during weekdays and on the weekend, all predicted an increased risk for more severe binge drinking the following year.

For further analysis, the sample was stratified into two groups: middle school/high school students (age 12-18) and high school graduates (age 18-27). Results show that sleep variables predicted marijuana use only in the middle school and high school students, while different patterns of sleep characteristics predicted binge drinking in the two stratified samples.

"Overall, the results suggest that teens in middle and high school may be more vulnerable to sleep-related risk for substance use," said lead author Brant P. Hasler, who has a doctorate in clinical psychology and is an associate professor of psychiatry, psychology, and clinical and translational science in the Center for Sleep and Circadian Science at the University of Pittsburgh. "The particular pattern of sleep predictors in the middle school and high school sample is consistent with the 'circadian misalignment' caused by early school start times."

Multiple years of data were analyzed from the National Consortium on Alcohol and Neurodevelopment in Adolescence. The sample comprised 831 participants, including 423 females. Participants were between 12 and 21 years of age at baseline. Results were controlled for factors such as age, sex, race, parental education, and previous year's substance use.

"Sleep is modifiable behavior, and perhaps easier to modify than going after substance use directly," said Hasler. "Furthermore, other studies show college-age teens are more willing to hear about changing their sleep than changing their substance use. Thus, focusing on improving teen sleep -- including through delaying school start times -- may be an underutilized but effective approach to reducing risk for problematic substance use."

CDC data show that only 25% of students in grades 9 through 12 get sufficient sleep on an average school night, and early school start times are one factor associated with insufficient sleep in teens. The American Academy of Sleep Medicine recommends that middle school and high school start times should be 8:30 a.m. or later to support an adequate opportunity for adolescents to obtain sufficient sleep on school nights. Sleep problems such as insufficient sleep duration, irregular sleep timing, and insomnia also are common among college students, and these problems are associated with anxiety and depression symptoms.

The research abstract was published recently in an online supplement of the journal Sleep and will be presented as an oral presentation on Friday, June 11, during Virtual SLEEP 2021. SLEEP is the annual meeting of the Associated Professional Sleep Societies, a joint venture of the American Academy of Sleep Medicine and the Sleep Research Society.

This press release includes data that has been updated since the publication of the abstract.

Credit: 
American Academy of Sleep Medicine

Sleep disorders are associated with increased dementia risk in patients with TBI

DARIEN, IL - Preliminary results from a study of more than 700,000 patients with traumatic brain injury (TBI) show that those with a sleep disorder had an increased risk of developing dementia.

Results show that over a median follow-up period of more than four years, TBI patients with a diagnosed sleep disorder were 25% more likely to develop dementia. The results were similar when stratified by sex: Having a sleep disorder was associated with a 25.5% increase in the risk of incident dementia in male persons with TBI and a 23.4% increase in the risk of developing dementia in female persons with TBI.

"Our study's novelty is its confirmation of sleep disorders' association with incident dementia in both male and female patients, independently of other known dementia risks," said lead author and primary investigator Dr. Tatyana Mollayeva, an affiliate scientist at the Kite Research Institute, the research arm of the Toronto Rehabilitation Institute and one of the principal research institutes at the University Health Network. Mollayeva is part of the Acquired Brain Injury & Society team at KITE. She is also an assistant professor at the Dalla Lana School of Public Health. "We are also the first to report on the risks that sleep disorders and other factors pose separately for male and female patients with TBI."

The retrospective study involved a province-wide cohort of all adult patients who were free of dementia when admitted to the emergency department or acute care hospital with a diagnosis of TBI between May 2003 and April 2013. The total sample comprised 712,708 patients with TBI of all severities. Their median age was 44 years, and 59% were male.

Over a median follow-up period of 52 months, 32,834 patients -- or 4.6% -- developed dementia. Analyses controlled for age, sex, income level, injury severity, and known comorbidity risks.
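The release does not name the statistical model, but a standard way to estimate such an adjusted association in a cohort with censored follow-up is a Cox proportional hazards model. The sketch below, using the lifelines library on synthetic data with made-up covariate names, is purely illustrative and is not the authors' analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000

# Synthetic, illustrative cohort (NOT the study's data): a sleep-disorder flag
# plus a few adjustment covariates, loosely echoing those named in the article.
df = pd.DataFrame({
    "sleep_disorder": rng.integers(0, 2, n),
    "age": rng.normal(44, 15, n),
    "male": rng.integers(0, 2, n),
    "injury_severity": rng.integers(1, 4, n),
})

# Simulate time-to-dementia with a ~25% higher hazard when sleep_disorder = 1.
hazard = 0.01 * np.exp(0.25 * df["sleep_disorder"] + 0.03 * (df["age"] - 44))
time_to_event = rng.exponential(1.0 / hazard)
df["duration"] = np.minimum(time_to_event, 52.0)          # censor at 52 months of follow-up
df["dementia"] = (time_to_event <= 52.0).astype(int)

# Cox model: adjusted hazard ratio for sleep_disorder (exp(coef) should land near 1.28 here).
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="dementia")
print(cph.summary[["exp(coef)"]])
```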

"The strong links to incidence of dementia in both sexes suggest a need for more targeted sleep disorders risk awareness in patients with TBI," said Mollayeva.

The research abstract was published recently in an online supplement of the journal Sleep and will be presented as an oral presentation on June 13 during Virtual SLEEP 2021. SLEEP is the annual meeting of the Associated Professional Sleep Societies, a joint venture of the American Academy of Sleep Medicine and the Sleep Research Society.

Credit: 
American Academy of Sleep Medicine

How your phone can predict depression and lead to personalized treatment

image: Jyoti Mishra, PhD, is senior author of the study, director of NEATLabs and assistant professor in the Department of Psychiatry at UC San Diego School of Medicine.

Image: 
UC San Diego Health Sciences

According to the National Alliance on Mental Illness and the World Health Organization, depression affects 16 million Americans and 322 million people worldwide. Emerging evidence suggests that the COVID-19 pandemic is further exacerbating the prevalence of depression in the general population. With this trajectory, it is evident that more effective strategies are needed for therapeutics that address this critical public health issue.

In a recent study, published June 9, 2021 in the online edition of Translational Psychiatry, researchers at University of California San Diego School of Medicine used a combination of modalities, such as measuring brain function, cognition and lifestyle factors, to generate individualized predictions of depression.

The machine learning and personalized approach took into account several factors related to an individual's subjective symptoms, such as sleep, exercise, diet, stress, cognitive performance and brain activity.

"There are different underlying reasons and causes for depression," said Jyoti Mishra, PhD, senior author of the study, director of NEATLabs and assistant professor in the Department of Psychiatry at UC San Diego School of Medicine. "Simply put, current health care standards are mostly just asking people how they feel and then writing a prescription for medication. Those first-line treatments have been shown to be only mild to moderately effective in large trials.

"Depression is a multifaceted illness, and we need to approach it with personalized treatment whether that be therapy with a mental health professional, more exercise or a combination of approaches."

The one-month study collected data from 14 participants with depression, using smartphone applications and wearables (such as smart watches) to measure mood and the lifestyle variables of sleep, exercise, diet and stress. These measures were paired with cognitive evaluations and electroencephalography (EEG), which uses electrodes on the scalp to record brain activity.

The goal was not to make any comparisons across individuals, but to model the predictors of each person's daily fluctuations in depressed mood.

The researchers developed a new machine-learning pipeline to systematically identify distinct predictors of low mood in each individual.

As an example, exercise and daily caffeine intake emerged as strong predictors of mood for one participant, but for another, it was sleep and stress that were more predictive, while in a third subject, the top predictors were brain function and cognitive responses to rewards.
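
The paper details its own modeling approach; purely as an illustration of the per-individual idea, the sketch below fits a small model to one hypothetical participant's daily logs and ranks which features best predict that person's mood. The variable names and simulated values are assumptions, not the study's data or pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
days = 30  # one month of daily logs for a single hypothetical participant

# Made-up daily measurements for one person.
data = pd.DataFrame({
    "sleep_hours":  rng.normal(7, 1, days),
    "exercise_min": rng.normal(25, 10, days),
    "caffeine_mg":  rng.normal(150, 50, days),
    "stress_score": rng.normal(3, 1, days),
})
# Simulated daily mood driven mostly by sleep and stress for this individual.
mood = 0.8 * data["sleep_hours"] - 0.6 * data["stress_score"] + rng.normal(0, 0.5, days)

def rank_predictors(features: pd.DataFrame, target: pd.Series) -> pd.Series:
    """Fit a model on one person's data and rank features by importance."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(features, target)
    return pd.Series(model.feature_importances_, index=features.columns).sort_values(ascending=False)

# The top-ranked features are this individual's strongest mood predictors.
print(rank_predictors(data, mood))
```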

"We should not be approaching mental health as one size fits all. Patients will benefit by having more direct and quantified insight onto how specific behaviors may be feeding their depression. Clinicians can leverage this data to understand how their patients might be feeling and better integrate medical and behavioral approaches for improving and sustaining mental health," said Mishra.

"Our study shows that we can use the technology and tools that are readily available, like cell phone apps, to collect information from individuals with or at risk for depression, without significant burden to them, and then harness that information to design personalized treatment plans."

Mishra said next steps include examining if the personalized treatment plans guided by the data and machine learning are effective.

"Our findings could have broader implications than depression. Anyone seeking greater well-being could benefit from insights quantified from their own data. If I don't know what is wrong, how do I know how to feel better?"

Credit: 
University of California - San Diego

Increasing the memory capacity of intelligent systems based on the function of human neurons

Researchers from the University of Liège (Belgium) have recently developed a new artificial neuron inspired by the different modes of operation of human neurons. Called a Bistable Recurrent Cell (BRC), the new neuron has enabled recurrent networks to learn temporal relationships spanning more than a thousand discrete time steps, where classical methods failed after only about a hundred. These important results are published in the journal PLOS One.

The enormous interest in artificial intelligence (AI) in recent years has led to the development of extremely powerful machine learning techniques. Time series - any series of data with a time component, such as stock prices, weather patterns or electroencephalograms - are extremely common and of great interest because of their wide range of applications. Time-series analysis is a task for which machine learning techniques are particularly well suited, since they enable the prediction of future events from past ones. Given the diversity of potential applications, it is not surprising that processing such data with AI algorithms has become very popular in recent years.

A particular type of artificial neural network, the recurrent neural network (RNN), has been developed in recent years to have a memory that enables the network to retain information over time and thus process a time series correctly. Each time new data arrives, the network updates its memory to incorporate the new information. Despite these developments, such networks remain difficult to train, and their memory capacity is limited in time. "Imagine a network that receives new information every day," explains Nicolas Vecoven, a doctoral student in the Systems and Modeling lab at the University of Liège and first author of the study, "but by the fiftieth day, we notice that the information from the first day has already been forgotten."

"Human neurons, however, are capable of retaining information over an almost infinite period of time thanks to a bi-stability mechanism. This allows a neuron to stabilise in either of two different states, depending on the history of the electrical currents it has been subjected to, and to remain there indefinitely. In other words, thanks to this mechanism, a human neuron can retain a bit (a binary value) of information for an unlimited time," Vecoven further explains. Building on this bi-stability mechanism, Vecoven and his colleagues Damien Ernst (an AI specialist) and Guillaume Drion (a neuroscience specialist) at ULiège constructed a new artificial neuron with the same property and integrated it into recurrent networks. Called a Bistable Recurrent Cell (BRC), this new artificial neuron has enabled recurrent networks to learn temporal relationships of more than 1,000 time steps, where classical methods failed after only about 100. These promising results have been published in the journal PLOS One. The three researchers are continuing to develop technologies that improve the memory of RNNs by promoting the emergence of equilibrium points within them.
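
For the technically curious, here is a minimal NumPy sketch of a BRC-style update, loosely following the gated, bistable recurrence described in the PLOS One paper; the weight initialization, sizes and toy sequence below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BistableRecurrentCell:
    """Sketch of a BRC-style update: each hidden unit feeds back through a
    per-unit gain a_t that can exceed 1, which makes the unit bistable and
    lets it latch a value, unlike a standard gated RNN cell."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        # Input-to-hidden weights for the gain a, the gate c, and the candidate.
        self.Ua = rng.uniform(-s, s, (hidden_size, input_size))
        self.Uc = rng.uniform(-s, s, (hidden_size, input_size))
        self.Ux = rng.uniform(-s, s, (hidden_size, input_size))
        # Per-unit (diagonal) recurrent weights and biases.
        self.wa = rng.uniform(-s, s, hidden_size)
        self.wc = rng.uniform(-s, s, hidden_size)
        self.ba = np.zeros(hidden_size)
        self.bc = np.zeros(hidden_size)
        self.hidden_size = hidden_size

    def step(self, x, h_prev):
        # Gain in (0, 2): values above 1 put a unit in its bistable regime.
        a = 1.0 + np.tanh(self.Ua @ x + self.wa * h_prev + self.ba)
        # Update gate, as in a GRU: how much of the old state to keep.
        c = sigmoid(self.Uc @ x + self.wc * h_prev + self.bc)
        # Candidate state re-injects the previous state scaled by the gain a.
        h_tilde = np.tanh(self.Ux @ x + a * h_prev)
        return c * h_prev + (1.0 - c) * h_tilde

# Run the cell over a toy sequence of 1,000 time steps.
cell = BistableRecurrentCell(input_size=4, hidden_size=8)
h = np.zeros(cell.hidden_size)
for t in range(1000):
    x_t = np.ones(4) if t == 0 else np.zeros(4)  # a single "event" at t = 0
    h = cell.step(x_t, h)
print(h)  # units with gains above 1 can retain a trace of the initial event
```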

Credit: 
University of Liège

The buck stops where? UNH research records longest-ever deer distance

image: Fleeting glimpse of an adult white-tailed deer, known as N17003, that traveled the longest distance ever recorded by a UNH researcher - over 200 miles in 22 days.

Image: 
Missouri Department of Conservation

DURHAM, N.H.--Why did the deer cross the road? According to research from the University of New Hampshire, to keep going and going and going. Researchers have documented the longest distance ever recorded for an adult male white-tailed deer--300 kilometers, or close to 200 miles, in just over three weeks. The finding has important implications for population management and for the transmission of disease, especially chronic wasting disease, a fatal neurological disease.

"Deer are one of the most abundant, well-known and intensely managed species of wildlife in the United States," said Remington Moll, assistant professor of wildlife ecology and lead author. "So, to make this discovery despite the fact that they are so well studied is pretty surprising."

In their study, published in the journal Ecology and Evolution, researchers analyzed data from GPS radio collars on more than 600 deer in Missouri. One dispersal, or long-distance journey, of an adult white-tailed deer stood out for its length, duration and age of the deer. The buck travelled close to 300 kilometers over 22 days by moving an average of 13.6 kilometers per day (almost eight and a half miles), crossing a major river seven times, an interstate highway, a railroad and eight state highways. To confirm the findings, the researchers surveyed the scientific literature for other dispersals of white-tailed deer. The deer, known as N17003, stood head and antlers above others; his walkabout was 174 kilometers longer than any other recorded for an adult male deer.
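
For context, per-day movement rates like the 13.6 kilometers quoted above are derived from successive GPS collar fixes. The sketch below sums great-circle (haversine) distances between consecutive fixes to get a total path length and a daily rate; the coordinates shown are hypothetical, not collar data from this study.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two GPS fixes."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def path_length_km(fixes):
    """Sum of step distances along an ordered list of (lat, lon) fixes."""
    return sum(
        haversine_km(lat1, lon1, lat2, lon2)
        for (lat1, lon1), (lat2, lon2) in zip(fixes, fixes[1:])
    )

# Hypothetical collar fixes (lat, lon); real collars log several per day.
fixes = [(38.60, -92.20), (38.70, -92.05), (38.85, -91.90)]
total_km = path_length_km(fixes)
days = 2
print(f"{total_km:.1f} km total, {total_km / days:.1f} km/day")
```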

"This extraordinary movement just jumped out from the others we tracked," said Moll. "At first, we thought it was an error. It looks like someone took the GPS collar and drove across the state of Missouri."

The findings were remarkable not only for the deer's range--he roamed a distance equal to that between New York City and Baltimore--but also because, unlike juvenile males, which disperse to seek breeding opportunities, adult males tend to stay put. Movements during the dispersal were faster and more directional than those within the deer's home range, and more so at night than during the day, when the buck frequently sheltered in forest cover. The journey, which took place in November 2017, occurred during hunting season.

"We call this a rare event, but we haven't been putting collars out for that long, and not in these large numbers," said Moll. "It's entirely possible that it could be happening with greater frequency than we've known."

Nearly eight million Americans hunt deer, an activity that contributes more than $20 billion to the U.S. economy. The researchers say that understanding how far deer travel, and how they do it, is important for managing the species and controlling chronic wasting disease, a fatal neurological disease spread by direct contact and through the environment. Knowing that deer cross county or even state lines highlights the need for regional coordination of management.

Funding for this study was provided by the Missouri Department of Conservation, the U.S. Fish and Wildlife Service and the University of Montana.

Co-authors are Jon Roberts and Joshua Millspaugh, University of Montana; Kevyn Wiskirchen, Jason Sumners, Jason Isabelle and Barbara Keller, Missouri Department of Conservation; and Robert Montgomery, Michigan State University.

The University of New Hampshire inspires innovation and transforms lives in our state, nation, and world. More than 16,000 students from all 50 states and 71 countries engage with an award-winning faculty in top-ranked programs in business, engineering, law, health and human services, liberal arts and the sciences across more than 200 programs of study. As one of the nation's highest-performing research universities, UNH partners with NASA, NOAA, NSF and NIH, and receives more than $110 million in competitive external funding every year to further explore and define the frontiers of land, sea and space.

Credit: 
University of New Hampshire