Tech

Small finlets on owl feathers point the way to less aircraft noise

image: Owl wingspan

Image: 
Courtesy Professor Hermann Wagner

A recent study by City, University of London's Professor Christoph Bruecker and his team has revealed how micro-structured finlets on owl feathers enable silent flight and may point the way towards reducing aircraft noise in the future.

Professor Bruecker is City's Royal Academy of Engineering Research Chair in Nature-Inspired Sensing and Flow Control for Sustainable Transport and Sir Richard Olver BAE Systems Chair for Aeronautical Engineering.

His team have published their findings in the Institute of Physics journal Bioinspiration & Biomimetics, in a paper titled 'Flow turning effect and laminar control by the 3D curvature of leading edge serrations from owl wing'.

Their research describes how they translated detailed 3D geometry data from typical owl feathers, provided by Professor Hermann Wagner at RWTH Aachen University (Germany), into a biomimetic aerofoil in order to study the aerodynamic effect of the special filaments at the leading edge of the feathers.

The results show that these structures work as arrays of finlets which coherently turn the flow direction near the aerodynamic wall and keep the flow attached for longer and with greater stability, avoiding turbulence.

The City research team was inspired by the complex 3D geometry of the extensions along the front of the owl's feathers - reconstructed by Professor Wagner and his team in previous studies using high-resolution micro-CT scans.

After being transferred into a digital shape model, the flow simulations around those structures (using computational fluid dynamics) clearly indicated the aerodynamic function of these extensions as finlets, which turn the flow direction in a coherent way.

This effect is known to stabilize the flow over a swept-wing aerofoil, a configuration typical of owls while flapping their wings and gliding.

Using flow studies in a water tunnel, Professor Bruecker also confirmed the flow-turning hypothesis in experiments with an enlarged finlet model.

His team was surprised that instead of producing vortices, the finlets act as thin guide vanes due to their special 3D curvature. The regular array of such finlets over the wing span therefore turns the flow direction near the wall in a smooth and coherent manner.

The team plans to test a technical realisation of such a swept-wing aerofoil pattern in an anechoic wind tunnel for further acoustic studies. The outcome of this research is expected to be important for future laminar wing design and has the potential to reduce aircraft noise.

Credit: 
City St George’s, University of London

New test reveals AI still lacks common sense

image: Despite advances in natural language processing, state-of-the-art systems still generate sentences like "Two dogs are throwing frisbees at each other."

Image: 
Adriana Sanchez

Natural language processing (NLP) has taken great strides recently--but how much does AI understand of what it reads? Less than we thought, according to researchers at USC's Department of Computer Science. In a recent paper, Assistant Professor Xiang Ren and PhD student Yuchen Lin found that despite advances, AI still doesn't have the common sense needed to generate plausible sentences.

"Current machine text-generation models can write an article that may be convincing to many humans, but they're basically mimicking what they have seen in the training phase," said Lin. "Our goal in this paper is to study the problem of whether current state-of-the-art text-generation models can write sentences to describe natural scenarios in our everyday lives."

Understanding scenarios in daily life

Specifically, Ren and Lin tested the models' ability to reason and showed there is a large gap between current text generation models and human performance. Given a set of common nouns and verbs, state-of-the-art NLP computer models were tasked with creating believable sentences describing an everyday scenario. While the models generated grammatically correct sentences, they were often logically incoherent.

For instance, here's one example sentence generated by a state-of-the-art model using the words "dog, frisbee, throw, catch":

"Two dogs are throwing frisbees at each other."

The test is based on the assumption that coherent ideas (in this case, "a person throws a frisbee and a dog catches it") can't be generated without a deeper awareness of common-sense concepts. In other words, common sense is more than just the correct understanding of language--it means you don't have to explain everything in a conversation. This is a fundamental challenge in the goal of developing generalizable AI--but beyond academia, it's relevant for consumers, too.

Without such an understanding, chatbots and voice assistants built on these state-of-the-art natural-language models are vulnerable to failure. Common sense is also crucial if robots are to become more present in human environments. After all, if you ask a robot for hot milk, you expect it to know you want a cup of milk, not the whole carton.

"We also show that if a generation model performs better on our test, it can also benefit other applications that need commonsense reasoning, such as robotic learning," said Lin. "Robots need to understand natural scenarios in our daily life before they make reasonable actions to interact with people."

Joining Lin and Ren on the paper are USC's Wangchunshu Zhou, Ming Shen and Pei Zhou; Chandra Bhagavatula from the Allen Institute for Artificial Intelligence; and Yejin Choi from the Allen Institute for Artificial Intelligence and the Paul G. Allen School of Computer Science & Engineering at the University of Washington.

The common sense test

Common-sense reasoning, or the ability to make inferences using basic knowledge about the world--like the fact that dogs cannot throw frisbees to each other--has resisted AI researchers' efforts for decades. State-of-the-art deep-learning models can now reach around 90% accuracy on existing benchmarks, so it would seem that NLP has gotten closer to its goal.

But Ren, an expert in natural language processing, and Lin, his student, needed more convincing about this statistic's accuracy. In their paper, published in the Findings of the Empirical Methods in Natural Language Processing (EMNLP) conference on Nov. 16, they challenge the effectiveness of the benchmark and, therefore, the level of progress the field has actually made.

"Humans acquire the ability to compose sentences by learning to understand and use common concepts that they recognize in their surrounding environment," said Lin.

"Acquiring this ability is regarded as a major milestone in human development. But we wanted to test if machines can really acquire such generative commonsense reasoning ability."

To evaluate different machine models, the pair developed a constrained text generation task called CommonGen, which can be used as a benchmark to test the generative common sense of machines. The researchers presented a dataset consisting of 35,141 concepts associated with 77,449 sentences. They found that even the best-performing model achieved an accuracy rate of only 31.6%, versus 63.5% for humans.

"We were surprised that the models cannot recall the simple commonsense knowledge that 'a human throwing a frisbee' should be much more reasonable than a dog doing it," said Lin. "We find even the strongest model, called the T5, after training with a large dataset, can still make silly mistakes."

It seems, said the researchers, that previous tests have not sufficiently challenged the models on their common sense abilities, instead mimicking what they have seen in the training phase.

"Previous studies have primarily focused on discriminative common sense," said Ren. "They test machines with multi-choice questions, where the search space for the machine is small--usually four or five candidates."

For instance, a typical setting for discriminative common-sense testing is a multiple-choice question answering task, for example: "Where do adults use glue sticks?" A: classroom B: office C: desk drawer.

The answer here, of course, is "B: office." Even computers can figure this out without much trouble. In contrast, a generative setting is more open-ended, such as the CommonGen task, where a model is asked to generate a natural sentence from given concepts.

Ren explains: "With extensive model training, it is very easy to have a good performance on those tasks. Unlike those discriminative commonsense reasoning tasks, our proposed test focuses on the generative aspect of machine common sense."

Ren and Lin hope the data set will serve as a new benchmark to benefit future research about introducing common sense to natural language generation. In fact, they even have a leaderboard depicting scores achieved by the various popular models to help other researchers determine their viability for future projects.

"Robots need to understand natural scenarios in our daily life before they make reasonable actions to interact with people," said Lin.

"By introducing common sense and other domain-specific knowledge to machines, I believe that one day we can see AI agents such as Samantha in the movie Her that generate natural responses and interact with our lives."

Credit: 
University of Southern California

NYUAD researcher aids in the development of a pathway to solve cybersickness

image: Associate Professor of Psychology and Director of the Neuroimaging Center at NYU Abu Dhabi, Bas Rokers

Image: 
NYU Abu Dhabi

Fast facts:

Virtual reality (VR) and augmented reality (AR) technologies have grown in popularity as they can immerse users in novel situations and environments by simulating the necessary stimuli.

However, when using VR or AR technologies such as head-worn displays, users frequently report symptoms of nausea, disorientation, and sleepiness.

This is more commonly referred to as cybersickness, a form of motion sickness caused by the use of technology.

Abu Dhabi, UAE, November 18, 2020: Associate Professor of Psychology and Director of the Neuroimaging Center at NYU Abu Dhabi Bas Rokers and a team of researchers have evaluated the state of research on cybersickness and formulated a research and development agenda to eliminate cybersickness, allowing for broader adoption of immersive technologies.

In the paper titled Identifying Causes of and Solutions for Cybersickness in Immersive Technology: Reformulation of a Research and Development Agenda, published in the International Journal of Human-Computer Interaction, Rokers and his team discuss the process of creating a research and development agenda based on participant feedback from a workshop titled Cybersickness: Causes and Solutions and analysis of related research. The new agenda recommends prioritizing the creation of powerful, lightweight, and untethered head-worn displays, reducing visual latencies, standardizing symptom and aftereffect measurement, developing improved countermeasures, and improving the understanding of the magnitude of the problem and its implications for job performance.

The results of this study identify a clear path towards finding a solution for cybersickness and allowing for the widespread use of immersive technologies. In addition to their use in entertainment and gaming, VR and AR have significant applications in education, manufacturing, training, health care, retail, and tourism. For example, these technologies can enable educators to introduce students to distant locations and immerse them in a way that textbooks cannot. They can also allow healthcare workers to reach patients in remote and underserved areas, where they can provide diagnostics, surgical planning and image-guided treatment.

"As there are possible applications across many industries, understanding how to identify and evaluate the opportunities for mass adoption and the collaborative use of AR and VR is critical," said Rokers. "Achieving the goal of resolving cybersickness will allow the world to embrace the potential of immersive technology to enhance training, performance, and recreation."

Credit: 
New York University

Revolutionary CRISPR-based genome editing system treatment destroys cancer cells

Researchers at Tel Aviv University (TAU) have demonstrated that the CRISPR/Cas9 system is very effective in treating metastatic cancers, a significant step on the way to finding a cure for cancer. The researchers developed a novel lipid nanoparticle-based delivery system that specifically targets cancer cells and destroys them by genetic manipulation. The system, called CRISPR-LNPs, carries a genetic messenger (messenger RNA) encoding the CRISPR enzyme Cas9, which acts as molecular scissors that cut the cells' DNA.

The revolutionary work was conducted in the laboratory of Prof. Dan Peer, VP for R&D and Head of the Laboratory of Precision Nanomedicine at the Shmunis School of Biomedicine and Cancer Research at TAU. The research was conducted by Dr. Daniel Rosenblum together with Ph.D. student Anna Gutkin and colleagues at Prof. Peer's laboratory, in collaboration with Dr. Dinorah Friedmann-Morvinski from the School of Neurobiology, Biochemistry & Biophysics at TAU; Dr. Zvi R. Cohen, Director of the Neurosurgical Oncology Unit and Vice-Chair of the Department of Neurosurgery at the Sheba Medical Center; Dr. Mark A. Behlke, Chief Scientific Officer at IDT Inc. and his team; and Prof. Judy Lieberman of Boston Children's Hospital and Harvard Medical School.

The results of the groundbreaking study, which was funded by ICRF (Israel Cancer Research Fund), were published in November 2020 in Science Advances.

"This is the first study in the world to prove that the CRISPR genome editing system can be used to treat cancer effectively in a living animal," said Prof. Peer. "It must be emphasized that this is not chemotherapy. There are no side effects, and a cancer cell treated in this way will never become active again. The molecular scissors of Cas9 cut the cancer cell's DNA, thereby neutralizing it and permanently preventing replication."

To examine the feasibility of using the technology to treat cancer, Prof. Peer and his team chose two of the deadliest cancers: glioblastoma and metastatic ovarian cancer. Glioblastoma is the most aggressive type of brain cancer, with a life expectancy of 15 months after diagnosis and a five-year survival rate of only 3%. The researchers demonstrated that a single treatment with CRISPR-LNPs doubled the average life expectancy of mice with glioblastoma tumors, improving their overall survival rate by about 30%. Ovarian cancer is a major cause of death among women and the most lethal cancer of the female reproductive system. Most patients are diagnosed at an advanced stage of the disease, when metastases have already spread throughout the body. Despite progress in recent years, only a third of patients survive this disease. Treatment with CRISPR-LNPs in a mouse model of metastatic ovarian cancer increased overall survival by 80%.

"The CRISPR genome editing technology, capable of identifying and altering any genetic segment, has revolutionized our ability to disrupt, repair or even replace genes in a personalized manner," said Prof. Peer. "Despite its extensive use in research, clinical implementation is still in its infancy because an effective delivery system is needed to safely and accurately deliver the CRISPR to its target cells. The delivery system we developed targets the DNA responsible for the cancer cells' survival. This is an innovative treatment for aggressive cancers that have no effective treatments today."

The researchers note that by demonstrating its potential in treating two aggressive cancers, the technology opens numerous new possibilities for treating other types of cancer as well as rare genetic diseases and chronic viral diseases such as AIDS.

"We now intend to go on to experiments with blood cancers that are very interesting genetically, as well as genetic diseases such as Duchenne muscular dystrophy," says Prof. Peer. "It will probably take some time before the new treatment can be used in humans, but we are optimistic. The whole scene of molecular drugs that utilize messenger RNA (genetic messengers) is thriving - in fact, most COVID-19 vaccines currently under development are based on this principle. When we first spoke of treatments with mRNA twelve years ago, people thought it was science fiction. I believe that in the near future, we will see many personalized treatments based on genetic messengers - for both cancer and genetic diseases. Through Ramot, the Technology Transfer Company of TAU, we are already negotiating with international corporations and foundations, aiming to bring the benefits of genetic editing to human patients."

Credit: 
American Friends of Tel Aviv University

Surprises in 'active' aging

Aging is a process that affects not only living beings. Many materials, like plastics and glasses, also age - i.e. they change slowly over time as their particles try to pack better - and there are already computer models to describe this. Biological materials, such as living tissue, can show similar behaviour to glasses, except that the particles are actual cells or bacteria which have their own propulsion. Researchers at the University of Göttingen have now used computer simulations to explore the aging behaviour of these "living" glassy systems. Surprisingly, they found that the activity of the particles can actually drive aging, which has potential consequences for a number of applications. Their research was published in Physical Review Letters.

In materials like glasses and plastics, the particles pack together better over time (i.e. they age). But if this process is disturbed by mechanical deformation, for instance if a solid is bent, then the materials go back to their earlier state and are thus 'rejuvenated'. To model what happens in biological systems, physicists at the University of Göttingen developed extensive computer simulations of a model of a glass made up of active particles (a living glass). Just as it would in a real biological system, each particle in the simulation has its own propulsion force; this is modelled as changing direction randomly over time. Then the researchers varied the timescale of these changes in direction. When this timescale is short, particles are propelled randomly as if they were at a higher temperature, and this is known to produce aging. But when direction changes are slow, particles try to keep going in the same direction and this should act like local deformation, thus stopping aging. However, the simulations showed something interesting and unexpected: when the activity of the particles is very persistent, it actually drives aging in living glassy systems.
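To give a feel for the kind of model involved, here is a minimal sketch of a two-dimensional active-particle simulation with a tunable persistence time. The soft repulsion, parameter values and system size are assumptions chosen for illustration; they are not the model or parameters used in the Göttingen study.

```python
import numpy as np

# Minimal illustrative sketch: self-propelled particles whose propulsion
# direction decorrelates over a persistence time tau. The soft repulsion and
# all parameter values are assumptions, not the Göttingen study's model.
rng = np.random.default_rng(0)
N, L = 100, 10.0              # number of particles, periodic box size
v0, tau, dt = 0.5, 5.0, 0.01  # propulsion speed, persistence time, time step

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(0, 2 * np.pi, size=N)   # propulsion directions

def pair_forces(pos):
    """Soft short-range repulsion between all particle pairs (periodic box)."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                    # minimum-image convention
    r = np.linalg.norm(d, axis=-1) + np.eye(N)  # avoid division by zero on the diagonal
    strength = np.where(r < 1.0, 1.0 - r, 0.0)  # linear repulsion below r = 1
    return (strength / r)[:, :, None] * d

for step in range(1000):
    # overdamped dynamics: interparticle forces plus self-propulsion
    f = pair_forces(pos).sum(axis=1)
    drive = v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    pos = (pos + dt * (f + drive)) % L
    # rotational noise: larger tau means more persistent propulsion directions
    theta += np.sqrt(2 * dt / tau) * rng.standard_normal(N)
```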

"We were really surprised when we saw that persistent active propulsion can cause aging. We had expected it to work like small-scale deformation in the material that would rejuvenate it," comments Dr Rituparno Mandal from the Institute for Theoretical Physics at the University of Göttingen. He goes onto say, "But in fact, the local deformation is so slow that the particles can effectively go with the flow and use their motion to find lower energy arrangements. In effect, they pack better."

Senior author Professor Peter Sollich, also from the University of Göttingen, added: "The research highlights important features of glassy behaviour in active materials that have no comparable behaviour in conventional glasses. This might have implications for many biological processes where glass-like effects have been identified, including cell behaviour in wound-healing, tissue development and cancer metastasis."

Credit: 
University of Göttingen

Are high-protein total diet replacements the key to maintaining healthy weight?

image: Overview of the experimental protocol (A) and variables assessed during each 32-h test (B). CON, control diet; EE, energy expenditure; HP-TDR, high-protein total diet replacement; N/A, not applicable; REE, resting energy expenditure; WBCU, whole-body calorimetry unit.

Image: 
The Authors

Key Points

High-protein total diet replacement products are widely available to consumers; however, their efficacy has not been adequately studied.

The AJCN study compared the impact of a high-protein total diet replacement to that of a typical North American diet on key components of energy metabolism.

The high-protein total diet replacement compared to the North American diet resulted in higher energy expenditure, increased fat oxidation and negative fat balance, likely implying body fat loss.

Diets with a higher proportion of protein might offer a metabolic advantage compared to a diet consisting of the same number of calories, but with a lower proportion of protein.

Future studies are needed to better understand the long-term effects of high-protein total diet replacements on both healthy and diseased population groups.

Rockville, MD: According to the World Health Organization, obesity has nearly tripled worldwide since 1975. In 2016, for example, more than 1.9 billion adults were categorized as overweight. Of these, more than 650 million had obesity. Because obesity is associated with a higher incidence of diabetes, cardiovascular disease and some cancers, the rise in its prevalence has led to a global public health emergency.

Total diet replacements, nutritionally complete formula foods designed to replace the whole diet for a set period of time, have become an increasingly popular strategy to combat obesity. High-protein diets are another popular weight management strategy, having been shown to promote weight loss and weight maintenance by increasing our sense of fullness, energy expenditure, and ability to maintain or increase fat-free mass. Taken together, the combination of a total diet replacement with a high-protein diet may be a promising strategy for weight management. In fact, several high-protein total diet replacement products are widely available to consumers. The question is: do they work?

That's the core question addressed by the authors of "A High-Protein Total Diet Replacement Increases Energy Expenditure and Leads to Negative Fat Balance in Healthy, Normal-Weight Adults," published in The American Journal of Clinical Nutrition. In their study, the authors compared the impact of a high-protein total diet replacement to that of a control diet, a typical North American diet, on selected components of energy metabolism. Lead author, Camila Oliveira, a doctoral student at the University of Alberta, noted, "considering the prevalence of obesity worldwide and its impact on health, it's not surprising nutritional strategies such as total diet replacements and high-protein diets are becoming increasingly popular as weight management strategies; however, research around these topics has not kept pace with their growth in popularity."

In order to conduct their experiment, the authors recruited a group of healthy, normal-weight adults between the ages of 18 and 35 via advertisements placed on notice boards at the University of Alberta, Canada. Subjects were then randomly assigned into one of two groups: one group was fed the high-protein total diet replacement, which consisted of 35% carbohydrate, 40% protein, and 25% fat. The second group, the control group, was fed a diet with the same number of calories, but consisting of 55% carbohydrate, 15% protein, and 30% fat, a typical North American dietary pattern. Participants received the prescribed diets for a 32-hour period while inside a metabolic chamber.

Compared to the standard North American dietary pattern, the findings of this inpatient metabolic balance study revealed that the high-protein total diet replacement led to "higher energy expenditure, increased fat oxidation, and negative fat balance." In particular, the results of the study provide further evidence that a calorie is not just a calorie. That is, a diet with a higher proportion of protein might lead to an increase in energy expenditure and fat oxidation compared to a diet consisting of the same number of calories, but with a lower proportion of protein as well as a higher proportion of carbohydrate or fat.

Dr. Carla Prado, Professor, University of Alberta and the study's principal investigator, commented, "although these results are restricted to a specific population of healthy, normal-weight adults, they can help nutrition scientists and healthcare providers better understand the real physiological effects of a high-protein total diet replacement in humans. In our opinion, it is imperative to first understand the physiological impact of a high-protein total diet replacement in a healthy population group so that the effects are better translated in individuals with obesity and its related comorbidities."

In summary, the results of this study suggest that high-protein total diet replacements may be a promising nutritional strategy to combat rising rates of obesity. Lead author Camila Oliveira added, "future studies are needed to better understand the long-term effects of this dietary intervention on the physiology of both healthy and diseased population groups."

Credit: 
American Society for Nutrition

Antibiotic resistance genes in three Puerto Rican watersheds after Hurricane Maria

image: ARGs were lower in number and less diverse in this rural Puerto Rican watershed after Hurricane Maria than in an urban watershed.

Image: 
Maria Virginia Riquelme

In the aftermath of Hurricane Maria, a category 5 hurricane that made landfall in September 2017, flooding and power outages caused some wastewater treatment plants (WWTPs) to discharge raw sewage into waterways in Puerto Rico. Six months later, researchers monitored antibiotic resistance genes (ARGs) in three Puerto Rican watersheds, finding that the abundance and diversity of ARGs were highest downstream of WWTPs. They report their results in ACS' Environmental Science & Technology.

Flooding can result in contamination of waterways with untreated human waste, and in turn, fecal and pathogenic bacteria. Previous research has linked this contamination to the emergence of antibiotic-resistant bacteria. To help monitor the spread of these potentially harmful microbes in waterways, Benjamin Davis, Maria Virginia Riquelme, Amy Pruden and colleagues wanted to detect and quantify bacterial genes that confer antibiotic resistance in three Puerto Rican watersheds post-Hurricane Maria.

The researchers used a method called shotgun metagenomics DNA sequencing to detect ARGs in river water samples from three watersheds, including samples upstream and downstream of three WWTPs. The researchers found that the abundance and diversity of total ARGs, in particular those that confer resistance to clinically important aminoglycoside and β-lactam antibiotics, were higher downstream of WWTPs compared with upstream. The total ARG abundance was higher in samples from an urban high-impact watershed than from the two other watersheds, which had less human influence. Also, two anthropogenic antibiotic resistance markers -- DNA sequences associated with human impacts to the watershed -- correlated with the abundance of a distinct set of ARGs. Although baseline levels of ARGs in these Puerto Rican watersheds prior to Hurricane Maria are unknown, surveillance methodologies like these could be used to assess future impacts of major storms on the spread of antibiotic resistance, the researchers say.

Credit: 
American Chemical Society

How fishermen have adapted to change over the past 35+ years

image: An analysis published in Fish and Fisheries notes that marine fisheries are increasingly exposed to external drivers of social and ecological change, and recent changes have had different impacts upon the livelihood strategies favored by fishermen based on the size of their boats.

Image: 
American Albacore Fishing Association

An analysis published in Fish and Fisheries notes that marine fisheries are increasingly exposed to external drivers of social and ecological change, and recent changes have had different impacts upon the livelihood strategies favored by fishermen based on the size of their boats.

The analysis describes changes among Pacific Northwest fishermen over 35+ years, with a focus on the albacore troll and pole-and-line fishery. In describing different trajectories associated with the albacore fishery, one of the U.S. West Coast's last open access fisheries, the authors highlight the diverse strategies used to sustain fishing livelihoods in the modern era. They argue that alternative approaches to management and licensing may be needed to maintain the viability of small-scale fishing operations worldwide.

"While resource managers have traditionally focused on maximizing economic returns one species at a time, new approaches that prioritize diversity and flexibility may be required to help coastal communities navigate the uncertainty associated with climate change and the globalization of seafood markets," said lead author Timothy H. Frawley, PhD, of NOAA Southwest Fisheries Science Center and the University of California Santa Cruz.

Credit: 
Wiley

Starved, stuffed and squandered: Consequences of decades of global nutrition transition

Just a handful of rice and beans - a part of our world is starved. Hawaiian pizza and ice cream - another part of our world is stuffed, throwing away food every day. This gap is likely to worsen, while food waste will increase and pressure on the environment will go up, a new study shows. Researchers from the Potsdam Institute for Climate Impact Research (PIK) assessed what would happen if the current nutrition transition, from scarce starch-based diets towards processed foods and animal products, continues - their calculations combine, for the first time, estimates for underweight and overweight, food composition and waste. The findings provide a startling look ahead: by 2050, more than 4 billion people could be overweight, 1.5 billion of them obese, while 500 million people continue to be underweight.

"If the observed nutrition transition continues, we will not achieve the United Nations goal of eradicating hunger worldwide," explains Benjamin Bodirsky from PIK, lead author of the study just published in Scientific Reports. "At the same time, our future will be characterized by overweight and obesity of mind-blowing magnitude." By 2050, 45 percent of the world's population could be overweight and 16 percent obese - compared to about 29 and 9 percent in 2010. This development is due to the insufficient global distribution of food as well as to the shift from scarcely processed plant-based diets towards unbalanced, affluent diets, where animal protein, sugar and fat displace whole grains and pulses.

And that's not all, as Bodirsky underlines: "The increasing waste of food and the rising consumption of animal protein mean that the environmental impact of our agricultural system will spiral out of control. Whether greenhouse gases, nitrogen pollution or deforestation: we are pushing the limits of our planet - and exceeding them."

Food systems as driver for greenhouse gas emissions

Crop and grazing land for food production cover about one third of the global land area; our food system is responsible for up to a third of global greenhouse gas emissions. The study projects that - if current trends continue - global food demand will increase by about 50% between 2010 and 2050, the demand for animal products like meat and milk will approximately double, a development that requires more and more land.

"Using the same area of land, we could produce much more plant-based food for humans than animal-based food," explains co-author Alexander Popp, head of PIK's Land Use Management Research Group. "To put it in a very simplistic way: If more people eat more meat, there's less plant-based food for the others - plus we need more land for food production which can lead to forests being cut down. And greenhouse gas emissions rise as a consequence of keeping more animals."

Global food demand: distribution and education are at the heart of the problem

The study provides a first-of-its-kind, consistent long-term overview of a continued global nutrition transition from 1965 to 2100, using an open-source model that forecasts how much of food demand can be attributed to factors like population growth, ageing, increasing height, growing body mass index, declining physical activity and increasing food waste. Co-author Prajal Pradhan from PIK explains: "There is enough food in the world - the problem is that the poorest people on our planet simply do not have the income to purchase it. And in rich countries, people don't feel the economic and environmental consequences of wasting food." But redistribution alone would not be sufficient, because both the poor and the rich eat poorly: there is a lack of knowledge about a healthy way of life and nutrition.

How to trigger an appetite for change?

"Unhealthy diets are the world's largest health risks," co-author Sabine Gabrysch, head of PIK's Research Department on Climate Resilience explains. "While many countries in Asia and Africa currently still struggle with undernutrition and associated health problems, they are increasingly also faced with overweight, and as a consequence, with a rising burden of diabetes, cardiovascular disease and cancer," she adds. The study could provide valuable orientation about the potential development pathway of different countries and regions. It could also support much-needed pro-active policies for a qualitative transition towards sustainable and healthy diets.

Sabine Gabrysch concludes: "We urgently need political measures to create an environment that promotes healthy eating habits. This could include binding regulations that limit the marketing of unhealthy snacks and promote sustainable and healthy meals in schools, hospitals and canteens. A stronger focus on nutrition education is also key, from early education in kindergarten to counseling by medical doctors and nurses. What we eat is of vital importance - both for our own health and that of our planet."

Credit: 
Potsdam Institute for Climate Impact Research (PIK)

Predicting urban water needs

The gateway to more informed water use and better urban planning in your city could already be bookmarked on your computer. A new Stanford University study identifies residential water use and conservation trends by analyzing housing information available from the prominent real estate website Zillow.

The research, published Nov. 18 in Environmental Research Letters, is the first to demonstrate how new real estate data platforms can be used to provide valuable water use insights for city housing and infrastructure planning, drought management and sustainability.

"Evolving development patterns can hold the key to our success in becoming more water-wise and building long-term water security," said study senior author Newsha Ajami, director of urban water policy at Stanford's Water in the West program. "Creating water-resilient cities under a changing climate is closely tied to how we can become more efficient in the way we use water as our population grows."

It's estimated that up to 68 percent of the world's population will reside in urban or suburban areas by 2050. While city growth is a consistent trend, the types of residential dwellings being constructed and neighborhood configurations are less uniform, leading to varying ways in which people use water inside and outside their homes. The people living within these communities also have different water use behaviors based on factors such as age, ethnicity, education and income. However, when planning for infrastructure changes, decision-makers only take population, economic growth and budget into account, resulting in an incomplete picture of future demand. This, in turn, can lead to infrastructure changes, such as replacing old pipes, developing additional water supply sources or building wastewater treatment facilities, that fail to meet community needs.

Zillow and other real estate websites gather and publish records collected from different county and municipal agencies. These websites can also be updated by homeowners, making them rich sources of information that can otherwise be difficult and time-consuming to obtain. The Stanford researchers used data from Zillow to gather single-family home information, including lot size, home value and number of rooms, in Redwood City, California, a fast-growing, economically diverse city with varied styles of houses, lots and neighborhoods. Then they pulled U.S. Census Bureau demographic information for the city, looking at factors including average household size and income, along with the percentages of renters, non-families, college-educated residents and seniors.

Coupling the Zillow and census data and then applying machine learning methods, the researchers identified five community groupings, or clusters. They then compared each group's billing data from the city's public works department to identify water usage trends and seasonal patterns from 2007 to 2017, as well as conservation rates during California's historic drought from 2014 to 2017.
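The paper itself does not publish code, but as a rough illustration of this kind of workflow, the sketch below clusters a hypothetical table of parcel-plus-census features with k-means. The file name, column names, choice of k-means and k = 5 are all assumptions for illustration, not the authors' actual pipeline.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Illustrative sketch only: the input file, column names and the choice of
# k-means with five clusters are assumptions, not the study's actual method.
# Each row would represent one single-family home joined to census-tract data.
homes = pd.read_csv("redwood_city_homes.csv")   # hypothetical input file
features = homes[[
    "lot_size_sqft", "home_value_usd", "num_rooms",       # parcel attributes (Zillow-style)
    "median_income", "avg_household_size", "pct_renters"  # tract attributes (census-style)
]]

X = StandardScaler().fit_transform(features)     # put features on a common scale
homes["cluster"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Summarize each cluster, e.g. to compare later against per-cluster water billing records.
print(homes.groupby("cluster")[features.columns].mean())
```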

"With our methods incorporating Zillow data we were able to develop more accurate community groupings beyond simply clustering customers based on income and other socioeconomic qualities. This more granular view resulted in some unexpected findings and provided better insight into water-efficient communities," said lead author Kim Quesnel, a postdoctoral scholar at the Bill Lane Center for the American West while performing the research.

They found that the two lowest-income groups had average water use despite having more people living in each household. The middle-income group had high outdoor water use but ranked low in winter water use, signaling efficient indoor water appliances such as low-flow, high-efficiency faucets and toilets; this makes them an ideal target for outdoor conservation measures such as converting green spaces or upgrading to weather-based or smart irrigation controllers.

The two highest income groups, characterized by highly educated homeowners living in comparatively larger homes, were the most dissimilar. One cluster - younger residents on smaller lots with newer homes in dense, compact developments - had the lowest water use of the entire city. The other high-income cluster consisting of older houses built on larger lots with fewer people turned out to be the biggest water consumer. The finding goes against most previous research linking income and water use, and suggests that changing how communities are built and developed can also change water use patterns, even for the most affluent customers.

All groups showed high rates of water conservation during drought. Groups with the highest amount of savings (up to 37 percent during peak drought awareness) were the two thirstiest consumers (the high-income, large-lot and middle-income groups) demonstrating high potential for outdoor water conservation. Groups with lower normal water usage were also able to cut back, but were more limited in their savings. Understanding these limitations could inform how policymakers and city planners target customers when implementing water restrictions or offering incentives such as rebates during drought.

This research lays the groundwork for integrating big data into urban planning, providing more accurate water use expectations for different community configurations. Further studies could examine how data from emerging online real estate platforms can be used to develop neighborhood water use classifications across city, county or even state lines. An additional area of interest for the researchers is how water consumption is linked to development patterns in other kinds of residential areas, for example in dense cities.

"Emerging, accessible data sources are giving us a chance to develop a more informed understanding of water use patterns and behaviors," said Ajami. "If we rethink the way we build future cities and design infrastructure, we have the opportunity for more equitable and affordable access to water across various communities."

Credit: 
Stanford University

Fostering creativity in researchers: how automation can revolutionize materials research

image: CASH, which combines machine learning, robotics, and big data, demonstrates tremendous potential in materials science. It is only through coevolution with such technologies that future researchers can work on more creative research, accelerating materials science.

Image: 
Tokyo Tech

At the heart of many past scientific breakthroughs lies the discovery of novel materials. However, the cycle of synthesizing, testing, and optimizing new materials routinely takes scientists long hours of hard work. Because of this, lots of potentially useful materials with exotic properties remain undiscovered. But what if we could automate the entire novel material development process using robotics and artificial intelligence, making it much faster?

In a recent study published in APL Materials, scientists from Tokyo Institute of Technology (Tokyo Tech), Japan, led by Associate Professor Ryota Shimizu and Professor Taro Hitosugi, devised a strategy that could make fully autonomous materials research a reality. Their work is centered around the revolutionary idea of laboratory equipment that is 'CASH' (Connected, Autonomous, Shared, High-throughput). With a CASH setup in a materials laboratory, researchers need only decide which material properties they want to optimize and feed the system the necessary ingredients; the automated system then takes control and repeatedly prepares and tests new compounds until the best one is found. Using machine learning algorithms, the system employs previous results to decide how synthesis conditions should be changed to approach the desired outcome in each cycle.

To demonstrate that CASH is a feasible strategy in solid-state materials research, Associate Prof Shimizu and his team created a proof-of-concept system comprising a robotic arm surrounded by several modules. Their setup was geared toward minimizing the electrical resistance of a titanium dioxide thin film by adjusting the deposition conditions; accordingly, the modules were a sputter deposition apparatus and a device for measuring resistance. The robotic arm transferred the samples from module to module as needed, and the system autonomously predicted the synthesis parameters for the next iteration based on previous data. For the prediction, it used the Bayesian optimization algorithm.
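As a rough illustration of such a closed loop (not the team's actual code), the sketch below runs Bayesian optimization over two hypothetical deposition parameters using a Gaussian-process surrogate. The parameter names and ranges, the stand-in measurement function and the acquisition rule are all assumptions made for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Illustrative sketch only: parameter names/ranges and the acquisition rule
# (lower confidence bound) are assumptions, not the CASH system's actual setup.
rng = np.random.default_rng(0)

def measure_resistance(params):
    """Placeholder for the robotic deposit-and-measure cycle. In a real system
    this would trigger sputter deposition with the given conditions and return
    the measured film resistance; here it is a made-up noisy test function."""
    pressure, temperature = params
    return (pressure - 0.3) ** 2 + (temperature - 450.0) ** 2 / 1e4 + 0.01 * rng.standard_normal()

bounds = np.array([[0.1, 1.0],      # hypothetical sputtering pressure (Pa)
                   [300.0, 700.0]]) # hypothetical substrate temperature (K)

# Start from a few random conditions, then iterate: fit surrogate, pick next point.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))
y = np.array([measure_resistance(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):
    gp.fit(X, y)
    candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    mean, std = gp.predict(candidates, return_std=True)
    next_x = candidates[np.argmin(mean - 1.0 * std)]   # lower confidence bound
    X = np.vstack([X, next_x])
    y = np.append(y, measure_resistance(next_x))

best = X[np.argmin(y)]
print("lowest resistance found at:", best, "value:", y.min())
```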

Amazingly, their CASH setup managed to produce and test about twelve samples per day, a tenfold increase in throughput compared to what scientists can manually achieve in a conventional laboratory. In addition to this significant increase in speed, one of the main advantages of the CASH strategy is the possibility of creating huge shared databases describing how material properties vary according to synthesis conditions. In this regard, Prof Hitosugi remarks: "Today, databases of substances and their properties remain incomplete. With the CASH approach, we could easily complete them and then discover hidden material properties, leading to the discovery of new laws of physics and resulting in insights through statistical analysis."

The research team believes that the CASH approach will bring about a revolution in materials science. Databases generated quickly and effortlessly by CASH systems will be combined into big data and scientists will use advanced algorithms to process them and extract human-understandable expressions. However, as Prof Hitosugi notes, machine learning and robotics alone cannot find insights nor discover concepts in physics and chemistry. "The training of future materials scientists must evolve; they will need to understand what machine learning can solve and set the problem accordingly. The strength of human researchers lies in creating concepts or identifying problems in society. Combining those strengths with machine learning and robotics is very important," he says.

Overall, this perspective article highlights the tremendous benefits that automation could bring to materials science. If the weight of repetitive tasks is lifted off the shoulders of researchers, they will be able to focus more on uncovering the secrets of the material world for the benefit of humanity.

Credit: 
Tokyo Institute of Technology

Antibiotic resistance surveillance tools in Puerto Rican watersheds after Hurricane Maria

image: Amy Pruden, Virginia Tech researcher.

Image: 
Virginia Tech

When Hurricane Maria made landfall, devastating Dominica, St. Croix, and Puerto Rico in September 2017, flooding and power outages wreaked havoc on the debilitated land, resulting in the contamination of waterways with untreated human waste and pathogenic microorganisms.

Six months after the deadly Category 5 hurricane, Virginia Tech civil and environmental engineering Professor Amy Pruden led a team of Virginia Tech researchers, including Maria Virginia Riquelme and William Rhoads, then post-doctoral researchers, who packed their bags and lab supplies and headed to Puerto Rico.

The island territory of the United States located in the northeast of the Caribbean Sea had been devastated, plunging its 3.4 million inhabitants into crisis. The mass destruction presented a critical opportunity for the researchers to study how wastewater infrastructure damage might contribute to the spread of antibiotic resistance -- a growing global public health threat.

In a study published in the American Chemical Society's journal Environmental Science & Technology, Virginia Tech researchers and international collaborators have further developed an innovative antibiotic resistance surveillance approach by applying DNA sequencing techniques to detect the spread of antibiotic resistance in watersheds impacted by large-scale storms.

"This study is a critical step toward establishing a unified and comprehensive surveillance approach for antibiotic resistance in watersheds," said Pruden, the W. Thomas Rice Professor of Civil and Environmental Engineering. "Ideally, it can be applied as a baseline to track disturbances and public health concerns associated with future storms."

Over the past decade, Pruden, a microbiologist and environmental engineer, has worked with her students using next-generation DNA sequencing, a specialty of Pruden's, to examine Legionella strains as they operate before, during, after, and outside of Legionnaires' disease outbreaks in various towns and cities across the country, including Flint, Michigan.

With RAPID funding from the National Science Foundation and collaborating with principal investigator Christina Bandoragoda, research scientist at the University of Washington with expertise in watershed modeling and geospatial analysis, Virginia Tech researchers teamed up with Graciela Ramirez Toro, professor and director of the Centro de Educación, Conservación e Interpretación Ambiental, and her research group at the local Interamerican University in San German, Puerto Rico. Together, they identified three sampling sites in watersheds with distinct land-use patterns and levels of wastewater input that were ideal for tracking down geospatial patterns in occurrence of bacterial genes that cause antibiotic resistance.

Pruden's doctoral student and first author of the paper Benjamin Davis used a method called shotgun metagenomic DNA sequencing to detect antibiotic resistance genes in river water samples from three watersheds, including samples collected by hiking to far upstream pristine reaches of the watersheds and downstream of three wastewater treatment plants. Metagenomics is the study of genetic material recovered directly from environmental samples.

Analysis of the data revealed that two anthropogenic antibiotic resistance markers -- DNA sequences associated with human impacts to the watershed -- correlated with a distinct set of antibiotic resistance genes, relative to those that correlated specifically with human fecal markers.

A clear demarcation of wastewater treatment plant influence on the antibiotic resistance gene profiles was apparent: levels were elevated downstream of the plants, with a high diversity of genes conferring resistance to clinically important antibiotics, such as beta-lactams and aminoglycosides, in the watershed samples. Some of the beta-lactam resistance genes detected were associated with deadly antibiotic-resistant infections in the region and showed evidence of being able to jump across bacterial strains. Beta-lactam resistance genes were also more accurately predicted by the anthropogenic antibiotic resistance markers than by human fecal markers.

Although baseline levels of antibiotic resistance genes in Puerto Rican watersheds prior to Hurricane Maria are unknown, surveillance methodologies like these could be used to assess future impacts of major storms on the spread of antibiotic resistance, the researchers said.

Many international communities will likely not have access to sophisticated metagenomic-based monitoring tools in the near future, but the identification of single gene targets, such as the anthropogenic antibiotic resistance markers, make watershed surveillance of antibiotic resistance much more accessible. And such genes can be quantified directly by quantitative polymerase chain reaction, yielding cost-effective, rapid results in less than a day.

Credit: 
Virginia Tech

Faster detection of photocatalyst-generated oxygen has big implications for clean energy

image: Detecting the oxygen (O2) generated from artificial photosynthesis using a microelectrode.

Image: 
(TOC graphic from the ACS Catalysis journal paper).

Currently, the majority of energy consumed by the world's population is derived from oil and other non-renewable resources, which are in danger of running out in the near future. Consequently, the development of artificial photosynthesis methods that use photocatalysts to produce chemical energy (hydrogen fuel) from sunlight and water has received much attention, and various research projects are being conducted in this area.

During artificial photosynthesis, oxygen (O2) is produced by the photocatalyst via the water splitting reaction. Working with researchers from Kanazawa University, Shinshu University and The University of Tokyo, Professor ONISHI Hiroshi et al. of Kobe University's Graduate School of Science developed a measurement evaluation method which is able to detect O2 1000 times faster than conventional methods. It is hoped that the method developed through this research can be utilized to improve our understanding of the reaction mechanisms behind artificial photosynthesis and contribute towards developing photocatalysts that could be implemented in the real world.

In recognition of the importance of making these research results public as soon as possible, the paper, published in the American Chemical Society's journal ACS Catalysis, was given an advance online release on October 29, 2020.

Research Background

Artificial photosynthesis, which can be utilized to produce chemical energy (hydrogen fuel) from sunlight and water has received much attention for its potential to provide an energy source that does not emit CO2. Photocatalysts are the key component of artificial photosynthesis. The first photocatalyst material was discovered and developed by Japanese researchers in the 1970s, and scientists around the world have continuously strived to improve their efficiency over the last 50 years.

The current research study used a strontium titanate (SrTiO3) photocatalyst, which was originally developed by Special Contract Professor DOMEN Kazunari et al. of Shinshu University (a contributing researcher to this study). As a result of various improvements made by Shinshu's Associate Professor HISATOMI Takashi et al. (also a contributing researcher), this photocatalytic material achieved the highest reaction yield (i.e. the efficiency of hydrogen conversion from water via illumination by ultraviolet light) in the world. The final remaining issue is improving the efficiency of hydrogen generation from water and sunlight, instead of artificial ultraviolet light. Overcoming this issue would mean the birth of CO2-free hydrogen fuel producing technology that can be utilized by society.

However, one factor that hinders efforts to improve conversion efficiency is the low rate of oxygen produced from the water when hydrogen is also being produced. In order to generate hydrogen (H2) from water (H2O) via artificial photosynthesis, the following chemical reaction has to take place: 2H2O -> 2H2 + O2. Even though the goal is to produce hydrogen (that can be utilized as a fuel by society) and not oxygen, the principles of chemistry require oxygen to be produced from the water at the same time in order for hydrogen to be produced.

Furthermore, the process of generating oxygen is more complicated than the process of generating hydrogen (the oxygen atoms taken from two H2O molecules must bond to one another), which makes it difficult to improve the efficiency of the reaction. This is a bottleneck that limits the efficient conversion of hydrogen from water using sunlight.

A solution would be to improve the efficiency of oxygen conversion from water, however this is no simple matter. It is not well understood how oxygen is generated from water (i.e. the mechanism behind the reaction), therefore trying to improve this reaction is akin to working in the dark. In order to shed light on the situation, this research aimed to develop a high speed detection method to observe the oxygen generated by artificial photosynthesis to reveal the mechanism behind the water to oxygen reaction.

Research Methodology

This research study utilized, as its underlying technology, a method of underwater chemical analysis using microelectrodes developed by Kanazawa University's Professor TAKAHASHI Yasufumi et al. (contributing researcher). The oxygen generated by the artificial photosynthesis photocatalyst was detected as it merged back into the water. As shown in Figure 1, the strontium titanate photocatalyst panel was submerged in water. The microelectrode, which consisted of a 20 micrometer platinum wire (about ¼ the width of a human hair) with its sides coated in glass, was lowered into the water to a position 100 micrometers away from the surface of the photocatalyst panel.

When the photocatalyst panel was illuminated by ultraviolet light (with a wavelength of 280nm) from a light-emitting diode, oxygen (O2) and hydrogen (H2) were dissociated from the water where it made contact with the panel. These oxygen and hydrogen molecules were subsequently released into the water. The released oxygen diffused through the water and reached the microelectrode. The oxygen that reached the microelectrode received 4 electrons (e-) from the electrode, resulting in the following transformation: O2 + 2H2O + 4e- -> 4OH-.

The number of electrons received from the electrode by the oxygen can be determined by measuring the electric current that passes through the electrode. Measuring this current every 0.1 seconds enabled the researchers to calculate the amount of oxygen reaching the electrode every 0.1 seconds. Gas chromatography, the analytical technique used for oxygen detection up until now, can only measure the amount of oxygen once every 3 minutes. This study therefore succeeded in developing a detection method that is 1000 times faster.
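As a back-of-the-envelope illustration of that conversion (not taken from the paper; the example current is a made-up value), dividing the charge collected in each 0.1-second interval by four times the Faraday constant gives the moles of O2 reduced at the electrode:

```python
# Back-of-the-envelope conversion of measured electrode current to O2 detected.
# The example current below is a made-up illustrative value, not data from the paper.
FARADAY = 96485.0          # coulombs per mole of electrons
ELECTRONS_PER_O2 = 4       # from the reaction O2 + 2H2O + 4e- -> 4OH-

current = 2.0e-9           # hypothetical measured current in amperes (2 nA)
interval = 0.1             # sampling interval in seconds

charge = current * interval                       # coulombs collected in 0.1 s
moles_o2 = charge / (ELECTRONS_PER_O2 * FARADAY)  # moles of O2 reduced at the electrode
print(f"{moles_o2:.2e} mol of O2 detected per 0.1 s interval")  # ~5e-16 mol here
```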

Calculating the time required for the oxygen to travel the 100 micrometer distance through the water from the photocatalyst panel to the electrode is not difficult. This can be achieved by conducting numerical simulations on a desktop computer, based on Fick's laws of diffusion. Comparing the measurement results obtained from the microelectrode with those of the simulation revealed that there was a 1 to 2 second delay between the photocatalyst panel being illuminated by UV light and the oxygen being released into the water. This delay is a new phenomenon that couldn't be observed via gas-chromatographic detection.
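A minimal sketch of such an estimate is shown below, using an explicit finite-difference solution of the one-dimensional diffusion equation. The diffusion coefficient is a typical literature value for O2 in water at room temperature, and the geometry and boundary conditions are simplifications; the paper's actual simulation setup may differ.

```python
import numpy as np

# Minimal 1D sketch of Fick's-law diffusion from the photocatalyst surface (x = 0)
# to an electrode 100 micrometers away. The diffusivity is a typical literature
# value for O2 in water; the geometry and boundary conditions are simplified.
D = 2.0e-9               # m^2/s, approximate diffusivity of O2 in water near 25 C
L = 200e-6               # simulate a 200 micrometer water column
nx = 201
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D   # satisfies the explicit-scheme stability limit dt < dx^2 / (2D)

c = np.zeros(nx)
c[0] = 1.0                           # constant (normalized) O2 concentration at the surface
electrode = int(round(100e-6 / dx))  # grid index at 100 micrometers

t = 0.0
while c[electrode] < 0.1:            # time until 10% of the surface concentration arrives
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0           # fixed source at the panel, sink far away
    t += dt
print(f"O2 reaches the electrode after roughly {t:.1f} s")  # on the order of one second
```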

It is believed that this delay is a necessary preparatory stage for the illuminated photocatalyst to commence water-splitting. Future research will seek to verify this hypothesis, in addition to investigating what the photocatalyst is doing during the preparatory stage. Nevertheless, it is expected that the oxygen detection method developed in this study, which is 1000 times faster than previous detection methods, will lead to new developments in artificial photosynthesis.

Researcher Comment (Professor Onishi Hiroshi, Graduate School of Science, Kobe University)

I am a physical chemistry specialist, and the idea to detect the oxygen generated via artificial photosynthesis using a microelectrode came to me in 2015. At Kobe University, we set up the measuring apparatus developed by Professor Takahashi et al., who are experts in chemical analysis using microelectrodes, and began to apply it to photocatalysts.

By improving the apparatus and accumulating know-how regarding its operation, we verified that this method is able to measure the oxygen generated from the photocatalyst panel provided by Professor Domen and Associate Professor Hisatomi et al., who are authorities on photocatalyst research.

In addition, three graduate students at Kobe University's Graduate School of Science were at the forefront of this research over the five-year period spanning from the development of the computer program for the numerical simulation to the discovery of the 'oxygen release delay'.

The three teams brought the distinct features of their respective fields of physical chemistry, analytical chemistry and catalyst chemistry to the development of this research. Through this collaboration, we succeeded in contributing a new perspective to the science of artificial photosynthesis.

Credit: 
Kobe University

Parental restrictions on tech use have little lasting effect into adulthood

"Put your phone away!" "No more video games!" "Ten more minutes of YouTube and you're done!"

Kids growing up in the mobile internet era have heard them all, often uttered by well-meaning parents fearing long-term problems from overuse.

But new University of Colorado Boulder research suggests such restrictions have little effect on technology use later in life, and that fears of widespread and long-lasting "tech addiction" may be overblown.

"Are lots of people getting addicted to tech as teenagers and staying addicted as young adults? The answer from our research is 'no'," said lead author Stefanie Mollborn, a professor of sociology at the Institute of Behavioral Science. "We found that there is only a weak relationship between early technology use and later technology use, and what we do as parents matters less than most of us believe it will."

The study, which analyzes a survey of nearly 1,200 young adults plus extensive interviews with another 56, is the first to use such data to examine how digital technology use evolves from childhood to adulthood.

The data were gathered prior to the pandemic, which has resulted in dramatic increases in the use of technology as millions of students have been forced to attend school and socialize online. But the authors say the findings should come as some comfort to parents worried about all that extra screen time.

"This research addresses the moral panic about technology that we so often see," said Joshua Goode, a doctoral student in sociology and co-author of the paper. "Many of those fears were anecdotal, but now that we have some data, they aren't bearing out."

Published in Advances in Life Course Research, the paper is part of a four-year National Science Foundation-funded project exploring how the mobile internet age is truly shaping America's youth.

Since 1997, time spent with digital technology has risen 32% among 2- to 5-year-olds and 23% among 6- to 11-year-olds, the team's previous papers found. Even before the pandemic, adolescents spent 33 hours per week using digital technology outside of school.

For the latest study, the research team shed light on young adults ages 18 to 30, interviewing dozens of people about their current technology use, their tech use as teens and how their parents or guardians restricted or encouraged it. The researchers also analyzed survey data from a nationally representative sample of nearly 1,200 participants, following the same people from adolescence to young adulthood.

Surprisingly, parenting practices like setting time limits or prohibiting kids from watching shows during mealtimes had no effect on how much the study subjects used technology as young adults, researchers found.

Those study subjects who grew up with fewer devices in the home or spent less time using technology as kids tended to spend slightly less time with tech in young adulthood - but statistically, the relationship was weak.

What does shape how much time young adults spend on technology? Life in young adulthood, the research suggests.

Young adults who hang out with a lot of people who are parents spend more time with tech (perhaps as a means of sharing parenting advice). Those whose friends are single tend toward higher use than the married crowd. College students, meantime, tend to believe they spend more time with technology than they ever have before or ever plan to again, the study found.

"They feel like they are using tech a lot because they have to, they have it under control and they see a future when they can use less of it," said Mollborn.

From the dawn of comic books and silent movies to the birth of radio and TV, technological innovation has bred moral panic among older generations, the authors note.

"We see that everyone is drawn to it, we get scared and we assume it is going to ruin today's youth," said Mollborn.

In some cases, excess can have downsides. For instance, the researchers found that adolescents who play a lot of video games tend to get less physical activity.

But digital technology use does not appear to crowd out sleep among teens, as some had feared, and use of social media or online videos doesn't squeeze out exercise.

In many ways, Goode notes, teens today are just swapping one form of tech for another, streaming YouTube instead of watching TV, or texting instead of talking on the phone.

That is not to say that no one ever gets addicted, or that parents should never set limits or talk to their kids about technology's pros and cons, Mollborn stresses.

"What these data suggest is that the majority of American teens are not becoming irrevocably addicted to technology. It is a message of hope."

She recently launched a new study, interviewing teens and parents in the age of COVID-19. Interestingly, she said, parents seem less worried about their kids' tech use during the pandemic than they were in the past.

"They realize that kids need social interaction and the only way to get that right now is through screens. Many of them are saying, 'Where would we be right now without technology?'"

Credit: 
University of Colorado at Boulder

UT researchers establish proof of principle in superconductor study

Three physicists in the Department of Physics and Astronomy at the University of Tennessee, Knoxville, together with their colleagues from the Southern University of Science and Technology and Sun Yat-sen University in China, have successfully modified a semiconductor to create a superconductor.

Professor and Department Head Hanno Weitering, Associate Professor Steve Johnston, and PhD candidate Tyler Smith were part of the team that made the breakthrough in fundamental research, which may lead to unforeseen advancements in technology.

Semiconductors are materials that normally act as electrical insulators but conduct electrical current under special circumstances. They are an essential component in many of the electronic circuits used in everyday items, including mobile phones, digital cameras, televisions, and computers.

As technology has progressed, so has the development of semiconductors, allowing the fabrication of electronic devices that are smaller, faster, and more reliable.

Superconductors, first discovered in 1911, allow electrical charges to move without resistance, so current flows without any energy loss. Although scientists are still exploring practical applications, superconductors are currently used most widely in MRI machines.

Working with a silicon semiconductor platform, the standard for nearly all electronic devices, Weitering and his colleagues used tin to create the superconductor.

"When you have a superconductor and you integrate it with a semiconductor, there are also new types of electronic devices that you can make," Weitering stated.

Superconductors are typically discovered by accident. The development of this novel superconductor, by contrast, is the first instance of an atomically thin superconductor being intentionally created on a conventional semiconductor template, exploiting the knowledge base of high-temperature superconductivity in doped 'Mott insulating' copper oxide materials.

"The entire approach--doping a Mott insulator, the tin on silicon--was a deliberate strategy. Then came proving we're seeing the properties of a doped Mott insulator as opposed to anything else and ruling out other interpretations. The next logical step was demonstrating superconductivity, and lo and behold, it worked," Weitering said.

"Discovery of new knowledge is a core mission of UT," Weitering stated. "Although we don't have an immediate application for our superconductor, we have established a proof of principle, which may lead to future practical applications."

Credit: 
University of Tennessee at Knoxville