Tech

Cancer treatments may accelerate cellular aging

New research indicates that certain anti-cancer therapies may hasten cellular aging, with changes in patients' DNA that may contribute to greater inflammation and fatigue. The findings are published by Wiley early online in CANCER, a peer-reviewed journal of the American Cancer Society.

Gene activity is often adjusted during life through epigenetic changes, or physical modifications to DNA that do not involve altering the underlying DNA sequence. Some individuals may experience epigenetic age acceleration (EAA) that puts them at a higher risk of age-related conditions than other individuals of the same chronological age. Investigators recently examined EAA changes during and following cancer treatment, and they looked for a potential link between these changes and fatigue in patients with head and neck cancer (HNC).

In the study of 133 patients with HNC, half of the patients experienced severe fatigue at some point. EAA was most prominent immediately after radiation therapy, when the average epigenetic age was accelerated by 4.9 years. Increased EAA was associated with elevated fatigue, and patients with severe fatigue experienced 3.1 years higher EAA than those with low fatigue. Also, patients with high levels of markers of inflammation exhibited approximately 5 years higher EAA, and inflammation appeared to account for most of the effects of EAA on fatigue.
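
Epigenetic age acceleration is commonly computed as the residual of an epigenetic (DNA methylation) age estimate regressed on chronological age; the specific clock and model used in this study are not detailed here, so the sketch below is only a generic illustration with made-up ages.

```python
import numpy as np

# Minimal sketch (not the study's exact method): epigenetic age acceleration (EAA)
# is commonly taken as the residual of epigenetic age regressed on chronological age.
# The ages below are made-up illustrative values.
chrono_age = np.array([52.0, 61.0, 58.0, 67.0, 49.0])      # years
epigenetic_age = np.array([55.5, 63.0, 64.2, 70.1, 50.3])  # e.g. from a methylation clock

# Fit epigenetic age as a linear function of chronological age.
slope, intercept = np.polyfit(chrono_age, epigenetic_age, 1)
predicted = slope * chrono_age + intercept

# Positive residuals indicate accelerated epigenetic aging relative to peers.
eaa = epigenetic_age - predicted
print(np.round(eaa, 2))
```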

"Our findings add to the body of evidence suggesting that long-term toxicity and possibly increased mortality incurred from anti-cancer treatments for patients with HNC may be related to increased EAA and its association with inflammation," said lead author Canhua Xiao, PhD, RN, FAAN, of the Emory University School of Nursing, in Atlanta. "Future studies could examine the vulnerabilities that may account for sustained high EAA, fatigue, and inflammation among patients."

The authors noted that interventions to reduce inflammation, including prior to cancer treatment, might benefit patients by decelerating the aging process and subsequently reducing age-related chronic health problems such as fatigue.

An accompanying editorial stresses that chronic fatigue in patients receiving treatment for cancer is not just a symptom; it may also play an important role in influencing patients' health.

Credit: 
Wiley

Risk of second stroke can be reduced with prevention efforts based on cause of first stroke

DALLAS, May 24, 2021 -- Having a stroke or a transient ischemic attack (TIA), sometimes called a "mini-stroke," increases the risk for a stroke in the future. Identifying the cause of the stroke or TIA can lead to specific prevention strategies to reduce the risk of additional strokes, according to an updated guideline from the American Heart Association/American Stroke Association. The guideline is published today in Stroke, a journal of the American Stroke Association, a division of the American Heart Association.

Ischemic strokes account for 87% of strokes in the United States. An ischemic stroke occurs when blood flow in a vessel leading to the brain is blocked by either clots or plaques. Strokes can lead to serious disability and/or death. A transient ischemic attack, commonly referred to as a TIA, occurs when an artery is blocked for a short amount of time; the blockage is transient (temporary) and does not cause permanent brain injury.

As prevention strategies have improved, studies have noted a reduction in recurrent stroke rates from 8.7% in the 1960s to 5.0% in the 2000s. Yet many risk factors for a second stroke remain poorly managed among stroke survivors.

A new recommendation of the "2021 Guideline for the Prevention of Stroke in Patients With Stroke and Transient Ischemic Attack" is for health care professionals to perform diagnostic evaluations to determine the cause of the first stroke or TIA within 48 hours of symptom onset. The guideline includes a section outlining treatment recommendations based on the cause of the initial stroke/TIA. Underlying causes include blockages in large arteries in the neck or brain, damage to small arteries in the brain from high blood pressure or diabetes, irregular heart rhythms, and many other potential factors.

"It is critically important to understand the best ways to prevent another stroke once someone has had a stroke or a TIA," said Dawn O. Kleindorfer, M.D., FAHA, chair of the guideline writing group, and professor and chair of the department of neurology at the University of Michigan School of Medicine in Ann Arbor, Michigan. "If we can pinpoint the cause of the first stroke or TIA, we can tailor strategies to prevent a second stroke."

For patients who have survived a stroke or TIA, the secondary prevention guidelines recommend:

Managing their vascular risk factors, especially high blood pressure, as well as Type 2 diabetes, cholesterol and triglyceride levels, and not smoking.

Limiting salt intake and/or following a Mediterranean diet.

If they are capable of physical activity, engaging in moderate-intensity aerobic activity for at least 10 minutes four times a week or vigorous-intensity aerobic activity for at least 20 minutes twice a week.

"In fact, approximately 80% of strokes can be prevented by controlling blood pressure, eating a healthy diet, engaging in regular physical activity, not smoking and maintaining a healthy weight," said Amytis Towfighi, M.D., FAHA, vice-chair of the guideline writing group, and director of neurological services at the Los Angeles County Department of Health Services.

For health care professionals, the updated treatment recommendations highlighted in the guideline include:

Using multidisciplinary care teams to personalize care for patients and employing shared decision-making with the patient to develop care plans that incorporate a patient's wishes, goals and concerns.

Screening for and diagnosing atrial fibrillation (an irregular heart rhythm) and starting blood-thinning medications to reduce recurrent events.

Prescribing antithrombotic therapy, including antiplatelet medications (blood thinners) or anticoagulant medications (to prevent blood clotting), for nearly all patients who don't have contraindications. However, the combination of antiplatelets and anticoagulation is typically not recommended for preventing second strokes, and dual antiplatelet therapy (taking aspirin along with a second medicine to prevent blood clotting) is recommended only short term and only for specific patients: those with early-arriving minor stroke and high-risk TIA or severe symptomatic stenosis.

Carotid endarterectomy (surgical removal of a blockage) or, in select cases, the use of a stent in the carotid artery should be considered for patients with narrowing of the arteries in the neck.

Aggressive medical management of risk factors and short-term dual anti-platelet therapy are preferred for patients with severe intracranial stenosis thought to cause the stroke or TIA.

In some patients, it is now reasonable to consider percutaneous closure (a less invasive, catheter-based procedure) of a patent foramen ovale, a small and fairly common heart defect.

The guideline is accompanied by a systematic review article, published simultaneously, "Benefits and Risks of Dual Versus Single Antiplatelet Therapy for Secondary Stroke Prevention." The review paper, chaired by Devin L. Brown, M.D., M.S., is a meta-analysis of three short-duration clinical trials on dual antiplatelet therapy (DAPT) and concludes DAPT may be appropriate for select patients. The review authors note: "Additional research is needed to determine: the optimal timing of starting treatment relative to the clinical event; the optimal duration of DAPT to maximize the risk-benefit ratio; whether additional populations excluded from POINT and CHANCE [two of the trials examined], such as those with major stroke, may also benefit from early DAPT; and whether certain genetic profiles eliminate the benefit of early DAPT."

"The secondary prevention of stroke guideline is one of the American Stroke Association's 'flagship' guidelines, last updated in 2014," Kleindorfer added. "There are also a number of changes to the writing and formatting of this guideline to make it easier for professionals to understand and locate information more quickly, ultimately greatly improving patient care and preventing more strokes in our patients."

Credit: 
American Heart Association

Accurate evaluation of CRISPR genome editing

CRISPR technology allows researchers to edit genomes by altering DNA sequences and thus modifying gene function. Its many potential applications include correcting genetic defects, treating and preventing the spread of diseases, and improving crops.

Genome editing tools, such as the CRISPR-Cas9 technology, can be engineered to make extremely well-defined alterations to the intended target on a chromosome where a particular gene or functional element is located. However, one potential complication is that CRISPR editing may lead to other, unintended genomic changes, known as off-target activity. When several different sites in the genome are targeted, off-target activity can lead to translocations (unusual rearrangements of chromosomes) as well as to other unintended genomic modifications.

Controlling off-target editing activity is one of the central challenges in making CRISPR-Cas9 technology accurate and applicable in medical practice. Current measurement assays and data analysis methods for quantifying off-target activity do not provide statistical evaluation, are not sufficiently sensitive in separating signal from noise in experiments with low editing rates, and require cumbersome efforts to address the detection of translocations.

A multidisciplinary team of researchers from the Interdisciplinary Center Herzliya and Bar-Ilan University report in the May 24th issue of the journal Nature Communications the development of a new software tool to detect, evaluate and quantify off-target editing activity, including adverse translocation events that can cause cancer. The software is based on input taken from a standard measurement assay, involving multiplexed PCR amplification and Next-Generation Sequencing (NGS).

Known as CRISPECTOR, the tool analyzes next-generation sequencing data obtained from CRISPR-Cas9 experiments and applies statistical modeling to determine and quantify editing activity. CRISPECTOR accurately measures off-target activity at every interrogated locus and achieves lower false-negative rates at sites with weak, yet significant, off-target activity. Importantly, one of the novel features of CRISPECTOR is its ability to detect adverse translocation events occurring in an editing experiment.
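
CRISPECTOR's statistical model is not described in detail here, but the general idea of separating true low-level editing from sequencing and PCR noise can be sketched by comparing a treated sample against a mock-treated control at the same locus. The read counts and the use of Fisher's exact test below are illustrative assumptions, not the tool's actual algorithm.

```python
from scipy.stats import fisher_exact

# Sketch of noise-aware off-target calling (not CRISPECTOR's actual model):
# compare edited-read counts at a candidate off-target site in the treated
# sample against a mock-treated control, so sequencing/PCR background is
# separated from true low-level editing. All counts are hypothetical.
treated_edited, treated_total = 18, 10_000   # reads carrying indels vs total reads
control_edited, control_total = 4, 10_000    # background errors in the mock sample

table = [[treated_edited, treated_total - treated_edited],
         [control_edited, control_total - control_edited]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")

if p_value < 0.05:
    print(f"Off-target activity detected above background (p = {p_value:.2e})")
else:
    print(f"No activity above background (p = {p_value:.2e})")
```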

"In genome editing, especially for clinical applications, it is critical to identify low level off-target activity and adverse translocation events. Even a small number of cells with carcinogenic potential, when transplanted into a patient in the context of gene therapy, can have detrimental consequences in terms of cancer pathogenesis. As part of treatment protocols, it is therefore important to detect these potential events in advance," says Dr. Ayal Hendel, of Bar-Ilan University's Mina and Everard Goodman Faculty of Life Sciences. Dr. Hendel led the study together with Prof. Zohar Yakhini, of the Arazi School of Computer Science at Interdisciplinary Center (IDC) Herzliya. "CRISPECTOR provides an effective method to characterize and quantify potential CRISPR-induced errors, thereby significantly improving the safety of future clinical use of genome editing." Hendel's team utilized CRISPR-Cas9 technology to edit genes in stem cells relevant to disorders of the blood and the immune system. In the process of analyzing the data they became aware of the shortcomings of the existing tools for quantifying off-target activity and of gaps that should be bridged to improve applicability. This experience led to the collaboration with Prof Yakhini's leading computational biology and bioinformatics group.

Prof. Zohar Yakhini, of IDC Herzliya and the Technion, adds that "in experiments utilizing deep sequencing techniques that have significant levels of background noise, low levels of true off-target activity can get lost under the noise. The need for a measurement approach and related data analysis that are capable of seeing beyond the noise, as well as of detecting adverse translocation events occurring in an editing experiment, is evident to genome editing scientists and practitioners. CRISPECTOR is a tool that can sift through the background noise to identify and quantify true off-target signal. Moreover, using statistical modelling and careful analysis of the data CRISPECTOR can also identify a wider spectrum of genomic aberrations. By characterizing and quantifying potential CRISPR-induced errors our methods will support the safer clinical use of genome editing therapeutic approaches."

Credit: 
Bar-Ilan University

Surge in nitrogen has turned sargassum into the world's largest harmful algal bloom

video: Sargassum, a floating brown seaweed, has grown in the low-nutrient waters of the North Atlantic Ocean for centuries. Scientists have discovered dramatic changes in the chemistry and composition of Sargassum, transforming this vibrant living organism into a toxic "dead zone."

Image: 
Brian Lapointe, Ph.D.

For centuries, pelagic Sargassum, a floating brown seaweed, has grown in the low-nutrient waters of the North Atlantic Ocean, supported by natural nutrient sources like excretions from fishes and invertebrates, upwelling and nitrogen fixation. Using a unique historical baseline from the 1980s and comparing it to samples collected since 2010, researchers from Florida Atlantic University's Harbor Branch Oceanographic Institute and collaborators have discovered dramatic changes in the chemistry and composition of Sargassum, transforming this vibrant living organism into a toxic "dead zone."

Their findings, published in Nature Communications, suggest that increased nitrogen availability from natural and anthropogenic sources, including sewage, is supporting blooms of Sargassum and turning a critical nursery habitat into harmful algal blooms with catastrophic impacts on coastal ecosystems, economies, and human health. Globally, harmful algal blooms are related to increased nutrient pollution.

The study, led by FAU Harbor Branch, in collaboration with the University of South Florida, Woods Hole Oceanographic Institution, the University of Southern Mississippi, and Florida State University, was designed to better understand the effects of nitrogen and phosphorus supply on Sargassum. Researchers used a baseline tissue data set of carbon (C), nitrogen (N) and phosphorus (P) and molar C:N:P ratios from the 1980s and compared them with more recent samples collected since 2010.

Results show that the percentage of tissue N increased significantly (35 percent) concurrent with a decrease in the percentage of phosphorus (42 percent) in Sargassum tissue from the 1980s to the 2010s. Elemental composition varied significantly over the long-term study, as did the C:N:P ratios. Notably, the biggest change was the nitrogen:phosphorus ratio (N:P), which increased significantly (111 percent). Carbon:phosphorus ratios (C:P) also increased similarly (78 percent).
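
The conversion from tissue elemental percentages to molar ratios follows directly from atomic weights; the sketch below uses hypothetical tissue values rather than the study's data, just to show how C:N:P ratios of this kind are derived.

```python
# Sketch of how molar C:N:P ratios are derived from tissue elemental percentages
# (illustrative numbers, not the study's data): divide each mass fraction by the
# element's atomic weight, then normalize.
ATOMIC_WEIGHT = {"C": 12.011, "N": 14.007, "P": 30.974}

def molar_ratios(pct_c, pct_n, pct_p):
    moles = {"C": pct_c / ATOMIC_WEIGHT["C"],
             "N": pct_n / ATOMIC_WEIGHT["N"],
             "P": pct_p / ATOMIC_WEIGHT["P"]}
    return moles["C"] / moles["P"], moles["N"] / moles["P"], moles["C"] / moles["N"]

# Hypothetical Sargassum tissue composition (% of dry weight).
c_to_p, n_to_p, c_to_n = molar_ratios(pct_c=27.0, pct_n=1.2, pct_p=0.04)
print(f"C:P = {c_to_p:.0f}, N:P = {n_to_p:.0f}, C:N = {c_to_n:.1f}")
```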

"Data from our study supports not only a primary role for phosphorus limitation of productivity, but also suggests that the role of phosphorus as a limiting nutrient is being strengthened by the relatively large increases in environmental nitrogen supply from terrestrial runoff, atmospheric inputs, and possibly other natural sources such as nitrogen fixation," said Brian Lapointe, Ph.D., senior author, a leading expert on Sargassum and a research professor at FAU Harbor Branch.

A total of 488 tissue samples of Sargassum were collected during various research projects and cruises in the North Atlantic basin between 1983 and 1989 and more recently between 2010 and 2019. These included seasonal sampling offshore of Looe Key reef in the lower Florida Keys (1983 and 1984) and a broader geographic sampling (1986 and 1987) offshore of the Florida Keys, the Gulf Stream (Miami, Charleston and Cape Fear), and Belize, Central America. Oceanic stations included the northern, central and southern Sargasso Sea.

The highest percentage of tissue N occurred in coastal waters influenced by nitrogen-rich terrestrial runoff, while lower C:N and C:P ratios occurred in winter and spring during peak river discharges. The overall range for N:P ratios was 4.7 to 99.2 with the highest mean value in western Florida Bay (89.4) followed by locations in the Gulf of Mexico and Caribbean. The lowest N:P ratios were observed in the eastern Caribbean at St. Thomas (20.9) and Barbados (13.0).

Because of anthropogenic emissions of nitrogen oxides (NOx), largely from energy production and biomass burning, the NOx deposition rate is about five-fold greater than in pre-industrial times. Production of synthetic nitrogen fertilizer has increased nine-fold, while that of phosphate has increased three-fold since the 1980s, contributing to a global increase in N:P ratios. Notably, 85 percent of all synthetic nitrogen fertilizers have been created since 1985, which was shortly after the baseline Sargassum sampling began at Looe Key in 1983.

"Over its broad distribution, the newly-formed Great Atlantic Sargassum Belt can be supported by nitrogen and phosphorus inputs from a variety of sources including discharges from the Congo, Amazon and Mississippi rivers, upwelling off the coast of Africa, vertical mixing, equatorial upwelling, atmospheric deposition from Saharan dust, and biomass burning of vegetation in central and South Africa," said Lapointe.

Long-term satellite data, numerical particle-tracking models, and field measurements indicate that the Great Atlantic Sargassum Belt has recurred annually since 2011 and extended up to 8,850 kilometers from the west coast of Africa to the Gulf of Mexico, peaking in July 2018.

"Considering the negative effects that the Great Atlantic Sargassum Belt is having on the coastal communities of Africa, the Caribbean, Gulf of Mexico and South Florida, more research is urgently needed to better inform societal decision-making regarding mitigation and adaptation of the various terrestrial, oceanic, and atmospheric drivers of the Sargassum blooms," said Lapointe.

Sargassum removal from Texas beaches during earlier, less severe inundations was estimated at $2.9 million per year and Florida's Miami-Dade County alone estimated recent removal expenses of $45 million per year. The Caribbean-wide clean-up in 2018 cost $120 million, which does not include decreased revenues from lost tourism. Sargassum strandings also impact marine life and cause respiratory issues from the decaying process and other human health concerns, such as increased fecal bacteria.

"Human activities have greatly altered global carbon, nitrogen, and phosphorus cycles, and nitrogen inputs are considered now 'high risk' and above a safe planetary boundary," said Lapointe. "Based on scientific research, population growth and land-use changes have increased nitrogen pollution and degradation of estuaries and coastal waters since at least the 1950s. Despite decreases in nitrogen loading in some coastal watersheds, N:P ratios remain elevated in many rivers compared to historic values. The trend toward higher N:P ratios in the major rivers in the Atlantic basin parallel the increased N:P ratios we now see in Sargasum."

Credit: 
Florida Atlantic University

Implantable piezoelectric polymer improves controlled release of drugs

image: An implantable piezoelectric nanofiber polymer membrane delivers precise amounts of drugs under mechanical force.

Image: 
Jin Nam/UCR

A membrane made from threads of a polymer commonly used in vascular sutures can be loaded with therapeutic drugs and implanted in the body, where mechanical forces activate the polymer's electric potential and slowly release the drugs.

The novel system, developed by a group led by bioengineers at UC Riverside and published in ACS Applied Bio Materials, overcomes the biggest limitations of conventional drug administration and some controlled release methods, and could improve treatment of cancer and other chronic diseases.

The drawbacks of conventional drug administration include the need for repeated dosing, nonspecific biodistribution throughout the body, the difficulty of sustaining drug levels over the long term, and high cytotoxicity, all of which pose a challenge for the efficient treatment of chronic diseases that require varying drug dosages over time for optimal therapeutic efficacy. Most controlled release methods encapsulate drug particles in biodegradable, bubble-like containers that dissolve over time to release the drug, making it difficult to deliver drugs on a schedule. Others involve a battery-powered device that is not biocompatible.

Jin Nam, an associate professor of bioengineering in UC Riverside's Marlan and Rosemary Bourns College of Engineering, runs a lab that works with biocompatible polymers to build frameworks known as scaffolds that help stem cells repair tissues and organs. One of these polymers, poly(vinylidene fluoride-trifluoroethylene), or P(VDF-TrFE), can produce an electrical charge under mechanical stress. Nam realized this property, known as piezoelectricity, made the polymer a potentially viable candidate for a drug release system.

His team used a technique called electrospinning to produce P(VDF-TrFE) nanofibers layered in a thin mat. Structuring the material at the nanoscale by electrospinning optimized the sensitivity of the resulting nanofibers so the drug delivery system would respond to physiologically safe magnitudes of force while remaining insensitive to daily activities. The large surface area of the nanofibers allowed them to adsorb a relatively large quantity of drug molecules.

After the researchers embedded the film in a hydrogel that mimics living tissue, a series of tests using therapeutic shockwaves generated enough electric charge to release an electrostatically attached model drug molecule into the surrounding gel. The researchers could tune the quantity of drug released by varying the applied pressure and its duration.

"This piezoelectric nanofiber-based drug delivery system enables localized delivery of drug molecules on demand, which would be useful for diseases or conditions that require long-term, repeated drug administration, such as cancer treatments," Nam said. "The large surface area-to-volume ratio of nanofibrous structure enables a greater drug loading, leading to a single injection or implantation that lasts longer than conventional drug delivery."

Compared to traditional drug delivery systems based on degradation or diffusion release that typically show an initial burst release followed by different rates of release, the linear profile of drug release from the piezoelectric-based system allows for the precise administration of drug molecules regardless of implantation duration. Repeated on-demand drug release tests showed a similar amount of drug release per activation, confirming the robust control of release rate.
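
The contrast between a burst-then-taper profile and the roughly linear, per-activation release described here can be illustrated with a toy model; the rate constants and dosing schedule below are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Illustrative comparison (parameters are made up, not from the study):
# diffusion/degradation systems often follow a first-order "burst then taper"
# profile, while the piezoelectric system is described as releasing a similar
# dose per activation, i.e. an approximately linear cumulative profile.
t = np.arange(0, 31)                     # days
burst = 100 * (1 - np.exp(-0.25 * t))    # first-order release, % of payload
dose_per_activation = 100 / 30           # equal dose at each daily activation
on_demand = np.minimum(dose_per_activation * t, 100)

for day in (1, 7, 30):
    print(f"day {day:2d}: burst {burst[day]:5.1f}%   on-demand {on_demand[day]:5.1f}%")
```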

The sensitivity of the drug release kinetics can be tuned by controlling the nanofiber size to a range that is activated by therapeutic shockwaves, often used for musculoskeletal pain treatment with a handheld device. Smaller, more sensitive nanofiber sizes can be utilized for implantation in deep tissues, such as near a bone under muscles, while less sensitive larger nanofibers could find use in subcutaneous applications to avoid false activation by accidental impact.

Credit: 
University of California - Riverside

No link between milk and increased cholesterol according to new study of 2 million people

Regular consumption of milk is not associated with increased levels of cholesterol, according to new research.

A study published in the International Journal of Obesity looked at three large population studies and found that people who regularly drank high amounts of milk had lower levels of both good and bad cholesterol, although their BMI levels were higher than those of non-milk drinkers. Further analysis of other large studies also suggests that those who regularly consumed milk had a 14% lower risk of coronary heart disease.

The team of researchers took a genetic approach to milk consumption by looking at a variation in the lactase gene associated with digestion of the milk sugar lactose.

The study found that this genetic variation, which allows people to digest lactose, was a good proxy for identifying people who consumed higher levels of milk.

Prof Vimal Karani, Professor of Nutrigenetics and Nutrigenomics at the University of Reading, said:

"We found that among participants with a genetic variation that we associated with higher milk intake, they had higher BMI, body fat, but importantly had lower levels of good and bad cholesterol. We also found that those with the genetic variation had a significantly lower risk of coronary heart disease. All of this suggests that reducing the intake of milk might not be necessary for preventing cardiovascular diseases."

The new research was conducted following several contradictory studies that have previously investigated the causal link between higher dairy intake and cardiometabolic diseases such as obesity and diabetes. To account for inconsistencies in sampling size, ethnicity and other factors, the team conducted a meta-analysis of data in up to 1.9 million people and used the genetic approach to avoid confounding.
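
The genetic approach described here is a form of Mendelian randomization: because the lactase variant is fixed at conception, it is largely free of the lifestyle confounding that troubles observational diet studies. The sketch below shows the simplest version of that logic, a Wald ratio estimate, with entirely hypothetical effect sizes; the study's actual statistical models are not described in this article.

```python
# Minimal sketch of the Mendelian-randomization logic behind a genetic approach
# (not the study's actual analysis; all numbers are hypothetical): the ratio of
# the variant's effect on the outcome to its effect on milk intake estimates the
# causal effect of milk intake on the outcome, free of lifestyle confounding.
beta_variant_on_milk = 0.20    # e.g. extra servings/day per effect allele
beta_variant_on_ldl = -0.01    # e.g. change in LDL (mmol/L) per effect allele

wald_ratio = beta_variant_on_ldl / beta_variant_on_milk
print(f"Estimated effect of one extra serving/day on LDL: {wald_ratio:.3f} mmol/L")
```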

Even though the UK Biobank data showed that those with the lactase genetic variation had an 11% lower risk of type 2 diabetes, the study found no strong evidence for a link between higher milk intake and an increased likelihood of diabetes or its related traits, such as glucose and inflammatory biomarkers.

Professor Karani said:

"The study certainly shows that milk consumption is not a significant issue for cardiovascular disease risk even though there was a small rise in BMI and body fat among milk drinkers. What we do note in the study is that it remains unclear whether it is the fat content in dairy products that is contributing to the lower cholesterol levels or it is due to an unknown 'milk factor'".

Credit: 
University of Reading

Smart toilet may soon analyze stool for health problems

Bethesda, MD (May 22, 2021) -- An artificial intelligence tool under development at Duke University can be added to the standard toilet to help analyze patients' stool and give gastroenterologists the information they need to provide appropriate treatment, according to research that was selected for presentation at Digestive Disease Week® (DDW) 2021. The new technology could assist in managing chronic gastrointestinal issues such as inflammatory bowel disease (IBD) and irritable bowel syndrome (IBS).

"Typically, gastroenterologists have to rely on patient self-reported information about their stool to help determine the cause of their gastrointestinal health issues, which can be very unreliable," said Deborah Fisher, MD, one of the lead authors on the study and associate professor of medicine at Duke University Durham, North Carolina. "Patients often can't remember what their stool looks like or how often they have a bowel movement, which is part of the standard monitoring process. The Smart Toilet technology will allow us to gather the long-term information needed to make a more accurate and timely diagnosis of chronic gastrointestinal problems."

The technology can be retrofitted within the pipes of an existing toilet. Once a person has a bowel movement and flushes, the toilet will take an image of the stool within the pipes. The data collected over time will provide a gastroenterologist a better understanding of a patient's stool form (i.e., loose, normal or constipated) and the presence of blood, allowing them to diagnose the patient and provide the right treatment for their condition.

To develop the artificial intelligence image analysis tool for the Smart Toilet, researchers analyzed 3,328 unique stool images found online or provided by research participants. All images were reviewed and annotated by gastroenterologists according to the Bristol Stool Scale, a common clinical tool for classifying stool. Using a computationally efficient approach to convolutional neural networks, which is a type of deep learning algorithm that can analyze images, researchers found that the algorithm accurately classified the stool form 85.1 percent of the time; gross blood detection had an accuracy of 76.3 percent.
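
The article does not specify the network beyond its being a computationally efficient convolutional neural network, so the sketch below is only a generic illustration of a small image classifier for the seven Bristol Stool Scale categories; the layer sizes, input resolution, and batch shape are assumptions.

```python
import torch
from torch import nn

# Minimal sketch of a small convolutional classifier for the 7 Bristol Stool
# Scale categories (illustrative only; the study's actual architecture and
# training data are not described here).
class StoolFormCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = StoolFormCNN()
dummy_batch = torch.randn(4, 3, 128, 128)   # four RGB images, 128 x 128 pixels
print(model(dummy_batch).shape)             # torch.Size([4, 7])
```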

"We are optimistic about patient willingness to use this technology because it's something that can be installed in their toilet's pipes and doesn't require the patient to do anything other than flush," said Sonia Grego, PhD, a lead researcher on the study and founding director of the Duke Smart Toilet Lab. "An IBD flare-up could be diagnosed using the Smart Toilet and the patient's response to treatment could be monitored with the technology. This could be especially useful for patients who live in long-term care facilities who may not be able to report their conditions and could help improve initial diagnosis of acute conditions."

The prototype has promising feasibility, but it is not yet available to the public. Researchers are developing additional features of the technology to include stool specimen sampling for biochemical marker analysis that will provide highly specific disease data to meet patients' and gastroenterologists' needs.

Credit: 
Digestive Disease Week

Sand's urban role demands key part on sustainability stage

image: Sand pit in Sunkoshi near Kathmandu, Nepal

Image: 
Bibek Raj Shrestha

Over 20 Indonesian islands mysteriously disappear. One of the world's deadliest criminal syndicates rises to power. Eight cities the size of New York will be built every year for the next three decades. What connects them is sand: embedded in the concrete of nearly all of the world's buildings, roads, and cities, in the glass of windows, laptop and phone screens, and in COVID-19 vaccine vials.

The unexamined true costs of sand -- broadly, construction aggregates production -- have spurred a group of scientists to call for a stronger focus on understanding the physical dimensions of sand use and extraction. They also suggest new ways to achieve economic and environmental justice.

Four years ago, an international group of scientists, including two from Michigan State University (MSU), called attention to a looming global sand crisis. Extraction of valuable resources across the planet typically brings to mind oil, coal or rare earth minerals. Construction aggregates - sand, gravel and crushed rock - can seem less rare by comparison. It's easy, after all, to find a vast sandy beach, gravel pit or local hard rock quarry. However, construction aggregates are critical for meeting society's needs for housing, health, energy, transportation, and industry.

In Science Magazine in September 2017, the group noted that overexploitation of sand, a key ingredient of concrete, asphalt and glass, was damaging the environment, endangering communities, and triggering social conflicts.

In this week's One Earth, scientists led by research associate Aurora Torres shine a new light on the sustainability implications of the world's hunger for sand and propose different solutions for meeting these challenges.

"With this paper, we look forward towards what we need to do as a society if we want to promote a sustainable consumption on global sand resources," said Torres, part of MSU's Center for Systems Integration and Sustainability (CSIS) and the Université catholique de Louvain in Belgium. "A drastic problem calls for drastic solutions - truly doing this differently to put aside problems and create pathways to sustainability."

The authors of "Sustainability of the global sand system in the Anthropocene" call for a new way of looking at and understanding the interlinkages of sand supply and demand to reduce negative impacts such as depleting natural environments and creating human conflict. Collaborating across the research disciplines made it possible to fit the puzzle pieces into a full picture. Rather than focusing on single sand extraction sites like many studies before them, they take a broad look at the physical and socio-environmental dimensions of sand supply networks - linking extraction, processing, distribution, economics, policy - to gain an in-depth understanding of the stresses on both nature and people.

The novelty of the sand supply network approach is the integration of material flow analysis with the telecoupling framework to provide a more robust and holistic perspective on the sand system across different spatio-temporal scales. It allows for understanding and quantifying socioeconomic and environmental interactions from mining sites to consumption sites, such as cities, and spillover systems such as the transport corridors or rural landfills where the mining and construction waste piles up.

"Simple views cannot solve complex sustainability challenges," said co-author Jianguo "Jack" Liu, director of MSU-CSIS. "New ways like the telecoupling framework help untangle and embrace the complexity of global sand challenges and point the way toward effective solutions."

In addition, the authors highlight that robust strategies for managing sand resources depend on a solid understanding of the construction aggregates cycle. As Mark Simoni from the Geological Survey of Norway put it, "the physical system is key for linking local impacts of natural resource extraction to global development trends -- we have to map how construction material demand and supply evolve over space and time to inform stakeholder decisions and policymaking."

This requires quantifying the geological deposits, flows, and accumulation of construction aggregates within a region, including both natural raw material sources and alternatives, and it can be used to assess how long resources will last and how the entire supply system can be optimized to reduce negative impacts of sand mining and make use of substitute materials.
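
A minimal stock-and-flow sketch makes this kind of accounting concrete. The figures below are invented for illustration (the paper reports no such regional budget): annual demand is met from crushed rock, recycled material, and primary extraction, and the permitted natural deposit is drawn down by whatever primary extraction remains.

```python
# Minimal material-flow sketch for a regional construction-aggregate budget
# (all figures are hypothetical, not from the study): each year, demand is met
# from crushed rock, recycled secondary material, and primary extraction, and
# the permitted natural deposit is drawn down by the primary share.
deposit_remaining = 500.0      # Mt of permitted sand and gravel reserves
annual_demand = 40.0           # Mt per year
crushed_rock_supply = 18.0     # Mt per year of manufactured aggregate
recycled_supply = 6.0          # Mt per year from construction and demolition waste

years = 0
while deposit_remaining > 0:
    primary_extraction = annual_demand - crushed_rock_supply - recycled_supply
    if primary_extraction <= 0:
        break                  # substitutes alone cover demand
    deposit_remaining -= primary_extraction
    years += 1

print(f"Natural deposits exhausted after roughly {years} years at these substitution rates")
```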

For example, they said, we need to think about construction aggregates beyond excavating deposits of sand and gravel. Blasting and crushing rock also produces 'artificial' sand and gravel of similar or even higher quality and is a major export commodity for Norway, for instance. Indeed, crushed rock has already become the main source of aggregates in countries such as the U.S. and China and across much of Europe.

With demand for construction aggregates predicted to double in the next decades, the sustainability challenge is daunting. Understanding how sand-supply networks work is relevant not only for assessing their full impacts but also for identifying leverage points for sustainability.

"As with climate change, there is not a single solution but multiple entry points for more sustainable consumption," Torres said. Possible pathways include reducing material demand per capita, promoting compact urban development for more efficient material use, reducing reliance on natural deposits by developing the market and technologies for secondary materials such as construction and demolition waste, and when mining natural deposits is necessary, identifying the mining sources and production methods that minimize impacts for nature and people.

Credit: 
Michigan State University

Researchers develop advanced model to improve safety of next-generation reactors

image: Pebble-bed reactors use passive natural circulation to cool down, making it theoretically impossible for a core meltdown to occur.

Image: 
Dr. Jean Ragusa and Dr. Mauricio Eduardo Tano Retamales/Texas A&M University Engineering

When one of the largest modern earthquakes struck Japan on March 11, 2011, the nuclear reactors at Fukushima-Daiichi automatically shut down, as designed. The emergency systems, which would have helped maintain the necessary cooling of the core, were destroyed by the subsequent tsunami. Because the reactor could no longer cool itself, the core overheated, resulting in a severe nuclear meltdown, the likes of which haven't been seen since the Chernobyl disaster in 1986.

Since then, reactors have improved exponentially in terms of safety, sustainability and efficiency. Unlike the light-water reactors at Fukushima, which had liquid coolant and uranium fuel, the current generation of reactors has a variety of coolant options, including molten-salt mixtures, supercritical water and even gases like helium.

Dr. Jean Ragusa and Dr. Mauricio Eduardo Tano Retamales from the Department of Nuclear Engineering at Texas A&M University have been studying a new fourth-generation reactor design, the pebble-bed reactor. Pebble-bed reactors use spherical fuel elements (known as pebbles) and a fluid coolant (usually a gas).

"There are about 40,000 fuel pebbles in such a reactor," said Ragusa. "Think of the reactor as a really big bucket with 40,000 tennis balls inside."

During an accident, as the gas in the reactor core begins to heat up, it rises and cooler gas from below flows in to replace it, a process known as natural convection cooling. Additionally, the fuel pebbles are made from pyrolytic carbon and tristructural-isotropic particles, making them resistant to temperatures as high as 3,000 degrees Fahrenheit. As a very-high-temperature reactor (VHTR), a pebble-bed reactor can be cooled down by passive natural circulation, making it theoretically impossible for an accident like Fukushima to occur.

However, during normal operation, a high-speed flow cools the pebbles. This flow creates movement around and between the fuel pebbles, similar to the way a gust of wind changes the trajectory of a tennis ball. How do you account for the friction between the pebbles, and for the influence of that friction on the cooling process?

This is the question that Ragusa and Tano aimed to answer in their most recent publication in the journal Nuclear Technology titled "Coupled Computational Fluid Dynamics-Discrete Element Method Study of Bypass Flows in a Pebble-Bed Reactor."

"We solved for the location of these 'tennis balls' using the Discrete Element Method, where we account for the flow-induced motion and friction between all the tennis balls," said Tano. "The coupled model is then tested against thermal measurements in the SANA experiment."

The SANA experiment was conducted in the early 1990s and measured how the heat transfer mechanisms in a pebble bed interact as heat moves from the center of the cylinder to its outer part. This experiment gave Tano and Ragusa a standard against which they could validate their models.

As a result, their teams developed a coupled Computational Fluid Dynamics-Discrete Element Method model for studying the flow over a pebble bed. This model can now be applied to all high-temperature pebble-bed reactors and is the first computational model of its kind to do so. Very-high-accuracy tools such as this allow vendors to develop better reactors.

"The computational models we create help us more accurately assess different physical phenomena in the reactor," said Tano. "As a result, reactors can operate at a higher margin, theoretically producing more power while increasing the safety of the reactor. We do the same thing with our models for molten-salt reactors for the Department of Energy."

As artificial intelligence continues to advance, its applications to computational modeling and simulation grow. "We're in a very exciting time for the field," said Ragusa. "And we encourage any prospective students who are interested in computational modeling to reach out, because this field will hopefully be around for a long time."

Credit: 
Texas A&M University

Study shows which North American mammals live most successfully alongside people

A team of researchers led by scientists at UC Santa Cruz analyzed data from 3,212 camera traps to show how human disturbance could be shifting the makeup of mammal communities across North America.

The new study, published in the journal Global Change Biology, builds upon the team's prior work observing how wildlife in the Santa Cruz Mountains respond to human disturbance. Local observations, for example, have shown that species like pumas and bobcats are less likely to be active in areas where humans are present, while deer and wood rats become bolder and more active. But it's difficult to generalize findings like these across larger geographic areas because human-wildlife interactions are often regionally unique.

So, to get a continent-wide sense for which species of mammals might be best equipped to live alongside humans, the team combined their local camera trap data with that of researchers throughout the U.S., Canada, and Mexico. This allowed them to track 24 species across 61 regionally diverse camera trap projects to see which larger trends emerged.

"We've been very interested for a long time in how human disturbance influences wildlife, and we thought it would be interesting to see how wildlife in general are responding to similar anthropogenic pressures across North America," said Chris Wilmers, an environmental studies professor and director of the Santa Cruz Puma Project, who is the paper's senior author alongside lead author Justin Suraci.

The team was especially interested in understanding how mammals respond to different types of human disturbance and whether these responses were related to species' traits, like body size, diet, and the number of young they have. Overall, the paper found that 33 percent of mammal species responded negatively to humans, meaning they were less likely to occur in places with higher disturbance and were less active when present, while 58 percent of species were actually positively associated with disturbance.

To get a closer look at these trends, the team broke their results down by two different types of human disturbance. One was the footprint of human development: the things that people build, like roads, houses, and agricultural fields. Another was the mere presence of people, including activities like recreation and hunting, since fear of humans can change an animal's behavior and use of space.

In comparing continent-wide data from camera trap locations with varying levels of human development, researchers found that grizzly bears, lynx, wolves, and wolverines were generally less likely to be found in more developed areas and were less active when they did visit. Moose and martens were also less active in areas with a higher development footprint.

Meanwhile, raccoons and white-tailed deer were actually more likely to hang out in more developed areas and were more active in these spaces. Elk, mule deer, striped skunks, red foxes, bobcats, coyotes, and pumas weren't more likely to be found in developed landscapes, but they did tend to be more active in these areas.

Some of the species that frequent more developed areas may actually benefit from living in these places, but the study's lead author, Justin Suraci, a lead scientist at Conservation Science Partners and former postdoctoral researcher at UC Santa Cruz, says that's not necessarily the case. While raccoons can thrive in developed areas by finding food in our garbage cans and avoiding predators, higher levels of puma activity in these same places could mean something very different.

"It's not because these developed areas are really good for puma activity," Suraci said. "It's probably because the camera traps happened to be set in the one pathway that the poor puma can use when it's navigating its way through an otherwise very heavily developed landscape."

In other words, some animals in the study may be increasingly active or present on cameras near human development simply because there's such little remaining natural habitat.

Still, there were certain traits that emerged across species as clear advantages for making a living within the footprint of development. Overall, mammals that were smaller and faster-reproducing, with generalist diets, were the most positively associated with development. Researchers expected they might find similar results in comparing camera trap data by levels of human presence, but in fact, both positive and negative responses to human presence were observed for species across the spectrum of body sizes and diets.

Elk were less likely to stick around in places frequented by humans, and moose, mountain goats, and wolverines were less active in these habitats. On the other hand, bighorn sheep, black bears, and wolverines were more likely to be found in areas frequented by humans, while mule deer, bobcats, grey foxes, pumas, and wolves were more active.

One trend that may be influencing these findings is the growth of outdoor recreation, which increases levels of human presence in otherwise remote and wild landscapes. The study's results may indicate that most mammals are willing to tolerate some level of human recreation in order to remain in high quality habitats, and they could instead be increasing their nocturnal activity in order to avoid humans. Some animals may even take advantage of hiking trails and fire roads as easy movement pathways.

But the study also clearly identified that there's a limit to how much human impact animals can withstand. Even among species that were either more active or more likely to be present around humans or in developed areas, those effects peaked at low to intermediate levels of human disturbance then began to decline beyond those thresholds. Red foxes were the only animals in the study that seemed to continue to be more active or present at medium to high levels of human disturbance.

Ultimately, most species have both something to lose and something to gain from being around humans, and understanding the cutoff at which the costs outweigh the benefits for each species will be important to maintaining suitable habitats that support diversity in mammal populations for the future. Suraci says this may prove to be the new paper's most important contribution.

"From a management perspective, I think the thresholds that we've started to identify are going to be really relevant," he said. "This can help us get a sense of how much available habitat is actually out there for recolonizing or reintroduced species and hopefully allow us to more effectively coexist with wildlife in human-dominated landscapes."

Credit: 
University of California - Santa Cruz

Researchers create world's most power-efficient high-speed ADC microchip

image: Image of an analog-to-digital converter.

Image: 
BYU Photo

To meet soaring demand for lightning-quick mobile technology, each year tech giants create faster, more powerful devices with longer-lasting battery power than previous models.

A major reason companies like Apple and Samsung can miraculously pull this off year after year is because engineers and researchers around the world are designing increasingly power-efficient microchips that still deliver high speeds.

To that end, researchers led by a team at Brigham Young University have just built the world's most power-efficient high-speed analog-to-digital converter (ADC) microchip. An ADC is a tiny piece of technology, present in almost every piece of electronic equipment, that converts analog signals (like a radio wave) into digital signals.

The ADC created by BYU professor Wood Chiang, Ph.D. student Eric Swindlehurst and their colleagues consumes only 21 milliwatts of power at 10 GHz for ultra-wideband wireless communications; current ADCs consume hundreds of milliwatts or even watts of power at comparable speeds. The BYU-made ADC has the highest power efficiency currently available globally, a record it holds by a substantial margin.
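
ADC power efficiency is usually compared with a figure of merit that normalizes power by sample rate and resolution; the article gives the power and sample rate but not the effective number of bits, so the ENOB in the sketch below is a hypothetical placeholder used only to show how the standard Walden figure of merit is computed.

```python
# Sketch of the Walden figure of merit commonly used to compare ADC power
# efficiency: FoM = P / (2**ENOB * f_s), expressed as energy per conversion step.
# The power and sample rate come from the article; the ENOB is a hypothetical
# placeholder, not a figure from the paper.
def walden_fom_fj(power_w, sample_rate_hz, enob_bits):
    joules_per_step = power_w / (2 ** enob_bits * sample_rate_hz)
    return joules_per_step * 1e15    # femtojoules per conversion step

print(f"{walden_fom_fj(power_w=21e-3, sample_rate_hz=10e9, enob_bits=7.0):.1f} fJ/step")
```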

"Many research groups worldwide focus on ADCs; it's like a competition of who can build the world's fastest and most fuel-efficient car," Chiang said. "It is very difficult to beat everyone else around the world, but we managed to do just that."

The central challenge facing researchers like Chiang is that increasingly high bandwidths in communications devices mean circuits that consume more power. Chiang, Swindlehurst and their team set out to solve the problem by focusing on a key part of the ADC circuit called the DAC, or digital-to-analog converter, a central piece that performs the exact reverse of the ADC's function.

For the technologically savvy, here's a broad explanation of what the research team did:

They made the converter faster and more efficient by reducing the loading from the DAC through scaling of both the capacitor parallel-plate area and spacing. They also grouped unit capacitors differently from the conventional way, grouping together unit capacitors that are part of the same bit in the DAC rather than interleaving them throughout. Doing so lowered the bottom-plate parasitic capacitance threefold, significantly lowering power consumption while increasing speed.

Finally, they used a bootstrapped switch but improved on it by making it dual-path, so that each path can be independently optimized. This method increases the speed but doesn't require additional hardware because it involves splitting existing devices and making routing changes in the circuit.

The project, sponsored by the Ministry of Science in Taiwan and a consortium of technology companies, took four years to complete -- three years to design the chip and one year to test it. The team, which included collaborators from National Yang Ming Chiao Tung University in Taiwan and the University of California, Los Angeles, published details of the project in IEEE Journal of Solid-State Circuits earlier this year, with Swindlehurst serving as principal author.

"We've proven the technology of the chip here at BYU and there is no question about the efficacy of this particular technique," Chiang said. "This work really pushes the envelope of what's possible and will result in a lot of conveniences for consumers. Your Wi-Fi will continue to get better because of this technology, you'll have faster upload and download speeds and you can watch 4K or even 8K with little to no lag while maintaining battery life."

Chiang said other likely applications for the ADC include autonomous vehicles (which use a ton of wireless bandwidth), smart wearables like glasses or smart contact lenses, and even things such as implantable devices.

The device required sophisticated design and verification to ensure that all the thousands of connections in the converter would work correctly. A single mistake in the design would have taken at least an additional year to correct, so the team was thrilled to have made no mistakes.

"It's like building a little city. There are so many details that went into this project," Chiang said. "The student team did a marvelous job -- all the pieces fit perfectly together to realize this engineering feat. I am fortunate to have worked with such talented students at BYU."

Credit: 
Brigham Young University

New nondestructive broadband imager is the next step towards advanced technology

image: A multi-axis movable robot arm with a built-in photo-source and photothermoelectric (PTE) imager. This consolidation of functionality enabled non-destructive, unmanned, remote, high-speed, omnidirectional photo-monitoring of a defective miniature model of an elevated, winding road-bridge.

Image: 
Tokyo Tech

One of the key aspects of academic and industrial research today is non-destructive imaging, a technique in which an object or sample is imaged (using light) without causing any damage to it. Often, such imaging techniques are crucial to ensuring safety and quality of industrial products, subsequently leading to growing demands for high-performance imaging of objects with arbitrary structures and locations.

On one hand, there have been tremendous advancements in the scope of non-destructive imaging regarding the region of the electromagnetic (EM) spectrum it can access, which now ranges from visible light to as far as millimeter waves! On the other, imaging devices have become flexible and wearable, enabling stereoscopic (3D) visualization of both flat and curved samples without forming a blind spot.

Despite such progress, however, issues such as the portability of sensing modules, cooling-free (free of bulky cooling equipment) device operation, and unmanned or robot-assisted photo-monitoring remain to be addressed. "The transition from manned to robotic inspection can make operations such as disconnection testing of power-transmission lines and exploring cramped environments safer and more sustainable," explains Prof. Yukio Kawano, from Tokyo Tech and Chuo University, who conducts extensive research on terahertz (THz) waves (EM waves with frequencies in the terahertz range) and THz imaging.

While multiple studies in the past have explored systems equipped with one of the aforementioned modules, their functional integration has not yet been attempted, limiting progress. Against this backdrop, in a recent study published in Nature Communications, Prof. Kawano and his colleagues from Tokyo Tech, Japan, developed a robot-assisted, broadband (using a wide range of frequencies) photo-monitoring platform equipped with a light source and imager that can operate in a location-independent manner and switch between reflective and transmissive sensing.

In their proposed module, the scientists made use of physically and chemically enriched carbon nanotube (CNT) thin films to act as uncooled imager sheets that employ the photothermoelectric effect to convert light into an electric signal via thermoelectric conversion. Due to their excellent absorption properties over a wide range of wavelengths, the CNTs showed broadband sensitivity. Moreover, the imager sheet allowed for stereoscopic sensing in both reflective and transmissive modes, enabling inspections of several curved objects such as beverage bottles, water pipes, and gas pipes. By detecting local changes in the signals, the scientists were able to identify minuscule defects in these structures that would otherwise be invisible. Further, by employing multi-frequency photo-monitoring ranging between the THz and infrared (IR) bands, the scientists were able to extract both outer-surface and inner-surface features, using IR and THz light, respectively.
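
The article describes flagging defects from local changes in the measured signal; a generic way to do that is to mark scan positions whose signal deviates from the baseline by more than a few noise standard deviations. The data, threshold, and scan geometry in the sketch below are assumptions for illustration, not the study's measurements or method.

```python
import numpy as np

# Minimal sketch of defect flagging from a 1D scan (synthetic data, not the
# study's measurements): positions whose photothermoelectric signal deviates
# from the scan's median baseline by more than a few noise standard deviations
# are marked as candidate defects.
rng = np.random.default_rng(0)
signal = 1.0 + 0.02 * rng.standard_normal(200)   # smooth pipe wall plus noise
signal[120:124] -= 0.15                          # a small hidden defect

baseline = np.median(signal)
noise = np.std(signal[:100])                     # noise estimate from a clean stretch
defects = np.where(np.abs(signal - baseline) > 4 * noise)[0]
print("candidate defect positions:", defects)
```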

Finally, they achieved 360°-view photo-monitoring using a light-source-integrated compact sensing module and implemented it in a multi-axis, robot-assisted movable arm that performed high-speed photo-monitoring of a defective miniature model of a winding road-bridge.

The results have spurred scientists to consider the future prospects of their device. "Our efforts can potentially provide a roadmap for the realization of a ubiquitous sensing platform. Additionally, the concept of this study could be used for a sustainable, long-term operable, and user-friendly Internet of Things system of a sensor network," observes an excited Prof. Kawano.

This study, indeed, takes sensing technology to the next level!

Credit: 
Tokyo Institute of Technology

Green light on gold atoms

image: Plasmonic nano-antennas fabricated at EPFL: gold nanoparticles are deposited on a gold film covered with a layer of molecules. Light emission from defects near the film surface is strongly enhanced by the antenna effect, enabling its detection.

Image: 
Nicolas Antille, www.nicolasantille.com

Because individual atoms or molecules are 100 to 1000 times smaller than the wavelength of visible light, it is notoriously difficult to collect information about their dynamics, especially when they are embedded within larger structures.

In an effort to circumvent this limitation, researchers are engineering metallic nano-antennas that concentrate light into a tiny volume to dramatically enhance any signal coming from the same nanoscale region. Nano-antennas are the backbone of nanoplasmonics, a field that is profoundly impacting biosensing, photochemistry, solar energy harvesting, and photonics.

Now, researchers at EPFL led by Professor Christophe Galland at the School of Basic Sciences have discovered that when green laser light shines on a gold nano-antenna, its intensity is locally enhanced to the point that it "knocks" gold atoms out of their equilibrium positions while maintaining the integrity of the overall structure. The gold nano-antenna also amplifies the very faint light scattered by the newly formed atomic defects, making it visible to the naked eye.

This nanoscale dance of atoms can thus be observed as orange and red flashes of fluorescence, which are signatures of atoms undergoing rearrangements. "Such atomic scale phenomena would be difficult to observe in situ, even using highly sophisticated electron or X-ray microscopes, because the clusters of gold atoms emitting the flashes of light are buried inside a complex environment among billions of other atoms," says Galland.

The unexpected findings raise new questions about the exact microscopic mechanisms by which a weak continuous green light can put some gold atoms into motion. "Answering them will be key to bringing optical nano-antennas from the lab into the world of applications - and we are working on it," says Wen Chen, the study's first author.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Telling up from down: How marine flatworms learn to sense gravity

image: Sensory organ of the acoel flatworm Praesagittifera naikaiensis. Shown here are a light micrograph of the whole body (left) and an enlarged view of its anterior region (right), including the ciliated epidermis with sensory cilia, the statocyst, and the eyes that constitute the sensory organ.

Image: 
2021 Okayama University

All living organisms are equipped with sensory organs to detect changes in their surrounding environment. It may not immediately strike us as obvious but, similar to how we can sense heat, cold, light, and darkness, we are also extremely adept at sensing gravity. In our case, it is our inner ear that does this job, helping us maintain balance, posture, and orientation in space. But, what about other organisms, for instance invertebrates that lack a backbone?

The gravity-sensing organ in some aquatic invertebrates, known as a "statocyst," is, in fact, rather fascinating. The statocyst is essentially a fluid-filled sac with sensory cells lining its inner wall and a small mineralized mass called a "statolith" contained inside. Whenever the body moves, the statolith shifts and comes into contact with sensory cells in the inner wall, deflecting them. These deflections, in turn, activate neurons (nerve cells), which then relay signals to the brain about the change in body orientation.
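To make that principle concrete, here is a toy sketch (an illustration only, not a model from the study) in which the dense statolith settles toward the lowest point of the sac for a given tilt angle, so the sensory cell nearest that point is the one deflected. The number and layout of cells are invented for illustration.

```python
# Toy illustration of the statocyst principle described above (not the study's
# model): under gravity the statolith settles toward the lowest point of the
# fluid-filled sac, so the sensory cell nearest that point is deflected.
import math

def stimulated_cell(tilt_deg, n_cells=8):
    """Index of the wall cell closest to the sac's lowest point for a given tilt.

    Cells are assumed evenly spaced around the sac wall, with cell 0 at the
    bottom when the animal is level (tilt = 0 degrees). Both assumptions are
    purely illustrative.
    """
    tilt = math.radians(tilt_deg % 360)
    spacing = 2 * math.pi / n_cells
    return round(tilt / spacing) % n_cells

for angle in (0, 45, 90, 180):
    print(f"tilt {angle:3d} deg -> sensory cell {stimulated_cell(angle)} deflected")
```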

However, exactly how the sensory cells stimulate the neurons is not particularly clear for acoel flatworms--soft-bodied, marine animals with a simple anatomy, which represent one of the earliest extant life forms with bilateral (left-right) symmetry. What zoologists know so far, based on the finding that juvenile acoel flatworms occasionally fail to sense gravity, is that the ability is acquired sometime after hatching from the eggs.

In a new study published in Zoomorphology, scientists from Okayama University, Japan led by Prof. Motonori Ando have now taken a stab at understanding these curious creatures better. But what exactly is so attractive about acoel flatworms? Prof. Ando explains, "Understanding the stimulus response mechanism of Acoela can uncover a fundamental biological control mechanism that dates back to the origin of bilaterian animals, including humans. These organisms, therefore, are key to unravelling the process of evolution."

For their study, the scientists used an acoel species called Praesagittifera naikaiensis (P. naikaiensis), which is endemic to the Seto Inland Sea coast of Okayama. "The mysterious body plan of P. naikaiensis could be key to connecting Okayama and the world's natural environment," says Prof. Ando.

To examine the relationship between the statocyst and the nervous system of P. naikaiensis, the scientists had to make both visible, a task usually accomplished with a "marker" or "label." However, because no suitable label was available for the statocyst, they adopted a different strategy and instead labeled the basal lamina, the layer on which the sensory cells sit. For the nervous system, they labeled the nerve terminals using a well-known marker. Finally, they studied the specimens using confocal microscopy, a technique in which light is focused onto a defined spot at a specific depth so that only the markers at that location are excited.

The results were illuminating. The scientists found that the acoel flatworms developed a gravity-sensing ability within 0 to 7 days of hatching, with the statolith forming only after hatching. The statocyst was associated with longitudinal and transverse nerve cords forming what is called a "commissural brain," as well as with a "statocyst-associated commissure" (stc) characterized by transverse fibers. The scientists hypothesized that the gravity-sensing ability developed when: 1) the statolith acquired a sufficient concentration of calcium salts, 2) the stc functioned as the signal-relaying neurons, and 3) the sensory cells, located outside the sac, were stimulated indirectly by the statolith through the basal lamina and the stc.

Inspired by these findings, Prof. Ando has envisioned future research directions and even practical applications of their study. "It has been reported that closely related species of this organism inhabit the North Sea coast, the Mediterranean coast, and the east coast of North America. Since there is great interest in the commonality of their habitats, we can extend our research to a more global level, using these animals as a novel bioassay system for the environments they live in, especially in the face of the accelerated pace of climate change and anthropogenic habitat degradation. Furthermore, acoel flatworms could be an excellent biological model for studying human diseases caused by abnormalities of sensory hair cells," says an excited Prof. Ando.

It seems modern science is just warming up to the myriad mysteries of this minute worm!

Credit: 
Okayama University

A new spintronic phenomenon: Chiral-spin rotation found in non-collinear antiferromagnet

image: A figure of merit, defined as the ratio of the critical field HC to the critical current density JC needed to manipulate the magnetic structure, as a function of magnetic layer thickness for the non-collinear antiferromagnet (NC-AFM) studied in this work. Also shown are a previously studied collinear ferromagnet (C-Ferro) and ferrimagnet (C-Ferri).

Image: 
S. Fukami

Researchers at Tohoku University and the Japan Atomic Energy Agency (JAEA) have discovered a new spintronic phenomenon - the persistent rotation of a chiral-spin structure.

Their discovery was published in the journal Nature Materials on May 13, 2021.

Tohoku University and JAEA researchers studied the response of the chiral-spin structure of a non-collinear antiferromagnetic Mn3Sn thin film to electron spin injection and found that the chiral-spin structure rotates persistently at zero magnetic field. Moreover, the rotation frequency can be tuned by the applied current.

"The electrical control of magnetic structure has been of paramount interest in the spintronics community for the last quarter of a century. The phenomenon shown here provides a very efficient scheme to manipulate magnetic structures, offering new opportunities for application, such as oscillators, random number generators, and nonvolatile memory," said Professor Shunsuke Fukami, who spearheaded the project.

Figure 1 compares the efficiency of manipulating the magnetic structure in the non-collinear antiferromagnet studied in the present work with the efficiencies reported for other material systems. The current-induced chiral-spin rotation remains much more efficient even for thick magnetic layers above 20 nm.
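Because the figure of merit in Figure 1 is simply the ratio HC/JC, the comparison can be sketched in a few lines. The material entries and all numbers below are placeholders for illustration, not values from the paper.

```python
# Sketch of the figure of merit from Figure 1: the ratio of the critical field
# H_C to the critical current density J_C needed to manipulate the magnetic
# structure (a larger ratio means more efficient electrical control).
# All numbers are hypothetical placeholders, NOT measurements from the paper.
examples = {
    # material system: (H_C in Oe, J_C in A/cm^2) -- illustrative values only
    "collinear ferromagnet (C-Ferro)": (100.0, 5.0e7),
    "collinear ferrimagnet (C-Ferri)": (200.0, 3.0e7),
    "non-collinear AFM, e.g. Mn3Sn":   (500.0, 1.0e7),
}

for name, (h_c, j_c) in examples.items():
    fom = h_c / j_c  # Oe cm^2 / A; only the relative comparison is meaningful here
    print(f"{name:35s} H_C/J_C = {fom:.2e} Oe cm^2/A")
```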

The schematics of the chiral-spin rotation and of the experimental setup are shown in Figure 2.

The researchers used a high-quality heterostructure consisting of the non-collinear antiferromagnet Mn3Sn sandwiched between the heavy metals W/Ta and Pt. They revealed that, when a current is applied to the heterostructure, the chiral-spin structure rotates persistently at zero magnetic field because of the torque originating from the spin current generated in the heavy metals. The rotation frequency, typically above 1 GHz, depends on the applied current.
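As a rough illustration of that frequency tuning, the toy model below assumes the chiral-spin angle advances at a rate proportional to the applied current density, so the rotation frequency scales linearly with current. Both the linear scaling and the coefficient are assumptions made for illustration, not results from the paper.

```python
# Toy model (an assumption for illustration, not the paper's result): if the
# spin-orbit torque driving the chiral-spin structure is proportional to the
# applied current density, the rotation frequency scales linearly with current.
TORQUE_COEFF = 1.5e-11  # Hz per (A/m^2); made-up placeholder coefficient

def rotation_frequency(current_density_A_per_m2):
    """Rotation frequency of the chiral-spin structure in this simple linear model."""
    return TORQUE_COEFF * current_density_A_per_m2

for j in (0.5e11, 1.0e11, 2.0e11):  # example current densities in A/m^2
    print(f"J = {j:.1e} A/m^2 -> f = {rotation_frequency(j)/1e9:.2f} GHz")
```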

Spintronics is an interdisciplinary field in which the electric and magnetic degrees of freedom of electrons are utilized simultaneously, allowing for electrical manipulation of magnetic structures. Representative schemes established so far are summarized in Figure 3.

Magnetization switching, magnetic phase transitions, oscillation, and resonance have been observed in ferromagnets; these phenomena are promising because they may lead to functional devices for nonvolatile memory, wireless communication, and so on.

Additionally, in antiferromagnets, the 90-degree rotation of the Néel vector in collinear systems and the 180-degree switching of chiral-spin structures in non-collinear systems have been observed recently. The persistent chiral-spin rotation reported in the current work is entirely different from all previously observed phenomena and thus should open a new horizon in spintronics research.

"The obtained insight is not only interesting in terms of physics and material science but also attractive for functional device applications," added Dr. Yutaro Takeuchi, the first author of the paper. "We would like to further improve the material and device technique in the near future and demonstrate new functional devices such as tunable oscillator and high-quality true random number generator."

Credit: 
Tohoku University