Tech

Experts recreate a mechanical Cosmos for the world's first computer

Researchers at UCL have solved a major piece of the puzzle that makes up the ancient Greek astronomical calculator known as the Antikythera Mechanism, a hand-powered mechanical device that was used to predict astronomical events.

Known to many as the world's first analogue computer, the Antikythera Mechanism is the most complex piece of engineering to have survived from the ancient world. The 2,000-year-old device was used to predict the positions of the Sun, Moon and the planets as well as lunar and solar eclipses.

Published in Scientific Reports, the paper from the multidisciplinary UCL Antikythera Research Team reveals a new display of the ancient Greek order of the Universe (Cosmos), within a complex gearing system at the front of the Mechanism.

Lead author Professor Tony Freeth (UCL Mechanical Engineering) explained: "Ours is the first model that conforms to all the physical evidence and matches the descriptions in the scientific inscriptions engraved on the Mechanism itself.

"The Sun, Moon and planets are displayed in an impressive tour de force of ancient Greek brilliance."

The Antikythera Mechanism has generated both fascination and intense controversy since its discovery in a Roman-era shipwreck in 1901 by Greek sponge divers near the small Mediterranean island of Antikythera.

The astronomical calculator is a bronze device consisting of a complex combination of 30 surviving gears used to predict astronomical events, including eclipses, phases of the Moon, positions of the planets and even dates of the Olympics.

Whilst great progress has been made over the last century to understand how it worked, studies in 2005 using 3D X-rays and surface imaging enabled researchers to show how the Mechanism predicted eclipses and calculated the variable motion of the Moon.

However, until now, a full understanding of the gearing system at the front of the device has eluded the best efforts of researchers. Only about a third of the Mechanism has survived, and is split into 82 fragments - creating a daunting challenge for the UCL team.

The biggest surviving fragment, known as Fragment A, displays features of bearings, pillars and a block. Another, known as Fragment D, features an unexplained disk, 63-tooth gear and plate.

Previous research had used X-ray data from 2005 to reveal thousands of text characters hidden inside the fragments, unread for nearly 2,000 years. Inscriptions on the back cover include a description of the cosmos display, with the planets moving on rings and indicated by marker beads. It was this display that the team worked to reconstruct.

Two critical numbers found in the X-rays of the front cover, 462 years and 442 years, accurately represent cycles of Venus and Saturn respectively. When observed from Earth, the planets sometimes reverse their motions against the stars. Experts must track these variable cycles over long time periods in order to predict their positions.

"The classic astronomy of the first millennium BC originated in Babylon, but nothing in this astronomy suggested how the ancient Greeks found the highly accurate 462-year cycle for Venus and 442-year cycle for Saturn," explained PhD candidate and UCL Antikythera Research Team member Aris Dacanalis.

Using an ancient Greek mathematical method described by the philosopher Parmenides, the UCL team not only explained how the cycles for Venus and Saturn were derived but also managed to recover the cycles of all the other planets, where the evidence was missing.

PhD candidate and team member David Higgon explained: "After considerable struggle, we managed to match the evidence in Fragments A and D to a mechanism for Venus, which exactly models its 462-year planetary period relation, with the 63-tooth gear playing a crucial role."
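To make the arithmetic behind such a period relation concrete, here is a minimal sketch (not the team's reconstruction) of how a planetary cycle translates into gearwork. It assumes, for illustration, that the 462-year Venus cycle corresponds to 289 synodic cycles; the tooth counts below are chosen purely to show how a target ratio can be factored into a compound train and are not those of the Mechanism.

```python
from fractions import Fraction

# Assumed period relation for Venus, for illustration only:
# 289 synodic cycles in 462 years (not necessarily the paper's figures).
SYNODIC_CYCLES = 289
YEARS = 462

target = Fraction(SYNODIC_CYCLES, YEARS)  # overall gear ratio the train must realize

def train_ratio(pairs):
    """Overall ratio of a compound gear train given (driver, driven) tooth counts."""
    ratio = Fraction(1)
    for driver, driven in pairs:
        ratio *= Fraction(driver, driven)
    return ratio

# One purely illustrative decomposition: 289/462 = (17/21) * (17/22).
illustrative_train = [(17, 21), (17, 22)]

assert train_ratio(illustrative_train) == target
print(f"Ratio {target} realized by gear pairs {illustrative_train}")
```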

Professor Freeth added: "The team then created innovative mechanisms for all of the planets that would calculate the new advanced astronomical cycles and minimize the number of gears in the whole system, so that they would fit into the tight spaces available."

"This is a key theoretical advance on how the Cosmos was constructed in the Mechanism," added co-author, Dr Adam Wojcik (UCL Mechanical Engineering). "Now we must prove its feasibility by making it with ancient techniques. A particular challenge will be the system of nested tubes that carried the astronomical outputs."

Credit: 
University College London

Breast cancer: The risks of brominated flame retardants

image: Mammary gland of a prepubescent female rat stained to see its development.

Image: 
Isabelle Plante (INRS)

Brominated flame retardants (BFRs) are found in furniture, electronics, and kitchenware to slow the spread of flames in the event of a fire. However, it has been shown that these molecules may lead to early mammary gland development, which is linked to an increased risk of breast cancer. The study on the subject by Professor Isabelle Plante from the Institut national de la recherche scientifique (INRS) made the cover of the February issue of the journal Toxicological Sciences.

Some of these flame retardants are considered endocrine disruptors, i.e. they interfere with the hormonal system. Since they are not directly bound to the materials to which they are added, the molecules escape easily. They are then found in house dust, air and food.

This exposure can cause problems for mammary glands because their development is highly regulated by hormones. "BFRs pose a significant risk, particularly during sensitive periods, from intrauterine life to puberty and during pregnancy," says Professor Plante, co-director of the Intersectoral Centre for Endocrine Disruptor Analysis and environmental toxicologist. Endocrine disruptors, such as BFRs, can mimic hormones and cause cells to respond inappropriately.

The effects of environmental exposure

In their experiments, the research team exposed female rodents to a mixture of BFRs, similar to that found in house dust, prior to mating, during gestation and during lactation. Biologists were able to observe the effects on the offspring at two stages of development and on the mothers.

In pre-pubertal rats, the team noted early development of the mammary glands. In pubescent rats, the results, published in 2019, showed a deregulation of communication between cells. Similar consequences were observed in the mothers in a 2017 study. All of these effects are associated with an increased risk of breast cancer.

Professor Isabelle Plante points out that peaks in human exposure to BFRs were observed in the early 2000s. "Young women exposed to BFRs in utero and through breastfeeding are now in their early reproductive years. Their mothers are in their fifties, a period of increased risk for breast cancer," says Professor Plante. This is why the team is currently studying endocrine disruptors related to a predisposition to breast cancer, in work funded by the Breast Cancer Foundation and the Cancer Research Society.

Debate over legislation

In all three studies, most of the effects were observed when subjects were exposed to the lowest dose, from dust, and not to the higher doses. This observation raises questions about the current legislation on endocrine disruptors. "To evaluate the 'safe' dose, experts give increasing doses and then, when they observe an effect, identify that dose as the maximum. With endocrine disruptors, the long-term consequences would be caused by lower doses," reports Professor Plante.

Although counter-intuitive, this observation stems from the fact that high doses trigger a toxic response in the cells. When the body is exposed to lower doses, similar to the concentration of hormones in our body, the consequences instead involve deregulation of the hormonal system.

Credit: 
Institut national de la recherche scientifique - INRS

Sea-level rise drives wastewater leakage to coastal waters

image: High tide nuisance flooding in Māpunapuna is a hazard to vehicular and pedestrian traffic.

Image: 
Trista McKenzie

When people think of sea level rise, they usually think of coastal erosion. However, recent computer modeling studies indicate that coastal wastewater infrastructure, which includes sewer lines and cesspools, is likely to flood with groundwater as sea-level rises.

A new study, published by University of Hawai'i (UH) at Mānoa earth scientists, is the first to provide direct evidence that tidally-driven groundwater inundation of wastewater infrastructure is occurring today in urban Honolulu, Hawai'i. The study shows that higher ocean water levels are leading to wastewater entering storm drains and the coastal ocean--creating negative impacts on coastal water quality and ecological health.

The study was led by postdoctoral researcher Trista McKenzie and co-authored by UH Sea Grant coastal geologist Shellie Habel and Henrietta Dulai, advisor and associate professor in the UH Mānoa School of Ocean and Earth Science and Technology (SOEST). The team assessed coastal ocean water and storm drain water in low-lying areas during spring tides, which serve as an approximation of future sea levels.

To understand the connection between wastewater infrastructure, groundwater and the coastal ocean, the researchers used chemical tracers to detect groundwater discharge and wastewater present at each site. Radon is a naturally occurring gas that reliably indicates the presence of groundwater, while wastewater can be detected by measuring specific organic contaminants from human sources, such as caffeine and certain antibiotics.

"Our results confirm that indeed, both groundwater inundation and wastewater discharge to the coast and storm drains are occurring today and that it is tidally-influenced," said McKenzie. "While the results were predicted, I was surprised how prevalent the evidence for these processes and the scale of it."

In low-lying inland areas, storm drains can overflow every spring tide. This study demonstrated that, at the same time, wastewater from compromised infrastructure also discharges into storm drains. During high tides, storm drains become channels for untreated wastewater to flood streets and sidewalks. In addition to impeding traffic, including access by emergency vehicles, this flooding of contaminated water also poses a risk to human health.

The team also found evidence that many of the human-derived contaminants were in concentrations that pose a high risk to aquatic organisms. This has negative consequences for coastal organisms where the groundwater and storm drains discharge.

"Many people may think of sea-level rise as a future problem, but in fact, we are already seeing the effects today," said McKenzie. "Further, these threats to human health, ocean ecosystems and the wastewater infrastructure are expected to occur with even greater frequency and magnitude in the future."

This project demonstrates that actions to mitigate the impact from sea-level rise to coastal wastewater infrastructure in Honolulu are no longer proactive but are instead critical to addressing current issues. Through its multi-partner effort, the Hawai'i State Climate Commission also raises awareness around the variety of impacts of sea level rise, including those highlighted by this study.

"Coastal municipalities should pursue mitigation strategies that account for increased connectivity between wastewater infrastructure and recreational and drinking water resources," said McKenzie. "We need to consider infrastructure that minimizes flooding opportunities and contact with contaminated water; and decreases the number of contaminant sources, such as installation of one-way valves for storm drains, decommissioning cesspools, monitoring defective sewer lines, and construction of raised walkways and streets."

Credit: 
University of Hawaii at Manoa

New perovskite LED emits a circularly polarized glow

image: The first layer is a semitransparent anode, such as ITO, that injects unpolarized "holes," a quantum feature of electrons, with a certain spin. The second layer is the two-dimensional chiral hybrid perovskite that is an active spin filter, allowing only holes with specific spin to pass by, depending on the helicity of the chiral molecules. The third layer is the emitter film, composed of a non-chiral inorganic perovskite such as CsPbBr3. The fourth and fifth layers are the cathode that injects spin up and spin down electrons. Only the spin down electrons recombine with the spin up injected holes to produce circularly polarized light with helicity that depends on the chiral molecules helicity in the two-dimensional organic-inorganic layer.

Image: 
Adapted from: Kim, Y.H. et. al., Science (2021)

Light-emitting diodes (LEDs) have revolutionized the displays industry. LEDs use electric current to produce visible light without the excess heat found in traditional light bulbs, a glow called electroluminescence. This breakthrough led to the eye-popping, high-definition viewing experience we've come to expect from our screens. Now, a group of physicists and chemists have developed a new type of LED that utilizes spintronics without needing a magnetic field, magnetic materials or cryogenic temperatures; a "quantum leap" that could take displays to the next level.

"The companies that make LEDs or TV and computer displays don't want to deal with magnetic fields and magnetic materials. It's heavy and expensive to do it," said Valy Vardeny, distinguished professor of physics and astronomy at the University of Utah. "Here, chiral molecules are self-assembled into standing arrays, like soldiers, that actively spin polarize the injected electrons, which subsequently lead to circularly polarized light emission. With no magnetic field, expensive ferromagnets and with no need for extremely low temperatures. Those are no-nos for the industry."

Most opto-electronic devices, such as LEDs, only control charge and light, not the spin of the electrons. Electrons possess tiny magnetic fields that, like the Earth, have magnetic poles on opposite sides. An electron's spin may be viewed as the orientation of the poles and can be assigned binary information--an "up" spin is a "1," a "down" is a "0." In contrast, conventional electronics only transmit information through bursts of electrons along a conductive wire to convey messages in "1s" and "0s." Spintronic devices, however, could utilize both methods, promising to process exponentially more information than traditional electronics.

One barrier to commercial spintronics is setting the electron spin. Presently, one needs to produce a magnetic field to orient the electron spin direction. Researchers from the University of Utah and the National Renewable Energy Laboratory (NREL) developed technology that acts as an active spin filter made of two layers of material called chiral two-dimensional metal-halide perovskites. The first layer blocks electrons having spin in the wrong direction, a layer that the authors call a chiral-induced spin filter. Then, when the remaining electrons pass through the second light-emitting perovskite layer, they cause the layer to produce photons that move in unison along a spiral path, rather than a conventional wave pattern, to produce circularly polarized electroluminescence.

The study was published in the journal Science on March 12, 2021.

Left-handed, right-handed molecules

The scientists exploited a property called chirality that describes a particular type of geometry. Human hands are a classic example; the right and left hands are arranged as mirrors of one another, but they will never perfectly align, no matter the orientation. Some compounds, such as DNA, sugar and chiral metal-halide perovskites, have their atoms arranged in a chiral symmetry. A "left-handed" oriented chiral system may allow transport of electrons with "up" spins but block electrons with "down" spins, and vice versa.

"If you try to transport electrons through these compounds, then the electron spin becomes aligned with the chirality of the material," Vardeny said. Other spin filters do exist, but they either require some kind of magnetic field, or they can only manipulate electrons in a small area. "The beauty of the perovskite material that we used is that it's two-dimensional--you can prepare many planes of 1 cm2 area that contain one million of a billion (1015) standing molecules with the same chirality."

Metal-halide perovskite semiconductors are mostly used for solar cells these days, as they are highly efficient at converting sunlight to electricity. Because a solar cell is one of the most demanding applications of any semiconductor, scientists are discovering that other uses exist as well, including spin-LEDs.

"We are exploring the fundamental properties of metal-halide perovskites, which has allowed us to discover new applications beyond photovoltaics," said Joseph Luther, a co-author of the new paper and NREL scientist. "Because metal-halide perovskites, and other related metal halide organic hybrids, are some of the most fascinating semiconductors, they exhibit a host of novel phenomena that can be utilized in transforming energy."

Although metal-halide perovskites are the first to prove that chiral-hybrid devices are feasible, they are not the only candidates for spin-LEDs. The general formula for the active spin filter is one layer of an organic, chiral material, another layer of an inorganic metal halide, such as lead iodide, then another organic layer, another inorganic layer, and so on.

"That's beautiful. I'd love that someone will come out with another 2-D organic/inorganic layer material that may do a similar thing. At this stage, it's very general. I'm sure that with time, someone will find a different two-dimensional chiral material that will be even more efficient," Vardeny said.

The concept proves that these two-dimensional chiral-hybrid systems give control over spin without magnets, which has "broad implications for applications such as quantum-based optical computing, bioencoding and tomography," according to Matthew Beard, a senior research fellow and director of the Center for Hybrid Organic Inorganic Semiconductors for Energy.

Credit: 
University of Utah

Cutting-edge scale-out technology from Toshiba will take Fintech and Logistics to new level

image: (a) Scale-out approach: improve computing performance by increasing the numbers of computing chips; (b) All-to-all connection type combinatorial optimization problems: all variables interact with each other.

Image: 
Toshiba Corporation

TOKYO - Toshiba Corporation (TOKYO: 6502), the industry leader in solutions for large-scale optimization problems, today announced a scale-out technology that minimizes hardware limitations, an evolution of its optimization computer, the Simulated Bifurcation Machine (SBM), that supports continued increases in computing speed and scale. Toshiba expects the new SBM to be a game changer for real-world problems that require large scale, high speed and low latency, such as simultaneous financial transactions involving large numbers of stocks, and complex control of multiple robots. The research results were published in Nature Electronics*1 on March 1.

Speed and scale are keys to success in industrial sectors as different as finance, logistics, and communications, all of which have to deal with large numbers and make complex decisions in the shortest time possible. Aiming to bring higher efficiencies to these and other businesses, Toshiba has addressed combinatorial optimization problems by developing high-speed, high-accuracy algorithms and corresponding practical computer solutions*2. The company recently announced a second generation of its simulated bifurcation algorithms, implemented on classical computers via a single field programmable gate array (FPGA), that surpasses quantum computers in obtaining optimal solutions for various combinatorial optimization problems at high speed*3.

Toshiba continues to pursue better performance of the SBM by installing more FPGAs in the computer, an approach called scale-out in computer architecture, and has successfully demonstrated the world's first*4 simultaneous scale-out of computing speed and problem size for all-to-all connection type combinatorial optimization problems*1. At the heart of the technology is a partitioned version of the simulated bifurcation algorithm that enables multiple FPGAs to exchange information on variables with each other and that triggers an autonomous synchronization mechanism, keeping the communication overhead small enough not to affect overall performance (Figures 1 & 2).
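For readers curious what a simulated bifurcation computation looks like, the following is a minimal, single-device sketch of a ballistic simulated-bifurcation loop applied to a small Ising (combinatorial optimization) problem. It is a generic rendering of the published algorithm family, not Toshiba's FPGA implementation, and it omits the partitioning and autonomous synchronization described above; the time step, ramp schedule and coupling constant are illustrative choices.

```python
import numpy as np

def ballistic_sb(J, steps=2000, dt=0.5, a0=1.0, seed=0):
    """Minimal ballistic simulated-bifurcation sketch for an Ising problem.

    J is a symmetric coupling matrix with zero diagonal; the goal is to find
    spins s in {-1, +1} that maximize 0.5 * s @ J @ s.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = rng.uniform(-0.1, 0.1, n)   # oscillator positions
    y = np.zeros(n)                 # oscillator momenta
    c0 = 0.5 / (np.sqrt(n) * (np.std(J) + 1e-12))  # heuristic coupling scale

    for k in range(steps):
        a = a0 * k / steps                        # slowly ramp the bifurcation parameter
        y += dt * (-(a0 - a) * x + c0 * (J @ x))  # momentum update
        x += dt * a0 * y                          # position update
        # Inelastic walls: clamp positions to [-1, 1] and stop the motion there.
        hit = np.abs(x) > 1.0
        x[hit] = np.sign(x[hit])
        y[hit] = 0.0

    return np.sign(x)

# Toy all-to-all problem with random +/-1 couplings.
rng = np.random.default_rng(1)
n = 16
J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
J = J + J.T

s = ballistic_sb(J)
print("spins:", s.astype(int))
print("Ising objective (to maximize):", 0.5 * s @ J @ s)
```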

Trials have shown that an SBM with a cluster of eight FPGAs (Figure 3a) achieves computational throughput 5.4 times higher than an SBM with a single FPGA and can solve problems 16 times larger; simulation results with a 64-FPGA SBM have demonstrated that the relationship between computing speed and the number of FPGAs is exactly linear (Figure 3b), indicating that the technology can continue to scale out with the same effect.

The 8-FPGA SBM also obtains solutions 828 times faster than an implementation of simulated annealing (SA), a widely used optimization technique, demonstrating that the SBM makes much more efficient use of computational resources than SA (Figure 4).

Commenting on the application of the technology, Kosuke Tatsumura, Chief Research Scientist at Toshiba Corporation's Corporate Research & Development Center, said: "Fast computing speed, large computing scale, and low latency to provide solutions are the critical values the new SBM can offer to business. For example, we expect the financial industry can benefit if they can trade more stocks simultaneously, and robots in the logistics industry will perform better with zero-time-lag computation. We hope the new technology will take fintech and logistics to a new level."

Credit: 
Toshiba Corporation

New insulation takes heat off environment

image: Organic chemist Associate Professor Justin Chalker, Flinders University, Australia

Image: 
Flinders University

Waste cooking oil, sulfur and wool offcuts have been put to good use by green chemists at Flinders University to produce a sustainable new kind of housing insulation material.

The latest environmentally friendly building product from experts at the Flinders Chalker Lab and colleagues at Deakin and Liverpool universities has been described in a new paper published in Chemistry Europe ahead of Global Recycling Day (18 March 2021).

The insulating composite was made from the sustainable building blocks of wool fibres, sulfur, and canola oil, producing a promising new model for next-generation insulation - not only capitalising on wool's natural low flammability but also offering significant energy savings for property owners and tenants.

The new composite is one of several exciting new composites and polysulfide polymers made from waste products that are now being commercialised, says lead author Associate Professor Justin Chalker, the New Innovators winner in the 2020 Prime Minister's Prizes for Science.

"The aim of this new study was to evaluate a composite made from sulfur, canola oil, and wool as thermal insulation. The material is prepared by hot pressing raw wool with a polymer made from sulfur and canola oil," Associate Professor Chalker says.

"The promising mechanical and insulation properties of this composite bodes well for further exploration in energy saving insulation in our built environment."

The new study adds to a suite of other composites, such as a new type of building block and a renewable rubber material created in the Chalker Lab.

The long-term biodegradation of these materials in a safe and responsible way at the end of their life is also a target of the research.

The last decade has been described as the hottest on record, and reusing waste is one way to extend the life of billions of tonnes of natural resources consumed every year.

Global company Clean Earth Technologies is commercialising the polymers for a range of applications - from removing mercury contamination from soil and retrieving oil after a large-scale spill, to a polymer that releases fertiliser more slowly to reduce run-off, and a safer method of leaching and extracting gold.

In line with the UN's Sustainable Development Goals 2030, Global Recycling Day recognises individuals, governments and organisations taking direct action to support the global green agenda.

Recycling is a key part of the circular economy, helping to protect our natural resources. Each year the 'Seventh Resource' (recyclables) saves over 700 million tonnes in CO2 emissions and this is projected to increase to 1 billion tonnes by 2030.

Credit: 
Flinders University

Mapping the best places to plant trees

image: This image shows the Reforestation Hub tool.

Image: 
The Nature Conservancy and American Forests

Reforestation could help to combat climate change, but whether and where to plant trees is a complex choice with many conflicting factors. To combat this problem, researchers reporting in the journal One Earth on December 18 have created the Reforestation Hub, an interactive map of reforestation opportunity in the United States. The tool will help foresters, legislators, and natural resource agency staff weigh the options while developing strategies to restore lost forests.

"Often the information we need to make informed decisions about where to deploy reforestation already exists, it's just scattered across a lot of different locations," says author Susan Cook-Patton, a Senior Forest Restoration Scientist at the Nature Conservancy. "Not everybody has the computer science experience to delve into the raw data, so we tried to bring this information together to develop a menu of options for reforestation, allowing people to choose what they would like to see in their community, state, or nation."

The culmination of these efforts is the Reforestation Hub, a web-based interactive map that color-codes individual counties by reforestation opportunity or level of potential for successful reforestation. And the results show that there is a great deal of reforestation opportunity in the United States.

"There are up to 51.6 million hectares (about 200,000 square miles) of opportunity to restore forest across the United States after excluding productive cropland and other places where trees are infeasible," she says. "Those additional forested areas could absorb the emissions equivalent to all the personal vehicles in California, Texas, and New York combined."

In addition to quantifying the amount of land that could yield viable forests, the Hub also identifies trends in how this opportunity is distributed throughout the country.

"While there's no single best place to restore forest cover, we did find a particularly high density of opportunity in the Southeastern United States," says Cook-Patton. "This is a region where carbon accumulation rates are high, costs are low, and there is a lot of opportunity to achieve multiple benefits like creating habitats for biodiversity, improving water quality, and climate mitigation."

The map also quantifies the acreage of 10 individual opportunity classes--or categories based on land ownership and quality. Some of these include pastures, post-burn lands, and floodplains. "The choice to plant trees really depends on what people want out of the landscape, whether it's controlling flood waters, improving urban environments, or recovering forests after a fire," she says.

The researchers hope to create similar maps for other countries, an important next step for combating the global problem of climate change.

"We have about a decade to get climate change in check," Cook-Patton says, "and I am excited about the potential for this study to help accelerate decisions to invest in reforestation as a climate solution."

Credit: 
Cell Press

Computing clean water

Water is perhaps Earth's most critical natural resource. Given increasing demand and increasingly stretched water resources, scientists are pursuing more innovative ways to use and reuse existing water, as well as to design new materials to improve water purification methods. Synthetically created semi-permeable polymer membranes used for contaminant solute removal can provide a level of advanced treatment and improve the energy efficiency of treating water; however, existing knowledge gaps are limiting transformative advances in membrane technology. One basic problem is learning how the affinity, or the attraction, between solutes and membrane surfaces impacts many aspects of the water purification process.

"Fouling -- where solutes stick to and gunk up membranes -- significantly reduces performance and is a major obstacle in designing membranes to treat produced water," said M. Scott Shell, a chemical engineering professor at UC Santa Barbara, who conducts computational simulations of soft materials and biomaterials. "If we can fundamentally understand how solute stickiness is affected by the chemical composition of membrane surfaces, including possible patterning of functional groups on these surfaces, then we can begin to design next-generation, fouling-resistant membranes to repel a wide range of solute types."

Now, in a paper published in the Proceedings of the National Academy of Sciences (PNAS), Shell and lead author Jacob Monroe, a recent Ph.D. graduate of the department and a former member of Shell's research group, explain the relevance of macroscopic characterizations of solute-to-surface affinity.

"Solute-surface interactions in water determine the behavior of a huge range of physical phenomena and technologies, but are particularly important in water separation and purification, where often many distinct types of solutes need to be removed or captured," said Monroe, now a postdoctoral researcher at the National Institute of Standards and Technology (NIST). "This work tackles the grand challenge of understanding how to design next-generation membranes that can handle huge yearly volumes of highly contaminated water sources, like those produced in oilfield operations, where the concentration of solutes is high and their chemistries quite diverse."

Solutes are frequently characterized as spanning a range from hydrophilic, which can be thought of as water-liking and dissolving easily in water, to hydrophobic, or water-disliking and preferring to separate from water, like oil. Surfaces span the same range; for example, water beads up on hydrophobic surfaces and spreads out on hydrophilic surfaces. Hydrophilic solutes like to stick to hydrophilic surfaces, and hydrophobic solutes stick to hydrophobic surfaces. Here, the researchers corroborated the expectation that "like sticks to like," but also discovered, surprisingly, that the complete picture is more complex.

"Among the wide range of chemistries that we considered, we found that hydrophilic solutes also like hydrophobic surfaces, and that hydrophobic solutes also like hydrophilic surfaces, though these attractions are weaker than those of like to like," explained Monroe, referencing the eight solutes the group tested, ranging from ammonia and boric acid, to isopropanol and methane. The group selected small-molecule solutes typically found in produced waters to provide a fundamental perspective on solute-surface affinity.

The computational research group developed an algorithm to repattern surfaces by rearranging surface chemical groups in order to minimize or maximize the affinity of a given solute to the surface, or alternatively, to maximize the surface affinity of one solute relative to that of another. The approach relied on a genetic algorithm that "evolved" surface patterns in a way similar to natural selection, optimizing them toward a particular functional goal.
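To illustrate the flavor of that optimization (and only the flavor), here is a minimal genetic-algorithm sketch that evolves a two-component surface pattern to minimize a stand-in affinity score. The toy fitness function below simply counts like-type neighboring sites; the real study scored affinity from molecular simulations, and the population size, mutation rate and grid size here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8          # surface is an N x N grid of two chemical group types (0 or 1)
POP = 40       # population size
GENS = 200     # generations
MUT = 0.02     # per-site mutation probability

def toy_affinity(surface):
    """Stand-in score (lower treated as better): counts like-type neighbor pairs.

    The actual work evaluated solute-surface affinity with molecular simulations;
    this proxy exists only so the genetic-algorithm machinery can be shown.
    """
    horiz = np.sum(surface[:, :-1] == surface[:, 1:])
    vert = np.sum(surface[:-1, :] == surface[1:, :])
    return horiz + vert

def evolve():
    pop = rng.integers(0, 2, size=(POP, N, N))
    for _ in range(GENS):
        scores = np.array([toy_affinity(s) for s in pop])
        parents = pop[np.argsort(scores)[:POP // 2]]   # keep the fitter half
        children = []
        for _ in range(POP - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.integers(0, 2, size=(N, N)).astype(bool)  # uniform crossover
            child = np.where(mask, a, b)
            flip = rng.random((N, N)) < MUT                      # random mutations
            children.append(np.where(flip, 1 - child, child))
        pop = np.concatenate([parents, np.array(children)])
    scores = np.array([toy_affinity(s) for s in pop])
    return pop[np.argmin(scores)], scores.min()

best_pattern, best_score = evolve()
print("best toy affinity score:", best_score)
print(best_pattern)
```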

Through simulations, the team discovered that surface affinity was poorly correlated to conventional methods of solute hydrophobicity, such as how soluble a solute is in water. Instead, they found a stronger connection between surface affinity and the way that water molecules near a surface or near a solute change their structures in response. In some cases, these neighboring waters were forced to adopt structures that were unfavorable; by moving closer to hydrophobic surfaces, solutes could then reduce the number of such unfavorable water molecules, providing an overall driving force for affinity.

"The missing ingredient was understanding how the water molecules near a surface are structured and move around it," said Monroe. "In particular, water structural fluctuations are enhanced near hydrophobic surfaces, compared to bulk water, or the water far away from the surface. We found that fluctuations drove the stickiness of every small solute types that we tested. "

The finding is significant because it shows that in designing new surfaces, researchers should focus on the response of water molecules around them and avoid being guided by conventional hydrophobicity metrics.

Based on their findings, Monroe and Shell say that surfaces composed of different types of molecular chemistries may be the key to achieving multiple performance goals, such as preventing an assortment of solutes from fouling a membrane.

"Surfaces with multiple types of chemical groups offer great potential. We showed that not only the presence of different surface groups, but their arrangement or pattern, influence solute-surface affinity," Monroe said. "Just by rearranging the spatial pattern, it becomes possible to significantly increase or decrease the surface affinity of a given solute, without changing how many surface groups are present."

According to the team, their findings show that computational methods can contribute in significant ways to next-generation membrane systems for sustainable water treatment.

"This work provided detailed insight into the molecular-scale interactions that control solute-surface affinity," said Shell, the John E. Myers Founder's Chair in Chemical Engineering. "Moreover, it shows that surface patterning offers a powerful design strategy in engineering membranes are resistant to fouling by a variety of contaminants and that can precisely control how each solute type is separated out. As a result, it offers molecular design rules and targets for next-generation membrane systems capable of purifying highly contaminated waters in an energy-efficient manner."

Most of the surfaces examined were model systems, simplified to facilitate analysis and understanding. The researchers say that the natural next step will be to examine increasingly complex and realistic surfaces that more closely mimic actual membranes used in water treatment. Another important step to bring the modeling closer to membrane design will be to move beyond understanding merely how sticky a membrane is for a solute and toward computing the rates at which solutes move through membranes.

Credit: 
University of California - Santa Barbara

Adding triglyceride-lowering Omega-3 based medication to statins may lower stroke risk

DALLAS, March 11, 2021 — Taking the triglyceride-lowering medication icosapent ethyl cut the risk of stroke by an additional 36% in people at increased risk of cardiovascular disease who already have their bad cholesterol levels under control using statin medications, according to preliminary research to be presented at the American Stroke Association’s International Stroke Conference 2021. The virtual meeting is March 17-19, 2021 and is a world premier meeting for researchers and clinicians dedicated to the science of stroke and brain health.

“Icosapent ethyl is a new way to further reduce the risk of stroke in patients with atherosclerosis or who are at high risk of stroke, who have elevated triglyceride levels and are already taking statins,” said Deepak L. Bhatt, M.D., M.P.H., lead author of the study and executive director of interventional cardiovascular programs at the Brigham and Women’s Hospital Heart & Vascular Center in Boston.

Icosapent ethyl is a prescription medication that is a highly purified form of the omega-3 fatty acid eicosapentaenoic acid. “It is very different in terms of purity compared to omega-3 fatty acid supplements available over-the-counter, and these results do not apply to supplements,” said Bhatt, who is also professor of medicine at Harvard Medical School.

Icosapent ethyl was first approved in July 2012 by the U.S. Food and Drug Administration as an adjunct treatment to dietary changes to lower triglycerides in people with extremely high levels of triglycerides (higher than 500 mg/dL). Triglycerides are fats from food that are carried in the blood; normal levels for an adult are below 150 mg/dL.

In late 2018, the REDUCE-IT trial, an 8,000-person multinational study, demonstrated that icosapent ethyl could benefit people with heart disease, diabetes or triglyceride levels above 150 mg/dL and whose LDL (bad) cholesterol levels were already under control using statin medication. In the trial, adding icosapent ethyl (compared with a placebo) reduced the risk of serious cardiovascular events (heart attack, heart-related death, stroke, need for an artery-opening procedure or hospitalization for heart-related chest pain) by 25%.

In December 2019, the FDA approved icosapent ethyl as a secondary treatment to reduce the risk of cardiovascular events among adults with elevated triglyceride levels, and it is now recommended in some professional guidelines. Icosapent ethyl is not included in the American Heart Association’s 2018 Cholesterol Guidelines that were published online prior to the availability of the REDUCE-IT primary results.

In the current work, REDUCE-IT Stroke, researchers performed an additional analysis of the impact of icosapent ethyl on stroke in the same 8,000 participants of the original REDUCE-IT trial. They found:

the risk of a first fatal or nonfatal ischemic stroke was reduced by 36% for patients treated with icosapent ethyl;
for every 1,000 patients treated with icosapent ethyl for 5 years, about 14 strokes were averted; and
the risk of a bleeding stroke was very low, with no difference found between those taking icosapent ethyl and those taking placebo.

“Know your triglyceride levels. If they are elevated, ask your doctor if you should be taking icosapent ethyl to further reduce your risk of heart attack and stroke,” Bhatt said. “Your doctor may also recommend that you change your diet, exercise, lose weight if needed to lower your triglyceride levels, and may prescribe a statin medication if you need to lower your LDL cholesterol levels.”

“One study limitation is that icosapent ethyl may increase the risk of minor bleeding,” Bhatt added.

Credit: 
American Heart Association

Healthy Diet Index supports diet quality assessment and dietary counselling in healthcare

image: Dietary counselling plays a crucial role in the prevention and treatment of chronic lifestyle diseases. The Healthy Diet Index developed by Finnish nutrition experts facilitates the assessment of diet quality.

Image: 
UEF/Raija Törrönen

The Healthy Diet Index developed by Finnish nutrition experts facilitates the assessment of diet quality. Its effectiveness was demonstrated in a recently published study.

Dietary counselling plays a crucial role in the prevention and treatment of chronic lifestyle diseases. In healthcare settings, dietary counselling is often provided by professionals without specific training in nutrition, and there is a demand for tools for reliable and easy assessment of diet quality. One such tool is the Healthy Diet Index developed in the recently completed Stop Diabetes (StopDia) project.

The Healthy Diet Index describes the quality of the diet in relation to nutrition recommendations and to a diet that prevents type 2 diabetes. The scale of the index is from 0 to 100. The Healthy Diet Index also gives a score to different domains of the diet, including meal pattern, grains, fruit and vegetables, fats, fish and meat, dairy, and snacks and treats. The aim was to create a scoring method that is sensitive even to minor changes in eating habits, which facilitates the monitoring of changes and may give additional motivation to implement dietary changes.
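As a rough illustration of how a composite 0-100 score of this kind can be assembled, the sketch below uses the domain names from the description above but invents the per-domain maximum points and the example responses; it is not the published StopDia scoring.

```python
# Hypothetical sketch of a 0-100 composite diet-quality score.
# Domain names follow the article; the per-domain maxima and the example
# responses are invented for illustration only.

DOMAIN_MAX = {
    "meal pattern": 10,
    "grains": 15,
    "fruit and vegetables": 25,
    "fats": 15,
    "fish and meat": 15,
    "dairy": 10,
    "snacks and treats": 10,
}  # maxima sum to 100

def healthy_diet_index(domain_points):
    """Clamp each domain score to its (hypothetical) maximum and sum to 0-100."""
    total = 0
    for domain, maximum in DOMAIN_MAX.items():
        total += max(0, min(domain_points.get(domain, 0), maximum))
    return total

# Example respondent (illustrative numbers only).
respondent = {
    "meal pattern": 8,
    "grains": 10,
    "fruit and vegetables": 14,
    "fats": 12,
    "fish and meat": 11,
    "dairy": 7,
    "snacks and treats": 5,
}
print("Healthy Diet Index (hypothetical scoring):", healthy_diet_index(respondent))
```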

The Healthy Diet Index was created in collaboration between nutrition experts from the Finnish Institute for Health and Welfare, the University of Eastern Finland, Tampere University Hospital and Pirkanmaa Hospital District. The Healthy Diet Index is based on a validated food intake questionnaire previously developed and used as part of Finland's national programme for the prevention and care of diabetes (DEHKO). However, it is difficult to perceive the whole diet on the basis of individual questions. Dietary counselling is easier and more concrete for the client when the diet is assessed as a whole instead of individual nutrients, and when food-specific advice is given.

The recently published study compared the Healthy Diet Index to the nutrient intake calculated from food diaries (n = 77). The researchers also examined the association of the Healthy Diet Index score with the risk factors of chronic diseases in 3,100 people who had an elevated risk of developing type 2 diabetes, and who participated in the StopDia study. The Healthy Diet Index score was found to associate with the intake of energy nutrients, fibre, and several vitamins and minerals. In the StopDia dataset, a higher Healthy Diet Index score was associated with a lower body mass index, waist circumference, and blood glucose and triglyceride levels in both men and women. The study was published in International Journal of Environmental Research and Public Health.

"The results provide support for the importance of dietary changes in the prevention of chronic diseases such as type 2 diabetes. Even minor improvements to eating habits are important for health, when they are repeated daily. The impact on health is visible even if a person's weight does not go down," says Early Stage Researcher Kirsikka Aittola, who is writing a PhD thesis on the StopDia Study at the University of Eastern Finland.

Scoring methods measuring diet quality have also been developed in the past, including the DASH index for the prevention of high blood pressure, but these often require completing a time-consuming food frequency questionnaire.

"The new Healthy Diet Index is fairly similar to previous scoring methods, but it also assesses the meal pattern, which has often been highlighted as a stumbling block in weight management when working with patients. Importantly, the Healthy Diet Index has been created on the basis of the nutrition recommendations," Professor of Nutrition Therapy Ursula Schwab from the University of Eastern Finland says.

The food intake questionnaire is easy and quick to fill out but computing the Healthy Diet Index requires automation.

"An automated and clearly visualised Healthy Diet Index would be an excellent tool for healthcare professionals to support dietary counselling. It would therefore be important to integrate it into electronic healthcare services and different digital care paths. It could also serve as a self-monitoring tool for patients, and it could include clear tips on how to make dietary changes based on one's own responses," Aittola says.

Credit: 
University of Eastern Finland

Children's dietary guidelines need to change, experts say

image: Flinders University Professor Rebecca Golley, Deputy Director of the Caring Futures Institute

Image: 
Flinders University

Dietary and infant feeding guidelines should be strengthened to include more practical advice on the best ways to support children to learn to like and eat vegetables, say nutrition and dietetics researchers from the Flinders University Caring Futures Institute.

With the Australian Health Survey showing only 6% of children aged 2-17 years are eating the recommended amount of veggies, experts say more tailored practical advice is needed on how to offer vegetables to young children through repeated exposure and daily variety in order to increase their intake.

A recent paper co-authored by researchers from the Caring Futures Institute and CSIRO, Australia's national science agency, published in the American Journal of Clinical Nutrition, suggests that up to 10 or more exposures to a particular vegetable when a child is between the ages of six months and five years can lead to greater chances of them liking vegetables and eating more of them.

While the strategy of repeatedly exposing young children to vegetables to assist flavour familiarity and ultimately intake is not new science, there is a gap between evidence and dietary advice.

"There is an opportunity to improve children's vegetable intake by including practical advice - the 'how to' in our recommendations to parents and caregivers," says Flinders Caring Futures Institute Deputy Director and co-lead author of the paper Professor Rebecca Golley.

Prof Golley says food preferences are established within a child's first five years of life. Therefore, it's crucial to establish healthy eating behaviours early to support growth, development, and dietary habits.

"We know that a lack of vegetable consumption across the lifespan has effects on health, including an increased risk of chronic diseases, obesity and being overweight," she says.

"That is why getting children to like a variety of vegetables such as green beans, peas, carrots and even Brussel sprouts from an early age is so important.

"Early eating behaviours are impressionable and babies and young children can be supported to try different foods and to learn to like them."

The paper, 'Supporting strategies for enhancing vegetable liking in the early years of life: an umbrella review of systematic reviews', is an output of the five-year VegKIT project, funded by Hort Innovation and undertaken by a consortium led by CSIRO, including Flinders University and Nutrition Australia Victoria Division.

An umbrella review was undertaken on the diverse body of existing international research around sensory and behavioural strategies that support children to like certain foods including vegetables.

The project examined 11 systematic reviews to determine the effectiveness of strategies including repeated exposure and variety of vegetables, for which promising evidence was found.

Emerging evidence was found for other strategies such as offering vegetables as a first food (not fruit), using non-food rewards to encourage the eating of veggies and reading children vegetable-based story books.

The report also highlights that foundations for vegetable liking can even be laid before a child is born.

"It appears that the maternal diet also plays a part through exposure to vegetable flavours in-utero and increasing children's chances of liking and eating them later, and the same goes for the mothers' diet while breastfeeding," Professor Golley says.

However, she says these strategies must be backed by more research if they are to underpin advice for parents, health professionals and policymakers.

Credit: 
Flinders University

Modulation of photocarrier relaxation dynamics in two-dimensional semiconductors

image: a, Modulation on the Coulomb interaction. (left) Illustration of increased screening of Coulomb interactions in 2D semiconductors. (right) Schematic illustration showing the impact of increased screening of Coulomb interactions on the electronic bandgap (Eg), exciton binding energy (Eb) and optical bandgap (Eopt) of 2D semiconductors. b, Modulation through initial distribution of photocarriers in electronic band structures. (left) The electronic band structure of monolayer TMDs by DFT calculation. The green area shows the band nesting region. (right) Relaxation pathways of photocarriers in monolayer TMDs, where the excitation is from ground state (GS) to the band nesting (BN) region. c, Modulation through interfacial electron-phonon coupling. (left) Illustration of interfacial electron-phonon (e-ph) coupling. (right) Photocarrier dynamics of monolayer MoSe2 on different substrates. d, Modulation through engineering the band alignment of vdW heterostructures. (left) Band alignment of the graphene/MoS2/MoSe2 trilayer sample. (right) Electron transfer from MoSe2 to graphene and its lifetime in the trilayer.

Image: 
by Yuhan Wang, Zhonghui Nie, Fengqiu Wang

Two-dimensional (2D) semiconductors can host a rich set of excitonic species because of their greatly enhanced Coulomb interactions. The excitonic states can exhibit large oscillator strengths and strong light-matter interactions, and they dominate the optical properties of 2D semiconductors. In addition, because of the low dimensionality, excitonic dynamics in 2D semiconductors can be more susceptible to various external stimuli, enriching the possible tailoring methods that can be exploited. Understanding the factors that can influence the dynamics of the optically generated excited states represents an important aspect of excitonic physics in 2D semiconductors, and is also crucial for practical applications, as excited-state lifetimes are linked to the key figures of merit of multiple optoelectronic and photonic devices. While some experience has been accumulated for bulk semiconductors, the atomically thin nature of 2D semiconductors may make these approaches less effective or difficult to adapt. On the other hand, the unique properties of 2D semiconductors, such as the robust excitonic states, the sensitivity to external environmental factors and the flexibility in constructing vdW heterostructures, promise modulation strategies different from those for conventional materials.

In a new review article published in Light: Science & Applications, a team of researchers led by Professor Fengqiu Wang from Nanjing University, China, summarizes the knowledge and progress obtained so far on the modulation of photocarrier relaxation dynamics in 2D semiconductors. After a brief summary of photocarrier relaxation dynamics in 2D semiconductors, the authors first discuss the modulation of Coulomb interactions and the resulting effects on the transient properties. The Coulomb interactions in 2D semiconductors can be modulated by introducing additional screening from the external dielectric environment or from injected charge carriers, leading to the modification of quasi-particle bandgaps and the exciton binding energy. Then the factors influencing photocarrier dynamics and the methods for manipulating them are discussed according to the relaxation pathways or mechanisms they are associated with. The first factor discussed is the initial distribution of photocarriers in the electronic band structure, which can affect their decay processes by enabling different available relaxation pathways in energy and momentum space. After that, defect-assisted and phonon-assisted relaxation are discussed. While the approaches utilizing defect-assisted relaxation, such as ion bombardment and encapsulation, are similar to those for bulk semiconductors, the modulation of phonon-assisted relaxation in 2D semiconductors can be different. "On one hand, the coupling between charge carriers and phonons can be enhanced due to the suppressed dielectric screening; on the other hand, the high surface-to-volume ratio makes 2D materials more susceptible to the external phononic environment." Moreover, the flexibility in constructing vdW heterostructures and the ultrafast charge transfer across their interfaces enable tailoring of the photocarrier dynamics through band alignment engineering. The transition between different particle species also offers the opportunity for modulation by changing the ratios between different quasiparticles, which can modify the relative weight of different relaxation pathways and thus the transient optical responses of the whole sample. Finally, the modulation of the dynamics of spin/valley polarization in 2D TMDs is discussed, focusing mainly on methods to increase the lifetime of the spin/valley polarization.

Through this review, the authors aim to provide guidance for developing robust methods for tuning photocarrier relaxation behaviors and to strengthen the physical understanding of this fundamental process in 2D semiconductors. As the authors comment at the end, "Tremendous research efforts are still needed in both fundamental understanding and practical modulation of the photocarrier relaxation in 2D semiconductors."

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Information transition mechanisms of spatiotemporal metasurfaces

image: a, The conceptual illustration of manipulating the transmission of electromagnetic waves via spatiotemporal metasurface. b, The input phase states of the meta-atom with N=4. c, Cayley diagram of the product group Z_N×Z_q (Z_4×Z_3) that generates N×q (4×3) output phase states. d, The generated output spectral states of each meta-atom. e, The schematic of independent control of spectral responses of the meta-atom at two harmonics, in which each line connects the spectral responses of the meta-atom generated by the same temporal sequence.

Image: 
by Haotian Wu, Xin Xin Gao, Lei Zhang, Guo Dong Bai, Qiang Cheng, Lianlin Li, and Tie Jun Cui

Spatiotemporal metasurfaces, driven by ultrafast dynamic modulations, have opened up new possibilities for manipulating the harmonic modes of electromagnetic waves and for generating exotic physical phenomena, such as dispersion cancellation, broken Lorentz reciprocity, and Doppler illusions. In recent years, the rapid development of information technologies has stimulated many information processing applications for metasurfaces, including computational imaging, wireless communications, and performing mathematical operations. With an increasing amount of research focused on the topic of information processing with metasurfaces, a general theory is urgently required to characterize the information processing abilities of spatiotemporal metasurfaces. In a new paper published in Light: Science & Applications, Prof. Tie Jun Cui's group at Southeast University (SEU) has reported a breakthrough on this topic. In this work, the information transition mechanisms of spatiotemporal metasurfaces are proposed and analyzed, in which group extension and independent control of multiple harmonics are revealed and characterized as the two major information transition mechanisms of spatiotemporal metasurfaces.

Specifically, the group extension mechanism can be adopted to extend the output phase states of each meta-atom by a factor of q, where q is a function of the modulation periodicity, the input phase states, and the harmonic index. Accordingly, the output spectral response states of the spatiotemporal metasurface are greatly extended, such that more accurate manipulation of the electromagnetic information can be obtained without increasing the design complexity of the metasurfaces. Additionally, the researchers demonstrated that independent control of the spectral responses of the spatiotemporal metasurface can be realized as well. The independent control of multiple harmonics could open up new possibilities for metasurface-based multitasking, by which electromagnetic information could be independently processed in frequency-separated channels. A proof-of-concept experiment was performed in the microwave regime to verify the mechanisms of group extension and independent control of multiple harmonics with the spatiotemporal metasurface.
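The harmonic bookkeeping behind such schemes can be illustrated in a generic way (this is not the authors' model): a meta-atom whose reflection phase is switched periodically among N discrete states over q equal time slots produces, at each harmonic of the modulation frequency, an output whose complex amplitude is the corresponding Fourier coefficient of the piecewise-constant waveform. The sketch below evaluates those coefficients numerically for an N = 4, q = 3 sequence, mirroring the Z_4×Z_3 example in the accompanying figure.

```python
import numpy as np

def harmonic_amplitudes(phase_sequence, n_states, max_harmonic=3, samples_per_slot=256):
    """Fourier coefficients of a piecewise-constant, phase-modulated waveform.

    phase_sequence: integers in [0, n_states) giving the phase state used in
    each of the q equal time slots of one modulation period. Returns a dict
    {harmonic m: complex amplitude}, computed numerically.
    """
    states = np.repeat(np.array(phase_sequence), samples_per_slot)
    waveform = np.exp(1j * 2 * np.pi * states / n_states)  # exp(i * phi(t)) over one period
    t = np.arange(waveform.size) / waveform.size            # normalized time in [0, 1)
    return {m: np.mean(waveform * np.exp(-1j * 2 * np.pi * m * t))
            for m in range(-max_harmonic, max_harmonic + 1)}

# Example: a 2-bit meta-atom (N = 4 input phase states) cycled through q = 3 time slots.
coeffs = harmonic_amplitudes([0, 1, 3], n_states=4)
for m in sorted(coeffs):
    c = coeffs[m]
    print(f"harmonic {m:+d}: amplitude {abs(c):.3f}, phase {np.degrees(np.angle(c)):8.2f} deg")
```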

By incorporating the proposed model with Shannon's entropy theory, the authors further derived the information transition efficiencies of the spatiotemporal metasurfaces with respect to the above two mechanisms. The obtained results can be applied to predict the channel capacity of a spatiotemporal metasurface, which would be helpful in guiding the analysis and design of spatiotemporal metasurfaces for wireless communications. Moreover, they demonstrated that the output spectral responses of the spatiotemporal metasurface can help prove Fermat's little theorem, which in turn might provide more clues for understanding the non-vanishing spectral responses of spatiotemporal metasurfaces.

"The proposed theory establishes a quantitative framework to characterize the information transition capabilities of the spatiotemporal metasurfaces, which provides deeper physical insights in understanding the spatiotemporal metasurfaces from the information perspective, and offers new approaches to facilitate the analysis and design. The presented framework and obtained results, with wide-ranged spectral applicability, would be helpful to lay the groundwork for future researches into the regime of information-based spatiotemporal metasurfaces, and expected to enable new information-oriented applications including cognitive harmonic wavefront engineering, intelligent computational imaging, and the 6th generation (6G) wireless communications." the scientists forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Loss induced nonreciprocity

image: a, sketch of a system with an array of main resonance modes a_m connected via a series of connecting modes c_m^((n)), with the decay rate κ_m^((n)); b, sketch of a multichannel coupled system with synthetic frequency dimensions; c, illustration of multichannel interference for realizing nonreciprocity. For the first (second) coupling channel, the coherent coupling phase is φ_m^((1)) (φ_m^((2))) for the forward coupling and -φ_m^((1)) (-φ_m^((2))) for the backward coupling, while the loss phases are θ_m^((1)) (θ_m^((2))) for both the forward and backward couplings. By tuning the phases, constructive (destructive) interference can be achieved for the forward (backward) coupling, giving rise to nonreciprocity and unidirectional energy transmission.

Image: 
by Xinyao Huang, Cuicui Lu, Chao Liang, Honggeng Tao, and Yong-Chun Liu

Optical nonreciprocity, which prevents a light field from returning along its original path after passing through an optical system in one direction, is not only of vast interest to fundamental science, bringing a deeper understanding of Lorentz reciprocity, time-reversal symmetry, and topological effects, but is also of great importance for realizing nonreciprocal optical and electromagnetic devices such as isolators, circulators and directional amplifiers, which are indispensable for applications ranging from optical communication to optical information processing.

However, realizing nonreciprocity is rather difficult, as it requires breaking the Lorentz reciprocity theorem. In other words, the protocols for generating nonreciprocity are limited as a result of the time-reversal symmetry and linear property of Maxwell's equations. Most of the existing approaches to realize nonreciprocity can be grouped into three categories with the following requirements: (i) magnetic-field-induced breaking of time-reversal symmetry; (ii) spatiotemporal modulation of the system permittivity; (iii) nonlinearity. Nevertheless, these principles either meet difficulties in integration, require stringent experimental conditions, or have limited performance.

In a new paper published in Light: Science & Applications, a team of scientists led by Professor Yong-Chun Liu from the State Key Laboratory of Low-Dimensional Quantum Physics, Department of Physics, Tsinghua University, China, has proposed a new principle for realizing nonreciprocity that goes beyond the existing three categories of approaches. The idea is to use energy loss to induce nonreciprocity. The loss in a resonance mode induces a phase lag that is independent of the energy transmission direction. By combining multichannel lossy resonance modes, the interference gives rise to nonreciprocity, with different coupling strengths for the forward and backward directions [Fig. 1].
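A minimal numerical sketch of this two-path interference idea follows. It is a generic toy model, not the authors' full scheme: one channel couples two modes coherently with a tunable phase, while a second channel passes through a lossy intermediate mode whose adiabatic elimination yields an effective coupling with a fixed, direction-independent loss-induced phase. Summing the two channels gives different magnitudes for forward and backward coupling; with the illustrative parameters below, the backward coupling cancels almost completely.

```python
import numpy as np

# Toy two-channel model of loss-induced nonreciprocity (illustrative only).
# Channel 1: direct coherent coupling, J*exp(+i*phi) forward and J*exp(-i*phi) backward.
# Channel 2: coupling mediated by a lossy intermediate mode (couplings g1, g2, decay
# rate kappa); adiabatic elimination of that mode gives an effective coupling
# -1j * 2*g1*g2/kappa whose phase lag is the same for both directions.

def effective_couplings(J, phi, g1, g2, kappa):
    J_diss = -1j * 2.0 * g1 * g2 / kappa         # loss-induced channel (direction independent)
    J_forward = J * np.exp(+1j * phi) + J_diss   # coupling a2 -> a1
    J_backward = J * np.exp(-1j * phi) + J_diss  # coupling a1 -> a2
    return J_forward, J_backward

# Illustrative parameters: match the two channel magnitudes and set the coherent
# phase to -pi/2, so the backward path interferes destructively while the
# forward path adds constructively.
g1 = g2 = 1.0
kappa = 4.0
J = 2.0 * g1 * g2 / kappa
phi = -np.pi / 2

fwd, bwd = effective_couplings(J, phi, g1, g2, kappa)
print(f"|forward coupling|  = {abs(fwd):.3f}")
print(f"|backward coupling| = {abs(bwd):.3f}")  # ~0: effectively unidirectional
```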

At the completely nonreciprocal points, i.e., where the coupling exists in only one direction, the eigenvalues of the system are degenerate and the eigenmodes coalesce, which presents the features of exceptional points.

The unidirectional nonreciprocal coupling directly gives rise to the unidirectional energy transmission between the main resonance modes. [Fig. 2].

In this scheme, no magnetic field, nonlinearity or spatiotemporal modulation of permittivity is required. Instead, simple energy loss is used, which exists ubiquitously in a variety of physical systems and is regarded as harmful and undesirable in most studies. The scheme is advantageous because it works for purely lossy and linear systems, in which neither gain nor nonlinearity is required.

The scheme is generic and can be applied to a variety of systems, such as optical cavities and waveguides, mechanical resonators, atomic ensembles, as well as superconducting circuits. This research paves the way for the design of nonreciprocal devices and the study of topological properties in lossy systems, without stringent requirements.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Optimal design for acoustic unobservability in water

video: Square norm of sound pressure in water around a designed acoustic cloak during topology optimization.

Image: 
GARUDA FUJII, INSTITUTE OF ENGINEERING, SHINSHU UNIVERSITY, JAPAN

Until now, it was only possible to optimize an acoustic cloaking structure for an air environment. However, with this latest research, 'Acoustic cloak designed by topology optimization for acoustic-elastic coupled systems', published in Applied Physics Letters, it is possible to design an acoustic cloak for underwater environments.

In conventional topology optimization of acoustic cloaking, the design method was based on an analysis that approximated an elastic body in air as a rigid body. However, since the approximation holds only for materials that are sufficiently stiff and dense, such as metal in air, there were few material options other than metal. Moreover, it was impossible to design an acoustic cloak in water with the approximation method.

In this study led by Garuda Fujii of Shinshu University, the group developed topology optimization based on the finite element analysis of coupled acoustic-elastic wave propagation. By considering the interaction between the vibration of the elastic body and the sound wave in the optimization calculation, it is now possible to select the material that constitutes the acoustic cloak from light ABS and other materials and to design the acoustic cloak for use in air and water. Furthermore, the group successfully designed wide frequency band acoustic cloaks optimized respectively for each environment, aerial and underwater.

This novel research has made it possible to select the constituent materials of the acoustic cloak and the surrounding acoustic medium environment (air or underwater) with a high degree of functionality. It is expected that the functions of acoustic cloaking will be greatly expanded.

Credit: 
Shinshu University