Tech

Wild bees depend on the landscape structure

image: Costanza Geppert sampling bees at the edge of a field full of grain.

Image: 
Bettina Donkó

Sowing strips of wildflowers along conventional cereal fields, as well as the increased density of flowers in organic farming, encourages bumblebees, solitary wild bees and hoverflies. Bumblebee colonies benefit from flower strips along small fields, but in organic farming, they benefit from large fields. This research was carried out by agroecologists from the University of Göttingen in a comparison of different farming systems and landscape types. The results of the study have been published in the Journal of Applied Ecology.

Organic farming and flower strips are financially supported by the European Union in order to enhance populations of wild bees and hoverflies, which are major pollinators of most crops and wild plants. The research team selected nine landscapes in the vicinity of Göttingen along a gradient of increasing field size and then analysed the wild bees and hoverflies in each landscape at the edge of an organic wheat field, in a flower strip along conventional wheat, and at the edge of a conventional wheat field without flower strips. The result: most pollinators were found in the flower strips, but organic fields, characterized by more flowering wild plants than conventional fields, were also beneficial. Bumblebee colonies established on the margins of fields as part of the project produced more queens in flower strips when located in landscapes with small conventional fields. In contrast, flower-rich organic fields were particularly advantageous when they covered large areas. Flower strips offer a high local density of pollen and nectar, but organic fields compensate for this through their larger area.

"The results show that action at both local and landscape level is important to promote wild bees," emphasises Costanza Geppert, first author of the study. The investigations were part of her Master's thesis in the Agroecology Group in the Department of Crop Sciences at the University of Göttingen. "Wild bees and other insects cannot survive in a field simply by making improvements to that field; they depend on the structure of the surrounding landscape," adds Head of Department Professor Teja Tscharntke. "Therefore, future agri-environmental schemes should take more account of the overall landscape structure," adds Dr Péter Batáry, who initiated the study.

Credit: 
University of Göttingen

NASA catches a short-lived Eastern Pacific Depression 4E

image: NASA-NOAA's Suomi NPP satellite provided forecasters with a visible image of Tropical Depression 4E on June 29, 2020. The depression was located southwest of the southernmost tip of Baja California, Mexico.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

Tropical Depression 4E formed late on June 29 and it is forecast to become a remnant low-pressure area by the end of the day on June 30. NASA-NOAA's Suomi NPP satellite provided forecasters with an image of the depression, located just southwest of the southern tip of Mexico's Baja Peninsula.

Tropical Depression Four-E (TD4E) formed on June 29 at 11 p.m. EDT (June 30 at 0300 UTC). It maintained depression status on June 30 and is forecast to weaken.
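The EDT-to-UTC conversion in the advisory can be reproduced with Python's standard zoneinfo module; using the IANA key "America/New_York" for Eastern Daylight Time is an assumption of this sketch, not something stated in the release:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# 11 p.m. EDT on June 29 corresponds to 0300 UTC on June 30,
# since EDT is UTC-4.
formed_edt = datetime(2020, 6, 29, 23, 0, tzinfo=ZoneInfo("America/New_York"))
formed_utc = formed_edt.astimezone(timezone.utc)
print(formed_utc)  # 2020-06-30 03:00:00+00:00
```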

Visible imagery from NASA satellites helps forecasters understand whether a storm is organizing or weakening. The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP provided a visible image of TD4E late on June 29 that showed it remained disorganized and lopsided. The imagery showed that the bulk of clouds and precipitation was displaced more than 90 nautical miles northeast of the center of circulation. Clouds associated with the depression were streaming over the southern tip of the Baja Peninsula.

At 5 a.m. EDT (0900 UTC) on June 30, NOAA's National Hurricane Center (NHC) noted the center of Tropical Depression Four-E was located near latitude 20.6 degrees north and longitude 113.2 degrees west. That is about 265 miles (425 km) southwest of the southern tip of Baja California, Mexico. The estimated minimum central pressure was 1005 millibars.
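The quoted distance can be checked with a standard great-circle (haversine) calculation from the advisory coordinates. The coordinates used below for Cabo San Lucas at the Baja tip are approximate and are an assumption of this sketch, not taken from the source:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

# TD4E center per the NHC advisory vs. an approximate position for
# Cabo San Lucas at the southern tip of Baja California.
d = haversine_km(20.6, -113.2, 22.88, -109.91)
print(round(d), "km")  # roughly 425 km, consistent with the advisory
```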

Maximum sustained winds had decreased to near 30 mph (45 kph) with higher gusts. The low should gradually spin down during the next day or two over cool waters while it moves slowly northwestward.

NHC forecasters said the system is forecast to become a remnant low later in the day and degenerate into a trough of low pressure in a couple of days.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

Pandemic resource allocation needs to address health inequity

If some regions become hot spots and hospitals reach maximum capacity during the COVID-19 pandemic, hospitals have plans for how to decide who gets critical care resources, such as a bed in the intensive care unit or a ventilator. Many hospitals recommend distributing resources to the healthiest patients who are most likely to survive. However, Johns Hopkins Medicine physicians and bioethicists say that using this kind of selection method preferentially chooses people who are white or affluent over patients who are Black, Latino or from the inner city.

In a commentary published June 22 in The Lancet, the Johns Hopkins team provides recommendations for how hospitals can provide equitable care during pandemic resource allocation, such as by requiring regular bias training and creating periodic checkpoints to assess inequities in the system.

"Prejudice, institutional racism and redlining over generations have led to drastic health inequities in Baltimore and many other cities around the country, making these populations of people inherently sicker," says Panagis Galiatsatos, M.D., M.H.S., assistant professor of medicine at the Johns Hopkins University School of Medicine. "We wanted to make sure that we developed a plan that ensures that resources are fairly distributed and that we weren't contributing to existing inequalities. And we want to be able to share these guidelines with other hospitals so they can also be prepared to make humane decisions for their patient communities."

The American College of Chest Physicians and the Society of Critical Care Medicine recommend using the sequential organ failure assessment (SOFA) to determine which patients are the healthiest and should get resources, based on factors such as whether the patient has health issues such as heart failure or diabetes. However, the Johns Hopkins physicians say this model hasn't been effectively evaluated for the COVID-19 pandemic. They also say that although the method works on entire populations, it hasn't been tested on individual disadvantaged groups. And, because the criteria disproportionately affect minority groups and the poor, the researchers say, the proposed system needs adjusting.

The first thing the team recommends is to have unconscious-bias training for the people in critical care medicine making the decisions about who gets resources.

Next, the team says hospitals need to periodically assess their survival numbers by income, race and other socioeconomic factors. Then, they must have an outside committee that includes community members to assess where there are weaknesses in the system and develop strategies to address these deficiencies. For example, some populations might need more time with a specific resource than affluent, white patients because people in the group may otherwise be more likely to die.

Galiatsatos is available to discuss how current resource allocation methods cast aside vulnerable populations. He can also talk about the methods his team suggests to address the inequities.

Credit: 
Johns Hopkins Medicine

Plant tissue engineering improves drought and salinity tolerance

image: Arabidopsis thaliana, thale cress, engineered to behave like a succulent, improving water-use efficiency and salinity tolerance and reducing the effects of drought. The changes in leaf anatomy are expected to aid crassulacean acid metabolism (CAM) plant engineering work.

Image: 
Photo courtesy of University of Nevada, Reno

RENO, Nev.--After several years of experimentation, scientists have engineered thale cress, or Arabidopsis thaliana, to behave like a succulent, improving water-use efficiency and salinity tolerance and reducing the effects of drought. The tissue succulence engineering method devised for this small flowering plant can be used in other plants to improve drought and salinity tolerance, with the goal of moving this approach into food and bioenergy crops.

"Water-storing tissue is one of the most successful adaptations in plants that enables them to survive long periods of drought. This anatomical trait will become more important as global temperatures rise, increasing the magnitude and duration of drought events during the 21st century," said University of Nevada, Reno Biochemistry and Molecular Biology Professor John Cushman, co-author of a new scientific paper on plant tissue succulence published in the Plant Journal.

The work will be combined with another of Cushman's projects: engineering another trait called crassulacean acid metabolism (CAM), a water-conserving mode of photosynthesis that can be applied to plants to improve water-use efficiency.

"The two adaptations work hand-in-hand," Cushman, of the University's College of Agriculture, Biotechnology & Natural Resources, said. "Our overall goal is to engineer CAM, but in order to do this efficiently we needed to engineer a leaf anatomy that had larger cells to store malic acid that accumulates in the plant at night. An added bonus was that these larger cells also served to store water to overcome drought and to dilute salt and other ions taken up by the plant, making them more salt tolerant."

"When a plant takes up carbon dioxide, it takes it in through pores on the leaf, called stomata. Plants open their stomata so carbon dioxide goes in, and it then gets fixed into sugars and all the other compounds that support most of life on Earth. But when stomata open, not only does carbon dioxide come in, water vapor also goes out, and because plants transpire to cool themselves, they lose enormous amounts of water."

By overexpressing a gene known to scientists as VvCEB1, which is involved in the cell expansion phase of berry development in wine grapes, Cushman's team created genetically modified A. thaliana with increased cell size, resulting in larger plants with thicker leaves, more water-storage capacity, and fewer, less open stomatal pores that limit water loss from the leaf.

The resulting tissue succulence serves two purposes.

"Larger cells have larger vacuoles to store malate at night, which serves as the carbon source for carbon dioxide release and refixation, by what's called Rubisco enzyme action, during the day behind closed stomatal pores, thereby limiting photorespiration and water loss," Cushman said. "And the succulent tissue traps the carbon dioxide that is released during the day from the decarboxylation of malate so that it can be refixed more efficiently by Rubisco."

One of the major benefits of VvCEB1 gene overexpression was the observed improvements in whole-plant instantaneous and integrated water-use efficiency, which increased up to 2.6-fold and 2.3-fold, respectively. Water-use efficiency is the ratio of carbon fixed or biomass produced to the rate of transpiration or water loss by the plant. These improvements were correlated with the degree of leaf thickness and tissue succulence, as well as lower stomatal pore density and reduced pore openings.
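The water-use-efficiency figures follow directly from the ratio defined above. The gas-exchange numbers in this sketch are hypothetical values chosen only to illustrate how a 2.6-fold improvement would arise; they are not measurements from the study:

```python
# Instantaneous water-use efficiency (WUE) is the ratio of carbon fixed
# (CO2 assimilation) to water lost (transpiration).
def wue(assimilation_umol, transpiration_mmol):
    return assimilation_umol / transpiration_mmol  # umol CO2 per mmol H2O

# Hypothetical gas-exchange values: the engineered line fixes slightly
# more carbon while losing half the water through fewer, smaller pores.
wild_type = wue(assimilation_umol=10.0, transpiration_mmol=4.0)   # 2.5
engineered = wue(assimilation_umol=13.0, transpiration_mmol=2.0)  # 6.5

fold_change = engineered / wild_type
print(f"{fold_change:.1f}-fold improvement")  # 2.6-fold
```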

"We tried a number of candidate genes, but we only observed this remarkable phenotype with the VvCEB1 gene," Cushman said. "We typically will survey between 10 to 30 independent transgenic lines, and then these are grown for two to three generations before detailed testing."

Arabidopsis thaliana is a powerful model for the study of growth and development processes in plants. It is a small, weed-like plant with a short generation time of about six weeks that grows well under laboratory conditions, where it produces large amounts of seeds.

Engineered tissue succulence is expected to provide an effective strategy for improving water-use efficiency, drought avoidance or attenuation, salinity tolerance and for optimizing performance of CAM.

CAM plants are very smart, keeping their stomata closed during the day, and only opening them at night when evapotranspiration is low because it is cooler and the sun is not shining, Cushman explained. The significance of CAM is found in its unique ability to conserve water. Where most plants would take in carbon dioxide during the day, CAM plants do so at night.

"Essentially, CAM plants are five to six times more water-use efficient, whereas most plants are very water inefficient," he said. "The tissue succulence associated with CAM and other adaptive traits like thicker cuticles and the accumulation of epicuticular waxes, means that they can reduce leaf heating during the day by reflecting some of the light hitting the leaf. Many desert-adapted CAM plants also have a greater ability to tolerate high temperatures."

With demand for agricultural products expected to increase by as much as 70% to serve a growing human population, which is predicted to reach about 9.6 billion by 2050, Cushman and his team are pursuing these biotechnology solutions to address potential future food and bioenergy shortages.

"We plan to move both tissue succulence and CAM engineering into crop plants. This current work is proof-of-concept," Cushman said.

Credit: 
University of Nevada, Reno

Just add sugar: How a protein's small change leads to big trouble for cells

JUNE 30, 2020, NEW YORK CITY -- In molecular biology, chaperones are a class of proteins that help regulate how other proteins fold. Folding is an important step in the manufacturing process for proteins. When they don't fold the way they're supposed to, it can lead to the development of diseases such as cancer.

Researchers at the Sloan Kettering Institute have uncovered important findings about what causes chaperones to malfunction as well as a way to fix them when they go awry. The discovery points the way to a new approach for developing targeted drugs for cancer and other diseases, including Alzheimer's disease.

"Our earlier work showed that defects in chaperones could lead to widespread changes in cells, but no one knew exactly how it happened," says SKI scientist Gabriela Chiosis, senior author of a study published June 30 in Cell Reports. "This paper finally gets into the nuts and bolts of that biochemical mechanism. I think it's a pretty big leap forward."

How a "Good Guy" Turns Bad

The research focused on a chaperone called GRP94, which plays an important role in regulating how cells respond to stress. Stress in cells is a common sign of disease, especially those related to aging, such as cancer and Alzheimer's. Dr. Chiosis has studied the role of chaperones and stress in both of these disorders for many years.

In the new study, Dr. Chiosis and her colleagues looked at changes in GRP94 in cancer cells, including cells from patients treated for breast cancer at Memorial Sloan Kettering. They found that when GRP94 undergoes a process called glycosylation, in which a sugar molecule is added, it completely changes the way that chaperone behaves.

"It goes from a protein that was very floppy and flexible to one that's very rigid," explains Dr. Chiosis, a member of SKI's Chemical Biology Program. "This one change is enough to convert it from a good guy in the cell to a bad guy. That, in turn, can make the cell behave in a way that's not normal."

When GRP94 undergoes this change, it moves to a different part of the cell. Normally, it's found in the endoplasmic reticulum, where proteins are made and folded. But after the sugar is added, it moves to the part of the cell called the plasma membrane. This leads to widespread dysfunction of proteins and a more aggressive cancer.

Finding a Prototype for Future Drugs

The researchers report in the paper that they have already identified a small molecule, called PU-WS13, that acts on GRP94 in the plasma membrane. This molecule appears to repair the defects in GRP94, allowing it to behave normally again.

"The changes that we saw only happen in diseased cells, such as cancer cells or those related to Alzheimer's," Dr. Chiosis says. "That makes them a good target for therapies because healthy cells are unlikely to be affected."

But Dr. Chiosis explains that more research is needed before a new drug can be developed based on this approach. "PU-WS13 is just a prototype," she says. "It has to be tailored for use in humans. We're investigating how to make this into something that might work as a drug."

This study was funded by the National Institutes of Health under grants R01 CA172546, R56 AG061869, R01 CA186866, P30 CA08748, and P50 CA192937; William H. Goodwin and Alice Goodwin and the Commonwealth Foundation for Cancer Research, through the Experimental Therapeutics Center of MSK; the Lymphoma Research Foundation; and the Steven A. Greenberg Charitable Trust.

Dr. Chiosis and several of her co-authors are inventors on patents covering PU-WS13 and related science. Dr. Chiosis is also a founder of Samus Therapeutics and a member of its board of directors.

Credit: 
Memorial Sloan Kettering Cancer Center

No touching: Skoltech researchers find contactless way to measure thickness of carbon nanotube films

image: Preparing SWCNT films for the experiment.

Image: 
Pavel Odinev / Skoltech

Scientists from Skoltech and their colleagues from Russia and Finland have figured out a non-invasive way to measure the thickness of single-walled carbon nanotube films, which may find applications in a wide variety of fields from solar energy to smart textiles.

A single-walled carbon nanotube (SWCNT) is essentially a one-atom-thick sheet of carbon rolled into a tube. SWCNTs are an allotrope (a physical form) of carbon, much like fullerenes, graphene, diamond, and graphite. They hold a lot of promise in various industrial applications, ranging from solar cells and LEDs to ultrafast lasers, transparent electrodes, and smart textiles.

All these applications, however, require rather precise measurements of SWCNT film thickness and optical properties. "Film thickness is quite important for many applications and is usually characterized by how much light can be transferred through the film in the visible spectral range: the higher the transparency, the thinner the film. However, precise control over film thickness and optical constants is critical when one needs to design efficient transparent electrodes. For instance, we need to know the thickness to improve the antireflection properties of a surface based on a transparent SWCNT window layer for solar cells. To estimate and subsequently utilize the mechanical properties of SWCNT films, we need to predict the geometrical dimensions of the films," says Professor Albert Nasibulin, head of the Laboratory of Nanomaterials at the Skoltech Center for Photonics and Quantum Materials.

Existing methods for optical constant measurements include absorption and electron energy-loss spectroscopies, while geometric parameters can be determined by transmission electron microscopy, scanning electron microscopy or atomic force microscopy. These methods are resource-inefficient and require sample preparation, which might affect the very properties of SWCNT films that one is trying to measure.

A team of researchers led by Albert Nasibulin of Skoltech and Aalto University was able to design a rapid, contactless, and universal technique for accurate estimation of both SWCNT film thickness and their dielectric functions. They figured out a workaround to use spectroscopic ellipsometry (SE), a non-destructive, fast, and very sensitive measurement technique, for SWCNT films.

"Ellipsometry is an indirect method that we can use to determine film parameters, and standard methods of data processing are not always applicable here. At first glance, a carbon nanotube thin film is a very difficult object for this technique: consisting of many millions of randomly oriented nanometer-sized individual and bundled tubes, it has strong absorption in the entire spectral range, low reflection and anisotropy in its optical properties. Nevertheless, the first author of the paper, Georgy Ermolaev, a student of a joint Skoltech-MIPT Master's program, has found an elegant algorithm to retrieve the thickness and optical constants in a single set of optical measurements," says Yuriy Gladush, one of the coauthors of the paper.

The researchers manufactured SWCNT films of varying thickness and absorption between 90% and 45% at 550 nm and determined the broadband (250-3300 nm) refractive index and corresponding thickness of the films.
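The transparency-thickness link that motivates the work can be sketched with a simple Beer-Lambert model, in which transmittance falls off exponentially with thickness. The absorption coefficient below is hypothetical and the 90%-45% figures are treated here as transmittance values at 550 nm; the paper itself derives the actual optical constants from ellipsometry rather than from this simplified relation:

```python
from math import log

# If the film attenuates light as T = exp(-alpha * d), then its
# thickness follows from transmittance as d = -ln(T) / alpha.
ALPHA_550NM = 0.035  # hypothetical absorption coefficient, 1/nm

def thickness_nm(transmittance):
    return -log(transmittance) / ALPHA_550NM

# More transparent films come out thinner, as the quote above notes.
for t in (0.90, 0.70, 0.45):
    print(f"T = {t:.0%} -> d ~ {thickness_nm(t):.0f} nm")
```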

"It was expected that optical properties would depend on the packing density of the carbon nanotubes in the film, but the surprise was in how large this effect is. A single droplet of ethanol can compress or densify the film and change the refractive index from 1.07 to 1.7, opening simple opportunities to adjust the optical properties of the SWCNT films," Albert Nasibulin adds.

The team believes other scientists can build on their work and, among other things, apply the approach beyond the realm of carbon nanotubes to other kinds of nanostructured films.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Decoding material wear with supercomputers

video: A metal surface (copper and nickel) under stress: different kinds of deformation are visible. Changes in the granular structure of the metal are simulated on a supercomputer with atomic resolution.

Image: 
TU Wien

Wear and friction are crucial issues in many industrial sectors: What happens when one surface slides across another? Which changes must be expected in the material? What does this mean for the durability and safety of machines?

What happens at the atomic level cannot be observed directly. However, an additional scientific tool is now available for this purpose: For the first time, complex computer simulations have become so powerful that wear and friction of real materials can be simulated on an atomic scale.

In a current publication in the renowned scientific journal "ACS Applied Materials & Interfaces", the tribology team at TU Wien (Vienna), led by Prof. Carsten Gachot, has now shown that this new research field delivers reliable results. The behavior of surfaces consisting of copper and nickel was simulated with high-performance computers. The results correspond amazingly well with images from electron microscopy - but they also provide valuable additional information.

Friction Changes Tiny Grains

To the naked eye, it does not look particularly spectacular when two surfaces slide across each other. But on the microscopic level, highly complicated processes take place: "Metals, as they are used in technology, have a special microstructure," explains Dr. Stefan Eder, first author of the current publication. "They consist of small grains with a diameter of the order of micrometers or even less."

When one metal slides over the other under high shear stress, the grains of the two materials come into intense contact with each other: they can be rotated, deformed or shifted, they can be broken up into smaller grains or grow due to increased temperature or mechanical force. All these processes, which take place on a microscopic scale, ultimately determine the behavior of the material on a large scale - and thus they also determine the service life of a machine, the amount of energy lost in a motor due to friction, or how well a brake works, in which the highest possible friction force is desired.

Computer Simulation and Experiment

"The result of these microscopic processes can then be examined in an electron microscope," says Stefan Eder. "You can see how the grain structure of the surface has changed. However, it has not yet been possible to study the time evolution of these processes and explain exactly what causes which effects at which point in time."

This gap is now being closed by large molecular dynamics simulations developed by the tribology team at TU Wien in cooperation with the Excellence Centre of Tribology (AC²T) in Wiener Neustadt and Imperial College London: Atom by atom, the surfaces are simulated on the computer. The larger the simulated chunk of material and the longer the simulated time period, the more computer power is needed. "We simulate sections with a side length of up to 85 nanometers, over a period of several nanoseconds," says Stefan Eder. That doesn't sound like much, but it's remarkable: Even the Vienna Scientific Cluster 4, Austria's largest supercomputer, may sometimes be busy with such tasks for months at a time.
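A back-of-envelope estimate shows why an 85-nanometer box is so demanding. Assuming a cubic simulation cell and copper's atomic number density (a textbook value of roughly 8.5e28 atoms per cubic meter, not a figure from the article), the box contains tens of millions of atoms, each of which must be tracked at every femtosecond-scale timestep:

```python
# Estimate the number of atoms in a cubic MD simulation cell of copper.
side_m = 85e-9           # 85 nm side length, per the quote above
atoms_per_m3 = 8.5e28    # approximate atomic density of copper (assumed)

n_atoms = atoms_per_m3 * side_m ** 3
print(f"~{n_atoms:.1e} atoms")  # on the order of 5e7
```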

The team investigated the wear of alloys of copper and nickel - and did so using different mixing ratios of the two metals and different mechanical loads. "Our computer simulations revealed exactly the variety of processes, microstructural changes and wear effects that are already known from experiments," says Stefan Eder. "We can use our simulations to produce images that correspond exactly to the images from the electron microscope. However, our method has a decisive advantage: We can then analyze the process in detail on the computer. We know which atom changed its place at what point in time, and what exactly happened to which grain in which phase of the process."

Understanding Wear - Optimizing Industrial Processes

The new methods are already met with great interest from industry: "For years, there has been an ongoing discussion that tribology could benefit from reliable computer simulations. Now we have reached a stage where the quality of the simulations and the available computing power are so great that we could use them to answer exciting questions that would otherwise not be accessible," says Carsten Gachot. In the future, they also want to analyze, understand, and improve industrial processes on the atomic level.

Credit: 
Vienna University of Technology

Scientists urge business and government to treat PFAS chemicals as a class

BERKELEY, Calif.--All per- and polyfluorinated substances (PFAS) should be treated as one class and avoided for nonessential uses, according to a peer-reviewed article published today in Environmental Science & Technology Letters.

The authors--16 scientists from universities, the U.S. National Institutes of Health, the European Environment Agency, and NGOs--say the extreme persistence and known toxicity of PFAS that have been studied render traditional chemical-by-chemical management dangerously inadequate. The article lays out how businesses and government can apply a class-based approach to reduce harm from PFAS, including fluoropolymers, which are large molecules.

"With thousands of PFAS in existence, assessing and managing their risks individually is like trying to drink from a fire hose," said Tom Bruton, Senior Scientist at the Green Science Policy Institute. "Phased-out PFAS that were used to make products like non-stick cookware have been replaced with other PFAS that have turned out to be similarly toxic. By avoiding the entire class of PFAS, we can avoid further rounds of replacing a banned substance with a chemical cousin which is also later banned."

Studied PFAS have been associated with cancer, decreased fertility, endocrine disruption, immune system harms, adverse developmental effects, and other serious health problems. The authors note that people are exposed to multiple PFAS at once, and there is little research on the effects of combined exposures.

Less than one percent of PFAS have been tested for toxicity, but all PFAS are either extremely persistent in the environment or break down into extremely persistent PFAS. Cleaning up contamination can take decades to centuries or more, and every time an individual "forever chemical" has been studied, it has been found to be harmful.

"When it comes to harm from PFAS, it is much more than our own health that's at stake. It is the health of our children, grandchildren and generations to come--indeed, of every creature on our planet," said Arlene Blum, Executive Director of the Green Science Policy Institute. "The longer we continue the unnecessary use of PFAS, the more likely the overall future harm to our world will rival, or even surpass, that of the coronavirus."

The article notes that some companies have already employed a class-based approach to PFAS. For example, IKEA phased out all PFAS in its textile products and Levi Strauss & Co. has committed to a similar phase-out.

"We're proud that our class-based approach to chemicals has helped protect our customers and the environment, for example by removing all PFAS from IKEA's textiles in 2016," said Therese Lilliebladh of IKEA. "It also helps us stay ahead of the curve and avoid falling into a problematic cycle of substituting a similar chemical for one that has been phased out."

Some government bodies have banned the entire class of PFAS for use in some products. For example, Maine and Washington have banned all PFAS in food contact materials and Denmark has banned PFAS from paper-based food packaging. The authors recommend expanding such regulation to all nonessential uses.

Contrary to recent PFAS manufacturer messaging, the authors emphasize that fluorinated polymers should be included in a class-based approach to PFAS. "These large molecule chemicals can release smaller toxic PFAS and other hazardous substances into the environment throughout their lifecycle, from production, to use, to disposal," said author Carol Kwiatkowski, an Adjunct Assistant Professor at North Carolina State University. "Fluoropolymer microplastics also contribute to global plastic and microplastics debris."

"PFAS are a complex class of chemicals, but there is a clear pattern of persistence and potential for health harm that unites them all," said retired NIEHS Director Linda Birnbaum. "The use of any PFAS should be avoided whenever possible."

Credit: 
Green Science Policy Institute

A revolutionary new treatment alternative to corneal transplantation

The team was co-led by May Griffith, a researcher at Maisonneuve-Rosemont Hospital Research Centre, which is affiliated with Université de Montréal and is part of the CIUSSS de l'Est-de-l'Île-de-Montréal.

The results of this multinational project have just been published in the journal Science Advances.

"Our work has led to an effective and accessible solution called LiQD Cornea to treat corneal perforations without the need for transplantation," said Griffith. She is also a full professor in the Department of Ophthalmology at Université de Montréal.

"This is good news for the many patients who are unable to undergo this operation due to a severe worldwide shortage of donor corneas," she said.

"Until now, patients on the waiting list have had their perforated corneas sealed with a medical-grade super glue, but this is only a short-term solution because it is often poorly tolerated in the eye, making transplantation necessary."

A synthetic, biocompatible and adhesive liquid hydrogel, LiQD Cornea, is applied as a liquid, but quickly adheres and gels within the corneal tissue. The LiQD Cornea promotes tissue regeneration, thus treating corneal perforations without the need for transplantation.

Griffith praised the work of her trainees, Christopher McTiernan and Fiona Simpson, and her collaborators from around the world who have helped create a potentially revolutionary treatment to help people with vision loss avoid going blind.

"Vision is the sense that allows us to appreciate how the world around us looks," said Griffith. "Allowing patients to retain this precious asset is what motivates our actions as researchers every day of the week."

For Sylvain Lemieux, president and CEO of the CIUSSS de l'Est-de-l'Île-de-Montréal, "this innovative treatment in ophthalmology confirms the level of expertise of the Centre universitaire d'ophtalmologie de l'Université de Montréal (CUO) at the Maisonneuve-Rosemont Hospital (HMR).

"The HMR has one of the largest teams of ophthalmologists in Quebec and one of the best-equipped ophthalmology research laboratories in North America," he said. "The hard work of our scientists and clinicians contributes daily to best practices and knowledge development.

"The multiple therapeutic possibilities resulting from our fundamental research, particularly in regenerative medicine, benefit and give hope to people suffering from ophthalmological diseases not only in Quebec, but in the rest of the world," he concluded.

Credit: 
University of Montreal

Food taxes and subsidies would bring major health gains, study shows

image: This is professor Tony Blakely, Honorary Professor in Public Health at the University of Otago and Professor at the University of Melbourne.

Image: 
University of Otago

A consumer tax on the saturated fat, salt and sugar content of food, accompanied by a 20 per cent subsidy on fruit and vegetables, would bring major benefits for the health sector, researchers from Otago, Auckland and Melbourne Universities say.

The researchers have published estimates of the health gains and cost savings to New Zealand's health system which a combined tax and subsidy scheme would bring. Their research is published today in the international science journal, The Lancet Public Health.

Lead author Professor Tony Blakely, who is an Honorary Professor in Public Health at the University of Otago, and Professor at the University of Melbourne, says there would be large potential health gains from such a scheme, which would be cost neutral for consumers.

"They range from 212 health-adjusted life years gained per 1,000 New Zealanders for the fruit and vegetable subsidy, up to 361 health-adjusted life years for the saturated fat tax, 375 health-adjusted life years for a salt tax, and 581 health-adjusted life years for the sugar tax."

A food tax and subsidy scheme would also contribute to closing health inequalities by bringing greater per capita gains for Māori.

"A tax on sugar alone could save the health system NZ$14.7 billion over the lifetime of New Zealand's 2011 population."

Professor Blakely says with obesity being one of the leading causes of health loss and health system costs, the health gains from a tax and subsidy regime would be larger than those projected for a 10 per cent a year increase in tobacco tax between 2011 and 2025.

"All the interventions we modelled led to notable reductions in most chronic disease rates - the largest being a one quarter to one third reduction in the incidence of diabetes in 30 years' time."

In order to estimate the potential health gains, the researchers used simulation modelling of the entire New Zealand population until their death under a 'business-as-usual' scenario. They then layered on the impacts of food taxes and subsidies on the rates of 17 diet-related diseases to estimate the changes in health-adjusted life years and health system expenditure over the 2011 New Zealand population's remaining lifespan.
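The modelling approach described above can be sketched in miniature (all rates and the single disease below are invented for illustration; the actual study modelled 17 diet-related diseases with far richer epidemiology): a closed cohort is advanced year by year to death under business-as-usual incidence and under a taxed scenario with lower incidence, and the accumulated health-adjusted life years are compared.

```python
# Toy cohort sketch of the simulation idea: healthy people can become
# diseased (which discounts their life years by a disability weight and
# raises their mortality) or die; a food tax lowers disease incidence.
# All parameter values are hypothetical.

def cohort_halys(pop, mortality, incidence, disability_weight):
    """Advance a healthy/diseased/dead cohort one year at a time,
    accumulating health-adjusted life years (HALYs) over 120 years."""
    healthy, diseased, halys = float(pop), 0.0, 0.0
    for _ in range(120):
        halys += healthy + diseased * (1 - disability_weight)
        new_cases = healthy * incidence
        healthy = healthy * (1 - mortality) - new_cases
        diseased = (diseased + new_cases) * (1 - 2 * mortality)  # excess mortality
    return halys

POP = 1000
baseline = cohort_halys(POP, mortality=0.02, incidence=0.010, disability_weight=0.2)
taxed = cohort_halys(POP, mortality=0.02, incidence=0.007, disability_weight=0.2)
print(f"HALYs gained per {POP} people: {taxed - baseline:.0f}")
```

The taxed scenario accumulates more HALYs because fewer people ever enter the diseased state; the published model layers many such disease pathways onto realistic New Zealand life tables.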

The researchers used the 2008-2009 New Zealand Nutrition Survey from the University of Otago's Department of Human Nutrition for information about New Zealanders' business-as-usual diet.

A co-author on the paper, Dr Cristina Cleghorn, from the University of Otago, Wellington, says food taxes and subsidies are some of the most powerful tools which can be used to improve health.

"Mexico, for instance, introduced an eight per cent tax on junk food in 2014, which reduced purchasing of these foods by six per cent. These measures are both feasible and effective."

Dr Cleghorn says that with food production and distribution a major source of greenhouse gas emissions, a tax and subsidy regime could contribute to reducing New Zealand's carbon footprint, moving the country towards a diet that is both healthy and sustainable.

There are downsides to food taxes and subsidies, including administration and other costs, Professor Blakely says.

"Nevertheless the health gains and cost savings from food taxes and subsidies require serious policy consideration and public discussion."

Credit: 
University of Otago

The "eyes" say more than the "mouth" and can distinguish English sounds

image: The horizontal axis represents the scores of the English auditory distinguishing test performed in advance, and the vertical axis represents the pupil dilation response difference when playing sounds of English words including L and R.

Image: 
COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY. ALL RIGHTS RESERVED.

Overview

A research team from the Department of Computer Science and Engineering and the Electronics-Inspired Interdisciplinary Research Institute at Toyohashi University of Technology has discovered that differences in the ability to hear and distinguish English words containing L and R, which are considered difficult for Japanese speakers, appear in the responses of the pupil (the black part of the eye). While the pupil's role is to adjust the amount of light entering the eye, its size changes are also known to reflect human cognitive states. In this study, the team measured pupil size while playing English words in pairs such as "light" and "right", and showed that the ability to distinguish English words can be estimated objectively from the eyes.

Details

Improving English proficiency is drawing attention in various fields as society becomes more globalized. However, Japanese speakers are said to have particular difficulty both pronouncing and aurally distinguishing L and R, sounds that do not exist in Japanese, as in "glass" and "grass". Just as words that cannot be recognized cannot be pronounced, each person's ability to hear and distinguish English sounds is a very important indicator for effective English learning.

As in the proverb "The eyes say more than the mouth", the human pupil is known to reflect a variety of our cognitive states. The research team therefore tried to estimate the ability to hear and distinguish L and R by focusing on the pupillary dilation response, in which the pupil dilates in response to a difference in sound. Specifically, the team occasionally mixed English words containing R (e.g., "right") into a continuous stream of English words containing L (e.g., "light"), and investigated how the participants' pupils responded to those sounds. Participants were classified into two groups according to their scores on an English auditory distinguishing test performed in advance, and the pupillary responses of the two groups were compared. The group with a strong ability to hear and distinguish L and R showed a larger pupillary response than the group with a weak ability. It was also found that this pupillary response alone could estimate, with extremely high accuracy, the listening ability measured by the advance test. Participants were not required to pay attention to the English words; they simply let them play, yet their ability to hear and distinguish could be estimated from their pupillary responses alone. The researchers believe that in the future this finding could provide a new indicator for easily estimating a person's ability to hear and distinguish English.

"Up to now, the evaluation of an individual's English listening ability has been carried out by actually administering a test in which they listen to English words and their answers are scored as correct or incorrect. However, we focused on the pupil, a biological signal, with the goal of extracting objective measures of ability that do not depend on the participants' responses. Although the English words including L and R, which participants were not attending to, were potentially easy for all participants to hear and distinguish, pupillary responses differed according to their abilities. I believe this indicates that our pupillary responses may be reflecting differences in unconscious language processing," explained lead author Yuya Kinzuka, a PhD candidate.

Professor Shigeki Nakauchi, who is the leader of the research team, explained, "It was difficult for even the person themselves to recognize their listening ability, which sometimes led to a decrease in training motivation. However, this research has made it possible for not only the person themselves, but also a third party to visualize the listening ability of the learner objectively from the outside. I expect that in the future, objective measurement of the ability to hear and distinguish things will progress in various fields such as language and music." In addition, research member Professor Tetsuto Minami explains, "This discovery shows that not only simple sounds such as pure tones, but also higher-order factors such as differences in utterances are reflected in pupillary response. I expect that it will be useful as an English learning method if it is possible to improve the ability to hear and distinguish things by controlling pupil dilation from the outside."

Future Outlook

The research team has indicated that a new method for estimating, from pupillary responses, the ability to hear and distinguish English could lead to a system for efficiently studying the distinction between L and R that Japanese learners find difficult. Furthermore, learning difficulties caused by sounds that do not exist in the native language also arise when, for example, an English speaker learns Chinese, so the team ultimately hopes this will become a new indicator for estimating language ability that is not limited to Japanese learners. These results are also expected to be useful for language learning in patients with movement or speech disorders, as participants need not pay attention to, or physically respond to, the English words.

Credit: 
Toyohashi University of Technology (TUT)

Ethics and AI: An unethical optimization principle

Artificial intelligence (AI) is increasingly deployed around us and may have large potential benefits. But there are growing concerns about the unethical use of AI. Professor Anthony Davison, who holds the Chair of Statistics at EPFL, and colleagues in the UK have tackled these questions from a mathematical point of view, focusing on commercial AI systems that seek to maximize profits.

One example is an insurance company using AI to find a strategy for deciding premiums for potential customers. The AI will choose from many potential strategies, some of which may be discriminatory or may otherwise misuse customer data in ways that later lead to severe penalties for the company. Ideally, unethical strategies such as these would be removed from the space of potential strategies beforehand, but the AI does not have a moral sense, so it cannot distinguish between ethical and unethical strategies.

In work published in Royal Society Open Science on 1 July 2020, Davison and his co-authors Heather Battey (Imperial College London), Nicholas Beale (Sciteb Limited) and Robert MacKay (University of Warwick), show that an AI is likely to pick an unethical strategy in many situations. They formulate their results as an "Unethical Optimization Principle":

If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk.

This principle can help risk managers, regulators or others to detect the unethical strategies that might be hidden in a large strategy space. In an ideal world one would configure the AI to avoid unethical strategies, but this may be impossible because they cannot be specified in advance. In order to guide the use of the AI, the article suggests how to estimate the proportion of unethical strategies and the distribution of the most profitable strategies.
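The intuition behind the principle can be shown with a toy simulation (a hedged sketch, not the authors' formal model): if even a small fraction of strategies is unethical, and those strategies carry a slight edge in risk-adjusted return because they exploit information ethical strategies forgo, an optimizer that simply picks the maximum will select unethical strategies far more often than their share of the space.

```python
# Simulate an optimizer choosing the best-returning strategy from a
# space in which 2% of strategies are unethical but have a small
# advantage in expected return. All numbers are illustrative.
import random

random.seed(42)

N_STRATEGIES = 1000
UNETHICAL_FRACTION = 0.02
TRIALS = 2000

picks_unethical = 0
for _ in range(TRIALS):
    best_return, best_is_unethical = float("-inf"), False
    for i in range(N_STRATEGIES):
        unethical = i < UNETHICAL_FRACTION * N_STRATEGIES
        mu = 1.05 if unethical else 1.00  # small unethical edge
        r = random.gauss(mu, 0.10)
        if r > best_return:
            best_return, best_is_unethical = r, unethical
    if best_is_unethical:
        picks_unethical += 1

print(f"Unethical share of strategy space: {UNETHICAL_FRACTION:.0%}")
print(f"Unethical share of optimizer's picks: {picks_unethical / TRIALS:.0%}")
```

The optimizer's picks are unethical far more often than 2 percent of the time, which is exactly the disproportion the principle describes; penalizing that risk in the objective function shrinks the gap.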

"Our work can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden in a large strategy space. Such a space can be expected to contain disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them," says Professor Davison. "It also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected during the learning process."

Professor Wendy Hall of the University of Southampton, known worldwide for her work on the potential practical benefits and problems brought by AI, said: "This is a really important paper. It shows that we can't just rely on AI systems to act ethically because their objectives seem ethically neutral. On the contrary, under mild conditions, an AI system will disproportionately find unethical solutions unless it is carefully designed to avoid them.

"The tremendous potential benefits of AI will only be realised properly if ethical behaviour is designed in from the ground up, taking account of this Unethical Optimisation Principle from a diverse set of perspectives. Encouragingly, this Principle can also be used to help find ethical problems with existing systems which can then be addressed by better design."

Credit: 
Ecole Polytechnique Fédérale de Lausanne

About half of people use health technology to communicate with their health providers

image: New research from Regenstrief Institute and Indiana University School of Medicine shows that 47 percent of people are using technology to communicate with their healthcare providers, and less than a quarter are having conversations with their providers about using health information technology (HIT).

Image: 
Regenstrief Institute

INDIANAPOLIS -- New research shows that 47 percent of people are using technology to communicate with their healthcare providers, and less than a quarter are having conversations with their providers about using health information technology (HIT). Regenstrief Institute and Indiana University research scientists say these numbers indicate there are more opportunities to engage patients in this type of communication.

"The results of our statewide survey indicate patients are using health information technology," said Joy L. Lee, PhD, first author of the paper and Regenstrief research scientist. "However, they aren't talking to their provider about it. One of the few widely agreed upon recommendations for electronic communication in healthcare is for providers to be talking to their patients about it ahead of time. This does not appear to be happening regularly, and may be impacting the use of this technology." Dr. Lee is also an assistant professor of medicine at Indiana University School of Medicine.

"How patients and providers are using technology to communicate may have changed over the last few months in response to the COVID-19 outbreak, but having a shared agenda about how to communicate, what is appropriate to send as a message, and being able to discuss it openly is still important to foster the electronic patient-provider relationship," Dr. Lee continued.

Researchers sent surveys to Indiana University Health patients across the state of Indiana asking about their use of technology to communicate with their doctor. Data analysis showed 47 percent had used HIT for communication in the last year. Of those respondents,

31 percent reported using an electronic health record messaging system

24 percent used email

18 percent used text messages

Because this survey was statewide, researchers say it gives a more representative snapshot of health behaviors. These numbers are similar to other research results from across the country. The numbers show a shift toward secure messaging, which is the platform health systems are encouraging people to use because of its integration with the electronic health record.

Out of all the respondents, only 21 percent reported having a conversation with their doctor or provider about how to correspond digitally.

"This lack of conversation may lead to patients not taking advantage of these online communication platforms which have strong potential for patient engagement," said David Haggstrom, M.D., senior author on the paper and interim director of the Regenstrief Institute William M. Tierney Center for Health Services Research. "Individuals may be more likely to use messaging if they know what subjects are appropriate and how their provider might respond. We need to look at providing more support for both patients and providers to facilitate these conversations. The need for remote communication has been dramatically highlighted in the rapidly changing healthcare environment associated with COVID-19." Dr. Haggstrom is also an associate professor at the IU School of Medicine and a researcher at the Indiana University Melvin and Bren Simon Comprehensive Cancer Center.

Credit: 
Regenstrief Institute

Novel potassium channel activator which acts as a potential anticonvulsant discovered

image: Figures: Identification of a G protein-independent activator (GiGA1) of GIRK channels

Image: 
Paul Slesinger, PhD

Bottom Line: Potassium channels play an important role in controlling the electrical activity of neurons in the brain. We identified and characterized a new compound, called GiGA1, that selectively opens a subclass of potassium channels and can functionally suppress seizures in an epilepsy animal model.

Results: GiGA1 was identified in a library of ~750 thousand chemical compounds, using computer-based modeling and virtual screening of an alcohol-binding pocket in the channel. GiGA1 preferentially activates a subset of GIRK channels, those containing the GIRK1 and GIRK2 subunits, most commonly found in the brain. GiGA1 also activates natively expressed GIRK channels in hippocampal neurons and in turn reduces neuronal excitability. Systemic administration of GiGA1 exhibits anti-seizure properties in an acute epilepsy animal model.

Why the Research Is Interesting: Two reasons. First, the study highlights an integrated approach of identifying ion channel modulators with subunit-specificity. Second, the newly discovered activator of GIRK channels could provide potential treatments for brain disorders, including epilepsy and alcohol use disorder.

Who: Activation of ion channels in the brain, called GIRK channels, with a selective chemical compound reduces seizures in mice.

When: A systemic injection of GiGA1 15-30 min prior to inducing convulsions significantly reduced the severity of seizures.

What: The study identified and characterized GiGA1-induced GIRK activity and studied its effect on brain neurons and on suppressing seizures in an epilepsy animal model.

How: Utilizing a structural model of an alcohol-bound GIRK channel for virtual screening and subsequent functional analysis, we identified a GIRK channel activator (GiGA1). Single-cell electrophysiology demonstrated that GiGA1 has similar properties to alcohol, but is more potent, and activates natively expressed GIRK channels in the hippocampus. Systemic administration of GiGA1 prior to inducing convulsions with pentylenetetrazole (PTZ) significantly reduced the number and severity of seizures.

Study Conclusions: We identified GiGA1 through a virtual screen with ~750 thousand small-molecule compounds and a structural model of the alcohol pocket in GIRK channels. GiGA1 selectively activated GIRK1-containing channels, and, like alcohol, was G protein-independent. GiGA1 activated endogenous GIRK channels in the brain, and suppressed drug-induced seizures.

Paper Title: Identification of a G protein-independent activator (GiGA1) of GIRK channels

Said Mount Sinai's Dr. Paul Slesinger of the research:
This study 'walks' people through the process of identifying new compounds that selectively modify neuronal activity in the brain. We focused on a particular type of ion channel, called a GIRK channel, which provides a major source of inhibition in the brain. GIRK channels are implicated in a variety of neurological disorders, and are known to be directly affected by alcohol. We took advantage of our structural understanding of how alcohol modulates GIRK channels, and ultimately identified a subunit-specific compound that activates GIRK channels and mitigates convulsant activity. This study was a collaborative effort with the groups of Dr. Schlessinger at Mount Sinai and Dr. Marugan at the National Center for Advancing Translational Sciences at NIH.

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

Geologists identify deep-earth structures that may signal hidden metal lodes

image: A new study shows that giant ore deposits are tightly distributed above where rigid rocks that comprise the nuclei of ancient continents begin to thin, far below the surface (white areas). Redder areas indicate the thinnest rocks beyond the boundary; bluer ones, the thickest. Circles, triangles and squares show known large sediment-hosted deposits of different metals.

Image: 
Adapted from Hoggard et al., Nature Geoscience, 2020

If the world is to maintain a sustainable economy and fend off the worst effects of climate change, at least one industry will soon have to ramp up dramatically: the mining of metals needed to create a vast infrastructure for renewable power generation, storage, transmission and usage. The problem is, demand for such metals is likely to far outstrip both currently known deposits and the capacity of existing exploration technology to find more ore bodies.

Now, in a new study, scientists have discovered previously unrecognized structural lines 100 miles or more down in the earth that appear to signal the locations of giant deposits of copper, lead, zinc and other vital metals lying close enough to the surface to be mined, but too far down to be found using current exploration methods. The discovery could greatly narrow down search areas, and reduce the footprint of future mines, the authors say. The study appears this week in the journal Nature Geoscience.

"We can't get away from these metals: they're in everything, and we're not going to recycle everything that was ever made," said lead author Mark Hoggard, a postdoctoral researcher at Harvard University and Columbia University's Lamont-Doherty Earth Observatory. "There's a real need for alternative sources."

The study found that 85 percent of all known base-metal deposits hosted in sediments, and 100 percent of all "giant" deposits (those holding more than 10 million tons of metal), lie above deeply buried lines girdling the planet that mark the edges of ancient continents. Specifically, the deposits lie along boundaries where the earth's lithosphere (the rigid outermost cladding of the planet, comprising the crust and upper mantle) thins out to about 170 kilometers below the surface.

Up to now, all such deposits have been found essentially at the surface, and their locations have seemed somewhat random. Most discoveries have been made by geologists combing the ground and whacking at rocks with hammers. Geophysical exploration methods that use gravity and other parameters to find buried ore bodies have come into use in recent decades, but the results have been underwhelming. The new study presents geologists with a high-tech treasure map telling them where to look.

Due to the demands of modern technology and the growth of populations and economies, demand for base metals over the next 25 years is projected to outpace all the base metals mined so far in human history. Copper is used in basically all electronics wiring, from cell phones to generators; lead in photovoltaic cells, high-voltage cables, batteries and supercapacitors; and zinc in batteries, as well as in fertilizers in regions where it is a limiting factor in soils, including much of China and India. Many base-metal mines also yield rarer needed elements, including cobalt, iridium and molybdenum. One recent study suggests that in order to develop a sustainable global economy, between 2015 and 2050 electric passenger vehicles must increase from 1.2 million to 1 billion; battery capacity from 0.5 gigawatt-hours to 12,000; and photovoltaic capacity from 223 gigawatts to more than 7,000.

The new study started in 2016 in Australia, where much of the world's lead, zinc and copper is mined. The government funded work to see whether mines in the northern part of the continent had anything in common. It built on the fact that in recent years, scientists around the world have been using seismic waves to map the highly variable depth of the lithosphere, which ranges down to 300 kilometers in the nuclei of the most ancient, undisturbed continental masses, and tapers to near zero under the younger rocks of the ocean floors. As continents have shifted, collided and rifted over many eons, their subsurfaces have developed scar-like lithospheric irregularities, many of which have now been mapped.

The study's authors found that the richest Australian mines lay neatly along the line where thick, old lithosphere grades out to 170 kilometers as it approaches the coast. They then expanded their investigation to some 2,100 sediment-hosted mines across the world, and found an identical pattern. Some of the 170-kilometer boundaries lie near current coastlines, but many are nestled deep within the continents, having formed at various points in the distant past when the continents had different shapes. Some are up to 2 billion years old.

The scientists' map shows such zones looping through all the continents, including areas in western Canada; the coasts of Australia, Greenland and Antarctica; the western, southeastern and Great Lakes regions of the United States; and much of the Amazon, northwest and southern Africa, northern India and central Asia. While some of the identified areas already host enormous mines, others are complete blanks on the mining map.

The authors believe that the metal deposits formed when thick continental rocks stretched out and sagged to form a depression, like a wad of gum pulled apart. This thinned the lithosphere and allowed seawater to flood in. Over long periods, these watery low spots got filled in with metal-bearing sediments from adjoining, higher-elevation rocks. Salty water then circulated downward until reaching depths where chemical and temperature conditions were just right for metals picked up by the water in deep parts of the basin to precipitate out and form giant deposits, anywhere from 100 meters to 10 kilometers below the then-surface. The key ingredient was the depth of the lithosphere. Where it is thickest, little heat from the hot lower mantle rises to potential near-surface ore-forming zones, and where it is thinnest, a lot of heat gets through. The 170-kilometer boundary seems to be the Goldilocks zone for creating just the right temperature conditions, as long as the right chemistry is also present.
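The search idea can be sketched with a toy transect (the thickness profile below is invented, not the study's data): scan a lithosphere-thickness profile for the points where it crosses the roughly 170-kilometer boundary, since that is where the giant sediment-hosted deposits cluster.

```python
# Locate where a lithosphere-thickness transect crosses the ~170 km
# "Goldilocks" boundary. Profile points are (distance along transect
# in km, lithosphere thickness in km), thinning from an ancient
# continental core toward the coast; values are hypothetical.
profile = [
    (0, 280), (100, 260), (200, 230), (300, 195),
    (400, 168), (500, 120), (600, 80), (700, 40),
]

TARGET = 170  # km thickness marking the prospective boundary

prospective = []
for (x0, t0), (x1, t1) in zip(profile, profile[1:]):
    if min(t0, t1) <= TARGET <= max(t0, t1):
        # Linear interpolation to locate the crossing point
        x = x0 + (x1 - x0) * (t0 - TARGET) / (t0 - t1)
        prospective.append(round(x))

print(f"Prospective exploration targets near x = {prospective} km")
```

The real analysis does this in two dimensions over seismically mapped thickness models, producing the looping boundary zones shown on the study's map.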

"It really just hits the sweet spot," said Hoggard. "These deposits contain lots of metal bound up in high-grade ores, so once you find something like this, you only have to dig one hole." Most current base-metal mines are sprawling, destructive open-pit operations. But in many cases, deposits starting as far down as a kilometer could probably be mined economically, and these would "almost certainly be taken out via much less disruptive shafts," said Hoggard.

The study promises to open up exploration in areas that have so far been poorly explored, including parts of Australia, central Asia and western Africa. Based on a preliminary report of the new study that the authors presented at an academic conference last year, a few companies appear to have already claimed ground in Australia and North America. But the mining industry is notoriously secretive, so it is not yet clear how widespread such activity might be.

"This is a truly profound finding and is the first time anyone has suggested that mineral deposits formed in sedimentary basins ... at depths of only kilometers in the crust were being controlled by forces at depths of hundreds of kilometers at the base of the lithosphere," said a report in Mining Journal reviewing the preliminary presentation last year.

The study's other authors are Karol Czarnota of Geoscience Australia, who led the initial Australian mapping project; Fred Richards of Harvard University and Imperial College London; David Huston of Geoscience Australia; and A. Lynton Jaques and Sia Ghelichkhan of Australian National University.

Hoggard has put the study into a global context on his website: https://mjhoggard.com/2020/06/29/treasure-maps

Credit: 
Columbia Climate School