Tech

The first AI universe sim is fast and accurate -- and its creators don't know how it works

image: A comparison of the accuracy of two models of the universe. The new model (left), dubbed D3M, is both faster and more accurate than an existing method (right) called second-order perturbation theory, or 2LPT. The colors represent the average displacement error in millions of light-years for each point in the grid relative to a high-accuracy (though much slower) model.

Image: 
S. He et al./Proceedings of the National Academy of Sciences 2019

For the first time, astrophysicists have used artificial intelligence techniques to generate complex 3D simulations of the universe. The results are so fast, accurate and robust that even the creators aren't sure how it all works.

"We can run these simulations in a few milliseconds, while other 'fast' simulations take a couple of minutes," says study co-author Shirley Ho, a group leader at the Flatiron Institute's Center for Computational Astrophysics in New York City and an adjunct professor at Carnegie Mellon University. "Not only that, but we're much more accurate."

The speed and accuracy of the project, called the Deep Density Displacement Model, or D3M for short, wasn't the biggest surprise to the researchers. The real shock was that D3M could accurately simulate how the universe would look if certain parameters were tweaked -- such as how much of the cosmos is dark matter -- even though the model had never received any training data where those parameters varied.

"It's like teaching image recognition software with lots of pictures of cats and dogs, but then it's able to recognize elephants," Ho explains. "Nobody knows how it does this, and it's a great mystery to be solved."

Ho and her colleagues present D3M June 24 in the Proceedings of the National Academy of Sciences. The study was led by Siyu He, a Flatiron Institute research analyst.

Ho and He worked in collaboration with Yin Li of the Berkeley Center for Cosmological Physics at the University of California, Berkeley, and the Kavli Institute for the Physics and Mathematics of the Universe near Tokyo; Yu Feng of the Berkeley Center for Cosmological Physics; Wei Chen of the Flatiron Institute; Siamak Ravanbakhsh of the University of British Columbia in Vancouver; and Barnabás Póczos of Carnegie Mellon University.

Computer simulations like those made by D3M have become essential to theoretical astrophysics. Scientists want to know how the cosmos might evolve under various scenarios, such as if the dark energy pulling the universe apart varied over time. Such studies require running thousands of simulations, making a lightning-fast and highly accurate computer model one of the major objectives of modern astrophysics.

D3M models how gravity shapes the universe. The researchers opted to focus on gravity alone because it is by far the most important force when it comes to the large-scale evolution of the cosmos.

The most accurate universe simulations calculate how gravity shifts each of billions of individual particles over the entire age of the universe. That level of accuracy takes time, requiring around 300 computation hours for one simulation. Faster methods can finish the same simulations in about two minutes, but the shortcuts required result in lower accuracy.

Ho, He and their colleagues honed the deep neural network that powers D3M by feeding it 8,000 different simulations from one of the highest-accuracy models available. Neural networks take training data and run calculations on the information; researchers then compare the resulting outcome with the expected outcome. With further training, neural networks adapt over time to yield faster and more accurate results.
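The compare-and-adapt cycle described above can be illustrated with a deliberately tiny sketch. Everything here is a hypothetical stand-in (D3M itself is a deep network trained on 8,000 full simulations), but the loop structure is the same: run a calculation on the training data, compare the result with the expected outcome, and adjust to become more accurate.

```python
# Toy illustration of the training loop described above (NOT D3M's actual
# network): a model with a single adjustable weight repeatedly runs a
# calculation, compares the result with the expected outcome, and nudges
# the weight to shrink the error.
data = [(x, 3.0 * x) for x in range(1, 11)]  # inputs paired with expected outputs

w = 0.0     # the model starts knowing nothing about the true factor (3.0)
lr = 0.005  # learning rate: how strongly each error nudges the weight
for _ in range(200):
    for x, expected in data:
        predicted = w * x             # run the calculation
        error = predicted - expected  # compare with the expected outcome
        w -= lr * error * x           # adapt to yield a more accurate result

print(round(w, 3))  # converges to the true factor, 3.0
```

Real networks like D3M's adjust millions of weights at once, but the principle of reducing the gap between predicted and expected outputs is the same.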

After training D3M, the researchers ran simulations of a box-shaped universe 600 million light-years across and compared the results to those of the slow and fast models. Whereas the slow-but-accurate approach took hundreds of hours of computation time per simulation and the existing fast method took a couple of minutes, D3M could complete a simulation in just 30 milliseconds.

D3M also churned out accurate results. When compared with the high-accuracy model, D3M had a relative error of 2.8 percent. Using the same comparison, the existing fast model had a relative error of 9.3 percent.
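For scale, the quoted timings and error rates work out as follows. This is a simple arithmetic check using only the numbers reported in the article, not the paper's full benchmarks:

```python
# Back-of-the-envelope comparison of the three approaches, using the
# figures quoted in the article.
HIGH_ACCURACY_S = 300 * 3600   # high-accuracy model: ~300 computation hours
FAST_2LPT_S = 2 * 60           # existing fast method (2LPT): ~2 minutes
D3M_S = 0.030                  # D3M: ~30 milliseconds

speedup_vs_accurate = HIGH_ACCURACY_S / D3M_S
speedup_vs_fast = FAST_2LPT_S / D3M_S
error_ratio = 9.3 / 2.8        # relative error: fast method vs. D3M

print(f"D3M vs high-accuracy model: {speedup_vs_accurate:,.0f}x faster")
print(f"D3M vs existing fast model: {speedup_vs_fast:,.0f}x faster")
print(f"Fast method's error is {error_ratio:.1f}x larger than D3M's")
```

In other words, D3M is tens of millions of times faster than the reference model and thousands of times faster than the existing fast method, while cutting the relative error by roughly a factor of three.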

D3M's remarkable ability to handle parameter variations not found in its training data makes it an especially useful and flexible tool, Ho says. In addition to modeling other forces, such as hydrodynamics, Ho's team hopes to learn more about how the model works under the hood. Doing so could yield benefits for the advancement of artificial intelligence and machine learning, Ho says.

"We can be an interesting playground for a machine learner to use to see why this model extrapolates so well, why it extrapolates to elephants instead of just recognizing cats and dogs," she says. "It's a two-way street between science and deep learning."

Credit: 
Simons Foundation

The RoboBee flies solo

video: Changes to the Robobee -- including an additional pair of wings and improvements to the actuators and transmission ratio -- made the vehicle more efficient and allowed the addition of solar cells and an electronics panel. This Robobee is the first to fly without a power cord and is the lightest untethered vehicle to achieve sustained flight.

Image: 
Harvard Microrobotics Lab/Harvard SEAS

In the Harvard Microrobotics Lab, on a late afternoon in August, decades of research culminated in a moment of stress as the tiny, groundbreaking Robobee made its first solo flight.

Graduate student Elizabeth Farrell Helbling, PhD '19, and postdoctoral fellow Noah T. Jafferis from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), the Graduate School of Arts and Sciences and The Wyss Institute for Biologically Inspired Engineering caught the moment on camera.

Helbling, who has worked on the project for six years, counted down.

"Three, two, one, go."

The bright halogens switched on and the solar-powered Robobee launched into the air. For a terrifying second, the tiny robot, still without on-board steering and control, careened towards the lights.

Off camera, Helbling exclaimed and cut the power. The Robobee fell dead out of the air, caught by its Kevlar safety harness.

"That went really close to me," Helbling said, with a nervous laugh.

"It went up," Jafferis, who has also worked on the project for about six years, responded excitedly from the high-speed camera monitor where he was recording the test.

And with that, Harvard University's Robobee reached its latest major milestone -- becoming the lightest vehicle ever to achieve sustained untethered flight.

"This is a result several decades in the making," said Robert Wood, Charles River Professor of Engineering and Applied Sciences at SEAS, Core Faculty member of the Wyss Institute and principal investigator of the Robobee project. "Powering flight is something of a Catch-22 as the tradeoff between mass and power becomes extremely problematic at small scales where flight is inherently inefficient. It doesn't help that even the smallest commercially available batteries weigh much more than the robot. We have developed strategies to address this challenge by increasing vehicle efficiency, creating extremely lightweight power circuits, and integrating high efficiency solar cells."

The milestone is described in Nature.

To achieve untethered flight, this latest iteration of the Robobee underwent several important changes, including the addition of a second pair of wings.

"The change from two to four wings, along with less visible changes to the actuator and transmission ratio, made the vehicle more efficient, gave it more lift, and allowed us to put everything we need on-board without using more power," said Jafferis.

(The addition of the wings also earned this Robobee the nickname X-Wing, after the four-winged starfighters from Star Wars.)

That extra lift, with no additional power requirements, allowed the researchers to cut the power cord -- which has kept the Robobee tethered for nearly a decade -- and attach solar cells and an electronics panel to the vehicle.

The solar cells, the smallest commercially available, weigh 10 milligrams each and deliver 0.76 milliwatts of power per milligram in full sunlight. The Robobee X-Wing needs the power of about three Earth suns to fly, making outdoor flight out of reach for now. Instead, the researchers simulate that level of sunlight in the lab with halogen lights.

The solar cells are connected to an electronics panel under the bee, which converts the low voltage signals of the solar array into high voltage drive signals needed to control the actuators. The solar cells sit about three centimeters above the wings, to avoid interference.

In all, the final vehicle, with the solar cells and electronics, weighs 259 milligrams (about a quarter of a paper clip) and uses about 120 milliwatts of power, which is less power than it would take to light a single bulb on a string of LED Christmas lights.
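A quick back-of-the-envelope check ties these numbers together. The sketch below assumes (beyond the article's figures) only that cell output scales linearly with light intensity; it suggests why roughly triple sunlight keeps the solar array's mass within the 259-milligram weight budget:

```python
import math

CELL_MASS_MG = 10.0          # smallest commercially available cell
CELL_POWER_MW_PER_MG = 0.76  # output per milligram at full (1-sun) intensity
FLIGHT_POWER_MW = 120.0      # total power the X-Wing draws in flight

def cells_needed(sun_intensity):
    """Cells required to supply flight power at a given solar intensity
    (in multiples of full sunlight), assuming output scales linearly."""
    per_cell_mw = CELL_MASS_MG * CELL_POWER_MW_PER_MG * sun_intensity
    return math.ceil(FLIGHT_POWER_MW / per_cell_mw)

for suns in (1, 3):
    n = cells_needed(suns)
    print(f"{suns} sun(s): {n} cells, {n * CELL_MASS_MG:.0f} mg of solar array")
```

Under one sun, roughly 16 cells (about 160 mg of array) would be needed, a prohibitive share of the 259 mg total; at triple intensity, around 6 cells suffice. The actual cell count on the X-Wing is not stated in the article, so these counts are illustrative only.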

"When you see engineering in movies, if something doesn't work, people hack at it once or twice and suddenly it works. Real science isn't like that," said Helbling. "We hacked at this problem in every which way to finally achieve what we did. In the end, it's pretty thrilling."

The researchers will continue to hack away, aiming to bring down the power and add on-board control to enable the Robobee to fly outside.

"Over the life of this project we have sequentially developed solutions to challenging problems, like how to build complex devices at millimeter scales, how to create high-performance millimeter-scale artificial muscles, bioinspired designs, and novel sensors, and flight control strategies," said Wood. "Now that power solutions are emerging, the next step is onboard control. Beyond these robots, we are excited that these underlying technologies are finding applications in other areas such as minimally-invasive surgical devices, wearable sensors, assistive robots, and haptic communication devices - to name just a few."

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Genetically modified virus combats prostate cancer

image: In a study with mice, a gene therapy developed in Brazil kills cancer cells and avoids adverse side effects when combined with chemotherapy.

Image: 
Marcos Santos / USP Imagens

Researchers at the São Paulo State Cancer Institute (ICESP) in Brazil have succeeded in using a genetically manipulated virus to destroy tumor cells upon injection into mice with prostate cancer.

The virus also made tumor cells more sensitive to chemotherapy drugs, halting tumor progression and almost eliminating tumors in some cases.

The results were obtained by a team led by Bryan Eric Strauss, head of the Viral Vector Laboratory at ICESP's Center for Translational Research in Oncology (CTO), and are described in an article in Gene Therapy, a journal published by Springer Nature.

The research was supported by São Paulo Research Foundation - FAPESP under the aegis of the Thematic Project "Cancer gene therapy: strategic positioning for translational studies". Brazil's National Council for Scientific and Technological Development (CNPq) also supplied funding, as did Sanofi.

"We used a combination of gene therapy and chemotherapy to combat prostate cancer in mice," said Strauss. "We chose the weapon we considered most likely to work as a tumor suppressant," he said, referring to p53, a gene that controls important aspects of cell death and is present in both rodents and humans.

In the laboratory, the gene was inserted into the genetic code of an adenovirus. The modified virus was then injected directly into tumors in mice.

"First, we implanted human prostate cancer cells in the mice and waited for tumors to grow. We then injected the virus directly into the tumors. We repeated this procedure several times. On two of these occasions, we also systemically administered cabazitaxel, a drug commonly used in chemotherapy. After that, we observed the mice to see if the tumors developed," Strauss said.

The experiments used several groups of mice, all of which were inoculated with prostate tumor cells. To verify the efficacy of the gene therapy, the researchers administered an unrelated virus to one of the groups as a control.

The second group received only the virus with p53. The third group received only cabazitaxel. The fourth group, corresponding to 25% of the mice, received a combination of the drug and the virus.

When the tumor cells were infected by the modified virus, it penetrated the cell nucleus - where genes act - and triggered cell death. The p53 gene was particularly effective at inducing cell death in prostate cancer.

"Individual treatments with p53 or cabazitaxel alone had an intermediate effect in terms of controlling tumor growth, but the combination had the most striking result, totally inhibiting tumors," Strauss said.

The experiments proved that the modified virus caused the death of the tumor cells it infected. "The association of the drug with gene therapy resulted in full control of tumor growth. In other words, we observed an additive or even synergistic effect. It can also be assumed that the virus with p53 made tumor cells more sensitive to the action of the chemotherapy drug," he said.

According to Strauss, the virus cannot be injected into the bloodstream. "For the therapy to work, we need to inject the virus directly into tumor cells," he said.

Tumors can evidently be controlled using chemotherapy drugs alone, he recalled, but the high doses required can have significant side effects. One is leukopenia, or loss of white blood cells, a constraint for this type of chemotherapy because it impairs the immune system.

"In our study, we used a subtherapeutic dose, which was not sufficient to control the tumor. This was done to avoid leukopenia," Strauss said.

Immune system

Destroying tumor cells with p53 does not guarantee that all cancer cells, including metastases, will be eliminated. The researchers' answer was to stimulate the organism's immune response.

According to Strauss, if the combination of p53 and cabazitaxel is not sufficient to activate the immune system, the use of a second gene in addition to p53 can be considered.

The interferon-beta gene was chosen for its key role in the immune system. Interferons are proteins produced by leukocytes (white blood cells) and fibroblasts that interfere with the replication of fungi, viruses, bacteria and tumor cells while also stimulating the defense activities of other cells.

"Both p53 and interferon-beta can kill tumor cells. We wanted to combine them for cell death to wake up the immune system. This is known as immunogenic cell death," Strauss said.

Previous studies by the group served as a basis for the idea. When a combination of ARF (a functional partner of p53) and interferon-beta was inserted into the tumor cell nucleus, the mouse's immune system ceased recognizing the tumor cell as part of its organism and identified it as an external agent to be combated.

"When this happens, the immune system combats tumor cells both at the treatment site and in tumors located elsewhere," Strauss said.

"Our goal now is to refine these approaches. We're engaged in experiments to find out whether they deserve to advance to the stage of clinical trials in human patients."

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

'Female leadership trust advantage' gives women edge in some crisis situations

image: Corinne Post is a professor of management at Lehigh University.

Image: 
Lehigh University

Certain crises require certain female leaders. Researchers at Lehigh University and Queen's University Belfast have found that the trust established by female leaders who practice strong interpersonal skills leads to better crisis resolution when outcomes are predictable.

They describe this "female leadership trust advantage" in a paper published in this month's print issue of Psychology of Women Quarterly. Their research is the first to examine why and when a female leadership trust advantage emerges for leaders during organizational crises.

"People trust female leaders more than male leaders in times of crisis, but only under specific conditions," said paper co-author Corinne Post, professor of management at Lehigh University. "We showed that when a crisis hits an organization, people trust leaders who behave in relational ways, and especially so when the leaders are women and when there is a predictable path out of the crisis."

Relational behaviors are shown by those who think of themselves in relation to others. Such skills help build and restore trust, and, on average, are adopted more by women than men. The researchers specifically looked at the relational behavior of interpersonal emotion management (IEM), which alleviates feelings of threat during a crisis by anticipating and managing the emotions of others. IEM behaviors include removing or altering a problem to reduce emotional impact; directing attention to something more pleasant; reappraising a situation as more positive; and modulating or suppressing one's emotional response. IEM is central to establishing or repairing trust, often eroded when a crisis occurs.

Researchers defined crisis as a common, though often unexpected, time-sensitive, high-impact event that may disrupt organizational functioning and pose relational threats. For a company, this could be a product safety concern, consumer data breach, oil spill, corruption allegation or widespread harassment.

"Crises are fraught with relational issues, which, unless handled properly, threaten not only organizational performance but also the allocation of organizational resources and even organizational survival," they said. "Organizational crises, therefore, require a great deal of relational and emotional work to build or restore trust among those affected and may influence such trusting behaviors as provision of resources to the organization," including economic resources and investment in the firm, as well as inspiring employee cooperation.

To examine differences in trust for men and women leaders during an organizational crisis, researchers created a set of crisis scenarios. In some scenarios, the CEO (at times a male and at other times a female) anticipated and managed the emotions of others as the crisis unfolded - and in others the CEO did not attend to others' emotions at all. Scenarios were varied to depict crises with predictable or uncertain outcomes.

"We found that this female leadership trust advantage was not just attitudinal, but that - when the consequences of the crisis were foreseeable - people were actually ready to invest much more in the firms led by relational women," Post said. "Our finding also suggests that, in an organizational crisis, female (relative to male) leaders may generate more goodwill and resources for their organization by using relational behaviors when the crisis fallout is predictable, but may not benefit from the same advantage in crises with uncertain consequences."

Demonstrating superior relational skills may help female leaders gain a trust advantage in crises that focus primarily on relationship aspects in an organization, when there is certainty around the resolution and fallout from the crisis is more controllable, for example. But it may not be as valuable when crisis outcomes are uncertain or difficult to control, when both agentic leadership (making decisions and acting quickly) and relational leadership (such as maintaining high levels of communication) are required.

Authors on the paper, "A Female Leadership Trust Advantage in Times of Crisis: Under What Conditions?" include Post; Iona Latu, lecturer in experimental social psychology at Queen's University Belfast; and Liuba Belkin, associate professor of management at Lehigh University.

The study is unique in basing its scenarios on production and food safety crises, when most studies of female leaders and organizational crisis look at financial performance crises. It also is different from most other research that "simply assumes female leaders behave more relationally," Post said. "We were able to determine how leader gender and leader relational behaviors (interpersonal emotional management) influenced trust both independently from each other and in combination."

The findings have important implications for leadership and gender research, as well as business professionals.

"Identifying what crisis management behaviors enhance trust in female leaders, and under what conditions such trust is enhanced may, for example, help to mitigate the documented higher risk for women (compared to men) of being replaced during drawn-out crises," the researchers said.

The results also suggest that to realize their leadership advantage potential, women may need to embrace relational leadership behaviors, at least under some circumstances. "Female leaders may find it helpful to know that, when uncertainty around a crisis is low, using relational leadership behaviors may help them elicit more trust from others," they said.

The research also holds implications for human resource professionals and organizational leaders.

"Because our findings reveal the importance of relational skills in eliciting trust during a crisis, we would encourage firms to consider hiring for, training and rewarding relational skills in their leaders, especially in jobs with high potential for crises," Post said.

Credit: 
Lehigh University

Understanding what makes captive gorilla hearts tick

Cleveland, Ohio; Kent, Ohio -- We've known for some time that heart disease is prevalent in captive gorilla populations and is a leading cause of death. This is why, in 2010, the Great Ape Heart Project, based at Zoo Atlanta, was formed. The project provides a network of clinical, pathologic and research strategies to aid in understanding and treating cardiac disease in all ape species, with the ultimate goal of reducing cardiovascular-related mortalities and improving the health and welfare of great apes in human care.

"Gorilla heart disease is similar to, but different from, what we see in humans," said Hayley Murphy, D.V.M., deputy director of Zoo Atlanta, director of the Great Ape Heart Project and co-author of a recent paper that appears in the journal PLOS ONE. "In humans, we primarily see atherosclerosis - plaques that form in the vessels from cholesterol. In contrast, gorilla hearts get thick, which causes scarring and interferes with normal heart function."

In the present study, veterinarians, human cardiologists and researchers joined forces to examine data gathered from zoos across the United States. They gathered information during routine health exams from 44 males and 25 females. Using echocardiograph data and serum measures that veterinarians gathered at the various institutions, the authors were able to examine not only which gorillas had heart disease, but also what factors may be related to illness.

"One mysterious finding is that the majority of gorillas that develop heart disease are males," said co-author Mary Ann Raghanti, Ph.D., anthropology professor in the College of Arts and Sciences at Kent State University. "While females do develop heart disease in some cases, the disease is less prevalent in females, despite their living just as long as, or longer than, males."

Lead author Patricia Dennis, D.V.M., Ph.D., veterinary epidemiologist at Cleveland Metroparks Zoo and associate professor at the Ohio State University College of Veterinary Medicine, remarked, "We are one step closer to understanding heart disease in gorillas. As we have become more aware of the health risks of obesity in ourselves, we also are managing animal nutrition to prevent obesity. Zoos are actively managing and improving diets. Research will continue as we try to understand not only what causes heart disease in male gorillas but also why females don't seem to develop as much heart disease. Ultimately, our goal is to prevent heart disease in the next generation of gorillas in our care."

Credit: 
Kent State University

Building a bridge to the quantum world

image: This is an illustration of a prototype of what may, in the future, serve as a link to connect quantum computers.

Image: 
IST Austria/Philip Krantz, Krantz NanoArt

Entanglement is one of the main principles of quantum mechanics. Physicists from Professor Johannes Fink's research group at the Institute of Science and Technology Austria (IST Austria) have found a way to use a mechanical oscillator to produce entangled radiation. This method, which the authors published in the current edition of Nature, might prove extremely useful when it comes to connecting quantum computers.

Entanglement is a phenomenon typical of the quantum world that is absent from the so-called classical world -- the world and laws of physics that govern our everyday lives. When two particles are entangled, the characteristics of one particle can be determined by looking at the other. The phenomenon, first described in work by Einstein and colleagues, is now actively used in quantum cryptography, where it is said to lead to unbreakable codes. And it does not affect only particles: radiation can also be entangled. This is the phenomenon that Shabir Barzanjeh, a postdoc in the group of Professor Fink at IST Austria and first author of the study, is currently researching.

"Imagine a box with two exits. If the exits are entangled, one can characterize the radiation coming out of one exit by looking at the other," he explains. Entangled radiation has been created before, but in this study a mechanical object was used for the first time. At 30 micrometers long and composed of about a trillion (10^12) atoms, the silicon beam created by the group may be small by everyday standards, but for the quantum world it is large. "For me, this experiment was interesting on a fundamental level," says Barzanjeh. "The question was: can one use such a large system to produce non-classical radiation? Now we know that the answer is: yes."

But the device also has practical value. Mechanical oscillators could serve as a link between the extremely sensitive quantum computers and optical fibers connecting them inside data centers and beyond. "What we have built is a prototype for a quantum link," says Barzanjeh.

In superconducting quantum computers, the electronics only work at extremely low temperatures which are only a few thousandths of a degree above 'absolute zero' (-273.15 °C). This is because such quantum computers operate on the basis of microwave photons which are extremely sensitive to noise and losses. If the temperature in a quantum computer rises, all the information is destroyed. As a consequence, transferring information from one quantum computer to another is at the moment almost impossible, as the information would have to cross an environment that is too hot for it to survive.

Classical computers in networks, on the other hand, are usually connected via optical fibers, because optical radiation is very robust against disturbances that could corrupt or destroy data. In order to use this successful technology also for quantum computers, one would have to build a link that can convert the quantum computer's microwave photons to optical information carriers or a device that generates entangled microwave-optical fields as a resource for quantum teleportation. Such a link would serve as a bridge between the room temperature optical and the cryogenic quantum world, and the device developed by the physicists is one step in that direction. "The oscillator that we have built has brought us one step closer to a quantum internet," says first author Barzanjeh.

But this is not the only potential application of the device. "Our system could also be used to improve the performance of gravitational wave detectors," explains Barzanjeh. Johannes Fink adds: "It turns out that observing such steady-state entangled fields implies that the mechanical oscillator producing them has to be a quantum object. This holds for any type of mediator and without the need to measure it directly, so in the future our measurement principle could help to verify or falsify the potentially quantum nature of other hard-to-interrogate systems like living organisms or the gravitational field."

Credit: 
Institute of Science and Technology Austria

Study reveals key factor in Himalayan earthquake rupture

image: The velocity model showing a map view of the MHT and a cross-section passing through the Gorkha earthquake

Image: 
BAI Ling

The Himalayan orogenic belt produces frequent large earthquakes that impact population centers for a distance of over 2500 km. In the central region, the 2015 Gorkha earthquake in Nepal, with moment magnitude (MW) 7.8, partially ruptured a ~120-km by 80-km patch of the Main Himalayan Thrust (MHT), the detachment that separates the underthrusting Indian plate from the overriding Himalayan orogen.

The rupture highlights important scientific questions about Himalayan formation and seismic hazards. These questions include how to distinguish between different possible geometries of the MHT, and how to better define the structural causes and locations of rupture segmentation both across-strike and along-strike in the orogenic belt.

A study led by Prof. BAI Ling from the Institute of Tibetan Plateau Research (ITP) of the Chinese Academy of Sciences revealed that the rupture length of the 2015 MW 7.8 Gorkha earthquake was likely controlled by spatial (both along- and across-strike) variations in the Main Himalayan Thrust.

The researchers combined seismic waveforms from several different deployments, including 22 seismic stations that ITP had installed along the China-Nepal border, at an average elevation of 4.5 km, before the earthquake. Using arrival times and waveform modeling, they determined the source parameters of earthquakes, velocity structures and discontinuity topography in and around the source area.

The study showed that the MHT exhibited clear lateral variation along the geologic strike, with the Lesser Himalayan ramp having moderate dip on the MHT beneath the mainshock area, and a flatter and deeper MHT beneath the eastern end of the aftershock zone.

Following these observations, the impetus now is to image the entire 2,500-km Himalayan front to determine the morphology of the MHT and the likely controls on the maximum magnitude of rupture that can be accommodated in different parts of this convergence zone.

Credit: 
Chinese Academy of Sciences Headquarters

New research shows Parkinson's disease origins in the gut

image: Route of Parkinson's disease-causing protein propagation in mice.

Image: 
Ted Dawson

In experiments in mice, Johns Hopkins Medicine researchers say they have found additional evidence that Parkinson's disease originates among cells in the gut and travels up the body's neurons to the brain. The study, described in the June issue of the journal Neuron, offers a new, more accurate model in which to test treatments that could prevent or halt Parkinson's disease progression.

"These findings provide further proof of the gut's role in Parkinson's disease, and give us a model to study the disease's progression from the start," says Ted Dawson, M.D., Ph.D., director of the Johns Hopkins Institute for Cell Engineering and professor of neurology at the Johns Hopkins University School of Medicine.

Parkinson's disease is characterized by the buildup of a misfolded protein, called alpha-synuclein, in the cells of the brain. As more of these proteins clump together into aggregates known as Lewy bodies, they cause nerve tissue to die off. As brain cells die, they impair a person's ability to move, think or regulate emotions.

The new study builds off observations made in 2003 by German neuroanatomist Heiko Braak that showed people with Parkinson's disease also had accumulations of the misfolded alpha-synuclein protein in the parts of the central nervous system that control the gut. The appearance of these neuron-damaging proteins is consistent with some early symptoms of Parkinson's disease, which include constipation, says Hanseok Ko, Ph.D., associate professor of neurology at the Johns Hopkins University School of Medicine. Braak hypothesized that Parkinson's disease advanced up the nerves connecting the gut and the brain like going up a ladder.

A growing body of evidence has implicated the gut-brain connection in initiating Parkinson's disease. The researchers were most curious whether the misfolded alpha-synuclein protein could travel along the nerve bundle known as the vagus nerve, which runs like an electrical cable from the stomach and small intestine into the base of the brain.

To test this, the researchers injected 25 micrograms of synthetic misfolded alpha-synuclein created in the lab into the guts of dozens of healthy mice. They sampled and analyzed mouse brain tissue at one, three, seven and 10 months after injection. Over the course of the 10-month experiment, the researchers saw evidence that the alpha-synuclein began building up where the vagus nerve connected to the gut and continued to spread through all parts of the brain.

The researchers then conducted a similar experiment, but this time surgically cut the vagus nerve in one group of mice and injected their guts with the misfolded alpha-synuclein. Upon examination at seven months, the researchers found that mice with severed vagus nerves showed none of the signs of cell death found in mice with intact vagus nerves. The severed nerve appeared to halt the misfolded protein's advances, says Dawson.

The researchers then investigated whether these physical differences in Parkinson's disease progression resulted in behavioral changes. To do this, they evaluated the behavior of three groups: mice injected with misfolded alpha-synuclein, mice injected with misfolded alpha-synuclein with cut vagus nerves, and control mice with no injection and intact vagus nerves. The researchers looked at tasks they commonly used to distinguish signs of mouse Parkinson's disease, including nest building and exploring new environments.

The researchers first observed the mice build nests in their enclosure as a test for fine motor dexterity, which is commonly affected by Parkinson's disease in humans. Healthy mice often make large, dense mounds in which to burrow. Smaller, messier nests are often signs of problems with motor control.

Seven months after injection, the researchers provided the mice with nesting materials and observed their nest building behavior for 16 hours, scoring their capabilities on a scale of 0-6. They found that mice that received the misfolded alpha-synuclein injection scored consistently lower on nest building.

While the control and severed vagus nerve groups consistently scored 3 or 4 on the nest building scale, mice that received the misfolded alpha-synuclein scored lower than 1. Also, while most mice used the entire 2.5 grams of material provided, the group of mice that received the alpha-synuclein injection used less than half a gram of the nesting material. In ways similar to Parkinson's disease symptoms in humans, the mice's fine motor control deteriorated as the disease progressed, says Ko.

In another experiment analyzing the mice for symptoms similar to Parkinson's disease in humans, the researchers measured anxiety levels of the mice by monitoring how they responded to new environments.

For this test, the researchers placed the mice in a large open box where a camera could track their exploration. Healthy mice are curious and will spend time investigating every part of a new environment. However, mice affected by cognitive decline are more anxious, causing them to be more likely to stay toward the sheltered edges of a box.

The research team found that control mice and mice that had their vagus nerves cut to protect against Parkinson's disease spent between 20 and 30 minutes exploring the center of the box. On the other hand, mice that received the misfolded alpha-synuclein injection but had intact vagus nerves spent less than five minutes exploring the center of the box and moved mostly around the borders, indicating higher anxiety levels, which the researchers report are consistent with symptoms of Parkinson's disease.

Overall, the results of this study show that misfolded alpha-synuclein can be transmitted from the gut to the brain in mice along the vagus nerve, and blocking the transmission route could be key to preventing the physical and cognitive manifestations of Parkinson's disease.

"This is an exciting discovery for the field and presents a target for early intervention in the disease," says Dawson.

Next, the researchers say, they plan to explore what parts of the vagus nerve allow the misfolded protein to climb to the brain, and to investigate potential mechanisms to stop it.

Credit: 
Johns Hopkins Medicine

How you charge your mobile phone could compromise its battery lifespan

image: The three modes of charging, based on (a) AC mains charging (cable charging) and inductive charging when coils are (b) aligned and (c) misaligned.

Image: 
WMG, University of Warwick

Researchers at WMG at the University of Warwick have found that inductive charging, whilst highly convenient, risks depleting the lifespan of mobile phones that use typical lithium-ion batteries (LIBs).

Consumers and manufacturers have ramped up their interest in this convenient charging technology, abandoning fiddling with plugs and cables in favour of simply setting the phone directly on a charging base.

Standardisation of charging stations, and inclusion of inductive charging coils in many new smartphones has led to rapidly increasing adoption of the technology. In 2017, 15 automobile models announced the inclusion of consoles within vehicles for inductively charging consumer electronic devices, such as smartphones - and at a much larger scale, many are considering it for charging electric vehicle batteries.

Inductive charging enables a power source to transmit energy across an air gap without a connecting wire. One of the main issues with this mode of charging, however, is the amount of unwanted and potentially damaging heat it can generate. There are several sources of heat generation associated with any inductive charging system, in both the charger and the device being charged. This additional heating is made worse by the close physical contact between the device and the charging base: any heat generated in one may be transferred to the other by simple thermal conduction and convection.

In a smartphone, the power receiving coil is close to the back cover of the phone (which is usually electrically nonconductive) and packaging constraints necessitate placement of the phone's battery and power electronics in close proximity, with limited opportunities to dissipate heat generated in the phone, or shield the phone from heat generated by the charger. It has been well-documented that batteries age more quickly when stored at elevated temperatures and that exposure to higher temperatures can thus significantly influence the state-of-health (SoH) of batteries over their useful lifetime.

The rule of thumb (more formally, the Arrhenius equation) is that for most chemical reactions, the reaction rate doubles with each 10 °C rise in temperature. In a battery, the reactions that can occur include accelerated growth of passivating films (thin inert coatings that render the surface underneath unreactive) on the cell's electrodes. This occurs by way of cell redox reactions that irreversibly increase the internal resistance of the cell, ultimately resulting in performance degradation and failure. A lithium-ion battery dwelling above 30 °C is typically considered to be at elevated temperature, exposing the battery to the risk of a shortened useful life.
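The doubling rule of thumb can be sketched numerically. The snippet below is a minimal illustration only; the 20 °C reference temperature is an assumed baseline for the example, not a figure from the study:

```python
def relative_rate(t_celsius, t_ref_celsius=20.0):
    """Relative reaction-rate multiplier under the rule of thumb that
    the rate doubles for every 10 degC rise: rate ~ 2**((T - T_ref)/10)."""
    return 2.0 ** ((t_celsius - t_ref_celsius) / 10.0)

# Dwelling at 30 degC roughly doubles the rate of degradation reactions
# relative to the 20 degC baseline; 40 degC roughly quadruples it.
print(relative_rate(30))  # 2.0
print(relative_rate(40))  # 4.0
```

This is why a sustained few-degree difference between charging modes, as measured in the study, can translate into a meaningful difference in battery aging over many charge cycles.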

Guidelines issued by battery manufacturers also specify that the upper operational temperature of their products should not surpass the 50-60 °C range, to avoid gas generation and catastrophic failure.

These facts led WMG researchers to carry out experiments comparing the temperature rises during conventional charging by wire with those during inductive charging. However, the WMG team was even more interested in inductive charging when the consumer misaligns the phone on the charging base. To compensate for poor alignment between phone and charger, inductive charging systems typically increase the transmitter power and/or adjust their operating frequency, which incurs further efficiency losses and increases heat generation.

This misalignment can be a very common occurrence as the actual position of the receiving antenna in the phone is not always intuitive or obvious to the consumer using the phone. The WMG research team therefore also tested phone charging with deliberate misalignment of transmitter and receiver coils.

All three charging methods (wire, aligned inductive and misaligned inductive) were tested with simultaneous charging and thermal imaging over time to generate temperature maps to help quantify the heating effects. The results of those experiments have been published in the journal ACS Energy Letters in an article entitled "Temperature Considerations for Charging Li-Ion Batteries: Inductive versus Mains Charging Modes for Portable Electronic Devices."

The graphic accompanying this press release illustrates the three modes of charging: (a) AC mains charging (cable charging) and inductive charging with coils (b) aligned and (c) misaligned. Panels i and ii show a realistic view of the charging modes with a snapshot of the thermal maps of the phone after 50 minutes of charging. Regardless of the mode of charging, the right edge of the phone showed a higher rate of temperature increase than other areas and remained hotter throughout the charging process. A CT scan of the phone showed that this hotspot is where the motherboard is located.

In the case of the phone charged with conventional mains power, the maximum average temperature reached within 3 hours of charging did not exceed 27 °C.

In contrast, for the phone charged by aligned inductive charging, the temperature peaked at 30.5 °C but gradually fell over the latter half of the charging period. This is similar to the maximum average temperature observed during misaligned inductive charging.

In the case of misaligned inductive charging, the peak temperature was of similar magnitude (30.5 °C) but this temperature was reached sooner and persisted for much longer at this level (125 minutes versus 55 minutes for properly aligned charging).

Also noteworthy was the fact that the maximum input power to the charging base was greater in the test where the phone was misaligned (11 W) than with the well-aligned phone (9.5 W). This is because the charging system increases the transmitter power under misalignment in order to maintain the target input power to the device. The maximum average temperature of the charging base while charging under misalignment reached 35.3 °C, two degrees higher than when the phone was well aligned (33 °C). This is symptomatic of a deterioration in system efficiency, with additional heat generation attributable to power-electronics losses and eddy currents.

The researchers do note that future approaches to inductive charging design can diminish these transfer losses, and thus reduce heating, by using ultrathin coils, higher frequencies, and optimized drive electronics to provide chargers and receivers that are compact and more efficient and can be integrated into mobile devices or batteries with minimal change.

In conclusion, the research team found that inductive charging, whilst convenient, will likely lead to a reduction in the life of the mobile phone battery. For many users, this degradation may be an acceptable price for the convenience of charging, but for those wishing to eke out the longest life from their phone, cable charging is still recommended.

Credit: 
University of Warwick

Sustainability-linked loans provide opportunities for chemical firms

Spurred by calculations showing that companies with a lower environmental impact are less of a financial risk, banks are beginning to offer cheaper loans if chemical firms hit agreed-upon levels of environmental performance. In this way, companies and banks can obtain a mutually beneficial relationship through green finance, according to an article in Chemical and Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society.

Companies such as Stora Enso, Indorama Ventures, DSM and Solvay became some of the first to take sustainability-linked loans, Senior Editor Alex Scott writes. Kemira, a specialty chemical firm, agreed to meet three criteria to obtain a sustainability-linked loan: reduce greenhouse gas emissions by 20% by 2020, ensure at least half its revenue is from products that improve customer resource-use efficiency and maintain its gold-standard sustainability rating from the environmental assessment firm EcoVadis. This sustainability rating determined the initial loan interest rate, which will rise if the firm fails to meet environmental targets.

Ecoloans have mostly been adopted by a handful of European companies, but the approach could easily be used in Asia, especially in China and Southeast Asia, where sustainability in the supply chain is critical. In general, the financial sector is becoming more interested in environmental, social, governance and sustainability issues, which may eventually lead some banks to deny loans to firms with poor environmental track records. This is only the beginning of chemical firms linking loans to environmental sustainability, experts say.

Credit: 
American Chemical Society

Study: No outcome differences after hernia surgery by medical doctors vs surgeons in Ghana

image: Jessica H. Beard, M.D., M.P.H., Assistant Professor of Surgery in the Division of Trauma and Surgical Critical Care at the Lewis Katz School of Medicine at Temple University.

Image: 
Temple University Health System

(Philadelphia, PA) - Inguinal hernia is one of the most common general surgical conditions in the world, with an estimated 220 million cases and 20 million surgeries performed annually. Inguinal hernia occurs when tissue, like part of the intestine, pushes through a weak spot in the abdominal wall and into the groin area. Hernias can be congenital or emerge over time. The condition is eight to 10 times more common in men, who have a 27 percent lifetime chance of developing one. Other risk factors include older age, a family history, and a previous hernia.

While not necessarily dangerous initially, inguinal hernias will not improve by themselves and typically grow larger. They can affect one's quality of life - as daily tasks such as bending, coughing, lifting, or exercising can exacerbate any pain. In parts of the world where access to surgical care for hernia is limited, they can become so large that they are disfiguring, preclude work, and limit even everyday activity. Sometimes, life-threatening complications such as a bowel obstruction can arise. Inguinal hernia repair with mesh is a safe, effective, and cost-effective surgery that can cure the condition, alleviate pain, and prevent complications.

The need for inguinal hernia repair is perhaps greatest in sub-Saharan Africa. According to research published in the World Journal of Surgery, an estimated 1 million inguinal hernias will await repair in Ghana by 2022. Unfortunately, a shortage of available surgeons persists in that area of the world, limiting access to hernia repair.

New research published June 26 in JAMA Surgery and co-led by Temple's Jessica H. Beard, MD, MPH, examines one approach to tackling the situation: training medical doctors to perform inguinal hernia surgery. The research team found no statistically significant differences in hernia recurrence, post-surgery complications, patient satisfaction, or severe chronic pain when the procedure was performed by a medical doctor compared to when it was performed by a surgeon. This method, called surgical "task-sharing," has been applied in previous studies which showed that non-surgeons can perform cesarean sections and laparotomy with results similar to those of specialists.

"Sharing these tasks saves lives and produces not only excellent outcomes but also helps further develop the skills of doctors who live in the community," said Dr. Beard, Assistant Professor of Surgery in the Division of Trauma and Surgical Critical Care at the Lewis Katz School of Medicine at Temple University as well as co-first author and corresponding author on the study titled, "Outcomes after inguinal hernia repair with mesh performed by medical doctors and surgeons: a prospective cohort study in Ghana."

"Surgical task-sharing is a way to safely strengthen and grow the healthcare workforce in Ghana. Without treatment, patients with inguinal hernia may be forced to live with severe side effects or disability from the condition - or even face death," Dr. Beard added.

The prospective cohort study ran from February 2017 to September 2018 at the Volta Regional Hospital in Ho, Ghana, where 242 adult male patients with primary reducible inguinal hernia were operated on - 119 by medical doctors and 123 by surgeons. Dr. Beard and three general surgeons from Ghana trained three medical doctors and two general surgeons in tension-free mesh repair. Medical doctors had completed medical school followed by a two-year general internship. They had no formal training in surgery and learned inguinal hernia repair first through observation and then through supervision.

After one year, hernias had recurred in only four patients - one (0.9%) who was treated by a medical doctor and three (2.8%) who were treated by a surgeon. The overall hernia recurrence rate of 1.8% compares favorably with recurrence rates in high-income countries. Importantly, there was no statistically significant difference in recurrence rates following surgery by medical doctors when compared to surgeons.

At two weeks post-surgery, the authors found no statistically significant differences between medical doctors and surgeons in terms of complications (29.1% vs. 24.2%). One year later, there were also no statistically significant differences between medical doctors and surgeons with regard to patient satisfaction (98.3% vs. 99.1%) or severe chronic pain (0.9% vs. 3.7%).
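For readers who want a sense of why such differences are not statistically significant, here is a rough sanity check on the published recurrence counts using a two-proportion z-test. This is only a sketch: with counts this small, the normal approximation is crude, and the study itself will have used more appropriate statistics (e.g. an exact test):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (normal approximation).
    Returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Recurrences: 1 of 119 (medical doctors) vs 3 of 123 (surgeons).
z, p = two_proportion_z(1, 119, 3, 123)
print(f"z = {z:.2f}, p = {p:.2f}")  # p well above the usual 0.05 threshold
```

Even this crude approximation shows the recurrence difference is far from significant at the conventional 0.05 level, consistent with the study's conclusion.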

"As we strive to achieve health care equity and access for all, this research demonstrates one way to address the currently unmet need for essential surgical services without sacrificing safety or quality in care and delivery," Dr. Beard said.

Credit: 
Temple University Health System

Long-term statin use associated with lower glaucoma risk

Editor's Note: This release has been removed upon request of the submitting institution. Jae Hee Kang et al. have notified JAMA Ophthalmology about errors that occurred in their Original Investigation “Association of Statin Use and High Serum Cholesterol Levels With Risk of Primary Open-Angle Glaucoma,” published online on May 2, 2019. In the original article, the team reported an association between long-term statin use and a lower risk of primary open-angle glaucoma. However, because of serious coding errors in their analyses, these findings were not accurate. When corrected, the results were substantially changed, and they no longer observed a significant association between statin use and risk of glaucoma. Their article has been retracted and replaced online. For more information, please contact Haley Bridger, hbridger@bwh.harvard.edu.

Journal

JAMA Ophthalmology

Credit: 
Brigham and Women's Hospital

The observation of topologically protected magnetic quasiparticles

image: Neutron inelastic scattering spectrometer AMATERAS installed at MLF, J-PARC.

Image: 
J-PARC(KEK/JAEA)

A team of researchers from Tohoku University, J-PARC, and Tokyo Institute of Technology conducted an in-depth study of magnetic quasiparticles called "triplons." The team studied a low-dimensional quantum magnet, Ba2CuSi2O6Cl2, using neutron inelastic scattering on AMATERAS at J-PARC. Their findings led to the discovery of a new "topologically protected triplon edge state" in the compound.

The concept of a topological insulator has attracted attention from both fundamental and technological standpoints. A non-dissipative electron flow, known as an "edge state," is expected to appear on the surface of a topological insulator because of the difference in topological character between its inside and outside.

Tremendous efforts have been made to realize the topological edge state in real two-dimensional and three-dimensional electronic materials, since this non-dissipative flow has the potential to be utilized for energy-efficient information transmission and processing in the future.

The edge-state concept applies not only to electrons but also to quasiparticles that carry spin current in materials, such as magnons and triplons, which emerge from electron spin fluctuations. To date, however, only a few examples of bosonic quasiparticles with topological character have been demonstrated.

Using neutron inelastic scattering on AMATERAS at J-PARC, the team was able to precisely determine the dispersion relations of triplons in the quantum magnet Ba2CuSi2O6Cl2. The observed dispersion relations fix the parameters in the model Hamiltonian, which indeed show that the compound is a new realization of the Su-Schrieffer-Heeger (SSH) model, the most fundamental model of a topological insulator. The SSH model is renowned for being equivalent to a single spin in a fictitious magnetic field. The dispersion relations, as well as the fictitious magnetic field, are shown in the title image.

As the quasiparticle moves from left to right in the figure, the fictitious magnetic field makes a single rotation. Simultaneously, the quasiparticle's phase rotates by half a turn, giving rise to a nontrivial topology. This nontrivial topology of the triplons dictates that edge states exist in the middle of the energy gap of Ba2CuSi2O6Cl2. The observation of topological triplons should accelerate the detection of magnetic and thermodynamic properties of edge states, and may lead to the further development of energy-efficient materials for information transmission and processing.
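For readers unfamiliar with the SSH model, a minimal numerical sketch shows the hallmark exploited here: in the topological phase, a finite chain hosts two states in the middle of the bulk gap. The hopping values below are illustrative only and are not the parameters fitted for Ba2CuSi2O6Cl2:

```python
import numpy as np

def ssh_hamiltonian(n_cells, t1, t2):
    """Open-boundary SSH chain: 2*n_cells sites with alternating
    hoppings t1 (intra-cell) and t2 (inter-cell)."""
    n = 2 * n_cells
    h = np.zeros((n, n))
    for i in range(n - 1):
        t = t1 if i % 2 == 0 else t2
        h[i, i + 1] = h[i + 1, i] = -t
    return h

# Topological phase: |t2| > |t1|. Two eigenvalues sit near zero energy,
# in the middle of the bulk gap of width 2*|t1 - t2| -- the edge states.
energies = np.linalg.eigvalsh(ssh_hamiltonian(20, t1=0.5, t2=1.0))
abs_sorted = np.sort(np.abs(energies))
print(abs_sorted[:2])  # two near-zero edge modes
print(abs_sorted[2])   # next state sits at the bulk band edge (~0.5)
```

Swapping the hoppings (t1=1.0, t2=0.5) puts the same chain in the trivial phase, and the mid-gap states disappear, which is the sense in which the edge states are "topologically protected."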

Credit: 
Tohoku University

Significant UK air quality improvements over past 40 years cut death rates

Policies to improve air quality in the UK over the past 40 years have led to significant reductions in pollution and associated mortality rates, a new study has found.

Research led by the Centre for Ecology & Hydrology charted the levels of emissions of a variety of air pollutants in the UK between 1970 and 2010 - a period in which there was a raft of national and European legislation to tackle pollution. The scientists say their study is ground-breaking due to the long timeframe studied and the removal of weather factors from modelling, meaning any changes in air pollution can be directly attributed to emission levels.

They found that over the 40-year period, total annual emissions of PM2.5 (fine particulate matter), nitrogen oxides (NOx), sulphur dioxide (SO2) and non-methane volatile organic compounds (NMVOC) in the UK all reduced substantially - by between 58% and 93%. Emissions of ammonia (NH3) fell by 17% between 1970 and 2010 but have increased slightly in recent years.

Based on these reduced emissions levels, the study estimated that mortality rates attributed to PM2.5 and NO2 (nitrogen dioxide) pollutants that increase the risk of respiratory and cardiovascular diseases declined by 56% and 44%, respectively, in the UK over the 40-year period. The estimated mortality rate related to pollution from ground-level ozone (O3) - which can damage the lungs - fell by 24% between 1990 and 2010, following a significant rise in the 20 years prior to that.

However, scientists involved in the research stress that tackling air pollution in the UK remains an ongoing challenge. Nitrogen dioxide concentrations are still often above legal limits in many urban areas and levels of ammonia emissions are increasing.

Edward Carnell of the Centre for Ecology & Hydrology, lead author of the study, said: "Technology advances over the past 40 years, such as the three-way catalytic converter for cars and equipment to reduce sulphur and nitrogen dioxide emissions from large power plants have contributed to significant reductions in emission levels and therefore improved public health. However, it is legislation that has driven these technological improvements.

"Our results demonstrate the effectiveness of a series of policies at UK and European level since 1970 and this research supports policy-makers' efforts to continue implementing much-needed measures to further improve air quality."

The 40-year period investigated by this study saw the implementation of landmark policies on controlling air pollution. These included the 1979 UN Air Convention, major UK legislation such as the Clean Air Act 1993, Environment Act 1995 and several Air Quality Standards Regulations, plus a series of EU directives relating to different pollutants.

Emissions of ammonia, mainly from agriculture, have so far not been a target of stringent legislation. Ammonia is released into the air when manure, slurry and fertiliser are applied to agricultural land. Together with nitrogen oxides from traffic and domestic stoves, for example, it can form fine particles that affect air quality in urban areas far away from the source. In addition to posing a risk to human health, ammonia pollution affects water and soil quality and therefore animal and plant life.

Environment Minister, Thérèse Coffey, said: "We have taken huge strides in tackling air quality over the last 40 years, and this research shows our actions are producing results.

"But we know there is a lot more to do. That is why our landmark Clean Air Strategy addresses all sources of air pollution. We have clear plans in place to tackle roadside nitrogen emissions and agricultural ammonia, and are working closely with industry, local authorities and other government departments to accelerate progress".

Dr Stefan Reis of the Centre for Ecology & Hydrology (CEH), a senior author of the study, added: "Ammonia contributes not only to threats to human health, but also causes biodiversity loss. However, for the past 30 years, it has been the 'forgotten pollutant'.

"Therefore, we were very pleased to see Defra's new Clean Air Strategy aim for a 16 per cent reduction in UK ammonia emissions by 2030 (compared with 2005 levels), to fulfil commitments under the European National Emission Ceilings Directive. This landmark strategy proposes regulations and financial support which, if adopted, would substantially reduce UK ammonia emissions, bringing substantial benefits for both vulnerable ecosystems and human health."

Dr Sotiris Vardoulakis of the Institute of Occupational Medicine in Edinburgh, one of the co-authors of the study, said: "This study highlights the substantial improvements in air quality we have experienced over four decades, as well as the risks that air pollution still poses to public health in the UK. Concerted action is needed by the Government, local authorities, businesses and individuals to further improve air quality and protect human health."

Credit: 
UK Centre for Ecology & Hydrology

ALMA pinpoints the formation site of planet around nearest young star

image: A small clump of dust was found in the southwestern (bottom right) part of the otherwise highly symmetric disk.

Image: 
ALMA (ESO/NAOJ/NRAO), Tsukagoshi et al.

Researchers using ALMA (Atacama Large Millimeter/submillimeter Array) found a small dust concentration in the disk around TW Hydrae, the nearest young star. It is highly possible that a planet is growing or about to be formed in this concentration. This is the first time that the exact place where cold materials are forming the seed of a planet has been pinpointed in the disk around a young star.

The young star TW Hydrae, located 194 light-years away in the constellation Hydra, is the closest star around which planets may be forming. Its surrounding dust disk is the best target for studying the process of planet formation.

Previous ALMA observations revealed that the disk is composed of concentric rings. Now, new higher-sensitivity ALMA observations have revealed a previously unknown small clump in the planet-forming disk. The clump is elongated along the direction of the disk's rotation, with a width approximately equal to the distance between the Sun and the Earth and a length about four and a half times that.
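As a rough back-of-the-envelope check (not a calculation from the paper), one can estimate the angular size such a clump subtends from Earth, using the fact that a structure of s astronomical units at a distance of d parsecs subtends roughly s/d arcseconds:

```python
# Angular scale of the TW Hydrae clump, small-angle approximation.
LY_PER_PC = 3.2616  # light-years per parsec

distance_pc = 194 / LY_PER_PC      # TW Hydrae at 194 light-years (~59.5 pc)
width_arcsec = 1.0 / distance_pc   # clump width ~1 au (Sun-Earth distance)
length_arcsec = 4.5 / distance_pc  # clump length ~4.5 au

print(f"width  ~ {width_arcsec * 1000:.0f} milliarcseconds")
print(f"length ~ {length_arcsec * 1000:.0f} milliarcseconds")
```

The result is a few tens of milliarcseconds, which indicates why detecting such a feature requires an instrument with ALMA's combination of resolution and sensitivity.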

"The true nature of the clump is still not clear," says Takashi Tsukagoshi at the National Astronomical Observatory of Japan and the lead author of the research paper. "It could be a 'circumplanetary' disk feeding a Neptune-sized infant planet. Or it might be that swirling gas is raking up the dust particles."

Planets form in disks of gas and dust around young stars. Micrometer-sized dust particles stick together to grow to larger grains, rocks, and finally a planet. Theoretical studies predict that an infant planet is surrounded by a 'circumplanetary' disk, a small structure within the larger dust disk around the star. The planet collects material through this circumplanetary disk. It is important to find such a circumplanetary disk to understand the final stage of planet growth.

Cold dust and gas in the disks around young stars are difficult to see in visible light, but they emit radio waves. With its high sensitivity and resolution for such radio waves, ALMA is one of the most powerful instruments to study the genesis of planets.

However, the brightness and elongated shape of the structure revealed by ALMA don't exactly match theoretical predictions for circumplanetary disks. It might instead be a gas vortex, a type of structure also expected to form here and there around a young star. Finding only a single dust clump at this time is also contrary to theoretical studies. So the research team could not reach a definitive answer on the nature of the dusty clump.

"Although we do not have a robust conclusion," says Tsukagoshi, "pinpointing the exact place of planet formation is highly valuable to us. Next we'll obtain even higher resolution ALMA images to reveal the temperature distribution in the clump and look for hints of a planet inside. We also plan to observe it with the Subaru Telescope in infrared to see if there is hot gas around a potential planet."

Credit: 
National Institutes of Natural Sciences