Tech

Freezing cells made safer thanks to new polymer made at University of Warwick

image: The cells frozen with the polymer (left) and without the polymer (right).

Image: 
University of Warwick

Cell freezing (cryopreservation) - which is essential in cell transfusions as well as basic biomedical research - can be dramatically improved using a new polymeric cryoprotectant, discovered at the University of Warwick, which reduces the amount of 'anti-freeze' needed to protect cells.

The ability to freeze and store cells for cell-based therapies and research has taken a step forward in the paper 'A Synthetically Scalable Poly(ampholyte) which Dramatically Enhances Cellular Cryopreservation', published by the University of Warwick's Department of Chemistry and Medical School in the journal Biomacromolecules. The new polymer material protects the cells during freezing, leading to more cells being recovered and less solvent-based antifreeze being required.

Cryopreservation of cells is an essential process, enabling banking and distribution of cells, which would otherwise degrade. The current methods rely on adding traditional 'antifreezes' to the cells to protect them from the cold stress, but not all the cells are recovered and it is desirable to lower the amount of solvent added.

The new Warwick material was shown to allow cryopreservation using less solvent. In particular, the material was very potent at protecting cell monolayers - cells which are attached to a surface, which is the format of how they are grown and used in most biomedical research.

Having more, better quality cells is crucial not just for their use in medicine, but also for improving the quality and accessibility of cells for drug discovery, for example.

Cell-based therapies are emerging as the "fourth pillar" of chemotherapy. New methods to help distribute and bank these cells will help make them more accessible and speed up their roll-out, and this new material may aid this process.

Professor Matthew Gibson who holds a joint appointment between the Department of Chemistry and Warwick Medical School comments:

"Cryopreservation is fundamental to so much modern bioscience and medicine, but we urgently need better methods to meet the needs of advanced cell-based therapies. Our new material is easy to scale up, which is essential if this is to be widely used, and we found it to be very protective for several cell lines. The simplicity of our approach will hopefully help us translate this to real applications quickly, and make an impact in healthcare and basic research."

Credit: 
University of Warwick

NASA's TESS mission scores 'hat trick' with 3 new worlds

image: This infographic illustrates key features of the TOI 270 system, located about 73 light-years away in the southern constellation Pictor. The three known planets were discovered by NASA's Transiting Exoplanet Survey Satellite through periodic dips in starlight caused by each orbiting world. Insets show information about the planets, including their relative sizes, and how they compare to Earth. Temperatures given for TOI 270's planets are equilibrium temperatures, calculated without the warming effects of any possible atmospheres.

Image: 
NASA's Goddard Space Flight Center/Scott Wiessinger

NASA's newest planet hunter, the Transiting Exoplanet Survey Satellite (TESS), has discovered three new worlds -- one slightly larger than Earth and two of a type not found in our solar system -- orbiting a nearby star. The planets straddle an observed gap in the sizes of known planets and promise to be among the most curious targets for future studies.

TESS Object of Interest (TOI) 270 is a faint, cool star more commonly identified by its catalog name: UCAC4 191-004642. The M-type dwarf star is about 40% smaller than the Sun in both size and mass, and it has a surface temperature about one-third cooler than the Sun's. The planetary system lies about 73 light-years away in the southern constellation of Pictor.

"This system is exactly what TESS was designed to find -- small, temperate planets that pass, or transit, in front of an inactive host star, one lacking excessive stellar activity, such as flares," said lead researcher Maximilian Günther, a Torres Postdoctoral Fellow at the Massachusetts Institute of Technology's (MIT) Kavli Institute for Astrophysics and Space Research in Cambridge. "This star is quiet and very close to us, and therefore much brighter than the host stars of comparable systems. With extended follow-up observations, we'll soon be able to determine the make-up of these worlds, establish if atmospheres are present and what gases they contain, and more."

A paper describing the system was published in the journal Nature Astronomy and is now available online.

The innermost planet, TOI 270 b, is likely a rocky world about 25% larger than Earth. It orbits the star every 3.4 days at a distance about 13 times closer than Mercury orbits the Sun. Based on statistical studies of known exoplanets of similar size, the science team estimates TOI 270 b has a mass around 1.9 times greater than Earth's.

Due to its proximity to the star, planet b is an oven-hot world. Its equilibrium temperature -- that is, the temperature based only on energy it receives from the star, which ignores additional warming effects from a possible atmosphere -- is around 490 degrees Fahrenheit (254 degrees Celsius).
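For readers curious how such an estimate is made, here is a minimal sketch of the standard equilibrium-temperature formula for a planet heated only by its star. The stellar temperature, stellar radius, orbital distance and albedo used below are approximate literature values and assumptions chosen for illustration, not figures taken from this article.

```python
import math

R_SUN_M = 6.957e8      # solar radius in metres
AU_M = 1.496e11        # astronomical unit in metres

def equilibrium_temperature(t_star_k, r_star_rsun, a_au, albedo=0.0):
    """T_eq = T_star * sqrt(R_star / (2a)) * (1 - albedo)**0.25, ignoring any atmosphere."""
    r_star_m = r_star_rsun * R_SUN_M
    a_m = a_au * AU_M
    return t_star_k * math.sqrt(r_star_m / (2.0 * a_m)) * (1.0 - albedo) ** 0.25

# Illustrative, assumed inputs roughly matching TOI 270 b:
t_eq = equilibrium_temperature(t_star_k=3386, r_star_rsun=0.38, a_au=0.031, albedo=0.3)
print(f"Equilibrium temperature: {t_eq:.0f} K")   # ~520 K, in the ballpark of the quoted ~490 F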

The other two planets, TOI 270 c and d, are, respectively, 2.4 and 2.1 times larger than Earth and orbit the star every 5.7 and 11.4 days. Although only about half Neptune's size, both may be similar to Neptune in our solar system, with compositions dominated by gases rather than rock, and they likely weigh around 7 and 5 times Earth's mass, respectively.

All of the planets are expected to be tidally locked to the star, which means they only rotate once every orbit and keep the same side facing the star at all times, just as the Moon does in its orbit around Earth.

Planets c and d might best be described as mini-Neptunes, a type of planet not seen in our own solar system. The researchers hope further exploration of TOI 270 may help explain how two of these mini-Neptunes formed alongside a nearly Earth-size world.

"An interesting aspect of this system is that its planets straddle a well-established gap in known planetary sizes," said co-author Fran Pozuelos, a postdoctoral researcher at the University of Liège in Belgium. "It is uncommon for planets to have sizes between 1.5 and two times that of Earth for reasons likely related to the way planets form, but this is still a highly controversial topic. TOI 270 is an excellent laboratory for studying the margins of this gap and will help us better understand how planetary systems form and evolve."

Günther's team is particularly interested in the outermost planet, TOI 270 d. The team estimates the planet's equilibrium temperature to be about 150 degrees Fahrenheit (66 degrees C). This makes it the most temperate world in the system -- and as such, a rarity among known transiting planets.

"TOI 270 is perfectly situated in the sky for studying the atmospheres of its outer planets with NASA's future James Webb Space Telescope," said co-author Adina Feinstein, a doctoral student at the University of Chicago. "It will be observable by Webb for over half a year, which could allow for really interesting comparison studies between the atmospheres of TOI 270 c and d."

The team hopes further research may reveal additional planets beyond the three now known. If planet d has a rocky core covered by a thick atmosphere, its surface would be too warm for the presence of liquid water, considered a key requirement for a potentially habitable world. But follow-up studies may discover additional rocky planets at slightly greater distances from the star, where cooler temperatures could allow liquid water to pool on their surfaces.

Credit: 
NASA/Goddard Space Flight Center

Seeking new physics, scientists borrow from social networks

CAMBRIDGE, Mass. -- When two protons collide, they release pyrotechnic jets of particles, the details of which can tell scientists something about the nature of physics and the fundamental forces that govern the universe.

Enormous particle accelerators such as the Large Hadron Collider can generate billions of such collisions per minute by smashing together beams of protons at close to the speed of light. Scientists then search through measurements of these collisions in hopes of unearthing weird, unpredictable behavior beyond the established playbook of physics known as the Standard Model.

Now MIT physicists have found a way to automate the search for strange and potentially new physics, with a technique that determines the degree of similarity between pairs of collision events. In this way, they can estimate the relationships among hundreds of thousands of collisions in a proton beam smashup, and create a geometric map of events according to their degree of similarity.

The researchers say their new technique is the first to relate multitudes of particle collisions to each other, similar to a social network.

"Maps of social networks are based on the degree of connectivity between people, and for example, how many neighbors you need before you get from one friend to another," says Jesse Thaler, associate professor of physics at MIT. "It's the same idea here."

Thaler says this social networking of particle collisions can give researchers a sense of the more connected, and therefore more typical, events that occur when protons collide. They can also quickly spot the dissimilar events, on the outskirts of a collision network, which they can further investigate for potentially new physics. He and his collaborators, graduate students Patrick Komiske and Eric Metodiev, carried out the research at the MIT Center for Theoretical Physics and the MIT Laboratory for Nuclear Science. They detail their new technique this week in the journal Physical Review Letters.

Seeing the data agnostically

Thaler's group focuses, in part, on developing techniques to analyze open data from the LHC and other particle collider facilities in hopes of digging up interesting physics that others might have initially missed.

"Having access to this public data has been wonderful," Thaler says. "But it's daunting to sift through this mountain of data to figure out what's going on."

Physicists normally look through collider data for specific patterns or energies of collisions that they believe to be of interest based on theoretical predictions. Such was the case for the discovery of the Higgs boson, the elusive elementary particle predicted by the Standard Model. Its properties had been outlined theoretically in detail, but the particle itself was not observed until 2012, when physicists, knowing approximately what to look for, found signatures of the Higgs boson hidden amid trillions of proton collisions.

But what if particles exhibit behavior beyond what the Standard Model predicts, that physicists have no theory to anticipate?

Thaler, Komiske, and Metodiev have landed on a novel way to sift through collider data without knowing ahead of time what to look for. Rather than consider a single collision event at a time, they looked for ways to compare multiple events with each other, with the idea that perhaps by determining which events are more typical and which are less so, they might pick out outliers with potentially interesting, unexpected behavior.

"What we're trying to do is to be agnostic about what we think is new physics or not," says Metodiev. "We want to let the data speak for itself."

Moving dirt

Particle collider data are jam-packed with billions of proton collisions, each of which comprises individual sprays of particles. The team realized these sprays are essentially point clouds -- collections of dots, similar to the point clouds that represent scenes and objects in computer vision. Researchers in that field have developed an arsenal of techniques to compare point clouds, for example to enable robots to accurately identify objects and obstacles in their environment.

Metodiev and Komiske utilized similar techniques to compare point clouds between pairs of collisions in particle collider data. In particular, they adapted an existing algorithm designed to calculate the optimal amount of energy, or "work," needed to transform one point cloud into another. The crux of the algorithm is an abstract idea known as the "earth mover's distance."

"You can imagine deposits of energy as being dirt, and you're the earth mover who has to move that dirt from one place to another," Thaler explains. "The amount of sweat that you expend getting from one configuration to another is the notion of distance that we're calculating."

In other words, the more energy it takes to rearrange one point cloud to resemble another, the farther apart they are in terms of their similarity. Applying this idea to particle collider data, the team was able to calculate the optimal energy it would take to transform a given point cloud into another, one pair at a time. For each pair, they assigned a number, based on the "distance," or degree of similarity they calculated between the two. They then considered each point cloud as a single point and arranged these points in a social network of sorts.
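As a rough illustration of this pairwise-distance idea, the sketch below computes an earth-mover-style distance between toy "events" treated as small point clouds, then flags the most isolated one as a candidate outlier. It assumes equally weighted particles and equal-sized events, so the optimal transport reduces to a one-to-one assignment problem; the published technique handles general energy weights, and all data here are invented.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def emd_equal_weights(event_a, event_b):
    """Optimal total 'work' to move the particles of event_a onto event_b (equal weights)."""
    cost = cdist(event_a, event_b)              # pairwise ground distances between particles
    rows, cols = linear_sum_assignment(cost)    # optimal one-to-one matching
    return cost[rows, cols].sum()

# Toy events: 16 particles each, positions in a two-dimensional plane.
rng = np.random.default_rng(0)
events = [rng.normal(size=(16, 2)) for _ in range(50)]

# Pairwise distances define a similarity network; the event farthest from its
# nearest neighbour sits on the "outskirts" of that network.
n = len(events)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = emd_equal_weights(events[i], events[j])

nearest = np.where(np.eye(n, dtype=bool), np.inf, dist).min(axis=1)
print("Most isolated toy event:", int(nearest.argmax()))
```

With equal weights the optimal transport plan is simply a one-to-one matching, which is why a standard assignment solver suffices for this toy version.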

The team has been able to construct a social network of 100,000 pairs of collision events, from open data provided by the LHC, using their technique. The researchers hope that by looking at collision datasets as networks, scientists may be able to quickly flag potentially interesting events at the edges of a given network.

"We'd like to have an Instagram page for all the craziest events, or point clouds, recorded by the LHC on a given day," says Komiske. "This technique is an ideal way to determine that image. Because you just find the thing that's farthest away from everything else."

Typical collider datasets that are made publicly available normally include several million events, which have been preselected from an original chaos of billions of collisions that occurred at any given moment in a particle accelerator. Thaler says the team is working on ways to scale up their technique to construct larger networks, to potentially visualize the "shape," or general relationships within an entire dataset of particle collisions.

In the near future, he envisions testing the technique on historical data that physicists now know contain milestone discoveries, such as the first detection in 1995 of the top quark, the most massive of all known elementary particles.

"The top quark is an object that gives rise to these funny, three-pronged sprays of radiation, which are very dissimilar from typical sprays of one or two prongs," Thaler says. "If we could rediscover the top quark in this archival data, with this technique that doesn't need to know what new physics it is looking for, it would be very exciting and could give us confidence in applying this to current datasets, to find more exotic objects."

Credit: 
Massachusetts Institute of Technology

Demonstration of alpha particle confinement capability in helical fusion plasmas

image: Inside of the Large Helical Device. High-temperature plasma is confined by the two helical superconducting coils.

Image: 
NIFS

A team of fusion researchers has succeeded, for the first time in helical systems, in demonstrating that energetic ions with energies in the mega-electron-volt (MeV) range are well confined in a plasma. This promises the alpha particle (helium ion) confinement required for realizing fusion energy in a helical reactor.

The deuterium-tritium reaction in a high-temperature plasma will be used in fusion reactors in the future. Alpha particles with 3.5 MeV energy are generated by the fusion reaction. The alpha particles transfer their energy to the plasma, and this alpha particle heating sustains the high-temperature plasma condition required for the fusion reaction. In order to realize such a plasma, which is called a burning plasma, the energetic ions in MeV range must be superiorly confined in the plasma.

Numerical simulations had predicted favorable MeV ion confinement in helical systems, which have the advantage of steady-state operation compared with tokamak systems. However, an experimental demonstration of MeV ion confinement had not been reported. Recently, the study advanced greatly through an MeV ion confinement experiment performed in the deuterium operation of the Large Helical Device (LHD), which is owned by the National Institute for Fusion Science (NIFS), National Institutes of Natural Sciences (NINS), in Japan. In deuterium plasmas, 1 MeV tritons (tritium ions) are created by deuteron-deuteron fusion reactions. These tritons behave similarly to the alpha particles that will be generated in a future burning plasma.

The research group led by Assistant Professor Kunihiro Ogawa and Professor Mitsutaka Isobe of NIFS performed an MeV triton confinement experiment in the LHD. Tritons confined in the plasma undergo secondary fusion reactions with background deuterons (deuterium ions) and emit high-energy neutrons. The research group developed a detector for selective measurement of these high-energy neutrons in order to evaluate MeV ion confinement performance. The high-energy neutrons were measured for different magnetic-field configurations, and confinement was better when the magnetic field axis was shifted inward. The result proves MeV ion confinement for the first time in helical systems, promising the alpha particle confinement required for realizing fusion energy in a helical reactor.

Credit: 
National Institutes of Natural Sciences

Camera can watch moving objects around corners

image: Objects -- including books, a stuffed animal and a disco ball -- in and around a bookshelf tested the system's versatility in capturing light from different surfaces in a large-scale scene.

Image: 
David Lindell

David Lindell, a graduate student in electrical engineering at Stanford University, donned a high visibility tracksuit and got to work, stretching, pacing and hopping across an empty room. Through a camera aimed away from Lindell - at what appeared to be a blank wall - his colleagues could watch his every move.

That's because, hidden to the naked eye, he was being scanned by a high-powered laser, and the single particles of light he reflected onto the walls around him were captured and reconstructed by the camera's advanced sensors and processing algorithm.

"People talk about building a camera that can see as well as humans for applications such as autonomous cars and robots, but we want to build systems that go well beyond that," said Gordon Wetzstein, an assistant professor of electrical engineering at Stanford. "We want to see things in 3D, around corners and beyond the visible light spectrum."

The camera system Lindell tested, which the researchers are presenting at the SIGGRAPH 2019 conference Aug. 1, builds upon previous around-the-corner cameras this team developed. It's able to capture more light from a greater variety of surfaces, see wider and farther away and is fast enough to monitor out-of-sight movement - such as Lindell's calisthenics - for the first time. Someday, the researchers hope superhuman vision systems could help autonomous cars and robots operate even more safely than they would with human guidance.

Practicality and seismology

Keeping their system practical is a high priority for these researchers. The hardware they chose, the scanning and image processing speeds, and the style of imaging are already common in autonomous car vision systems. Previous systems for viewing scenes outside a camera's line of sight relied on objects that either reflect light evenly or strongly. But real-world objects, including shiny cars, fall outside these categories, so this system can handle light bouncing off a range of surfaces, including disco balls, books and intricately textured statues.

Central to their advance was a laser 10,000 times more powerful than what they were using a year ago. The laser scans a wall opposite the scene of interest and that light bounces off the wall, hits the objects in the scene, bounces back to the wall and to the camera sensors. By the time the laser light reaches the camera only specks remain, but the sensor captures every one, sending it along to a highly efficient algorithm, also developed by this team, that untangles these echoes of light to decipher the hidden tableau.

"When you're watching the laser scanning it out, you don't see anything," described Lindell. "With this hardware, we can basically slow down time and reveal these tracks of light. It almost looks like magic."

The system can scan at four frames per second. It can reconstruct a scene at speeds of 60 frames per second on a computer equipped with a graphics processing unit.

To advance their algorithm, the team looked to other fields for inspiration. The researchers were particularly drawn to seismic imaging systems - which bounce sound waves off underground layers of Earth to learn what's beneath the surface - and reconfigured their algorithm to likewise interpret bouncing light as waves emanating from the hidden objects. The result was the same high-speed and low memory usage with improvements in their abilities to see large scenes containing various materials.

"There are many ideas being used in other spaces - seismology, imaging with satellites, synthetic aperture radar - that are applicable to looking around corners," said Matthew O'Toole, an assistant professor at Carnegie Mellon University who was previously a postdoctoral fellow in Wetzstein's lab. "We're trying to take a little bit from these fields and we'll hopefully be able to give something back to them at some point."

Humble steps

Being able to see real-time movement from otherwise invisible light bounced around a corner was a thrilling moment for this team but a practical system for autonomous cars or robots will require further enhancements.

"It's very humble steps. The movement still looks low-resolution and it's not super-fast but compared to the state-of-the-art last year it is a significant improvement," said Wetzstein. "We were blown away the first time we saw these results because we've captured data that nobody's seen before."

The team hopes to move toward testing their system on autonomous research cars, while looking into other possible applications, such as medical imaging that can see through tissues. Among other improvements to speed and resolution, they'll also work at making their system even more versatile to address challenging visual conditions that drivers encounter, such as fog, rain, sandstorms and snow.

Credit: 
Stanford University

Numerical model pinpoints source of precursor to seismic signals

image: These before and after simulations show the collapse of a stress chain after a laboratory quake.

Image: 
Los Alamos National Laboratory

Numerical simulations have pinpointed the source of acoustic signals emitted by stressed faults in laboratory earthquake machines. The work further unpacks the physics driving geologic faults, knowledge that could one day enable accurately predicting earthquakes.

"Previous machine-learning studies found that the acoustic signals detected from an earthquake fault can be used to predict when the next earthquake will occur," said Ke Gao, a computational geophysicist in the Geophysics group at Los Alamos National Laboratory. "This new modeling work shows us that the collapse of stress chains inside the earthquake gouge emits that signal in the lab, pointing to mechanisms that may also be important in Earth." Gao is lead author of the paper, "From Stress Chains to Acoustic Emission," published today in Physical Review Letters and selected as the "Editors' Suggestion."

Stress chains are bridges composed of grains that transmit stresses from one side of a fault block to the other.

Gao works on a Los Alamos team that has identified the predictive acoustic signal in data from both laboratory quakes and megathrust regions in North America, South America and New Zealand. The signal accurately indicates the state of stress in the fault, no matter when the signal is read.

"Using the numerical model that we developed at Los Alamos, we examine and connect the dynamics in a granular system of fault gouge to signals detected on passive remote monitors," Gao said. Fault gouge is the ground-up, gravelly rock material created by the stresses and movements of a fault.

To investigate the cause of acoustic signals, the team conducted a series of numerical simulations on supercomputers using the Los Alamos-developed code HOSS (Hybrid Optimization Software Suite). This novel numerical tool is a hybrid methodology--the combined finite-discrete element method. It merges techniques developed under discrete element methods, to describe grain-to-grain interactions; and under finite element methods, to describe stresses as a function of deformation within the grains and wave propagation away from the granular system. The simulations accurately mimic the dynamics of earthquake fault evolution, such as how the materials inside the gouge grind and collide with each other, and how the stress chains form and evolve over time via interactions between adjacent gouge materials.
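To give a flavour of the discrete-element half of such a hybrid method, here is a purely illustrative toy: circular grains that repel each other through a linear contact spring, advanced with explicit time stepping. This is not the HOSS code and contains none of the finite-element or wave-propagation machinery described above; every parameter below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, radius, stiffness, mass, dt = 30, 0.5, 1.0e4, 1.0, 1.0e-4

pos = rng.uniform(0.0, 10.0, size=(n, 2))   # grain centres, some initially overlapping
vel = np.zeros((n, 2))

def contact_forces(pos):
    """Linear repulsive spring force for every overlapping pair of grains."""
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2 * radius - dist
            if overlap > 0 and dist > 0:          # grains in contact
                f = stiffness * overlap * (d / dist)
                forces[i] -= f                    # push the grains apart
                forces[j] += f
    return forces

for step in range(1000):                          # explicit time integration with mild damping
    f = contact_forces(pos)
    vel = (vel + (f / mass) * dt) * 0.995         # damping lets the packing settle
    pos += vel * dt

print("Mean grain speed after settling:", np.linalg.norm(vel, axis=1).mean())
```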

Los Alamos has funded a multi-million-dollar, multi-year program consisting of experiments, numerical modeling, and machine-learning efforts to develop and test a highly novel approach to probe the earthquake cycle and, in particular, to detect and locate stressed faults that are approaching failure.

Credit: 
DOE/Los Alamos National Laboratory

NASA finds Tropical Storm Erick strengthening

image: On July 29, 2019 at 6:35 a.m. EDT (1035 UTC), the MODIS instrument that flies aboard NASA's Aqua satellite showed the strongest storms in Tropical Storm Erick were around the center and in a band of thunderstorms southwest of the center, where cloud top temperatures were as cold as minus 70 degrees Fahrenheit (minus 56.6 Celsius).

Image: 
NASA/NRL

Infrared imagery from NASA's Aqua satellite revealed a stronger Tropical Storm Erick in the Eastern Pacific Ocean. Satellite imagery revealed two areas of very cold cloud tops indicating powerful thunderstorms as the storm is on the cusp of hurricane status.

Erick developed as Tropical Depression Six-E on Saturday, July 27, 2019. It formed about 1,215 miles (1,955 km) southwest of the southern tip of Baja California, Mexico. At 5:15 p.m. EDT that day, it strengthened into a tropical storm and was renamed Erick.

NASA's Aqua satellite used infrared light to analyze the strength of storms and found the bulk of them in the southern quadrant. Infrared data provides temperature information, and the strongest thunderstorms that reach high into the atmosphere have the coldest cloud top temperatures.

On July 29 at 6:35 a.m. EDT (1035 UTC), the Moderate Resolution Imaging Spectroradiometer, or MODIS, instrument that flies aboard NASA's Aqua satellite gathered infrared data on Tropical Storm Erick. The strongest storms were around the center and in a band of thunderstorms southwest of the center, where cloud top temperatures were as cold as minus 70 degrees Fahrenheit (minus 56.6 Celsius).

Cloud top temperatures that cold indicate strong storms with the potential to generate heavy rainfall. Those strongest storms were south and southeast of the center of the elongated circulation. Recent microwave data reveal the development of an eye.

The National Hurricane Center or NHC said, "At 5 a.m. HST (Hawaii local time) (1500 UTC) on July 29, 2019, the center of Tropical Storm Erick was located near latitude 12.3 degrees north and longitude 136.9 degrees west. That's about 1,310 miles (2,110 km) east-southeast of Hilo, Hawaii."

Erick is moving toward the west near 17 mph (28 kph). A turn to the west-northwest and a slower forward speed are expected to start on Tuesday and continue through Wednesday. The estimated minimum central pressure is 991 millibars. Maximum sustained winds are near 70 mph (110 kph) with higher gusts. NHC said that the environment is currently favorable for intensification, and Erick is expected to become a hurricane at any time.

Erick could become a major hurricane on Tuesday, July 30, and a weakening trend is forecast to begin later in the week.

Credit: 
NASA/Goddard Space Flight Center

Ultra-thin layers of rust generate electricity from flowing water

There are many ways to generate electricity--batteries, solar panels, wind turbines, and hydroelectric dams, to name a few examples. And now there's rust.

New research conducted by scientists at Caltech and Northwestern University shows that thin films of rust--iron oxide--can generate electricity when saltwater flows over them. These films represent an entirely new way of generating electricity and could be used to develop new forms of sustainable power production.

Interactions between metal compounds and saltwater often generate electricity, but this is usually the result of a chemical reaction in which one or more compounds are converted to new compounds. Reactions like these are at work inside batteries.

In contrast, the phenomenon discovered by Tom Miller, Caltech professor of chemistry, and Franz Geiger, Dow Professor of Chemistry at Northwestern, does not involve chemical reactions, but rather converts the kinetic energy of flowing saltwater into electricity.

The phenomenon, the electrokinetic effect, has been observed before in thin films of graphene--sheets of carbon atoms arranged in a hexagonal lattice--and it is remarkably efficient. The effect is around 30 percent efficient at converting kinetic energy into electricity. For reference, the best solar panels are only about 20 percent efficient.

"A similar effect has been seen in some other materials. You can take a drop of saltwater and drag it across graphene and see some electricity generated," Miller says.

However, it is difficult to fabricate graphene films and scale them up to usable sizes. The iron oxide films discovered by Miller and Geiger are relatively easy to produce and scalable to larger sizes, Miller says.

"It's basically just rust on iron, so it's pretty easy to make in large areas," Miller says. "This is a more robust implementation of the thing seen in graphene."

Though rust will form on iron alloys on its own, the team needed to ensure it formed in a consistently thin layer. To do that, they used a process called physical vapor deposition (PVD), which turns normally solid materials, in this case iron oxide, into a vapor that condenses on a desired surface. PVD allowed them to create an iron oxide layer 10 nanometers thick, about 10 thousand times thinner than a human hair.

When they took that rust-coated iron and flowed saltwater solutions of varying concentrations over it, they found that it generated several tens of millivolts and several microamperes per square centimeter.

"For perspective, plates having an area of 10 square meters each would generate a few kilowatts per hour--enough for a standard US home," Miller says. "Of course, less demanding applications, including low-power devices in remote locations, are more promising in the near term."

The mechanism behind the electricity generation is complex, involving ion adsorption and desorption, but it essentially works like this: The ions present in saltwater attract electrons in the iron beneath the layer of rust. As the saltwater flows, so do those ions, and through that attractive force, they drag the electrons in the iron along with them, generating an electrical current.

Miller says this effect could be useful in specific scenarios where there are moving saline solutions, like in the ocean or the human body.

"For example, tidal energy, or things bobbing in the ocean, like buoys, could be used for passive electrical energy conversion," he says. "You have saltwater flowing in your veins in periodic pulses. That could be used to generate electricity for powering implants."

Credit: 
California Institute of Technology

Analysis reveals economic cost of Alzheimer's disease and dementia are 'tip of the iceberg'

image: Tip of the Iceberg: Measured Societal Costs Represent Only a Proportion of the Total Burden of Alzheimer's disease and Related Dementias (ADRD)

Image: 
El Hayek et al., J Alzheimers Dis. 2019 Jun 24. doi: 10.3233/JAD-190426. [Epub ahead of print]

A new research review highlighting the hidden costs of dementia suggests that traditional measures only show the 'tip of the iceberg' of the cost impact on society.

The analysis, from an international team of experts from academia, research institutes, health care organizations, consulting firms and Alzheimer's Research UK, looked at the true cost of Alzheimer's disease and Related Dementias (ADRD). The study found that socioeconomic costs such as the cost of healthcare for care partners/carers, reduced quality of life and "hidden" costs that stack up before diagnosis are overlooked by current estimates of the condition's economic impact.

The authors argue that better data on the extent of these costs is vital for informing future dementia policy. The paper is published on Tuesday 23 July in the Journal of Alzheimer's Disease.

Some of dementia's hidden costs explored in the analysis include:

People developing other health conditions, such as anxiety or depression, as a result of caring for someone with dementia.

Families forced to cut back on spending or to use savings to support their loved ones.

Reduced quality of life for people with dementia and their care partners/carers.

Costs that are incurred in the years before a diagnosis of impairment or dementia is made.

Currently, dementia is estimated to cost the US economy $290bn a year; the UK economy £26bn a year, and $1tn globally. A team of experts from institutions in the UK, Canada, Spain and the US* reviewed existing evidence to assess what different costs are associated with dementia and analyse how these costs are measured. The review included studies that measured:

direct costs (the cost of health care and paid-for social care),

indirect costs (such as informal care provided by loved ones and reduced productivity from people being unable to work), and

'intangible costs' (including the reduced quality of life experienced by people with dementia and their care partners/carers).

Current estimates of ADRD/dementia only look at direct and indirect costs.

The analysis showed that most official estimates of the cost of dementia fail to capture a number of hidden costs. For example, several studies showed that people who are caring for someone with dementia may be more likely to develop conditions such as depression, anxiety and hypertension - with each of these conditions carrying their own cost of care.

Many studies also showed that families can find themselves forced to cut back on expenditure or dip into savings in order to support a loved one with dementia.

The review also found that current estimates rarely account for the reduced quality of life experienced by people with dementia and their care partners/carers, with no standard measure for capturing these changes - making it harder to assess whether treatments and policies are improving people's lives.

The diseases that cause dementia are often not diagnosed in the early stages, symptoms are often misattributed, misdiagnosed or ignored, and many studies showed that the socioeconomic costs of these diseases typically begin in the years before a diagnosis is made. These could include the cost of diagnostic tests to rule out other conditions as symptoms start to manifest, higher costs of managing other health conditions that may be worsened by the person's dementia, and declining quality of life.

Alireza Atri, MD, PhD, senior and corresponding author of the study and Director of the Banner Sun Health Research Institute in Arizona, said: "We found staggering inconsistencies between how costs of dementia are calculated across studies and our analysis strongly supports that current estimates fail to recognise the true costs of the diseases, such as Alzheimer's, that cause dementia. Some studies have estimated that out of pocket expenses for people with dementia are up to one third of their household wealth in the final five years of their life, and that caregivers have healthcare costs that are twice as high as non-caregivers. We also found evidence that costs begin rising up to 10 years prior to diagnosis - we need to better measure and factor all these into future societal cost estimates."

Dr. Atri went on to say: "We must come together to develop and implement comprehensive national dementia prevention, treatment, care, workforce education and training, and research action plans that better measure societal impact; to promote private-public partnerships; and to focus priorities, policies and plans to combat ADRD. To succeed, we must do so not just through local and national governments and organizations, but also in a coordinated way internationally, with governments, industry, and through federations and health organizations such as Alzheimer's Disease International (the federation of national Alzheimer's associations) and WHO, in order to drive policy and expand on recent steps forward such as the Osaka G20 Summit declaration that recognized dementia as a global health priority. This is not a "their problem", it is an "our problem" - we are all stakeholders, the bell tolls for all of us, and we have to act now - not in isolation but in a coordinated, strategic, systematic, and resolute way, locally and globally; we can't afford not to."

Other study co-authors resonated and expanded on Dr. Atri's statement and the study findings.

Strategy consulting firm Shift Health's Dr. Youssef Hayek said: "This study exposes major gaps in our understanding of the full burden of ADRD/dementia to society and challenges us to think differently about how we assess, value and prioritize strategies to mitigate this looming public health crisis--including innovative social care and support systems for patients and families, the integration of novel biomarkers for early diagnosis into routine clinical practice, and the introduction of future disease-modifying therapies."

Dr. José Luis Molinuevo, Scientific Director of Spain's BarcelonaBeta Brain Research Center, stressed the importance of dementia support infrastructure and prevention, stating "that a properly estimated cost of Alzheimer's disease and related dementias will result in an enormous impact on our currently fragile support system and should also reflect on the value of implementing potential preventive strategies."

Taking a holistic and global view of dementia, Professor Clive Ballard, Executive Dean at the University of Exeter Medical School, stated that "Dementia is a complex condition that often occurs with additional "co-morbid" physical and mental health conditions. To calculate the true needs for people with dementia and the true cost, we must also consider key co-morbidities such as falls, fractures, frailty and the increased risk of infections as well as mental health conditions such as depression, agitation and psychosis. These are huge issues in terms of both high cost and devastating impact on individuals, and they must be considered holistically. The full toll of dementia further highlights the urgency to take global action now."

Hilary Evans, Chief Executive of Alzheimer's Research UK who fed into the review, said:

"This work highlights just some of the challenges that dementia brings for families, and shows that the impact of the condition often starts years before a diagnosis is made. For anyone with experience of dementia these findings won't come as a surprise, yet these impacts are not currently reflected in official estimates of the cost of dementia.

"It's critical that we begin to acknowledge these costs and find better ways of measuring them in order to provide a complete picture of dementia's impact. Without this, policymakers cannot make informed decisions or understand whether policies designed to meet this challenge are working.

"We must also continue investing in research to find better treatments and improve the way diseases like Alzheimer's are diagnosed. Research offers our best hope for ending the fear, harm and heartbreak of dementia and transforming people's lives."

Credit: 
IOS Press

New study reveals how TB bacteria may survive in human tissues

Carbon monoxide is an infamous and silent killer that can cause death in minutes. But while it is deadly for us, some microorganisms actually thrive on it, by using this gas as an energy source.

Associate Professor Chris Greening and his team of microbiologists from the School of Biological Sciences, Monash University, have discovered that some pathogens depend on carbon monoxide to survive when other nutrients are not available.

The research focused on mycobacteria, a bacterial group that causes killer diseases such as tuberculosis (TB), leprosy, and Buruli ulcer. During infection, these microbes are in a hostile environment with very few nutrients to go around, meaning that anything they can do to get extra energy can be hugely advantageous.

"When microbial cells are starved of their preferred energy sources, one way they subsist is by scavenging gases such as carbon monoxide," said Monash PhD student Paul Cordero, the co-lead author of the study.

"They breakdown this gas into its fundamental components, which provide the cells just enough energy to persist."

The researchers showed that an enzyme called carbon monoxide dehydrogenase is what allows mycobacteria to obtain energy from this gas. While the energy gained is not enough to allow for growth, the researchers found that carbon monoxide consumption allowed mycobacteria to survive for longer periods of time.

The study, which was supported by the Australian Research Council (ARC) and National Health and Medical Research Council (NHMRC), was published today in the renowned ISME Journal.

The group's findings suggest that Mycobacterium tuberculosis might be able to survive inside the human host by using carbon monoxide. Present in humans since ancient times, TB remains a major global health burden. This bacterium infects one quarter of the world's population and is now the leading cause of death from infectious disease worldwide.

"It has been known for years that Mycobacterium tuberculosis can use carbon monoxide, but nobody knew why," said fellow study co-first author, PhD candidate Katie Bayly.

"Based on these findings, we predict that it uses this gas to its advantage to persist inside human lungs," she said.

"Our immune cells actually make small amounts of carbon monoxide, which the bacterium may be able to use as an energy supply while dormant."

Dormancy allows Mycobacterium tuberculosis to stay alive inside patients for years. This dormant infection usually has no symptoms, but can advance into full-blown TB, for example when people become immuno-compromised.

This new discovery on the survival mechanism of mycobacteria could pave the way for new strategies to better fight communicable diseases such as tuberculosis.

Credit: 
Monash University

Scientists film molecular rotation

video: The movie, assembled from the individual snapshots, covers about 1.5 rotational periods.

Image: 
DESY, Evangelos Karamatskos

Scientists have used precisely tuned pulses of laser light to film the ultrafast rotation of a molecule. The resulting "molecular movie" tracks one and a half revolutions of carbonyl sulphide (OCS) - a rod-shaped molecule consisting of one oxygen, one carbon and one sulphur atom - taking place within 125 trillionths of a second, at a high temporal and spatial resolution. The team headed by DESY's Jochen Küpper from the Center for Free-Electron Laser Science (CFEL) and Arnaud Rouzée from the Max Born Institute in Berlin are presenting their findings in the journal Nature Communications. CFEL is a cooperation of DESY, the Max Planck Society and Universität Hamburg.

"Molecular physics has long dreamed of capturing the ultrafast motion of atoms during dynamic processes on film," explains Küpper, who is also a professor at the University of Hamburg. This is by no means simple, however. Because in the realm of molecules, you normally need high-energy radiation with a wavelength of the order of the size of an atom in order to be able to see details. So Küpper´s team took a different approach: they used two pulses of infrared laser light which were precisely tuned to each other and separated by 38 trillionths of a second (picoseconds), to set the carbonyl sulphide molecules spinning rapidly in unison (i.e. coherently). They then used a further laser pulse, having a longer wavelength, to determine the position of the molecules at intervals of around 0.2 trillionths of a second each. "Since this diagnostic laser pulse destroys the molecules, the experiment had to be restarted again for each snapshot," reports Evangelos Karamatskos, the principal author of the study from CFEL.

Altogether, the scientists took 651 pictures covering one and a half periods of rotation of the molecule. Assembled sequentially, the pictures produced a 125 picosecond film of the molecule's rotation. The carbonyl sulphide molecule takes about 82 trillionths of a second, i.e. 0.000 000 000 082 seconds, to complete one whole revolution. "It would be wrong to think of its motion as being like that of a rotating stick, though," says Küpper. "The processes we are observing here are governed by quantum mechanics. On this scale, very small objects like atoms and molecules behave differently from the everyday objects in our surroundings. The position and momentum of a molecule cannot be determined simultaneously with the highest precision; you can only define a certain probability of finding the molecule in a specific place at a particular point in time."

The peculiar features of quantum mechanics can be seen in several of the movie's many images, in which the molecule does not simply point in one direction, but in various different directions at the same time - each with a different probability (see for example the 3 o'clock position in the figure). "It is precisely those directions and probabilities that we imaged experimentally in this study," adds Rouzée. "From the fact that these individual images start to repeat after about 82 picoseconds, we can deduce the period of rotation of a carbonyl sulphide molecule."
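That ~82 picosecond figure can be sanity-checked with the textbook relation for a linear rotor, whose quantum "revival" period is 1/(2B), where B is the molecule's rotational constant. The value of B used below is an approximate literature value for OCS adopted here as an assumption, not a number taken from this article.

```python
# Back-of-the-envelope check of the quoted ~82 ps rotation period of OCS.
B_OCS_HZ = 6.08e9                      # rotational constant of carbonyl sulphide, approx. literature value
T_rev = 1.0 / (2.0 * B_OCS_HZ)         # revival period of a linear rotor
print(f"Rotational revival period: {T_rev * 1e12:.1f} ps")   # ~82 ps
```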

The scientists believe that their method can also be used for other molecules and processes, for example to study the internal twisting, i.e., torsion, of molecules or chiral compounds, that are compounds that exist in two forms, which are mirror images of each other - much like the right and left hands of a human being. "We recorded a high-resolution molecular movie of the ultrafast rotation of carbonyl sulphide as a pilot project," says Karamatskos, summarising the experiment. "The level of detail we were able to achieve indicates that our method could be used to produce instructive films about the dynamics of other processes and molecules."

Credit: 
Deutsches Elektronen-Synchrotron DESY

Support needed for foster carers of LGBTQ young people

More support is needed for foster carers looking after LGBTQ young people, according to new research led by the University of East Anglia (UEA).

Findings from the first ever study of LGBTQ young people in care in England found good examples of foster carers being available and sensitive, and offering acceptance and membership of their family.

However, there was also evidence of foster carers struggling in some areas in relation to meeting the needs of LGBTQ young people, whether because of their lack of knowledge, skills and support or because of ambivalence, discomfort or, in a few cases, homophobia or transphobia among foster family members.

Although there were some positive descriptions of the support available from social workers, most carers felt alone with the question of how best to support LGBTQ young people. This lack of support also meant that negative attitudes and approaches could go unchallenged.

The research, conducted by UEA's Centre for Research on Children and Families, focused on the nature of foster carers' experiences and perspectives on caring for LGBTQ young people. It involved interviews with 26 carers, who described the importance of offering LGBTQ young people not only the nurturing relationships that all children in care need, but helping young people manage stigma and other challenges associated with minority sexual orientation and gender identity.

The findings - published in the journal Child and Family Social Work as part of a special issue on fostering teenagers - are from a wider study of the experiences of LGBTQ young people in care, funded by the Economic and Social Research Council and led by Dr Jeanette Cossar from UEA's School of Social Work. This also included a survey of local authorities in England and interviews with 46 LGBTQ young people who were or had been in care.

Gillian Schofield, Professor of Child and Family Social Work at UEA and lead author of the foster carer paper, said the experiences and needs of LGBTQ young people in care had been overlooked in England, both in policy and research.

"LGBTQ young people in foster families are likely to have many of the same needs as other fostered adolescents, but they also face additional challenges," said Prof Schofield. "Their emotional, psychological and social well-being depends on how they manage, and are supported in managing, both the difficult histories they share with other children in care and their minority sexual orientation and gender identities.

"Understanding caregiving roles and relationships for LGBTQ young people in care has important implications for recruiting, training, matching and supporting foster carers to care for LGBTQ young people effectively, to ensure their needs are met. Our work highlights one of the key areas in fostering that professionals supporting young people in foster care and training and supporting foster carers need to be better informed about."

For LGBTQ young people, trust in caregivers was often said by carers to have been damaged by previous adverse experiences that included abuse, neglect, separation and loss. For some this had been compounded by moves linked to rejection of their sexual orientation or trans identity by birth, foster or adoptive parents.

Carers described needing to be sensitive to the difficult choices facing young people about how open they wanted to be about sexuality or gender, especially when they were anxious about being rejected or moved. They reported particular dilemmas in supporting young people in care to feel confident in expressing their LGBTQ identities while simultaneously protecting them and helping them to protect themselves from bullying.

Carers talked with pride of the way in which young people treated them as parents, and often recognised the additional element of security that accepting young people's LGBTQ identity contributed to a sense of family belonging. Where foster carers had helped LGBTQ young people to feel fully accepted as family members, this gave them greater confidence in other areas of their lives. However, it was also important for foster carers to promote positive relationships between young people and their birth families.

A number of implications for practice emerged from the interviews with foster carers, and were supported by other data from the project from young people and social workers.

Prof Schofield said: "At the initial assessment, training and preparation stage, it will be important for fostering agencies to explore prospective foster carers' values and attitudes in relation to LGBTQ issues.

"Key also to ensuring high quality foster care will be the quality of the work of supervising social workers and children's social workers. Foster carers in this study felt that they needed social workers to offer better information, for example in relation to LGBTQ support groups or gender identity services."

Carers also needed clearer policies and better support to manage the day-to-day decisions within the care system, whether regarding decisions over sleepovers or managing inter-professional meetings such as statutory reviews. Better training for social workers about the experiences and needs of LGBTQ young people and their carers is also essential, both in qualifying and post-qualifying programmes.

'Providing a secure base for LGBTQ young people in foster care: the role of foster carers', Gillian Schofield, Jeanette Cossar, Emma Ward, Birgit Larsson and Pippa Belderson, is published in Child and Family Social Work.

Credit: 
University of East Anglia

'Digital twins' -- An aid to tailor medication to individual patients

image: The researchers analysed T cells from patients with thirteen diseases.

Image: 
Magnus Johansson

Advanced computer models of diseases can be used to improve diagnosis and treatment. The goal is to develop the models into "digital twins" of individual patients. Those twins may help to computationally identify and test the best medication before actually treating a patient. The models are the result of an international study, published in the open access journal Genome Medicine.

One of the greatest problems in medical care is that medication is ineffective in 40-70% of patients with common diseases. One important reason is that diseases are seldom caused by a single, easily treatable "fault". Instead, most diseases depend on altered interactions between thousands of genes in many different cell types. Another reason is that those interactions may differ between patients with the same diagnosis. There is a wide gap between this complexity and modern health care. An international research team aimed to bridge this gap by constructing computational disease models of the altered gene interactions across many cell types.

"Our aim is to develop those models into 'digital twins' of individual patients' diseases in order to tailor medication to each patient. Ideally, each twin will be computationally matched with and treated with thousands of drugs, before actually selecting the best drug to treat the patient", says Dr Mikael Benson, professor at Linköping University, Sweden, who led the study.

The researchers started by developing methods to construct digital twins, using a mouse model of human rheumatoid arthritis. They used a technique, single-cell RNA sequencing, to determine all gene activity in each of thousands of individual cells from the sick mouse joints. In order to construct computer models of all the data, the researchers used network analyses. "Networks can be used to describe and analyse most complex systems", says Dr Benson. "A simple example is a soccer team, in which the players are connected into a network based on their passes. The player that exchanges passes with most other players may be most important". Similar principles were applied to construct the mouse "twins", as well as to identify the most important cell type. That cell type was computationally matched with thousands of drugs. Finally, the researchers showed that the "best" drug could be used to treat and cure the sick mice.
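To make the soccer analogy concrete, the sketch below builds a small weighted network and scores each node by its total connection strength, the same kind of centrality reasoning used to single out the most important cell type. The node names and edge weights are invented for illustration and do not come from the study.

```python
import networkx as nx

# Nodes are players (or cell types); edge weights count passes (or interaction strength).
G = nx.Graph()
interactions = [
    ("T_cell", "B_cell", 12),
    ("T_cell", "Macrophage", 9),
    ("T_cell", "Fibroblast", 7),
    ("B_cell", "Macrophage", 4),
    ("Macrophage", "Fibroblast", 3),
]
G.add_weighted_edges_from(interactions)

# Weighted degree: the total strength of each node's connections.
strength = dict(G.degree(weight="weight"))
most_central = max(strength, key=strength.get)
print(strength)
print("Most connected node:", most_central)
```

The node with the largest weighted degree plays the role of the "player that exchanges passes with most other players" in Dr Benson's analogy.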

The study also demonstrated that it may be possible to use the computer models to diagnose disease in humans. The researchers focused on the same cell type that was used for drug identification. This cell type, T cells, plays an important role in the immune defence, and serves as a fingerprint of the whole digital twin. The researchers analysed T cells from patients with thirteen diseases, including autoimmune diseases, cardiovascular diseases and various types of cancer. The diagnostic fingerprints could be used not only to distinguish patients from healthy people, but also to distinguish most of the diseases from each other.

"Since T cells function as a sort of spy satellite, which is continuously surveying the body to discover and combat disease as early as possible, it may be possible to use this cell type for the early diagnosis of many different diseases", says Mikael Benson.

Credit: 
Linköping University

KIST develops technology for creating flexible sensors on topographic surfaces

image: A transfer-printed electrode that uses hydrogel and nano ink, produced by Hyunjung Yi of KIST's Post-Silicon Semiconductor Institute.

Image: 
Korea Institute of Science and Technology (KIST)

The Korea Institute of Science and Technology (KIST, president: Byung-gwon Lee) announced that Dr. Hyunjung Yi of the Post-Silicon Semiconductor Institute and her research team developed a transfer-printing** technology that uses hydrogel* and nano ink to easily create high-performance sensors on flexible substrates of diverse shapes and structures.

*Hydrogel: a three-dimensional hydrophilic polymer network that absorbs large amounts of water

**Transfer printing: a process for producing electrical devices in which electrodes are printed on a transfer mold and then transferred to a final substrate

Interest in wearable devices, including smartwatches and fitness bands, is increasing rapidly. With the scope of such devices expanding from types that are worn (like clothing) to those that are attached directly to the skin, there is growing demand for technologies that allow for the production of high-performance sensors on surfaces of various shapes and types.

Transfer printing works much like a temporary tattoo: just as pressing the sticker onto the skin and peeling away the paper backing leaves the image behind, this process creates a structure on one surface and then transfers it onto another. Its most notable advantage is that it largely avoids the difficulties of fabricating devices directly on thermally or chemically sensitive substrates, which is why transfer printing is widely used in the manufacture of flexible devices. Its primary disadvantage, however, is that current transfer printing processes can usually only be applied to substrates with flat surfaces.

The KIST team overcame these limitations by developing a simple and easy transfer printing process that allows for the creation of high-performance, flexible sensors on topographic surfaces with diverse features and textures.

Exploiting the porous*** and hydrophilic nature of hydrogels, the KIST team inkjet-printed an aqueous solution-based nano ink**** onto a hydrogel layer that had been solidified on a topographic surface. The surfactant and water in the nano ink passed quickly through the hydrogel's porous structure, leaving only the hydrophobic nanomaterial, whose particles are longer than the diameter of the hydrogel's pores, on the surface and thereby forming the desired electrode pattern.

***Porous: having many small holes on the surface or in the interior of a solid substance

****Aqueous solution-based nano ink: an ink in which a hydrophobic material is dispersed in water using a surfactant

The amount of nano ink used for this printing process was very small, allowing for the rapid formation of electrodes. Moreover, the electrical performance of the electrodes was outstanding, due to the high levels of purity and uniformity of the resulting nanonetworks. Also, because of the hydrophobic nature of the nanomaterial, there was an extremely low degree of interaction between it and the hydrogel, allowing for the easy transfer of the electrodes to diverse topographic surfaces.

In particular, the KIST team developed a technology for transferring nanonetworks by employing a method that solidifies a moldable elastomeric fluid onto a hydrogel surface, enabling the easy creation of flexible electrodes even on substrates with rough surfaces. The team transferred nanoelectrodes directly onto a glove to create a modified sensor that can immediately detect finger movements. It also created a flexible, high-performance pressure sensor that can measure the pulse in the wrist.

Yi said, "The outcome of this study is a new and easy method for creating flexible, high-performance sensors on surfaces with diverse characteristics and structures. We expect that this study will be utilized in the many areas that require the application of high-performance materials onto flexible and/or non-traditional substrates, including digital healthcare, intelligent human-machine interfaces, medical engineering, and next-generation electrical materials."

Credit: 
National Research Council of Science & Technology

Larger ethnic communities help new refugees find work, Stanford research shows

Ethnic enclaves are often viewed as an obstacle to the integration of immigrants with natives in their new country. But it turns out that ethnic communities can help newly arrived refugees find work, according to a new Stanford study that analyzed a cohort of asylum seekers in Switzerland.

Researchers at the Stanford Immigration Policy Lab found that new refugees were more likely to become employed within their first five years if Swiss officials assigned them to live in an area with a larger community of people who share their nationality, ethnicity or language.

"Our study shows that ethnic networks can be beneficial for the economic status of refugees at least within the first few years of their arrival in the host country," said Jens Hainmueller, a professor of political science at Stanford and a co-author on the research paper, published July 29 in the Proceedings of the National Academy of Sciences. Hainmueller is also a faculty co-director of the Stanford Immigration Policy Lab.

The paper was co-authored by Linna Martén, a researcher at Uppsala University, and Dominik Hangartner, an associate professor of public policy at ETH Zurich in Switzerland and a co-director of the Immigration Policy Lab, which has a branch in Zurich.

Digging into Swiss records

Researchers analyzed government data on 8,590 asylum seekers who were granted temporary protection status when they arrived in Switzerland between 2008 and 2013. The data included five years of information on each refugee, including whether they found employment and in which industry.

In Switzerland, immigration officials randomly assign each new refugee to live in one of the country's 26 cantons, which are member states. The refugees' preferences typically are not considered as part of the process unless they have a family member already living in a particular canton. In addition, new refugees with temporary protection status cannot move outside of their assigned canton within their first five years in Switzerland, Hainmueller said.

Analysis of the data revealed that no more than 40 percent of refugees had a job during their fifth year in Switzerland. But those refugees who were assigned to cantons with a larger ethnic network were more likely to have found work.

If a group of new refugees was assigned to a canton with a large share of others from their country, about 20 percent of those new arrivals became employed within three years of living in the country. But if that same group was settled in an area with a small share of co-nationals, only 14 percent of the new arrivals had a job three years later.

"Given that refugee employment is generally very low, the increase in employment is an important effect," Hainmueller said. "This is just one piece of a bigger puzzle on what helps refugees integrate within their host country."

Informing asylum, refugee policies

In European countries, many people view ethnic enclaves as a result of a failure to integrate immigrants with natives. But those negative perceptions are not grounded in evidence, Hainmueller said.

In part because of this general concern, officials in countries such as Sweden, Denmark and Switzerland have designed policies for dispersing newly arrived refugees to avoid the creation of ethnic enclaves.

"What this research suggests is that those dispersal policies come with some costs, in terms of new refugees not benefitting from the positive effects of ethnic networks," Hainmueller said. "It doesn't mean that these policies are generally bad, but it does highlight that there is one potential benefit of geographically concentrated ethnic networks that European officials are not capturing."

In the U.S., people who arrive as part of the refugee resettlement program, which includes an extensive background check conducted through the United Nations Refugee Agency, are assigned to live in areas based on where there is available space. Unlike in some European countries, new refugees are allowed to move after their initial settlement.

"U.S. officials and the public have a slightly more positive view of ethnic enclaves because ethnic neighborhoods formed at the foundation of this country," Hainmueller said.

The new study is a part of a bigger project at the Immigration Policy Lab that aims to examine how the asylum process and its implementation affect the subsequent integration of refugees both in the U.S. and Europe, Hainmueller said.

"We are interested in a lot of different asylum policy choices, such as how asylum seekers are geographically located and what rules govern their access to the labor market," Hainmueller said. "There are a lot of rules that affect refugees and asylum seekers, and they aren't necessarily grounded in solid evidence. Our research agenda is to try to quantify the impacts of those policy choices and point the way to policies that might work better."

Credit: 
Stanford University