Tech

Do we tend to centre our Instagram selfies on our left eye?

A new study suggests that it may not just be artists who make their eyes the centre-point of their own original work.

New research suggests that we tend to compose 'selfies' that horizontally centre on one of our eyes, particularly the left.

The study authors from City, University of London, the University of Parma, and the University of Liverpool speculate that this alignment is because our eyes provide a wealth of information about our gaze direction and what we are paying attention to, which may in turn be used to share important information with the viewer about our mood and what we are thinking about.

Previous research has suggested that painters apply the same eye-centring principle in their portraits of others and of themselves, whether knowingly or not [1], while other research has argued that the eye-centring phenomenon may just be a statistical artefact caused by random processes [2].

In the current study, the researchers analysed over 4,000 Instagram 'selfie' photos available from the website http://www.selfiecity.net, with an equal proportion taken in the major cities of New York (US), São Paulo (Brazil), Moscow (Russia), Berlin (Germany) and Bangkok (Thailand).

The study subdivided the images into 'standard selfies', taken at arm's length using a camera-phone or similar digital device, and 'mirror selfies', taken of the creator's reflection in a mirror with the digital device in shot. This distinction is important, partly because it is needed to determine whether people have a left or right bias when composing their selfies.

The study did not include photos commonly known as 'wefies', 'usies' or 'groupies' (i.e. with multiple friends in the shot), those taken next to pets or life-sized dolls, or self-portraits taken from unnatural angles and positions (such as with the head cocked at an extreme angle, or a full body selfie).

For each selfie the horizontal position of each eye relative to the centre-line of the image was measured, with the distance and direction of the closest eye recorded.
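As a minimal sketch of the kind of measurement just described, assuming eye positions are already available as pixel coordinates from any face-landmark detector (the function and variable names below are illustrative, not the study's actual code):

```python
# Minimal sketch of the eye-centring measurement described above.
# Eye positions are assumed to come from any face-landmark detector;
# names and values are illustrative, not the study's code.
# Note: whether the image-space "left" eye is the subject's anatomical
# left eye depends on whether the photo is mirrored, which is why the
# standard/mirror selfie distinction above matters.

def closest_eye_offset(image_width, eye_x_positions):
    """Return (eye_label, signed offset in pixels) of the eye nearest the
    vertical centre-line; negative offsets lie left of centre."""
    centre_x = image_width / 2.0
    offsets = {label: x - centre_x for label, x in eye_x_positions.items()}
    closest = min(offsets, key=lambda label: abs(offsets[label]))
    return closest, offsets[closest]

# Example: a 1080-pixel-wide selfie with eyes at x = 510 and x = 650
print(closest_eye_offset(1080, {"image_left_eye": 510, "image_right_eye": 650}))
# -> ('image_left_eye', -30.0): the nearer eye sits 30 px left of centre
```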

Statistical analyses applied to this information showed that the selfie creators tended to centre one of their eyes slightly to the left of centre of the selfie, and usually the left eye.

Interestingly, this centring tendency varied less among selfie subjects than expected if the phenomenon happened by chance, and was seen consistently across all the cities sampled in the study.

Furthermore, the slight centring of the eye to the left is consistent with a phenomenon observed in neurologically healthy people known as 'pseudoneglect', in which spatial attention tends to be shifted to the left. This is shown, for example, when people are asked to indicate the middle of a horizontal line drawn on a sheet of paper; on average, the mark is made slightly to the left [3].

The fact that the left eye was more commonly centred than the right is also consistent with some previous research suggesting that selfie-takers and artists of self-portraits prefer showing more of their left cheek [4].

The authors do however urge caution in interpretation of the findings of left-right bias due to limitations of the study, including the possibility of some of the selfie creators 'left-right' flipping their images before posting them.

Professor Christopher Tyler, Professor of Optometry and Visual Sciences at City, University of London and a collaborator in the study, said:

"The core result of this study was to replicate my earlier finding that painters tend to centre one eye in portraits, throughout the centuries1, in a modern version of which the selfie takers are simultaneously both the artists and the subjects of the portrait.

"This centring tendency opposes the alternative possibility of placing the symmetric face symmetrically in the frame, which would avoid leaving the non-centred eye 'out in the cold'. These results are important for understanding the perceptual principles in operation as these diverse 'portraitists' choose the framing and composition of their pictures.

"The tendency to centre a feature of particular interest in the frame presumably derives from the fact that we humans have a single focal region of high resolution in the centre of our retinas, the fovea, providing a natural point of attraction for this largely unsuspected tendency in composing the portrait."

Credit: 
City St George’s, University of London

Researchers unveil new volcanic eruption forecasting technique

image: University of Illinois geologists Jack Albright, left, and professor Patricia Gregg are part of a team that has developed new computer models to help researchers better forecast volcanic eruptions.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- Volcanic eruptions and their ash clouds pose a significant hazard to population centers and air travel, especially eruptions that show few or no signs of unrest beforehand. Geologists are now using a technique traditionally used in weather and climate forecasting to develop new eruption forecasting models. By testing whether the models are able to capture the likelihood of past eruptions, the researchers are making strides in the science of volcanic forecasting.

The study, published in the journal Geophysical Research Letters, examined the eruption history of the Okmok volcano in Alaska. In 2008, a large eruption produced an ash plume that extended approximately 1 mile into the sky over the Aleutian Islands - posing a significant hazard to aircraft engines along a route that transports roughly 50,000 people between Asia and North America each day, the researchers said.

"The 2008 eruption of Okmok came as a bit of surprise," said University of Illinois graduate student and lead author Jack Albright. "After an eruption that occurred in 1997, there were periods of slight unrest, but very little seismicity or other eruption precursors. In order to develop better forecasting, it is crucial to understand volcanic eruptions that deviate from the norm."

Geologists typically forecast eruptions by looking for established patterns of preeruption unrest such as earthquake activity, ground swelling and gas release, the researchers said. Volcanoes like Okmok, however, don't seem to follow these established patterns.

To build and test new models, the team utilized a statistical data analysis technique developed after World War II called Kalman filtering.

"The version of Kalman filtering that we used for our study was updated in 1996 and has continued to be used in weather and climate forecasting, as well as physical oceanography," said geology professor Patricia Gregg, a co-author of the study that included collaborators from Southern Methodist University and Michigan State University. "We are the first group to use the updated method in volcanology, however, and it turns out that this technique works well for the unique unrest that led up to Okmok's 2008 eruption."

One of those unique attributes is the lack of increased seismicity before the eruption, the researchers said. In a typical preeruption sequence, it is hypothesized that the reservoir under the volcano stays the same size as it fills with magma and hot gases. That filling causes pressure in the chamber to increase and the surrounding rocks fracture and move, causing earthquakes.

"In the 2008 eruption, it appears that the magma chamber grew larger to accommodate the increasing pressure, so we did not see the precursor seismic activity we would expect," Albright said. "By looking back in time with our models, or hindcasting, we can now observe is that stress had been building up in the rocks around the chamber for weeks, and the growth of the magma system ultimately led to its failure and eruption."

This type of backward and forward modeling allows researchers to watch a volcanic system evolve over time. "While we stopped our analysis after the 2008 eruption, we are now able to propagate this new model forward in time, bring it to present day, and forecast where Okmok volcano is heading next," Gregg said.

The researchers posit that these models will continue to find other less-recognized eruption precursors, but acknowledge that every volcano is different and that the models must be tailored to fit each unique system.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

The secret strength of gnashing teeth

image: Ghosh and his team use several models to study how cracks form in glass; this animation reveals how a crack can be contained within a softer defect until it reaches the breaking point in the more brittle materials.

Image: 
Susanta Ghosh/Michigan Tech

The strength of teeth is told on the scale of millimeters. Porcelain smiles are kind of like ceramics -- except that while china plates shatter when smashed against each other, our teeth don't, and it's because they are full of defects.

Those defects are what inspired research led by Susanta Ghosh, assistant professor in the Department of Mechanical Engineering-Engineering Mechanics at Michigan Technological University. The work came out recently in the journal Mechanics of Materials. Along with a team of dedicated graduate students -- Upendra Yadav, Mark Coldren and Praveen Bulusu -- and fellow mechanical engineer Trisha Sain, Ghosh examined what's called the microarchitecture of brittle materials like glass and ceramics.

"Since the time of alchemists people have tried to create new materials," Ghosh said. "What they did was at the chemical level and we work at the microscale. Changing the geometries -- the microarchitecture -- of a material is a new paradigm and opens up many new possibilities because we're working with well-known materials."

Glass is one such material. Making stronger glass brings us back to teeth -- and seashells. On the micro level, the primary hard and brittle components of teeth and shells have weak interfaces or defects. These interfaces are filled with soft polymers. As teeth gnash and shells bump, the soft spots cushion the hard plates, letting them slide past one another. Under further deformation, they get interlocked like hook-and-loop fasteners or Velcro, thus carrying huge loads. But while chewing, no one would be able to see the shape of a tooth change with the naked eye. The shifting microarchitecture happens on the scale of microns, and its interlocking structure rebounds until a sticky caramel or rogue popcorn kernel pushes the sliding plates to the breaking point.

That breaking point is what Ghosh studies. Researchers in the field have found in experiments that adding small defects to glass can increase the strength of the material 200 times over. That means that the soft defects slow down the failure, guide the propagation of cracks, and increase the energy absorption in the brittle material.

"The failure process is irreversible and complicated because the architectures that trap the crack through a predetermined path can be curved and complex," Ghosh said. "The models we work with try to describe fracture propagation and the contact mechanics at the interface between two hard-brittle building blocks."

Ghosh's team developed two models. The first uses finite element modeling (FEM) and is detailed and highly accurate, but expensive. The second is surprisingly accurate, though less so than FEM techniques, and is much cheaper to calculate.

FEM is a numerical model that takes apart a complex whole by evaluating separate pieces -- called finite elements -- then puts everything back together again using the calculus of variations. Humpty Dumpty and all the king's men would have liked FEM, but it's no quick roadside trick. To run such complex calculations requires a supercomputer, like Superior at Michigan Tech, and ensuring that the right inputs get plugged in takes diligence, patience and a keen eye for coding detail. Using FEM for super strong glass means modeling all the possible interactions between the material's hard plates and soft spots. Analytical modeling offers an alternative.

"We wanted a simple, approximate model to describe the material," Ghosh said, explaining the team used more basic math equations than the FEM calculations to outline and describe the shapes within the material and how they might interact. "Of course, an experiment is the ultimate test, but more efficient modeling helps us speed up the development process and save money by focusing on materials that work well in the models."

Both the FEM and analytical microarchitecture modeling from Ghosh's lab can help make ceramics, biomedical implants and the glass in buildings as tough as our teeth.

Credit: 
Michigan Technological University

Addressing food insecurity in health care settings

Many health care settings are exploring ways to reduce patient food insecurity, but there is little rigorously conducted research in this area. A review of articles covering food insecurity interventions in health care settings from 2000-2018 found that interventions, whether focused on referrals or on direct provision of food or vouchers, suffered from poor follow-up, a general lack of comparison groups, and limited statistical power and generalizability. Given the clear and convincing evidence that food insecurity has an adverse impact on health and well-being across the life course, more research is needed to better establish what makes for effective food interventions.

Credit: 
American Academy of Family Physicians

ADHD medication: How much is too much for a hyperactive child?

When children with ADHD don't respond well to methylphenidate (MPH, also known as Ritalin), doctors often increase the dose. Now a new review shows that increasing the dose may not always be the best option, as it may have no effect on some of the functional impairments associated with ADHD. The researchers' caution against increasing the dose is based on findings that the additional benefit may only be observed for behavioural symptoms (such as reduced inattention and/or hyperactivity/impulsivity) and not for the child's ability to control their impulses. This work is presented at the ECNP Conference in Copenhagen.

Attention-Deficit/Hyperactivity Disorder (ADHD) is the most common childhood-onset psychiatric disorder, characterized by symptoms such as inattention, hyperactivity and impulsivity. Worldwide, around 5% of children and adolescents suffer from ADHD.

ADHD is a complex condition comprising both behavioural and neurocognitive symptoms, but diagnosis requires only that a patient exhibit at least six behavioural symptoms. You can see a list of these symptoms at: https://www.nhs.uk/conditions/attention-deficit-hyperactivity-disorder-adhd/symptoms/. Treatment is normally judged on how well these behavioural symptoms improve. However, children with ADHD can also be characterized by looking at functional impairments such as neurocognitive functioning, including inhibitory control, which is a measure of how well they keep their impulsiveness under control.

Methylphenidate (MPH) has been commonly used as a first line medication to treat children with ADHD since the 1990s. It is generally effective and well tolerated, but around 30% of children taking MPH don't respond to standard doses, often leading doctors to consider increasing the dose.

Like all drugs, MPH carries the risk of side effects, which may become more significant at increased dose and with long-term use. These side effects include growth retardation and difficulty in gaining weight: 3 years of MPH use can cause a child to be 2 cm shorter and 2.7 kg lighter than normal.

To understand and distil the effects of the drug on children with ADHD, Karen Vertessen (MD and PhD student at the Vrije Universiteit Amsterdam) and colleagues undertook a review of all the scientific literature (a meta-analysis) relating to dose effects of MPH on inhibitory control (an aspect of impulsiveness) in children and adolescents.

They identified 18 studies, comprising in total 606 subjects with ADHD, and were able to classify the MPH doses reported as low, medium or high. Results showed that a medium dose of MPH had the strongest beneficial effect on inhibitory control. However, increasing the dose beyond the medium level did not make the drug work more effectively.

Karen Vertessen said, "Scientifically, this is an interesting result. Generally, high doses of MPH do not help the child or adolescent keep their inhibitions under better control, although an increased dose, in general, does have a greater effect on the core behavioral symptoms of ADHD.

Even though inhibitory control is just one aspect of impulsivity, we suggest that medically we need to be cautious about simply increasing the dose when a child does not respond immediately to the drug. Children are more vulnerable than adults in these cases, especially since they will be just beginning treatment, and many treatment variables will still need to be established. If clinicians decide to start therapy with MPH, they need to keep a close eye on the patient and objectively evaluate every dose, to make sure that a higher dose is actually having an effect. Current ADHD evaluation uses only behavioral outcomes, whereas we suggest adding neurocognitive outcomes to this evaluation, given that these outcomes are important for, among other things, academic functioning. In other words, checking whether MPH is improving inhibitory control might allow us to see if increasing the dose makes sense. To see to what extent these findings might have a clinical impact, we are currently investigating the other most relevant neurocognitive factors related to ADHD".

Commenting, Dr Kerstin von Plessen (Centre Hospitalier Universitaire Vaudois, Lausanne) said:

"This is an elegant and highly relevant study, which sheds light on an interesting phenomenon which has not received sufficient attention up to now. However the study does not address the question why some children receive this higher dosage. This is probably due to the lower dose having a lesser effect. This means that the findings agree with the clinical reality telling us that children, who do not respond sufficiently to the regular dosages of MPH, require a second more comprehensive diagnostic examination before increasing the medication. In addition, not all children respond to MPH, and so other treatment options should also be explored. The conclusion of the study, that we should add neurocognitive tests to the evaluation, may be a highly useful option to further identify academic capacity and behaviour, but should not be a substitute for the clinical evaluation of impulsivity (inhibitory control) during any change of medication."

This is an independent comment: Dr von Plessen was not involved in this work.

Credit: 
European College of Neuropsychopharmacology

NASA estimates Hurricane Dorian's massive rainfall track

image: At one-day intervals, the image shows the distance that tropical-storm force (39 mph) winds extended from Hurricane Dorian's low-pressure center, as estimated by the National Hurricane Center. The Saffir-Simpson hurricane-intensity category is the number following the "H" in the label on the image. "TS" or "PT" indicate times when the storm was either at tropical storm strength or when the storm was categorized as post-tropical. Red circles over North Carolina indicate preliminary reports of tornadoes on Sept. 5.

Image: 
NASA Goddard

On Monday morning, September 9, Hurricane Dorian was a post-tropical storm after a mid-latitude weather front and cold seas had altered its tropical characteristics over the weekend. NASA compiled data on Hurricane Dorian and created a map that showed the heavy rainfall totals it left in its wake from the Bahamas to Canada.

On Saturday and Sunday, Sept. 7 and 8, Hurricane Dorian struck eastern Canada, causing wind damage and bringing heavy rainfall.  According to the Associated Press, a peak of 400,000 people were without power in Nova Scotia, Canada, because of Dorian.

At NASA's Goddard Space Flight Center in Greenbelt, Maryland, a graphic was produced that shows precipitation that fell during the almost two-week period from August 27 to the early hours of Sept. 9. The near-real-time rain estimates come from NASA's IMERG algorithm, which combines observations from a fleet of satellites, in near real time, to provide near-global estimates of precipitation every 30 minutes.

This year, NASA began running an improved version of the IMERG algorithm that does a better job estimating precipitation at high latitudes, specifically north of 60 degrees North latitude. The post-tropical remnants of Hurricane Dorian were approaching this cold region at the end of the analysis period. While the IMERG algorithm is still unable to estimate precipitation falling over ice-covered surfaces (such as Greenland), IMERG can now give a more complete picture of the water cycle in places such as Canada, which is, for the most part, free of snow cover at this time of year.
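IMERG itself merges observations from many satellites; the sketch below illustrates only the simpler downstream step implied here, accumulating half-hourly rain-rate fields into a storm-total map. The grid size and values are invented, not actual IMERG data:

```python
import numpy as np

# Accumulate half-hourly rain-rate fields (mm/hr) into a storm-total map
# and convert to inches. Grid size and values are invented; real IMERG
# products are global 0.1-degree grids delivered roughly every 30 minutes.

def storm_total_inches(rain_rate_mm_per_hr):
    """rain_rate_mm_per_hr: array of shape (n_timesteps, ny, nx),
    one field per 30-minute interval."""
    total_mm = rain_rate_mm_per_hr.sum(axis=0) * 0.5   # each step covers 0.5 hr
    return total_mm / 25.4                             # millimetres to inches

# Roughly 13 days of half-hourly fields (Aug. 27 - Sept. 9) on a toy grid
fields = np.random.gamma(shape=0.2, scale=2.0, size=(13 * 48, 60, 60))
totals = storm_total_inches(fields)
print(f"maximum accumulation on this toy grid: {totals.max():.1f} inches")
```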

In addition to rainfall totals, the map includes preliminary reports of tornadoes from 4:50 AM to 5:00 PM EDT on September 5 in North Carolina as provided by NOAA's Storm Prediction Center.

IMERG showed the largest rainfall amounts, more than 36 inches, over the Bahamas and in an area off the coast of northeastern Florida. Rainfall totals of 16 to 24 inches fell over several large areas off the U.S. East Coast: from South Carolina to the Bahamas, off the North Carolina coast, off the coasts of southern New Jersey, Delaware and Maryland, and off the New England states.

By combining NASA precipitation estimates with other data sources, we can gain a greater understanding of major storms that affect our planet.

On Monday, Sept. 9 at 0300 UTC (Sept. 8 at 11 p.m. EDT), NOAA's National Hurricane Center (NHC) issued the final advisory on Dorian. At that time, Dorian had moved into the Labrador Sea and its impacts on Newfoundland were beginning to subside. Post-tropical cyclone Dorian had maximum sustained winds near 50 knots (57 mph/93 kph). It was centered near 52.1 degrees north latitude and 53.4 degrees west longitude, about 375 miles north of Cape Race, Newfoundland, Canada. Dorian was speeding to the east-northeast at 21 knots. Minimum central pressure was 980 millibars.

On Sept. 9, additional rainfall from Dorian in far eastern Quebec, Newfoundland and Labrador is expected to be less than 1 inch. Meanwhile, life-threatening rip current and surf conditions are expected to affect the mid-Atlantic and New England coasts of the U.S., as well as the coast of Atlantic Canada.

The NHC said the cyclone will continue into the open Atlantic, where it will dissipate south of Greenland.

Credit: 
NASA/Goddard Space Flight Center

Fermilab achieves world-record field strength for accelerator magnet

image: Fermilab recently achieved a magnetic field strength of 14.1 teslas at 4.5 kelvins on an accelerator steering magnet -- a world record.

Image: 
Thomas Strauss

To build the next generation of powerful proton accelerators, scientists need the strongest magnets possible to steer particles close to the speed of light around a ring. For a given ring size, the higher the beam's energy, the stronger the accelerator's magnets need to be to keep the beam on course.

Scientists at the Department of Energy's Fermilab have announced that they achieved the highest magnetic field strength ever recorded for an accelerator steering magnet, setting a world record of 14.1 teslas, with the magnet cooled to 4.5 kelvins or minus 450 degrees Fahrenheit. The previous record of 13.8 teslas, achieved at the same temperature, was held for 11 years by Lawrence Berkeley National Laboratory.

That's more than a thousand times stronger than the magnet holding your grocery list to your refrigerator door.

The achievement is a remarkable milestone for the particle physics community, which is studying designs for a future collider that could serve as a potential successor to the powerful 17-mile-circumference Large Hadron Collider operating at the CERN laboratory since 2009. Such a machine would need to accelerate protons to energies several times higher than those at the LHC.

And that calls for steering magnets that are stronger than the LHC's, about 15 teslas.

"We've been working on breaking the 14-tesla wall for several years, so getting to this point is an important step," said Fermilab scientist Alexander Zlobin, who leads the project at Fermilab. "We got to 14.1 teslas with our 15-tesla demonstrator magnet in its first test. Now we're working to draw one more tesla from it."

The success of a future high-energy hadron collider depends crucially on viable high-field magnets, and the international high-energy physics community is encouraging research toward the 15-tesla niobium-tin magnet.

At the heart of the magnet's design is an advanced superconducting material called niobium-tin.

Electrical current flowing through it generates a magnetic field. Because the current encounters no resistance when the material is cooled to very low temperature, it loses no energy and generates no heat. All of the current contributes to the creation of the magnetic field. In other words, you get lots of magnetic bang for the electrical buck.

The strength of the magnetic field depends on the strength of the current that the material can handle. Unlike the niobium-titanium used in the current LHC magnets, niobium-tin can support the amount of current needed to make 15-tesla magnetic fields. But niobium-tin is brittle and susceptible to break when subject to the enormous forces at work inside an accelerator magnet.
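For a rough sense of why the conductor's current-carrying capacity matters, the bore field of an idealized 'cos-theta' dipole coil scales with the coil's current density and radial thickness, B ≈ μ0·J0·w/2. The numbers below are illustrative assumptions, not Fermilab's design parameters:

```python
from math import pi

# Back-of-the-envelope bore field of an idealized cos-theta dipole coil:
#   B ~ mu0 * J0 * w / 2
# J0 is the peak current density in the winding, w its radial thickness.
# These values are illustrative assumptions, not Fermilab's design data.

mu0 = 4 * pi * 1e-7      # vacuum permeability, T*m/A
J0 = 750e6               # assumed current density, A/m^2 (i.e. 750 A/mm^2)
w = 0.030                # assumed coil thickness, m

B = mu0 * J0 * w / 2
print(f"estimated bore field: {B:.1f} T")   # ~14.1 T with these assumptions
```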

So the Fermilab team developed a magnet design that would shore up the coil against every stress and strain it could encounter during operation. Several dozen round wires were twisted into cables in a particular way, enabling them to meet the requisite electrical and mechanical specifications. These cables were wound into coils and heat-treated at high temperatures for approximately two weeks, with a peak temperature of about 1,200 degrees Fahrenheit, to convert the niobium-tin wires into a superconductor at operating temperatures. The team encased several coils in a strong, innovative structure composed of an iron yoke with aluminum clamps and a stainless-steel skin to stabilize the coils against the huge electromagnetic forces that can deform the brittle coils and thus degrade the niobium-tin wires.

The Fermilab group took every known design feature into consideration, and it paid off.

"This is a tremendous achievement in a key enabling technology for circular colliders beyond the LHC," said Soren Prestemon, a senior scientist at Berkeley Lab and director of the multilaboratory U.S. Magnet Development Program, which includes the Fermilab team. "This is an exceptional milestone for the international community that develops these magnets, and the result has been enthusiastically received by researchers who will use the beams from a future collider to push forward the frontiers of high-energy physics."

And the Fermilab team is geared up to make their mark in the 15-tesla territory.

"There are so many variables to consider in designing a magnet like this: the field parameters, superconducting wire and cable, mechanical structure and its performance during assembly and operation, magnet technology, and magnet protection during operation," Zlobin said. "All of these issues are even more important for magnets with record parameters."

Over the next few months, the group plans to reinforce the coil's mechanical support and then retest the magnet this fall. They expect to achieve the 15-tesla design goal.

And they're setting their sights even higher for the further future.

"Based on the success of this project and the lessons we learned, we're planning to advance the field in niobium-tin magnets for future colliders to 17 teslas," Zlobin said.

It doesn't stop there. Zlobin says they may be able to design steering magnets that reach a field of 20 teslas using special inserts made of new advanced superconducting materials.

Call it a field goal.

The project is supported by the Department of Energy Office of Science. It is a key part of the U.S. Magnet Development Program, which includes Fermilab, Brookhaven National Laboratory, Lawrence Berkeley National Laboratory and the National High Magnetic Field Laboratory.

Credit: 
DOE/Fermi National Accelerator Laboratory

NASA finds Tropical Storm Faxai's heavy rainmaking storms off-shore from Japan

image: On Sept. 8 at 11:59 p.m. EDT (Sept. 9 at 0359 UTC) the AIRS instrument aboard NASA's Aqua satellite analyzed cloud top temperatures of Tropical Storm Faxai in infrared light. AIRS found the coldest cloud top temperatures (purple) of the strongest thunderstorms were as cold as or colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius) around the center and in a large band east of center.

Image: 
NASA JPL/Heidar Thrastarson

Japan's main island received Tropical Storm Faxai, and NASA's Aqua satellite provided forecasters at the Joint Typhoon Warning Center with infrared data and cloud top temperature information that revealed the most powerful storms were just offshore when the satellite flew overhead.

NASA researches tropical cyclones, and one way it does so is with infrared data that provide temperature information. Cloud top temperatures tell forecasters where the strongest storms are located within a tropical cyclone. Tropical cyclones do not always have uniform strength; some sides are stronger than others. The stronger the storms, the higher they extend into the troposphere and the colder their cloud top temperatures.

NASA's Aqua satellite analyzed the storm on Sept. 8 at 11:59 p.m. EDT (Sept. 9 at 0359 UTC) using the Atmospheric Infrared Sounder or AIRS instrument. AIRS found coldest cloud top temperatures as cold as or colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius) around Faxai's center and in a large band east of center. NASA research has shown that cloud top temperatures that cold indicate strong storms that have the capability to create heavy rain.
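The underlying analysis step is essentially a threshold test on brightness temperature. A toy sketch, using the minus 63 degrees Fahrenheit (minus 53 degrees Celsius) threshold from the text and invented data rather than actual AIRS retrievals:

```python
import numpy as np

# Flag "strong storm" pixels in an infrared brightness-temperature field
# using the threshold quoted in the text (minus 53 degrees Celsius).
# The temperature field below is invented, not actual AIRS data.

def strong_storm_mask(cloud_top_temp_c, threshold_c=-53.0):
    """Boolean mask of pixels at or below the cold-cloud threshold."""
    return cloud_top_temp_c <= threshold_c

temps_c = np.random.uniform(-80.0, -20.0, size=(100, 100))  # toy scene
mask = strong_storm_mask(temps_c)
print(f"{100 * mask.mean():.0f}% of pixels flagged as strong convection")
```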

Satellite imagery has revealed that Faxai continues to decay as it moves east of Honshu, Japan over the cooler waters of the Pacific Ocean.

Typhoon Faxai made landfall just east of Tokyo on Sept. 8. Japan's Kyodo News Service reported that 3 people were killed and 700,000 people were left without power.

On Sept. 8 at 11:59 p.m. EDT (Sept. 9 at 0359 UTC) the center of Faxai was located near latitude 38.2 degrees north and longitude 144.5 degrees east, placing it about 289 nautical miles northeast of Yokosuka, Japan. Faxai was moving toward the east-northeast with maximum sustained winds near 55 knots.

Faxai is moving northeast, and forecasters at the Joint Typhoon Warning Center expect Faxai will become extra-tropical.

For updated forecasts from the Japan Meteorological Agency, visit: https://www.jma.go.jp/en/typh/

Credit: 
NASA/Goddard Space Flight Center

New drug may protect against memory loss in Alzheimer's disease

BUFFALO, N.Y. -- A new drug discovered through a research collaboration between the University at Buffalo and Tetra Therapeutics may protect against memory loss, nerve damage and other symptoms of Alzheimer's disease.

Preclinical research found that the drug -- called BPN14770 -- deters the effects of amyloid beta, a hallmark protein of Alzheimer's that is toxic to nerve cells.

Recent studies find that Alzheimer's pathology may develop without dementia in nearly 25% of cognitively healthy 80-year-olds, suggesting the body may turn to compensatory mechanisms to maintain the nervous system.

BPN14770, under development by Tetra Therapeutics, could help activate these mechanisms that support nerve health and prevent dementia, even with the progression of Alzheimer's.

Its benefits could also translate to Fragile X syndrome, developmental disabilities and schizophrenia, researchers say.

"Such observations imply that Alzheimer's pathology can be tolerated by the brain to some extent due to compensatory mechanisms operating at the cellular and synaptic levels," said Ying Xu, MD, PhD, co-lead investigator and research associate professor in the UB School of Pharmacy and Pharmaceutical Sciences.

"Our new research suggests that BPN14770 may be capable of activating multiple biological mechanisms that protect the brain from memory deficits, neuronal damage and biochemical impairments."

The study, published on Sept. 5 in the Journal of Pharmacology and Experimental Therapeutics, was also led by James M. O'Donnell, PhD, dean and professor of the UB School of Pharmacy and Pharmaceutical Sciences. Mark E. Gurney, PhD, chairman and chief executive officer of Tetra Therapeutics, based in Grand Rapids, Michigan, collaborated on the research.

Guarding memory against toxic proteins

The research, conducted in mice, discovered that BPN14770 inhibits the activity of phosphodiesterase-4D (PDE4D), an enzyme that plays a key role in memory formation, learning, neuroinflammation and traumatic brain injury.

PDE4D lowers cyclic adenosine monophosphate (cAMP) -- a messenger molecule that signals physiological changes such as cell division, change, migration and death -- in the body, leading to physical alterations in the brain.

cAMP has numerous beneficial functions, including improved memory. By inhibiting PDE4D, BPN14770 increases cAMP signaling in the brain, which ultimately protects against the toxic effects of amyloid beta.

"The role of PDE4D in modulating brain pathways involved in memory formation and cognition, and the ability of our PDE4D inhibitor to selectively enhance this process, has been well studied," said Gurney. "We are very excited by our colleagues' findings, which now suggest a second protective mechanism of action for BPN14770 against the progressive neurological damage associated with Alzheimer's disease."

"Developing effective drugs for memory deficits associated with Alzheimer's disease has been challenging," said O'Donnell. "BPN14770 works by a novel mechanism to increase cyclic AMP signaling in the brain, which has been shown to improve memory. The collaborative project has led to clinical trials that will begin to test its effectiveness."

Tetra Therapeutics is conducting Phase 2 clinical trials of BPN14770 in patients with early Alzheimer's and adults with Fragile X syndrome, a genetic disorder that causes intellectual and developmental disabilities.

Results of previous Phase 1 studies in healthy elderly volunteers suggest the drug benefits working, or immediate, memory. Animal studies found that BPN14770 has the potential to promote the maturation of connections between neurons, which are impaired in patients with Fragile X syndrome, as well as protect these connections, which are lost in patients with Alzheimer's.

"There has been enormous interest in our ongoing Phase 2 trial of BPN14770 in 255 patients with early Alzheimer's, and we are hopeful this study will show an impact of PDE4D modulation in this disease. Topline results are expected mid-2020," said Gurney.

Credit: 
University at Buffalo

Plastics, fuels and chemical feedstocks from CO2? They're working on it

image: Researchers at Stanford and SLAC are working on ways to convert waste carbon dioxide (CO2) into chemical feedstocks and fuels, turning a potent greenhouse gas into valuable products. The process is called electrochemical conversion. When powered by renewable energy sources, it could reduce levels of carbon dioxide in the air and store energy from these intermittent sources in a form that can be used any time.

Image: 
(Greg Stewart/SLAC National Accelerator Laboratory)

One way to reduce the level of carbon dioxide in the atmosphere, which is now at its highest point in 800,000 years, would be to capture the potent greenhouse gas from the smokestacks of factories and power plants and use renewable energy to turn it into things we need, says Thomas Jaramillo.

As director of SUNCAT Center for Interface Science and Catalysis, a joint institute of Stanford University and the Department of Energy's SLAC National Accelerator Laboratory, he's in a position to help make that happen.

A major focus of SUNCAT research is finding ways to transform CO2 into chemicals, fuels, and other products, from methanol to plastics, detergents and synthetic natural gas. The production of these chemicals and materials from fossil fuel ingredients now accounts for 10% of global carbon emissions; the production of gasoline, diesel, and jet fuel accounts for much, much more.

"We have already emitted too much CO2, and we're on track to continue emitting it for years, since 80% of the energy consumed worldwide today comes from fossil fuels," says Stephanie Nitopi, whose SUNCAT research is the basis of her newly acquired Stanford PhD.

"You could capture CO2 from smokestacks and store it underground," she says. "That's one technology currently in play. An alternative is to use it as a feedstock to make fuels, plastics, and specialty chemicals, which shifts the financial paradigm. Waste CO2 emissions now become something you can recycle into valuable products, providing a new incentive to reduce the amount of CO2 released into the atmosphere. That's a win-win."

We asked Nitopi, Jaramillo, SUNCAT staff scientist Christopher Hahn and postdoctoral researcher Lei Wang to tell us what they're working on and why it matters.

First the basics: How do you convert CO2 into these other products?

Tom: It's essentially a form of artificial photosynthesis, which is why DOE's Joint Center for Artificial Photosynthesis funds our work. Plants use solar energy to convert CO2 from the air into carbon in their tissues. Similarly, we want to develop technologies that use renewable energy, like solar or wind, to convert CO2 from industrial emissions into carbon-based products.

Chris: One way to do this is called electrochemical CO2 reduction, where you bubble CO2 gas up through water and it reacts with the water on the surface of a copper-based electrode. The copper acts as a catalyst, bringing the chemical ingredients together in a way that encourages them to react. Put very simply, the initial reaction strips an oxygen atom from CO2 to form carbon monoxide, or CO, which is an important industrial chemical in its own right. Then other electrochemical reactions turn CO into important molecules such as alcohols, fuels and other things.
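Written as textbook electrochemical half-reactions (consistent with, though not quoted from, Chris's description), the first step to carbon monoxide and one example of a deeper reduction to a multi-carbon product look like this:

```latex
% First reduction step: carbon dioxide to carbon monoxide
\[ \mathrm{CO_2 + 2\,H^+ + 2\,e^- \;\longrightarrow\; CO + H_2O} \]

% One example of a deeper reduction on copper, here to ethylene (12 electrons)
\[ \mathrm{2\,CO_2 + 12\,H^+ + 12\,e^- \;\longrightarrow\; C_2H_4 + 4\,H_2O} \]
```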

Today this process requires a copper-based catalyst. It's the only one known to do the job. But these reactions can produce numerous products, and separating out the one you want is costly, so we need to identify new catalysts that are able to guide the reaction toward making only the desired product.

How so?

Lei: When it comes to improving a catalyst's performance, one of the key things we look at is how to make them more selective, so they generate just one product and nothing else. About 90 percent of fuel and chemical manufacturing depends on catalysts, and getting rid of unwanted byproducts is a big part of the cost.

We also look at how to make catalysts more efficient by increasing their surface area, so there are a lot more places in a given volume of material where reactions can occur simultaneously. This increases the production rate.

Recently we discovered something surprising: When we increased the surface area of a copper-based catalyst by forming it into a flaky "nanoflower" shape, it made the reaction both more efficient and more selective. In fact, it produced virtually no byproduct hydrogen gas that we could measure. So this could offer a way to tune reactions to make them more selective and cost-competitive.

Stephanie: This was so surprising that we decided to revisit all the research we could find on catalyzing electrochemical CO2 conversion with copper, and the many ways people have tried to understand and fine-tune the process, using both theory and experiments, going back four decades. There's been an explosion of research on this - about 60 papers had been published as of 2006, versus more than 430 out there today - and analyzing all the studies with our collaborators at the Technical University of Denmark took two years.

We were trying to figure out what makes copper special, why it's the only catalyst that can make some of these interesting products, and how we can make it even more efficient and selective - what techniques have actually pushed the needle forward? We also offered our perspectives on promising research directions.

One of our conclusions confirms the results of the earlier study: The copper catalyst's surface area can be used to improve both the selectivity and overall efficiency of reactions. So this is well worth considering as a chemical production strategy.

Does this approach have other benefits?

Tom: Absolutely. If we use clean, renewable energy, like wind or solar, to power the controlled conversion of waste CO2 to a wide range of other products, this could actually draw down levels of CO2 in the atmosphere, which we will need to do to stave off the worst effects of global climate change.

Chris: And when we use renewable energy to convert CO2 to fuels, we're storing the variable energy from those renewables in a form that can be used any time. In addition, with the right catalyst, these reactions could take place at close to room temperature, instead of the high temperatures and pressures often needed today, making them much more energy efficient.

How close are we to making it happen?

Tom: Chris and I explored this question in a recent Perspective article in Science, written with researchers from the University of Toronto and TOTAL American Services, which is an oil and gas exploration and production services firm.

We concluded that renewable energy prices would have to fall below 4 cents per kilowatt hour, and systems would need to convert incoming electricity to chemical products with at least 60% efficiency, to make the approach economically competitive with today's methods.
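As a rough worked example of what those two thresholds imply for one product, using standard thermodynamic values for ethylene together with the 4 cents per kilowatt hour and 60% figures above (all other choices are assumptions for illustration):

```python
# Rough electricity-cost estimate for CO2-to-ethylene conversion, using
# standard thermodynamic values plus the price and efficiency thresholds
# quoted above. Everything here is a ballpark illustration, not a design.

dG_kJ_per_mol = 1331.0    # approx. minimum energy to make C2H4 from CO2 and water
molar_mass_kg = 0.02805   # kg per mole of ethylene
efficiency = 0.60         # electricity-to-chemical efficiency (from the text)
price_per_kWh = 0.04      # USD per kWh, renewable price target (from the text)

kWh_per_kg = dG_kJ_per_mol / molar_mass_kg / 3600.0 / efficiency
cost_per_kg = kWh_per_kg * price_per_kWh
print(f"~{kWh_per_kg:.0f} kWh and ~${cost_per_kg:.2f} of electricity per kg of ethylene")
```

Under these assumptions, the electricity alone comes to roughly 22 kWh, or about $0.88, per kilogram of ethylene, which is why both the electricity price and the conversion efficiency thresholds matter for economic competitiveness.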

Chris: This switch couldn't happen all at once; the chemical industry is too big and complex for that. So one approach would be to start with making high-value, high-volume products like ethylene, which is used to make alcohols, polyester, antifreeze, plastics and synthetic rubber. It's a $230 billion global market today. Switching from fossil fuels to CO2 as a starting ingredient for ethylene in a process powered by renewables could potentially save the equivalent of about 860 million metric tons of CO2 emissions per year.

The same step-by-step approach applies to sources of CO2. Industry could initially use relatively pure CO2 emissions from cement plants, breweries or distilleries, for instance, and this would have the side benefit of decentralizing manufacturing. Every country could provide for itself, develop the technology it needs, and give its people a better quality of life.

Tom: Once you enter certain markets and start scaling up the technology, you can attack other products that are tougher to make competitively today. What this paper concludes is that these new processes have a chance to change the world.

Credit: 
DOE/SLAC National Accelerator Laboratory

Stretchy plastic electrolytes could enable new lithium-ion battery design

image: A lithium-ion battery is shown using a promising new cathode and electrolyte system that replaces expensive metals and traditional liquid electrolyte with lower cost transition metal fluorides and a solid polymer electrolyte.

Image: 
Allison Carter

The growing popularity of lithium-ion batteries in recent years has put a strain on the world's supply of cobalt and nickel - two metals integral to current battery designs - and sent prices surging.

In a bid to develop alternative designs for lithium-based batteries with less reliance on those scarce metals, researchers at the Georgia Institute of Technology have developed a promising new cathode and electrolyte system that replaces expensive metals and traditional liquid electrolyte with lower cost transition metal fluorides and a solid polymer electrolyte.

"Electrodes made from transition metal fluorides have long shown stability problems and rapid failure, leading to significant skepticism about their ability to be used in next generation batteries," said Gleb Yushin, a professor in Georgia Tech's School of Materials Science and Engineering. "But we've shown that when used with a solid polymer electrolyte, the metal fluorides show remarkable stability - even at higher temperatures - which could eventually lead to safer, lighter and cheaper lithium-ion batteries."

In a typical lithium-ion battery, energy is released during the transfer of lithium ions between two electrodes - an anode and a cathode, with a cathode typically comprising lithium and transition metals such as cobalt, nickel and manganese. The ions flow between the electrodes through a liquid electrolyte.

For the study, which was published Sept. 9 in the journal Nature Materials and sponsored by the Army Research Office, the research team fabricated a new type of cathode from iron fluoride active material and a solid polymer electrolyte nanocomposite. Iron fluorides have more than double the lithium capacity of traditional cobalt- or nickel-based cathodes. In addition, iron is 300 times cheaper than cobalt and 150 times cheaper than nickel.

To produce such a cathode, the researchers developed a process to infiltrate a solid polymer electrolyte into the prefabricated iron fluoride electrode. They then hot pressed the entire structure to increase density and reduce any voids.

Two central features of the polymer-based electrolyte are its ability to flex and accommodate the swelling of the iron fluoride while cycling and its ability to form a very stable and flexible interphase with iron fluoride. Traditionally, that swelling and massive side reactions have been key problems with using iron fluoride in previous battery designs.

"Cathodes made from iron fluoride have enormous potential because of their high capacity, low material costs and very broad availability of iron," Yushin said. "But the volume changes during cycling as well as parasitic side reactions with liquid electrolytes and other degradation issues have limited their use previously. Using a solid electrolyte with elastic properties solves many of these problems."

The researchers then tested several variations of the new solid-state batteries to analyze their performance over more than 300 cycles of charging and discharging at an elevated temperature of 122 degrees Fahrenheit, noting that they outperformed previous designs using metal fluorides even when those were kept at room temperature.

The researchers found that the key to the enhanced battery performance was the solid polymer electrolyte. In previous attempts to use metal fluorides, it was believed that metallic ions migrated to the surface of the cathode and eventually dissolved into the liquid electrolyte, causing a capacity loss, particularly at elevated temperatures. In addition, metal fluorides catalyzed massive decomposition of liquid electrolytes when cells were operating above 100 degrees Fahrenheit. However, at the connection between the solid electrolyte and the cathode, such dissolving doesn't take place and the solid electrolyte remains remarkably stable, preventing such degradations, the researchers wrote.

"The polymer electrolyte we used was very common, but many other solid electrolytes and other battery or electrode architectures - such as core-shell particle morphologies - should be able to similarly dramatically mitigate or even fully prevent parasitic side reactions and attain stable performance characteristics," said Kostiantyn Turcheniuk, research scientist in Yushin's lab and a co-author of the manuscript.

In the future, the researchers aim to develop new and improved solid electrolytes to enable fast charging and also to combine solid and liquid electrolytes in new designs that are fully compatible with conventional cell manufacturing technologies employed in large battery factories.

Credit: 
Georgia Institute of Technology

Watching music move through the brain

image: Musical information flowed from sensory to frontal regions during listening and from frontal to sensory regions during recall.

Image: 
Ding et al., JNeurosci (2019)

Scientists have observed how the human brain represents a familiar piece of music, according to research published in JNeurosci. Their results suggest that listening to and remembering music involve different cognitive processes.

Previous research has pinpointed areas of the brain -- primarily on the right side -- that are activated by music. However, less is known about how activity in these regions unfolds over time.

In a new study of male and female epilepsy patients, Ding et al. recorded electrical activity directly from the surface of the brain as participants listened to well-known pieces of music, including Beethoven's "Für Elise" and Richard Wagner's "Wedding March." A network of overlapping brain regions was associated with the act of listening to the music and continuing the melody in one's head. The researchers found that musical information traveled in opposite directions during these processes, flowing from sensory to frontal regions during listening and from frontal to sensory regions during recall.

Credit: 
Society for Neuroscience

Tiny capsules offer alternative to viral delivery of gene therapy

image: This is a graphic description of the nanocapsule delivery system.

Image: 
UW-Madison

MADISON -- New tools for editing genetic code offer hope for new treatments for inherited diseases, some cancers, and even stubborn viral infections. But the typical method for delivering gene therapies to specific tissues in the body can be complicated and may cause troubling side effects.

Researchers at the University of Wisconsin-Madison have addressed many of those problems by packing a gene-editing payload into a tiny customizable, synthetic nanocapsule. They described the delivery system and its cargo today (Sept. 9, 2019) in the journal Nature Nanotechnology.

"In order to edit a gene in a cell, the editing tool needs to be delivered inside the cell safely and efficiently," says Shaoqin "Sarah" Gong, a professor of biomedical engineering and investigator at the Wisconsin Institute for Discovery at UW-Madison. Her lab specializes in designing and building nanoscale delivery systems for targeted therapy.

"Editing the wrong tissue in the body after injecting gene therapies is of grave concern," says Krishanu Saha, also a UW-Madison biomedical engineering professor and steering committee co-chair for a nationwide consortium on genome editing with $190 million in support from the National Institutes of Health. "If reproductive organs are inadvertently edited, then the patient would pass on the gene edits to their children and every subsequent generation."

Most genome editing is done with viral vectors, according to Gong. Viruses have billions of years of experience invading cells and co-opting the cell's own machinery to make new copies of the virus. In gene therapy, viruses can be altered to carry genome-editing machinery rather than their own viral genes into cells. The editing machinery can then alter the cell's DNA to, say, correct a problem in the genetic code that causes or contributes to disease.

"Viral vectors are attractive because they can be very efficient, but they are also associated with a number of safety concerns including undesirable immune responses," says Gong.

New cell targets can also require laborious alterations of viral vectors, and manufacturing tailored viral vectors can be complicated.

"It is very difficult -- if not impossible -- to customize many viral vectors for delivery to a specific cell or tissue in the body," Saha says.

Gong's lab coated a gene therapy payload -- namely, a version of the gene-editing tool CRISPR-Cas9 with guide RNA designed in Saha's lab -- with a thin polymer shell, resulting in a capsule about 25 nanometers in diameter. The surface of the nanocapsule can be decorated with functional groups such as peptides, which give the nanoparticles the ability to target certain cell types.

The nanocapsule stays intact outside cells -- in the bloodstream, for example -- only to fall apart inside the target cell when triggered by a molecule called glutathione. The freed payload then moves to the nucleus to edit the cell's DNA. The nanocapsules are expected to reduce unplanned genetic edits due to their short lifespan inside a cell's cytoplasm.

This project is a collaboration combining UW-Madison expertise in chemistry, engineering, biology and medicine. Pediatrics and ophthalmology professor Bikash R. Pattnaik and comparative biosciences professor Masatoshi Suzuki and their teams worked to demonstrate gene editing in mouse eyes and skeletal muscles, respectively, using the nanocapsules.

Because the nanocapsules can be freeze-dried, they can be conveniently purified, stored, and transported as a powder, while providing flexibility for dosage control. The researchers, with the Wisconsin Alumni Research Foundation, have a patent pending on the nanoparticles.

"The small size, superior stability, versatility in surface modification, and high editing efficiency of the nanocapsules make them a promising platform for many types of gene therapies," says Gong.

The team aims to further optimize the nanocapsules in ongoing research for efficient editing in the brain and the eye.

Credit: 
University of Wisconsin-Madison

Online crowdfunding to pay for cancer care

What The Study Did: This research letter examined crowdfunding efforts to defray expenses associated with cancer care.

Authors: Benjamin N. Breyer, M.D., M.A.S., F.A.C.S., of the University of California, San Francisco, is the corresponding author.

(doi:10.1001/jamainternmed.2019.3330)

Editor's Note: The article includes funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Feeling legs again improves amputees' health

video: Two volunteers are the first above-knee amputees in the world to feel their prosthetic foot and knee in real time. Their bionic prosthesis, which was developed by an international team of researchers, features sensors that connect to residual nerves in the thigh. The resulting neurofeedback greatly reduces physical and mental strain for users of the prosthesis, as well as their phantom limb pain, while increasing their confidence and speed when walking.

Image: 
ETH Zurich / Stanisa Raspopovic

While walking, people with intact legs feel when they move their knee or when their feet touch the ground. The nervous system constantly draws on sensory feedback of this sort to precisely control muscles. People using a leg prosthesis, however, do not know precisely where the prosthesis is located, how it is moving, or what type of terrain it is standing on. They often cannot trust their prosthesis completely when walking, leading them to rely too often on their intact leg, which in turn reduces their mobility and causes them to tire quickly. A simple walk on pebbles or sand, for example, can prove very exhausting for people using a prosthesis. Furthermore, people with amputations can experience phantom limb pain, a condition that existing medications often cannot treat. Savo Panic, who experiences this phenomenon, says he wakes up at night due to the phantom pain: "The toe that I don't have hurts. My big toe, foot, heel, ankle, calf - they all hurt, and I don't even have them."

An international team of researchers led by ETH Zurich and Lausanne-based start-up company SensArs has now developed an interface to connect a leg prosthesis with the residual nerves present in the user's thigh, thus providing sensory feedback. In a study conducted in collaboration with the University of Belgrade, the scientists tested this neurofeedback system with two volunteers who have an above-knee leg amputation and use a leg prosthesis (one of whom is Panic).

The solution benefited the amputees in a variety of ways, as the researchers reported in the latest issue of the journal Nature Medicine. "This proof-of-concept study shows how beneficial it is to the health of leg amputees to have a prosthesis that works with neural implants to restore sensory feedback," says Stanisa Raspopovic, a Professor at the Institute of Robotics and Intelligent Systems at ETH Zurich.

Transforming artificial signals into natural ones

To provide the nervous system with sensory information, the scientists began with a commercially available high-tech prosthesis: they attached tactile sensors to the sole of the prosthetic foot, and collected the data on knee movement provided by the prosthesis's electronic knee joint.

For the three months that the experiment lasted, surgeons placed tiny electrodes in each volunteer's thigh and connected them to the residual leg nerves. "The goal of the surgery was to introduce electrodes in the right places inside the nerve to allow the restoration of lifelike sensory feedback, and to allow the stability of the electrodes," said Marko Bumbasirevic, Professor and orthopaedic microsurgeon at the Clinical Centre of Serbia in Belgrade, who was the clinician responsible for the electrode implant. The electrodes were developed by scientists from the University of Freiburg, and the prosthesis came from the prosthetic company Össur; both were actively involved in the project.

The research team developed algorithms to translate the information from the tactile and motion sensors into impulses of current - the language of the nervous system - which were delivered to the residual nerve. Then nature does the rest: the signals from the residual nerves are conveyed to the person's brain, which is thus able to sense the prosthesis and helps the user to adjust their gait accordingly. The machine and the body are finally connected.
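The translation algorithms are not spelled out here; purely as a hypothetical sketch of the general idea (a sensor reading mapped to stimulation-pulse parameters), one might write something like the following, with all names and values invented:

```python
# Hypothetical sketch only: map a normalized foot-sole pressure reading
# (0 = no contact, 1 = full load) to parameters of a stimulation pulse.
# The linear mapping, ranges and names are invented for illustration and
# are not the encoding used in the study.

def pressure_to_pulse(pressure, min_amp_uA=100.0, max_amp_uA=600.0, freq_hz=50):
    """Return (pulse amplitude in microamps, pulse frequency in Hz)."""
    pressure = min(max(pressure, 0.0), 1.0)            # clamp to [0, 1]
    amplitude = min_amp_uA + pressure * (max_amp_uA - min_amp_uA)
    return amplitude, freq_hz

print(pressure_to_pulse(0.3))   # light heel contact -> (250.0, 50)
```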

Less effort during walking

As part of the study, the volunteers underwent a series of tests - alternating trials with and without neurofeedback. The results made it very clear just how advantageous the feedback was: walking with neurofeedback was physically much less demanding, as shown by the significant reduction in the volunteers' oxygen consumption while walking.

Walking with neurofeedback was also mentally less strenuous, as the researchers showed with brain activity measurements during the trials. The volunteers didn't have to concentrate as hard on their gait, which meant that they were able to devote more of their attention to other tasks.

In one difficult test, the volunteers had to walk over sand, and again the feedback enabled them to walk considerably faster. In surveys, the volunteers stated that the neurofeedback greatly increased their confidence in the prosthesis.

Reduced phantom limb pain

The interface with the nervous system can also be used to stimulate the nerves independently of the prosthesis. Before they started the trial, both volunteers complained of phantom limb pain. Over the course of a one-month therapy programme with neurostimulation, the scientists managed to considerably reduce this pain in one of the volunteers; in the other, Panic, the pain disappeared completely. "Since I have started this treatment programme, after having received electrical stimulations, I don't feel any phantom pain," he says.

The scientists view these outcomes optimistically. However, they point out the need for a longer investigation with in-home assessments and a greater number of volunteers, in order to provide more robust data that they can use to draw more significant conclusions. For the time-limited clinical study, the signals from the prosthesis were sent along cables through the skin to the electrodes in the thigh. This meant that the volunteers had to undergo regular medical examinations. To eliminate this need, the scientists intend to develop a fully implantable system. "At SensArs, we're planning to develop a wireless neurostimulation device that can be fully implanted into the patient like a pacemaker, and that can be brought to the market," says Francesco Petrini, CEO of SensArs.

Credit: 
ETH Zurich