Tech

New cell models for ocular drug discovery

image: Re-pigmentation of retinal pigment epithelial (RPE) cells. Typically, continuously growing RPE cell lines lack melanin pigment (left). Non-pigmented RPE cells can be re-pigmented with controlled quantities of purified melanosomes, yielding RPE cells that range from light pigmentation (center) to cell cultures displaying the very strong pigmentation (right) normally seen in the eye.

Image: 
Mika Reinisalo

Researchers at the University of Eastern Finland have developed two new cell models that can open up new avenues for ocular drug discovery. The new cell models are continuously growing retinal pigment epithelial cells, which have many benefits over the models currently used by researchers and pharmaceutical companies. The models were developed by Professor Arto Urtti's Ocular Drug Delivery group at the University of Eastern Finland.

The retinal pigment epithelium is located in the back of the eye, forming the outer blood-retinal barrier. Pigment epithelial cells play a key role in, e.g., age-related macular degeneration, which makes them an interesting target for drug therapy. The retinal pigment epithelium also regulates the access of drugs from the blood stream into the eye and vice versa, further highlighting the importance of this cell type for drug discovery.

Up until now, cultivated retinal pigment epithelial cell lines have lacked both pigmentation and the blood-retinal barrier, which has complicated their use.

Cells can be fed with pigment that binds drugs

The retinal pigment epithelium (RPE) is characterised by strong pigmentation. Many drugs bind to cellular pigment and, consequently, can accumulate in RPE cells. The new cell model makes it possible to study this accumulation in greater detail than before.

In the new model, non-pigmented RPE cells can be re-pigmented by feeding them with melanin pigment.

"Re-pigmentation is possible, thanks to the naturally active phagocytosis, or cellular eating, of RPE cells. Instead of just using pigment, we used melanosomes, i.e. live pigmented cell organelles, to re-pigment the cells. Our research group has already previously developed a method to isolate live melanosomes. In this study, we showed that the host cell accepts the transferred melanosomes and doesn't destroy them," Postdoctoral Researcher Mika Reinisalo from the University of Eastern Finland says.

Published in Scientific Reports, the study also described the uptake of different drugs in melanosomes. According to the researchers, most drugs that bind to melanin at high levels were taken up by the melanosomes, whereas the same was true only for a small number of drugs that bind to melanin at low levels.

"By using non-pigmented cells as controls, we were able to study how much of a drug given to the cells eventually ends up in the melanosomes. Next, we need to find out for how long the melanosomes can hold on to a drug, and whether this causes any harm. The accumulation of drugs in the melanosomes can, on the other hand, make it possible to target drugs at specific tissues," Reinisalo notes.

New cell model forms a more realistic blood-retinal barrier

Retinal pigment epithelial cells also form the blood-retinal barrier that protects the eye from circulating xenobiotics. For drug therapy, the blood-retinal barrier is crucial, as it regulates both the entry and exit of drugs to and from the eye. In earlier cell models, the epithelial layer isn't tight enough, allowing drugs to pass through it too quickly. This paints an erroneous picture of how drugs get transported into the patient's eye.

Earlier this year, the same research group published a study in Pharmaceuticals, presenting a new RPE cell population that had spontaneously arisen from the RPE cell line. This population forms a tight epithelial layer, resembling the real-life epithelial layer of the eye.

"We named this cell population after its discoverer, Senior Laboratory Technician Lea Pirskanen, who is also one of the authors of the study. These cells are now known as LEPI cells. We decided to study them in more detail and discovered that in comparison to earlier cell models, LEPI cells are better differentiated and they form a tighter and more realistic barrier through which drugs have to pass," Postdoctoral Researcher Laura Hellinen from the University of Eastern Finland says.

Both cell models are useful for ocular drug discovery and they also reduce the need for animal testing.

Credit: 
University of Eastern Finland

Researchers discover a new, young volcano in the Pacific

image: A new petit-spot volcano at the oldest section of the Pacific Plate.

Image: 
Tohoku University

Researchers from Tohoku University have discovered a new petit-spot volcano at the oldest section of the Pacific Plate. The research team, led by Associate Professor Naoto Hirano of the Center for Northeast Asian Studies, published their discovery in the journal Deep-Sea Research Part I.

Petit-spot volcanoes are a relatively new phenomenon on Earth. They are young, small volcanoes that form along fissures at the base of tectonic plates. As a tectonic plate sinks deeper into the Earth's upper mantle, fissures occur where the plate begins to bend, causing small volcanoes to erupt. The first discovery of petit-spot volcanoes was made in 2006 near the Japan Trench, located to the northeast of Japan.

Rock samples collected from previous studies of petit-spot volcanoes signify that the magma emitted stems directly from the asthenosphere - the uppermost part of Earth's mantle which drives the movement of tectonic plates. Studying petit-spot volcanoes provides a window into the largely unknown asthenosphere giving scientists a greater understanding of plate tectonics, the kind of rocks existing there, and the melting process undergone below the tectonic plates.

The volcano was discovered in the western part of the Pacific Ocean, near Minamitorishima Island, Japan's easternmost point, also known as Marcus Island. The volcano is thought to have erupted less than 3 million years ago due to the subduction of the Pacific Plate deeper into the mantle at the Mariana Trench. Previously, this area was thought to have contained only seamounts and islands formed 70-140 million years ago.

The research team initially suspected the presence of a small volcano after observing bathymetric data collected by the Japan Coast Guard. They then analyzed rock samples collected by the Shinkai 6500, a manned submersible that can dive to depths of 6,500 meters, which confirmed the presence of the volcano.

"The discovery of this new Volcano provides and exciting opportunity for us to explore this area further, and hopefully reveal further petit-spot volcano," says Professor Hirano. He adds, "This will tell us more about the true nature of the asthenosphere." Professor Hirano and his team will continue to explore the site for similar volcanoes since mapping data demonstrates that the discovered volcano is part of a cluster.

Credit: 
Tohoku University

Study shows lake methane emissions should prompt rethink on climate change

image: The research team collecting water samples at Lake Stechlin, Germany, to study oxic methane production in lake water.

Image: 
Marco Günthel

A new study from Swansea University has given new insights into how the greenhouse gas methane is being produced in the surface waters of lakes, which should signal a rethink on the global methane cycle.

After carbon dioxide, methane is the second most important carbon-based greenhouse gas and its continuous increase in the atmosphere is a global climate threat.

Conventional research, including the assessments by the Intergovernmental Panel on Climate Change (IPCC), has suggested that methane is produced naturally in oxygen-depleted environments such as swamps and wetlands. However, the results of this new study, which is published in Nature Communications, challenge these previous assessments.

The research team from the University's College of Science analysed Lake Stechlin in north-eastern Germany and found that a significant amount of methane was being produced there in the well-oxygenated surface layer.

It was also discovered that as the methane gas is produced at the surface in direct contact with air, the levels of emissions that travel directly into the atmosphere are also significant.

The researchers also predicted that emissions from these surface waters are likely to increase with lake size, and could account for over half of surface methane emissions for lakes larger than one square kilometer.

Professor Kam Tang, of Swansea University's Department of Biosciences said: "Our research shows that well oxygenated lake waters are an important, but long overlooked, source of methane emissions to the atmosphere. These novel findings open new avenues for methane research and support a more accurate global assessment of this powerful greenhouse gas."

Lead author of the study, Marco Günthel said: "Methane emission in lakes is based on a complex network of biochemical and physical processes, some of which are still poorly understood. I hope our study will stimulate more research on this topic as it is needed to fully understand the global methane cycle and to improve climate change predictions."

Credit: 
Swansea University

A new way to measure long-term firm performance and shareholder value

image: INSEAD and Wharton professors introduce LIVA: a metric that gauges the true impact of investment or strategic action on shareholder value.

Image: 
INSEAD

Business leaders and investors are increasingly rating companies not just on short-term financial performance but on their ability to create long-term value for shareholders.

Today the most commonly used metrics to gauge a firm's success are profitability ratios such as return on capital (ROC), total shareholder return (TSR) and earnings per share (EPS). While these provide useful information, they are focused on the short-term and do little to reflect underlying economic performance or to precisely capture the creation (or destruction) of long-term shareholder value.

To address this gap, Phebo Wibbens, INSEAD Assistant Professor of Strategy, and Nicolaj Siggelkow, the David M. Knott Professor at The Wharton School of the University of Pennsylvania, have developed a new tool they call Long-term Investor Value Appropriation (LIVA).

LIVA, as introduced in a recent paper, "Introducing LIVA to measure long-term firm performance", published in the Strategic Management Journal, uses stock price data to measure the degree to which a company has created and destroyed shareholder value by calculating a backward-looking net present value (NPV) over a given period.

This is best explained using an example such as Apple which between 1999-2018 had a LIVA of 1.002 trillion dollars. This number indicates that if an investor bought all outstanding shares of the company in 1999 at the market price, borrowing the money for the purchase at a cost equal to the average market return, and then sold the shares at the end of 2018 at the market price, using the money received as well as dividends and share buy backs to repay the 'debt', they would have $1.002T left in the bank.
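As a rough illustration of the mechanics (with invented numbers, not Apple's actual cash flows), a backward-looking NPV of this kind can be sketched in a few lines of code: each cash flow is compounded forward to the end of the window at an assumed average market return, and the results are summed.

```python
# Minimal sketch in the spirit of LIVA, not the authors' exact formula.
# All figures below are made up purely for illustration.

def backward_npv(cash_flows, market_rate):
    """Compound each (year, amount) cash flow forward to the final year
    at the assumed average market return and sum them.

    Negative amounts are money paid out (e.g. buying all shares);
    positive amounts are money received (dividends, buybacks, final sale).
    """
    end_year = max(year for year, _ in cash_flows)
    return sum(amount * (1 + market_rate) ** (end_year - year)
               for year, amount in cash_flows)

# Hypothetical firm: buy for $100B in 1999, receive $5B per year in
# dividends/buybacks from 2004 onward, sell for $900B at the end of 2018.
flows = [(1999, -100e9)] + [(y, 5e9) for y in range(2004, 2019)] + [(2018, 900e9)]
value_left = backward_npv(flows, market_rate=0.08)  # 8% assumed market return
print(f"Value left after repaying the 'debt': ${value_left / 1e9:.0f}B")
```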

Because LIVA is based on share price, expectations as well as earnings are factored into the result.

Unlike many other performance measures, the LIVA metric:

Is not weighted in favour of firms that outperform relative to their size, ensuring it is driven by companies with significant economic impact.

More accurately captures the actual economic effect of major corporate events such as M&As, spin-offs and bankruptcy

Values both profits and growth

Reflects the long-term value creation of a firm's entire shareholder base.

Is particularly suited for settings in which there is a long period before strategic actions are reflected in performance. In these cases, it is difficult for measures such as ROA and CAR (cumulative abnormal returns) to pick up long-term performance consequences.

Because it is an additive measure, LIVA is simple to aggregate over multiple companies or time periods. To assist with this, the authors have created a database including more than 500,000 observations of 45,000 firms, using 20 years of statistics from the Compustat North American and Global Security databases. Managers and researchers can access the LIVA database and web app by going to https://www.liva-measure.com/. They can use these tools to compare a firm's long-term performance against its peers, or against the best and worst performing companies globally or across a specific region or industry.

Users are also able to break down the LIVA to identify which parts of the firm have performed well at specific moments and to identify which strategic decisions have created or destroyed value - assisting executives and investors to make better-informed strategic decisions in the future. Media and public policy makers can use LIVA to assess the performance of listed companies, groups of companies, sectors or geographies.

In their paper, Wibbens and Siggelkow use case studies of companies such as Circuit City and Best Buy to show how the LIVA approach provides more direct links between strategic actions and organisational performance.

Their study reveals the markedly different strategic insights that can come from using LIVA compared to commonly used short-term ratio measures such as ROA and CAR (cumulative abnormal returns). For instance, they show how the much-recognised U-shaped relationship between a firm's acquisition experience and its performance is largely driven by short-term investor expectations and vanishes when using 10-year LIVA.

By helping decision-makers to measure and focus on long-term value creation, metrics like LIVA are also likely to have a broader societal impact.

"Managers concerned mainly with short-term metrics such as quarterly earnings growth will be tempted to make quick cash at the expense of suppliers, customers and broader society," said Wibbens, noting that, "In the long run these actions will likely backfire due to customer protests or stricter regulations, ultimately destroying value. LIVA provides managers with a metric that can help them gauge long-term value creation and consider which strategic decisions can allow their firms to flourish."

Credit: 
INSEAD

Seismologists see future in fiber optic cables as earthquake sensors

Each hair-thin glass fiber in a buried fiber optic cable contains tiny internal flaws--and that's a good thing for scientists looking for new ways to collect seismic data in places from a busy urban downtown to a remote glacier.

In Seismological Research Letters, California Institute of Technology seismologist Zhongwen Zhan describes growing interest in this method--called Distributed Acoustic Sensing--and its potential applications. His paper is part of the journal's Emerging Topics series, in which authors are invited by SRL editors to explore developments that are shaping various areas of seismology and earthquake science.

DAS works by using the tiny internal flaws of a long optical fiber as thousands of seismic sensors along tens of kilometers of fiber optic cable. An instrument at one end sends laser pulses down a cable and collects and measures the "echo" of each pulse as it is reflected off the internal fiber flaws.

When the fiber is disturbed by changes in temperature, strain or vibrations--caused by seismic waves, for instance--there are changes in the size, frequency and phase of laser light scattered back to the DAS instrument. Seismologists can use these changes to determine the kinds of seismic waves that might have nudged the fiber, even if just by a few tens of nanometers.
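As a rough illustration (not drawn from Zhan's paper), the basic timing arithmetic that turns one cable into thousands of virtual sensors can be sketched as follows; the fiber index and digitizer rate below are typical assumed values rather than parameters from any specific DAS system.

```python
# Illustrative sketch: how a DAS interrogator maps the arrival time of
# backscattered light to a position along the fiber.

C = 299_792_458.0    # speed of light in vacuum, m/s
GROUP_INDEX = 1.468  # typical refractive (group) index of silica fiber (assumed)

def channel_position_m(round_trip_time_s):
    """Distance along the fiber for an echo received round_trip_time_s after
    the laser pulse was launched (light travels out and back, hence the /2)."""
    return (C / GROUP_INDEX) * round_trip_time_s / 2

def channel_spacing_m(sample_rate_hz):
    """Spacing between virtual sensors for a given digitizer sample rate."""
    return (C / GROUP_INDEX) / (2 * sample_rate_hz)

# Example: a 100 MHz digitizer gives a virtual sensor roughly every metre,
# so 20 km of fiber yields on the order of 20,000 channels.
print(channel_spacing_m(100e6))           # ~1.0 m
print(channel_position_m(200e-6) / 1e3)   # echo after 200 microseconds -> ~20.4 km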

The sensitivity of DAS instruments has improved markedly over the past five years, opening new possibilities for their deployment, Zhan said. "The sensitivity is getting better and better, to the point that, as of a few years ago, if you compare the waveforms from a fiber section with a geophone, they look very similar to each other."

Their performance makes them suitable for use across a variety of environments, especially in places where it would be too expensive to set up a more sensitive or dense seismic network. Researchers can also tap into the large amounts of unused or "dark" fiber that has been laid down previously by telecommunication companies and others. A few strands off a larger cable, said Zhan, would serve a seismologist's purposes.

Zhan said the oil and gas industry has been one of the biggest drivers of the new method, as companies have used cables installed down boreholes to monitor fluid changes in deep-water oil fields and during hydraulic fracturing and wastewater injection.

DAS researchers think the method is especially promising for seismic monitoring in harsh environments, like Antarctica--or the moon. With a regular network of seismometers, scientists "need to protect and power each node" of instruments in the network, Zhan explained. "Where for DAS, you lay down one long strand of fiber, which is fairly sturdy, and all your sensitive instruments are only at one end of the fiber."

"You can imagine that on the moon or some other planet, with a high radiation or high temperature scenario, the electronics might not survive that long in that environment," he added. "But fiber can."

Scientists are already using DAS to probe thawing and freezing cycles in permafrost and on glaciers, to better characterize the dynamics of ice flow and sliding on bedrock, which could help researchers learn more about how glacial melt driven by climate change contributes to sea level rise.

At the moment, the range of most DAS systems is 10 to 20 kilometers. Researchers hope to extend this in the near future to 100 kilometers, Zhan said, which could be useful for seismic coverage in ocean bottom environments, including offshore subduction zones.

DAS is also well-suited for rapid response after earthquakes, especially in areas where dark fiber is plentiful and seismologists have made arrangements to use the fiber beforehand. After the 2019 Ridgecrest earthquakes in southern California, for instance, Zhan and his colleagues moved quickly to monitor the aftershock sequence in the area using DAS. "We turned about 50 kilometers of cable into more than 6,000 sensors in three days," he said.

If seismologists have done their legwork in identifying and requesting access to fibers ahead of time, Zhan said, a DAS system can be deployed within a few hours after an earthquake.

One challenge in using fiber is knowing exactly how it lies in the ground. With the DAS method, researchers know how far along a fiber a particular sensor lies, but if the fiber optic cable is coiled or bent or sagging, the calculations could be off. To remedy this, seismologists sometimes do a "tap test," which maps GPS-located sledgehammer blows along the ground above the cable; as the blows reverberate off the fiber, they create a sort of sonar image of its twists and turns.

DAS sensors also contain more "self-noise"--background seismic signals that could interfere with earthquake identification--than traditional seismic sensors, "but frankly we don't exactly know why," said Zhan. Some of the noise might come from the interrogating laser pulses, which might not be stable, or from the cable itself. Some cables lie loose in their tunnels, and others have multiple fiber connectors, which might produce reflection and loss of the light signal.

"While still in its infancy, DAS already has shown itself as the working heart - or perhaps ear drums - of a valuable new seismic listening tool," Zhan concluded.

Credit: 
Seismological Society of America

Brewing beer that tastes fresh longer

Unlike wine, which generally improves with time, beer does not age well. Usually within a year of bottling, the beverage starts to develop an unpleasant papery or cardboard-like flavor that drinkers describe as "stale." Now, researchers reporting in ACS' Journal of Agricultural and Food Chemistry have engineered lager yeast to make more molecules that protect beer against staling, resulting in improved flavor stability.

Scientists have linked stale beer flavors to aldehyde compounds, such as (E)-2-nonenal and acetaldehyde. Many of these compounds are produced by yeast during fermentation, and chemical reactions during beer storage can increase their levels. Brewers have tried different approaches to reduce levels of these compounds, such as controlling the fermentation conditions or adding antioxidants, but staling remains a problem for the beer industry. That's why Qi Li and colleagues wanted to genetically modify lager yeast to produce more of a molecule called NADH. Extra NADH could boost the activities of natural yeast enzymes that change aldehydes into other types of compounds that don't contribute to a stale flavor, the researchers reasoned.

The researchers used a genetic technique called "overexpression," in which they artificially increased the levels of various genes related to NADH production. With this method, they identified four genes that, when overexpressed, increased NADH levels. The team found that beer from the overexpressing yeast contained 26.3-47.3% less acetaldehyde than control beer, as well as decreased levels of other aldehydes. In addition, the modified strains produced more sulfur dioxide, a natural antioxidant that also helps reduce staling. Other flavor components were marginally changed. This approach could be useful for improving the flavor stability and prolonging the shelf life of beer, the researchers say.

Credit: 
American Chemical Society

Atmospheric chemists move indoors

Most people spend the majority of their time at home, yet little is known about the air they breathe inside their houses. That's why some atmospheric chemists are turning their attention toward indoor air, using tools developed for monitoring pollutants outside. By cataloguing compounds in indoor air, scientists could someday link them with health effects, according to an article in Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society.

Scientists have shown that the levels of many substances, such as volatile and semivolatile organic compounds, are substantially higher indoors than outdoors. Indoor emissions come from many sources, such as stoves, smoking, cleaning products, home furnishings and even people, Senior Correspondent Celia Henry Arnaud writes. These chemicals can react with others to form new molecules. Sunlight from windows and emissions from gas stoves often accelerate these reactions. Houses also contain many surfaces including windows, carpets, walls and furniture that can adsorb, react and re-emit compounds. And people themselves emit volatile organic compounds from their breath, skin and personal care products.

Considering the plethora of compounds in indoor air, it's not surprising that chemists have had a hard time figuring out when, where and how the substances arise. In 2018, a 4-week field study called HOMEChem -- House Observations of Microbial and Environmental Chemistry -- sought to answer some of these questions. In a manufactured house at the University of Texas at Austin, dozens of researchers from across the country systematically studied how different human activities influenced indoor air composition. Among many interesting findings, the researchers observed that the order of activities -- such as mopping and then cooking, or vice versa -- influenced indoor air chemistry. These and other studies aim to establish a baseline of what's in indoor air, so that eventually researchers can assess the potential health effects of specific compounds or groups of compounds.

Credit: 
American Chemical Society

Patient diaries reveal propensity for epileptic seizures

image: A study to validate an epilepsy seizure assessment tool relied in part on patient data from a web-based service patients may use to track seizures.

Image: 
Courtesy of SeizureTracker.com

HOUSTON - (Dec. 4, 2019) - A researcher at Rice University's Brown School of Engineering and an alumna of her lab have the first validation of their program to assess the risk of seizures in patients with epilepsy.

In a preliminary study, their Epilepsy Seizure Assessment Tool (EpiSAT) proved as good as or better than 24 specialized epilepsy clinicians at using patients' histories to identify periods of heightened propensity for seizures.

"Epilepsy affects more than 3.4 million people nationwide," said Rice statistician Marina Vannucci, co-author of the study led by her former student, Sharon Chiang, now a resident physician at the University of California, San Francisco (UCSF) Department of Neurology. "This study could serve as a benchmark case for other diseases or situations that could employ a statistical approach."

The researchers' automated machine-learning algorithm correctly identified changes in seizure risk -- improvement, worsening or no change -- in more than 87% of cases. They achieved those results by analyzing 120 seizures from four "synthetic" diaries and 120 seizures from real seizure diaries gathered by SeizureTracker.com, one of the largest electronic seizure diaries in the world. EpiSAT showed "substantial observed agreement" with clinicians more than 75% of the time, they reported.

The results appear in the journal Epilepsia.

"One challenge in treating people with epilepsy is that, like the chance of rain, there has never been a good way to quantify seizure risk and to determine whether apparent changes in seizure frequency reflect chance or actual improvement or worsening in their clinical state," said Vikram Rao, chief of the epilepsy division and an associate professor of neurology at UCSF. "The algorithm that Dr. Chiang developed in this study directly addresses that clinical dilemma."

Clinicians rely on seizure diaries kept by the patients to get a sense of the severity and frequency of the attacks.

"Seizure-counting is one of the oldest indices physicians rely on to monitor the underlying epilepsy burden and assess response to treatment," Chiang said. "A fundamental challenge is that changes in observed seizure frequency are not necessarily reflective of treatment, but may be due to natural variability.

"Crude estimates of seizure frequency can be misleading, and if misinterpreted as a change in seizure propensity can lead to unnecessary or harmful treatment decisions," she said.

Matching real patient data to simulated data allowed the team to know with certainty when the algorithm was on target and raised confidence in their ability to analyze patient data, Vannucci said. "The ability to mimic what happens in reality lets us assess the performance of our method," she said. "We can know for sure whether it recovers the truth."

"This new publication shows the benefits of a quantitative approach that can guide treatment when deciding whether treatment has been helpful," said John Stern, a professor of neurology and co-director of the UCLA Seizure Disorder Center and co-principal investigator of the study.

The study follows extensive collaboration by Vannucci and Chiang, who earned degrees from Rice and Baylor College of Medicine in 2016 and 2018, respectively, through the institutions' M.D./Ph.D. program. A 2017 paper by the team outlined a method integrating neuroimaging scans to identify patients at high risk of continued seizures before employing invasive surgery that may or may not provide complete relief.

For their work on the 2018 paper that proposed EpiSAT, Chiang won the 2019 Clinical Epilepsia Open Prize, presented by the International League Against Epilepsy.

Vannucci said the team plans to refine EpiSAT by incorporating data from electronic health records. "Including possible covariates could serve as an additional validation of the method," she said. "The ultimate goal is to see the tool deployed in clinical practice."

Credit: 
Rice University

Researchers develop method to improve skeleton of common chemicals

image: Researchers in Japan developed a new method to produce more complicated and medium-sized (seven- and eight-membered) carbocycles.

Image: 
Keiji Mori, TUAT

Every chemical, from the simplest to the most complex, has a structural skeleton of atoms. Atoms can be added or removed to transform the chemical into different types, for use in different applications. For many pharmaceutical and agricultural chemicals, the skeleton consists of a multi-membered carbon ring called a carbocycle.

It's incredibly difficult to make a carbocycle with more than five or six members, but a research team at Tokyo University of Agriculture and Technology (TUAT) in Japan has developed a new method that can easily produce seven- and eight-membered carbocycles.

They published their results in the November 28 print edition of Chemical Communications, a journal of the Royal Society of Chemistry.

"Generally speaking, formation of middle-sized rings is a difficult issue because of their instability and disorder," said Keiji Mori, paper author and associate professor in the Department of Applied Chemistry in the Graduate School of Engineering at TUAT. "In this study, we developed a simple and effective method for the construction of seven- and eight-membered carbocycles."

Mori and co-author Yuna Otawa, also with the Department of Applied Chemistry in the Graduate School of Engineering at TUAT, used a process called the "internal redox process."

The carbocycles include carbon atoms bonded to hydrogen atoms. The system is oxidized, which involves the chemicals rearranging and exchanging components. This weakens the bonds between carbons and hydrogens, allowing hydrogen atoms to bond to a different carbon. The researchers induced this process in a cyclical manner, causing a hydrogen shift to a distal carbon and leading to cleanly arranged, medium-sized (seven- and eight-membered) carbocycles. Notably, formation of these medium-sized rings is overwhelmingly favored over formation of five- or six-membered rings, which is normally the more facile process.

"The construction of seven-membered or larger carbocycles is a major research topic in modern synthetic organic chemistry," Mori said. "Many organic chemists have devoted much time and effort to the development of an effective method for the synthesis of this class of skeleton, but most of the reported methods require relatively dilute conditions and special precautions to suppress unwanted intermolecular reactions, thereby decreasing the practicality of the synthesis."

According to Mori, the new method can be performed without special precautions and without unwanted intermolecular side reactions.

"Our ultimate goal is the development of a synthetic method to two types of medium-sized rings fused-polycycles, which is difficult to achieve with current conventional methods," Mori said.

Credit: 
Tokyo University of Agriculture and Technology

Typhoid vaccine over 81% effective in tackling disease in Nepal

Caused by the bacterium Salmonella Typhi, typhoid is a major cause of fever in children in low- and middle-income countries and is responsible for nearly 11 million cases and more than 116,000 deaths a year worldwide.

In 2018, the World Health Organization (WHO) recommended the introduction of the typhoid conjugate vaccine (TCV) for infants and children over six months of age in typhoid-endemic countries, and added it to its list of prequalified vaccines.

Although TCV has been shown to protect against the disease in studies involving healthy volunteers in the UK, no efficacy studies in endemic populations had been completed.

Now, the Typhoid Vaccine Acceleration Consortium (TyVAC), which includes researchers from the University of Oxford, the University of Maryland School of Medicine, and PATH has completed a large field study in Nepal and published the interim analysis in the New England Journal of Medicine.

The study involved 20,000 children aged 9 months to under 16 years, who were randomly assigned to receive a single dose of either TCV or a meningococcal A (Men A) vaccine as a control.

Blood tests showed that typhoid occurred in 7 participants who received TCV and in 38 who received the Men A vaccine. The researchers noted that these were preliminary results, and that the study will continue to follow up the participants for two years.
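Those two case counts are what the headline efficacy figure rests on; under the simplifying assumption that the TCV and control groups were the same size and had equal follow-up (as in a 1:1 randomized trial), the back-of-the-envelope calculation is:

```python
# Rough check of the headline "over 81% effective" figure.
# Simplification: assumes equal-sized groups with equal follow-up time.
cases_tcv, cases_control = 7, 38
efficacy = 1 - cases_tcv / cases_control
print(f"Estimated vaccine efficacy: {efficacy:.1%}")  # ~81.6%
```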

Andrew Pollard, Professor of Paediatric Infection and Immunity at Oxford University's Department of Paediatrics, said: "This is the first study to show that a single dose of TCV is safe, immunogenic, and effective, which provides clear evidence that vaccination will help efforts to control this serious disease and is a strong endorsement of the WHO policy for vaccine implementation."

"The efficacy of these results in an endemic population adds to a growing body of evidence supporting the use of TCV to reduce disease and save lives in populations that lack clean water and improved sanitation," said Dr. Kathleen Neuzil, MD, MPH director of the Center for Vaccine Development and Global Health at the University of Maryland School of Medicine and director of TyVAC.

These results show the vaccine has the potential to significantly reduce the burden of typhoid in high-risk populations. This is especially timely with the recent spread of extensively drug resistant typhoid, which threatens child health in affected regions.

Pakistan's current typhoid outbreak is the first-ever reported outbreak of ceftriaxone-resistant typhoid and represents an alarming trend in the spread of drug-resistant typhoid. Not only is the strain resistant to ceftriaxone, the standard treatment in many parts of the world, but it is also resistant to the majority of antibiotics commonly used for typhoid, making it increasingly challenging and costly to treat.

TCVs have the potential to overcome many of the challenges that impeded uptake of earlier vaccines, including longer-lasting protection, fewer doses, and suitability for children under two years of age, allowing for inclusion in routine childhood immunization programs.

Credit: 
University of Oxford

Parker Solar Probe traces solar wind to its source on sun's surface: coronal holes

image: NASA's Parker Solar Probe mission has traveled closer to the Sun than any human-made object before it.

Image: 
NASA/Johns Hopkins APL

A year ago, NASA's Parker Solar Probe flew closer to the sun than any satellite in history, collecting a spectacular trove of data from the very edge of the sun's million-degree corona.

Now, that data has allowed solar physicists to map the source of a major component of the solar wind that continually peppers Earth's atmosphere, while revealing strange magnetic field reversals that could be accelerating these particles toward our planet.

These accelerated particles interact with Earth's magnetic field, generating the colorful northern and southern lights. But they also have the potential to damage the electrical grid and telecommunications networks on Earth's surface, threaten orbiting satellites and perhaps endanger astronauts in space.

The more solar physicists understand about the magnetic environment of the sun and how it flings solar wind particles toward the planets, the better they will be able to predict events and prevent damage.

"There was a major space weather event in 1859 that blew out telegraph networks on Earth and one in 1972 that set off naval mines in North Vietnam, just from the electrical currents generated by the solar storm," said Stuart Bale, a University of California, Berkeley, professor of physics and lead author of an article about new results from the probe's FIELDS experiment. "We're much more of a technological society than we were in 1972, the communications networks and the power grid on Earth are extraordinarily complex, so big disturbances from the sun are potentially a very serious thing. If we could predict space weather, we could shut down or isolate parts of the power grid, or shut down satellite systems that might be vulnerable."

The journal Nature will post these findings online on Dec. 4 in one of four papers describing all the new findings from the probe's 2018 close encounter with the sun. All four papers will appear in the Dec. 12 print edition of the journal.

Coronal holes

One of the main goals of the Parker Solar Probe is to discover the source of the "slow" solar wind and how it is accelerated in the hot atmosphere of the sun -- the 1 million-degree Celsius (about 2 million degrees Fahrenheit) solar corona. The solar wind consists of charged particles, mostly protons and helium nuclei, traveling along the sun's magnetic field lines. The so-called "fast" solar wind, clocked at between 500 and 1,000 kilometers per second, is known to come from large holes in the solar corona at the sun's north and south poles. But the origin of the "slow" solar wind, which is denser but about half the speed of the "fast" solar wind, is more poorly understood.

The data from the probe's first close encounter -- the probe has since had two other intimate encounters during the closest approach, or perihelion, of its orbit around the sun -- reveals a wealth of new physics.

"The first three encounters of the solar probe that we have had so far have been spectacular," said Bale, the principle investigator for FIELDS. "We can see the magnetic structure of the corona, which tells us that the solar wind is emerging from small coronal holes; we see impulsive activity, large jets or switchbacks which we think are related to the origin of the solar wind; we see instability -- the gas itself is unstable and is generating waves on its own. And we are also surprised by the ferocity of the dust environment in the inner heliosphere."

During each close encounter, the probe parked for as long as a week above a coronal hole that was streaming solar wind particles along magnetic field lines past the probe, giving instruments aboard the probe an unprecedented look at what was happening on the solar surface below.

Thanks to extreme ultraviolet mapping of the sun by other spacecraft, such as STEREO, Bale and his colleagues were able to trace the wind and the magnetic fields back to a source -- coronal holes -- which strongly suggests that these holes are the source of the slow solar wind. Coronal holes, which are related to sunspots, are areas that are cooler and less dense than the surrounding corona.

What was unexpected was a series of flips in the magnetic field as it streamed past the spacecraft. During these periods, the magnetic field suddenly reversed itself by 180 degrees and then, seconds to hours later, flipped back.

"These switchbacks are probably associated with some kind of plasma jets," Bale said. "My own feeling is that these switchbacks, or jets, are central to the solar wind heating problem."

Comet dust

Another surprise was the dust that peppered the spacecraft repeatedly during each fly-by at perihelion -- the point in the orbit where the spacecraft was closest to the sun. Probably smaller than a micron, which is a thousandth of a millimeter, the dust particles are likely debris from asteroids or comets that melted near the sun and left behind their trapped dust. That dust is now orbiting the sun, and Bale suspects that much of the dust hitting the spacecraft is being ejected outward by light pressure and destined to escape the solar system entirely.

Bale said that studying the solar wind from Earth is like studying the source of a waterfall from near the bottom, where the turbulence obscures what's happening at the top.

"Now, with the Parker Solar Probe, we are getting closer and closer to the top of the waterfall, and we can see that there is underlying structure," he said. "At the source, what we see is something that is coherent with impulsive jets on top of it. You have a small hole -- a coronal hole -- and the solar wind is coming out of that in a smooth flow. But then, on top of it, there are jets. By the time you get all the way downstream from it at Earth, it is all just mixed up."

Bale will discuss results from the first close encounter and compare them to those of the two subsequent close encounters in talks at the upcoming American Geophysical Union meeting in San Francisco that starts Dec. 8.

"We have been working nearly around the clock for a decade on this thing, so to see the data ... it is just a pleasure," Bale said. "It is a big case of delayed gratification, but it is terrific stuff."

Credit: 
University of California - Berkeley

New diagnostic techniques and drug may slow and even reverse cognitive decline from aging

BEER-SHEVA, ISRAEL and BERKELEY, CALIFORNIA...DECEMBER 4, 2019 - A groundbreaking clinical approach has been developed combining new diagnostic techniques to detect a leaking blood-brain barrier (BBB) with a new anti-inflammatory drug that for the first time slows or reverses age-related cognitive decline.

In two related studies published in the journal Science Translational Medicine, researchers from Ben-Gurion University of the Negev (BGU) and the University of California, Berkeley (UC) report that when given the new drug to reduce inflammation, senile mice had fewer signs of dysfunctional brain electrical activity and were better able to learn new tasks, becoming almost as cognitively adept as mice half their age.

Other findings indicate two practical pathways -- measuring the leakiness of the blood-brain barrier via MRI and abnormal electrical brain activity via EEG -- that can be used to screen people for a leaky BBB.

"These findings represent real hope that we can stop, and even reverse, the deterioration that until now we considered an inevitable part of aging," said senior study author BGU Prof. Alon Friedman M.D., Ph.D., and research partner Prof. Daniela Kaufer, UC Berkeley Department of Integrative Biology. "It is the first diagnostic, coupled with personalized drug intervention targeting the BBB."

"We tend to think about the aged brain in the same way we think about neurodegeneration: age involves loss of function and dead cells," said Kaufer. "But our new data tell a different story about why the aged brain is not functioning well -- it is because of this 'fog' of inflammatory load. But when you remove that inflammatory fog, within days the aged brain acts like a young brain. It is an extremely optimistic finding in terms of the capacity for plasticity that exists in the brain and indicates that we can reverse brain aging."

The BBB is a semi-permeable interface that separates circulating blood from the brain. It also prevents the transfer of unwanted molecules or infectious organisms from the blood to the brain. Increasing evidence shows that breaching the integrity of this barrier causes many brain diseases and neurodegeneration as a result of aging.

In analyzing brain tissue from humans, Kaufer found evidence of albumin in aged brains as well as increased neuroinflammation and production of TGF-β, a protein that controls cell growth.

Because albumin is typically synthesized only outside the BBB, increased albumin within the brain indicates BBB damage leading to inflammation.

Profs. Kaufer and Friedman also showed that introducing albumin into the brain can, within a week, make the brains of young mice look like those of old mice, in terms of neuronal functions and their susceptibility to seizures. These albumin-treated mice also navigated a maze as poorly as aged mice.

The Friedman group in the BGU Brain Imaging Center developed an MRI imaging protocol -- dynamic contrast-enhanced (DCE) imaging -- and mathematical algorithms that quantify leakage in the BBB.

"When we infuse albumin into the brains of young mice, we recapitulate aging of the brain: the gene expression, the inflammatory response, resilience to induced seizures, mortality after seizures, and performance in a maze. And when we record their brain activity, we find these paroxysmal slow wave events (PSWE)," Kaufer said. "And all is specific to the site we infuse, so doing this is sufficient to get an aged phenotype of this very young brain."

Administering a new anti-inflammatory drug that specifically targets TGF-β signaling decreased PSWE occurrences in mice with BBB leakiness. The drug, a small molecule called IPW, not only helps to alleviate the effects of a leaky BBB but seems to also heal the barrier. "These PSWEs may explain some of the symptoms we see in Alzheimer's disease patients and therefore lowering the PSWE burden may help those patients," said Dr. Dan Milikovsky, who led the project in Prof. Friedman's laboratory.

"Together, the evidence points to a dysfunction in the brain's vasculature as one of the earliest triggers of neurological aging," Friedman said. "This combination of two biomarkers and a drug gives us the innovative ability to diagnose and treat patients with blood-brain barrier leakiness, and cease treatment once the BBB closes and danger decreases."

Kaufer added, "We got here through this back door starting with questions about plasticity having to do with the blood-brain barrier, traumatic brain injury and how epilepsy develops. But after we'd learned a lot about the mechanisms, we started thinking that maybe in aging it is the same story. This is new biology, a completely new angle on why neurological function deteriorates as the brain ages."

The researchers have started a company to develop IPW and other therapeutics with the goal of reducing brain inflammation, and thus permanent damage, after stroke, concussion or traumatic brain injury. The drug may eventually help older adults suffering from early dementia or Alzheimer's disease with demonstrated BBB leakage.

Credit: 
American Associates, Ben-Gurion University of the Negev

Adding copper strengthens 3D-printed titanium

image: 3D-printed titanium-copper bars with titanium powder and copper powder.

Image: 
RMIT University

Successful trials of titanium-copper alloys for 3D printing could kickstart a new range of high-performance alloys for medical device, defence and aerospace applications.

Current titanium alloys used in additive manufacturing often cool and bond together in column-shaped crystals during the 3D printing process, making them prone to cracking or distortion.

And unlike aluminium or other commonly used metals, there is no commercial grain refiner for titanium that manufacturers can use to effectively refine the microstructure to avoid these issues.

But now a new titanium alloy with copper, unveiled today in Nature, appears to have solved this problem.

Professor Mark Easton from RMIT University's School of Engineering said their titanium-copper alloy printed with "exceptional properties" without any special process control or additional treatment.

"Of particular note was its fully equiaxed grain structure: this means the crystal grains had grown equally in all directions to form a strong bond, instead of in columns, which can lead to weak points liable to cracking."

"Alloys with this microstructure can withstand much higher forces and will be much less likely to have defects, such as cracking or distortion, during manufacture," Easton said.

The collaborative project involved leading researchers in the area of alloy composition and grain microstructure from RMIT University, CSIRO, the University of Queensland and the Ohio State University.

CSIRO Senior Principal Research Scientist, Dr Mark Gibson, said their findings also suggest similar metal systems could be treated in the same way to improve their properties.

"Titanium-copper alloys are one option, particularly if the use of other additional alloying elements or heat treatments can be employed to improve the properties further," he said.

"But there are also a number of other alloying elements that are likely to have similar effects. These could all have applications in the aerospace and biomedical industries."

Gibson said the new breed of alloys could increase manufacturers' production rates and allow for more complex parts to be manufactured.

"In general, it opens up the possibility of developing a new range of titanium-based alloys specifically developed for 3D printing with exceptional properties," he said.

"It has been a delight, as it has been my good fortune for some time, to work on such an interesting and significant project as this with such a talented band of scientists."

Credit: 
RMIT University

Looking at tropical forests through new eyes

image: Images of the lowland Amazon rainforest canopy. Different colors represent different spectral fingerprints.

Image: 
Global Airborne Observatory (GAO)

An international team of scientists led by the University of Arizona used the latest technology in remote sensing to measure plant biodiversity from the Amazon basin to the Andes Mountains in Peru to better understand how tropical forests will respond to climate change.

The researchers used Arizona State University's Global Airborne Observatory, or GAO, to show that by combining traditional on-the-ground field measurements of carbon with aerial measurements of plant chemistry, the ability to model and predict the role that tropical forests play in the global carbon cycle can be improved.

"This work is important because it can be difficult to obtain measurements in some of these remote places," said Sandra Durán, a postdoctoral fellow at the University of Arizona and lead author of the paper published in Science Advances.

"Climate scientists are interested in predicting how much carbon is able to be captured by specific forests. We're showing that these measurements of plant chemistry taken from airplanes have potential to make predictions of carbon gain in one of the most biodiverse forests in the world for the first time."

A tree's ability to grow and survive is affected by traits such as nutrient concentrations and defense compounds in its leaves. While researchers have measured these traits in trees located around the globe, incomplete data from the Andes' highly diverse forests has made it difficult to understand how trees impact the functioning of tropical forests.

Durán and her collaborators utilized GAO maps of plant chemistry to quantify the diversity of plant chemistry that is not visible to the naked eye. The measurements allowed them to see how diversity can predict rates of carbon capture in tropical forests that contain a large range of temperatures and elevations.

"New technology is enabling us to see the functioning of the forest in a new light," said corresponding author Brian Enquist, professor of ecology and evolutionary biology at the University of Arizona. "This technology creates a continuous map of the variation in plant chemistry, even in highly remote areas of the Earth that would be almost impossible to gather from ground surveys. This will improve our ability to model tree responses to environmental changes, from small plots to large regions."

Greg Asner, co-author and principal investigator of GAO, said, "Our plant canopy chemistry maps have been used for many purposes over the past 10 years, but this new application - to assess drivers of forest carbon cycling - is new and opens doors for the use of our mapping approach throughout the world's tropical forests."

Credit: 
University of Arizona

Asia-wide genome mapping project reveals insights into Asian ancestry and genetic diversity

image: (Left to right) NTU Assistant Professor Hie Lim Kim, NTU Professor Stephan Schuster and GenomeAsia 100K Executive Chairman Mahesh Pratapneni, with the analysis of the Asian genomes which have been sequenced by all members of the consortium using the same methodology.

Image: 
NTU Singapore

After a global genetic comparison, a team of international scientists has discovered that Asia has at least ten ancestral lineages, whereas northern Europe has a single ancestral lineage.

In their first study reported in Nature this week, the GenomeAsia 100K consortium analysed the genomes of 1,739 people, which represents the widest coverage of genetic diversity in Asia to date.

The study covers 64 different countries and provides what the authors call "the first comprehensive genetic map for Asia" that will guide scientists in studying diseases unique to Asians, improve precision medicine and identify drugs that may carry higher risk of adverse reactions for certain ethnic groups.

Despite forming over 40 per cent of the world's population, Asian people have previously accounted for only six per cent of the world's recorded genome sequences.

The goal of GenomeAsia 100K, which launched in 2016, is to better understand the genome diversity of Asian ethnicities by sequencing 100,000 genomes of people living in Asia. It is a non-profit consortium hosted by Nanyang Technological University, Singapore (NTU Singapore), the only academic member. Its three other members are Macrogen, based in South Korea; Genentech, a member of the Roche Group in the United States; and MedGenome from India/US.

NTU Professor Stephan C. Schuster, the consortium's scientific chairman and a co-leader of the study, explained the significance of GenomeAsia 100K's initial findings on the vast genomic diversity in Asia: "To put it into context, imagine we looked at all people of European ancestry and, based on the level of their genetic diversity, observed that they could all be grouped into just one ancestral lineage or population. Now, if we took that same approach with our new data from people of Asian ancestry, then based on the much higher levels of genetic diversity observed, we would say that there are 10 different ancestral groups or lineages in Asia."

Prof Schuster added, "GenomeAsia 100K is a significant and far-reaching project that will affect the well-being and health of Asians worldwide, and it is a great honour for Singapore and NTU to be hosting it."

Executive Chairman of GenomeAsia 100K, Mahesh Pratapneni said, "The publication of this pilot study is a first milestone for GenomeAsia 100K, which is an unprecedented collaboration between academia and industry leaders in the field of genomics. We are certain more partners will join GenomeAsia 100K to accelerate medical breakthroughs for people of Asian heritage."

Chairman and CEO of MedGenome, the largest genomics and molecular diagnostics provider in South Asia with facilities in the US, Singapore and across India, Sam Santhosh, said, "We are excited that over 1,000 whole-genome sequences from the Indian subcontinent will now be available to researchers; this is an initial step in covering the underrepresented geographies."

Prof Jeong-Sun Seo of Seoul National University Bundang Hospital, the consortium's scientific co-chair and Chairman of Macrogen, said, "I hope this Asian-focused study serves as a stepping stone for the democratisation of health care and precision medicine in Asia."

How the database of Asian genomes was formed

Over the three decades prior to the pilot project, thousands of blood and saliva samples had already been collected by scientists and anthropologists from donors across Asia, in the hope that one day a deeper analysis could yield insights into Asian communities.

Of particular interest were participants from remote and isolated communities, who have long been studied by anthropologists but had not undergone genomic analysis until the GenomeAsia 100K project was kickstarted.

The pilot study included 598 genomes from India, 156 from Malaysia, 152 from South Korea, 113 from Pakistan, 100 from Mongolia, 70 from China, 70 from Papua New Guinea, 68 from Indonesia, 52 from the Philippines, 35 from Japan, and 32 from Russia.

Genomic DNA extracted from the blood and saliva samples was then sequenced in laboratories of the four consortium members in the US, India, South Korea and Singapore. The digital sequencing data were subsequently sent to Singapore for processing and storage.

Singapore was selected by the consortium as the host, as the country offered good travel connections for collaborating scientists, strong supercomputing facilities to crunch the data, and the required cybersecurity standards in its data centre for handling sensitive genetic data.

The combined data was compiled and analysed by NTU scientists, including Asst Prof Hie Lim Kim, a population genomics expert at the Asian School of The Environment, with the help of the National Supercomputing Centre Singapore (NSCC) and international collaborators.

Different Asian ethnic groups respond differently to mainstream drugs

Every person has approximately 3.2 billion nucleotides, or building blocks, in their genome, which form their DNA 'code'.

It's estimated that for the genomes of any two people, 99.9 per cent of this code is the same; on average, 0.1 per cent, or about three million nucleotides, differ between them.
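The arithmetic behind the "three million" figure is straightforward:

```python
# Quick arithmetic behind the "three million nucleotides" figure.
genome_length = 3.2e9        # ~3.2 billion nucleotides per genome
differing_fraction = 0.001   # ~0.1% differs between any two people on average
print(genome_length * differing_fraction)  # ~3.2 million differing positions
```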

This genetic variation helped humankind colonise the most diverse environments on the planet and made our species resilient to disease, but it also results in differential responses to many medicines.

"Genetic variance is the reason we are distinctively different from each other including differences in the diseases that each of us suffer from during our lifetimes. Understanding these differences is the most important source of clues that we have for driving the discovery of innovative new medicines," said Dr Andrew Peterson, an author of the paper and an expert in the use of genetics to drive drug discovery.

Dr Peterson, who was head of Molecular Biology at Genentech while this work was being carried out, is now Chief Scientific Officer at MedGenome, where he leads drug discovery efforts at MedGenome's Seven Rivers Genomic Medicines division.

The frequencies of known genetic variants related to adverse drug response were analysed for the genomes collected in this study.

For example, Warfarin, a common anticoagulant drug prescribed to treat cardiovascular diseases, likely carries a higher than usual risk of adverse drug response for people with a certain genetic variant. This particular variant appears at a higher frequency in those with North Asian ancestry, such as Japanese, Korean, Mongolian or Chinese people.

Using data analysis, scientists can now screen populations to identify groups that are more likely to have a negative predisposition to a specific drug.
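A minimal sketch of what such screening might look like is shown below; the population groups, carrier counts and 10% flagging threshold are invented for illustration and are not the consortium's actual pipeline.

```python
# Hedged sketch: flag population groups in which a risk variant is common
# enough to warrant extra caution in dosing. All counts below are invented.

variant_counts = {
    # group: (carriers observed, genomes sequenced)
    "Group A": (18, 100),
    "Group B": (2, 150),
    "Group C": (9, 70),
}

FLAG_THRESHOLD = 0.10  # arbitrary illustrative cutoff

for group, (carriers, total) in variant_counts.items():
    freq = carriers / total
    note = "review dosing guidance" if freq > FLAG_THRESHOLD else "typical risk"
    print(f"{group}: carrier frequency {freq:.1%} -> {note}")
```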

Knowing a person's population group and their predisposition to drugs is extremely important if personalised medicine is to work, stressed Prof Schuster: "For precision medicine to be precise, you need to know precisely who you are."

NTU Asst Prof Hie Lim Kim, who leads the project's efforts in population genetics, added: "Only by sequencing the entire genome of an individual can a person's ancestry and genetic background be known. Their genome explains why some people are afflicted by certain diseases while others aren't. Scientists know that there is no single drug that works well for everybody and our latest findings not only reinforce this, but suggest how specific groups could be harmed by specific medicines."

Moving forward, the GenomeAsia 100K will continue to collect and analyse up to 100,000 genomes from all of Asia's geographic regions, in order to fill in the gaps on the world's genetic map and to account for Asia's unexpected genetic diversity.

Credit: 
Nanyang Technological University