Tech

NASA finds Super Typhoon Jebi undergoing eyewall replacement

image: On Aug. 31 at 8:20 a.m. EDT (1220 UTC) NASA's Aqua satellite found the coldest temperatures of the strongest thunderstorms (yellow) in Super Typhoon Jebi were as cold as or colder than minus 80 degrees Fahrenheit (minus 62.2 degrees Celsius) around the eye and west of the center. They were embedded in a large area of storms (red) where cloud top temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 degrees Celsius).

Image: 
Credits: NRL/NASA

The tropical cyclone known as Jebi has intensified into a super typhoon, and NASA's Terra satellite found it was undergoing eyewall replacement. Terra observed powerful storms around the 15-nautical-mile-wide eye of this Category 5 storm.

When Terra passed over Super Typhoon Jebi it appeared to be undergoing eyewall replacement. Mature, intense tropical cyclones can, and often do, undergo an eyewall replacement cycle. That happens when a new eyewall, or ring of thunderstorms within the outer rain bands, forms farther out from the storm's center, outside of the original eyewall. That outer ring of thunderstorms then chokes off the original eyewall, starving it of moisture and momentum. Eventually, if the cycle is completed, the original eyewall of thunderstorms dissipates and the new outer eyewall contracts and replaces the old one. The storm's intensity can fluctuate over this period, initially weakening as the inner eyewall fades before strengthening again as the outer eyewall contracts.

On Aug. 31 at 7:15 a.m. EDT (1150 UTC) the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Terra satellite found the coldest temperatures of the strongest thunderstorms around Jebi's eye and west of the center. They were as cold as or colder than minus 80 degrees Fahrenheit (minus 62.2 degrees Celsius). Those powerful storms were embedded in a large area of storms where cloud top temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 degrees Celsius). NASA research has shown that storms with cloud top temperatures that cold (that are very high in the troposphere) have the capability to generate heavy rain.

At 11 a.m. EDT (1500 UTC) Super Typhoon Jebi's maximum sustained winds were near 172 mph (150 knots/278 kph). Jebi's eye was centered near 18.8 degrees north latitude and 141.0 degrees east longitude, or about 316 nautical miles northwest of Saipan. Jebi was moving to the west-northwest at 14 mph (12 knots/22 kph).

The Joint Typhoon Warning Center (JTWC) expects Jebi to begin weakening, but the storm is still forecast to make landfall between Kyoto and Tokyo, Japan, on Sept. 4.

For updated forecasts, visit: http://www.prh.noaa.gov/cphc

Credit: 
NASA/Goddard Space Flight Center

Water worlds could support life, study says

image: Prof. Edwin Kite, University of Chicago

Image: 
University of Chicago

The conditions for life surviving on planets entirely covered in water are more fluid than previously thought, opening up the possibility that water worlds could be habitable, according to a new paper from the University of Chicago and Pennsylvania State University.

The scientific community has largely assumed that planets covered in a deep ocean would not support the cycling of minerals and gases that keeps the climate stable on Earth, and thus wouldn't be friendly to life. But the study, published Aug. 30 in The Astrophysical Journal, found that ocean planets could stay in the "sweet spot" for habitability much longer than previously assumed. The authors based their findings on more than a thousand simulations.

"This really pushes back against the idea you need an Earth clone--that is, a planet with some land and a shallow ocean," said Edwin Kite, assistant professor of geophysical sciences at UChicago and lead author of the study.

As telescopes get better, scientists are finding more and more planets orbiting stars in other solar systems. Such discoveries are resulting in new research into how life could potentially survive on other planets, some of which are very different from Earth--some may be covered entirely in water hundreds of miles deep.

Because life needs an extended period to evolve, and because the light and heat on planets can change as their stars age, scientists usually look for planets that have both some water and some way to keep their climates stable over time. The primary method we know of is how Earth does it. Over long timescales, our planet cools itself by drawing down greenhouse gases into minerals and warms itself up by releasing them via volcanoes.

But this model doesn't work on a water world, with deep water covering the rock and suppressing volcanoes.

Kite and Penn State coauthor Eric Ford wanted to know if there was another way. They set up a simulation with thousands of randomly generated planets and tracked the evolution of their climates over billions of years.

"The surprise was that many of them stay stable for more than a billion years, just by luck of the draw," Kite said. "Our best guess is that it's on the order of 10 percent of them."

These lucky planets sit in the right location around their stars. They happen to have the right amount of carbon present, and they don't have too many minerals and elements from the crust dissolved in the oceans that would pull carbon out of the atmosphere. They have enough water from the start, and they cycle carbon between the atmosphere and ocean only, which in the right concentrations is sufficient to keep things stable.

"How much time a planet has is basically dependent on carbon dioxide and how it's partitioned between the ocean, atmosphere and rocks in its early years," said Kite. "It does seem there is a way to keep a planet habitable long-term without the geochemical cycling we see on Earth."

The simulations assumed stars that are like our own, but the results are optimistic for red dwarf stars, too, Kite said. Planets in red dwarf systems are thought to be promising candidates for fostering life because these stars get brighter much more slowly than our sun--giving life a much longer time period to get started. The same conditions modeled in this paper could be applied to planets around red dwarfs, the authors said: Theoretically, all you would need is the steady light of a star.

Credit: 
University of Chicago

Pushing big data to rapidly advance patient care

The breakneck pace of biomedical discovery is outstripping clinicians' ability to incorporate this new knowledge into practice.

Charles Friedman, Ph.D., and his colleagues recently wrote an article in the Journal of General Internal Medicine about a possible way to approach this problem, one that would accelerate the movement of newly generated evidence about the management of health and disease into practice that improves the health of patients.

Traditionally, it has taken many years, and even decades, for the knowledge produced from studies to change medical practice. For example, the authors note in the article, the use of clot-busting drugs for the treatment of heart attacks was delayed by as much as 20 years because of this inability to quickly incorporate new evidence.

"There are lots of reasons why new knowledge isn't being rapidly incorporated into practice," says Friedman. "If you have to read it in a journal, understand it, figure out what to do based on it, and fit that process into your busy day and complicated work flow, for a lot of practitioners, there's just not enough room for this."

Informing medical practice

Much of the generation of new evidence is done by groups like the federal Agency for Healthcare Research and Quality and the Cochrane Collaboration, a UK-based non-profit group designed to organize medical research into systematic reviews and meta-analyses. These reviews synthesize all of the available medical research about a given topic with the hope of informing medical practice. However, the movement of this accumulated knowledge into medical practice can happen incredibly slowly, if at all.

The new article focuses on the need to harness the power of technology so that health systems can analyze the data they generate while caring for patients, produce new "local" evidence, and use it in combination with published, reviewed evidence to improve health outcomes.

The key to using both types of evidence, they argue, is transforming human readable knowledge--the words, tables and figures in a typical journal article--into computable forms of that same knowledge.

"A lot of scientific studies result in some kind of model: an equation, a guideline, a statistical relationship, or an algorithm. All of these kinds of models can be expressed as computer code that can automatically generate advice about a specific patient," Friedman explains. When both "local" models and published models are available in computable forms, it is suddenly possible to generate advice that reflects both kinds of sources.

Computable forms are key

He notes that while Michigan Medicine, along with most other health systems that use electronic health records, is using its data to continuously improve quality of care, putting this knowledge in computable forms creates many new ways to apply that knowledge to improve care.

The University of Michigan Medical School's Department of Learning Health Sciences is taking the lead in transforming biomedical knowledge into computable forms that are open and accessible to anyone. They've created a computer platform called the Knowledge Grid, which stores computable knowledge in digital libraries and then uses that knowledge to generate patient-specific advice.

"The value of Big Data is to generate Big Knowledge," says Friedman. "The power of Big Data is to provide better models. If all those models do is sit in journal articles, no one's going to be any healthier."

Credit: 
Michigan Medicine - University of Michigan

NASA finds very cold storm tops circling Hurricane Norman's center

image: On Aug. 30 at 11 a.m. EDT (1500 UTC), NOAA's National Hurricane Center or NHC noted Norman had rapidly strengthened during the past 12 to 24 hours, with the development of a well-defined 20-nautical-mile-wide eye and a thick ring of cold cloud tops (purple) of minus 94 to minus 121 degrees Fahrenheit (minus 70 to minus 85 degrees Celsius).

Image: 
NASA JPL/Heidar Thrastarson

When NASA's Aqua satellite passed over Hurricane Norman on Aug. 30 infrared data showed very cold storm tops around a 20 nautical-mile-wide eye.

NASA's Aqua satellite passed over Norman on Aug. 30 at 5:29 a.m. EDT (0929 UTC). The Atmospheric Infrared Sounder or AIRS instrument analyzed the storm in infrared light, which provides temperature information. Temperature is important when trying to understand how strong storms can be. The higher the cloud tops, the colder they are and the stronger the storms.

On Aug. 30 at 11 a.m. EDT (1500 UTC), NOAA's National Hurricane Center or NHC noted Norman had rapidly strengthened during the past 12 to 24 hours, with the development of a well-defined 20-nautical-mile-wide eye and a thick ring of cold cloud tops of minus 94 to minus 121 degrees Fahrenheit (minus 70 to minus 85 degrees Celsius). Storms with cloud top temperatures that cold have the capability to produce heavy rainfall.

The eye of Hurricane Norman was located near latitude 17.8 degrees north and longitude 118.0 degrees west. That's about 630 miles (1,015 km) west-southwest of the southern tip of Baja California, Mexico.

Norman was moving toward the west near 8 mph (13 kph), and this motion is expected to continue today.  A west-southwestward motion is forecast on Friday, followed by a turn back toward the west and west-northwest over the weekend.

Maximum sustained winds have increased to near 150 mph (240 kph) with higher gusts.  Norman is a category 4 hurricane on the Saffir-Simpson Hurricane Wind Scale.  Some additional strengthening is forecast during the next 12 to 24 hours.

Gradual weakening is anticipated to begin by Friday night or Saturday; however, Norman is expected to remain a very powerful hurricane during the next few days.

Credit: 
NASA/Goddard Space Flight Center

Crop losses due to insects could nearly double in Europe's bread basket as the climate warms

Wheat, maize and rice yields (particularly in northern climates) are projected to fall as insects in temperate regions thrive in a warmer climate, new research shows.

The study, which was published today in the journal Science, models increases in insect populations and their metabolic rates in a warmer world. It projects a 50 to 100 percent increase in pest-induced crop losses in European wheat and 30 to 40 percent increases in North American maize even if countries meet their existing commitments to reduce greenhouse gas emissions.

"In some temperate countries, insect pest damage to crops is projected to rise sharply as temperatures continue to climb, putting serious pressure on grain producers," said Joshua Tewksbury, co-lead author of the research, a research professor at CU Boulder and a director of Future Earth, an international research network for global sustainability.

Insect pests already reduce net yields of wheat, maize and rice, three staple grains. Combined, these grains provide 42 percent of total calorie consumption worldwide. However, models assessing the agricultural effects of climate change rarely consider losses due to insects.

In a warmer climate, however, insects are expected to be even hungrier and more numerous. Warmer temperatures have been shown to accelerate an individual insect's metabolic rate, leading it to consume more food during its lifespan. And while pest populations may decline in some hotter tropical areas, they are expected to increase elsewhere as temperatures rise and additional ecosystems become favorable to the insects.

The researchers calculated the potential for crop damage through 2050 by combining robust climate projection data, crop yield statistics, insect metabolic rates and other demographic information.

The study finds that Europe's bread basket could be among the hardest hit. Currently the most productive wheat-producing region in the world, Europe could see total annual pest-induced wheat losses top 16 million tons. Eleven European countries, including the U.K., Denmark, Sweden and Ireland, are predicted to see increases of 75 percent or more in insect-induced wheat losses.

Insects could also create major impacts on maize and rice yields in North America and Asia, respectively. The U.S., the world's largest maize producer, could see an almost 40 percent increase in insect-induced maize losses under current climate warming trajectories, a reduction of over 20 million tons annually. Meanwhile, one-third of the world's rice production comes from China, where future insect-induced losses could top 27 million tons annually.

"On average, the impacts from insects add up to about a 2.5 percent reduction in crop yield for every degree Celsius increase in temperature," said Tewksbury. "For context, this is about half the estimated direct impact of temperature change on crop yields, but in north temperate areas, the impact of increases insect damage will likely be greater than the direct impact of climate on crop yields."

The study recommends changes to global agricultural practices, including increased selection for heat- and pest-resistant crops and new crop rotation patterns to reduce vulnerability to insects. In some extreme cases, greater pesticide use may become necessary to secure regional food supplies, even at the cost of possible associated health and environmental damage.

Credit: 
University of Colorado at Boulder

Children's bone cancers could remain hidden for years before diagnosis

Scientists have discovered that some childhood bone cancers start growing years before they are currently diagnosed. Researchers at the Wellcome Sanger Institute and the Hospital for Sick Children (SickKids), Canada, discovered large-scale genetic rearrangements in Ewing sarcomas and other children's cancers, and showed these can take years to form in bone or soft tissue. This study will help unravel the causes of childhood cancers and raises the possibility of finding ways to diagnose and treat these cancers earlier in the future.

Reported in the journal Science today (31st August 2018), the research also showed that cancers with the complex genetic rearrangements were more aggressive and could benefit from more intense treatment than other cancers. This will help doctors decide on the best treatment for each patient.

Ewing sarcoma is a rare cancer found mainly in bone or soft tissue of young teenagers as they grow, and is the second most commonly diagnosed bone cancer in children and young people. Treatment involves chemotherapy, surgery to remove the affected part of the bone if possible and radiotherapy. However, this harsh regime has hardly changed for the last 40 years and fails about one third of patients.

Cancer is a genetic disease and in Ewing sarcoma, two specific genes, EWSR1 and ETS, are fused together. To understand the genetic events leading to this, researchers sequenced and analysed the genomes of 124 tumours. They discovered that in nearly half of the cases, the main gene fusion occurred when the DNA completely rearranged itself, forming complex loops of DNA.

Dr Adam Shlien, one of the lead authors on the paper, Associate Director of Translational Genetics and Scientist in Genetics & Genome Biology, and co-Director of the SickKids Cancer Sequencing (KiCS) program at SickKids, said: "Many childhood sarcomas are driven by gene fusions, however until now we have not known how or when these key events occur, or whether these processes change at relapse. We found dramatic early chromosomal shattering in 42 per cent of Ewing sarcomas, not only fusing two critical genes together, but also disrupting a number of important areas."

The earlier a cancer is diagnosed, the easier it is to treat, but until now it was thought that Ewing sarcoma was very fast growing. Surprisingly, the researchers found that the complex DNA rearrangements that cause Ewing sarcoma had occurred years before the tumour was diagnosed. This offers possibilities of finding ways to screen for these cancers to treat them earlier.

Dr Sam Behjati, one of the lead authors on the paper, from the Wellcome Sanger Institute and the University of Cambridge Department of Paediatrics, said: "In principle this study provides evidence that Ewing sarcoma could be detectable earlier, possibly even before it reveals itself as disease. If we could detect these childhood cancers sooner, when tumours are smaller, they would be much easier to treat. Further research is needed, but this possibility of finding a way to diagnose Ewing sarcomas earlier could help patients in the future."

The researchers discovered that Ewing sarcomas with these complex genetic rearrangements were more aggressive than those with simple gene fusions, and that any relapses needed different treatments. Understanding this could help clinicians offer the best treatment options for each patient.

Dr. David Malkin, co-lead author, Staff Oncologist, Scientist and co-Director of the SickKids Cancer Sequencing (KiCS) program, said: "As an increasing and diverse number of tumour genome sequences become available, we may be able to define further rearrangement processes that underlie fusion genes and thus unravel the causes of fusion-driven human cancers. Our goal is to better understand these cancers in an attempt to improve treatment and outcomes."

Credit: 
Wellcome Trust Sanger Institute

Chemotherapy may lead to early menopause in young women with lung cancer

CLEVELAND, Ohio (August 29, 2018)--A new study suggests chemotherapy may cause acute amenorrhea leading to early menopause in women with lung cancer. The study is the first to comment on amenorrhea rates in women younger than 50, concluding that women with lung cancer who desire future fertility should be educated about risks and options before starting treatment. Study results are published online today in Menopause, the journal of The North American Menopause Society (NAMS).

According to the Mayo Clinic, although the rate of lung cancer diagnoses in men has decreased by 32% since 1975, it has risen 94% in women, and lung cancer has now surpassed breast cancer as the leading cause of cancer death in US women. Although lung cancer is more common in older adults, women are diagnosed at a younger age compared with men, and approximately 5,000 premenopausal US women are diagnosed with lung cancer annually. Extensive research of women receiving treatment for breast cancer has found that between 40% and 80% experience premature menopause. However, early menopause rates after lung cancer treatments are understudied.

Unique to the premenopausal survivor population is the concern that systemic chemotherapy may cause acute amenorrhea and menopause, leading not only to hot flashes, vaginal dryness, and bone loss but also the possibility of loss of fertility. Premenopausal women with lung cancer may want children and should consult their healthcare providers about options for embryo and oocyte cryopreservation, the gold standard for fertility preservation.

The study included 182 premenopausal women (average age at diagnosis, 43 years). The Mayo Clinic Epidemiology and Genetics of Lung Cancer Research Program surveyed women between 1999 and 2016 at diagnosis and annually thereafter about their menstrual status. Types of lung cancer treatments were recorded, and frequencies of self-reported menopause at each survey were calculated.

The results of the study appear in the article "Amenorrhea after lung cancer treatment." Although the study is small, 64% of the 85 women who received chemotherapy self-reported that they were menopausal within a year of diagnosis. Only 15% of the 94 patients who did not receive systemic therapy within a year of diagnosis self-reported menopause. Three patients received targeted therapy alone, two of whom remained premenopausal at the final survey, completed a median of 3 years after diagnosis. The results suggest that chemotherapy for patients with lung cancer increases survivors' risk of early loss of menses.

"Although more definitive research is needed, premenopausal women who need chemotherapy for lung cancer appear to have a similar risk of amenorrhea, early menopause, and loss of fertility as premenopausal women receiving chemotherapy for breast cancer and lymphoma," according to Dr. JoAnn Pinkerton, executive director of NAMS. "I agree that premenopausal patients with lung cancer need to be educated about the risk for chemotherapy-related amenorrhea, menopause issues (hot flashes, vaginal dryness, and bone loss), and the potential loss of fertility before chemotherapy is initiated."

Credit: 
The Menopause Society

Soy natural: Genetic resistance against aphids

image: The difference between resistant (left) and susceptible (right) soy plants.

Image: 
Anthony Hanson

A tiny pest can cause huge losses to soybean farmers.

Several top soybean-producing states in the U.S. are in the Upper Midwest. In these states, an insect--the soybean aphid--is a damaging pest. Each year, soybean aphids cause billions of dollars in crop losses.

In a recent study, researchers have taken a big step toward identifying new soybean genes associated with aphid resistance.

"Discovering new resistance genes will help develop soybean varieties with more robust aphid resistance," says lead author Aaron Lorenz. "There are very few commercially-available varieties of soybean with aphid resistance genes. Newly-identified genes can serve as backup sources of resistance if the ones currently used are no longer useful." Lorenz is an agronomist and plant geneticist at the University of Minnesota.

Currently, insecticides are used to control aphid populations to reduce damage. But aphid populations that are resistant to widely-used insecticides have been found. Environmental issues with insecticide use can also be a concern. These issues may limit insecticide use in the future.

Using soybean varieties that are naturally resistant to aphids is an alternative to using insecticides. "But the soybean aphid is a genetically diverse species. It is capable of quickly overcoming plant resistance," says Lorenz. "So we need to identify new sources of soybean aphid resistance."

To find previously unknown aphid resistance genes, researchers used already-published research. Thousands of varieties of soybean have been tested for aphid resistance. Genetic information also exists for many of these soybean varieties.

Lorenz and colleagues combined data on existing aphid resistance and genetics. "Our goal was to find which parts of the soybean genome contain genes related to aphid resistance," says Lorenz.

To do so, the researchers scanned the soybean genome for small genetic landmarks, called SNPs (pronounced "snips"). Then they tested if any of these landmarks were present more often in soybean varieties that are resistant to aphids. If so, "we can infer that a gene associated with aphid resistance may be near that landmark," says Lorenz.

Researchers have to be careful, though. "There are many reasons--beyond physical proximity--that could cause these associations," says Lorenz. "We build statistical models to account for the other reasons."
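As a rough illustration of the landmark-testing idea, the sketch below runs a single-SNP association test on invented counts. It is not the study's method: the real analysis used statistical models that also control for confounds such as population structure, and all the numbers here are hypothetical.

```python
# Illustrative single-SNP association test (hypothetical counts).
# The study's statistical models also correct for population structure
# and relatedness, which this toy example omits.
from scipy.stats import fisher_exact

# Rows = allele (A / B); columns = (resistant, susceptible) varieties
table = [[30, 10],
         [15, 45]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
# A small p-value flags this landmark as possibly lying near a resistance
# gene; a real scan repeats the test across thousands of SNPs and corrects
# for multiple testing.
```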

Lorenz and colleagues found several genetic landmarks that were more common in aphid-resistant soybean varieties. Some of these landmarks were in genetic regions near aphid resistance genes. But many others were in genetic regions not previously associated with aphid resistance.

That's exciting, says Lorenz. "These results can help guide researchers toward discovering new aphid resistance genes. That could be key for developing new aphid-resistant varieties of soybean."

Also encouraging is that the researchers found genetic landmarks associated with aphid resistance in several different soybean varieties. "That means a broad range of genetic backgrounds can be used for breeding purposes," says Lorenz.

There is still work to do, though. Ultimately, multiple resistance genes can be bred into single soybean varieties. These varieties will then have highly robust resistance to aphids.

"I think resistance to aphids will become increasingly important to maintain soybean production," says Lorenz. "Soybean farmers should know about them. Demanding soybean aphid resistance in the varieties they use will help their development and availability."

Read more about this research in The Plant Genome. Funding for this research was provided by Minnesota Department of Agriculture, Minnesota Soybean Research and Promotion Council, and Minnesota Invasive Terrestrial Plants and Pests Center.

Credit: 
American Society of Agronomy

Attacking aftershocks

In the weeks and months following a major earthquake, the surrounding area is often wracked by powerful aftershocks that can leave an already damaged community reeling and significantly hamper recovery efforts.

While scientists have developed empirical laws, like Båth's Law and Omori's Law, to describe the likely size and timing of those aftershocks, methods for forecasting their location have been harder to grasp.

But sparked by a suggestion from researchers at Google, Brendan Meade, a Professor of Earth and Planetary Sciences, and Phoebe DeVries, a post-doctoral fellow working in his lab, are using artificial intelligence technology to try to get a handle on the problem.

Using deep learning algorithms, the pair analyzed a database of earthquakes from around the world to try to predict where aftershocks might occur, and developed a system that, while still imprecise, was able to forecast aftershocks significantly better than random assignment. The work is described in an August 30 paper published in Nature.

"There are three things you want to know about earthquakes - you want to know when they are going to occur, how big they're going to be and where they're going to be," Meade said. "Prior to this work we had empirical laws for when they would occur and how big they were going to be, and now we're working the third leg, where they might occur."

"I'm very excited for the potential for machine learning going forward with these kind of problems - it's a very important problem to go after," DeVries said. "Aftershock forecasting in particular is a challenge that's well-suited to machine learning because there are so many physical phenomena that could influence aftershock behavior and machine learning is extremely good at teasing out those relationships. I think we've really just scratched the surface of what could be done with aftershock forecasting...and that's really exciting."

The notion of using artificially intelligent neural networks to try to predict aftershocks first came up several years ago, during the first of Meade's two sabbaticals at Google in Cambridge.

While working on a related problem with a team of researchers, Meade said, a colleague suggested that the then-emerging "deep learning" algorithms might make the problem more tractable. Meade would later partner with DeVries, who had been using neural networks to transform high-performance computing code into algorithms that could run on a laptop, to focus on aftershocks.

"The goal is to complete the picture and we hope we've contributed to that," Meade said.

To do it, Meade and DeVries began by accessing a database of observations made following more than 199 major earthquakes.

"After earthquakes of magnitude 5 or larger, people spend a great deal of time mapping which part of the fault slipped and how much it moved," Meade said. "Many studies might use observations from one or two earthquakes, but we used the whole database...and we combined it with a physics-based model of how the Earth will be stressed and strained after the earthquake, with the idea being that the stresses and strains caused by the main shock may be what trigger the aftershocks."

Armed with that information, they then divided the areas around the earthquakes into grids of 5-kilometer-square cells. In each cell, the system checked whether an aftershock had occurred, and asked the neural network to look for correlations between locations where aftershocks occurred and the stresses generated by the main earthquake.
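The sketch below illustrates that setup in miniature: one row per grid cell, stress-derived features as inputs, and a binary label for whether an aftershock occurred there. Everything in it is synthetic and the network is far smaller than the one in the paper; it is meant only to show the shape of the learning problem.

```python
# Synthetic miniature of the forecasting setup: one row per 5 km x 5 km
# grid cell, stress-derived features in, binary aftershock label out.
# The published model was a deeper network trained on real stress fields.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_cells = 5000
X = rng.normal(size=(n_cells, 6))  # 6 hypothetical stress-change components
# Synthetic "truth": aftershock odds rise with the overall stress change
weights = np.array([0.5, 0.5, 0.5, 1.0, 1.0, 1.0])
logits = np.abs(X) @ weights - 2.0
y = rng.random(n_cells) < 1.0 / (1.0 + np.exp(-logits))  # True = aftershock

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"held-out accuracy: {net.score(X_test, y_test):.2f}")
```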

"The question is what combination of factors might be predictive," Meade said. "There are many theories, but one thing this paper does is clearly upend the most dominant theory - it shows it has negligible predictive power, and it instead comes up with one that has significantly better predictive power."

What the system pointed to, Meade said, is a quantity known as the second invariant of the deviatoric stress tensor - better known simply as J2.

"This is a quantity that occurs in metallurgy and other theories, but has never been popular in earthquake science," Meade said. "But what that means is the neural network didn't come up with something crazy, it came up with something that was highly interpretable. It was able to identify what physics we should be looking at, which is pretty cool."

That interpretability, DeVries said, is critical because artificial intelligence systems have long been viewed by many scientists as black boxes - capable of producing an answer based on some data, but without revealing how they arrived at it.

"This was one of the most important steps in our process," she said. "When we first trained the neural network, we noticed it did pretty well at predicting the locations of aftershocks, but we thought it would be important if we could interpret what factors it was finding were important or useful for that forecast."

Taking on such a challenge with highly complex real-world data, however, would be a daunting task, so the pair instead asked the system to create forecasts for synthetic, highly idealized earthquakes and then examined the predictions.

"We looked at the output of the neural network and then we looked at what we would expect if different quantities controlled aftershock forecasting," she said. "By comparing them spatially, we were able to show that J2 seems to be important in forecasting."

And because the network was trained using earthquakes and aftershocks from around the globe, Meade said, the resulting system worked for many different types of faults.

"Faults in different parts of the world have different geometry," Meade said. "In California, most are slip-faults, but in other places, like Japan, they have very shallow subduction zones. But what's cool about this system is you can train it on one, and it will predict on the other, so it's really generalizable."

"We're still a long way from actually being able to forecast them," she said. "We're a very long way from doing it in any real-time sense, but I think machine learning has huge potential here."

Going forward, Meade said, he is working on efforts to predict the magnitude of earthquakes themselves using artificial intelligence technology with the goal of one day helping to prevent the devastating impacts of the disasters.

"Orthodox seismologists are largely pathologists," Meade said. "They study what happens after the catastrophic event. I don't want to do that - I want to be an epidemiologist. I want to understand the triggers, causing and transfers that lead to these events."

Ultimately, Meade said, the study serves to highlight the potential for deep learning algorithms to answer questions that - until recently - scientists barely knew how to ask.

"I think there's a quiet revolution in thinking about earthquake prediction," he said. "It's not an idea that's totally out there anymore. And while this result is interesting, I think this is part of a revolution in general about rebuilding all of science in the artificial intelligence era.

"Problems that are dauntingly hard are extremely accessible these days," he continued. "That's not just due to computing power - the scientific community is going to benefit tremendously from this because...AI sounds extremely daunting, but it's actually not. It's an extraordinarily democratizing type of computing, and I think a lot of people are beginning to get that."

Credit: 
Harvard University

AI can deliver specialty-level diagnosis in primary care setting

image: Clinic staff member at the Diabetes and Endocrinology Center at University of Iowa Health Care-Iowa River Landing in Coralville, Iowa, using IDx-DR - the first medical device that uses AI for the autonomous detection of diabetic retinopathy.

Image: 
University of Iowa Health Care

A system designed by a University of Iowa ophthalmologist that uses artificial intelligence (AI) to detect diabetic retinopathy without a person interpreting the results earned Food and Drug Administration (FDA) authorization in April, following a clinical trial in primary care offices. Results of that study were published Aug. 28 online in npj Digital Medicine, offering the first look at the data that led to FDA clearance for IDx-DR, the first medical device that uses AI for the autonomous detection of diabetic retinopathy.

The clinical trial, which also was the first study to prospectively assess the safety of an autonomous AI system in patient care, compared the performance of IDx-DR to the gold standard diagnostic for diabetic retinopathy, which is the leading cause of vision loss in adults and one of the most severe complications for the 30.3 million Americans living with diabetes.

IDx-DR exceeded all pre-specified superiority endpoints in sensitivity, the ability to correctly identify a patient with disease; specificity, the ability to correctly classify a person as disease-free; and imageability, or the capability to produce quality images of the retina and determine the severity of the disease.

"The AI system's primary role is to identify those people with diabetes who are likely to have diabetic retinopathy that requires further evaluation by an eye-care provider. The study results demonstrate the safety of autonomous AI systems to bring specialty-level diagnostics to a primary care setting, with the potential to increase access and lower cost," says Michael Abràmoff, MD, PhD, the Robert C. Watzke Professor of Ophthalmology and Visual Sciences with UI Health Care and principal investigator on the study. He is founder and president of IDx, the company that created the IDx-DR system and funded the study.

Early detection may prevent vision loss

More than 24,000 people in the U.S. lose their sight to diabetic retinopathy each year. Early detection and treatment can reduce the risk of blindness by 95 percent, but less than 50 percent of patients with diabetes schedule regular exams with an eye-care specialist.

In the study, 900 adult patients with diabetes--but no history of diabetic retinopathy--were examined at 10 primary care sites across the U.S. Retinal images of the patients were obtained using a robotic camera, with an AI assisting the operator in getting good-quality images. Once the four images were complete, the diagnostic AI made a clinical diagnosis in 20 seconds. The diagnostic AI detects disease just as expert clinicians do, by having detectors for the lesions characteristic of diabetic retinopathy, including microaneurysms, hemorrhages, and lipoprotein exudates.

Camera operators in the study were existing staff of the primary care clinics, but not physicians or trained photographers.

"This was much more than just a study testing an algorithm on an image. We wanted to test it in the places where it will be used, by the people who will use it, and we compared it to the highest standard in the world," says Abràmoff, who also holds faculty appointments in the UI College of Engineering.

AI measured against gold standard

Study participants also had retinal images taken at each of the primary care clinics using specialized widefield and 3D imaging equipment, without AI, operated by experienced retinal photographers certified by the Wisconsin Fundus Photograph Reading Center (FPRC)--the gold standard in grading the severity of diabetic retinopathy.

Complete diagnostic data from both the AI system and the FPRC readers were available for 819 of the original 900 study participants. FPRC readers identified 198 participants with more than mild diabetic retinopathy who should be further examined by a specialist; the AI correctly identified 173 of those 198 participants, for a sensitivity of 87 percent. Among the 621 disease-free participants identified by FPRC readers, the AI correctly identified 556, for a specificity of 90 percent. The AI had a 96 percent imageability rate: of the 852 participants who had an FPRC diagnosis, 819 had an AI system diagnostic output.
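The reported percentages follow directly from the counts in the preceding paragraph, as this quick check shows:

```python
# Quick check: the reported figures follow from the counts given above.
true_positives = 173   # AI flagged disease, FPRC readers confirmed it
disease_total  = 198   # participants with disease per FPRC readers
true_negatives = 556   # AI cleared, FPRC readers confirmed disease-free
healthy_total  = 621   # disease-free participants per FPRC readers
gradable       = 819   # participants with an AI diagnostic output
diagnosed      = 852   # participants with an FPRC diagnosis

print(f"sensitivity:  {true_positives / disease_total:.1%}")   # ~87.4%
print(f"specificity:  {true_negatives / healthy_total:.1%}")   # ~89.5%
print(f"imageability: {gradable / diagnosed:.1%}")             # ~96.1%
```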

In June, following FDA clearance, providers at the Diabetes and Endocrinology Center at UI Health Care-Iowa River Landing in Coralville, Iowa, were the first in the nation to begin using IDx-DR to screen patients.

"We are hoping to do this also for early detection of diseases like glaucoma and macular degeneration. We are working on those algorithms already. The goal is to get these specialty diagnostics into primary care and retail, which is where the patients are," Abràmoff says.

IDx is working with the American Medical Association to ensure that there is clear coding guidance for billing of IDx-DR. Providers, physicians, and suppliers should contact their third-party payers for specific and current information on their coding, coverage, and payment policies. IDx is a licensed distributor of the robotic camera used in the study.

Credit: 
University of Iowa Health Care

Bioengineers unveil surprising sensory and self-healing abilities of seashore creatures

image: A limpet shell that has failed at the apex due to impact.

Image: 
Professor David Taylor and Maeve O'Neill, Trinity College Dublin

New research from bioengineers paints a surprisingly complex picture of limpets - the little seashore creatures that are ubiquitous on rocky patches of beaches in many parts of the world. The bioengineers have discovered that limpets are able to detect minor damage to their shells with surprising accuracy before remodelling them to make them stronger. In many ways, the way they heal is similar to the way broken bones mend in mammals.

The bioengineers discovered that the apex of a limpet's shell acts like the "crumple zone" of a car, by taking the brunt of any major damage to protect what's inside. What most surprised them, however, was that limpets that experience shell damage seem to know or sense this - and, unlike typical car owners - actively carry out repairs themselves, by depositing new biological material to repair structural weaknesses and restore former mechanical strength.

The research, led by a team from Trinity College Dublin, has just been published by the Journal of the Royal Society Interface.

To assess how limpets reacted to damage, the researchers simulated some of the stresses they experience in the wild from rough seas and moving rocks/debris (by dropping weights); general abrasion and shell weathering (by using a metal file); and from attacks from predators (by using a nail to create a small hole in the apex). The limpets reacted to these stresses by repairing their shells from within, and while after 60 days the shells were never as thick as before, they had regained their former protective strength.

Co-first author Maeve O'Neill is a PhD candidate in Trinity's School of Engineering. She said: "Our study shows that limpets are able to repair damage to their shell, both visually and functionally, and that they are also able to restore mechanical strength in as little as a month. The way they do this is essentially similar to how bones heal in mammals, as the process is at least partially carried out by the deposition of new material."

Professor of Materials Engineering at Trinity, David Taylor, added: "We've studied healing before, in human bones and also in the exoskeletons of insects, but we were amazed to discover that these simple marine organisms are capable of reacting in a very subtle and clever way."

Credit: 
Trinity College Dublin

Tackling the great paradox of biodiversity with game theory

For decades, scientists struggled to explain how scarce resources can sustain the multitude of species that exists on Earth. Early theoretical attempts to understand biodiversity led to a nonsensical situation: theory predicted that the number of species had to be equal to the number of resource types available in the environment, a conclusion that clearly fails the test of reality. The contrast between the theoretical prediction and the experimental observation is so glaring that it has been called paradoxical.

This paradox is often known as the "plankton paradox", because it is very crisply illustrated by the properties of plankton ecosystems. In open sea water, there are fewer than 10 different growth-sustaining resources such as light, nitrogen, carbon, phosphorus, iron, etc. Yet even there, hundreds of different species of plankton are able to stably coexist without driving each other extinct.

Despite recent progress, this enigma of biodiversity has not yet been solved. Now, scientists at the Champalimaud Centre for the Unknown, in Lisbon, Portugal, have developed a new mathematical model that may be the answer. Their findings were published in the scientific journal Proceedings of the Royal Society B.

According to Andres Laan, first author of the study, which was led by principal investigator Gonzalo de Polavieja, classical resource-competition models predict that each resource will sustain the one species that is best at consuming it, consequently driving all competing species to extinction. But this one-to-one correspondence is not what we observe in nature. On the contrary, the number of species living on Earth is orders of magnitude larger.

The scientists set out to provide a new solution to the plankton paradox. The novelty of this study is that, to explain the biodiversity paradox, they used game theory models of aggression as their source of inspiration. "We started from a theoretical scenario where we had just two 'species': hawks and doves", Laan explains. "Hawks are blood-thirsty and always ready to fight. Doves are pacific and tend to split resources or run away from fights. According to game theory, in the end, neither purely hawks nor purely doves are dominant, but instead the two 'species' coexist."

They then wondered what would happen if various species played this hawk-dove game over many different kinds of resources simultaneously. For each resource, a species had an independent choice between being a hawk or being a dove. "This rich set of choices generated combinatorial diversity leading to a large number of potential species and just as was the case for the simple hawk-dove game, the species ended up coexisting, rather than driving each other extinct", says Laan.

According to their model, biodiversity actually increases exponentially with the number of resources. "On one resource, two species can coexist; on two resources, four species; on four resources, 16 species; and on 10 resources, we get to more than 1,000 species. Exponential growth is very fast, so it provides a nice way to maintain biodiversity", he explains.
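The arithmetic behind the quote is combinatorial: an independent hawk-or-dove choice on each resource doubles the number of possible strategies, so n resources support up to 2^n coexisting types, as this short loop confirms:

```python
# The quote's pattern is combinatorial: an independent hawk-or-dove
# choice per resource gives 2**n possible strategies on n resources.
for n_resources in (1, 2, 4, 10):
    print(f"{n_resources:>2} resources -> up to {2 ** n_resources} species")
# 1 -> 2, 2 -> 4, 4 -> 16, 10 -> 1024 (the "more than 1,000" in the quote)
```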

Their theory, the authors say, also has a number of experimentally confirmed predictions. The model precisely captures how abundant different species are in real ecosystems. In these ecosystems, a few species are most abundant, and they account for a disproportionately large fraction of the total biomass in the system. "You can think of this as being similar to wealth inequality in human societies, where the rich hold a disproportionately large share of total wealth", says Laan.

The authors believe that resolving this paradox might provide the key not only to understanding biodiversity, but also understanding extinctions and predicting possible future directions of animal evolution. "These ideas are still largely theoretical, so we need to test how well the competition mechanisms proposed in the paper describe what happens when real species compete, but early results look quite promising," Laan concludes.

Credit: 
Champalimaud Centre for the Unknown

Scientists in Fiji examine how forest conservation helps coral reefs

image: Researchers from the University of Hawai'i at Mānoa (UH Mānoa), WCS (Wildlife Conservation Society), and other groups are discovering how forest conservation in Fiji can minimize the impact of human activities on coral reefs and their fish populations.

Image: 
Stacy Jupiter/WCS

Researchers from the University of Hawai'i at Mānoa (UH Mānoa), WCS (Wildlife Conservation Society), and other groups are discovering how forest conservation in Fiji can minimize the impact of human activities on coral reefs and their fish populations.

Specifically, authors of a newly published study in the journal Scientific Reports have used innovative modeling tools to identify specific locations on the land where conservation actions would yield the highest benefits for downstream reefs in terms of mitigating harm to coral communities and associated reef fish populations.

The authors of the study titled "Scenario Planning with Linked Land-Sea Models Inform Where Forest Conservation Actions Will Promote Coral Reef Resilience" are: Jade M. S. Delevaux, Stacy D. Jupiter, Kostantinos A. Stamoulis, Leah L. Bremer, Amelia S. Wenger, Rachel Dacks, Peter Garrod, Kim A. Falinski, and Tamara Ticktin.

The researchers of the study focused on Fiji's Kubulau District, where indigenous landowners are already taking action to manage their resources through a ridge-to-reef management plan.

Human activities on land often have cascading effects on marine ecosystems, and such human-related impacts threaten more than 25 percent of the total global reef area. Expansion of commercial agriculture, logging, mining, and coastal development can harm coral reefs and their associated fisheries through increases in sediment and nutrient runoff. Consequent reef degradation directly affects food security, human wellbeing, and cultural practices in tropical island communities around the world.

To determine where management and conservation efforts would be most impactful, the researchers built a fine-scale, linked land and sea model that integrates existing land-use with coral reef condition and fish biomass. The team then simulated various future land-use and climate change scenarios to pinpoint areas in key watersheds where conservation would provide the most benefit to downstream coral reef systems. In every simulated scenario, coral reef impacts were minimized when native forest was protected or restored.
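The scenario-comparison logic can be caricatured in a few lines of code. The functional forms and numbers below are invented for illustration (the actual model is a calibrated, spatially explicit land-sea simulation); the point is only the pipeline: land use sets runoff, runoff sets downstream reef condition, and scenarios are ranked by the result.

```python
# Caricature of the scenario-comparison pipeline; functional forms and
# numbers are invented. The actual model is a calibrated, spatially
# explicit land-sea simulation.
SCENARIOS = {
    "protect_native_forest": 0.80,  # fraction of watershed forested
    "status_quo":            0.55,
    "expand_agriculture":    0.30,
}

def sediment_runoff(forest_fraction: float) -> float:
    """Relative sediment load; falls as forest cover rises (toy form)."""
    return 1.0 - 0.9 * forest_fraction

def reef_condition(runoff: float) -> float:
    """Relative coral/fish condition; degrades with sediment (toy form)."""
    return max(0.0, 1.0 - 0.7 * runoff)

for name, forest in SCENARIOS.items():
    index = reef_condition(sediment_runoff(forest))
    print(f"{name:>22}: reef condition index {index:.2f}")
```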

"The results of this study can be used by the village chiefs and the resource management committee in Kubulau to provide a geographic focus to their management actions," said Dr. Sangeeta Mangubhai, Director of the WCS Fiji Country Program.

The methods also have applications far beyond Kubulau, particularly as many indigenous island communities are mobilizing to revitalize customary ridge to reef management systems and governments are becoming more interested in applying an integrated land-sea planning approach.

Dr. Jade Delevaux of the University of Hawai'i and lead author of the study said: "This novel tool relies on two freely available software packages and can be used in open access geographic information systems (GIS). As more and more remote sensing and bathymetry data become freely available to serve as data inputs, the model can serve even very data-poor regions around the world to allow for better management of linked land and sea areas."

The model thus provides a platform for evidence-based decision making for ridge to reef management and lends confidence that directed terrestrial conservation actions can bolster reef resilience by minimizing damage from land-based runoff.

Dr. Stacy Jupiter, WCS Melanesia Regional Program Director, added: "The results provide hope because they demonstrate that resilience of coral reefs to global change can be promoted through local actions, thereby empowering local people to become better stewards over their resources."

Credit: 
Wildlife Conservation Society

Writing a 'thank you' note is more powerful than we realize, study shows

video: Research from The University of Texas at Austin explores the value of writing thank you notes.

Image: 
The University of Texas at Austin

AUSTIN, Texas -- New research from The University of Texas at Austin shows that writing letters of gratitude, like Jimmy Fallon's "Thank You Notes," is a pro-social experience people should commit to more often. The gesture improves well-being not only for letter writers but for recipients as well.

Published in Psychological Science, the research, conducted by Amit Kumar, assistant professor of marketing in the McCombs School of Business at UT, and Nicholas Epley of The University of Chicago, asked participants in three different experiments to write a letter of gratitude to someone who had done something nice for them and then to anticipate the recipient's reaction. In each experiment, letter writers overestimated how awkward recipients would feel about the gesture and underestimated how surprised and positive recipients would feel.

"We looked at what's correlating with people's likelihood of expressing gratitude -- what drives those choices -- and what we found is that predictions or expectations of that awkwardness, that anticipation of how a recipient would feel -- those are the things that matter when people are deciding whether to express gratitude or not," said Kumar.

Kumar says anxiety about what to say or fear of their gesture being misinterpreted causes many people to shy away from expressing genuine gratitude.

"I don't think it's a societal thing," said Kumar. "It's more fundamental to how the human mind works and a well-established symmetry about how we evaluate ourselves and other people. When we're thinking about ourselves, we tend to think about how competent we are, and whether we are going to be articulate in how we're expressing gratitude."

Kumar says what is significant about the research and its results is that thank-you notes and letters of gratitude should be written and sent more often.

"What we saw is that it only takes a couple of minutes to compose letters like these, thoughtful ones and sincere ones," said Kumar. "It comes at little cost, but the benefits are larger than people expect."

Credit: 
University of Texas at Austin

The more pesticides bees eat, the more they like them

Bumblebees acquire a taste for pesticide-laced food as they become more exposed to it, a behaviour showing possible symptoms of addiction.

This study of bumblebee behaviour indicates that the risk of pesticide-contaminated food entering bee colonies may be higher than previously thought, which can have impacts on colony reproductive success.

In research published today in Proceedings of the Royal Society B, a team from Imperial College London and Queen Mary University of London (QMUL) have shown that bumblebee colonies increasingly feed on pesticide-laced food (sugar solution) over time.

The researchers tested the controversial class of pesticides known as neonicotinoids, currently one of the most widely used classes of pesticides worldwide despite a near-total ban in the EU. The impact of neonicotinoids on bees is hotly debated, and the ban has drawn mixed reactions.

Lead researcher Dr Richard Gill, from the Department of Life Sciences at Imperial, said: "Given a choice, naïve bees appear to avoid neonicotinoid-treated food. However, as individual bees increasingly experience the treated food they develop a preference for it.

"Interestingly, neonicotinoids target nerve receptors in insects that are similar to receptors targeted by nicotine in mammals. Our findings that bumblebees acquire a taste for neonicotinoids ticks certain symptoms of addictive behaviour, which is intriguing given the addictive properties of nicotine on humans, although more research is needed to determine this in bees."

The team tracked ten bumblebee colonies over ten days, giving each colony access to its own foraging arena in which bees could choose feeders that did or did not contain a neonicotinoid.

They found that while the bees preferred the pesticide-free food to begin with, over time they fed on the pesticide-laced food more and visited the pesticide-free food less. They continued to prefer the pesticide-laced food even when the positions of the feeders were changed, suggesting they can detect the pesticide inside the food.
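To make the design concrete, here is a sketch of how such a shift in preference might be quantified from daily visit counts. The numbers are fabricated, not the study's data, and the trend test is just one plausible analysis choice.

```python
# Fabricated example (not the study's data): quantify a drift in feeder
# preference as the daily share of visits to the pesticide-laced feeder,
# with a simple linear trend test.
import numpy as np
from scipy.stats import linregress

days = np.arange(1, 11)
treated   = np.array([40, 44, 47, 52, 55, 58, 63, 66, 70, 74])  # visits
untreated = np.array([60, 58, 55, 50, 48, 45, 40, 38, 35, 30])  # visits

share_treated = treated / (treated + untreated)
trend = linregress(days, share_treated)
print(f"day 1 share {share_treated[0]:.2f} -> day 10 share {share_treated[-1]:.2f}")
print(f"slope {trend.slope:+.3f} per day, p = {trend.pvalue:.1e}")
# A positive, significant slope indicates a growing preference for the
# treated feeder, the pattern the researchers observed.
```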

Lead author Dr Andres Arce, from the Department of Life Sciences at Imperial, said: "Many studies on neonicotinoids feed bees exclusively with pesticide-laden food, but in reality, wild bees have a choice of where to feed. We wanted to know if the bees could detect the pesticides and eventually learn to avoid them by feeding on the uncontaminated food we were offering.

"Whilst at first it appeared that the bees did avoid the food containing the pesticide, we found that over time the bumblebees increased their visits to pesticide-laden food. We now need to conduct further studies to try and understand the mechanism behind why they acquire this preference."

Dr Gill added: "This research expands on important previous work by groups at Newcastle and Dublin Universities. Here, we added a time dimension and allowed the bees to carry out more normal foraging behaviour, to understand the dynamics of pesticide preference. Together these studies allow us to properly assess the risks of exposure and not just the hazard posed.

"Whilst neonicotinoids are controversial, if the effects of replacements on non-target insects are not understood, then I believe it is sensible that we take advantage of current knowledge and further studies to provide guidance for using neonicotinoids more responsibly, rather than necessarily an outright ban."

Credit: 
Imperial College London