Culture

Can science writing be automated?

CAMBRIDGE, Mass. -- The work of a science writer, this one included, involves reading journal papers filled with specialized technical terminology and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they're about. But the approach the team developed could also find applications in a variety of other areas of language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljacic, a professor of physics at MIT; Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, HBKU; and Mico Tatalovic, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

"We have been doing various kinds of work in AI for a few years now," Soljacic says. "We use AI to help with our research, basically to do physics better. And as we got to be more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics -- a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm."

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. "We can't say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm."

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and "learns" what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what's needed for real natural-language processing, the researchers say.

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space -- a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.
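The rotation idea can be sketched in a few lines of code. This is a toy illustration, not the authors' RUM implementation: the dimensions, the fixed angle, and the update rule are invented for demonstration. What it does show is the key property the article describes, that updating a memory vector by rotating it preserves the vector's length, unlike a generic matrix multiplication, which can shrink or blow it up.

```python
import numpy as np

def rotate_toward(memory, word_vec, theta=0.1):
    # Rotate `memory` by a small angle theta in the plane spanned by
    # `memory` and `word_vec`. A rotation preserves the vector's length,
    # unlike the matrix multiplications in conventional neural networks.
    u = memory / np.linalg.norm(memory)
    w = word_vec - np.dot(word_vec, u) * u  # part of word_vec orthogonal to memory
    if np.linalg.norm(w) < 1e-12:
        return memory  # the word already points along memory: nothing to rotate
    w = w / np.linalg.norm(w)
    return np.linalg.norm(memory) * (np.cos(theta) * u + np.sin(theta) * w)

rng = np.random.default_rng(0)
m = rng.normal(size=8)   # the memory vector, here in a modest 8 dimensions
n0 = np.linalg.norm(m)   # its starting length
for _ in range(5):       # "reading" five words swings the vector around
    m = rotate_toward(m, rng.normal(size=8))
# However many rotations are applied, the vector's length is unchanged,
# which is one reason a rotational memory does not "forget" by shrinking.
```

In a real system the rotation applied at each step would be learned from data rather than fixed, and the space would have hundreds or thousands of dimensions, but the length-preserving update is the same.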

"RUM helps neural networks to do two things very well," Nakov says. "It helps them to remember better, and it enables them to recall information more accurately."

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, "we realized one of the places where we thought this approach could be useful would be natural language processing," says Soljacic, recalling a conversation with Tatalovic, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalovic was at the time exploring AI in science journalism as his Knight fellowship project.

"And so we tried a few natural language processing tasks on it," Soljacic says. "One that we tried was summarizing articles, and that seems to be working quite well."

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: "Baylisascariasis," kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed "baylisascariasis," kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed "baylisascariasis," kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can "read" through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings -- the paper that this news story is attempting to summarize.

Here is the new neural network's summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Credit: 
Massachusetts Institute of Technology

How do we make moral decisions?

When it comes to making moral decisions, we often think of the golden rule: do unto others as you would have them do unto you. Yet, why we make such decisions has been widely debated. Are we motivated by feelings of guilt, where we don't want to feel bad for letting the other person down? Or by fairness, where we want to avoid unequal outcomes? Some people may rely on principles of both guilt and fairness and may switch their moral rule depending on the circumstances, according to a Radboud University - Dartmouth College study on moral decision-making and cooperation.

The findings challenge prior research in economics, psychology and neuroscience, which is often based on the premise that people are motivated by one moral principle, which remains constant over time. The study was published recently in Nature Communications.

"Our study demonstrates that with moral behavior, people may not in fact always stick to the golden rule. While most people tend to exhibit some concern for others, others may demonstrate what we have called 'moral opportunism,' where they still want to look moral but want to maximize their own benefit," said lead author Jeroen van Baar, a postdoctoral research associate in the department of cognitive, linguistic and psychological sciences at Brown University, who started this research when he was a scholar at Dartmouth visiting from the Donders Institute for Brain, Cognition and Behavior at Radboud University.

"In everyday life, we may not notice that our morals are context-dependent since our contexts tend to stay the same daily. However, under new circumstances, we may find that the moral rules we thought we'd always follow are actually quite malleable," explained co-author Luke J. Chang, an assistant professor of psychological and brain sciences and director of the Computational Social Affective Neuroscience Laboratory (Cosan Lab) at Dartmouth. "This has tremendous ramifications if one considers how our moral behavior could change under new contexts, such as during war," he added.

To examine moral decision-making within the context of reciprocity, the researchers designed a modified trust game called the Hidden Multiplier Trust Game, which allowed them to classify decisions in reciprocating trust as a function of an individual's moral strategy. With this method, the team could determine which type of moral strategy a study participant was using: inequity aversion (where people reciprocate because they want to seek fairness in outcomes), guilt aversion (where people reciprocate because they want to avoid feeling guilty), greed, or moral opportunism (a new strategy that the team identified, where people switch between inequity aversion and guilt aversion depending on what will serve their interests best). The researchers also developed a computational, moral strategy model that could be used to explain how people behave in the game and examined the brain activity patterns associated with the moral strategies.

The findings reveal for the first time that unique patterns of brain activity underlie the inequity aversion and guilt aversion strategies, even when the strategies yield the same behavior. For the participants that were morally opportunistic, the researchers observed that their brain patterns switched between the two moral strategies across different contexts. "Our results demonstrate that people may use different moral principles to make their decisions, and that some people are much more flexible and will apply different principles depending on the situation," explained Chang. "This may explain why people that we like and respect occasionally do things that we find morally objectionable."

Credit: 
Dartmouth College

Firms are better off revealing their environmental practices, new research shows

image: This is Michel Magnan, professor of accountancy at the John Molson School of Business.

Image: 
Concordia University

Is honesty the best policy when it comes to being green?

It just might be, according to a new paper by Michel Magnan, a professor of accountancy at the John Molson School of Business.

In their article for Sustainability Accounting, Management and Policy Journal, Magnan and co-author Hani Tadros of Elon University in North Carolina looked at 78 US firms in environmentally sensitive industries from 1997 to 2010. They wanted to deepen their understanding of the driving forces behind the firms' disclosure of their environmental practices and management.

"There is tension out there," says Magnan, the Stephen A. Jarislowsky Chair in Corporate Governance. "Many people are skeptical and will adopt a cynical perspective regarding what corporations choose to disclose."

With public trust in business in general decline, it may be natural to assume that most firms pad their numbers or obscure their environmental behaviour. But Magnan says he has found that is not the case.

Many are keenly aware of growing environmental concerns among members of the public, including investors and consumers of their products. In response, some are quite literally cleaning up their act.

What is said vs. what is done

The researchers separated the firms they studied into two groups based on the data they collected, including public information and the firms' annual disclosure reports and regulatory filings.

The companies whose environmental performance scored positively when compared to existing government regulations (meaning they respected guidelines on pollution, emissions and so on) were designated "high performers." Those that did poorly were designated "low performers."

"High- and low-performing firms will adopt different patterns when it comes to disclosure," explains Magnan. "High performers will provide more information because they are doing well and want to convey that message to their various stakeholders. Poor performers, meanwhile, will try to manage impressions in some way."

The researchers paid close attention to the usefulness of the information firms disclosed. They preferred data that was objectively verifiable and quantitative -- they called that "hard" information. "Soft" information generally consisted of vague statements, unattached to specifics.

They found that high-performing corporations were more likely to disclose hard information because they could afford to be forthcoming. They were using their disclosure as a way of building trust and earning public goodwill, which pays dividends down the line.

"If more disclosure raises your market value, it makes sense," Magnan says.

Look for good, clean facts

With stakeholders paying more attention to environmental issues, Magnan says there is added pressure on firms to come clean on their environmental performance. He sees corporate culture heading in that direction already.

"Some firms will be more forthcoming because that is their governance model, and they feel that it is better to be forthcoming early on," he says. "The costs will be less, and it shows they are operating in good faith."

Companies that engage in practices designed to obfuscate, deny or lie about poor environmental performances are likely to suffer serious consequences, he adds.

"In the short run, that kind of behaviour may help you, but in the long run it may come back to hurt you. Everything becomes public at some point."

Credit: 
Concordia University

Study: Infamous 'death roll' almost universal among crocodile species

image: Paleosuchus palpebrosus, also known as Cuvier's dwarf caiman.

Image: 
Kent Vliet/University of Florida.

The iconic "death roll" of alligators and crocodiles may be more common among species than previously believed, according to a new study published in Ethology, Ecology & Evolution and coauthored by a researcher at the University of Tennessee, Knoxville.

Contrary to popular belief, crocodiles can't chew, so they use a powerful bite coupled with a full-bodied twisting motion--a death roll--to disable, kill, and dismember prey into smaller pieces. The lethal movement is characteristic of both alligators and crocodiles and has been featured in numerous movies and nature documentaries.

Until now, the death roll had only been documented in a few of the 25 living crocodilian species, but how many actually do it?

"We conducted tests in all 25 species, and 24 of them exhibited the behavior," said lead author Stephanie Drumheller-Horton, a paleontologist and adjunct assistant professor in the Department of Earth and Planetary Sciences at UT.

For the research, Drumheller-Horton teamed up with Kent Vliet from the University of Florida and Jim Darlington, curator of reptiles at the St. Augustine Alligator Farm.

It was previously believed that slender-snouted species, like the Indian gharial, didn't roll because their diets consist of small prey like fish, eaten whole.

But it turns out that feeding isn't the only time the animals might roll.

"Aggression between individual crocodylians can become quite intense, often involving bites and death rolls in establishing dominance or competition for females," Vliet said.

Paleosuchus palpebrosus, commonly called Cuvier's dwarf caiman, is the only species that did not perform a death roll under experimental conditions. "Although it's also possible that they were just being uncooperative," said Darlington.

And the fossil ancestors of modern crocodiles? If they share a similar body plan and lifestyle with their modern counterparts, it's likely that they could death roll, too.

"Crocodile relatives have played the role of semi-aquatic ambush predator since the Age of Dinosaurs," said Drumheller-Horton.

Whether in the Northern Territory of Australia, a lake in the Serengeti, or a watering hole in the late Cretaceous, chances are that a patient predator is waiting in the water to surprise its next meal with a burst of speed, a powerful bite, and a spinning finish.

Credit: 
University of Tennessee at Knoxville

Antimicrobial paints have a blind spot

image: This is a scanning electron microscopy (SEM) image of Bacillus timonensis.

Image: 
Jinglin Hu/Northwestern University

EVANSTON, Ill. -- Antimicrobial paints offer the promise of extra protection against bacteria. But Northwestern University researchers caution that these paints might be doing more harm than good.

In a new study, the researchers tested bacteria commonly found inside homes on samples of drywall coated with antimicrobial, synthetic latex paints. Within 24 hours, all of the bacteria died except for Bacillus timonensis, a spore-forming bacterium. Most bacilli commonly inhabit soil, but many are also found in indoor environments.

"If you attack bacteria with antimicrobial chemicals, then they will mount a defense," said Northwestern's Erica Hartmann, who led the study. "Bacillus is typically innocuous, but by attacking it, you might prompt it to develop more antibiotic resistance."

Bacteria thrive in warm, moist environments, so most die anyway on indoor surfaces, which are dry and cold. This makes Hartmann question the need to use antimicrobial paints, which may only be causing bacteria to become stronger.

Spore-forming bacteria, such as Bacillus, protect themselves by falling dormant for a period of time. While dormant, they are highly resistant to even the harshest conditions. After those conditions improve, they reactivate.

"When it's in spore form, you can hit it with everything you've got, and it's still going to survive," said Hartmann, assistant professor of civil and environmental engineering in Northwestern's McCormick School of Engineering. "We should be judicious in our use of antimicrobial products to make sure that we're not exposing the more harmless bacteria to something that could make them harmful."

The study was published online on April 13 in the journal Indoor Air.

One problem with antimicrobial products -- such as these paints -- is that they are not tested against more common bacteria. Manufacturers test how well more pathogenic bacteria, such as E. coli or Staphylococcus, survive but largely ignore the bacteria that people (and the products they use) would more plausibly encounter.

"E. coli is like the 'lab rat' of the microbial world," Hartmann said. "It is way less abundant in the environment than people think. We wanted to see how the authentic indoor bacteria would respond to antimicrobial surfaces because they don't behave the same way as E. coli."

Credit: 
Northwestern University

Decline in measles vaccination is causing a preventable global resurgence of the disease

image: This is an illustration of the virus which causes measles.

Image: 
CDC/ Allison M. Maiuri, MPH, CHES

WHAT:
In 2000, measles was declared to be eliminated in the United States, when no sustained transmission of the virus was seen in this country for more than 12 months. Today, however, the United States and many other countries that had also eliminated the disease are experiencing concerning outbreaks of measles because of declines in measles vaccine coverage. Without renewed focus on measles vaccination efforts, the disease may rebound in full force, according to a new commentary in the New England Journal of Medicine by infectious diseases experts at the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health, and the Penn State University College of Medicine's Milton S. Hershey Medical Center.

Measles is an extremely contagious illness transmitted through respiratory droplets and aerosolized particles that can remain in the air for up to two hours. Most often seen in young children, the disease is characterized by fever, malaise, nasal congestion, conjunctivitis, cough and a red, splotchy rash. Most people with measles recover without complications within a week. However, for infants, people with immune deficiencies, and other vulnerable populations, the consequences of a measles infection can be severe. Rare complications can occur, including pneumonia, encephalitis, other secondary infections, blindness and even death. Before the measles vaccine was developed, the disease killed between two and three million people annually worldwide. Today, measles still causes more than 100,000 deaths globally each year.

Measles can be prevented with a vaccine that is both highly effective and safe. Each complication and death related to measles is a "preventable tragedy that could have been avoided through vaccination," the authors write. Some people are reluctant to vaccinate their children based on widespread misinformation about the vaccine. For example, they may fear that the vaccine raises their child's risk of autism, a falsehood based on a debunked and fraudulent claim. A very small number of people have valid medical contraindications to the measles vaccine, such as certain immunodeficiencies, but almost everyone can be safely vaccinated.

When levels of vaccine coverage fall, the weakened umbrella of protection provided by herd immunity--indirect protection that results when a sufficiently high percentage of the community is immune to the disease--places unvaccinated young children and immunocompromised people at greater risk. This can have disastrous consequences with measles. The authors describe a case in which a single child with measles infected 23 other children in a pediatric oncology clinic, with a fatality rate of 21 percent.
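The sensitivity of that umbrella to coverage can be made concrete with the standard herd immunity threshold from epidemiology textbooks (a textbook approximation, not a formula given in the commentary itself): if each case infects R0 others in a fully susceptible population, sustained transmission stops only when the immune fraction p of the community satisfies

```latex
p \;>\; 1 - \frac{1}{R_0}
```

For measles, R0 is commonly estimated at 12 to 18, so the threshold works out to roughly 92 to 94 percent immunity, which is why even modest declines in vaccine coverage can reopen the door to outbreaks of this particular disease.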

If vaccination rates continue to decline, measles outbreaks may become even more frequent, a prospect the authors describe as "alarming." This is particularly confounding, they note, since measles is one of the most easily prevented contagious illnesses. In fact, it is possible to eliminate and even eradicate the disease. However, they say, achieving this goal will require collective action on the part of parents and healthcare practitioners alike.

Credit: 
NIH/National Institute of Allergy and Infectious Diseases

Cell-killing proteins suppress listeria without killing cells

image: After infection by Listeria, cells without RIPK3 proteins (top) showed greater bacterial replication than those with RIPK3 (bottom). Bar chart (right) shows replication over a 24-hour period.

Image: 
Kazuhito Sai, NC State University

New North Carolina State University research shows that key proteins known for their ability to prevent viral infections by inducing cell death can also block certain bacterial infections without triggering the death of the host cells.

Rather than killing host cells infected by Listeria in the gastrointestinal tract, the RIPK3 and MLKL proteins recognize the chemical composition of the bacteria and MLKL binds to it, preventing the spread of Listeria while keeping the host cells alive.

"While we've shown that these proteins take on a different function in intestinal epithelial cells than they do in immune cells, we're still not sure how or why this differentiation occurs," said Jun Ninomiya-Tsuji, professor of biological sciences and co-corresponding author of a paper describing the research.

The researchers, led by Kazuhito Sai, a toxicology research associate and co-corresponding author of the paper, first used human intestinal cells to show that RIPK3-deficient cells were infected by Listeria while cells with RIPK3 had few such infections. The researchers then used mice to see if Listeria could reach mouse livers by invading intestinal cells. They found many Listeria in RIPK3-deficient mice but few Listeria in normal mice.

They then showed that RIPK3 and a protein that works with it, MLKL, were activated by the presence of Listeria. This protein-pathway activation inhibited Listeria replication, showing that the proteins effectively blunted Listeria.

Next, and most surprisingly, the researchers showed that the activation of RIPK3 and MLKL by Listeria did not result in cell death. Instead, MLKL proteins bound themselves to Listeria, stopping its spread.

"These proteins induce cell death to prevent certain infections, particularly in immune cells," Sai said. "Inducing death of epithelial cells in the GI tract may cause removal of an important barrier to viruses and bacteria, so it's possible that these proteins recognize that killing these cells could make things worse instead of better."

Future research will attempt to understand how and why these proteins take different approaches - inducing cell death or not - to stave off bacteria in the GI tract, the researchers said.

Credit: 
North Carolina State University

Investigators incorporate randomized trial within dialysis care delivery

Highlights

The Time to Reduce Mortality in ESRD (TiME) trial was a large pragmatic trial demonstration project designed to determine the benefits of hemodialysis sessions that are longer than many patients currently receive.

The trial was conducted through a partnership between academic investigators and 2 large dialysis provider organizations using a highly centralized implementation approach.

Although the trial accomplished most of its demonstration project objectives, uptake of the intervention was insufficient to determine whether longer sessions improve outcomes.

Washington, DC (April 18, 2019) -- A recent clinical trial fully embedded into the routine delivery of care at dialysis facilities sought to determine if hemodialysis sessions that are longer than many patients in the United States currently receive can improve patients' health. Although the trial accomplished most of its objectives, uptake of the intervention was insufficient to determine whether longer sessions are beneficial. The findings, which appear in an upcoming issue of JASN, indicate that embedding trials into dialysis care will require more effective strategies for engaging clinicians and patients.

The trial's investigators had 2 goals: to develop approaches for embedding large randomized trials into the routine delivery of clinical care, and to determine whether patients benefit from hemodialysis sessions that are longer than usual. In the Time to Reduce Mortality in ESRD (TiME) trial, 266 dialysis facilities randomized to the intervention adopted a default hemodialysis session duration of at least 4.25 hours for new dialysis patients; those randomized to usual care had no trial-specified approach to duration. Trial implementation was highly centralized, with no on-site research personnel and complete reliance on clinically acquired data.

The team demonstrated that a trial embedded into clinical care delivery with no on-site research personnel could efficiently enroll a large number of participants using an opt-out approach to informed consent. (The trial enrolled 7,035 patients.) The trial was also able to obtain useful treatment and outcomes data from hundreds of medical facilities and monitor trial conduct and safety through a centralized approach.

The trial was discontinued at a median follow-up of 1.1 years because of an inadequate between-group difference in session duration. Average session duration was 216 minutes for the intervention group and 207 minutes for the usual care group. Investigators found no reduction in mortality or hospitalization rates for the intervention vs. usual care.

"There is a pressing need for data from randomized trials to guide clinical practice in dialysis," said lead author Laura M. Dember, MD (University of Pennsylvania Perelman School of Medicine). "Pragmatic trials embedded in clinical care delivery have tremendous potential for efficiently producing evidence that is highly generalizable to the non-research setting; however, experience with this approach is limited. The TiME trial provides an important foundation for future pragmatic trials in dialysis as well as in other settings."

Credit: 
American Society of Nephrology

Asian nations in early tobacco epidemic: study

image: From left, Wei Zheng, MD, PhD, Jae Jeong Yang, PhD, Danxia Yu, PhD, and colleagues are studying smoking patterns and associated deaths in Asian countries.

Image: 
Photo by Susan Urmy

Asian countries are in the early stages of a tobacco smoking epidemic with habits mirroring those of the United States from past decades, setting the stage for a spike in future deaths from smoking-related diseases.

That's the conclusion of researchers from Vanderbilt-Ingram Cancer Center and the Vanderbilt Epidemiology Center after analyzing 20 prospective cohort studies from mainland China, Japan, South Korea, Singapore, Taiwan and India.

Using long-term follow-up data from those cohorts, the study -- published in JAMA Network Open -- is the largest investigation in Asian countries of birth cohort-specific and country- or region-specific smoking patterns and their association with deaths.

Future deaths are likely to echo the pattern that occurred in the United States as the popularity of smoking increased during and after World War II, which resulted in lung cancer mortality peaking around 1990, said Wei Zheng, MD, PhD, Anne Potter Wilson Professor of Medicine.

"There is about a 30-year gap or incubation period for the mortality to occur," said Zheng, the study's senior author. "Smoking takes about 20 or 30 years to have this full effect on lung cancer mortality."

Tobacco control interventions may be having an effect on the smoking epidemic in some countries or areas because male smokers in the most recent birth cohort tended to quit smoking at younger ages.

"Asian countries that are richer, like Japan, South Korea and urban China, are doing a better job with this than rural China, India and other places," said Danxia Yu, PhD, who is co-first author of the study along with Jae Jeong Yang, PhD, a visiting research fellow from Seoul National University in South Korea.

The study calls for immediate actions for all Asian countries to implement comprehensive tobacco control policies, including raising tobacco taxes, implementing laws for smoke-free areas, banning tobacco advertising, requiring warning labels for tobacco products and providing help with quitting.

Older generations of Asians tended to start smoking later in life and smoke less than people in the United States did in past decades, Zheng said, but that behavior pattern is changing.

"Younger people in more recent cohorts started smoking at a younger age, and they smoked a lot more," Zheng said. "The deaths due to tobacco smoking also increased with this cohort."

The researchers classified the birth cohorts by decades, ranging from pre-1910 to 1950 or later. Smoking accounted for 12.5% of all-cause mortality in the pre-1920 birth cohort, 21.1% in the 1920s cohort and 29.3% for the cohort born in 1930 or later. Lung cancer deaths attributable to smoking, which were 56.6% among men in the pre-1920s cohort, increased to 68.4% for men born in 1930 or later.
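Figures like these typically come from an attributable-fraction calculation. Here is a minimal sketch using the textbook population attributable fraction (Levin's formula); the study's exact method may differ, and the prevalence and relative-risk numbers below are illustrative, not taken from the paper:

```python
def attributable_fraction(prevalence, relative_risk):
    """Textbook population attributable fraction (Levin's formula):
    the share of deaths that would not occur if no one were exposed,
    given the exposure prevalence and the relative risk of death."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative inputs only: if 65% of men had ever smoked and smokers'
# all-cause death rate were double that of never-smokers,
paf = attributable_fraction(0.65, 2.0)
print(f"{paf:.1%}")  # → 39.4% of deaths attributable to smoking
```

Note how the fraction grows with both how common smoking is in a birth cohort and how much it raises the death rate, which is why cohorts that started younger and smoked more show larger attributable shares.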

The researchers also studied cohorts with more recent data for men and women born in later decades to analyze smoking habits.

The ever-smoking rate among men in mainland China has increased. Among Chinese men born in 1950 or later, 79.4% of those living in urban areas had smoked, as had 74.3% of those in rural areas. Traditionally, Japanese men have had the highest ever-smoking rate.

Women in Asia have a much lower rate of smoking. Across all 20 cohort studies, an average of 7.8% of women had ever smoked, compared with 65.4% of men.

Asia will face a growing burden of smoking-related health problems unless urgent tobacco control policies are implemented, the authors concluded.

Credit: 
Vanderbilt University Medical Center

Disappearing bumblebee species under threat of extinction

The American Bumblebee - a species once more commonly seen buzzing around Southern Ontario - is critically endangered, according to a new study led by York University.

The study, published in the Journal of Insect Conservation, found that the native North American species, Bombus pensylvanicus, is facing imminent extinction from Canada, considered the highest and most at-risk classification before extinction. Many bumblebee species are rapidly declining across North America, yet they are important pollinators needed to grow Canada's crops, including apples, tomatoes, blueberries and legumes, as well as countless types of trees, shrubs and wildflowers.

The researchers assessed the extinction risk of the American Bumblebee, ranking the risk much higher than a federal advisory committee's most recent assessment, which classifies the species as being of special concern.

"This species is at risk of extinction and it's currently not protected in any way despite the drastic decline," said Assistant Professor Sheila Colla, an expert in bees and endangered species in the Faculty of Environmental Studies.

"Now that we have assessed the extent of the decline and located where the remaining populations are, we can look more closely at threats and habitat requirements to design an effective conservation management plan so that this species does not disappear from Canada forever," said Colla, who co-authored and helped design the study.

Colla has been studying bumblebees in Southern Ontario since the mid-2000s. This study relies on the annual data that she and her fellow researchers have collected.

The study's research team - led by Victoria MacPhail, Colla's doctoral student, and including a scientist from the University of Vermont - used data from three sources. They analyzed Southern Ontario data from the citizen science program, Bumble Bee Watch, a collaboration of volunteers who submit bumblebee photos through a website or phone app for experts to identify. The researchers used the Bumble Bees of North America database to obtain records of bumblebee species in Ontario and Quebec dating back to the late-1800s. They also used their own field survey work which allowed them to evaluate the status of the species within its Canadian range, using the globally-recognized International Union for the Conservation of Nature (IUCN) Red List assessment criteria.

The researchers found that the American Bumblebee's area of occurrence has decreased by about 70 percent and that its relative abundance fell by 89 percent in 2007-2016 compared with 1907-2006.

"This bumblebee species now has a reduced overall range," explained MacPhail. "It used to stretch from Windsor to Toronto, and all the way to Ottawa and into the Quebec area, but it is now only found in some core areas and has experienced a 37 percent decrease in overall range."

"It's now a rare sighting in Toronto," said MacPhail. "In terms of relative abundance, compared to other bees, you'd have to catch 1,000 bumblebees to find four of this species, and that compares to finding 37 bees in the past. You could walk out the door and win the lottery and find it, or you could be searching for years and not find any."
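The 89 percent figure quoted above follows directly from the counts MacPhail gives; a quick check of the arithmetic (per 1,000 bumblebees caught):

```python
# Relative abundance of the American Bumblebee per 1,000 bumblebees caught,
# using the counts quoted above: 4 recently versus 37 historically.
recent, historical = 4, 37

decline_pct = (1 - recent / historical) * 100
print(f"Relative abundance fell by about {decline_pct:.0f}%")  # ~89%
```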

This study echoes Colla's previous findings with the critically endangered Rusty-patched Bumblebee, once found in Southern Ontario. The species has not been seen in Canada for about ten years and drastically declined towards extinction without receiving protection or conservation management.

"The American bumblebee is still found in areas throughout its Canadian range and immediate action may save it from the same fate as the Rusty-patched Bumblebee," said Colla.

Credit: 
York University

When the physics say 'don't follow your nose'

video: This robot is sniffing out the source of an ethanol leak, but it's being clever about doing it. Rather than just following the strongest scent, the robot is plugging measurements of concentration and airflow into a complex partial differential equation and then deciding where the most useful position to take another measurement is. By repeating this process, it can find an ethanol source in just a dozen or two tries in a complex environment with multiple sources.

Image: 
Reza Khodayi-mehr

Engineers at Duke University are developing a smart robotic system for sniffing out pollution hotspots and sources of toxic leaks. Their approach enables a robot to incorporate calculations made on the fly to account for the complex airflows of confined spaces rather than simply 'following its nose.'

"Many existing approaches that employ robots to locate sources of airborne particles rely on bio-inspired educated but simplistic guesses, or heuristic techniques, that drive the robots upwind or to follow increasing concentrations," said Michael M. Zavlanos, the Mary Milus Yoh and Harold L. Yoh, Jr. Associate Professor of Mechanical Engineering and Materials Science at Duke. "These methods can usually only localize a single source in open space, and they cannot estimate other equally important parameters such as release rates."

But in complex environments, these simplistic methods can send the robots on wild goose chases into areas where concentrations are artificially increased by the physics of the airflows, not because they're the source of the leak.

"If somebody is smoking outside, it doesn't take long to find them by just following your nose because there's nothing stopping the air currents from being predictable," said Wilkins Aquino, the Anderson-Rupp Professor of Mechanical Engineering and Materials Science at Duke. "But put the same cigarette inside an office and suddenly it becomes much more difficult because of the irregular air currents created by hallways, corners and offices."

In a recent paper published online in the IEEE Transactions on Robotics, Zavlanos, Aquino and newly minted PhD graduate Reza Khodayi-mehr instead take advantage of the physics behind these airflows to trace the source of an emission more efficiently.

Their approach combines physics-based models of the source identification problem with path planning algorithms for robotics in a feedback loop. The robots take measurements of contaminant concentrations in the environment and then use these measurements to incrementally calculate where the chemicals are actually coming from.

"Creating these physics-based models requires the solution of partial differential equations, which is computationally demanding and makes their application onboard small, mobile robots very challenging," said Khodayi-mehr. "We've had to create simplified models to make the calculations more efficient, which also makes them less accurate. It's a challenging trade-off."

Khodayi-mehr built a rectangular box with a wall nearly bisecting the space length-wise to create a miniature U-shaped hallway that mimics a simplified office space. A fan pumps air into the corridor at one end of the U and back out of the other, while gaseous ethanol is slowly leaked into one of the corners. Despite the simplicity of the setup, the air currents created within are turbulent and messy, creating a difficult source identification problem for any ethanol-sniffing robot to solve.

But the robot solves the problem anyway.

The robot takes a concentration measurement, fuses it with previous measurements, and solves a challenging optimization problem to estimate where the source is. It then figures out the most useful location to take its next measurement and repeats the process until the source is found.
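The measure-estimate-plan cycle described above can be sketched in a few lines. This is a heavily simplified illustration, not the authors' method: the real system solves partial differential equations to model airflow, while here a toy distance-decay function stands in for the physics model, a coarse grid search replaces the optimization, and the source location and grid are invented for the example.

```python
# Toy sketch of active source identification: measure, estimate, plan, repeat.
true_source = (3.0, 1.0)  # hidden leak position (illustration only)

def concentration(pos, source):
    # Toy stand-in for the PDE airflow model: decay with squared distance.
    d2 = (pos[0] - source[0])**2 + (pos[1] - source[1])**2
    return 1.0 / (1.0 + d2)

# Candidate source locations on a coarse grid of the environment.
candidates = [(x * 0.5, y * 0.5) for x in range(9) for y in range(5)]
measurements = []
robot_pos = (0.0, 0.0)

for step in range(3):
    # 1. Take a concentration measurement and fuse it with past ones
    #    (a real sensor would add noise to be filtered out).
    measurements.append((robot_pos, concentration(robot_pos, true_source)))

    # 2. Estimate the source: least-squares misfit over the candidate grid.
    def misfit(src):
        return sum((r - concentration(p, src))**2 for p, r in measurements)
    estimate = min(candidates, key=misfit)

    # 3. Plan the next, most useful measurement location (here, simply the
    #    current best estimate; the paper optimizes information content).
    robot_pos = estimate

print("estimated source:", estimate)  # converges to (3.0, 1.0)
```

Even this toy loop shows the core idea: each new measurement constrains the model-based estimate, and the estimate in turn decides where to measure next.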

"By combining physics-based models with optimal path planning, we can figure out where the source is with very few measurements," said Zavlanos. "This is because physics-based models provide correlations between measurements that are not accounted for in purely data-driven approaches, and optimal path planning allows the robot to select those few measurements with the most information content."

"The physics-based models are not perfect but they still carry way more information than just the sensors alone," added Aquino. "They don't have to be exact, but they allow the robot to make inferences based on what is possible within the physics of the airflows. This results in a much more efficient approach."

This complex series of problem solving isn't necessarily faster, but it's much more robust. It can handle situations with multiple sources, which is currently impossible for heuristic approaches, and can even measure the rate of contamination.

The group is still working to create machine-learning algorithms to make their models even more efficient and accurate at the same time. They're also working to extend this idea to programming a fleet of robots to conduct a methodical search of a large area. While they haven't tried the group approach in practice yet, they have published simulations that demonstrate its potential.

"Moving from a lab environment with controlled settings to a more practical scenario obviously requires addressing other challenges too," said Khodayi-mehr. "For example, in a real-world scenario we probably won't know the geometry of the domain going in. Those are some of the ongoing research directions we're currently working on."

"Model-Based Active Source Identification in Complex Environments." Reza Khodayi-mehr, Wilkins Aquino, Michael M. Zavlanos. IEEE Transactions on Robotics, 2019.

Credit: 
Duke University

The Leukemia Atlas: researchers unveil proteins that signal disease

video: To rapidly accelerate research in leukemia and advance the hunt for treatments, Qutub provided the hallmarks in an online compendium where fellow researchers and oncologists worldwide can build from the resource and tools.

Image: 
Courtesy of UTSA

(San Antonio, April 17, 2019) -- Only about one in four people diagnosed with acute myelogenous leukemia (AML) survive five years after the initial diagnosis. To improve that survival rate, researchers at The University of Texas at San Antonio (UTSA) and the University of Texas MD Anderson Cancer Center created an online atlas to identify and classify protein signatures present at AML diagnosis.

The new protein classifications will help researchers and clinicians recommend better treatment and personalized medicine for patients suffering from this aggressive cancer, which occurs in the blood and bone marrow. The breakthrough research is published in the April issue of Nature Biomedical Engineering.

Researcher Amina Qutub, an associate professor in the UTSA Department of Biomedical Engineering (who joined UTSA in 2018 from Rice University), and oncologist Steven M. Kornblau, a professor and practicing clinician in the Department of Leukemia at UT MD Anderson Cancer Center, examined the genetic, epigenetic and environmental diversity that occurs in cancerous cells due to AML. Analyzing proteomic screens of 205 patient biopsies obtained at MD Anderson Cancer Center, first author Chenyue Wendy Hu (then a graduate student at the Qutub Lab, now at Uber Technologies), Kornblau and Qutub developed a new computational method called MetaGalaxy to categorize the protein signatures into 154 different patterns based on their cellular functions and pathways.

By approaching this challenge through the unique lens of developing a quantitative map for each leukemia patient from protein expression in their blood and bone marrow, rather than the standard lens of qualitative metrics and genetic risks alone, Qutub, Kornblau and their research collaborators will be able to more precisely categorize patients into risk groups and better predict their treatment outcomes.

To better understand the AML hallmarks at the proteomic (protein system) level and to share the results of their work with other researchers, the UTSA biomedical engineering professor and her team, including Hu and students Andrew Ligeralde (now at the University of California, Berkeley) and Allie Raybon (from the UTSA Department of Biomedical Engineering), built a web portal known as the Leukemia Proteome Atlas. Designed by Qutub's and Kornblau's teams with input from clinical collaborators worldwide, the online portal gives oncologists and cancer scientists the tools they need to investigate AML protein expression patterns from one patient to the next. It also provides investigators around the world with leads for new leukemia research and new computational tools.

Since many genetic mutations cannot be targeted, the proteomic profiling and target identification process used in this research study will accelerate the identification of therapeutic targets. It also propels researchers much closer to the development of personalized combination therapies for patients based on their unique protein signatures.

"Acute myelogenous leukemia presents as a cancer so heterogeneous that it is often described as not one, but a collection of diseases," said Qutub. "To decipher the clues found in proteins from blood and bone marrow of leukemia patients, we developed a new computer analysis - MetaGalaxy - that identifies molecular hallmarks of leukemia. These hallmarks are analogous to the way constellations guide navigation of the stars: they provide a map to protein changes for leukemia. Our 'hallmark' predictions are being experimentally tested through drug screens and can be 'programmed' into cells through synthetic manipulation of proteins. A next step to bring this work to the clinic and impact patient care is testing whether these signatures lead to the aggressive growth or resistance to chemotherapy observed in leukemia patients. At the same time, to rapidly accelerate research in leukemia and advance the hunt for treatments, we provide the hallmarks in an online compendium where fellow researchers and oncologists worldwide can build from the resource, tools and findings, LeukemiaAtlas.org."

Credit: 
University of Texas at San Antonio

Giving robots a better feel for object manipulation

A new learning system developed by MIT researchers improves robots' abilities to mold materials into target shapes and make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch -- and it may have fun applications in personal robotics, such as modeling clay shapes or rolling sticky rice for sushi.

In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are "trained" using the models, to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators mainly focus on rigid objects and are unable to handle fluids or softer objects. Some more accurate physics-based simulators can handle diverse materials, but rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

In a paper being presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small portions of different materials -- "particles" -- interact when they're poked and prodded. The model directly learns from data in cases where the underlying physics of the movements are uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of its touch. As the robot handles the objects, the model also helps to further refine the robot's control.

In experiments, a robotic hand with two fingers, called "RiceGrip," accurately shaped a deformable foam to a desired configuration -- such as a "T" shape -- that serves as a proxy for sushi rice. In short, the researchers' model serves as a type of "intuitive physics" brain that robots can leverage to reconstruct three-dimensional objects somewhat similarly to how humans do.

"Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it. Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots," says first author Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). "We want to build this type of intuitive model for robots to enable them to do what humans can do."

"When children are 5 months old, they already have different expectations for solids and liquids," adds co-author Jiajun Wu, a CSAIL graduate student. "That's something we know at an early age, so maybe that's something we should try to model for robots."

Joining Li and Wu on the paper are: Russ Tedrake, a CSAIL researcher and a professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences; and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

Dynamic graphs

A key innovation behind the model, called the "dynamic particle interaction network" (DPI-Nets), was creating dynamic interaction graphs, which consist of thousands of nodes and edges that can capture complex behaviors of so-called particles. In the graphs, each node represents a particle. Neighboring nodes are connected with each other using directed edges, which represent the interaction passing from one particle to the other. In the simulator, particles are hundreds of small spheres combined to make up some liquid or a deformable object.
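The graph construction just described can be sketched as follows. The connection radius, particle positions, and edge convention here are illustrative assumptions, not values from the paper; the point is that edges connect only particles close enough to interact, and the graph is rebuilt as particles move.

```python
import math

def build_interaction_graph(positions, radius):
    """Connect each pair of particles within `radius` with directed edges."""
    edges = []
    for i, p in enumerate(positions):
        for j, q in enumerate(positions):
            if i != j and math.dist(p, q) <= radius:
                edges.append((i, j))  # directed edge from node i to node j
    return edges

# Three particles in a row, spaced 1.0 apart: only adjacent pairs interact.
positions = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
edges = build_interaction_graph(positions, radius=1.5)
print(edges)  # [(0, 1), (1, 0), (1, 2), (2, 1)]
```

Because the graph is recomputed from current positions, a liquid particle that sloshes across the container simply acquires a new set of neighbors at the next step.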

The graphs are constructed as the basis for a machine-learning system called a graph neural network. In training, the model over time learns how particles in different materials react and reshape. It does so by implicitly calculating various properties for each particle -- such as its mass and elasticity -- to predict if and where the particle will move in the graph when perturbed.

The model then leverages a "propagation" technique, which instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material -- rigid, deformable, and liquid -- to shoot a signal that predicts particle positions at certain incremental time steps. At each step, it moves and reconnects particles, if needed.

For example, if a solid box is pushed, perturbed particles will be moved forward. Because all particles inside the box are rigidly connected with each other, every other particle in the object undergoes the same calculated translation and rotation. Particle connections remain intact and the box moves as a single unit. But if an area of deformable foam is indented, the effect will be different: perturbed particles move forward a lot, surrounding particles move forward only slightly, and particles farther away won't move at all. With liquids being sloshed around in a cup, particles may completely jump from one end of the graph to the other. The graph must learn to predict where and how much all affected particles move, which is computationally complex.
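The contrast between rigid and deformable responses described above can be illustrated with a toy one-dimensional chain of particles. The linear falloff function below is an assumption chosen for illustration; in DPI-Nets this response is learned from data rather than hand-coded.

```python
# Toy illustration: push the first particle of a chain and see how the
# displacement propagates for a rigid versus a deformable material.
def displace(positions, poked_index, push, material):
    moved = []
    for i, (x, y) in enumerate(positions):
        if material == "rigid":
            factor = 1.0  # whole object moves together as a single unit
        else:  # "deformable"
            dist = abs(i - poked_index)           # steps along the 1D chain
            factor = max(0.0, 1.0 - 0.5 * dist)   # response decays w/ distance
        moved.append((x + push * factor, y))
    return moved

chain = [(float(i), 0.0) for i in range(4)]
print(displace(chain, 0, 1.0, "rigid"))       # every particle shifts by 1.0
print(displace(chain, 0, 1.0, "deformable"))  # shift decays along the chain
```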

Shaping and adapting

In their paper, the researchers demonstrate the model by tasking the two-fingered RiceGrip robot with clamping target shapes out of deformable foam. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the position of the particles. Then, the model adds edges between particles and reconstructs the foam into a dynamic graph customized for deformable materials.

Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world position of the particles to the targeted position of the particles. Whenever the particles don't align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.

Next, the researchers aim to improve the model to help robots better predict interactions with partially observable scenarios, such as knowing how a pile of boxes will move when pushed, even if only the boxes at the surface are visible and most of the other boxes are hidden.

The researchers are also exploring ways to combine the model with an end-to-end perception module by operating directly on images. This will be a joint project with Dan Yamins's group; Yamins recently completed his postdoc at MIT and is now an assistant professor at Stanford University. "You're dealing with these cases all the time where there's only partial information," Wu says. "We're extending our model to learn the dynamics of all particles, while only seeing a small portion."

Credit: 
Massachusetts Institute of Technology

Bacterial therapy in a dish

video: Engineered bacteria (green) invade a tumor spheroid in a dish.

Image: 
Tetsuhiro Harimoto/Columbia Engineering

New York, NY--April 17, 2019--Engineering bacteria to intelligently sense and respond to disease states, from infections to cancer, has become a promising focus of synthetic biology. Rapid advances in genetic engineering tools have enabled researchers to "program" cells to perform various sophisticated tasks. For example, a network of genes can be wired together to form a genetic circuit, with which cells can be engineered to sense their environment and modulate their behavior or produce molecules in response.

Recent research has found that many bacteria selectively colonize tumors in vivo, prompting scientists to engineer them as programmable vehicles, biological "robots" in other words, to deliver anticancer therapeutics. Researchers are also developing new, "smart" medicines by programming bacteria to tackle other diseases, such as gastrointestinal disease and infections. Key to advancing such "living medicines" is being able to identify the best therapeutic candidates.

However, while current synthetic biology tools can create an enormous number of programmed cells, researchers' dependence on animal-based testing has greatly limited both the number of therapies that can be tested and the speed of testing. In fact, the ability to rapidly engineer new therapies for humans far outpaces the throughput of animal-based testing, creating a major bottleneck for clinical translation.

Researchers at Columbia Engineering report today in PNAS that they have developed a system that enables them to study tens to hundreds of programmed bacteria within mini-tissues in a dish, condensing the time of study from months to days. As a proof of concept, they focused on testing programmed antitumor bacteria using mini-tumors called tumor spheroids. The speed and high throughput of their technology, which they call BSCC for "bacteria spheroids co-culture," allows for stable growth of bacteria within tumor spheroids, enabling long-term study. The method can also be used for other bacteria species and cell types. The team, led by Tal Danino, assistant professor of biomedical engineering, says that, to their knowledge, this study is the first to rapidly screen and characterize bacteria therapies in vitro and will be a useful tool for many researchers in the field.

VIDEO: https://youtu.be/fBMiMXqcvaU

"We're very excited at how efficient BSCC is and think it will really accelerate engineered bacterial therapy for clinical use," Danino says. "By combining automation and robotics technology, BSCC can test a large library of therapies to discover effective treatments. And because BSCC is so broadly applicable, we can modify the system to test human samples as well as other diseases. For example, it will help us personalize medical treatments by creating a patient's cancer in a dish, and rapidly identify the best therapy for the specific individual."

The researchers knew that while many bacteria can grow inside a tumor, where the immune response is reduced, bacteria are killed outside the tumor, where the body's immune system is active. Inspired by this mechanism, they searched for an antibacterial agent that could mimic this bacteria-killing effect outside the spheroids.

They developed a protocol that uses the antibiotic gentamicin to confine bacterial growth to the inside of spheroids, which mimic tumors in the body. Using BSCC, they then rapidly tested a broad range of programmed anticancer bacterial therapies made of various types of bacteria, genetic circuits, and therapeutic payloads.

"We used 3D multicellular spheroids because they recapitulate conditions found in the human body, such as oxygen and nutrient gradients--these can't be made in a traditional 2D monolayer cell culture," says the paper's lead author Tetsuhiro Harimoto, who is a PhD student in Danino's lab. "In addition, the 3D spheroid provides bacteria with enough space to live in its core, in much the same way that bacteria colonize tumors in the body, also something we can't do in the 2D monolayer culture. Plus, it's simple to make large numbers of 3D spheroids and adapt them for high-throughput screening."

The team used BSCC's high-throughput system to rapidly characterize pools of programmed bacteria and then to quickly narrow down the best candidate for therapeutic use. They discovered a potent therapy for colon cancer using a novel bacterial toxin, theta toxin, combined with an optimal drug-delivery genetic circuit in the attenuated bacterium Salmonella Typhimurium. They also found new combinations of bacterial therapies that can improve anticancer efficacy even more.

The researchers compared their BSCC results to those found in animal models, and found similar behavior of bacteria in those models. They also discovered that their top candidate, theta toxin, is more potent than therapies created in the past, demonstrating the power of BSCC's high-throughput screening.

While Danino's group focused on cancer therapy in this study, they hope to expand BSCC to characterize bacteria-based therapeutics for various diseases, including gastrointestinal disease and infections. Their ultimate goal is to use these new bacterial therapies in clinics around the world.

Credit: 
Columbia University School of Engineering and Applied Science

Need more energy storage? Just hit 'print'

image: Drexel University and Trinity College researchers have developed a conductive ink that can be used to inkjet print energy storage devices.

Image: 
Drexel University

Researchers from Drexel University and Trinity College in Ireland have created an ink for inkjet printers from a highly conductive type of two-dimensional material called MXene. Recent findings, published in Nature Communications, suggest that the ink can be used to print flexible energy storage components, such as supercapacitors, in any size or shape.

Conductive inks have been around for nearly a decade, and they represent a multi-hundred-million-dollar market that is expected to grow rapidly into the next decade. They are already used to make the radiofrequency identification tags in highway toll transponders and the circuit boards in portable electronics, and they line car windows as embedded radio antennas and defrosting elements. But for the technology to see broader use, conductive inks need to become more conductive and more easily applied to a range of surfaces.

Yury Gogotsi, PhD, Distinguished University and Bach Professor in Drexel's College of Engineering, Department of Materials Science and Engineering, who studies the applications of new materials in technology, suggests that the ink created in Drexel's Nanomaterials Institute is a significant advancement on both of these fronts.

"So far only limited success has been achieved with conductive inks in both fine-resolution printing and high charge storage devices," Gogotsi said. "But our findings show that all-MXene printed micro-supercapacitors, made with an advanced inkjet printer, are an order of magnitude greater than existing energy storage devices made from other conductive inks."

While researchers are steadily figuring out ways to make inks from new, more conductive materials, like nanoparticle silver, graphene and gallium, the challenge remains incorporating them seamlessly into manufacturing processes. Most of these inks can't be used in a one-step process, according to Babak Anasori, PhD, a research assistant professor in Drexel's Department of Materials Science and Engineering and co-author of the MXene ink research.

"For most other nano inks, an additive is required to hold the particles together and allow for high-quality printing. Because of this, after printing, an additional step is required - usually a thermal or chemical treatment - to remove that additive," Anasori said. "For MXene printing, we only use MXene in water or MXene in an organic solution to make the ink. This means it can dry without any additional steps."

MXenes are a family of carbon-based, two-dimensional layered materials, created at Drexel in 2011, that have the unique ability to mix with liquids, such as water and organic solvents, while retaining their conductive properties. Because of this, Drexel researchers have produced and tested them in a variety of forms, from conductive clay to a coating for electromagnetic interference shielding to a near-invisible wireless antenna.

Adjusting the concentration to create ink for use in a commercial printer was a matter of time and iteration. The solvent and MXene concentration in the ink can be adjusted to suit different kinds of printers.

"If we really want to take advantage of any technology at a large scale and have it ready for public use, it has to become very simple and done in one step," Anasori said. "An inkjet printer can be found in just about every house, so we knew if we could make the proper ink, it would be feasible that anyone could make future electronics and devices."

As part of the study, the Drexel team, working with researchers at Trinity College who are experts in printing, put the MXene ink to the test in a series of printouts, including a simple circuit, a micro-supercapacitor and some text, on substrates ranging from paper to plastic to glass. In doing so, they found that they could print lines of consistent thickness and that the ink's ability to pass an electric current varied with its thickness -- both important factors in manufacturing electronics components. And the printouts maintained their superior electrical conductivity, which is the highest among all carbon-based conductive inks, including those made from carbon nanotubes and graphene.
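The observation that a printed line's ability to pass current varies with its thickness is consistent with the standard resistance relation R = rho * L / (w * t). The sketch below uses made-up placeholder values for resistivity and trace geometry, not measured properties of the MXene ink:

```python
# Illustrative only: resistance of a printed trace falls as thickness grows.
# R = rho * L / (w * t), with resistivity rho, length L, width w, thickness t.
def line_resistance(resistivity, length, width, thickness):
    return resistivity * length / (width * thickness)

rho = 1e-6          # ohm-meters (placeholder value, not a measured property)
L, w = 0.01, 0.001  # a 1 cm long, 1 mm wide printed trace

for t in (1e-6, 2e-6, 4e-6):  # progressively thicker printed layers
    r = line_resistance(rho, L, w, t)
    print(f"t = {t * 1e6:.0f} um -> R = {r:.1f} ohms")
```

Doubling the printed thickness halves the trace resistance, which is why consistent layer thickness matters when printing circuit components.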

This all amounts to a very versatile product for making the tiny components that perform important, but often overlooked functions in our electronics devices - jobs like keeping the power on when the battery dies, preventing damaging electrical surges, or speeding the charging process. Providing a higher-performing material and a new way to build things with it could lead not only to improvements to our current devices, but also the creation of entirely new technologies.

"Compared to conventional manufacturing protocols, direct ink printing techniques, such as inkjet printing and extrusion printing, allow digital and additive patterning, customization, reduction in material waste, scalability and rapid production," Anasori said. "Now that we have produced a MXene ink that can be applied via this technique, we're looking at a world of new opportunities to use it."

Credit: 
Drexel University