Tech

How the brain detects the rhythms of speech

Neuroscientists at UC San Francisco have discovered how the listening brain scans speech to break it down into syllables. The findings provide, for the first time, a neural basis for the fundamental atoms of language, as well as insights into our perception of the rhythmic poetry of speech.

For decades, speech neuroscientists have looked for evidence that neurons in auditory brain areas use fluctuations in speech volume to identify the beginnings and ends of syllables -- like a lin-guis-tics pro-fes-sor di-a-gram-ming a sen-tence. So far, these efforts have met with little success.

In the new study, published November 20, 2019 in Science Advances, UCSF scientists discovered that the brain instead responds to a marker of vocal stress in the middle of each syllable -- more like a poet scanning the sonnets of Shakespeare (Shàll Í còmpáre thèe tó à súmmer's dáy?). The researchers showed that this signal -- in an area of speech cortex called the middle superior temporal gyrus (mSTG) -- is specifically based on the rising volume at the start of each vowel sound, which is a universal feature of human languages.

Notably, the authors say, this simple syllabic marker could also provide the brain with direct information about patterns of stress, timing, and rhythm that are so central to conveying meaning and emotional context in English and many other languages.

"What I find most exciting about this work is that it shows a simple neural coding principle for the sense of rhythm that is absolutely fundamental to how our brains process speech," said neuroscientist Yulia Oganian, PhD, who led the new research. "Could this explain why humans are so sensitive to the sequence of stressed and unstressed syllables that make up spoken poetry, or even oral storytelling?"

Oganian is a postdoctoral researcher in the lab of UCSF Health neurosurgeon Eddie Chang, MD, PhD, a Bowes Biomedical Investigator at UCSF, member of the UCSF Weill Institute for Neurosciences, and Howard Hughes Medical Institute (HHMI) Faculty Scholar. Chang's research laboratory studies the neural basis of human speech, movement, and emotion.

"What really excites me is that we now understand how a simple sound cue, the rapid increase in loudness that happens at the onset of vowels, serves as a critical landmark for speech because it tells a listener when a syllable occurs and whether it is stressed. This is a rather central discovery about how the brain extracts syllable units from speech," said Chang.

The study involved volunteers from the UCSF Epilepsy Center who temporarily had post-it-note-sized arrays of electrodes placed on the surface of their brains for one to two weeks as part of standard preparation for neurosurgery. These brain recordings allow neurosurgeons like Chang to map out how to remove the brain tissue that causes patients' seizures without damaging important nearby brain regions, but also allow scientists in Chang's neuroscience research lab to ask questions about human brain function that are impossible to address any other way.

Oganian recruited 11 volunteers whose seizure-mapping electrodes happened to overlap with areas of the brain involved in speech processing and who were happy to participate in a research study during their down-time in the hospital. She played each participant a selection of speech recordings from a variety of different speakers while recording patterns of brain activity in their auditory speech centers, then analyzed the data to identify neural patterns reflecting the syllabic structure of what they had heard.

The data quickly revealed that mSTG activity contained a discrete marker of individual syllables -- contradicting the dominant model in the field that had proposed that the brain sets up a continuous metronome-like oscillator to extract syllable boundaries from fluctuations in speech volume. But exactly what aspects of speech were these discrete syllable markers in the neural data responding to?

To identify what features of the audio recordings were driving the newfound syllable markers, Oganian asked four of her research volunteers to listen to recorded speech that was slowed down four-fold. These ultra-slow speech recordings let Oganian see that the syllable signals were occurring consistently at the moment of rising loudness at the start of each vowel sound (e.g. as 'b' turns to 'a' in the syllable 'ba'), and not at the peak of each syllable as other scientists had theorized.
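
In signal-processing terms, the landmark described here can be read off a recording's amplitude envelope. The short Python sketch below is a simplified stand-in, not the study's analysis pipeline: it builds a synthetic loudness envelope with three "syllables" and marks the moments where loudness rises fastest, the kind of acoustic landmark that in the study lined up with vowel onsets.

```python
# Sketch: find moments of fastest-rising loudness in an amplitude envelope.
# The envelope here is synthetic; a real analysis would start from a speech
# recording (e.g., loaded with scipy.io.wavfile) and its extracted envelope.
import numpy as np

fs = 1000                                    # envelope sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
# Three "syllables" modeled as smooth amplitude bumps at 0.4 s, 1.0 s, 1.6 s.
env = sum(np.exp(-((t - c) ** 2) / (2 * 0.05 ** 2)) for c in (0.4, 1.0, 1.6))

rate = np.gradient(env, 1 / fs)              # rate of change of loudness
# Keep local maxima of the rising slope (positive, above a small threshold).
peaks = [i for i in range(1, len(rate) - 1)
         if rate[i - 1] < rate[i] >= rate[i + 1] and rate[i] > 0.5]
print("syllable markers at t =", np.round(t[peaks], 2), "s")
```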

The syllabic marker Oganian discovered in the mSTG also varied with the emphasis the speaker placed on a particular syllable. This suggested that this first stage of speech processing simultaneously allows the brain to split speech into syllabic units and also to track the patterns of stress that are critical for meaning in English and many other languages (e.g. "computer CONsole" vs. "conSOLE a friend"; "DID I do that?" vs. "did I do THAT?").

The syllabic signal also provides a simple metronome for the brain to track the rhythm and speed of speech. "Some people speak fast; others speak slow. People change how quickly they speak when they are excited or sad. The brain needs to be able to adjust to that," Oganian said. "By marking whenever a new syllable is occurring, this signal acts as an internal pacemaker within the speech signal itself."

The researchers are continuing to study how brain signals in the mSTG are interpreted to enable the brain to process speech rhythmicity and meaning. They also hope to explore how the brain's interpretation of these signals varies in languages other than English that put more or less emphasis on the stress patterns of speech.

Credit: 
University of California - San Francisco

3D maps of gene activity

image: The vertical section shows that novoSpaRc (right) can reliably reconstruct the spatial pattern of gene activity on the basis of single-cell sequencing data alone. The experimentally measured pattern is shown on the left for comparison.

Image: 
AG N. Rajewsky Lab, MDC

Professor Nikolaus Rajewsky is a visionary: He wants to understand exactly what happens in human cells during disease progression, with the goal of being able to recognize and treat the very first cellular changes. "This requires us not only to decipher the activity of the genome in individual cells, but also to track it spatially within an organ," explains the scientific director of the Berlin Institute for Medical Systems Biology (BIMSB) at the Max Delbrück Center for Molecular Medicine (MDC) in Berlin. For example, the spatial arrangement of immune cells in cancer ("microenvironment") is extremely important in order to diagnose the disease accurately and select the optimal therapy. "In general, we lack a systematic approach to molecularly capture and understand the (patho-)physiology of a tissue."

Maps for very different tissue types

Rajewsky has now taken a big step towards his goal with a major new study that has been published in the scientific journal Nature. Together with Professor Nir Friedman from the Hebrew University of Jerusalem, Dr. Mor Nitzan from Harvard University in Cambridge, USA, and Dr. Nikos Karaiskos, a project leader from his own research group on "Systems Biology of Gene Regulatory Elements", the scientists have succeeded in using a special algorithm to create a spatial map of gene expression for individual cells in very different tissue types: in the liver and intestinal epithelium of mammals, as well as in embryos of fruit flies and zebrafish, in parts of the cerebellum, and in the kidney. "Sometimes purely theoretical science is enough to publish in a high-ranking science journal - I think this will happen even more frequently in the future. We need to invest a lot more in machine learning and artificial intelligence," says Nikolaus Rajewsky.

"Using these computer-generated maps, we are now able to precisely track whether a specific gene is active or not in the cells of a tissue part," explains Karaiskos, a theoretical physicist and bioinformatician who developed the algorithm together with Mor Nitzan. "This would not have been possible in this form without our model, which we have named 'novoSpaRc.'"

Spatial information was previously lost

It is only in recent years that researchers have been able to determine - on a large scale and with high precision - which information individual cells in an organ or tissue are retrieving from the genome at any given time. This was thanks to new sequencing methods, for example multiplex RNA sequencing, which enables a large number of RNA molecules to be analyzed simultaneously. RNA is produced in the cell when genes become active; proteins are then built from these RNA blueprints. Rajewsky recognized the potential of single-cell sequencing early on and established it in his laboratory.

"But for this technology to work, the tissue under investigation must first be broken down into individual cells," explains Rajewsky. This process causes valuable information to be lost: for example, the original location in the tissue of the particular cell whose gene activity has been genetically decoded. Rajewsky and Friedmann were therefore looking for a way to use data from single-cell sequencing to develop a mathematical model that could calculate the spatial pattern of gene expression for the entire genome - even in complex tissues.

The teams led by Rajewsky and Dr. Robert Zinzen, who also works at BIMSB, already achieved a first breakthrough two years ago. In the scientific journal Science, they presented a virtual model of a fruit fly embryo. It showed which genes were active in which cells at a spatial resolution that had never before been achieved. This gene mapping was made possible with the help of 84 marker genes: in situ experiments had determined where in the egg-shaped embryo these genes were active at a certain point in time. The researchers confirmed that their model worked with further complex in situ experiments on living fruit fly embryos.

A puzzle with tens of thousands of pieces and colors

"In this model, however, we reconstructed the location of each cell individually," said Karaiskos. He was one of the first authors of both the "Science" study and the current "Nature" study. "This was possible because we had to deal with a considerably smaller number of cells and genes. This time, we wanted to know whether we can reconstruct complex tissue when we have hardly any or no previous information. Can we learn a principle about how gene expression is organized and regulated in complex tissues?" The basic assumption for the algorithm was that when cells are neighbors, their gene activity is more or less alike. They retrieve more similar information from their genome than cells that are further apart.

To test this hypothesis, the researchers used existing data. For the liver, kidney and intestinal epithelium, there was almost no additional spatial information; the group reconstructed these tissues using only a few marker genes. In one case, only two marker genes were available.

"It was like putting together a massive puzzle with a huge number of different colors - perhaps 10,000 or so," explains Karaiskos, trying to describe the difficult task he was faced with when calculating the model. "If the puzzle is solved correctly, all these colors result in a specific shape or pattern." Each piece of the puzzle represents a single cell of the tissue under investigation, and each color an active gene that was read by an RNA molecule.

The method works regardless of sequencing technique

"We now have a method that enables us to create a virtual model of the tissue under investigation on the basis of the data gained from single-cell sequencing in the computer - regardless of which sequencing method was used," says Karaiskos. "Existing information on the spatial location of individual cells can be fed into the model, thus further refining it." With the help of novoSpaRc, it is then possible to determine for each known gene where in the tissue the genetic material is active and being translated into a protein.

Now, Karaiskos and his colleagues at BIMSB are also focusing on using the model to trace back and even predict certain developmental processes in tissues or entire organisms. However, the scientist admits there may be some specific tissues that are incompatible with the novoSpaRc algorithm. But this could be a welcome challenge, he says: a chance to try his hand at a new puzzle!

Credit: 
Max Delbrück Center for Molecular Medicine in the Helmholtz Association

A new link between migraines, opioid overuse may be key to treating pain

image: Amynah Pradhan

Image: 
UIC/Joshua Clark

About 10% of the world population suffers from migraine headaches, according to the National Institute of Neurological Disorders and Stroke. To alleviate migraine pain, people are commonly treated with opioids. But, while opioid treatment can provide temporary pain relief for episodic migraines, prolonged use can increase the frequency and severity of painful migraines.

For a decade, researchers have tried to understand how opioids cause this paradoxical increase in pain, but the mechanism remained elusive -- until now.

Researchers at the University of Illinois at Chicago and colleagues discovered that a peptide -- small chains of amino acids that can regulate many behaviors and brain signaling pathways -- links together migraine pain and pain induced by opioid overuse.

Their findings are published in the journal Molecular and Cellular Proteomics.

Amynah Pradhan, senior author and UIC associate professor of psychiatry at the College of Medicine, said, "Endorphin is an example of a peptide that signals the brain to give a 'runner's high.' However, not all peptides signal for pleasant outcomes. Pituitary adenylate cyclase-activating peptide, or PACAP, is a peptide that can induce migraines in migraine-prone individuals. Because the overuse of opioids can lead to worse migraines, we wanted to determine whether opioid-induced pain changed the amounts of peptides in the brain and understand if pain from migraines and opioid overuse shared any peptides in common."

To study these peptides, Pradhan and her colleagues, including researchers at the University of Illinois at Urbana-Champaign, developed two animal models: migraine pain and opioid overuse pain, both in mouse models. Using mass spectrometry to identify peptides and their quantities in the animal samples, they found only a few peptides were altered in both models. PACAP was one of them.

"We were amazed to find PACAP in both models," Pradhan said. "This study validates prior work on PACAP's role in migraine pain and, more importantly, is the first to identify PACAP as a factor in opioid-induced pain. It is also significant that the PACAP increase was seen in major pain processing sites of the brain, in both models.

"These findings provide strong evidence that PACAP is involved in both migraine and opioid-overuse pain. We finally understand a mechanism through which opioids may exacerbate migraines -- through PACAP."

Pradhan said these findings can inform the development of real-world treatments.

"Companies are developing therapies for migraine pain right now," Pradhan said. "There are clinical trials underway to test antibodies targeting PACAP and a PACAP-binding receptor. Based on our data, these therapies may be extremely effective for people that have used opioids to treat their migraines."

This research may benefit people suffering from non-migraine pain as well, she said, as people with chronic pain also experience opioid-induced pain after overuse.

Credit: 
University of Illinois Chicago

New hybrid device can both capture and store solar energy

image: The hybrid device consists of a molecular storage material (MSM) and a localized phase-change material (L-PCM), separated by a silica aerogel to maintain the necessary temperature difference.

Image: 
University of Houston

Researchers from the University of Houston have reported a new device that can both efficiently capture solar energy and store it until it is needed, offering promise for applications ranging from power generation to distillation and desalination.

Unlike solar panels and solar cells, which rely on photovoltaic technology for the direct generation of electricity, the hybrid device captures heat from the sun and stores it as thermal energy. It addresses some of the issues that have stalled wider-scale adoption of solar power, suggesting an avenue for using solar energy around the clock, despite limited sunlight hours, cloudy days and other constraints.

The work, described in a paper published Wednesday in Joule, combines molecular energy storage and latent heat storage to produce an integrated harvesting and storage device for potential 24/7 operation. The researchers report a harvesting efficiency of 73% at small-scale operation and as high as 90% at large-scale operation.

Up to 80% of stored energy was recovered at night, and the researchers said daytime recovery was even higher.

Hadi Ghasemi, Bill D. Cook Associate Professor of Mechanical Engineering at UH and a corresponding author for the paper, said the high harvesting efficiency is due, in part, to the ability of the device to capture the full spectrum of sunlight, harvesting it for immediate use and converting the excess into molecular energy storage.

The device was synthesized using norbornadiene-quadricyclane as the molecular storage material, an organic compound that the researchers said demonstrates high specific energy and exceptional heat release while remaining stable over extended storage times. Ghasemi said the same concept could be applied using different materials, allowing performance - including operating temperatures and efficiency - to be optimized.

T. Randall Lee, Cullen Distinguished University Chair professor of chemistry and a corresponding author, said the device offers improved efficiency in several ways: The solar energy is stored in molecular form rather than as heat, which dissipates over time, and the integrated system also reduces thermal losses because there is no need to transport the stored energy through piping lines.

"During the day, the solar thermal energy can be harvested at temperatures as high as 120 degrees centigrade (about 248 Fahrenheit)," said Lee, who also is a principle investigator for the Texas Center for Superconductivity at UH. "At night, when there is low or no solar irradiation, the stored energy is harvested by the molecular storage material, which can convert it from a lower energy molecule to a higher energy molecule."

That allows the stored energy to produce thermal energy at a higher temperature at night than during the day - boosting the amount of energy available even when the sun is not shining, he said.

Credit: 
University of Houston

A wirelessly-controlled and wearable skin-integrated haptic VR device

video: A new wearable skin-integrated haptic VR device which can be controlled and powered wirelessly.

Image: 
City University of Hong Kong

Sensing a hug from a friend through a video call may soon become a reality. A joint research team of scientists and engineers from City University of Hong Kong (CityU) and Northwestern University in the United States has developed a skin-integrated virtual reality (VR) system that can be controlled and powered wirelessly. The innovation has great application potential in communications, prosthetic control and rehabilitation, as well as gaming and entertainment.

Skin is the largest organ of the body. But compared with the eyes and ears, it is a relatively under-explored sensory interface for VR or augmented reality (AR) technology. At present, VR and AR devices usually rely on vibratory actuation imparted to the skin by electrical motors. But these involve bulky wires and battery packs attached to the body, limiting their applications.

Simulating touch by millimetre-scale vibration

Dr Yu Xinge, Assistant Professor, and Dr Xie Zhaoqian, Senior Research Fellow, of the Department of Biomedical Engineering at CityU, working with the team from Northwestern University as well as collaborators from several research institutes and companies in the US and mainland China, have developed an integrated skin VR system that receives commands wirelessly and then simulates touch with vibration, overcoming the above shortcomings. The user can feel the touch simply by placing the bandage-like thin, soft and adhesive device on the skin.

The research findings were published in the latest issue of the highly prestigious scientific journal Nature, titled "Skin-Integrated Wireless Haptic Interfaces for Virtual and Augmented Reality".

With meticulous design based on structural mechanics, this pioneering skin-integrated VR device comprises hundreds of functional components, including actuators that simulate touch through millimetre-scale mechanical vibration. These components are integrated into a thin, silicone-coated elastomeric layer only 3 mm thick. The device is breathable, reusable and functional through a full range of bending and twisting motions.

Wireless-controlled and power efficient

More importantly, a collection of chip-scale integrated circuits and antennae embedded inside the skin VR device allows it to be powered and controlled wirelessly.

"The haptic actuators can harvest radio frequency power through the large flexible antenna within a certain distance, so the user wearing the device can move freely without the trouble of wires," Dr Yu explained. The system can be operated within a distance of as far as one meter, which is 10 times of existing maximum distance using similar technologies.

Thanks to an advanced mechanical design, the haptic actuators require less than 2 milliwatts to induce a notable sensory vibration, while conventional direct-current-driven actuators need about 100 milliwatts to produce the same level of vibration.

"Thus, we solved the difficult problem of transmission by low-power wireless function and significantly increased the distance of the operation for our system. This system not only saves power but also allows users to move more freely without the trouble of wires," Dr Yu said.

Wide application potential

The team spent about two years developing this wireless skin VR system, drawing on disciplines including mechanical engineering, materials science, biomedicine, physics and chemistry. They are now running application trials in which prosthesis users sense the external environment through touch and receive feedback from the device. "It can help them to feel the external stimulation with their prosthesis, such as the shape or texture of an object," said Dr Yu. In addition, it can be used for developing virtual scenes for clinical applications.

Also, he believes the system can greatly enhance sensory experience in social media interactions, multimedia entertainment, surgical training and beyond.

Fabricating an electronic skin that can sense temperature will be the next step of their research.

Credit: 
City University of Hong Kong

Bot can beat humans in multiplayer hidden-role games

MIT researchers have developed a bot equipped with artificial intelligence that can beat human players in tricky online multiplayer games where player roles and motives are kept secret.

Many gaming bots have been built to keep up with human players. Earlier this year, a team from Carnegie Mellon University developed the world's first bot that can beat professionals in multiplayer poker. DeepMind's AlphaGo made headlines in 2016 for besting a professional Go player. Several bots have also been built to beat professional chess players or join forces in cooperative games such as online capture the flag. In these games, however, the bot knows its opponents and teammates from the start.

At the Conference on Neural Information Processing Systems next month, the researchers will present DeepRole, the first gaming bot that can win online multiplayer games in which the participants' team allegiances are initially unclear. The bot is designed with novel "deductive reasoning" added into an AI algorithm commonly used for playing poker. This helps it reason about partially observable actions, to determine the probability that a given player is a teammate or opponent. In doing so, it quickly learns whom to ally with and which actions to take to ensure its team's victory.

The researchers pitted DeepRole against human players in more than 4,000 rounds of the online game "The Resistance: Avalon." In this game, players try to deduce their peers' secret roles as the game progresses, while simultaneously hiding their own roles. As both a teammate and an opponent, DeepRole consistently outperformed human players.

"If you replace a human teammate with a bot, you can expect a higher win rate for your team. Bots are better partners," says first author Jack Serrino '18, who majored in electrical engineering and computer science at MIT and is an avid online "Avalon" player.

The work is part of a broader project to better model how humans make socially informed decisions. Doing so could help build robots that better understand, learn from, and work with humans.

"Humans learn from and cooperate with others, and that enables us to achieve together things that none of us can achieve alone," says co-author Max Kleiman-Weiner, a postdoc in the Center for Brains, Minds and Machines and the Department of Brain and Cognitive Sciences at MIT, and at Harvard University. "Games like 'Avalon' better mimic the dynamic social settings humans experience in everyday life. You have to figure out who's on your team and will work with you, whether it's your first day of kindergarten or another day in your office."

Joining Serrino and Kleiman-Weiner on the paper are David C. Parkes of Harvard and Joshua B. Tenenbaum, a professor of computational cognitive science and a member of MIT's Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds and Machines.

Deductive bot

In "Avalon," three players are randomly and secretly assigned to a "resistance" team and two players to a "spy" team. Both spy players know all players' roles.

During each round, one player proposes a subset of two or three players to execute a mission. All players simultaneously and publicly vote to approve or disapprove the subset. If a majority approve, the subset's members secretly choose whether the mission will succeed or fail: the mission succeeds only if every member chooses "succeed," and fails if even one selects "fail." Resistance players must always choose success, but spy players may choose either outcome.

The resistance team wins after three successful missions; the spy team wins after three failed missions.

Winning the game basically comes down to deducing who is resistance or spy, and voting for your collaborators. But that's actually more computationally complex than playing chess and poker. "It's a game of imperfect information," Kleiman-Weiner says. "You're not even sure who you're against when you start, so there's an additional discovery phase of finding whom to cooperate with."

DeepRole uses a game-planning algorithm called "counterfactual regret minimization" (CFR) -- which learns to play a game by repeatedly playing against itself -- augmented with deductive reasoning. At each point in a game, CFR looks ahead to create a decision "game tree" of lines and nodes describing the potential future actions of each player. Game trees represent all possible actions (lines) each player can take at each future decision point. In playing out potentially billions of game simulations, CFR notes which actions had increased or decreased its chances of winning, and iteratively revises its strategy to include more good decisions. Eventually, it plans an optimal strategy that, at worst, ties against any opponent.
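
The self-play update at the heart of CFR is regret matching. The sketch below shows it on rock-paper-scissors rather than on "Avalon"'s game tree; it illustrates the principle, not DeepRole's code. Each iteration credits every action with how much better it would have done than the mixed strategy actually played, and shifts probability toward positive-regret actions; the long-run average strategy converges to the game's equilibrium.

```python
# Sketch of regret matching, the self-play update inside CFR, on
# rock-paper-scissors (actions 0, 1, 2). Positive regret for an action means
# "we'd have won more by playing it," so the next strategy leans toward it.
import numpy as np

ACTIONS = 3
# payoff[a, b] = payoff to player 1 for playing a against b.
payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

def strategy_from_regret(regret):
    pos = np.maximum(regret, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(ACTIONS, 1.0 / ACTIONS)

regret = [np.zeros(ACTIONS), np.zeros(ACTIONS)]
strategy_sum = [np.zeros(ACTIONS), np.zeros(ACTIONS)]

for _ in range(20000):
    s = [strategy_from_regret(r) for r in regret]
    u = [payoff @ s[1],        # player 1's expected payoff per action
         -payoff.T @ s[0]]     # player 2's (zero-sum game)
    for p in range(2):
        strategy_sum[p] += s[p]
        # Regret: how much better each pure action does than the mix played.
        regret[p] += u[p] - s[p] @ u[p]

avg = strategy_sum[0] / strategy_sum[0].sum()
print("player 1 average strategy:", np.round(avg, 3))  # ~[0.333, 0.333, 0.333]
```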

CFR works well for games like poker, with public actions -- such as betting money and folding a hand -- but it struggles when actions are secret. The researchers' CFR combines public actions and consequences of private actions to determine if players are resistance or spy.

The bot is trained by playing against itself as both resistance and spy. When playing an online game, it uses its game tree to estimate what each player is going to do. The game tree represents a strategy that gives each player the highest likelihood to win as an assigned role. The tree's nodes contain "counterfactual values," which are basically estimates of the payoff each player receives by playing that given strategy.

At each mission, the bot looks at how each person played in comparison to the game tree. If, throughout the game, a player makes enough decisions that are inconsistent with the bot's expectations, then the player is probably playing as the other role. Eventually, the bot assigns a high probability for each player's role. These probabilities are used to update the bot's strategy to increase its chances of victory.
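
The deductive bookkeeping described here amounts to Bayesian updating. In the toy Python sketch below, the role prior and the action likelihoods are invented for illustration and are not DeepRole's numbers; it simply shows a posterior over one player's hidden role being revised after each observed action.

```python
# Toy version of the role-inference step: a posterior over one player's
# hidden role, updated by Bayes' rule after each observed action.
roles = {"resistance": 0.6, "spy": 0.4}  # prior: 3 of 5 players are resistance

def update(posterior, p_action_given_role):
    """P(role | action) is proportional to P(action | role) * P(role)."""
    post = {r: posterior[r] * p_action_given_role[r] for r in posterior}
    total = sum(post.values())
    return {r: v / total for r, v in post.items()}

# The player approves a team the bot believes is all-resistance:
# plausible for either role, a bit more likely for resistance.
posterior = update(roles, {"resistance": 0.8, "spy": 0.3})
# The player then sits on a mission that fails: very unlikely if resistance.
posterior = update(posterior, {"resistance": 0.05, "spy": 0.7})
print({r: round(p, 3) for r, p in posterior.items()})  # "spy" now dominates
```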

Simultaneously, it uses this same technique to estimate how a third-person observer might interpret its own actions. This helps it estimate how other players may react, helping it make more intelligent decisions. "If it's on a two-player mission that fails, the other players know one player is a spy. The bot probably won't propose the same team on future missions, since it knows the other players think it's bad," Serrino says.

Language: The next frontier

Interestingly, the bot did not need to communicate with other players, which is usually a key component of the game. "Avalon" enables players to chat on a text module during the game. "But it turns out our bot was able to work well with a team of other humans while only observing player actions," Kleiman-Weiner says. "This is interesting, because one might think games like this require complicated communication strategies."

Next, the researchers may enable the bot to communicate during games with simple text, such as saying a player is good or bad. That would involve assigning text to the correlated probability that a player is resistance or spy, which the bot already uses to make its decisions. Beyond that, a future bot might be equipped with more complex communication capabilities, enabling it to play language-heavy social-deduction games -- such as the popular game "Werewolf" -- which involve several minutes of arguing and persuading other players about who's on the good and bad teams.

"Language is definitely the next frontier," Serrino says. "But there are many challenges to attack in those games, where communication is so key."

Credit: 
Massachusetts Institute of Technology

NASA tracks typhoon Kalmaegi affecting northern Philippines

image: On Nov. 19, 2019, the MODIS instrument that flies aboard NASA's Terra satellite provided a visible image of Typhoon Kalmaegi near the Luzon Strait and northern Philippines.

Image: 
NASA Worldview

NASA's Terra satellite captured an image of Typhoon Kalmaegi as it moved into the Luzon Strait and continued to affect the northern Philippines.

On Nov. 19, Kalmaegi's western edge was in the Luzon Strait, while its southern quadrant was over the northern Philippines. The Luzon Strait is located between Taiwan and Luzon, Philippines. The strait connects the Philippine Sea to the South China Sea in the northwestern Pacific Ocean.

Kalmaegi is known locally in the Philippines as Tropical Cyclone Ramon, and there are many warning signals in effect for the northern Philippines.

Signal #3 is in effect for the Luzon provinces of Northern portion of Cagayan (Santa Praxedes), Claveria, Sanchez Mira, Pamplona, Abulug, Ballesteros, Aparri, Calayan, Camalaniugan, Buguey, Santa Teresita, Gonzaga and Santa Ana. Signal #2 is in effect for the Luzon provinces of Batanes, Apayao, Kalinga, Abra, Ilocos Norte & Sur and the rest of Cagayan. Signal #1 is in effect for the Luzon provinces of Northern portion of Isabela (Sta. Maria), San Pablo, Maconacon, Cabagan, Sto. Tomas, Quezon, Delfin Albano, Tumauini, Divilacan, Quirino, Roxas, Mallig, San Manuel, Burgos, Gamu and Ilagan City, Mountain Province, Benguet, Ifugao, La Union, and Pangasinan.

On Nov. 19, the Moderate Resolution Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Terra satellite provided a visible image of Kalmaegi. The MODIS image showed the hint of an oblong eye covered by high clouds. Forecasters at the Joint Typhoon Warning Center noted that the eye had collapsed due to deteriorating environmental conditions.

At 10 a.m. EST (1500 UTC), Typhoon Kalmaegi was located near latitude 19.4 degrees north and longitude 122.5 degrees east, about 301 nautical miles north-northeast of Manila, Philippines. The storm is barely moving, drifting west at just 1 knot (about 1 mph/1.9 kph). Maximum sustained winds were near 75 knots (86 mph/139 kph).

Kalmaegi is turning toward a southwesterly course, which will take it across northwestern Luzon (northern Philippines). The storm will start to weaken, then later weaken rapidly, as it moves into the South China Sea.

NASA's Terra satellite is one in a fleet of NASA satellites that provide data for hurricane research.

Typhoons and hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

Clay as a feed supplement in dairy cattle has multiple benefits

URBANA, Ill. - Dairy producers frequently add clay as a feed supplement to reduce the symptoms of aflatoxin and subacute ruminal acidosis (SARA) in lactating cows. In a new study from the University of Illinois, researchers show that clay can also improve the degradability of feedstuffs.

"Farmers are giving this clay, but they want to know if the corn silage or hay the cow is eating is affected. We found that yes, the clay is changing the way the cow degrades feedstuffs," says Phil Cardoso, associate professor in the Department of Animal Sciences at Illinois and co-author of the Animal Feed Science and Technology study.

Cardoso and his team tested the degradability of six feedstuffs - dried alfalfa hay, grass hay, wet brewer's grains, ground corn, corn silage, and soybean meal - with clay added at 0%, 1%, or 2% of dietary dry matter.

The researchers placed the feedstuffs into mesh bags and inserted them directly into the rumen through a cannula or fistula, a surgically installed portal that allows the contents of the rumen to be sampled for research purposes. The bags were then drawn out at multiple time intervals (two hours to four days) and analyzed.
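
Data from such time-course bag studies are commonly summarized by fitting the classic Ørskov-McDonald degradation curve, p(t) = a + b(1 - e^(-ct)). The sketch below fits that model to invented example measurements; it illustrates the standard analysis, not the study's own data or statistics.

```python
# Fit the Orskov-McDonald in situ degradation model
#     p(t) = a + b * (1 - exp(-c * t))
# where a is the rapidly soluble fraction, b the slowly degradable fraction,
# and c the fractional degradation rate per hour. The measurements below are
# invented example values, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

hours = np.array([2, 4, 8, 16, 24, 48, 96], dtype=float)         # withdrawal times
degraded = np.array([0.22, 0.30, 0.42, 0.55, 0.62, 0.70, 0.73])  # fraction degraded

def degradation(t, a, b, c):
    return a + b * (1.0 - np.exp(-c * t))

(a, b, c), _ = curve_fit(degradation, hours, degraded, p0=(0.2, 0.5, 0.05))
print(f"soluble fraction a = {a:.2f}, degradable fraction b = {b:.2f}, "
      f"rate c = {c:.3f}/h")
```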

"There were some differences in how the feedstuffs degraded over time. When clay was added to grass hay at 2% of dietary dry matter, the digestibility and usage of the fat in that material was maximized. It's better. And we didn't see a decline in degradability of the other feedstuffs, either," Cardoso says. "Overall, to maximize the benefits of clay, we'd recommend adding it at 1 to 2% of dietary dry matter."

Cardoso's previous research has shown that multiple types of clay are effective in handling aflatoxin, a toxic substance produced by fungal contaminants on feed. When the toxin is bound up by the clay, it is simply excreted from the cow's body, rather than being absorbed in the bloodstream. And a 2018 study by Cardoso's team showed that aluminosilicate clay improved cows' immune function and reduced liver inflammation during an aflatoxin challenge.

Cardoso says, "From all of our work on this, I can tell producers whenever they are facing the risk of aflatoxin, they should consider using clay without worrying about it binding other minerals or hindering forage digestibility. Rather, we've shown digestibility could increase. Of course, it's important to ensure the specific clay product has been tested."

Clay's benefits don't stop there. Because the material attracts and binds positively charged ions, clay can make the rumen less acidic. This is important particularly given the popularity of increasing grain concentrates in TMR feed, which can lead to SARA. In a 2016 study from Cardoso's group, cows challenged with excess wheat in a TMR diet produced more and higher-quality milk and had higher rumen pH when fed bentonite clay at 2% of dietary dry matter.

"Basically, clay has all these benefits: It reduces aflatoxin toxicity, works as a pH buffer, and also improves the degradability of some feedstuffs," Cardoso says. "Producers should know this."

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Little-known protein appears to play important role in obesity and metabolic disease

image: Healthy brown fat cells (shown in green) require ample amounts of a molecule called heme, which enables the body to metabolize food properly. The Saez laboratory at Scripps Research has described how this vital, but very toxic, metabolite is safely transported inside cells.

Image: 
Scripps Research

LA JOLLA, CA - With unexpected findings about a protein that's highly expressed in fat tissue, scientists at Scripps Research have opened the door to critical new understandings about obesity and metabolism. Their discovery, which appears Nov. 20 in the journal Nature, could lead to new approaches for addressing obesity and potentially many other diseases.

The signaling protein, known as PGRMC2, had not been extensively studied in the past. Short for "progesterone receptor membrane component 2," it had been detected in the uterus, liver and several other areas of the body. But the lab of Enrique Saez, PhD, saw that it was most abundant in fat tissue--particularly in brown fat, which turns food into heat to maintain body temperature--and became interested in its function there.

An important role: heme's travel guide

The team built on their recent discovery that PGRMC2 binds to and releases an essential molecule called heme. Recently in the spotlight for its role in providing flavor to the plant-based Impossible Burger, heme holds a much more significant role in the body. The iron-containing molecule travels within cells to enable crucial life processes such as cellular respiration, cell proliferation, cell death and circadian rhythms.

Using biochemical techniques and advanced assays in cells, Saez and his team found that PGRMC2 is a "chaperone" of heme, encapsulating the molecule and transporting it from the cell's mitochondria, where heme is created, to the nucleus, where it helps carry out important functions. Without a protective chaperone, heme would react with--and destroy--everything in its path.

"Heme's significance to many cellular processes has been known for a long time," says Saez, associate professor in the Department of Molecular Medicine. "But we also knew that heme is toxic to the cellular materials around it and would need some sort of shuttling pathway. Until now, there were many hypotheses, but the proteins that traffic heme had not been identified."

An innovative approach for obesity?

Through studies involving mice, the scientists established PGRMC2 as the first intracellular heme chaperone to be described in mammals. However, they didn't stop there; they sought to find out what happens in the body if this protein doesn't exist to transport heme.

And that's how they made their next big discovery: Without PGRMC2 present in their fat tissues, mice that were fed a high-fat diet became intolerant to glucose and insensitive to insulin--hallmark symptoms of diabetes and other metabolic diseases. By contrast, obese-diabetic mice that were treated with a drug to activate PGRMC2 function showed a substantial improvement of symptoms associated with diabetes.

"We saw the mice get better, becoming more glucose tolerant and less resistant to insulin," Saez says. "Our findings suggest that modulating PGRMC2 activity in fat tissue may be a useful pharmacological approach for reverting some of the serious health effects of obesity."

The team also evaluated how the protein changes other functions of brown and white fat, says the study's lead author, Andrea Galmozzi, PhD. "The first surprise finding was that the brown fat looked white," he says.

Brown fat, which is normally the highest in heme content, is often considered the "good fat." One of its key roles is to generate heat to maintain body temperature. When mice that were unable to produce PGRMC2 in their fat tissues were placed in a cold environment, their body temperature dropped quickly.

"Even though their brain was sending the right signals to turn on the heat, the mice were unable to defend their body temperature," Galmozzi says. "Without heme, you get mitochondrial dysfunction and the cell has no means to burn energy to generate heat."

Saez believes it's possible that activating the heme chaperone in other organs--including the liver, where a large amount of heme is made--could help mitigate the effects of other metabolic disorders such as non-alcoholic steatohepatitis (NASH), which is a major cause of liver transplantation today.

"We're curious to know whether this protein performs the same role in other tissues where we see defects in heme that result in disease" Saez says.

Credit: 
Scripps Research Institute

Are hiring algorithms fair? They're too opaque to tell, study finds

ITHACA, N.Y. - Time is money and, unfortunately for companies, hiring new employees takes significant time - more than a month on average, research shows.

Hiring decisions are also rife with human bias, leading some organizations to hand off at least part of their employee searches to outside tech companies who screen applicants with machine learning algorithms. If humans have such a hard time finding the best fit for their companies, the thinking goes, maybe a machine can do it better and more efficiently.

But new research from a team of Computing and Information Science scholars at Cornell University raises questions about those algorithms and the tech companies who develop and use them: How unbiased is the automated screening process? How are the algorithms built? And by whom, toward what end, and with what data?

They found companies tend to favor obscurity over transparency in this emerging field, where a lack of consensus on fundamental points - formal definitions of "bias" and "fairness," for starters - has enabled tech companies to define and address algorithmic bias on their own terms.

"I think we're starting to see a growing recognition among creators of algorithmic decision-making tools that they need to be particularly cognizant of how their tools impact people," said Manish Raghavan, a doctoral student in computer science and first author of "Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices," to be presented in January at the Association for Computing Machinery Conference on Fairness, Accountability and Transparency.

"Many of the vendors we encountered in our work acknowledge this (impact) and they're taking steps to address bias and discrimination," Raghavan said. "However, there's a notable lack of consensus or direction on exactly how this should be done."

The researchers scoured available public information to begin to understand these tools and what measures, if any, companies have in place to evaluate and mitigate algorithmic bias. Shielded by intellectual property laws, tech companies don't have to disclose any information about their algorithmic models for pre-employment screenings - though some companies did choose to offer insight.

The researchers homed in on 19 vendors who specialize in algorithmic pre-employment screenings, which, they found, include questions, video interview analysis and games. They combed company websites, webinars and any available documents for insights into vendor claims and practices.

Very few vendors offer concrete information about how they validate their assessments or disclose specifics on how they mitigate algorithmic bias, researchers found.

"Plenty of vendors make no mention of efforts to combat bias, which is particularly worrying since either they're not thinking about it at all, or they're not being transparent about their practices," Raghavan said.

Even if they use such terms as "bias" and "fairness," these can be vague. A vendor can claim its assessment algorithm is "fair" without revealing how the company defines fairness.

It's like "free-range" eggs, Raghavan said: There is a set of conditions under which eggs can be labeled free range, but our intuitive notion of free range may not line up with those conditions.

"In the same way, calling an algorithm 'fair' appeals to our intuitive understanding of the term while only accomplishing a much narrower result than we might hope for," he said.

The team hopes the paper will encourage transparency and conversation around what it means to act ethically in this domain of pre-employment assessments through machine learning.

Given the challenges, could it be that algorithms are just not up to the job of screening applicants? Not so fast, Raghavan said.

"We know from years of empirical evidence that humans suffer from a variety of biases when it comes to evaluating employment candidates," he said. "The real question is not whether algorithms can be made perfect; instead, the relevant comparison is whether they can improve over alternative methods, or in this case, the human status quo.

"Despite their many flaws," he said, "algorithms do have the potential to contribute to a more equitable society, and further work is needed to ensure that we can understand and mitigate the biases they bring."

Credit: 
Cornell University

Laying out directions for future of reliable blood clotting molecule models

image: Researchers review recent work in APL Bioengineering on understanding the behavior of von Willebrand factor, painting a portrait of vWF; by highlighting advances in the field, the authors put forth promising avenues for therapies to control these proteins.

Multiscale modeling of complex blood flow through a microvessel

Image: 
Zixiang Liu

WASHINGTON, D.C., November 19, 2019 -- Blood clots have long been implicated in heart attacks and strokes, together accounting for almost half of deaths annually in the United States. While the role of one key protein in the process, called von Willebrand factor, has been established, a reliable model for predicting how vWF collects in blood vessels remains elusive.

Researchers at the Georgia Institute of Technology published a review of recent work on understanding the behavior of vWF in APL Bioengineering, from AIP Publishing. The paper paints a portrait of vWF, which uncoils under the shear stress of blood flow to form nets that trap platelets passing by, which then form a blood clot, called a thrombus. By highlighting advances in the field, the authors put forth promising avenues for therapies in controlling these proteins.

"The thrombus must block blood flow as it closes off, like trying to use your thumb at the end of a garden hose and then stopping all flow with some mud," said David Ku, an author on the paper. "This is extremely hard to accomplish, so thrombosis requires the fastest, strongest bonds in all of biology."

One challenge is that many of today's experimental models can only image events on the scale of microns every second or so. vWF proteins, however, are approximately one-thousandth of that size, and their interactions occur in one-thousandth of that time.

A variety of computer models have been proposed to bridge the gap from microscale to nanoscale in clot formation, ranging from simulations based on the time it takes for clots to form to computationally intensive models that re-create how platelets, vWF and cells all interact in the bloodstream. The paper calls on researchers across biology, computer science and other areas to collaborate to build an improved model.

In addition to targeting platelet aggregation and high-shear environments that stretch vWF, one potential therapy is to enhance the activity of another protein, ADAMTS13, which cleaves vWF and renders it unable to form clots. While research in mouse models shows promise, much work is still required to determine if ADAMTS13 therapies would be safe or effective for humans.

Ku's own research pointed to negatively charged nanoparticles that computational modeling has shown might keep vWF in its coiled unreactive state. The group found the nanoparticles reduce how quickly vessels become occluded and are exploring how to explain and optimize this process.

Ku said he hopes the paper will inspire others to dive deeper into new ways of measuring and understanding the clot-forming vWF.

Credit: 
American Institute of Physics

Birds of a feather flock together, but how do they decide where to go?

image: Coordinated behavior is common in a variety of biological systems, such as insect swarms, fish schools and bacterial colonies. But the way information is spread and decisions are made in such systems is difficult to understand. A group of researchers from Southeast University and China University of Mining and Technology studied the synchronized flight of pigeon flocks. They used this as a basis to explain the mechanisms behind coordinated behavior, in the journal Chaos.

Image: 
Angie Bandari

WASHINGTON, D.C., November 19, 2019 -- Coordinated behavior is common in a variety of biological systems, such as insect swarms, fish schools and bacterial colonies. But the way information is spread and decisions are made in such systems is difficult to understand.

A group of researchers from Southeast University and China University of Mining and Technology studied the synchronized flight of pigeon flocks. They used this as a basis to explain the mechanisms behind coordinated behavior, in the journal Chaos, from AIP Publishing.

"Understanding the underlying coordination mechanism of these appealing phenomena helps us gain more cognition of the world where we live," said author Duxin Chen, an assistant professor at Southeast University in China.

Previously, it was believed that coordinated behavior is subject to three basic rules: Avoid collision with your peers, match your speed and direction of motion with the rest of the group, and try to stay near the center. The scientists examined how every individual pigeon within a flock is influenced by the other members and found the dynamics are not so simple.

The researchers studied the flights of three flocks of 10 pigeons each. Every bird's position, velocity and acceleration were sampled with time, and the researchers used this data to determine which pigeons have a direct impact on each individual in the group, constructing a causal network that can be used to further observe the deep interaction rules.
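
One common way to extract such directed influence from trajectory data, used widely in the flocking literature though not necessarily identical to this study's causal-network method, is time-delayed directional correlation: if bird j's flight direction is best predicted by bird i's direction some time earlier, i is taken to influence j. A minimal sketch with simulated headings:

```python
# Time-delayed directional correlation on simulated headings. The follower
# copies the leader's flight direction 5 samples later, so the alignment of
# the leader's direction at time t with the follower's at t + tau peaks at
# tau = 5, revealing the direction of influence.
import numpy as np

rng = np.random.default_rng(1)
n_steps, max_delay = 500, 10

leader_heading = np.cumsum(0.3 * rng.standard_normal(n_steps))  # drifting heading
follower_heading = np.roll(leader_heading, 5) + 0.05 * rng.standard_normal(n_steps)

def unit_velocities(heading):
    return np.stack([np.cos(heading), np.sin(heading)], axis=1)

v_lead = unit_velocities(leader_heading)
v_follow = unit_velocities(follower_heading)

def delayed_correlation(v_a, v_b, tau):
    """Mean alignment of a's direction at t with b's direction at t + tau."""
    if tau < 0:
        return delayed_correlation(v_b, v_a, -tau)
    return np.mean(np.sum(v_a[:n_steps - tau] * v_b[tau:], axis=1))

taus = range(-max_delay, max_delay + 1)
best_tau = max(taus, key=lambda t: delayed_correlation(v_lead, v_follow, t))
print(f"best delay: {best_tau} samples (positive => leader influences follower)")
```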

They determined a number of trends in flock motion. Depending on factors like its location in the flock, every pigeon has neighbors it influences as well as neighbors it is influenced by. Additionally, the influencers are likely to change throughout the flight.

"Interestingly, the individuals closer to the mass center and the average velocity direction are more influential to others, which means location and flight direction are two factors that matter in their interactions," Chen said.

Though pigeon social patterns were not considered, the researchers found competition during flight to be intense, and previous work has shown flight hierarchies are independent of pigeon dominance factors.

The authors suggest their method is sufficiently general to study other coordinated behaviors. Next, they plan to focus on the collective behaviors of immune cells.

Credit: 
American Institute of Physics

Predicting Alzheimer's disease-like memory loss before it strikes

image: Emily Jones (left) and Yadong Huang (right) were part of a team that showed how patterns in brain activity can predict Alzheimer's symptoms.

Image: 
Photographer: Lauren Bayless, Gladstone Institutes

SAN FRANCISCO, CA November 19, 2019--For a person with Alzheimer's disease, there's no turning back the clock. By the time she begins to experience memory loss and other worrisome signs, cognitive decline has already set in. And decades of clinical trials have failed to produce treatments that could help her regain her memory.

Today, researchers at Gladstone Institutes are approaching this devastating disease from a different angle. In a new study published in Cell Reports, they demonstrate that particular patterns of brain activity can predict far in advance whether a young mouse will develop Alzheimer's-like memory deficits in old age.

"Being able to predict deficits long before they appear could open up new opportunities to design and test interventions that prevent Alzheimer's in people," said Gladstone Senior Investigator Yadong Huang, senior author of the study.

The new work builds on a 2016 study of mice engineered to carry the gene for apolipoprotein E4 (ApoE4). Carrying the ApoE4 gene is associated with an increased risk--but not a guarantee--of Alzheimer's disease in humans. As they age, ApoE4 mice often, but not always, develop signs of memory loss similar to those seen in people with Alzheimer's.

In the previous study, Huang and his team investigated a type of brain activity called sharp-wave ripples (SWRs), which play a direct role in spatial learning and memory formation in mammals. SWRs occur when the brain of a resting mouse or human rapidly and repeatedly replays a recent memory of moving through a space, such as a maze or a house.

“SWRs have two important measurable components: abundance and short gamma (SG) power,” said Emily Jones, PhD, lead author of the new study and recent graduate of UC San Francisco’s (UCSF) Biomedical Sciences Graduate Program. “Broadly, SWR abundance predicts how quickly an ApoE4 mouse can learn and memorize how to get through a maze, and SG power predicts how accurate that memory will be.”

The earlier study revealed that aging ApoE4 mice have lower SWR abundance and weaker SG power than seen in healthy aging mice. Based on those results, Jones and her colleagues hypothesized that measuring SWR activity could predict the severity of demonstrable memory problems in ApoE4 mice during aging.

To test this idea, the researchers first recorded SWR activity in aging ApoE4 mice at rest. One month later, they had the mice perform spatial tasks to test their memory. They found that mice with fewer SWRs and lower SG power were indeed more likely to have worse spatial memory deficits.

“We actually successfully replicated this experiment 2 years later with different mice,” said Huang, who is also a professor of Neurology and Pathology at UCSF. “What was striking is that we were able to use the results from the first cohort to predict with high accuracy the extent of learning and memory deficits in the second cohort, based on their SWR activity.”
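
Schematically, the cross-cohort test amounts to fitting a simple predictive model in one cohort and evaluating it on another. The sketch below does this with simulated SWR features (abundance and SG power) and memory scores; the numbers, effect sizes, and model are illustrative assumptions, not the study's data or analysis.

```python
# Fit a linear model mapping SWR features to a memory score in one simulated
# cohort, then test it on a second. Feature names follow the study (SWR
# abundance, short-gamma power); the data themselves are invented.
import numpy as np

rng = np.random.default_rng(2)

def make_cohort(n):
    abundance = rng.normal(1.0, 0.3, n)      # SWR events per unit time (a.u.)
    sg_power = rng.normal(1.0, 0.3, n)       # short-gamma power (a.u.)
    # Assumption: fewer SWRs and weaker SG power mean a worse (higher) score.
    memory_error = 2.0 - 0.8 * abundance - 0.6 * sg_power + rng.normal(0, 0.2, n)
    X = np.column_stack([abundance, sg_power, np.ones(n)])
    return X, memory_error

X1, y1 = make_cohort(20)                     # cohort 1: fit the model
coef, *_ = np.linalg.lstsq(X1, y1, rcond=None)

X2, y2 = make_cohort(20)                     # cohort 2: held-out evaluation
r = np.corrcoef(X2 @ coef, y2)[0, 1]
print(f"cross-cohort correlation, predicted vs. observed: {r:.2f}")
```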

Even more striking were the unexpected results of the team's next experiment.

The researchers were curious how SWR activity evolves over a mouse's lifetime, which no one had previously investigated. So, they periodically measured SWRs in ApoE4 mice from an early age--long before memory deficits appeared--through middle age, and into old age.

"We thought that, if we got lucky, the SWR measurements we took when the mice were middle aged might have some predictive relationship to later memory problems," Jones said.

Surprisingly, the analysis revealed that deficits in SWR abundance and SG power at an early age predicted which mice performed worse on memory tasks 10 months later--the equivalent of 30 years for a human.

"We were not betting on these results, the idea that young mice with no memory problems already have the seed of what's going to lead to deficits in old age," Jones said. "Although we would love to, but we thought it would be ridiculous to be able to predict so far in advance."

Since SWRs are also found in humans, these findings suggest that SWR abundance and SG power could potentially serve as early predictors of Alzheimer's disease, long before memory problems arise.

As a next step toward evaluating that possibility, Huang will work with colleagues at the UCSF Memory and Aging Center to determine whether SWRs in Alzheimer's patients show deficits in abundance and SG power similar to those seen in mouse models of the disease.

"A major advantage of this approach is that researchers have recently developed a noninvasive technique for measuring SWRs in people, without implanting electrodes in the brain," Huang said.

If SWRs are indeed predictive of Alzheimer's in humans, measuring them could boost research and drug development efforts in two important ways. First, SWR measurements could be used to select participants for clinical trials of drugs meant to stave off Alzheimer's: enrolling patients who already show SWR deficits would enhance the trials' statistical power. Second, because SWR measurements can be taken repeatedly and noninvasively, researchers could track drug effects over time, even before memory deficits appear.
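
To see why enrichment would boost statistical power, consider a standard sample-size calculation: if screening for SWR deficits selects a subgroup in which the expected drug effect is larger, the same power is reached with far fewer participants. Below is a minimal sketch using statsmodels; the effect sizes are invented for illustration and are not estimates from the study.

```python
# Illustration of why enrolling participants with SWR deficits could shrink
# trial size: a larger expected effect (Cohen's d) needs fewer subjects.
# The effect sizes below are hypothetical, not estimates from the study.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
for d in (0.3, 0.5):  # assumed: unselected vs. SWR-deficit-enriched population
    n = power.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect size d={d}: ~{n:.0f} participants per arm")
```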

Huang emphasizes the value of SWRs as a functional predictor--one that directly measures the decline in brain function seen in Alzheimer's--as opposed to a pathological change that appears only as a consequence of the underlying disease.

"I feel strongly that Alzheimer's research should not just focus on pathology, but use functional alterations like SWR deficits to guide research and drug development," he said. "Our new findings support this kind of approach."

The new study is just one facet of Gladstone's extensive Alzheimer's research program. "Gladstone provides a unique setting that makes it possible to do the kind of translational research necessary to improve understanding and treatment of this disease," Huang said.

Credit: 
Gladstone Institutes

Cell death or cancer growth: A question of cohesion

Activation of CD95, a receptor found on all cancer cells, triggers programmed cell death--or does the exact opposite and stimulates cancer cell growth. Scientists from the German Cancer Research Center (DKFZ) have now shown that the outcome of CD95 activation depends on whether the cancer cells are isolated or organized into three-dimensional structures. Isolated cells are programmed to die following CD95 activation; in clusters of cancer cells, for example in solid tumors, CD95 activation instead stimulates growth. This finding points to new ways of deliberately converting growth-stimulating signals into cell death signals for cancer cells.

The receptor protein CD95 is exposed on the surface of all cancer cells like small antennae. Activation of the receptor by the CD95 ligand (CD95L) triggers apoptosis in the cancer cell--or does the exact opposite: "We studied various types of cancer tissue and found that, under natural conditions, CD95 activation usually stimulates tumor growth," remarked Ana Martin-Villalba, who has been studying the role of CD95 at DKFZ for many years. She was the first to describe the cancer-promoting effect of CD95 in glioblastomas (malignant brain tumors).

Researchers are investing considerable effort in finding out how medicine could harness this other, deadly side of CD95 to kill cancer cells in a targeted way. To that end, Martin-Villalba and her team set out to understand which factors decide whether CD95 activation leads to cell death or cell growth.

The DKFZ team collaborated with Motomu Tanaka of the University of Heidelberg to develop artificial cell membranes into which they could insert the CD95 ligand at any desired density. Using this method, they discovered that a particular spacing between individual ligand molecules was necessary for ideal activation of CD95--and for actually inducing cell death in cells isolated from biopsies of pancreatic cancers or glioblastomas and grown in a petri dish.
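
To get a feel for what a given spacing between ligand molecules implies in terms of surface density, spacing can be converted into molecules per unit area. The sketch below assumes a simple square-lattice arrangement and purely illustrative spacings; the actual geometry and distances used in the study are not given here.

```python
# Convert an assumed ligand spacing to surface density (square lattice):
# density = 1 / spacing^2. The spacing values are illustrative only.
for spacing_nm in (5, 10, 20):
    density_per_um2 = 1.0 / (spacing_nm * 1e-3) ** 2  # molecules per square micron
    print(f"spacing {spacing_nm:3d} nm -> {density_per_um2:,.0f} ligands per square micron")
```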

The researchers then assumed they had found the perfect way to make tumor cells in the body die and extended their experiments to brain tumors in mice. They gave the animals latex beads carrying the ideal surface density of CD95 ligands. However, instead of the expected reduction in tumor mass, the opposite occurred: tumor growth accelerated.

To clarify the apparent discrepancy between the cell culture and animal experiments, the researchers turned to tumorspheres, tiny tumors grown in culture. In these cell spheroids, which behave like natural tumor tissue, CD95 activation via the artificial cell membranes stimulated growth.

"The impact of CD95 activation - cell death or growth - appears to depend primarily on whether there are isolated cancer cells, as grown in culture, or cells in three-dimensional structures," Gülce Gülculer from Martin-Villalba's team explained. Individual cells are programmed to die following CD95 activation. In natural conditions, however, namely in a tissue structure, CD95 activation stimulates growth. In Gülculer's experiments, even contact to a single neighboring cell was enough to protect tumor cells from CD95-induced cell death.

"The result will enable us to develop new strategies to transform the growth-stimulating signals of CD95 into cell death signals for the cancer cells. This could help us stop tumor cells becoming resistant to treatment," study director Martin-Villalba added.

A clinical phase II study conducted several years ago already showed that blocking the CD95 signal in combination with radiotherapy could improve survival in patients with advanced glioblastomas. The study used a substance that Ana Martin-Villalba played a key role in developing. "Our current findings provide, for the first time, an explanation of why blocking CD95 can actually slow down tumor growth," Martin-Villalba explained.

Gülce Gülcüler Balta, Cornelia Monzel, Susanne Kleber, Joel Beaudouin, Emre Balta, Thomas Kaindl, Si Chen, Liang Gao, Meinolf Thiemann, Christian R. Wirtz, Yvonne Samstag, Motomu Tanaka and Ana Martin-Villalba: 3D cellular architecture modulates tyrosine kinase activity, thereby switching CD95-mediated apoptosis to survival. Cell Reports 2019; DOI: 10.1016/j.celrep.2019.10.054

Why animal experiments are vital in cancer research

Death signals or growth stimulators: until now, scientists did not know why CD95 activation can have such different outcomes in cancer cells. As a result, they were unable to pursue further targeted research into this signaling system, which holds great promise for cancer treatment, in ways that would benefit patients.

It was the contrasting outcomes of the experiments on cancer cells grown in culture and on brain tumors in mice that led the researchers to the explanation. Using tumorspheres, they were subsequently able to confirm the effect of cell contact in a controlled experimental setting and to rule out other influencing factors, such as the immune system.

This finding will enable the therapeutic potential of blocking CD95 to be harnessed in a much more targeted way.

Credit: 
German Cancer Research Center (Deutsches Krebsforschungszentrum, DKFZ)

State abortion conscience laws

What The Study Did: This study examined state laws that grant individuals and institutions rights to refuse participation in abortion based on their beliefs, that grant immunity from liability for such refusals, and that limit conscience rights when patient safety is at risk.


Authors: Nadia N. Sawicki, J.D., M.Be., of the Loyola University Chicago School of Law in Chicago, is the corresponding author.

(doi:10.1001/jama.2019.15105)

Editor's Note: The article includes conflict of interest disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network