Tech

Flexible organic electronics mimic biological mechanosensory nerves

video: We connected an artificial mechanosensory nerve to the biological motor nerves of a cockroach leg, as in Fig. 3B, to make a hybrid reflex arc. Pressure applied to the artificial mechanosensory nerve generates output signals, which stimulate the disabled cockroach leg and actuate it.

Image: 
Yeongin Kim (Stanford University), Zhenan Bao (Stanford University), Tae-Woo Lee (Seoul National University)

Researchers at Seoul National University and Stanford University developed artificial mechanosensory nerves using flexible organic devices to emulate biological sensory afferent nerves. They used the artificial mechanosensory nerves to control a disabled insect leg and distinguish braille characters.

Compared to conventional digital computers, biological nervous systems are powerful at real-world problems such as visual image processing, voice recognition, tactile sensing, and movement control. This has inspired scientists and engineers to work on neuromorphic computing, bioinspired sensors, robot control, and prosthetics. Previous approaches relied on software implementations on conventional digital computers and on circuit designs using classical silicon devices, which have shown critical issues related to power consumption, cost, and multifunctionality.

The research describes artificial mechanosensory nerves based on flexible organic devices to emulate biological mechanosensory nerves. "The recently found mechanisms of information processing in biological mechanosensory nerves were adopted in our artificial system," said Zhenan Bao at Stanford University.

The artificial mechanosensory nerves are composed of three essential components: mechanoreceptors (resistive pressure sensors), neurons (organic ring oscillators), and synapses (organic electrochemical transistors). The pressure information from artificial mechanoreceptors can be converted to action potentials through artificial neurons. Multiple action potentials can be integrated into an artificial synapse to actuate biological muscles and recognize braille characters.
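
As a rough, purely conceptual sketch of this three-stage signal flow (not the authors' device models; the function names, saturation constants, and time constants below are assumptions chosen for illustration), pressure can be mapped to a firing rate and those spikes accumulated in a leaky integrator standing in for the synaptic transistor:

```python
def receptor_to_rate(pressure_kpa, max_rate_hz=120.0, half_kpa=20.0):
    """Hypothetical mechanoreceptor + ring-oscillator stage:
    pressure (kPa) -> action-potential rate (Hz), saturating response."""
    return max_rate_hz * pressure_kpa / (pressure_kpa + half_kpa)

def synapse_response(rate_hz, duration_s=1.0, dt=0.001, tau_s=0.1):
    """Leaky integrator standing in for the synaptic transistor:
    accumulates incoming spikes and decays with time constant tau_s."""
    out, level = [], 0.0
    spike_per_step = rate_hz * dt            # expected spikes per time step
    for _ in range(int(duration_s / dt)):
        level += spike_per_step              # drive from presynaptic spikes
        level -= level * dt / tau_s          # exponential decay
        out.append(level)
    return out

for p in (5, 20, 80):                        # light, moderate, firm pressure (kPa)
    rate = receptor_to_rate(p)
    peak = max(synapse_response(rate))
    print(f"{p:>3} kPa -> {rate:5.1f} Hz -> synaptic drive {peak:.2f}")
```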

Devices that mimic the signal processing and functionality of biological systems can simplify the design of bioinspired systems or reduce power consumption. The researchers said organic devices are advantageous because their functional properties can be tuned, they can be printed over a large area at low cost, and they are flexible like soft biological systems.

Wentao Xu, a researcher at Seoul National University, and Yeongin Kim and Alex Chortos, graduate students at Stanford University, used their artificial mechanosensory nerves to detect large-scale textures and object movements and to distinguish braille characters. They also connected the artificial mechanosensory nerves to motor nerves in a detached insect leg and controlled its muscles.

Tae-Woo Lee, a professor at Seoul National University, said, "Our artificial mechanosensory nerves can be used for bioinspired robots and prosthetics compatible with and comfortable for humans." Lee added, "The development of human-like robots and prosthetics that help people with neurological disabilities can benefit from our work."

Credit: 
Seoul National University

An artificial nerve system gives prosthetic devices and robots a sense of touch

Stanford and Seoul National University researchers have developed an artificial sensory nerve system that can activate the twitch reflex in a cockroach and identify letters in the Braille alphabet.

The work, reported May 31 in Science, is a step toward creating artificial skin for prosthetic limbs, to restore sensation to amputees and, perhaps, one day give robots some type of reflex capability.

"We take skin for granted but it's a complex sensing, signaling and decision-making system," said Zhenan Bao, a professor of chemical engineering and one of the senior authors. "This artificial sensory nerve system is a step toward making skin-like sensory neural networks for all sorts of applications."

Building blocks

This milestone is part of Bao's quest to mimic how skin can stretch, repair itself and, most remarkably, act like a smart sensory network that knows not only how to transmit pleasant sensations to the brain, but also when to order the muscles to react reflexively to make prompt decisions.

The new Science paper describes how the researchers constructed an artificial sensory nerve circuit that could be embedded in a future skin-like covering for neuro-prosthetic devices and soft robotics. This rudimentary artificial nerve circuit integrates three previously described components.

The first is a touch sensor that can detect even minuscule forces. This sensor sends signals through the second component - a flexible electronic neuron. The touch sensor and electronic neuron are improved versions of inventions previously reported by the Bao lab.

Sensory signals from these components stimulate the third component, an artificial synaptic transistor modeled after human synapses. The synaptic transistor is the brainchild of Tae-Woo Lee of Seoul National University, who spent his sabbatical year in Bao's Stanford lab to initiate the collaborative work.

"Biological synapses can relay signals, and also store information to make simple decisions," said Lee, who was a second senior author on the paper. "The synaptic transistor performs these functions in the artificial nerve circuit."

Lee used a knee reflex as an example of how more-advanced artificial nerve circuits might one day be part of an artificial skin that would give prosthetic devices or robots both senses and reflexes.

In humans, when a sudden tap causes the knee muscles to stretch, certain sensors in those muscles send an impulse through a neuron. The neuron in turn sends a series of signals to the relevant synapses. The synaptic network recognizes the pattern of the sudden stretch and emits two signals simultaneously, one causing the knee muscles to contract reflexively and a second, less urgent signal to register the sensation in the brain.

Making it work

The new work has a long way to go before it reaches that level of complexity. But in the Science paper, the group describes how the electronic neuron delivered signals to the synaptic transistor, which was engineered in such a way that it learned to recognize and react to sensory inputs based on the intensity and frequency of low-power signals, just like a biological synapse.

The group members tested the ability of the system to both generate reflexes and sense touch.

In one test they hooked up their artificial nerve to a cockroach leg and applied tiny increments of pressure to their touch sensor. The electronic neuron converted the sensor signal into digital signals and relayed them through the synaptic transistor, causing the leg to twitch more or less vigorously as the pressure on the touch sensor increased or decreased.

They also showed that the artificial nerve could detect various touch sensations. In one experiment the artificial nerve was able to differentiate Braille letters. In another, they rolled a cylinder over the sensor in different directions and accurately detected the direction of the motion.

Bao's graduate students Yeongin Kim and Alex Chortos, plus Wentao Xu, a researcher from Lee's own lab, were also central to integrating the components into the functional artificial sensory nervous system.

The researchers say artificial nerve technology remains in its infancy. For instance, creating artificial skin coverings for prosthetic devices will require new devices to detect heat and other sensations, the ability to embed them into flexible circuits and then a way to interface all of this to the brain.

The group also hopes to create low-power, artificial sensor nets to cover robots, the idea being to make them more agile by providing some of the same feedback that humans derive from their skin.

Credit: 
Stanford University

Surgical technique improves sensation, control of prosthetic limb

Boston, MA -- Humans can accurately sense the position, speed and torque of their limbs, even with their eyes shut. This sense, known as proprioception, allows humans to precisely control their body movements. Despite significant improvements to prosthetic devices in recent years, researchers have been unable to provide this essential sensation to people with artificial limbs, limiting their ability to accurately control their movements. Researchers at the Center for Extreme Bionics at the MIT Media Lab have invented a new neural interface and communication paradigm that is able to send movement commands from the central nervous system to a robotic prosthesis, and relay proprioceptive feedback describing movement of the joint back to the central nervous system in return. This new paradigm, known as the agonist-antagonist myoneural interface (AMI), involves a novel surgical approach to limb amputation in which dynamic muscle relationships are preserved within the amputated limb. The AMI was validated in extensive pre-clinical experimentation at MIT, prior to its first surgical implementation in a human patient at Brigham and Women's Faulkner Hospital.

In a paper published today in Science Translational Medicine, the researchers describe the first human implementation of the agonist-antagonist myoneural interface (AMI), in a person with below-knee amputation. The paper represents the first time information on joint position, speed and torque has been fed from a prosthetic limb into the nervous system, according to senior author and project director Hugh Herr, a professor of media arts and sciences at the MIT Media Lab. "Our goal is to close the loop between the peripheral nervous system's muscles and nerves, and the bionic appendage," says Herr.

To do this, the researchers used the same biological sensors that create the body's natural proprioceptive sensations. The AMI consists of two opposing muscle-tendons, known as an agonist and an antagonist, which are surgically connected in series so that when one muscle contracts and shortens - upon either volitional or electrical activation - the other stretches, and vice versa. This coupled movement enables natural biological sensors within the muscle-tendon to transmit electrical signals to the central nervous system, communicating muscle length, speed and force information, which is interpreted by the brain as natural joint proprioception. This is how muscle-tendon proprioception works naturally in human joints, Herr says. "Because the muscles have a natural nerve supply, when this agonist-antagonist muscle movement occurs information is sent through the nerve to the brain, enabling the person to feel those muscles moving, both their position, speed and load," he says. By connecting the AMI with electrodes, the researchers can detect electrical pulses from the muscle, or apply electricity to the muscle to cause it to contract. "When a person is thinking about moving their phantom ankle, the AMI that maps to that bionic ankle is moving back and forth, sending signals through the nerves to the brain, enabling the person with an amputation to actually feel their bionic ankle moving throughout the whole angular range," Herr says.
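
The paper's actual control electronics are not described here, but as a generic, hedged illustration of how detected muscle signals can be turned into a prosthesis command, the sketch below rectifies and smooths a surface-EMG-like signal into an activation envelope and scales it to a hypothetical ankle angle; the synthetic signal, window length, and scaling are all assumptions, not the MIT team's controller.

```python
import numpy as np

def emg_envelope(emg, fs_hz=1000, window_ms=150):
    """Rectify the signal and smooth with a moving average to obtain
    a muscle-activation envelope (a common, generic approach)."""
    rectified = np.abs(emg - np.mean(emg))          # remove offset, rectify
    n = max(1, int(fs_hz * window_ms / 1000))
    kernel = np.ones(n) / n
    return np.convolve(rectified, kernel, mode="same")

def envelope_to_command(envelope, max_activation, max_angle_deg=30.0):
    """Map normalized activation to a hypothetical ankle-angle command."""
    return np.clip(envelope / max_activation, 0, 1) * max_angle_deg

# Synthetic example: 2 s of baseline noise with a burst of 'muscle activity'.
fs = 1000
t = np.arange(0, 2, 1 / fs)
emg = 0.05 * np.random.randn(t.size)
emg[800:1200] += 0.5 * np.sin(2 * np.pi * 80 * t[800:1200])

env = emg_envelope(emg, fs)
angle = envelope_to_command(env, max_activation=env.max())
print(f"peak commanded angle: {angle.max():.1f} degrees")
```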

Decoding the electrical language of proprioception within nerves is extremely difficult, according to Tyler Clites, first author of the paper and graduate student lead on the project. "Using this approach, rather than needing to speak that electrical language ourselves, we use these biological sensors to speak the language for us," Clites says. "These sensors translate mechanical stretch into electrical signals that can be interpreted by the brain as sensations of position, speed and force."

The AMI was first implemented surgically in a human patient at Brigham and Women's Faulkner Hospital, Boston, by Matthew J Carty, MD, one of the paper's authors, a surgeon in the Division of Plastic and Reconstructive Surgery and an MIT research scientist. In this operation, two AMIs were constructed in the residual limb at the time of primary below-knee amputation, with one AMI to control the prosthetic ankle joint, and the other to control the prosthetic subtalar joint.

"We knew that in order for us to validate the success of this new approach to amputation, we would need to couple the procedure with a novel prosthesis that could take advantage of the additional capabilities of this new type of residual limb," Carty says. "Collaboration was critical, as the design of the procedure informed the design of the robotic limb, and vice versa." Towards this end, an advanced prosthetic limb was built at MIT and electrically linked to the patient's peripheral nervous system using electrodes placed over each AMI muscle following the amputation surgery. The researchers then compared the movement of the AMI patient with that of four people who had undergone a traditional below-knee amputation procedure, using the same advanced prosthetic limb. They found that the AMI patient had more stable control over movement of the prosthetic device, and was able to move more efficiently than those with the conventional amputation. They also found that the AMI patient quickly displayed natural, reflexive behaviors such as extending the toes towards the next step when walking down a set of stairs.

These behaviors are essential to natural human movement, and were absent in all of the people who had undergone a traditional amputation. What's more, while the patients with conventional amputation reported feeling disconnected from the prosthesis, the AMI patient quickly described feeling that the bionic ankle and foot had become a part of their own body. "This is pretty significant evidence that the brain and the spinal cord in this patient adopted the prosthetic leg as if it were his biological limb, enabling those biological pathways to become active once again," Clites says. "We believe proprioception is fundamental to that adoption."

It is difficult for an individual with a lower limb amputation to gain a sense of embodiment with their artificial limb, according to Daniel Ferris, the Robert W. Adenbaum Professor of Engineering Innovation at the University of Florida, who was not involved in the research. "This is groundbreaking. The increased sense of embodiment by the amputee subject is a powerful result of having better control of and feedback from the bionic limb," Ferris says. "I expect that we will see individuals with traumatic amputations start to seek out this type of surgery and interface for their prostheses - it could provide a much greater quality of life for amputees."

The researchers have since carried out the AMI procedure on nine other below-knee amputees, and are planning to adapt the technique for those needing above-knee, below-elbow and above-elbow amputations.

"Previously humans have used technology in a tool-like fashion," Herr says. "We are now starting to see a new era of human-device interaction, of full neurological embodiment, in which what we design becomes truly part of us, part of our identity."

The current study has its roots in a research project that received initial funding in 2014, when Carty was selected as the winner of the inaugural Stepping Strong Innovator Awards granted by The Gillian Reny Stepping Strong Center for Trauma Innovation. The awards and center were established by a family that survived the Boston Marathon bombings and committed to support groundbreaking projects in innovative trauma research and care. This support allowed the team to quickly focus on all of the foundational work that was necessary to prepare in advance of taking this innovation to the operating room.

Funding for this work was also provided by MIT Media Lab Consortia and a generous gift from Google, Inc. Prosthetic design and fabrication funded in part by US Army MRMC (W81XWH-14-C-0111).

Brigham and Women's Hospital (BWH) is a 793-bed nonprofit teaching affiliate of Harvard Medical School and a founding member of Partners HealthCare. BWH has more than 4.2 million annual patient visits and nearly 46,000 inpatient stays, is the largest birthing center in Massachusetts and employs nearly 16,000 people. The Brigham's medical preeminence dates back to 1832, and today that rich history in clinical care is coupled with its national leadership in patient care, quality improvement and patient safety initiatives, and its dedication to research, innovation, community engagement and educating and training the next generation of health care professionals. Through investigation and discovery conducted at its Brigham Research Institute (BRI), BWH is an international leader in basic, clinical and translational research on human diseases, with more than 3,000 researchers, including physician-investigators and renowned biomedical scientists and faculty, supported by nearly $666 million in funding. For the last 25 years, BWH has ranked second in research funding from the National Institutes of Health (NIH) among independent hospitals. BWH is also home to major landmark epidemiologic population studies, including the Nurses' and Physicians' Health Studies and the Women's Health Initiative as well as the TIMI Study Group, one of the premier cardiovascular clinical trials groups. For more information, resources and to follow us on social media, please visit BWH's online newsroom.

Credit: 
Brigham and Women's Hospital

NASA finds Subtropical Depression Alberto's center over Indiana

image: NASA's Terra satellite captured an infrared image of Subtropical Depression Alberto on May 30 at 12:15 p.m. EDT (1645 UTC) in the Ohio Valley. Its center was near Indianapolis, Indiana. Terra showed strongest storms (yellow) had cloud top temperatures as cold as or colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius).

Image: 
Credits: NASA/NRL

NASA's Terra satellite provided infrared data on Subtropical Depression Alberto when it was centered over Indiana and as it moved through the Ohio Valley.

On May 30, the National Weather Service (NWS) was issuing Flash Flood Warnings for portions of western Kentucky as well as from extreme northeast Georgia to western North Carolina. Flash Flood Watches were in effect for portions of the southern Appalachians and Lower Ohio Valley.

At 11 a.m. EDT the center of Subtropical Depression Alberto was located near latitude 38.7 degrees north and longitude 87.4 degrees west. That's about 80 miles (129 km) southwest of Indianapolis, Indiana. The depression is moving toward the north-northeast near 17 mph (28 kph) and this motion is expected to accelerate today. Maximum sustained winds are near 30 mph (45 kph) with higher gusts.

The NWS Weather Prediction Center in College Park, Maryland said the system will transition to an extratropical cyclone as the remnant circulation comes under the influence of an upper level trough moving across the Great Lakes and southern Canada through Thursday.

The Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Terra satellite captured an infrared image of Subtropical Depression Alberto on May 30 at 12:15 p.m. EDT (1645 UTC) in the Ohio Valley. Terra showed strongest storms had cloud top temperatures as cold as or colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius). Storms with cloud tops that cold have been shown to generate heavy rain.

NWS said "Additional rainfall of 2 to 4 inches, with isolated higher amounts, will be possible northward along the Illinois/Indiana border into the Lower Peninsula of Michigan. Flash flooding and rapid water rises on creeks and streams will remain a possibility across these areas."

Credit: 
NASA/Goddard Space Flight Center

After 40 years in limbo: Styrene is probably carcinogenic

"Possibly carcinogenic and should be investigated more closely." For forty years, this has been the conclusion of researchers who have been unsure of whether there is an increased risk of cancer associated with styrene. But now an impartial working group under the auspices of WHO and appointed by the International Agency for Research on Cancer (IARC), has upgraded the warning. Styrene is upgraded from possibly carcinogenic to probably carcinogenic for humans, and the decision is largely based on register-based studies from Aarhus together with new animal evidence.

The new announcement from the World Health Organisation will be published by IARC as a Monograph, authored by 23 handpicked researchers from around the world, including Professor Henrik Kolstad. He is a professor of occupational medicine at the Department of Clinical Medicine at Aarhus University and at Aarhus University Hospital. He is pleased that researchers in Denmark are able to do something that no other country can:

"The reason for my presence in the working group is our register-based research, which is unique throughout the world, and where the most recent styrene study shows the risk of acute myeloid leukaemia, a rare form of leukemia, is doubled. Out of the more than 70,000 people included in the research project, we found 25 cases of acute myeloid leukaemia, where you would statistically expect to find 10," says Professor Henrik Kolstad with reference to the research article 'Styrene exposure and risk of lymphohematopoietic malignancies in 73,036 reinforced plastics workers', published in Epidemiology.

Another important research result is a fivefold increase in risk of a particular type of nasal cancer following styrene exposure. This part of the study is published in Occupational and Environmental Medicine in the article 'Sinonasal adenocarcinoma following styrene exposure in the reinforced plastics industry'.

Styrene is included in synthetic rubber, some insulation materials, disposable tableware, packaging and fiberglass plastic.

In March, the 23 researchers in the WHO working group spent ten intensive working days in Lyon, where they had the task of reviewing and re-evaluating the carcinogenic risk based on the latest research on styrene exposure which includes epidemiological studies of humans together with animal experiments and what are known as mechanism studies. The latter cover cause-effect studies based on biological material.

The issue of styrene and cancer has been a priority for IARC ever since an accumulation of leukaemia cases among employees in the American synthetic rubber industry was seen in the 1970s. However, it was not possible to establish whether the American workers contracted leukaemia from handling styrene or from butadiene, a chemical, which is styrene's permanent companion in the production of synthetic rubber - and this is where the Danish register-based studies come into the picture.

"Clearly, the best place to study the possible health effects of styrene exposure is in the reinforced plasticsindustry, where no butadiene is involved. We have therefore mapped the cancer incidence for those who worked in companies that used styrene in the production during the period 1968-2011," explains Henrik Kolstad on the background for what has ended up being the world's largest epidemiological study of styrene exposure in the reinforced plastics industry.

In the research project, PhD student Mette Skovgaard Christensen, Henrik Kolstad and their research colleagues followed 73,036 employees who during the period 1968-2011 worked in one of the 456 small and medium-sized companies in Denmark that have used styrene in the production of e.g. wind turbines or yachts.

The study involved a comprehensive linkage of registers, in which the researchers used the central business register, along with various other company registers, to identify the relevant companies and their employees. This information was then linked with the Danish Cancer Register to assess the incidence of different types of cancer among the employees compared with the general population's risk of developing the same diseases.

One of the reasons why the results carry a lot of weight with the WHO is that Kolstad and his colleagues have been able to compare the research data from the cancer patients with measurements of styrene exposure in the Danish reinforced plastics industry over time.

"That part of the study is primarily based on measurements that the Danish Working Environment Authority carried out a number of years ago, since the Danish Working Environment Authority has not carried out measurements of styrene over the past several years. Many companies take measurements, but they are not publicly available. We've requested the relevant data in anonymised form from the companies that analyse the measurements , but unfortunately, we were unable to gain access to them," says Henrik Kolstad

"This is deeply regrettable, because the fact is that good answers to important questions presupposes that we also have access to the relevant information in the future."

Henrik Kolstad emphasizes that the Danish register findings reflect the sins of the past. Significant improvements have been made to the working environments in the Danish reinforced plastics industry in recent years, but globally the problem has not been solved.

Credit: 
Aarhus University

New guidance on treating diabetes in elderly and frail adults

New guidance has been published on managing diabetes in the elderly, including for the first time how to manage treatment for the particularly frail.

The guidance was produced from a collaboration between experts in diabetes medicine, primary care and geriatric medicine, led by Dr David Strain at the University of Exeter Medical School.

It will advise clinicians on helping elderly people with type 2 diabetes get the most out of treatment options, and for the first time contains guidance on how and when to stop diabetes treatments in particularly frail adults.

Dr Strain said: "Older adults have been systematically excluded from clinical trials and have very different ambitions from their diabetes management. This guidance puts the older person with diabetes firmly back at the centre of target setting, ensuring that appropriate goals are agreed to achieve the best quality of life possible, without continuing treatments that would not provide any benefit and potentially cause harm."

The research was carried out in collaboration with NHS England and was published last month in Diabetic Medicine, the journal of Diabetes UK.

The report authors hope it will ultimately be incorporated into national guidance advising GPs on the management of type 2 diabetes in elderly adults, with the aim of reducing complications and improving quality of life.

The guidance will be adopted across Devon immediately. The authors hope local health and care commissioners will adopt and implement these principles in their own areas.

Pav Kalsi, Senior Clinical Advisor at Diabetes UK, said: "People with diabetes rightly deserve to have access to the right care and support at every stage of their life, and that means the care they receive needs to be adapted and tailored to suit each individual's changing needs. For example, those who are elderly, and potentially frail, often have different priorities, such as safety and quality of life.

"We're really pleased that these new guidelines will, for the first time, help healthcare professionals give this tailored support and will help them review and decide whether to stop diabetes treatment for particularly frail adults.

"In the future, we hope these guidelines will have a positive impact on the lives of older people with diabetes."

The paper, 'Type 2 diabetes mellitus in older people: a brief statement of key principles of modern day management including the assessment of frailty. A national collaborative stakeholder initiative', is available online.

Credit: 
University of Exeter

Radish cover crop traps nitrogen; mystery follows

image: Experimental strip of radish cover crop planted following winter wheat, Sheboygan County, Wis.

Image: 
Matt Ruark.

When you think of a radish, you may think of the small, round, crunchy, red-and-white vegetable that is sliced into salads. You might be surprised to learn that a larger, longer form of this root vegetable is being used in agriculture as a cover crop.

Cover crops are grown between main crops such as wheat, corn, or soybeans when the soil would otherwise be bare. Cover crops can control erosion, build soil, and suppress weeds. Radish as a cover crop can provide these benefits and more. The long radish root creates deep channels in the soil that can make it easier for subsequent crops to reach water in the soil below.

Radish is also known to benefit water quality. It does so by taking up nitrogen, in the form of nitrates, from the soil. This leaves less nitrogen in the soil that can run off to nearby streams and lakes.

Matt Ruark of the University of Wisconsin-Madison and colleagues wanted to know more about the effect of this nitrate uptake in the following growing season. They established test sites in three Wisconsin locations and studied them for three years. At each site, some plots received the radish cover crop and some did not. The radish cover crop was planted in August after a wheat harvest. Corn was planted the following spring.

The research showed that radish significantly reduced the nitrate content in the soil as compared to the test plots with no cover crop. This finding confirmed the results of several earlier studies. It showed that radish did take up nitrogen, in the form of nitrates, from the soil.

This research supports the use of radish as a cover crop to trap fall nitrogen. However, what happens to that nitrogen afterward remains unknown.

There was no consistent evidence that nitrogen was returned to the soil as the radish crop decomposed. Radish did not supply nitrogen to the corn crop. The researchers concluded that in the Upper Midwest the nitrogen in radish could not replace fertilizer.

Ruark commented, "Radish grows well when planted in late summer and traps a lot of nitrogen. But the way it decomposes doesn't result in a nitrogen fertilizer benefit to the next crop. We don't know exactly why. We were hoping it would provide a nitrogen benefit, but alas, it did not."

What happens to the nitrogen? The decomposition pattern of radish needs to be explored more fully to learn more. And perhaps, Ruark said, radish could be more beneficial if mixed with a winter-hardy cover crop.

Credit: 
American Society of Agronomy

Teaching chores to an AI

image: The AI agent setting the table.

Image: 
MIT CSAIL

For many people, household chores are a dreaded, inescapable part of life that we often put off or do with little care - but what if a robot maid could help lighten the load?

Recently, computer scientists have been working on teaching machines to do a wider range of tasks around the house. In a new paper spearheaded by MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Toronto, researchers demonstrate "VirtualHome," a system that can simulate detailed household tasks and then have artificial "agents" execute them, opening up the possibility of one day teaching robots to do such tasks.

The team trained the system using nearly 3,000 programs of various activities, which are further broken down into subtasks for the computer to understand. A simple task like "making coffee," for example, would also include the step "grabbing a cup." The researchers demonstrated VirtualHome in a 3-D world inspired by the Sims video game.

The team's AI agent can execute 1,000 of these interactions in the Sims-style world, with eight different scenes including a living room, kitchen, dining room, bedroom, and home office.

"Describing actions as computer programs has the advantage of providing clear and unambiguous descriptions of all the steps needed to complete a task," says PhD student Xavier Puig, who was lead author on the paper. "These programs can instruct a robot or a virtual character, and can also be used as a representation for complex tasks with simpler actions."

The project was co-developed by CSAIL and the University of Toronto alongside researchers from McGill University and the University of Ljubljana. It will be presented at the Computer Vision and Pattern Recognition (CVPR) conference, which takes place this month in Salt Lake City.

How it works

Unlike humans, robots need more explicit instructions to complete easy tasks - they can't just infer and reason with ease.

For example, one might tell a human to "switch on the TV and watch it from the sofa." Here, actions like "grab the remote control" and "sit/lie on sofa" have been omitted, since they're part of the commonsense knowledge that humans have.

To better demonstrate these kinds of tasks to robots, the descriptions for actions needed to be much more detailed. To do so, the team first collected verbal descriptions of household activities, and then translated them into simple code. A program like this might include steps like: walk to the television, switch on the television, walk to the sofa, sit on the sofa, and watch television.
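
A minimal sketch of what such a step program could look like in code is shown below; this is illustrative only, it does not reproduce the VirtualHome program format or API, and the action and object names are assumptions.

```python
# A household activity expressed as an ordered list of (action, object) steps,
# mirroring the "watch TV" example described above.
watch_tv = [
    ("walk", "television"),
    ("switch_on", "television"),
    ("walk", "sofa"),
    ("sit", "sofa"),
    ("watch", "television"),
]

def execute(program):
    """Stand-in executor: a real agent would map each step to simulator calls."""
    for step_num, (action, target) in enumerate(program, start=1):
        print(f"step {step_num}: {action} -> {target}")

execute(watch_tv)
```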

Once the programs were created, the team fed them to the VirtualHome 3-D simulator to be turned into videos. Then, a virtual agent would execute the tasks defined by the programs, whether it was watching television, placing a pot on the stove, or turning a toaster on and off.

The end result is not just a system for training robots to do chores, but also a large database of household tasks described using natural language. Companies like Amazon that are working to develop Alexa-like robotic systems at home could eventually use data like this to train their models to do more complex tasks.

The team's model successfully demonstrated that their agents could learn to reconstruct a program, and therefore perform a task, given either a description, such as "pour milk into glass," or a video demonstration of the activity.

"This line of work could facilitate true robotic personal assistants in the future," says Qiao Wang, a research assistant in arts, media, and engineering at Arizona State University. "Instead of each task programmed by the manufacturer, the robot can learn tasks just by listening to or watching the specific person it accompanies. This allows the robot to do tasks in a personalized way, or even some day invoke an emotional connection as a result of this personalized learning process."

In the future, the team hopes to train the robots using actual videos instead of Sims-style simulation videos, which would enable a robot to learn simply by watching a YouTube video. The team is also working on implementing a reward-learning system in which the agent gets positive feedback when it does tasks correctly.

"You can imagine a setting where robots are assisting with chores at home and can eventually anticipate personalized wants and needs, or impending action," says Puig. "This could be especially helpful as an assistive technology for the elderly, or those who may have limited mobility."

Credit: 
Massachusetts Institute of Technology, CSAIL

XENON1T Experimental data establishes most stringent limit on dark matter

image: Experimental results from the XENON1T dark matter detector limit the effective size of dark matter particles to 4.1 × 10^-47 square centimeters, the most stringent limit yet determined for dark matter as established by the world's most sensitive detector.

Image: 
XENON Collaboration

Troy, N.Y. - Experimental results from the XENON1T dark matter detector limit the effective size of dark matter particles to 4.1 × 10^-47 square centimeters, the most stringent limit yet determined for dark matter as established by the world's most sensitive detector.

The results, presented Monday in a seminar in Italy at the Gran Sasso Underground Laboratory (LNGS), were produced using an active target of 1,300 kilograms of xenon, in the first dark matter search to monitor the equivalent of one ton of xenon for an entire year.

"We now have the tightest limit for what is known as 'the WIMP-nucleon cross section,' which is a measure of the effective size of dark matter, or how strongly it interacts with normal matter," said Ethan Brown, a member of the XENON Collaboration, and assistant professor of physics, applied physics, and astronomy at Rensselaer Polytechnic Institute. "With these results, we have now tested many new theoretical models of dark matter and placed the strongest constraints on these models to date."

Dark matter is theorized as one of the basic constituents of the universe, five times more abundant than ordinary matter. But because the dark matter particles known as "weakly interacting massive particles," or "WIMPs," cannot be seen and seldom interact with ordinary matter, their existence has never been confirmed.

Several astronomical measurements have corroborated the existence of dark matter, leading to a worldwide effort to directly observe dark matter particle interactions with ordinary matter. Up to the present, the interactions have proven so feeble that they have escaped direct detection, forcing scientists to build ever-more-sensitive detectors.

Since 2002, the XENON Collaboration, incorporating 165 scientists from 12 countries, has operated three successively more sensitive liquid xenon detectors in LNGS in Italy, and XENON1T is its most powerful venture to date and the largest detector of its type ever built. Particle interactions in liquid xenon create tiny flashes of light, and the detector is intended to capture the flash from the rare occasion in which a dark matter particle collides with a xenon nucleus.

The results are based on 279 days of data, according to Elena Aprile, a professor at Columbia University and project lead. During that time, only two background events were expected in the innermost, cleanest region of the detector. However, no events were detected, suggesting dark matter particles must be even smaller than previously thought. A portion of the data analysis was conducted at Rensselaer, as scientists from collaborating institutes around the world convened at the Institute late in 2018 to review data and finalize analysis routines that would eliminate irrelevant information from the collected data.
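
The collaboration's limit comes from a full profile-likelihood analysis, but a toy calculation shows how a null result constrains a rate: with zero events observed, a classical 90 percent upper limit on the mean number of signal events follows directly from Poisson counting statistics. The snippet below is only that toy illustration, using the counts quoted above and ignoring background subtraction.

```python
from scipy.stats import chi2

observed = 0            # events seen in the cleanest inner region (as quoted above)
cl = 0.90               # confidence level

# Classical Poisson upper limit on the mean number of events:
upper_limit = 0.5 * chi2.ppf(cl, 2 * (observed + 1))
print(f"{cl:.0%} upper limit on expected signal events: {upper_limit:.2f}")
```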

The sensitivity of the detector is a function of its size and its "silence." Although dark matter interactions are rare, interactions with other forms of matter are common, and a sensitive detector is designed to minimize those interactions. To shield it from natural radioactivity in the cavern, the detector (a so-called Liquid Xenon Time Projection Chamber) sits within a cryostat submersed in a tank of water. A mountain above the underground laboratory further shields the detector from cosmic rays.

Even with shielding from the outside world, contaminants seep into the xenon from the materials used in the detector itself and, among his contributions, Brown is responsible for a sophisticated purification system that continually scrubs the xenon in the detector. As the size of detectors has grown, so has the complexity of the purification system--not only is there more xenon to clean, but it must be kept cleaner so that light and charge may move through the greater volume of the detector. In the current phase, Brown said his team "scaled up, adding more pumps and more purifiers" to the system.

"Our work has maintained a high level of purity for the largest quantity of xenon over the longest period of time ever," said Brown. "It's an accomplishment that allows other experiments to build on the performance of this purification system."

In the next phase, Brown will introduce a new solution, a newly designed pump built with ultra clean parts in his laboratory at Rensselaer in collaboration with researchers at Stanford and at Muenster University in Germany. Where the current pumps contribute one-third to one-half of the total radon in the experiment, the new pumps will be essentially radon-free, removing one of the largest contributions to background.

Research on dark matter fulfills The New Polytechnic, an emerging paradigm for higher education which recognizes that global challenges and opportunities are so great they cannot be adequately addressed by even the most talented person working alone. Rensselaer serves as a crossroads for collaboration -- working with partners across disciplines, sectors, and geographic regions -- to address complex global challenges, using the most advanced tools and technologies, many of which are developed at Rensselaer. Research at Rensselaer addresses some of the world's most pressing technological challenges -- from energy security and sustainable development to biotechnology and human health. The New Polytechnic is transformative in the global impact of research, in its innovative pedagogy, and in the lives of students at Rensselaer.

Credit: 
Rensselaer Polytechnic Institute

Researchers listen for failure in granular materials

In a pilot study, researchers from North Carolina State University and Haverford College have used naturally arising acoustic vibrations - or sound waves - to monitor the state of granular materials. This passive approach represents a way to probe disordered or granular materials without disturbing them, and may enable researchers to forecast the failure of these materials.

Granular materials, like the ground beneath us, can fail through spontaneous events like earthquakes. But it is difficult to probe or measure these materials in order to predict failure. Haverford College physicist and former NC State postdoctoral researcher Ted Brzinski and NC State physicist Karen Daniels decided to examine sound waves emanating from the material to characterize the different vibrational modes of the material.

Vibrational modes are the ways in which something can oscillate, or move internally. A small molecule can only oscillate in a few ways, for example, but larger objects will have more modes, which are affected by both the locations and the masses of the components. In a disordered or amorphous system of granular materials, like dirt or gravel, the number of modes quickly becomes too large to either predict or measure directly.

However, each mode has a particular acoustic frequency associated with it. Brzinski and Daniels' approach measures the frequencies of the active vibrational modes in the material, giving them an acoustic snapshot of the material's overall "health."

To test their technique they created a granular system composed of 8,000 circular and elliptical polymer beads. They recorded the acoustic emissions from over 1,100 stick-slip events - which is what happens when tectonic plates slide past each other in an earthquake - and classified the frequencies present in acoustic signals associated with impending failure.

"Lower frequencies are associated with 'floppy' modes, meaning that there is a lot more movement, while higher frequencies are associated with stiff or rigid modes," says Brzinski. "What people have seen in model systems is that as you have more floppy modes than expected, the closer you are to losing rigidity. The slip occurs when rigidity is lost. Our tests confirmed these model system results - failures occurred when there were more low frequency modes than expected."

"But it's not just listening to see what sound frequencies are present; we need to look at the proportion of modes," says Daniels. "We know that materials close to failure have a lot of low frequency modes. Our method counts the numbers of certain types of modes in order to predict failure. The beauty of the technique is that you can monitor the system without any interference - just by listening. The method is fairly simple, and it may let us forecast the behavior of disordered materials."

Credit: 
North Carolina State University

Study finds big savings in removing dams over repairs

A new study by Portland State University researchers finds billions of dollars could be saved if the nation's aging dams are removed rather than repaired, but also suggests that better data and analysis are needed on the factors driving dam-removal efforts.

The study, published online in May in the journal River Research and Applications, analyzed the best available national data to compare the trends and characteristics of dams that have been removed with those that remain standing.

The researchers expect that if trends continue, by 2050, between 4,000 and 36,000 dams will be removed.

The study found that a high-end cost estimate of removing 36,000 dams would be roughly $25.1 billion, a significant savings over the estimated rehabilitation costs.

The American Society of Civil Engineers estimates more than $45 billion would be needed to repair and upgrade roughly 2,170 high-hazard dams - those that pose the greatest threat to life and property if they fail. The Association of State Dam Safety Officials estimates it would cost $64 billion to rehabilitate all of the U.S. dams that need to be brought up to safe condition, according to the study.
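
A rough back-of-envelope comparison of the per-dam figures implied by the estimates above (illustrative only, since removals and high-hazard rehabilitations target different dam populations):

```python
removal_total = 25.1e9     # high-end estimate for removing ~36,000 dams
removals = 36_000
repair_total = 45e9        # ASCE estimate for ~2,170 high-hazard dams
repairs = 2_170

print(f"average removal cost: ${removal_total / removals / 1e6:.2f} M per dam")
print(f"average high-hazard repair cost: ${repair_total / repairs / 1e6:.2f} M per dam")
```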

"I think it's time for a re-invigorated public process around managing the risks dams and aging dam infrastructure pose to public safety throughout the U.S.," said Zbigniew Grabowski, a Ph.D. candidate in PSU College of Liberal Arts and Science's Earth, Environment & Society program and the study's lead author. "It's difficult to assess the actual public safety hazards and the most cost-effective ways of mitigating those hazards because the data on dams and dam removals has not been systematically compiled in a way that allows for robust analysis by government agencies or independent researchers."

The study found that hydroelectric and water-supply dams were the types most disproportionately removed, a finding that suggests more nuanced conversations about what drives the removal of dams are necessary.

Grabowski said the choice between removing or rehabilitating dams is often framed as a cost-benefit tradeoff between the ecological, social and economic impacts of dams.

"Yet we should also be looking at how including the public in dam safety decisions might increase the number of dams that don't make sense to rehabilitate," he said.

Among the study's recommendations:

- More detailed data needs to be made public, and data collection on removed and rehabilitated dams needs to be standardized, to allow for more robust comparative research and better-informed decisions at the national, state and local levels

- Dam policy officials and researchers need to take an interdisciplinary approach and draw knowledge from dam safety engineering, ecological restoration, social science and technology, as well as the communities affected by dams and their removals

Credit: 
Portland State University

Study shows ceramics can deform like metals if sintered under an electric field

image: Purdue researchers observed for the first time how ceramics formed under an electric field surprisingly change shape rather than break when compressed at high strain. Pictured: Graduate research assistants Jaehun Cho and Qiang Li.

Image: 
Purdue University image/Vincent Walter

WEST LAFAYETTE, Ind. -- Purdue researchers have observed a way that the brittle nature of ceramics can be overcome as they sustain heavy loads, leading to more resilient structures such as aircraft engine blade coatings and dental implants.

While inherently strong, most ceramics tend to fracture suddenly when just slightly strained under a load unless exposed to high temperatures. Structural ceramic components also require high temperatures to form in the first place through a lengthy process called sintering, in which a powdered material coalesces into a solid mass.

These issues are particularly problematic for ceramic coatings of metal engine blades intended to protect metal cores from a range of operational temperatures. A study published in Nature Communications demonstrates for the first time that applying an electric field to the formation of yttria-stabilized zirconia (YSZ), a typical thermal barrier ceramic, makes the material almost as plastic, or easily reshaped, as metal at room temperature. Engineers could also see cracks sooner since they start to slowly form at a moderate temperature as opposed to higher temperatures, giving them time to rescue a structure.

"In the past, when we applied a high load at lower temperatures, a large number of ceramics would fail catastrophically without warning," said Xinghang Zhang, professor of materials engineering. "Now we can see the cracks coming, but the material stays together; this is predictable failure and much safer for the usage of ceramics."

Recent studies have shown that applying an electric field, or "flash," significantly accelerates the sintering process that forms YSZ and other ceramics, and at much lower furnace temperatures than conventional sintering. Flash-sintered ceramics also have very little porosity, which makes them more dense and therefore easier to deform. None have yet tested the ability of flash-sintered ceramics to change shape at room temperature or increasingly higher temperatures.

"YSZ is a very typical thermal barrier coating - it basically protects a metal core from heat," said Haiyan Wang, Purdue's Basil S. Turner Professor of Engineering. "But it tends to suffer from a lot of fractures when an engine heats up and cools down due to residual stresses."

What allows metals to be fracture-resistant and easy to change shape is the presence of "defects," or dislocations - extra planes of atoms that shuffle during deformation to make a material simply deform rather than break under a load.

"These dislocations will move under compression or tension, such that the material doesn't fail," said Jaehun Cho, a graduate research assistant in materials engineering.

Ceramics normally don't form dislocations unless deformed at very high temperatures. Flash-sintering them, however, introduces these dislocations and creates a smaller grain size in the resulting material.

"Smaller grains, such as nanocrystalline grains, may slide as the ceramic material deforms, helping it to deform better," Wang said.

Pre-existing dislocations and small grain sizes enabled a flash-sintered YSZ sample thinner than human hair to grow increasingly plastic between room temperature and 600 degrees Celsius when compressed, with cracks starting to slowly spread at 400 degrees as opposed to conventionally sintered YSZ that requires 800 degrees and higher to plastically deform.

Improved plasticity means more stability during operation at relatively low temperatures. The sample could also withstand almost as much compression strain as some metals do before cracks started to appear.

"Metals can be compressed to 10 or 20 percent strain, no problem, but ceramics often fracture into pieces if you compress them to less than 2-3 percent strain," Zhang said. "We show that flash-sintered ceramics can be compressed to 7-10 percent without catastrophic fracture."

Even when the sample did begin to crack, the cracks formed very slowly and did not result in complete collapse as would typically happen with conventional ceramics. The next steps would be using these principles to design even more resilient ceramic materials.

The researchers would not have been able to perform in-situ experiments on a micron-sized ceramic sample without an in-situ nanomechanical testing tool inside a high-resolution scanning electron microscope equipped with a focused ion beam tool at Purdue's Life Science Microscopy Center, and an FEI Talos 200X electron microscope in Purdue's Materials Engineering facility. Both microscopes were provided by Purdue's Office of the Executive Vice President for Research and Partnerships and the Colleges of Engineering and Science. Purdue is expecting an even higher-resolution aberration-corrected microscope that the researchers will soon use for future nanomaterials research.

Credit: 
Purdue University

Assessment of biomarkers of subconcussive head trauma

image: Serum concentrations of NF-L, a biomarker for head trauma, over the course of the season in American football athletes. Asterisk indicates significant difference by starter status (p ≤ 0.05); pound sign indicates significant difference from T1 (p ≤ 0.05).

Image: 
© 2018 American Association of Neurological Surgeons.

Charlottesville, VA (May 29, 2018). Researchers from The Sport Science Center at Texas Christian University, Texas Health Sports Medicine, and the University of Wisconsin-La Crosse evaluated the usefulness of biomarker testing in determining the potential extent of brain trauma suffered from repetitive subconcussive head impacts sustained over the course of a college football season. Their findings are reported today in the Journal of Neurosurgery, in the article "Fluctuations in blood biomarkers of head trauma in NCAA football athletes over the course of a season" by Jonathan M. Oliver, Ph.D., and colleagues.

In American football, athletes are regularly subjected to subconcussive head impacts--hits to the head that are not severe enough to cause a concussion or even produce a clinical symptom. Nevertheless, the accumulation of subconcussive head impacts over the course of a football season has been linked to neurophysiological and neuropsychological changes in athletes, and the accumulation of subconcussive head impacts over a long career has been suggested as a cause of severe neurodegenerative diseases such as Alzheimer's disease and chronic traumatic encephalopathy (CTE).

Since individual subconcussive head impacts do not produce symptoms, it can be difficult to ascertain the resultant injury to the brain and to recognize when an athlete should refrain from play. To protect contact-sport athletes throughout the football season, Oliver and colleagues call for the development of a simple, easy-to-use, diagnostic test to identify and monitor accumulation of subconcussive head impacts. The authors suggest periodic measurement of biomarkers of subconcussive head trauma.

In this study, Oliver and colleagues developed a schedule of blood sampling that is cost effective and could fit into the busy schedule of an academic football program. To test the feasibility of this method, they evaluated the usefulness of two well-known biomarkers of head trauma and more severe traumatic brain injury: the tau protein and neurofilament light polypeptide (NF-L).

The researchers obtained blood samples from thirty-five NCAA football players on seven different days, beginning before fall camp commenced and ending three weeks after the competitive football season had finished. The researchers wanted to examine fluctuations in plasma concentrations of tau and serum concentrations of NF-L over the course of the entire football season, during which the number and magnitude of head impacts varied. Those athletes who sustained a concussion were not further examined. Timing of blood sampling was coordinated with the convenience of players and coaches. During the competitive period, this was usually between 36 and 72 hours after a game or practice session.

Twenty football players were identified as starters and fifteen as nonstarters. Since starters are involved in more repetitive plays than nonstarters and are consequently more likely to sustain a greater number of head impacts, the researchers expected to see higher concentrations of biomarkers of brain trauma in starters than in nonstarters, who served as controls in this study.

Analyses of the blood samples showed that tau concentrations decreased over the course of the competitive season, compared to preseason values, in both starters and nonstarters. Concentrations did not increase to preseason values until after the competitive season was over. Statistical analysis showed that differences in tau concentrations between the two groups of athletes (starters and nonstarters) were not great enough to differentiate between starters and nonstarters. In this study, tau concentrations were not useful in determining injury due to repetitive subconcussive head impacts.

Concentrations of NF-L increased in both athlete groups during the competitive season, compared to preseason values. In starters, these increases were significantly higher than preseason values at many time points; in nonstarters, the increases in NF-L concentrations never reached statistical significance. Statistical analysis showed that during periods of repetitive head impacts, differences in NF-L concentrations demonstrated fair to modest accuracy for differentiating between starters and nonstarters.
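
"Fair to modest accuracy" refers to how well the biomarker separates the two groups, which is commonly quantified by the area under the ROC curve. The sketch below shows that calculation on invented NF-L values; the numbers are not the study's data and are included only to illustrate the method.

```python
from sklearn.metrics import roc_auc_score

# Hypothetical serum NF-L concentrations (pg/mL) at one in-season time point.
starters    = [11.2, 13.5, 9.8, 14.1, 12.7, 10.9, 15.3, 12.0]
nonstarters = [8.9, 10.1, 9.4, 8.2, 11.0, 9.7]

labels = [1] * len(starters) + [0] * len(nonstarters)
values = starters + nonstarters

auc = roc_auc_score(labels, values)
print(f"ROC AUC for separating starters from nonstarters: {auc:.2f}")
# An AUC near 0.5 means no discrimination; values around 0.7-0.8
# are usually described as fair to modest accuracy.
```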

The researchers conclude that, in this preliminary study, serum NF-L shows some promise as a sensitive and reliable biomarker of brain injury due to repetitive subconcussive head impacts, and they point out that additional, larger studies are needed to verify this. They believe that the knowledge gained by studies of biomarkers will "lay the groundwork for the eventual development of clinical tools to help reduce the deleterious effects of repetitive subconcussive impacts."

When asked about this study, Dr. Oliver responded, "Given recent findings indicating a potential link between repetitive subconcussive impacts and the development of CTE, efforts to determine the effect of subconcussive impacts throughout an athlete's career may prove useful, especially if those efforts are feasible and cost effective."

Credit: 
Journal of Neurosurgery Publishing Group

Scientists improve ability to measure electrical properties of plasma

image: Yevgeny Raitses and Brian Kraus in front of the Penning trap experiment, part of the Hall Thruster Experiment, which was used to produce some of the experimental results.

Image: 
Elle Starkman

Any solid surface immersed in a plasma, including those in satellite engines and fusion reactors, is surrounded by a layer of electrical charge that determines how the surface and the plasma interact. This interaction can affect the performance of the devices, and understanding it often hinges on knowing how electrical charge is distributed around the surface. Now, recent research by scientists at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) points to a way to measure these electrical properties more accurately.

The recent discovery concerns this layer, the so-called plasma-wall sheath of electrical charge that surrounds objects, including diagnostic probes, inside the plasma, which is itself composed of charged electrons and ions. The sheath protects probes by repelling plasma electrons that would otherwise distort the instrument's measurements and can sometimes even cause damage. "In effect, the object insulates itself from all these electrons in the plasma that carry energy and heat and could cause the probe to melt," said Brian Kraus, a graduate student in the Princeton Program in Plasma Physics who was lead author of the paper that published the findings in Physics of Plasmas.

Kraus and principal research physicist Yevgeny Raitses, co-author of the paper and research advisor for Kraus on his first-year graduate project, found that the layer's charge can sometimes be positive, contradicting what scientists have long thought -- that the layer always carries a more negative charge than the surrounding plasma. The findings indicate that researchers must determine exactly what kind of charge surrounds a probe in order to make the corrections that yield an accurate measurement of conditions inside the plasma.
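
For context, a rough sense of why the sheath around a non-emitting object is normally negative comes from the textbook estimate of its floating potential. The expression below is a standard plasma-physics result and is not taken from the PPPL paper.

```latex
% Textbook floating-potential estimate for a non-emitting wall in a plasma with
% electron temperature T_e (a standard result, not taken from the PPPL paper):
\[
  V_{\mathrm{wall}} - V_{\mathrm{plasma}} \;\approx\;
  -\,\frac{k_B T_e}{e}\,\ln\!\sqrt{\frac{m_i}{2\pi m_e}}
\]
```

Because ions are far heavier than electrons, this estimate is always negative (roughly -2.8 k_B T_e / e for hydrogen ions). Strong electron emission from the surface shrinks that magnitude and, as the PPPL measurements suggest, can even reverse the sign, producing a positive "inverse sheath."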

Specifically, research conducted on the Raitses-led Hall Thruster Experiment (HTX) at PPPL, which is typically used to study plasma thrusters for spacecraft and related plasma devices, showed that a heated, electron-emitting diagnostic probe that is not connected to a grounded wire can sometimes produce the positive charge. The HTX provided a steady, stable plasma that let the scientists detect more precisely what kind of charge was building up next to the probe.

"The big new thing is that until now, scientists for at least a decade had been developing theoretical calculations and performing computational simulations showing that the positive layer, or inverse sheath, could occur, but no one had seen it in experiments involving probes," Kraus said. "In this paper, we say we think we are indeed seeing it in an experiment, as well as seeing the transition between negative and positive sheaths."

The research was the first to support these calculations concerning the effect of so-called highly emissive walls. The calculations were developed by Michael Campanell, Alexander Khrabrov, and Igor Kaganovich of PPPL, along with Dmytro Sydorenko at the University of Alberta. (Campanell is now at DOE's Lawrence Livermore National Laboratory.) The new experiments thus provide an excellent example of how theoretical predictions motivate experimental research that, in turn, validates the theory.

According to Raitses and Kraus, future research involving physical experiments will measure more carefully how well the highly emissive probe model matches observations. One such experiment would determine whether an emissive probe with a long wire would retain a positive charge more easily.

Credit: 
DOE/Princeton Plasma Physics Laboratory

To scan or not to scan: research shows how to personalize lung cancer screening decisions

ANN ARBOR, MI - For smokers and former smokers, the threat of lung cancer always lurks in the shadows.

To flush it out of the darkness, some decide to get their lungs scanned by a CT machine, which can find a tumor early enough to stop it - or set off a false alarm that turns out to be nothing.

Others may avoid the scans, or may not know they should have one, even though they are among those with the most to gain from screening, according to official recommendations in effect for the last five years.

Now, a new study shows how to personalize the lung cancer screening decision for every patient. The results could help doctors fine-tune their advice to patients, so that it's based not just on a patient's individual lung cancer risk and the potential benefits and harms of screening, but also on a likely range of patient attitudes about looking for problems and dealing with the consequences.

Published in the Annals of Internal Medicine, the study forms the backbone for new free online decision tools aimed at physicians and their teams, and at members of the public.

The tool for clinicians, called Lung Decision Precision, was designed by a University of Michigan and Veterans Affairs team to help clinicians talk with patients and their loved ones about whether a lung CT scan might be a good idea for them.

The same team has also launched a website for patients and their loved ones, http://www.shouldiscreen.com, that gives easy-to-understand information about the positives and potential negatives of lung cancer screening, and allows individuals to calculate their risk of lung cancer.

Tanner Caverly, M.D., M.P.H., led the team that did the new computer-based simulation analysis using data from major studies of lung cancer screening, and national data on the potential screening population under the current guidelines.

"Our model is built on a comprehensive view of net benefits for individual patients, which incorporates the best evidence for personalizing the pros and cons of screening and assumes that not all patients will feel the same about screening and its consequences," explains Caverly, an assistant professor in the Division of General Medicine and Department of Learning Health Sciences at the U-M Medical School. "This allows us to identify which patients are in the preference-sensitive zone for the decision about screening, and which ones have a very clear potential benefit to them."

Any person with an annual chance of lung cancer between 0.3 percent and 1.3 percent, and a life expectancy of more than 10 years, falls into that latter high-benefit category, he notes. This accounts for about 50 percent of all Americans who qualify for screening under the current guidelines.

But for most of the rest, their personal preferences should play a large role in determining if they should get screened. For instance, this might include how much they dislike getting medical tests in general, how they feel about the potential unintended consequences of looking for a problem when they feel fine, and how they view the process of having to get follow-up scans and lung biopsies if the screening scan shows something suspicious.

In fact, for these patients, personal preference matters more to the decision than the false-positive rate of lung CT screening (false positives outnumber true cancers by about 25 to 1) or the negative impact of being over-treated for a lung cancer that was not highly dangerous.

At the same time, people who fall into the potential pool of screening candidates but have a short life expectancy and a low risk of lung cancer should not be screened.
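
As a rough illustration of the zones described above, here is a minimal sketch that sorts a screening-eligible patient into one of three categories using the figures quoted in this article (an annual risk of 0.3 to 1.3 percent with more than 10 years of life expectancy for the clear-benefit zone). The function name, the hard thresholds, and the zone labels are simplifications for illustration, not the published Lung Decision Precision model.

```python
# Hypothetical sketch: the function name, hard thresholds, and zone labels are
# simplifications based on the figures quoted in this article, not the
# published Lung Decision Precision model.
def screening_zone(annual_lung_cancer_risk: float, life_expectancy_years: float) -> str:
    """Sort a screening-eligible patient into a rough benefit zone.

    annual_lung_cancer_risk: estimated probability of lung cancer per year
                             (e.g. 0.008 means 0.8 percent)
    life_expectancy_years:   estimated remaining life expectancy in years
    """
    if life_expectancy_years <= 10 and annual_lung_cancer_risk < 0.003:
        return "screening not advised"                    # short life expectancy, low risk
    if 0.003 <= annual_lung_cancer_risk <= 0.013 and life_expectancy_years > 10:
        return "clear benefit: encourage screening"
    return "preference-sensitive: discuss pros and cons"

# Example: a 0.8 percent annual risk and a 20-year life expectancy lands in the
# clear-benefit zone.
print(screening_zone(0.008, 20))
```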

The model could help physicians prepare for conversations with patients about lung screening, customized to the patient in front of them.

"If a physician is not clear about the potential benefit for a patient who's in the high-benefit zone, they could miss an opportunity to do something really good for them, to say, 'I don't recommend this for everyone but I recommend it for you'," Caverly says. "But coming across strong for screening with a patient who has a fine balance of pros and cons could miss an opportunity to give them a choice, to tell them that their decision depends on the kind of person they are."

While the study looked specifically at the evidence around lung cancer screening, its authors note that the underlying analytical method could lead to personalized health decision tools for other situations.

Looking at lungs

The researchers focused on lung cancer -- the leading cause of cancer death among both men and women -- because of the recent move to encourage certain smokers and former smokers between the ages of 55 and 80 to get screened for it.

A major study published in 2011 showed that some members of this group could survive longer if they had CT screening to find the earliest signs of lung cancer, which is diagnosed in more than 230,000 Americans every year.

Two years later, a national panel recommended it for people between the ages of 55 and 80 who had been or still were heavy smokers (averaging a pack of cigarettes a day for 30 years) and who currently smoke or quit less than 15 years ago.

The recommendation comes with exceptions, even within this group, including patients whose overall health means they have a life expectancy of less than 10 years, and those who aren't strong enough to withstand lung surgery if a scan shows signs of cancer.

Now, the U.S. Preventive Services Task Force, which made the initial recommendation, is preparing to revisit its guidance on this topic.

Caverly and his colleagues -- including senior author Rafael Meza, Ph.D., who is coordinating principal investigator of the Cancer Intervention and Surveillance Modeling Network (CISNET) lung group -- hope their new study will inform that process.

They're studying how the online tool can be used by physicians and the medical team members who assist with screening. They're also thinking about whether it could be included in the online systems that patients use to communicate with their clinic ahead of an appointment, and the clinical reminder tools that prompt physicians to talk with patients about preventive services that might be right for them.

The physician-focused tool produces a colorful display that places the individual patient somewhere along a green and yellow line. If a patient is deep in the yellow zone, they likely have a small but non-zero benefit from screening, and the decision to screen will depend heavily on the patient's own views.

The closer they fall to the divide between green and yellow, the more likely it is that screening will benefit them. And if they're deep in the green zone, the physician should encourage them more strongly to get screened. The site generates other visual representations, and handouts for patients and their spouse or other loved one.

The team, including co-author Rodney Hayward, M.D., of General Medicine and the VA Ann Arbor Healthcare System, hopes to test the model's usefulness for other types of health services that are used by many people and have good data available about their benefits, risks and patient preferences.

This includes cancer screening, disease prevention, chronic disease management approaches and more.

"This method can incorporate anything that moves the needle on risk and benefit, and that involves patient preferences about time, dollars and worry," Caverly explains. "As a clinician I'd like to have this for many of the things I do, where it would be meaningful to know how beneficial something could be for the individual patient, and we could talk about whether it's indicated for them."

Credit: 
Michigan Medicine - University of Michigan