Culture

UTSA researchers want to teach computers to learn like humans

A new study by Paul Rad, assistant director of the UTSA Open Cloud Institute, and Nicole Beebe, Melvin Lachman Distinguished Professor in Entrepreneurship and director of the UTSA Cyber Center for Security and Analytics, describes a new cloud-based learning platform for artificial intelligence (A.I.) that teaches machines to learn like humans.

"Cognitive learning is all about teaching computers to learn without having to explicitly program them," Rad said. "In this study, we're presenting an entirely new platform for machine learning to teach computers to learn the way we do."

To build the cloud-based platform, Rad and Beebe studied how education and understanding have evolved over the past five centuries. They wanted to gain a better picture of how computers could be taught to approach deductive reasoning.

"Our goal here is to teach the machine to become smarter, so that it can help us. That's what they're here to do," Rad said. "So how do we become better? We learn from experience."

The UTSA researchers also studied how humans learn across their lifetimes. Children, for example, begin by identifying objects such as faces and toys, then move on from there to understand communication. This process helps their thought processes mature as they get older.

Ultimately, Rad and Beebe want AI agents to learn automatic threat detection. An agent that can dynamically learn network traffic patterns and normal behavior becomes more effective at discovering and thwarting new attacks before they cause significant damage.

"It would also be nice if an intelligent computer assistant could aggregate thousands of news items or memos for someone, so that the process of reading that material was quicker and that person could decide almost instantly how to use it," Rad said.

Additionally, intelligent machines could be used in medical diagnoses, which Rad says could lead to more affordable health care, and other fields that require precise, deductive reasoning.

"Throughout history, humans have invented and used tools such as swords, calculators and cars, and these tools have changed human society and enabled us to evolve," Rad said. "That's what we're doing here, but on a much more impactful scale."

Credit: 
University of Texas at San Antonio

Chaperones can hold protein in non-equilibrium states

After translation, proteins must fold to their functional 3D shape and keep it while under attack by various perturbations: external stress such as temperature changes, wrong interactions with other proteins in the cell, and even deleterious mutations. To ensure that proteins stay functional, the cell uses a particular class of proteins, the chaperones. These are present in all organisms and are among the most abundant proteins in cells, emphasizing how crucial they are to sustain life.

The current view is that the functional 3D shape of a protein is also its most thermodynamically stable state, and that chaperones help proteins reach this state by keeping them from aggregating and by allowing them to escape so-called "kinetic traps" - points where the protein may get "stuck" in a non-functional state. And to do all this, chaperones need energy, which in the cell comes in the form of adenosine triphosphate, or ATP.

The labs of Paolo De Los Rios at EPFL and Pierre Goloubinoff at UNIL, in collaboration with Alessandro Barducci (INSERM - Montpellier), have now shown that chaperones use the energy from ATP to actively maintain the proteins they work on in a non-equilibrium but transiently stable version of the functional form, even under conditions in which the functional form should not be thermodynamically stable.

"What we found is that chaperones can actively repair and revert the proteins they act upon in a non-equilibrium steady-state," says De Los Rios. "In this state, the proteins are in their native state even if, from an equilibrium thermodynamics perspective, they should not."

The researchers combined theoretical and experimental approaches to prove that chaperones are molecular motors, capable of performing work and extending the stability range of proteins. The results may challenge parts of the prevalent view that evolution has designed amino acid sequences so that the functional state of the protein they belong to is thermodynamically optimal.

"In the presence of chaperones, even thermodynamically sub-optimal proteins might be able to reach their functional form, facilitating evolution in its endless exploration of chemical possibilities," says De Los Rios.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Culturing cheaper stem cells

image: A human embryonic stem cell colony cultured in the newly developed medium.

Image: 
Kyoto Univ iCeMS

Human pluripotent stem cells (hPSCs) can infinitely self-renew and develop into all major cell types in the body, making them important for organ repair and replacement. But culturing them in large quantities can be expensive. Now, scientists at Japan's Kyoto University, with colleagues in India and Iran, have developed a more cost-effective culture by using a new combination of chemical compounds.

Current culture systems need to contain components that sustain hPSC self-renewal while preventing the cells from differentiating into other cell types. Of these components, genetically engineered growth factors, produced in bacteria or animal cells, are particularly expensive.

The new culture was able to support and maintain the long-term renewal of hPSCs without the need for expensive growth factors.

Kouichi Hasegawa of Kyoto University's Institute for Integrated Cell-Material Sciences (iCeMS) and his team developed their 'AKIT' culture using three chemical compounds: 1-azakenpaullone (AK), ID-8 (I), and tacrolimus (T).

1-azakenpaullone supported hPSC self-renewal, but also induced their differentiation into other cells. To turn off the differentiation, the team added ID-8. This compound, however, also leads to partial cell growth arrest, so a third compound, tacrolimus, was finally added to counter this effect.

The survival and growth rates of some hPSC cell lines were slightly lower in the AKIT medium than in other culture media. But its key advantage lies in the simplicity and low cost of its preparation: it is five to ten times cheaper than any currently available hPSC culture medium.

"This improved method of culturing may thus facilitate the large-scale, quality-controlled and cost-effective translation of hPSC culture practices to clinical and drug-screening applications," say the researchers in their study published in the journal Nature Biomedical Engineering.

Credit: 
Kyoto University

Massive astrophysical objects governed by subatomic equation

image: An artist's impression of research presented in Batygin (2018), MNRAS 475, 4. Propagation of waves through an astrophysical disk can be understood using Schrödinger's equation -- a cornerstone of quantum mechanics.

Image: 
James Tuttle Keane, California Institute of Technology

Quantum mechanics is the branch of physics governing the sometimes-strange behavior of the tiny particles that make up our universe. Equations describing the quantum world are generally confined to the subatomic realm--the mathematics relevant at very small scales is not relevant at larger scales, and vice versa. However, a surprising new discovery from a Caltech researcher suggests that the Schrödinger Equation--the fundamental equation of quantum mechanics--is remarkably useful in describing the long-term evolution of certain astronomical structures.

The work, done by Konstantin Batygin, a Caltech assistant professor of planetary science and Van Nuys Page Scholar, is described in a paper appearing in the March 5 issue of Monthly Notices of the Royal Astronomical Society.

Massive astronomical objects are frequently encircled by groups of smaller objects that revolve around them, like the planets around the sun. For example, supermassive black holes are orbited by swarms of stars, which are themselves orbited by enormous amounts of rock, ice, and other space debris. Due to gravitational forces, these huge volumes of material form into flat, round disks. These disks, made up of countless individual particles orbiting en masse, can range from the size of the solar system to many light-years across.

Astrophysical disks of material generally do not retain simple circular shapes throughout their lifetimes. Instead, over millions of years, these disks slowly evolve to exhibit large-scale distortions, bending and warping like ripples on a pond. Exactly how these warps emerge and propagate has long puzzled astronomers, and even computer simulations have not offered a definitive answer, as the process is both complex and prohibitively expensive to model directly.

While teaching a Caltech course on planetary physics, Batygin (the theorist behind the proposed existence of Planet Nine) turned to an approximation scheme called perturbation theory to formulate a simple mathematical representation of disk evolution. This approximation, often used by astronomers, is based upon equations developed by the 18th-century mathematicians Joseph-Louis Lagrange and Pierre-Simon Laplace. Within the framework of these equations, the individual particles and pebbles on each particular orbital trajectory are mathematically smeared together. In this way, a disk can be modeled as a series of concentric wires that slowly exchange orbital angular momentum among one another.

As an analogy, in our own solar system one can imagine breaking each planet into pieces and spreading those pieces around the orbit the planet takes around the sun, such that the sun is encircled by a collection of massive rings that interact gravitationally. The vibrations of these rings mirror the actual planetary orbital evolution that unfolds over millions of years, making the approximation quite accurate.

Using this approximation to model disk evolution, however, had unexpected results.

"When we do this with all the material in a disk, we can get more and more meticulous, representing the disk as an ever-larger number of ever-thinner wires," Batygin says. "Eventually, you can approximate the number of wires in the disk to be infinite, which allows you to mathematically blur them together into a continuum. When I did this, astonishingly, the Schrödinger Equation emerged in my calculations."

The Schrödinger Equation is the foundation of quantum mechanics: It describes the non-intuitive behavior of systems at atomic and subatomic scales. One of these non-intuitive behaviors is that subatomic particles actually behave more like waves than like discrete particles--a phenomenon called wave-particle duality. Batygin's work suggests that large-scale warps in astrophysical disks behave similarly to particles, and the propagation of warps within the disk material can be described by the same mathematics used to describe the behavior of a single quantum particle if it were bouncing back and forth between the inner and outer edges of the disk.
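For reference, the standard textbook form of the time-dependent Schrödinger equation for a single particle of mass m in a potential V(x) (not the disk-specific version derived in the paper) is:

```latex
i\hbar\,\frac{\partial \psi(x,t)}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi(x,t)}{\partial x^{2}}
  + V(x)\,\psi(x,t)
```

In Batygin's analysis, an equation of the same mathematical form governs the disk's warps, with the inner and outer edges of the disk playing the role of the walls of a "particle in a box."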

The Schrödinger Equation is well studied, and finding that such a quintessential equation is able to describe the long-term evolution of astrophysical disks should be useful for scientists who model such large-scale phenomena. Additionally, adds Batygin, it is intriguing that two seemingly unrelated branches of physics--those that represent the largest and the smallest of scales in nature--can be governed by similar mathematics.

"This discovery is surprising because the Schrödinger Equation is an unlikely formula to arise when looking at distances on the order of light-years," says Batygin. "The equations that are relevant to subatomic physics are generally not relevant to massive, astronomical phenomena. Thus, I was fascinated to find a situation in which an equation that is typically used only for very small systems also works in describing very large systems."

"Fundamentally, the Schrödinger Equation governs the evolution of wave-like disturbances," says Batygin. "In a sense, the waves that represent the warps and lopsidedness of astrophysical disks are not too different from the waves on a vibrating string, which are themselves not too different from the motion of a quantum particle in a box. In retrospect, it seems like an obvious connection, but it's exciting to begin to uncover the mathematical backbone behind this reciprocity."

Credit: 
California Institute of Technology

Provide stroke patients with palliative care support minus the label

When caring for stroke patients, health care providers should focus on the social and emotional issues facing patients, rather than only physical rehabilitation, according to a new study published in CMAJ (Canadian Medical Association Journal).

"Rather than focusing only on physical rehabilitation, a realistic approach to managing care should consider the emotional needs of patients and their caregivers," says Dr. Scott Murray, Primary Palliative Care Research Group, University of Edinburgh, Edinburgh, United Kingdom. "Balancing the need for hope of recovery with the potential of severe disability or death is important in this approach."

Stroke is the second leading cause of death worldwide, accounting for 11% of deaths. Survival is especially poor for people who have had a severe total anterior circulation stroke, which involves loss of motor control, language and other functions.

The study of 219 patients in central Scotland with severe stroke (total anterior circulation stroke) looked at the experiences, concerns and priorities of patients, families and health care professionals in the 12 months after stroke. In the first 6 months, 57% of patients (125 people) died, with 1-year fatality of 60% (132 deaths). About two-thirds (67%) of deaths occurred within the first month after stroke.

Researchers found that patients and their families reported grief over the loss of their previous life, anxiety among caregivers over whether they were "doing the right thing," uncertainty about the future and confusion about prognosis. As well, the term "palliative care" was interpreted negatively by many health care providers, families and informal caregivers, as it is associated with care for people, for example patients with advanced cancer, who are dying.

"Many patients and informal caregivers would have welcomed more support in making decisions and in planning for the future from day one," writes Dr. Murray with coauthors. "The focus was on active rehabilitation, recovery, motivation and hope, with much less discussion and preparation for limited recovery."

The authors suggest that the principles of palliative care rather than the term itself should be applied to stroke patients, which means supporting people to live well with deteriorating health and making them comfortable until their eventual death.

In a related commentary http://www.cmaj.ca/lookup/doi/10.1503/cmaj.170956, Dr. Jessica Simon, Department of Oncology, University of Calgary, writes "the challenging questions for physicians and other health care providers should not be, 'What shall we call it?' or 'Who should receive palliative care?'; the questions for each patient who is facing the challenges associated with life-threatening illness should be, 'Am I providing the palliative care support my patient needs?' and 'Is there access to sufficient specialist palliative care resources in my community if needed?'."

"Outcomes, experiences and palliative care in major stroke: a multicentre, mixed-method, longitudinal study" is published March 5, 2018.

Credit: 
Canadian Medical Association Journal

How are we related? A Compara-bly easy workflow to find gene families

Published in GigaScience, the open-source Galaxy workflow allows researchers to make easier work of finding gene families - an important tool when it comes to analysing the evolution, structure and function of genes across species.

Co-author, Wilfried Haerty explained why this tool is so useful to biologists: "The software developed at the Earlham Institute enables scientists to investigate species of interest using a flexible and reproducible pipeline. The performance of our workflow was assessed on vertebrate genome assemblies of various qualities (platypus, pig, horse, dog, mouse and human). The species were selected to assess the impact of genome quality on gene families identification. The mouse, dog and human genomes are of high quality whereas the three others are at different stages of analysis completion."

Based on and expanding Ensembl's existing EnsemblCompara Gene Trees pipeline, the GeneSeqToFamily workflow removes many complex prerequisites of the process, such as having to use the command line to install a large number of separate tools, by converting the whole process into Galaxy, a much simpler platform to use.

Importantly, the workflow is highly customisable, allowing users to choose parameters, change tools and run the software on their own genes, without having to use the Ensembl database.

Not just a workflow, GeneSeqToFamily contains a number of new, standalone Galaxy tools, including TreeBeST, hcluster_sg, T-Coffee and ETE. Developed at EI by Anil Thanki and Nicola Soranzo of the Data Infrastructure Group, the software makes the process of finding and generating phylogenetic trees easier, using a range of open platforms and databases. Anil Thanki, Scientific Programmer, said: "We are excited to put our work in the open domain, where it allows biologists and bioinformaticians to use the Ensembl Compara GeneTrees Pipeline in a simple, graphical user interface and modify it if needed."

The team hopes that the new workflow will help users unfamiliar with the complexities associated with using Compara to be able to more easily analyse phylogenetic datasets, while collating a number of useful gene family tools in one Galaxy workflow. Users can either select existing Ensembl databases to use as the reference sets for their analysis, or provide their own data in the same format, and tools are provided that can help.

Earlham Institute is committed to providing tools and algorithms to support, enable and develop computational biology and life sciences research, with projects such as Galaxy helping to open access to a range of scientific tools and databases.

The Data Infrastructure group, led by Dr. Rob Davey, also supports resources such as CyVerse UK and COPO which, alongside Galaxy, expand the availability and usability of computational resources to the wider scientific community in the UK and internationally through EI's National Capability in e-Infrastructure.

Credit: 
Earlham Institute

Changing size of neurons could shed light on new treatments for motor neurone disease

New research published in The Journal of Physiology improves our understanding of how motor nerve cells (neurons) respond to motor neurone disease, which could help us identify new treatment options.

Motor neurone disease, also referred to as amyotrophic lateral sclerosis (ALS), is associated with the death of motor nerve cells (neurons). It starts with the progressive loss of muscle function, followed by paralysis and ultimately death due to an inability to breathe. Currently, there is no cure for ALS and no effective treatment to halt or reverse the progression of the disease. Most people with ALS die within 3 to 5 years of when symptoms first appear.

Previous studies in animal models of ALS have reported inconsistencies in the changes in the size of motor neurons. This new study is the first to show robust evidence that motor neurons change size over the course of disease progression and that, crucially, different types of neurons experience different changes. Specifically, the study shows that motor neuron types that are more vulnerable to the disease - that is, they die first - increase in size very early in the disease, before there are symptoms. Other motor neuron types that are more resistant to the disease (they die last) do not increase their size. These changes in the size of the motor neurons have a significant effect on their function and their fate as the disease progresses.

The hope is that by understanding more about the mechanisms by which the neurons are changing size, it will be possible to identify and pursue new strategies for slowing or halting motor nerve cell death.

This research suggests motor neurons might alter their characteristics in response to the disease, in an attempt to compensate for loss of function. However, these changes can lead to the neuron's early death. Furthermore, the research supports the idea that the most vulnerable motor neurons undergo unique changes that might impact their ability to survive.

The research conducted by Wright State University involved identifying and measuring size changes of motor neuron types in a mouse model of familial ALS. The motor neurons were examined at every key stage of the disease to observe when and where these changes begin, and how they progress through the entirety of the disease. Specific antibodies were used as markers to bind to the structure of motor neurons so that they could be easily viewed under high-power microscopes, and a computer program performed the three-dimensional measurement of the size and shape of a motor neuron's cell body.

It is important to note that the research was carried out in only one mouse model, the most aggressive mouse model of ALS. Future work should focus on other mouse models of ALS to determine how well these results are likely to translate to human patients.

Sherif M. Elbasiouny, the lead investigator on the research, commented on potential areas for further study:

"This research approach could be applicable not only to ALS, but also to other neurodegenerative diseases, such as Alzheimer's and Parkinson's diseases. This new understanding could help us to identify new therapeutic targets for improving motor neuron survival."

Credit: 
The Physiological Society

Short-term increases in inhaled steroid doses do not prevent asthma flare-ups in children

Researchers have found that temporarily increasing the dosage of inhaled steroids when asthma symptoms begin to worsen does not effectively prevent severe flare-ups, and may be associated with slowing a child's growth, challenging a common medical practice involving children with mild-to-moderate asthma.

The study, funded by the National Heart, Lung, and Blood Institute (NHLBI), part of the National Institutes of Health, will appear online on March 3 in the New England Journal of Medicine (NEJM) to coincide with its presentation at a meeting of the 2018 Joint Congress of the American Academy of Allergy, Asthma & Immunology (AAAAI) and the World Allergy Organization (WAO) in Orlando, Florida. It will appear in print on March 8th.

Asthma flare-ups in children are common and costly, and to prevent them, many health professionals recommend increasing the doses of inhaled steroids from low to high at early signs of symptoms, such as coughing, wheezing, and shortness of breath. Until now, researchers had not rigorously tested the safety and efficacy of this strategy in children with mild-to-moderate asthma.

"These findings suggest that a short-term increase to high-dose inhaled steroids should not be routinely included in asthma treatment plans for children with mild-moderate asthma who are regularly using low-dose inhaled corticosteroids," said study leader Daniel Jackson, M.D., associate professor of pediatrics at the University of Wisconsin School of Medicine and Public Health, Madison, and an expert on childhood asthma. "Low-dose inhaled steroids remain the cornerstone of daily treatment in affected children."

The research team studied 254 children 5 to 11 years of age with mild-to-moderate asthma for nearly a year. All the children were treated with low-dose inhaled corticosteroids (two puffs from an inhaler twice daily). At the earliest signs of asthma flare-up, which some children experienced multiple times throughout the year, the researchers continued giving low-dose inhaled steroids to half of the children and increased to high-dose inhaled steroids (five times the standard dose) in the other half, twice daily for seven days during each episode.

Though the children in the high-dose group had 14 percent more exposure to inhaled steroids than the low-dose group, they did not experience fewer severe flare-ups. The number of asthma symptoms, the length of time until the first severe flare-up, and the use of albuterol (a drug used as a rescue medication for asthma symptoms) were similar between the two groups.

Unexpectedly, the investigators found that the rate of growth of children in the short-term high-dose strategy group was about 0.23 centimeters per year less than the rate for children in the low-dose strategy group, even though the high-dose treatments were given only about two weeks per year on average. While the growth difference was small, the finding echoes previous studies showing that children who take inhaled corticosteroids for asthma may experience a small negative impact on their growth rate. More frequent or prolonged high-dose steroid use in children might increase this adverse effect, the researchers caution.

The study did not include children with asthma who do not take inhaled steroids regularly, nor did it include adults. "This study allows caregivers to make informed decisions about how to treat their young patients with asthma," said James Kiley, Ph.D., director of the NHLBI's Division of Lung Diseases. "Trials like this can be used in the development of treatment guidelines for children with asthma."

Credit: 
NIH/National Heart, Lung and Blood Institute

Kids persistently allergic to cow's milk are smaller than peers with nut allergies

image: Karen A. Robbins, M.D., a pediatric allergist/immunologist at Children's National Health System and lead study author.

Image: 
Children's National Health System

ORLANDO, Fla.-(March 4, 2018)-Children who experience persistent allergies to cow's milk may remain shorter and lighter throughout pre-adolescence when compared with children who are allergic to peanuts or tree nuts, according to a retrospective chart review to be presented March 4, 2018, during the American Academy of Allergy, Asthma & Immunology/World Allergy Organization (AAAAI/WAO) Joint Conference.

"The relationship between food allergies and childhood growth patterns is complex, and we have an incomplete understanding about the influence food allergies have on children's growth," says Karen A. Robbins, M.D., a pediatric allergist/immunologist at Children's National Health System and lead study author. "Our study begins to fill this research gap but further study is needed, especially as children enter their teens, to gauge whether these growth deficits are transitory or lasting."

Approximately 6 percent to 8 percent of U.S. children suffer from a food allergy, according to the AAAAI. Eight food groups account for 90 percent of serious allergic reactions, including milk, egg, fish, crustacean shellfish, wheat, soy, peanuts and tree nuts, the Centers for Disease Control and Prevention adds. Allergy to cow's milk in particular can foreclose a wide array of food choices during early childhood, a time when children's bodies undergo a series of growth spurts.

"We learned from our previous research that there is a continuum of risk for deficits in height and weight among children with food allergies, and kids who are allergic to cow's milk are at heightened risk," Dr. Robbins adds. "They never have had cow's milk in their diet. Looking at food labeling, many items 'may contain milk,' which severely narrows what could be a wide variety of food items for growing children. They also frequently have allergies to additional foods."

To gauge how specific food allergies impact children's height and weight, the study team conducted a longitudinal chart review for 191 children. To be included in the study, the children had to have at least one clinic visit from the time they were aged 2 to 4, 5 to 8 and 9 to 12 years old, ages that span from early childhood to preadolescence. From each clinical visit, the research team recorded weight; height; co-morbid conditions, such as asthma, eczema and seasonal allergies; and use of inhaled corticosteroids.

They calculated mean differences in height, weight and body mass index (BMI) z-scores, which act like the percentile measures kids and parents hear about during well-child visits, comparing values with what is normal among other kids of the same age and gender in the general population.
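As a rough illustration of what a z-score is (the reference mean and standard deviation below are made-up numbers, not actual growth-chart data or the study's method):

```python
# A z-score expresses how many standard deviations a measurement sits
# from the mean of a reference population (e.g. peers of the same age
# and gender). The reference values here are illustrative only.
def z_score(value, ref_mean, ref_sd):
    return (value - ref_mean) / ref_sd

# A height of 112 cm against a hypothetical reference of 115 cm +/- 5 cm:
z = z_score(112.0, ref_mean=115.0, ref_sd=5.0)
print(round(z, 2))  # -0.6, i.e. 0.6 standard deviations below the mean
```

A negative z-score means the child measures below the reference mean; comparing mean z-scores between allergy groups is what lets the study contrast growth across groups of different ages.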

"Children who are allergic to cow's milk had lower mean weight and height when compared with kids who are allergic to peanuts and tree nuts," she says. "These growth deficits remained prominent in the 5- to 8-year-old and the 9- to 12-year-old age ranges."

Dr. Robbins says future research will explore whether older children with cow's milk allergies begin to bridge that height gap during their teen years or if growth differences persist.

Credit: 
Children's National Hospital

Dual frequency comb generated on a single chip using a single laser

image: A compact, integrated, silicon-based chip used to generate dual combs for extremely fast molecular spectroscopy.

Image: 
A. Dutt, A. Mohanty, E. Shim, G. Patwardhan/Columbia Engineering

New York, NY--March 2, 2018--In a new paper published today in Science Advances, researchers under the direction of Columbia Engineering Professors Michal Lipson and Alexander Gaeta (Applied Physics and Applied Mathematics) have miniaturized dual-frequency combs by putting two frequency comb generators on a single millimeter-sized chip.

"This is the first time a dual comb has been generated on a single chip using a single laser," says Lipson, Higgins Professor of Electrical Engineering.

A frequency comb is a special kind of light beam with many different frequencies, or "colors," all spaced from each other in an extremely precise way. When this many-color light is sent through a chemical specimen, some colors are absorbed by the specimen's molecules. By looking at which colors have been absorbed, one can uniquely identify the molecules in the specimen with high precision. This technique, known as frequency-comb spectroscopy, enables molecular fingerprinting and can be used to detect toxic chemicals in industrial areas, to implement occupational safety controls, or to monitor the environment.
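As a sketch of the idea (with assumed, illustrative numbers rather than the actual device parameters), the lines of an ideal frequency comb sit at evenly spaced frequencies f_n = f_0 + n * f_rep:

```python
# Sketch of an ideal frequency comb: lines at f_n = f_0 + n * f_rep.
# The numbers below are illustrative, not the Columbia device's parameters.
f_rep = 200e9    # spacing between adjacent lines (Hz), assumed
f_0 = 1.0e14     # frequency of the first line (Hz), assumed

comb = [f_0 + n * f_rep for n in range(100)]

# Every pair of adjacent "colors" is separated by exactly f_rep:
spacings = {b - a for a, b in zip(comb, comb[1:])}
print(spacings)  # {200000000000.0}
```

It is this exact, uniform spacing that lets a spectrometer tell precisely which comb lines a specimen has absorbed, and hence which molecules it contains.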

"Dual-comb spectroscopy is this technique put on steroids," says Avik Dutt, former student in Lipson's group (now a postdoctoral scholar at Stanford) and lead author of the paper. "By mixing two frequency combs instead of a single comb, we can increase the speed at which measurements are made by a thousandfold or more."

The work also demonstrated the broadest frequency span of any on-chip dual comb--i.e., the difference between the colors on the low-frequency end and the high-frequency end is the largest. This span enables a larger variety of chemicals to be detected with the same device, and also makes it easier to uniquely identify the molecules: the broader the range of colors in the comb, the broader the diversity of molecules that can see the colors.

Conventional dual-comb spectrometers, which have been introduced over the last decade, are bulky tabletop instruments, and not portable due to their size, cost, and complexity. In contrast, the Columbia Engineering chip-scale dual comb can easily be carried around and used for sensing and spectroscopy in field environments in real time.

"There is now a path for trying to integrate the entire device into a phone or a wearable device," says Gaeta, Rickey Professor of Applied Physics and of Materials Science.

The researchers miniaturized the dual comb by putting both frequency comb generators on a single millimeter-sized chip. They also used a single laser to generate both combs, rather than the two lasers used in conventional dual combs, which reduced the experimental complexity and removed the need for complicated electronics. To produce minuscule rings--tens of micrometers in diameter--that guide and enhance light with ultralow loss, the team used silicon nitride, a glass-like material they have perfected specifically for this purpose. By combining the silicon nitride with platinum heaters, they were able to very finely tune the rings and make them work in tandem with the single input laser.

"Silicon nitride is a widely used material in the silicon-based semiconductor industry that builds computer/smartphone chips," Lipson notes. "So, by leveraging the capabilities of this mature industry, we can foresee reliable fabrication of these dual comb chips on a massive scale at a low cost."

Using this dual comb, Lipson's and Gaeta's groups demonstrated real-time spectroscopy of the chemical dichloromethane at very high speeds, over a broad frequency range. A widely used organic solvent, dichloromethane is abundant in industrial areas as well as in wetland emissions. The chemical is carcinogenic, and its high volatility poses acute inhalation hazards. Columbia Engineering's compact, chip-scale dual comb spectrometer was able to measure a broad spectrum of dichloromethane in just 20 microseconds (there are 1,000,000 microseconds in one second), a task that would have taken at least several seconds with conventional spectrometers.
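The numbers quoted above imply a rough speedup factor, sketched here; 2 seconds is taken as a conservative stand-in for "at least several seconds":

```python
# Rough speedup implied by the release's figures (illustrative only):
t_dual_comb = 20e-6    # 20 microseconds per broad spectrum
t_conventional = 2.0   # conservative floor for "at least several seconds"

speedup = t_conventional / t_dual_comb
print(f"speedup of at least {speedup:,.0f}x")
```

Even with this conservative baseline, the chip-scale dual comb is around five orders of magnitude faster per spectrum than a conventional instrument.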

As opposed to most spectrometers, which focus on gas detection, this new, miniaturized spectrometer is especially suited for liquids and solids, which have broader absorption features than gases: the range of frequencies they absorb is more spread out. "That's what our device is so good at generating," Gaeta explains. "Our very broad dual combs have a moderate spacing between the successive lines of the frequency comb, as compared to gas spectrometers, which can get away with a less broad dual comb but need a fine spacing between the lines of the comb."

The team is working on broadening the frequency span of the dual combs even further, and on increasing the resolution of the spectrometer by tuning the lines of the comb. In a paper published last November in Optics Letters, Gaeta's and Lipson's groups demonstrated some steps towards showing an increased resolution.

"One could also envision integrating the input laser into the chip for further miniaturizing the system, paving the way for commercializing this technology in the future," says Dutt.

Credit: 
Columbia University School of Engineering and Applied Science

Do racial and gender disparities exist in newer glaucoma treatments?

SAN FRANCISCO - The American Glaucoma Society today announced that it has awarded a grant to Mildred MG Olivier, MD, to study how often minimally invasive glaucoma surgery (MIGS) devices and procedures are used in black and Latino glaucoma patients and whether these devices perform similarly across races, ethnicities, genders, ages, and regions. The goal of Dr. Olivier's research is to increase quality care for glaucoma patients in all demographic groups.

Dr. Olivier's team includes Eydie Miller-Ellis, MD, Clarisse Croteau-Chonka, PhD, Oluwatosin U. Smith, MD, Maureen G. Maguire, PhD, and Brian VanderBeek, MD, MPH, MSCE. They will work in conjunction with a bioinformatics team from the American Academy of Ophthalmology to mine the IRIS® Registry (Intelligent Research in Sight) database to explore this important topic.

The Academy's IRIS Registry is the nation's first and largest comprehensive eye disease clinical registry. It allows ophthalmologists to pioneer research based on real-world clinical practice. The Academy developed it as part of the profession's shared goal of continual improvement in the delivery of eye care.

The American Glaucoma Society last year issued a call for proposals for an IRIS Registry grant. The Society today announced at its 2018 Annual Meeting that it chose Dr. Olivier's application among seven strong proposals, primarily because it will help clinicians better understand where MIGS fits in the management of glaucoma.

Glaucoma affects 2.7 million people in the United States, a number that is expected to grow to 6.3 million by 2050. Glaucoma affects minority groups at a significantly higher rate than whites. A 2010 National Eye Institute report finds that minority groups account for 34 percent of glaucoma cases, even though they represent just 30 percent of the U.S. population.

Glaucoma is typically treated with eyedrops or lasers first. If the disease progresses to moderate or advanced stages, ophthalmologists perform an invasive surgical procedure called trabeculectomy. MIGS offers a new option for patients with mild to moderate glaucoma, filling a gap between medication and more invasive filtration procedures. MIGS is also safer than trabeculectomy, causes fewer complications, and allows patients to recover faster.

Dr. Olivier's research seeks to assess the real-world demographic differences in the use, safety, and effectiveness of MIGS compared with other glaucoma treatments. Findings may also help inform the structure of future studies involving MIGS procedures.

Credit: 
American Academy of Ophthalmology

Unprecedentedly wide and sharp dark matter map

image: Hyper Suprime-Cam image of a location with a highly significant dark matter halo detected through the weak gravitational lensing technique. This halo is so massive that some of the background (blue) galaxies are stretched tangentially around the center of the halo. This is called strong lensing.

Image: 
NAOJ

A research team of multiple institutes, including the National Astronomical Observatory of Japan and the University of Tokyo, released an unprecedentedly wide and sharp dark matter map based on newly obtained imaging data from Hyper Suprime-Cam on the Subaru Telescope. The dark matter distribution is estimated with the weak gravitational lensing technique. The team located the positions and lensing signals of the dark matter halos and found indications that the number of halos could be inconsistent with what the simplest cosmological model suggests. This could be a new clue to understanding why the expansion of the Universe is accelerating.

Mystery of the accelerated Universe

In the 1930s, Edwin Hubble and his colleagues discovered the expansion of the Universe. This was a big surprise to most people, who had believed that the Universe stayed the same throughout eternity. A formula relating matter and the geometry of space-time was required to express the expansion of the Universe mathematically. Coincidentally, Einstein had already developed just such a formula. Modern cosmology is based on Einstein's theory of gravity.

It had been thought that the expansion was decelerating over time because the contents of the Universe (matter) attract each other. But in the late 1990s, it was found that the expansion has been accelerating since about 8 billion years ago. This was another big surprise, which earned the astronomers who discovered the acceleration the Nobel Prize in 2011. To explain the acceleration, we have to consider something new in the Universe that pushes space apart.

The simplest resolution is to put the cosmological constant back into Einstein's equation. The cosmological constant was originally introduced by Einstein to realize a static universe, but was abandoned after the discovery of the expansion of the Universe. The standard cosmological model (called LCDM) incorporates the cosmological constant. LCDM is supported by many observations, but the question of what causes the acceleration still remains. This is one of the biggest problems in modern cosmology.

Wide and deep imaging survey using Hyper Suprime-Cam

The team is leading a large scale imaging survey using Hyper Suprime-Cam (HSC) to probe the mystery of the accelerating Universe. The key here is to examine the expansion history of the Universe very carefully.

In the early Universe, matter was distributed almost, but not quite, uniformly. There were slight fluctuations in density, which can now be observed through the temperature fluctuations of the cosmic microwave background. These slight matter fluctuations evolved over cosmic time because of the mutual gravitational attraction of matter, eventually growing into the large-scale structure of the present-day Universe. It is known that the growth rate of this structure depends strongly on how the Universe expands. For example, if the expansion rate is high, it is hard for matter to contract and the growth rate is suppressed. This means that the expansion history can be probed inversely through observations of the growth rate.

It is important to note that the growth rate cannot be probed well if we observe only visible matter (stars and galaxies), because nearly 80 % of the matter is an invisible substance called dark matter. The team therefore adopted the weak gravitational lensing technique: the images of distant galaxies are slightly distorted by the gravitational field generated by the foreground dark matter distribution, and analysis of this systematic distortion enables us to reconstruct that foreground dark matter distribution.

This technique is observationally very demanding because the distortion of each galaxy is generally very subtle. Precise shape measurements of faint and apparently small galaxies are required. This motivated the team to develop Hyper Suprime-Cam. They have been carrying out a wide field imaging survey using Hyper Suprime-Cam since March 2014. At this writing in February 2018, 60 % of the survey has been completed.

Unprecedentedly wide and sharp dark matter map

In this release, the team presents the dark matter map based on the imaging data taken by April 2016. This is only 11 % of the planned final map, but it is already unprecedentedly wide. There has never been such a sharp dark matter map covering such a wide area.

Imaging observations are made through five different color filters. By combining these color data, it is possible to make a crude estimate of the distances to the faint background galaxies (called photometric redshift). At the same time, the lensing efficiency is most prominent when the lens is located directly between the distant galaxy and the observer. Taking advantage of the photometric redshift information, galaxies are grouped into redshift bins, and the dark matter distribution is reconstructed from this grouped sample using tomographic methods, yielding the 3D distribution. Data for 30 square degrees are used to reconstruct the redshift range between 0.1 (~1.3 G light-years) and 1.0 (~8 G light-years). At the redshift of 1.0, the angular span corresponds to 1.0 G x 0.25 G light-years. Such a 3D dark matter mass map is itself quite new; this is the first time the increase in the number of dark matter halos over time can be seen observationally.
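As a sanity check, the quoted redshift-distance pairs are consistent with light-travel (lookback) distances in a standard flat Lambda-CDM cosmology. The parameter values below (H0 = 70 km/s/Mpc, Omega_m = 0.3) are assumed for illustration and are not stated in the release:

```python
import math

# Light-travel distance = c * lookback time, with lookback time
#   t_L(z) = (1/H0) * integral_0^z dz' / ((1+z') E(z'))
# for a flat Lambda-CDM cosmology. H0 and Omega_m below are assumed values.

H0 = 70.0                       # Hubble constant, km/s/Mpc (assumed)
hubble_time_gyr = 977.8 / H0    # 1/H0 in Gyr (977.8 converts km/s/Mpc units)
omega_m, omega_l = 0.3, 0.7     # matter and cosmological-constant densities

def E(z):
    """Dimensionless Hubble parameter H(z)/H0 for flat Lambda-CDM."""
    return math.sqrt(omega_m * (1 + z) ** 3 + omega_l)

def lookback_gyr(z, steps=10000):
    """Lookback time in Gyr via midpoint-rule integration."""
    dz = z / steps
    total = sum(1.0 / ((1 + (i + 0.5) * dz) * E((i + 0.5) * dz))
                for i in range(steps))
    return hubble_time_gyr * total * dz

# In G light-years (numerically equal to lookback time in Gyr):
print(f"z = 0.1 -> ~{lookback_gyr(0.1):.1f} G light-years")  # ~1.3
print(f"z = 1.0 -> ~{lookback_gyr(1.0):.1f} G light-years")  # ~7.7
```

With these assumed parameters the integral gives roughly 1.3 G light-years at redshift 0.1 and a bit under 8 G light-years at redshift 1.0, matching the figures quoted above.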

What the dark matter halo count suggests and future prospects

The team counted the number of dark matter halos whose lensing signal is above a certain threshold. This is one of the simplest measurements of the growth rate. The count suggests that the number of dark matter halos is less than what is expected from LCDM. This could indicate a flaw in LCDM and that we might have to consider an alternative to the simple cosmological constant (Note 1).

The statistical significance is, however, still limited, as the large error bars suggest. There is no conclusive evidence to reject LCDM, but many astronomers are interested in testing it because discrepancies can be a useful probe for unlocking the mystery of the accelerating Universe. Further observation and analysis are needed to confirm the discrepancy with higher significance. Other probes of the growth rate (e.g., the angular correlation of galaxy shapes) are also being analyzed by the team to check the validity of standard LCDM.

Credit: 
National Institutes of Natural Sciences

Sedative may prevent delirium in the ICU

image: Sedative was found to prevent delirium in critically ill patients.

Image: 
ATS

March 2, 2018--A low dose of the sedative dexmedetomidine given at night may prevent delirium in critically ill patients, according to new research published online in the American Thoracic Society's American Journal of Respiratory and Critical Care Medicine.

In "Low-dose Nocturnal Dexmedetomidine Prevents ICU Delirium: A Randomized, Placebo-controlled Trial," researchers report on what is believed to be the first investigation to identify a drug to prevent adults from developing delirium in the ICU.

The study was led by Yoanna Skrobik, MD, FRCP(c) MSc, a clinician-scientist at McGill University Health Centre in Canada who conducted the first studies of delirium in the critically ill and whose research has shown that delirium prolongs hospital stay and increases mortality.

"In other studies, dexmedetomidine has been associated with lower delirium prevalence rates than other sedatives," Dr. Skrobik said. "But whether dexmedetomidine might actually prevent delirium was not clear."

The study enrolled 100 ICU patients at two hospitals, one in Quebec, the other in Boston. The patients did not have delirium at the time of ICU enrollment. Half the patients were randomly assigned to receive intravenous dexmedetomidine; the other half were infused with a placebo. Neither the patients nor the ICU health care team knew which arm of the trial the patients were in.

The study found that compared to the placebo arm, those receiving dexmedetomidine during their ICU stay:

Were more likely to remain free of delirium throughout their ICU stay: 80 percent vs. 54 percent.

Spent more days free of delirium in the ICU: 8 vs. 6 days.

Were less likely, if in pain, to experience severe pain: 44 percent vs. 66 percent.
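From the delirium-free rates quoted above (80 percent vs. 54 percent), one can derive standard trial effect sizes; the release itself does not report these, so the calculation is purely illustrative:

```python
# Illustrative effect sizes derived from the percentages quoted above;
# these derived numbers are not reported in the release itself.
delirium_free_dex = 0.80      # dexmedetomidine arm
delirium_free_placebo = 0.54  # placebo arm

arr = delirium_free_dex - delirium_free_placebo   # absolute risk reduction
nnt = 1.0 / arr                                   # number needed to treat

print(f"absolute risk reduction: {arr:.0%}")      # 26%
print(f"number needed to treat:  ~{nnt:.1f}")     # ~3.8, i.e. about 4 patients
```

Read this way, roughly one additional patient remained delirium-free for every four treated with nocturnal dexmedetomidine rather than placebo.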

The authors expected that dexmedetomidine would also improve sleep quality. A previous study of a select group of critically ill patients found that to be the case. In the current study, however, there was no difference in sleep quality between the two groups, as assessed by a self-reported questionnaire.

Dr. Skrobik said that the sleep findings should be interpreted in light of two caveats: sleep in the ICU is almost always abnormal, and no validated instrument exists to identify when an ICU patient is experiencing normal vs. abnormal sleep.

There was also no difference in length of ICU stay or hospital stay, or in ICU mortality. However, a reduction in opiate requirements confirmed other studies describing dexmedetomidine's potential to relieve pain.

"We believe this is a practice-altering study and that dexmedetomidine should be used with patients at high risk for delirium," Dr. Skrobik said.

Credit: 
American Thoracic Society

Two-year study of gun policy research finds gaps, proposes fixes

One of the largest-ever studies of U.S. gun policy finds a shortage of evidence about the effects of most gun laws, although researchers from the RAND Corporation did find persuasive evidence about the effects of several common policies.

The findings are from RAND's sweeping Gun Policy in America initiative, which also evaluated the views of gun policy experts with opposing perspectives on the likely effects of gun laws to identify where compromise might be possible.

RAND researchers evaluated thousands of studies to assess the available evidence for the effect of 13 common gun policies on a range of outcomes, including injuries and deaths, mass shootings, defensive gun use, and participation in hunting and sport shooting. The strongest available evidence supports the conclusion that laws designed to keep guns out of the hands of children reduce firearm self-injuries, suicides and unintended injuries to children.

There is moderate evidence to support conclusions that background checks reduce firearm suicides and firearm homicides, and that laws prohibiting the purchase or possession of guns by individuals with some forms of mental illness reduce violent crime, according to the analysis. There also is moderate evidence that stand-your-ground laws, which allow people to use guns to defend themselves without requiring that they first attempt to retreat, if possible, may increase state homicide rates.

The RAND Gun Policy in America initiative is intended to provide new nonpartisan information to national and local discussions about gun policy.

"The goal of this project is to help build consensus around a shared set of facts about gun policy by demonstrating where scientific evidence is accumulating," said Andrew Morral, the project's leader and a behavioral scientist at RAND, a nonprofit research organization.

Out of the thousands of studies evaluated, the RAND analysis identified 62 that investigated the causal effects of gun policies on any of the outcomes RAND investigated, including those of concern to gun owners and the gun industry, as well as violence, suicide and injury. Most other studies demonstrate only an association between gun policies and outcomes, which offers less-persuasive evidence that the policies caused changes in the outcomes. In many cases, researchers found no studies linking gun policies to the many effects they examined.

"While science can teach us a lot about gun policy, research in this area is generally far behind where it is for most other causes of death that claim similar numbers of lives in the U.S. each year," Morral said. "This does not mean that gun policies have no effects. Most laws probably have some effect, however small or unintended. Instead, the limited evidence base reflects shortcomings in the contributions that scientific study has made to the policy debate."

Many of the studies RAND reviewed used weak methods of establishing the effects of gun laws, often because historical information on variations in state gun laws is unavailable or difficult to collect. To encourage more high-quality research, the Gun Policy in America project created a large database of state-level U.S. gun policy laws, covering the period of 1979 to 2016. RAND is making this data set available to researchers and the public, and is using it to develop new estimates of the effects of gun laws that will be released later this year.

Because there is so little scientific evidence to draw on, RAND researchers also surveyed 95 gun policy experts from across the ideological spectrum to identify where there might be consensus or opportunities for compromise. Two groups with opposing views were identified among the experts. One group had views closely aligned with the National Rifle Association, and the other had views aligned with the Brady Campaign to Prevent Gun Violence.

Despite sharp disagreements on their ratings of the overall merits of different gun laws, the two groups of experts were often not far apart in their estimates of the likely effects of laws.

There was comparatively strong agreement between the groups of experts about the positive effects of expanded mental health-related prohibitions, the required reporting of lost or stolen firearms, media campaigns to prevent children from accessing guns, and the required surrender of firearms by those prohibited from possessing guns, such as people convicted of felonies.

The largest disagreements in opinion involved policies that allow the carrying of concealed weapons without permits and the elimination of gun-free zones. The project includes a website visualization tool that allows visitors to explore a wide range of scenarios to see how different combinations of gun policies would affect outcomes nationally and in individual U.S. states, according to the two groups of experts.

"Both groups overwhelmingly favored policies they believed would reduce firearm homicides and suicides, but there is disagreement about which laws would have these effects," Morral said. "Collecting more and stronger evidence about the true effects of laws is a necessary and promising step toward building greater consensus around effective gun policy."

Looking forward, the RAND team recommends that the federal government increase funding for gun research to levels comparable to federal research investments in other significant causes of death and injury, such as automobile accidents. In addition, the focus of research should expand to include the effects that policies have on defensive gun use, gun ownership, hunting and recreation activities, jobs in the gun industry and officer-involved shootings.

"Issues beyond gun violence are often central considerations in gun policy debates, but we identified no qualifying research examining most of them. If we had better information on the effects of gun laws on some of these issues, we would be in a better position to develop fair and effective gun laws," Morral said.

The United States has the highest gun ownership rate in the world, with estimates suggesting that Americans own as many as 300 million guns. Between 10 million and 20 million Americans actively participate in hunting or sport shooting annually, and the gun industry generates $16 billion in revenue and employs hundreds of thousands in gun manufacturing, distribution, sales and recreation.

At the same time, more than 36,000 people died of gunshot wounds in the U.S. in 2015, and Americans are 25 times more likely to die by gun homicide than residents of other wealthy countries.

About two-thirds of gunshot deaths in the U.S. are suicides, while mass shootings account for just 0.5 percent of all gun fatalities annually. Despite wide acknowledgement that gun violence levels are too high, little consensus has been reached about what gun policies should be adopted widely.

Credit: 
RAND Corporation

Planning for smallpox outbreak must consider immunosuppression

Unprecedented levels of immunosuppression in countries like Australia and the US must be considered in planning for the real risk of smallpox re-emerging in the world, an expert in infectious diseases warned.

"Smallpox was eradicated in 1980, but in 2017, Canadian scientists created a smallpox-like virus in a lab using just mail-order DNA," says UNSW Sydney Professor of Infectious Disease Epidemiology Raina MacIntyre. "Now in 2018, these same scientists published a step-by-step method to create a pox virus in a lab, making the threat of smallpox re-emergence even greater."

"Experts have long feared this scenario, and it is now a reality," says Professor MacIntyre, who is also Director of the NHMRC Centre for Research Excellence, Integrated Systems for Epidemic Response. "This highlights the real risk of smallpox re-emerging in the world, without terrorists needing to access closely guarded stockpiles of the virus."

In the nearly 40 years since smallpox was eradicated, much has changed in society. Advances in medicine mean that many more people today live with a weakened immune system, such as people with HIV and people being treated for cancer or autoimmune conditions.

The study shows that children and young people aged 0-19 years would have the highest risk of infection in a smallpox epidemic, while the risk of severe disease and death is highest in people aged over 45. The research, published in the US Centers for Disease Control and Prevention journal Emerging Infectious Diseases, also finds that almost 1 in 5 people in cities like Sydney and New York have a weakened immune system, which would make the impact of a smallpox attack much more severe.

"The rates of immunosuppression were even higher for the age group 60-65 years, because of natural decline of the immune system with age. We have an ageing population, and this must be considered when planning for a bioterrorism attack, and vaccination strategies during an outbreak," Professor MacIntyre says.

Professor MacIntyre has led a study that used a mathematical model to identify the impact of smallpox re-emerging in cities like Sydney and New York. The research identified that the highest rates of smallpox infection in these cities would be in people aged under 20 years, but the highest death rates would be among people aged 45 and over.

The study also is a tale of two cities with very different vaccination policies. Smallpox vaccine was routine in New York, but not used widely in Sydney. Professor Raina MacIntyre says almost 22% of the current New York population is vaccinated for smallpox, compared with only 10% of the current Sydney population - and these were mostly migrants who were vaccinated in their country of origin.

The research therefore looked at whether past vaccination in older people gave much protection. Despite widespread past vaccination in New York, the modelled impact of smallpox in this city was more serious than in Sydney due to its larger number of immunosuppressed people.

Immunologist and HIV expert Professor Tony Kelleher from the Kirby Institute said "Immunosuppression would likely drive the impact of smallpox more than past vaccination, especially with so many people today living with HIV, or receiving cancer therapy or other medical treatments that suppress the immune system."

"Vaccine immunity wanes over time, and recent vaccination is needed for protection. The good news is, people who have been vaccinated in the past would have a faster response to re-vaccination in the event of an outbreak," Professor MacIntyre says. "The bad news is, both cities show the highest smallpox infection rates for unvaccinated young people, aged 5-20 years."

The study also notes the importance of vaccination for health workers, and the need for hospitals to have appropriate isolation facilities to minimise the impact of a smallpox outbreak.

One of the study authors, Professor Mike Lane, Emeritus Professor from Emory University in the US and the former director of the US Centers for Disease Control Smallpox Eradication Program, says: "Should there be a smallpox attack with a virus similar to the virus which was eradicated, the prospects for bringing an epidemic under control are good, with good public health follow up and vaccination of contacts".

Credit: 
University of New South Wales