Tech

Scientists can now control thermal profiles at the nanoscale

image: This figure shows evidence that the two nanorods were heated to different temperatures. The researchers collected data on how the heated nanorods and surrounding glycerol scattered photons from a beam of green light. The five graphs show the intensity of that scattered light at five different wavelengths, and insets show images of the scattered light. Arrows indicate that peak intensity shifts at different wavelengths, an indirect sign that the nanorods were heated to different temperatures.

Image: 
Bhattacharjee et al., ACS Nano, 2019

At human scale, controlling temperature is a straightforward concept. Turtles sun themselves to keep warm. To cool a pie fresh from the oven, place it on a room-temperature countertop.

At the nanoscale -- at distances less than 1/100th the width of the thinnest human hair -- controlling temperature is much more difficult. Nanoscale distances are so small that objects easily become thermally coupled: If one object heats up to a certain temperature, so does its neighbor.

When scientists use a beam of light as that heat source, there is an additional challenge: Thanks to heat diffusion, materials in the beam path heat up to approximately the same temperature, making it difficult to manipulate the thermal profiles of objects within the beam. Scientists have never been able to use light alone to actively shape and control thermal landscapes at the nanoscale.

At least, not until now.

In a paper published online July 30 by the journal ACS Nano, a team of researchers reports that they have designed and tested an experimental system that uses a near-infrared laser to actively heat two gold nanorod antennae -- metal rods designed and built at the nanoscale -- to different temperatures. The nanorods are so close together that they are both electromagnetically and thermally coupled. Yet the team, led by researchers at the University of Washington, Rice University and Temple University, measured temperature differences between the rods as high as 20 degrees Celsius. By simply changing the wavelength of the laser, they could also change which nanorod was cooler and which was warmer, even though the rods were made of the same material.

"If you put two similar objects next to each other on a table, ordinarily you would expect them to be at the same temperature. The same is true at the nanoscale," said lead corresponding author David Masiello, a UW professor of chemistry and faculty member in both the Molecular & Engineering Sciences Institute and the Institute for Nano-Engineered Systems. "Here, we can expose two coupled objects of the same material composition to the same beam, and one of those objects will be warmer than the other."

Masiello's team performed the theoretical modeling to design this system. He partnered with co-corresponding authors Stephan Link, a professor of both chemistry and electrical and computer engineering at Rice University, and Katherine Willets, an associate professor of chemistry at Temple University, to build and test it.

Their system consisted of two nanorods made of gold -- one 150 nanometers long and the other 250 nanometers long, or about 100 times thinner than the thinnest human hair. The researchers placed the nanorods close together, end to end on a glass slide surrounded by glycerol.

They chose gold for a specific reason. In response to sources of energy like a near-infrared laser, electrons within gold can "oscillate" easily. These electronic oscillations, or surface plasmon resonances, efficiently convert light to heat. Though both nanorods were made of gold, their differing size-dependent plasmonic polarizations meant that they had different patterns of electron oscillations. Masiello's team calculated that, if the nanorod plasmons oscillated with either the same or opposite phases, they could reach different temperatures -- countering the effects of thermal diffusion.

Link's and Willets' groups designed the experimental system and tested it by shining a near-infrared laser on the nanorods. They studied the beam's effect at two wavelengths -- one for oscillating the nanorod plasmons with the same phase, another for the opposite phase.

The team could not directly measure the temperature of each nanorod at the nanoscale. Instead, they collected data on how the heated nanorods and surrounding glycerol scattered photons from a separate beam of green light. Masiello's team analyzed those data and discovered that the nanorods refracted photons from the green beam differently due to nanoscale differences in temperature between the nanorods.

"This indirect measurement indicated that the nanorods had been heated to different temperatures, even though they were exposed to the same near-infrared beam and were close enough to be thermally coupled," said co-lead author Claire West, a UW doctoral candidate in the Department of Chemistry.

The team also found that, by changing the wavelength of near-infrared light, they could change which nanorod -- short or long -- heated up more. The laser could essentially act as a tunable "switch," with a change in wavelength altering which nanorod was hotter. The temperature differences between the nanorods also varied with their separation, but reached as high as 20 degrees Celsius.

The team's findings have a range of applications based on controlling temperature at the nanoscale. For example, scientists could design materials that photo-thermally control chemical reactions with nanoscale precision, or temperature-triggered microfluidic channels for filtering tiny biological molecules.

The researchers are working to design and test more complex systems, such as clusters and arrays of nanorods. These require more intricate modeling and calculations. But given the progress to date, Masiello is optimistic that this unique partnership between theoretical and experimental research groups will continue to yield advances.

"It was a team effort, and the results were years in the making, but it worked," said Masiello.

Credit: 
University of Washington

Older adults more likely to condemn even accidental harm

CHICAGO -- As people get older, they are more likely to condemn and want to punish others for acts that cause harm, even if no harm was intended, according to research presented at the annual convention of the American Psychological Association.

"Although older adults are capable of empathizing [about] someone's intentions when making a moral evaluation, they appear less likely to do so than younger individuals when those actions cause harm," said Janet Geipel, PhD, of the University of Chicago, who presented the research.

Geipel and her colleagues conducted a series of experiments examining how younger adults (ages 21 to 39) and older adults (ages 63 to 90) would morally evaluate accidentally harmful and accidentally helpful actions.

The first experiment involved 60 participants equally split into younger and older adults. Each participant was presented with eight hypothetical scenarios in which a person's actions resulted in either a positive or negative outcome. In each case, the scenario was described in such a way that the participant could infer whether the act was intended to cause the outcome that it did, as opposed to simply being an accident.

After each scenario with a negative outcome, participants were asked to judge the immorality of the described action and how much it should be punished. In the case of a positive outcome, participants were asked to judge the goodness of the action and how much it should be rewarded. Participants answered all questions on a scale of zero to ten.

For instance, in one scenario, a character named Joanna and one of her friends are in a boat in a part of the sea with lots of poisonous jellyfish. Her friend asks if it is OK to go swimming, and Joanna (knowing the water is not safe) tells her to go ahead. The friend goes swimming, gets stung and goes into shock. In another version of the scenario, Joanna has read (incorrectly) that the local jellyfish were harmless and unknowingly puts her friend at risk.

The researchers found that older adults were more likely to condemn accidentally harmful acts and recommend that the person be punished, even when it appeared that the harmful action was unintentional. Interestingly, they did not find any age difference in how accidentally helpful actions were evaluated.

A second experiment involved 82 participants and was similar to the first experiment. Participants were presented with four different scenarios: One in which accidental harm was caused by negligence (e.g., Chloe sold a sick dog that was infected with rabies because she did not check the animal carefully), one in which the agent acted with due care (e.g., Chloe sold a dog with rabies after a careful assessment of the dog made her believe it was healthy), one with a neutral outcome (e.g., Chloe intended to sell a healthy dog and did so) and one in which the agent acted with negative intentions (e.g., Chloe knew the dog had rabies and sold it anyway).

"We found that while younger adults condemned more severely negligent than non-negligent actions, older participants condemned both equally," said Geipel.

In a second part of the same experiment, participants were presented with the accidental harm scenarios from the first experiment and asked to what extent they thought the person was negligent and whether his or her actions should be condemned.

"We found that older adults condemned the accidental transgressors more than did younger adults and were more likely to attribute negligence to the actions," said Geipel. "Further analysis showed that perceived negligence mediated the relationship between age group and the judgment of moral wrongness."

Geipel believes this phenomenon may have something to do with the fact that people experience cognitive decline as they age. Making moral judgments based on intent requires more cognitive effort than simply condemning outcomes. Since older adults may find considering intent more mentally taxing than younger adults, they would be more likely to condemn even unintentional harm.

These findings may have important implications, especially for the legal system, said Geipel. For example, a jury member who has to evaluate whether someone is guilty needs to consider intent.

"The present results suggest that older adults may attend less to the intentions of the accused and more to the negative outcomes that the accused produced," said Geipel. "Put simply, the present findings imply that older adults may be more likely to convict.

Credit: 
American Psychological Association

NASA gives Typhoon Lekima a twice-over with the Aqua satellite

image: On Aug. 9, 2019 at 12:41 a.m. EDT (0441 UTC), the AIRS instrument aboard NASA's Aqua satellite analyzed Lekima's cloud top temperatures in infrared light. AIRS found that the coldest cloud top temperatures (purple), in the strongest thunderstorms, were as cold as or colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius) in the thick band of thunderstorms around the eye and in bands north and southeast of the center.

Image: 
NASA JPL/Heidar Thrastarson

NASA's Aqua satellite provided infrared and visible views of Typhoon Lekima as it was approaching landfall in China. China has posted Typhoon and Heavy Rain Warnings for Lekima.

On Aug. 9 at 12:41 a.m. EDT (0441 UTC), the Atmospheric Infrared Sounder or AIRS instrument aboard NASA's Aqua satellite analyzed Lekima's cloud top temperatures in infrared light. The stronger the storms, the higher they extend into the troposphere, and the colder their cloud top temperatures. AIRS found the coldest cloud top temperatures, as cold as or colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius), around the eye and in thick bands of thunderstorms wrapping into the center from the north and southeast. Storms with cloud tops that cold have been found to generate heavy rainfall.

On Aug. 9 at 12:45 a.m. EDT (0445 UTC), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument that also flies aboard NASA's Aqua satellite provided visible views of powerful Typhoon Lekima affecting China. The satellite showed a clear, small, rounded eye surrounded by a thick, powerful ring of thunderstorms and a large band of thunderstorms extending north of the center.

At 11 a.m. EDT (1500 UTC), the center of Typhoon Lekima was located near latitude 27.8 degrees north and longitude 121.8 degrees east. Lekima is moving toward the north-northeast. Maximum sustained winds are near 95 knots (109 mph/176 kph), making Lekima the equivalent of a Category 2 hurricane on the Saffir-Simpson Hurricane Wind Scale.

China's National Meteorological Center (NMC) issued a Red Warning for the typhoon at 10:00 a.m. local time on August 9 and an Orange Warning for rainstorms.

NMC said, "Typhoon Lekima is forecast to move northwest direction at the speed of 15-20 kph and moves towards coastal regions of Zhejiang and make landfall in coastal regions from Xiangshan to Cangnan of Zhejiang from the dawn to the daytime of August 10. From August 9 to 10, Bashi Channel, Taiwan Strait, coastal sea areas of Taiwan, East China Sea, Hangzhou Bay, Yangtze River Estuary, coastal regions of Zhejiang, Shanghai and southern Jiangsu, northern Taiwan Island, southern Huanghuai Sea, central-northern Fujian will be exposed to scale 7-9 gale.

Heavy rain to rainstorm will pummel Zhejiang, northern Fujian, eastern and southern Jiangsu, Shanghai, southeastern Anhui, and Taiwan Island. Heavy downpour will pound central-eastern Zhejiang, southern Shanghai, and Taiwan Island. Torrential downpour (250-320mm) will slam eastern Zhejiang and central Taiwan Island. (Aug. 9)."

The Orange Warning says, "It is predicted that from August 9 to 10, heavy rain to rainstorm will grip Zhejiang, northern Fujian, eastern and southern Jiangsu, Shanghai, southeastern Anhui, Taiwan Island, Beijing, central-southern Hebei, central Henan, southwestern and northern Chongqing, southern Sichuan, central-northern Yunnan, and eastern Heilongjiang. Heavy downpour will pound central-eastern Zhejiang, southern Shanghai, and Taiwan Island. Torrential downpour (250-320mm) will slam eastern Zhejiang and central Taiwan Island."

For updated forecasts from NMC, visit: http://www.cma.gov.cn/en2014/weather/Warnings/

Credit: 
NASA/Goddard Space Flight Center

Development of simplified new mass spectrometric technique using laser and graphene

image: Chair-professor Dae Won Moon in the Department of New Biology (left) and Research Fellow Jae Young Kim in the Department of Robotics Engineering (right)

Image: 
DGIST

A technology that can obtain high-resolution, micrometer-scale images for mass spectrometric analysis without sample preparation has been developed. The team of DGIST Research Fellow Jae Young Kim and Chair-professor Dae Won Moon succeeded in precisely analyzing and imaging biological samples at micrometer scale using a small, inexpensive laser.

DGIST announced that the team of Research Fellow Jae Young Kim in the Department of Robotics Engineering and Chair-professor Dae Won Moon developed a technology that can analyze samples without any preparation steps. Because it can obtain high-resolution mass spectrometric images using a continuous-wave laser, without a special experimental environment, the technology is expected to be applied widely in precision medicine and medical diagnosis.

Mass spectrometric imaging of biological samples normally requires extensive advance preparation of 'specimens', thin slices cut from the object to be analyzed. The specimens must be artificially altered because they cannot be analyzed accurately at room temperature or atmospheric pressure. Research Fellow Kim started the research to develop a more convenient analysis technology and ease that burden.

The research team installed a lens that delivers a continuous-wave laser directly below the microscope substrate on which the specimen is placed, and fired the laser at the specimen to measure mass spectra from the molecules released by desorption.

The mass spectra can be obtained with a continuous-wave laser, whose energy is weaker than that of other lasers, because a 'graphene substrate' is placed below the specimen.

Since honeycomb-patterned graphene has very high thermal conductivity and can convert light into heat, it can supply enough heat for specimen analysis from the small amount of light generated by the continuous-wave laser. The technology is also advantageous for obtaining high-resolution analysis images, because it leaves enough space to observe the specimen much more closely, even when using a 20x magnifying lens.

Chair-professor Dae Won Moon of the Department of New Biology explained, "Through this technology, we could greatly shorten the preparation time for analysis by omitting the specimen preprocessing step. Our next plan is to develop the technology further so it can be applied in various areas such as medical diagnosis."

Credit: 
DGIST (Daegu Gyeongbuk Institute of Science and Technology)

A Finnish study finds bowel preparation for colon surgery unnecessary

image: Gastrointestinal surgeons Laura Koskenvuo (left) and Ville Sallinen at work at Meilahti Hospital.

Image: 
Ville Sallinen

In recent decades, patients in Europe coming in for colectomies, or surgical procedures targeted at the colon, have not been routinely subjected to what is known as bowel preparation, where the bowel is emptied before the operation. In the United States, on the other hand, cleansing the bowel is relatively common.

Several extensive retrospective studies conducted in the United States were published a few years ago, indicating that bowel preparation combined with the preoperative oral administration of antibiotics appeared to significantly reduce surgical site infections. Based on these results, American surgical associations ended up recommending bowel preparation before colectomies. In Finland, the attitude to this practice has so far remained somewhat reserved due to the absence of randomised studies.

"Bowel preparation is a stressful procedure for the patient, so conducting it is only justified when it genuinely benefits the patient. However, not a single randomised follow-up study had been conducted on the topic, so we decided to carry one out ourselves," says MD Laura Koskenvuo, gastrointestinal surgeon and Ph.D. at the Helsinki University Hospital.

The study, published in the distinguished journal The Lancet, was carried out at the Helsinki and Oulu University Hospitals, as well as the Central Finland and Seinäjoki Central Hospitals. A total of 400 patients awaiting colectomy took part in the study, half of whom were randomised into a preparation group which was given orally administered antibiotics combined with bowel cleansing with a drinkable cleansing liquid and the other half into a group in which no such preparations were made.

"According to our findings, there were no differences in treatment outcomes between the groups. Bowel preparation did not reduce surgical site infections or the total number or severity of surgical complications. Neither was there any difference in the number of days spent at the hospital," notes MD Ville Sallinen, gastrointestinal surgeon and adjunct professor at the Helsinki University Hospital.

"It appears that this stressful procedure provides no benefit to patients."

Credit: 
University of Helsinki

Mathematicians of TU Dresden develop new statistical indicator

Most of us know this phenomenon only too well: as soon as it is hot outside, you get an appetite for a cooling ice cream. But would you have thought that mathematics could be involved?

Let us explain: The rising temperatures and the rising ice cream consumption are two statistical variables in linear dependence; they are correlated.

In statistics, correlations are important for predicting the future behaviour of variables. Such scientific forecasts are frequently requested by the media, be it for football or election results.

To measure linear dependence, scientists use the so-called correlation coefficient, which was first introduced by the British natural scientist Sir Francis Galton (1822-1911) in the 1870s. Shortly afterwards, the mathematician Karl Pearson provided a formal mathematical justification for the correlation coefficient. Therefore, mathematicians also speak of the "Pearson product-moment correlation" or the "Pearson correlation".

If, however, the dependence between the variables is non-linear, the correlation coefficient is no longer a suitable measure for their dependence.
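As a toy illustration of that limitation, the short Python sketch below (using made-up numbers, with the ice cream example above only as a motivating analogy) computes the Pearson correlation coefficient for a linear relationship and for a purely quadratic one: the coefficient is close to 1 in the first case and close to 0 in the second, even though the second pair of variables is clearly dependent.

    import numpy as np

    rng = np.random.default_rng(0)

    # Linear dependence: hypothetical ice cream sales rising with temperature.
    temp = rng.uniform(10, 35, 500)              # daily temperature in deg C
    sales = 2.5 * temp + rng.normal(0, 5, 500)   # made-up sales figures
    print(np.corrcoef(temp, sales)[0, 1])        # close to +1

    # Non-linear (quadratic) dependence: clearly dependent, yet r is near 0.
    x = rng.uniform(-1, 1, 500)
    y = x**2 + rng.normal(0, 0.05, 500)
    print(np.corrcoef(x, y)[0, 1])               # close to 0 despite dependence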

René Schilling, Professor of Probability at TU Dresden, emphasises: "Up to now, it has taken a great deal of computational effort to detect dependencies between more than two high-dimensional variables, in particular when complicated non-linear relationships are involved. We have now found an efficient and practical solution to this problem."

Dr. Björn Böttcher, Prof. Martin Keller-Ressel and Prof. René Schilling from TU Dresden's Institute of Mathematical Stochastics have developed a dependence measure called "distance multivariance". The definition of this new measure and the underlying mathematical theory were published in the leading international journal Annals of Statistics under the title "Distance Multivariance: New Dependence Measures for Random Vectors".

Martin Keller-Ressel explains: "To calculate the dependence measure, not only the values of the observed variables themselves, but also their mutual distances are recorded and from these distance matrices, the distance multivariance is calculated. This intermediate step allows for the detection of complex dependencies, which the usual correlation coefficient would simply ignore. Our method can be applied to questions in bioinformatics, where big data sets need to be analysed."
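To make the distance-matrix idea concrete, here is a minimal Python sketch of its simplest special case: the distance covariance between two one-dimensional variables, computed from double-centered distance matrices. Roughly speaking, distance multivariance generalizes this construction to several, possibly high-dimensional, variables; the authors' own implementation is the R package 'multivariance' mentioned below, so this sketch is only an illustration of the principle, and its function and variable names are invented.

    import numpy as np

    def double_center(d):
        """Double-center a pairwise distance matrix."""
        return d - d.mean(axis=0) - d.mean(axis=1, keepdims=True) + d.mean()

    def distance_covariance(x, y):
        """Sample distance covariance between two 1-D samples."""
        a = double_center(np.abs(x[:, None] - x[None, :]))
        b = double_center(np.abs(y[:, None] - y[None, :]))
        return np.sqrt(np.mean(a * b))

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, 500)
    y = x**2 + rng.normal(0, 0.05, 500)                   # non-linear dependence
    print(np.corrcoef(x, y)[0, 1])                        # Pearson r near 0
    print(distance_covariance(x, y))                      # clearly greater than 0
    print(distance_covariance(x, rng.normal(size=500)))   # much smaller: independent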

In a follow-up study, it was shown that the classical correlation coefficient and other known dependence measures can be recovered as limiting cases of distance multivariance.

Björn Böttcher concludes by pointing out: "We provide all necessary functions in the package 'multivariance' for the free statistics software 'R', so that all interested parties can test the application of the new dependence measure."

Credit: 
Technische Universität Dresden

1-2 caffeinated drinks not linked with higher risk of migraines; 3+ may trigger them

BOSTON - Afflicting more than one billion adults worldwide, migraine is the third most prevalent illness in the world. In addition to severe headache, symptoms of migraine can include nausea, changes in mood, sensitivity to light and sound, as well as visual and auditory hallucinations. People who suffer from migraine report that weather patterns, sleep disturbances, hormonal changes, stress, medications and certain foods or beverages can bring on migraine attacks. However, few studies have evaluated the immediate effects of these suspected triggers.

In a study published today in the American Journal of Medicine, researchers at Beth Israel Deaconess Medical Center (BIDMC), Brigham and Women's Hospital and the Harvard T.H. Chan School of Public Health (HSPH) evaluated the role of caffeinated beverages as a potential trigger of migraine. Led by Elizabeth Mostofsky, ScD, an investigator in BIDMC's Cardiovascular Epidemiology Research Unit and a member of the Department of Epidemiology at HSPH, researchers found that, among patients who experience episodic migraine, one to two servings of caffeinated beverages were not associated with headaches on that day, but three or more servings of caffeinated beverages may be associated with higher odds of migraine headache occurrence on that day or the following day.

"While some potential triggers - such as lack of sleep - may only increase migraine risk, the role of caffeine is particularly complex, because it may trigger an attack but also helps control symptoms," said Mostofsky. "Caffeine's impact depends both on dose and on frequency, but because there have been few prospective studies on the immediate risk of migraine headaches following caffeinated beverage intake, there is limited evidence to formulate dietary recommendations for people with migraines."

In their prospective cohort study, Mostofsky and colleagues - including Principal Investigator Suzanne M. Bertisch, MD, MPH, of the Division of Sleep and Circadian Disorders at Brigham and Women's Hospital, Beth Israel Deaconess Medical Center, and Harvard Medical School - followed 98 adults with frequent episodic migraine who completed electronic diaries every morning and every evening for at least six weeks. Every day, participants reported the total servings of caffeinated coffee, tea, soda and energy drinks they consumed, and filled out twice-daily headache reports detailing the onset, duration, intensity, and medications used for migraines since the previous diary entry. Participants also provided detailed information about other common migraine triggers, including medication use, alcoholic beverage intake, activity levels, depressive symptoms, psychological stress, sleep patterns and menstrual cycles.

To evaluate the link between caffeinated beverage intake and migraine headache on the same day or on the following day, Mostofsky, Bertisch and colleagues used a self-matched analysis, comparing an individual participant's incidence of migraines on days with caffeinated beverage intake to that same participant's incidence of migraines on days with no caffeinated beverage intake. This self-matching eliminated the potential for factors such as sex, age, and other individual demographic, behavioral and environmental factors to confound the data. The researchers further matched headache incidence by day of the week, eliminating weekend versus weekday habits that may also impact migraine occurrence.

Self-matching also allowed for the variations in caffeine dose across different types of beverages and preparations.
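To show what a self-matched comparison can look like in practice, here is a minimal Python sketch on made-up diary data. It simply contrasts, within each participant and weekday, headache occurrence on days with three or more caffeinated servings against days with none; the study's actual statistical models were more elaborate, and all column names here are hypothetical.

    import pandas as pd

    # Hypothetical diary data, one row per participant-day; column names are
    # illustrative, not the study's actual variables.
    diary = pd.DataFrame({
        "participant": [1, 1, 1, 1, 2, 2, 2, 2],
        "weekday":     ["Mon", "Mon", "Tue", "Tue", "Mon", "Mon", "Tue", "Tue"],
        "servings":    [0, 3, 0, 1, 0, 4, 2, 0],
        "headache":    [0, 1, 0, 0, 1, 1, 0, 0],
    })

    # Keep only the days being contrasted: no caffeine vs. three or more servings.
    subset = diary[(diary["servings"] == 0) | (diary["servings"] >= 3)].copy()
    subset["exposed"] = subset["servings"] >= 3

    # Self-matched comparison: within each participant-and-weekday stratum,
    # compare headache occurrence on exposed vs. unexposed days, then average
    # the within-person differences across strata.
    rates = (subset.groupby(["participant", "weekday", "exposed"])["headache"]
                   .mean()
                   .unstack("exposed")
                   .dropna())
    print((rates[True] - rates[False]).mean())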

"One serving of caffeine is typically defined as eight ounces or one cup of caffeinated coffee, six ounces of tea, a 12-ounce can of soda and a 2-ounce can of an energy drink," said Mostofsky. "Those servings contain anywhere from 25 to 150 milligrams of caffeine, so we cannot quantify the amount of caffeine that is associated with heightened risk of migraine. However, in this self-matched analysis over only six weeks, each participant's choice and preparation of caffeinated beverages should be fairly consistent."

Overall, the researchers saw no association between one to two servings of caffeinated beverages and the odds of headaches on the same day, but they did see higher odds of same-day headaches on days with three or more servings of caffeinated beverages. However, among people who rarely consumed caffeinated beverages, even one to two servings increased the odds of having a headache that day.

"Despite the high prevalence of migraine and often debilitating symptoms, effective migraine prevention remains elusive for many patients," said Bertisch. "This study was a novel opportunity to examine the short-term effects of daily caffeinated beverage intake on the risk of migraine headaches. Interestingly, despite some patients with episodic migraine thinking they need to avoid caffeine, we found that drinking one to two servings/day was not associated with higher risk of headache. More work is needed to confirm these findings, but it is an important first step."

Credit: 
Beth Israel Deaconess Medical Center

Despite habitat protection, endangered owls decline in Mount Rainier National Park

image: Spotted Owls in Mount Rainier National Park are declining as competing Barred Owls spread into the region.

Image: 
Anna Mangan

When the Northern Spotted Owl was protected under the Endangered Species Act in 1990, the primary threat to the species was the loss of the old-growth forest it depends on. However, new research published in The Condor: Ornithological Applications shows that the Northern Spotted Owl population in Washington's Mount Rainier National Park has declined sharply in the past two decades despite the long-term preservation of habitat within the park. The culprit? The spread of Barred Owls, a closely related, competing species that has moved into Spotted Owls' range from the east.

Biologists have seen Barred Owls in Spotted Owl territories within the national park more and more frequently since Spotted Owl surveys began in 1997. For their new study, Oregon Cooperative Fish and Wildlife Research Unit's Anna Mangan, the National Park Service's Tara Chestnut, and their colleagues analyzed two decades' worth of data from these surveys. "We found that Spotted Owls now occupy 50% fewer territories in the park than they did 20 years ago when the study began, despite the lack of habitat disturbance," says Chestnut. "Spotted Owls were less likely to be present in territories where Barred Owls were detected, and if Spotted Owls were there, sharing space with Barred Owls made them less likely to breed. Only 18 adult Spotted Owls were detected in the study area in 2016, down from a high of 30 owls in 1998."

"Barred Owls eat a wider range of foods and use a greater variety of forested habitats, including the old-growth forest required by Spotted Owls, and these generalist traits have aided them in their highly successful range expansion throughout the Pacific Northwest," explains co-author Katie Dugger, a researcher the US Geological Survey's Oregon Cooperative Fish and Wildlife Research Unit. "Barred Owls are now competing with Northern Spotted Owls for food and space, and increased Barred Owl densities are associated with declines in Northern Spotted Owl populations across their range."

"What is particularly alarming is that this decline has occurred even at Mount Rainier, where Spotted Owl habitat has been protected for over 100 years, with virtually no fire or logging disturbance," says Mangan. "With Barred Owls detected at nearly every Spotted Owl territory monitored in the park, the future of Spotted Owls at Mount Rainier is tenuous. It also suggests that preserving owl habitat, while still crucial, is likely no longer enough to sustain the Spotted Owl population at Mount Rainier."

If current trends continue, scientists predict that the Spotted Owl could be extinct in the region within approximately six to eight decades. "Conservation managers can focus on protecting old-growth habitat with steeper slopes, as we found this to have higher Spotted Owl occupancy, and can continue to monitor Barred Owl populations to better understand their effect on local Spotted Owl populations," adds Mangan. "Managers will need to consider some creative solutions, and likely some unpopular choices, if the Northern Spotted Owl is going to be prevented from going extinct on public lands."

Credit: 
American Ornithological Society Publications Office

World's largest frogs build their own ponds for their young

image: An adult Goliath frog caught by a local froghunter.

Image: 
Marvin Schäfer

The first example of "nest"-building in an African amphibian, the Goliath frog, has been described in a new article in the Journal of Natural History, and could explain why they have grown to be giant.

Researchers observed adult Goliath frogs in the wild and found that they move rocks weighing up to 2kg while building ponds for their young, which they then guard. Goliath frogs themselves weigh up to 3.3kg and their bodies reach over 34cm, without including their legs.

"Goliath frogs are not only huge, but our discovery shows they seem to be attentive parents as well," says author Marvin Schäfer from the Berlin Natural History Museum. "The little ponds they make at the edges of fast-flowing rivers provide their eggs and tadpoles with a safe haven from sometimes torrential waters, as well as from the many predators living there. We think that the heavy work they put into excavation and moving rocks may explain why gigantism evolved in these frogs in the first place."

Despite their renown, they're found only in Cameroon and Equatorial Guinea and little is known about their biology, particularly their reproductive behaviour. Numbers of the endangered species have declined by more than 50 per cent in just 10 years, due to overhunting and deforestation; and researchers first learned about the unusual level of parental care they provide from local frog hunters, who trap adults for bush meat.

To investigate the behaviour, two researchers waded along opposite edges of the Mpoula River looking for breeding sites. At first, they could only distinguish them by finding eggs or tadpoles. However, they learned to spot material excavated and piled up in a way that defied the effects of the water current. They identified 22 potential breeding sites, 14 of which contained nearly 3,000 eggs each, spread across the entire area. At the nest which showed the most recent signs of activity, they recorded a time-lapse video using a camera trap.

The scientists found that Goliath frogs create three different types of pond. For some, they simply clear naturally-occurring rock-pools of leaf-litter and debris. The effect is still strikingly different to surrounding puddles, which have thick layers of leaves, debris and gravel. For the second type, frogs dig out leaf litter and gravel from shallow pools and push it to the edges, forming a dam. Dams are most obvious in the third type, for which frogs clear shallow water depressions of any large stones, moving them to the edge and creating a circular pond. This was the most reliable type, as eggs were least likely to spill out or become over-flooded after heavy rain.

Infrared time-lapse footage revealed an adult spending all night guarding a nest from predators, only ending the vigil just before dawn. The scientists were unable to determine which sex was responsible for building ponds and which for guarding them, although one hunter who described the behaviours in detail suggested that males do the construction while females are the guards. The researchers didn't directly observe the cleaning or digging, but over five days they followed the progress of one nest, from the first digging attempts to the depositing of eggs.

Dr Mark-Oliver Rödel, project leader and president of Frogs & Friends, says: "The fact that we've only just discovered these behaviours shows how little we know about even some of the most spectacular creatures on our planet. We hope that our findings, combined with further ongoing research, will improve our understanding of the needs of the Goliath frog so we can help support its continued survival."

Credit: 
Taylor & Francis Group

Adding MS drug to targeted cancer therapy may improve glioblastoma outcomes

image: In a patient-derived tumor model in mice, glioblastoma tumors (dark purple) shrink significantly when treated with both the MS drug teriflunomide and the targeted cancer drug BKM-120.

Image: 
UC San Diego School of Medicine

Glioblastoma is an aggressive form of brain cancer that infiltrates surrounding brain tissue, making it extremely difficult to treat with surgery. Even when chemotherapy and radiation successfully destroy the bulk of a patient's glioblastoma cells, they may not affect the cancer stem cells. This small population of tumor cells has the capacity to grow and multiply indefinitely, and can lead to tumor recurrence.

To study these tumors and test new therapies, researchers at University of California San Diego School of Medicine use mice that bear glioblastoma tumor samples donated by patients who underwent surgery. With this approach, they recently discovered that treatment with both a targeted cancer therapy and the multiple sclerosis (MS) drug teriflunomide halts glioblastoma stem cells, markedly shrinks tumors and improves mouse survival.

The study is published August 7, 2019 in Science Translational Medicine.

"We used to think we'd find a single magic bullet to treat everyone with glioblastoma," said senior author Jeremy Rich, MD, professor of medicine at UC San Diego School of Medicine and director of neuro-oncology and director of the Brain Tumor Institute at UC San Diego Health. "But now we realize that we need to find out what drives each patient's unique tumor, and tailor our treatments to each individual."

In recent years, the desire to personalize cancer therapies has led to the development of several targeted cancer therapies. These drugs work by inhibiting specific molecules that cancer cells rely on for growth and survival. For that reason, targeted therapies can work better and cause fewer side effects compared to traditional therapies, such as chemotherapy and radiation. Yet targeted therapies haven't been as successful as the scientific community had hoped. According to Rich, that's because it's usually not enough to inhibit just one molecule or pathway driving tumor formation or survival -- cancer cells will find a way to compensate.

"As scientists, we are often looking at small snapshot of what a cancer stem cell is doing," said Rich, who is also a faculty member in the Sanford Consortium for Regenerative Medicine and Sanford Stem Cell Clinical Center at UC San Diego Health. "As a clinician, I also try to look at the bigger picture. I'm not looking for just one or two drugs to help my patients, because I think it's going to take a whole personalized cocktail of many different drugs to really get the cancer cells on the run."

To continue replicating, glioblastoma stem cells need to keep making more DNA, and to do that they need to make pyrimidine, one of DNA's building blocks. Mining tumor genomic data available for hundreds of glioblastoma patients in six different databases, Rich's team noticed that higher scores on pyrimidine metabolism were associated with poorer patient survival.
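A minimal sketch of that kind of database mining, under assumed inputs, is shown below: patients are split at the median of a hypothetical pyrimidine-metabolism score and their survival curves are compared with a log-rank test using the lifelines library. The file name, column names and scoring are illustrative, not the study's actual data or pipeline.

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Hypothetical table: one row per patient, with a pathway score plus
    # survival time (months) and an event indicator (1 = death observed).
    df = pd.read_csv("glioblastoma_cohort.csv")

    median_score = df["pyrimidine_score"].median()
    high = df[df["pyrimidine_score"] >= median_score]
    low = df[df["pyrimidine_score"] < median_score]

    # Kaplan-Meier survival curves for the two groups.
    for label, group in [("high score", high), ("low score", low)]:
        KaplanMeierFitter().fit(group["survival_months"],
                                group["death_observed"],
                                label=label).plot_survival_function()

    # Log-rank test: is survival worse in the high-score group?
    result = logrank_test(high["survival_months"], low["survival_months"],
                          event_observed_A=high["death_observed"],
                          event_observed_B=low["death_observed"])
    print(result.p_value)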

"It's a lot of hard work to be a cancer cell. They have to work all the time to find ways to pull together pathways to survive and grow," said Rich. "Not that I have sympathy for them. But knowing this helps us know where they might have weak spots."

The MS drug teriflunomide happens to block pyrimidine-forming enzymes. Rich and team discovered that teriflunomide inhibits glioblastoma cell survival, self-renewal and tumor initiation in the laboratory. What's more, the researchers found that when treated with teriflunomide alone, the patient-derived tumors shrank slightly and the mice bearing them survived somewhat longer, compared to mock-treated mice.

The team also tested two targeted cancer therapies: BKM-120, an inhibitor that works best in glioblastoma cells driven by lack of an enzyme called PTEN, and lapatinib, an inhibitor used to treat cancers driven by mutations in the Epidermal Growth Factor Receptor (EGFR). With BKM-120 treatment alone, tumors shrank moderately and mice survived even longer, compared to either mock-treated mice or teriflunomide-treated mice.

But when treated with both teriflunomide and BKM-120, tumors shrank markedly on average and the mice survived significantly longer, compared to mock-treated mice or mice who received either treatment alone.

"We're excited about these results, especially because we're talking about a drug that's already known to be safe in humans," Rich said. "But this laboratory model isn't perfect -- yes it uses human patient samples, yet it still lacks the context a glioblastoma would have in the human body, such as interaction with the immune system, which we know plays an important role in determining tumor growth and survival. Before this drug could become available to patients with glioblastoma, human clinical trials would be necessary to support its safety and efficacy."

Credit: 
University of California - San Diego

Controlling the shape-shifting skeletons of cells

image: A three-dimensional look at an aster, a structure composed of tiny protein filaments that have been engineered to be controlled with light.

Image: 
Caltech

You know you have a skeleton, but did you know that your cells have skeletons, too? Cellular skeletons, or cytoskeletons, are shapeshifting networks of tiny protein filaments, enabling cells to propel themselves, carry cargo, and divide. Now, an interdisciplinary team of Caltech researchers has designed a way to study and manipulate the cytoskeleton in test tubes in the lab. Understanding how cells control movement could one day lead to tiny, bioinspired robots for therapeutic applications. The work also contributes to the development of new tools for manipulating fluids on very small scales relevant to molecular biology and chemistry.

The work is described in a paper appearing in the August 8 issue of the journal Nature.

The building blocks of the cellular cytoskeleton are thin, tube-like filaments called microtubules that can form together into three-dimensional scaffolds. Each microtubule is 1,000 times thinner than a human hair and only about 10 micrometers long (about 1,000 times smaller than a common black ant). Along with motor proteins that power movement, these incredibly small structures combine to propel the relatively large cell--like ants steering and powering a car.

In previous studies, researchers have taken these molecules out of the cell and put them into test tubes, where the tubules and motor proteins spontaneously group together to organize themselves into star-shaped structures called asters. How asters in a test tube are related to a cytoskeleton powering cell movement, however, is still unclear. Moreover, the collective microtubule organization demonstrated by aster formation involves interacting forces that are not entirely understood.

"What we wanted to know was: how do you get from these spontaneously forming aster structures in the lab, to a cell controlling its movement? And, how can we control these molecules the way a cell does?" says graduate student Tyler Ross, first author on the study.

Led by Ross, a team of Caltech researchers explored how to manipulate the component filaments and motor proteins outside of the cell's natural environment. In test tubes, they linked motor proteins to light-activated proteins that are naturally found in plants, so that the tubules would only organize into asters when light was shining on them. In this way, the researchers could control when and where asters would form by projecting different patterns of light, enabling them to develop theories about the physical mechanisms underlying aster formation.

Controlling the asters not only allowed for the study of their formation but also enabled the team to build things out of the structures. Ross developed simple procedures of light patterns to place, move, and merge asters of various sizes. The technique offers a way to manipulate structures and study fluid dynamics at a minuscule length scale that is usually difficult to work at; fluids exhibit tricky behaviors at such small volumes.

"Generally, it's really difficult to manipulate fluids and structures on this length scale. But this is the scale that we're most interested in for studying cells and chemistry; all of molecular biology works on this scale," says Ross. "Our light-based system allows us to dynamically manipulate our system. We could look through a microscope and say, 'Okay we have enough over here, let's start routing things over there,' and change the light pattern accordingly. We could use aster structures in such a way that they could stir and mix solutions at very small length scales."

The research is a collaboration between the laboratories of Matt Thomson, assistant professor of computational biology and Heritage Medical Research Institute Investigator, and Rob Phillips, Fred and Nancy Morris Professor of Biophysics, Biology, and Physics. This collaboration, notes Thomson, enabled pivotal breakthroughs in the project, which Ross had begun in Thomson's laboratory at UC San Francisco (UCSF) before the two came to Caltech in 2017. At Caltech, the pair teamed up with Heun Jin Lee, a staff scientist with extensive expertise in optics, to develop a specialized microscope with which they could view aster formation and direct precise patterns of light.

"This has been one of the great collaborations I've seen in my career," says Thomson. "This story really speaks to the community, how you can do work across different fields and people will support and cultivate it. We had feedback from people who work in DNA nanotechnology and people who work in chemical engineering and fluid dynamics."

Credit: 
California Institute of Technology

Cancer survivors in high deductible health plans more likely to have delayed care

A new study from American Cancer Society investigators finds cancer survivors in high deductible health plans were more likely to report delaying or foregoing care. The study appears in the Journal of Oncology Practice. Below is a short recap of the study by lead author Zhiyuan "Jason" Zheng, PhD, principal scientist, health services research at the American Cancer Society.

Question Asked:

How do high-deductible health plan (HDHP) enrollment and health savings account (HSA) status affect cancer survivorship, access to care, and health care utilization?

Summary Answer:

HDHP enrollment without an HSA is associated with worse access to care in privately insured cancer survivors and individuals without a cancer history. HSA enrollment coupled with an HDHP may mitigate this effect. Associations did not vary by cancer history. Emergency department (ED) visits didn't vary significantly by insurance type in cancer survivors.

What We Did:

The 2010 to 2017 National Health Interview Survey was used to identify a nationally representative sample of privately insured adults ages 18 to 64 years (cancer survivors, n = 4,321; individuals without a cancer history, n = 95,316). The sample was categorized into six groups: three groups of cancer survivors enrolled in low-deductible health plans (LDHPs), HDHPs with HSA, and HDHPs without HSA; and three groups of adults without a cancer history enrolled in LDHPs, HDHPs with HSA, and HDHPs without HSA. Separate multivariable logistic regressions were conducted to assess the association between HDHP enrollment with or without HSA status and delayed/forgone care because of cost and ED visits.
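For readers who want a sense of what "separate multivariable logistic regressions" across six plan groups can look like, here is a minimal Python sketch. The file name, column names, category labels and covariates are all hypothetical, and the real analysis also has to account for the NHIS complex survey design, which a plain logit model does not.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis file; all column names and category labels are
    # illustrative, not the NHIS variables used in the paper.
    df = pd.read_csv("nhis_private_18_64.csv")

    # Six-group exposure: cancer history ("cancer" / "no_cancer") crossed with
    # plan type ("LDHP", "HDHP_HSA", "HDHP_noHSA").
    df["plan_group"] = df["cancer_history"] + "_" + df["plan_type"]

    # Separate multivariable logistic regressions for each outcome, with
    # LDHP enrollees without a cancer history as the reference group and a
    # few example covariates.
    for outcome in ["delayed_or_forgone_care", "any_ed_visit"]:
        model = smf.logit(
            f"{outcome} ~ C(plan_group, Treatment(reference='no_cancer_LDHP'))"
            " + age + C(sex) + C(education) + C(income_level)",
            data=df,
        ).fit()
        print(model.summary())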

What We Found:

There were no differential impacts of HDHP enrollment and HSA status on access to care and ED visits by cancer history. However, we found that, even when covered by private insurance, cancer survivors enrolled in an HDHP with or without an HSA (8.9% and 13.9%, respectively; both P

Bias, Confounding Factors:

Our findings that HSA enrollment appears to mitigate problems with access to care among those with HDHPs could be confounded by greater financial capability among those who are eligible for and enroll in an HSA (eg, financial literacy, knowledge of financial options, and assets) than those who are eligible but without an HSA. We did not have information about the reasons for enrollment in an HDHP (eg, because of lower premiums or employers' limited plan choices).

Real-Life Implications:

Health plan benefit managers, payers, and policy makers should identify reasons for low HSA participation rates, provide more education about the benefits of different health plan options to help patients understand the implications of health plans and HSAs, and develop tools to forecast medical expenses and facilitate HSA enrollment.

Credit: 
American Cancer Society

Tel Aviv U and Technion researchers wrest control of one of world's most secure PLCs

Cybersecurity researchers at Tel Aviv University and the Technion - Israel Institute of Technology have discovered critical vulnerabilities in the Siemens S7 Simatic programmable logic controller (PLC), one of the world's most secure PLCs, which are used to run industrial processes.

Prof. Avishai Wool and M.Sc. student Uriel Malin of TAU's School of Electrical Engineering worked together with Prof. Eli Biham and Dr. Sara Bitan of the Technion to disrupt the PLC's functions and gain control of its operations.

The team is slated to present their findings at Black Hat USA week in Las Vegas this month, revealing the security weaknesses they found in the newest generation of the Siemens systems and how they reverse-engineered the proprietary cryptographic protocol in the S7.

The scientists' rogue engineering workstation posed as a so-called TIA engineering station that interfaced with the Simatic S7-1500 PLC controlling the industrial system. "The station was able to remotely start and stop the PLC via the commandeered Siemens communications architecture, potentially wreaking havoc on an industrial process," Prof. Wool explains. "We were then able to wrest the controls from the TIA and surreptitiously download rogue command logic to the S7-1500 PLC."

The researchers hid the rogue code so that a process engineer could not see it. If the engineer were to examine the code from the PLC, he or she would see only the legitimate PLC source code, unaware of the malicious code running in the background and issuing rogue commands to the PLC.

The research combined deep-dive studies of the Siemens technology by teams at both the Technion and TAU.

Their findings demonstrate how a sophisticated attacker can abuse Siemens' newest generation of industrial controllers that were built with more advanced security features and supposedly more secure communication protocols.

Siemens doubled down on industrial control system (ICS) security in the aftermath of the Stuxnet attack in 2010, in which its controllers were targeted in a sophisticated attack that ultimately sabotaged centrifuges in the Natanz nuclear facility in Iran.

"This was a complex challenge because of the improvements that Siemens had introduced in newer versions of Simatic controllers," adds Prof. Biham. "Our success is linked to our vast experience in analyzing and securing controllers and integrating our in-depth knowledge into several areas: systems understanding, reverse engineering, and cryptography."

Dr. Bitan noted that the attack emphasizes the need for investment by both manufacturers and customers in the security of industrial control systems. "The attack shows that securing industrial control systems is a more difficult and challenging task than securing information systems," she concludes.

Following the best practices of responsible disclosure, the research findings were shared with Siemens well in advance of the scheduled Black Hat USA presentation, allowing the manufacturer to prepare.

Credit: 
American Friends of Tel Aviv University

Back-to-back low snow years will become more common, study projects

WASHINGTON--Consecutive low snow years may become six times more common across the Western United States over the latter half of this century, leading to ecological and economic challenges such as expanded fire seasons and poor snow conditions at ski resorts, according to a study.

"Across the West, we're generally losing a lot of our snowpack - in many places, low snow conditions will be increasingly consistent from year to year," said Adrienne Marshall, a postdoctoral researcher at the University of Idaho College of Natural Resources and lead author of the new study in AGU's journal Geophysical Research Letters. "Every time we have a snow drought, we're delving into our water resources and the ecosystem's resources. We're drawing down on our savings without restocking the bank."

Previous research shows warming temperatures linked to climate change will generally reduce snowpack and lead to earlier snowmelt in the Western U.S., but the year-to-year variability of snowpack had not been well established. In the new study, researchers analyzed projected changes in the year-to-year variability of peak snowpack and the timing of peak snowpack using historical conditions from 1970-99 and projected snowpack for 2050-79 under a high carbon emissions future climate scenario adopted by the Intergovernmental Panel on Climate Change. In this scenario, emissions rise throughout the 21st century.

The average frequency of consecutive snow droughts - years with low snowpack - rose from 6.6 percent historically to a projected 42.2 percent for 2050 to 2079 across Western mountains. The authors defined a snow drought as low snowpack conditions that historically occurred one out of every four years. These changes were greatest in the Sierra Nevada and Cascades and at the lower elevations of the northern Rockies.
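As a rough illustration of what counting back-to-back snow droughts can look like, the Python sketch below flags years whose peak snow water equivalent falls below the historical 25th percentile (low snowpack that historically occurred about one year in four) and reports how often two drought years occur in a row. The data, threshold choice and drought definition here are simplified stand-ins, not the study's methodology.

    import numpy as np

    def consecutive_drought_frequency(peak_swe, threshold):
        """Fraction of consecutive year pairs in which both years fall below
        the snow-drought threshold."""
        drought = peak_swe < threshold
        return (drought[1:] & drought[:-1]).mean()

    rng = np.random.default_rng(0)

    # Hypothetical peak snow-water-equivalent series (mm) for one location.
    historical = rng.normal(600, 150, 30)   # stand-in for 1970-99
    future = rng.normal(450, 150, 30)       # stand-in for 2050-79, drier on average

    # "Snow drought" = peak SWE below the historical 25th percentile.
    threshold = np.percentile(historical, 25)

    print(consecutive_drought_frequency(historical, threshold))  # historical rate
    print(consecutive_drought_frequency(future, threshold))      # projected rate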

"Throughout the Inland Northwest including northern and central Idaho, we expect to see a real increase in consecutive snow droughts," Marshall said. "The droughts will likely occur in the lower elevation ranges that historically received a decent amount of snow that is now falling as rain."

The study also projects year-to-year variability of peak snowpack across the West will decrease, mostly in areas transitioning from snow- to rain-dominated precipitation. In addition, the timing of yearly peak snowpack is predicted to occur earlier and across a broader range of months. Snowpack historically peaked in April, but 2050 to 2079 projections predict more peak snowpacks in March or earlier.

The researchers suggest ski resorts will need to prepare for both consistently lower snowpack and more inconsistent timing of peak snowpack, and low elevation ski resorts should expect an increase in snow drought. According to the paper, reservoir managers will need to develop adaptation strategies to account for increases in snow drought and earlier, more inconsistent timing of snowmelt on top of the usual pre-scheduled water releases.

The authors also suggest a consistent decrease in maximum snowpack may negatively impact threatened wildlife, such as the wolverine; vegetation, including tree establishment and summer water stress; and fire activity.

Credit: 
American Geophysical Union

Persistent plume

image: A towering cloud of smoke billows in the Willow Fire near Payson, Arizona on July 8, 2004.

Image: 
Eric Neitzel/Wikimedia Commons

Thunderstorms generated by a group of giant wildfires in 2017 injected a small volcano's worth of aerosol into the stratosphere, creating a smoke plume that lasted for almost nine months. CIRES and NOAA researchers studying the plume found that black carbon or soot in the smoke was key to the plume's rapid rise: the soot absorbed solar radiation, heating the surrounding air and allowing the plume to quickly rise.

The billowing smoke clouds provided researchers with an ideal opportunity to test climate models that estimate how long the particulate cloud would persist--after achieving a maximum altitude of 23 km, the smoke plume remained in the stratosphere for many months.

These models are also important in understanding the climate effects of nuclear war or geoengineering.

"We compared observations with model calculations of the smoke plume. That helped us understand why the smoke plume rose so high and persisted so long, which can be applied to other stratospheric aerosol injections, such as from volcanoes or nuclear explosions," said NOAA scientist Karen Rosenlof, a member of the author team that also included scientists from CU Boulder, Naval Research, Rutgers and other institutions. The findings were published today in the journal Science.

During the summer of 2017, wildfires raged across the Pacific Northwest. On August 12 in British Columbia, a group of fires and ideal weather conditions produced five near-simultaneous towering clouds of smoke or pyrocumulonimbus clouds that lofted smoke high into the stratosphere. Within two months, the plume rose from its initial height of about 12 km up to 23 km and persisted in the atmosphere for much longer--satellites could spot it even after eight months.

"The forest fire smoke was an ideal case study for us because it was so well observed by satellites," said lead author Pengfei Yu, a former CIRES scientist at NOAA, now at the Institute for Environment and Climate Research at Jinan University in Guangzhou, China.

Instruments on two satellites--the International Space Station and NASA's CALIPSO--and on NOAA's balloon-borne Printed Optical Particle Spectrometer, or POPS, provided the aerosol measurements the researchers needed.

Yu and his colleagues compared those observations with results from a global climate and chemistry model to get a match for how high up the smoke rose and how long it lasted in the atmosphere. With measurements of the rise rate and evolution of the smoke plume, the researchers could estimate the amount of black carbon in the smoke and how quickly the organic particulate material was destroyed in the stratosphere.

They found that the plume's rapid rise could only be explained by the presence of black carbon or soot, which comprised about 2 percent of the total mass of the smoke. The soot absorbed solar radiation, heated the surrounding air and forced the plume high into the atmosphere.

Next, the team modeled the degradation of the smoke plume in the atmosphere. They found that to mimic the smoke's observed rate of decay over the plume's multi-month lifetime, there had to be a relatively slow loss of organic carbon (through photochemical processes) that previous nuclear winter studies had assumed to be very rapid.

"We have a better understanding of how our models represent smoke. And because we can model this process, we know we can model other aerosol-related processes in the atmosphere," said Ru-Shan Gao, a NOAA scientist and one of the paper's co-authors.

CU Boulder's Brian Toon and Rutgers University's Alan Robock, also co-authors of the new paper, are particularly interested in what the findings mean for the climate impacts of nuclear explosions, which include a severe cooling impact dubbed "nuclear winter." In modeling the climate impacts of nuclear war, Toon, Robock and others have long expected that massive fires would create smoke plumes that could also be lofted well up into the stratosphere.

"While the rise of the smoke was predicted in the 1980s, the 2017 fire in British Columbia is the first time it has been observed," Toon said.

"It was exciting to get confirmation," Robock added.

Moreover, the detailed observations made during the 2017 fire--such as the somewhat longer-than-expected persistence of organic matter--are fueling more modeling, the two noted. It's possible that the cooling impacts of a nuclear winter could last somewhat less long than models have predicted to date, Toon said, but work is ongoing.

Credit: 
University of Colorado at Boulder