Tech

New low-cost approach detects building deformations with extreme precision in real time

image: The researchers applied their new camera-based method for measuring building deformations to monitor very small movements of an adaptive building prototype frame 9 meters tall.

Image: 
Flavio Guerra, University of Stuttgart

WASHINGTON -- A new camera-based method for measuring building deformations can detect small displacements from 10 meters away. The method could be useful for continuously detecting fast deformations in high-rise buildings, bridges and other large structures with the aim of adapting these structures to external forces.

"Our new approach to detect building deformations could be used to continuously monitor movements. For bridges, the measured deformations could be used to counteract external loads such as a truck traversing the bridge, thereby increasing the lifetime of the bridge," said Flavio Guerra from the University of Stuttgart, a member of the research team. "Because it operates in real time, it could be used to set off an alert the moment any new deformations - which can lead to cracks - were detected."

Researchers led by Tobias Haist describe the new technique in The Optical Society (OSA) journal Applied Optics. The research was conducted as part of a project that aims to develop the technology necessary to create buildings that adapt to environmental conditions such as sunlight, air temperature, wind and earthquakes.

"One day we could have lightweight buildings that change forms in response to complex wind forces and can stay still during an earthquake," said Guerra. "This type of adaptation requires extremely precise building deformation measurement so that the building's current state is estimated and the direction in which it will likely move can be predicted."

A vision-based method

The new method involves mounting a camera on a tripod a short distance from the front of the building and attaching small light emitters to the building. The camera then detects whether the light sources move relative to each other. A computer-generated hologram creates multiple copies of each light source's image on the image sensor. Averaging the movement of these copies reduces measurement errors such as noise, yielding measurement uncertainties below a hundredth of a pixel. Using multiple cameras would improve the accuracy further and enable the technique to be used on very large structures.
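The gain from averaging can be illustrated with a minimal sketch using made-up numbers (this is not the Stuttgart team's code): if each holographic copy of a spot yields a centroid estimate with independent noise, averaging N copies shrinks the random error by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

true_position = 250.0   # true spot position on the sensor, in pixels (made up)
sigma_single = 0.04     # centroid noise of one spot copy, in pixels (assumed)
n_copies = 25           # number of holographically generated copies per emitter

# One measurement: every copy gives an independently noisy centroid estimate.
copies = true_position + rng.normal(0.0, sigma_single, size=n_copies)

# Averaging the copies reduces the random error by roughly sqrt(n_copies).
estimate = copies.mean()
expected_uncertainty = sigma_single / np.sqrt(n_copies)

print(f"averaged position estimate: {estimate:.4f} px")
print(f"expected uncertainty: {expected_uncertainty:.4f} px")  # ~0.008 px here
```

With these assumed numbers the uncertainty lands below a hundredth of a pixel, consistent with the figure quoted above.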

Although fiber optic sensors can be used for structural health monitoring, they must be installed when the building is built. The new camera-based system can be attached after construction and uses hardware that is less expensive than fiber optic systems.

"The multi-point measurement approach we used is based on a relatively simple method developed for the control of coordinate measurement machines," said Guerra. "However, we applied the multi-point method for the first time on large objects outdoors under changing environmental conditions in real-time."

The researchers point out that most camera inspection systems illuminate the object -- a building in this case -- and then image it with a camera. They took a different approach by attaching light emitters to the building and directing the light directly toward the camera. This setup allows faster and more accurate measurements because the camera receives more light.

Monitoring an adaptive structure prototype

The researchers used their new method to monitor very small movements of an adaptive building prototype frame 9 meters tall. Their measurements matched well with vibrometer and strain gauge sensor data obtained for the prototype.

Next, the researchers plan to use the system to measure movement in real buildings. They also plan to make the software more robust and redundant so that it is reliable for continuous measurement 24 hours a day.

Credit: 
Optica

APS tip sheet: Origins of matter and antimatter

image: The rotation of the QCD axion (black ball) produces an excess of matter (colored balls) over antimatter, allowing galaxies and human beings to exist.

Image: 
Co et al. Physical Review Letters (2020) and NASA

The fact that the Universe contains more matter than antimatter is often referred to as "baryon asymmetry." The theoretical process behind this imbalance, "baryogenesis," is thought to have taken place in the early universe. Now, a new paper postulates a mechanism called "axiogenesis," which suggests that the excess of matter over antimatter comes from a rotation of the axion field. Raymond Co and Keisuke Harigaya used a model to explore the physics behind the rotation of the quantum chromodynamics axion, a hypothetical particle that is also a dark matter candidate. The work could provide "new research avenues for model building and studies of associated phenomenology," according to the authors.

Axiogenesis

Raymond T. Co and Keisuke Harigaya

Credit: 
American Physical Society

Experts stress radiology preparedness for COVID-19

OAK BROOK, Ill. (March 16, 2020) - Today, the journal Radiology published the policies and recommendations of a panel of experts on radiology preparedness during the coronavirus disease (COVID-19) public health crisis. The article outlines priorities for handling COVID-19 cases and suggests strategies that radiology departments can implement to contain further infection spread and protect hospital staff and other patients.

Although severe public health measures have brought infection rates under control in China, where COVID-19 began in December 2019, cases in Italy and Iran have increased exponentially.

Other countries have had approximately two months to prepare their responses to the COVID-19 epidemic. These responses are led by public health authorities in coordination with local governments and hospitals. Now that chest CT findings are no longer part of diagnostic criteria for COVID-19, the focus of most radiology departments has shifted from diagnostic capability to preparedness.

Radiology preparedness during this outbreak requires radiology department policies and procedures designed to provide sufficient capacity for continued operation during a health care emergency of unprecedented proportions and to support the care of patients with COVID-19, while maintaining radiology support for the entire hospital and health system.

Because of varying national and regional infection control policies, steps for radiology preparedness for COVID-19 will vary between institutions and clinics. The Radiology Editorial Board has assembled a team of radiologists who are active in coordination, development and implementation of radiology preparedness policies for COVID-19 at institutions or health care systems in Washington, New York, Georgia, California, Wisconsin and Singapore. Their policies have been developed in conjunction with infection control experts within their institutions.

In the article, each panel member describes their department's top priorities for COVID-19 preparedness in their environment and the steps that have been implemented to address those priorities.

"The Editorial Board hopes that readers may find similarity of the highlighted healthcare systems to their own environment, providing impetus for action or confirmation of their current preparedness activities," said David A. Bluemke, M.D., Ph.D., Radiology Editor and professor in Department of Radiology at University of Wisconsin School of Medicine and Public Health in Madison.

Priorities for COVID-19 preparedness vary among health care systems, but focus on early detection, limiting virus exposure to others, safety precautions, cleaning protocols, training, and maintenance of operations and staffing.

Western Washington state is the epicenter of the COVID-19 outbreak in the U.S. At University of Washington Medicine, the hospitals have begun screening at the high-flow main hospital entrances to check those coming in for symptoms that could be related to coronavirus infection or with risk factors related to travel or exposure.

"The radiology front desk serves as an additional screening site, with similar screening to that performed at the hospital front door," said Mahmud Mossa-Basha, M.D., associate professor of radiology at University of Washington School of Medicine in Seattle. "Patients who come in with respiratory symptoms who are undergoing outpatient imaging or procedures have their imaging exams canceled and are asked to follow up with their primary care physician."

Even after the outbreak subsides, radiology departments must continue to plan and prepare for future outbreaks and pandemics. At Singapore General Hospital, long-range planning for COVID-19 is considered a new norm for radiology operations.

"We are re-thinking how radiology can deliver optimal imaging and treatment while reducing unnecessary movement and congregation of patients within our hospital environment," said Bien Soo Tan, M.D., chair of the Division of Radiological Sciences at Singapore General Hospital. "Teleconsultation and electronic smart appointment applications and counselling are being fast tracked for implementation and will have far reaching impact on our future practice."

Credit: 
Radiological Society of North America

Data from Sweden used to examine PPI use, risk of fracture in children

What The Study Did: Data from Sweden were used to compare 115,933 pairs of children who did or didn't use proton pump inhibitors (PPIs) to examine the association between PPI use and risk of fracture in children.

Author: Yun-Han Wang, M.Sc., B.Pharm., of the Karolinska Institutet in Stockholm, Sweden, is the corresponding author.

 To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamapediatrics.2020.0007)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

California’s strict air quality regulations help farmers prosper, UCI-led study finds

Irvine, Calif. – Farmers in California’s Central Valley are not known for their love of government regulations, but those same growers have seen a boost in the productivity of their high-value crops – and greater earnings – as a result of the Golden State’s strict air pollution controls.

For a study published today in Nature Food, researchers at the University of California, Irvine and other institutions conducted a statistical analysis of pollution exposure and yields from 1980 to 2015 on a key sector making up about 38 percent of the state’s total agricultural output: perennial crops such as almonds, grapes, nectarines, peaches, strawberries and walnuts. They found that reductions in ground-level ozone during this 35-year period resulted in $600 million in increased production annually.

“A lot of California farmers may not appreciate that air quality standards have had such a benefit on their ability to grow crops,” said co-author Steven Davis, UCI associate professor of Earth system science. “The irony is that by fighting against certain environmental regulations, these folks may be damaging their own earning capacity.”

The researchers also projected yield changes up to 2050 under various scenarios, determining that expected declines in ambient ozone will result in a 5 percent boost in wine grapes, an 8 percent climb in nectarines and a 20 percent jump in table grapes. They discovered, however, that yields of other crops, such as almonds, may suffer comparable decreases due to higher temperatures.

Davis noted that earlier studies on the impact of climate warming and ambient ozone on our ability to grow food have focused on high-volume staple crops such as wheat, soy and rice. But he and his colleagues chose to concentrate on perennials because of the long-term investment they represent and the fact that California is a major supplier of this type of produce.

“These aren’t the things that are providing the global population with its main source of calories. These are the sweet things in life – fruits, nuts and grapes for wine,” Davis said. “Also, monetarily, some of these crops are a lot more valuable than wheat or corn.”

Another difference is that some grains can be modified to withstand greater heat and even higher ozone levels in the air. But, for example, once planted, there’s no way to make almond trees more tolerant of changing conditions, and the capital investment in them is expected to be recouped over decades.

The study demonstrated that the effects of warming have not been statistically significant for many perennial crops to date, but that ambient ozone – much of which results from emissions from California’s energy production and transportation sectors – substantially reduces harvests of strawberries, grapes, peaches and nectarines, by as much as 22 percent in the case of table grapes.

“If you look at a map of the state, you’ll see an overlap in areas such as the San Joaquin Valley where many perennial crops are grown and which have high levels of ozone pollution,” said lead author Chaopeng Hong, a UCI postdoctoral scholar in Earth system science. “This co-location indicates that there are opportunities to increase the state’s crop production with even a localized reduction in the amount of ambient ozone pollution.”

Tropospheric ozone is created when nitrogen oxides, emitted primarily through human activities, react with volatile organic compounds in sunlight. When ozone enters a plant’s leaves through its stomata, it damages cells by oxidation, impairing photosynthesis and reducing the energy the plant can devote to producing fruit.
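For readers who want the underlying chemistry, the standard textbook cycle for tropospheric ozone production (not spelled out in the release) is:

```latex
\mathrm{NO_2} + h\nu \;(\lambda \lesssim 420\ \mathrm{nm}) \longrightarrow \mathrm{NO} + \mathrm{O}
\qquad
\mathrm{O} + \mathrm{O_2} + \mathrm{M} \longrightarrow \mathrm{O_3} + \mathrm{M}
```

Volatile organic compounds sustain the cycle by converting NO back to NO2 without consuming the ozone, which is why the combination of nitrogen oxides and VOCs in sunlight drives ozone accumulation.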

Davis said that now that he and his fellow climate scientists know more about the relationship between air pollution and agricultural output, California is in a position to serve as a test bed for different climate change mitigation scenarios.

“We can really look at the state’s energy and transportation systems and be quantitative about how those things might help or hurt agriculture,” he said. “As we transition away from fossil fuels in favor of solar and wind energy and electric vehicles, there will be big changes in ozone pollution. We can simulate those changes and project the effects on California’s most valuable crops.”

Credit: 
University of California - Irvine

Shifts in deep geologic structure may have magnified great 2011 Japan tsunami

image: Japan's risk of giant tsunamis may have grown when the angle of a down-going slab of ocean crust declined. Top: ocean crust (right) slides under continental crust at a steep angle, causing faulting (red lines) in seafloor sediments piled up behind. Bottom: as the angle shallows, stress is transferred to sediments piled onto the continental crust, and faults develop there. Blue dots indicate resulting earthquakes. At left in both images, the change in angle also shifts the region where magma fueling volcanoes is generated, pushing eruptions further inland.

Image: 
Adapted from Oryan and Buck, Nature Geoscience 2020

On March 11, 2011, a magnitude 9 earthquake struck under the seabed off Japan--the most powerful quake to hit the country in modern times, and the fourth most powerful in the world since modern record keeping began. It generated a series of tsunami waves that reached an extraordinary 125 to 130 feet high in places. The waves devastated much of Japan's populous coastline, caused three nuclear reactors to melt down, and killed close to 20,000 people.

The tsunami's obvious cause: the quake occurred in a subduction zone, where the tectonic plate underlying the Pacific Ocean was trying to slide under the adjoining continental plate holding up Japan and other landmasses. The plates had been largely stuck against each other for centuries, and pressure built up. Finally, something gave. Hundreds of square miles of seafloor suddenly lurched horizontally some 160 feet, and thrust upward by up to 33 feet. Scientists call this a megathrust. Like a hand waved vigorously underwater in a bathtub, the lurch propagated to the sea surface and translated into waves. As they approached shallow coastal waters, their energy concentrated, and they grew in height. The rest is history.

But scientists soon realized that something did not add up. Tsunami sizes tend to mirror earthquake magnitudes on a predictable scale; this one produced waves three or four times bigger than expected. Just months later, Japanese scientists identified another, highly unusual fault some 30 miles closer to shore that seemed to have moved in tandem with the megathrust. This fault, they reasoned, could have magnified the tsunami. But exactly how it came to develop there, they could not say. Now, a new study in the journal Nature Geoscience gives an answer, and possible insight into other areas at risk of outsize tsunamis.

The study's authors, based at Columbia University's Lamont-Doherty Earth Observatory, examined a wide variety of data collected by other researchers before the quake and after. This included seafloor topographic maps, sediments from underwater boreholes, and records of seismic shocks apart from the megathrust.

The unusual fault in question is a so-called extensional fault--one in which the earth's crust is pulled apart rather than being pushed together. Following the megathrust, the area around the extensional fault moved some 200 feet seaward, and a series of scarps 10 to 15 feet high could be seen there, indicating a sudden, powerful break. The area around the extensional fault was also warmer than the surrounding seabed, indicating friction from a very recent movement; that suggested the extensional fault had been jolted loose when the megathrust struck. This in turn would have added to the tsunami's power.

Extensional faults are in fact common around subduction zones--but only in oceanic plates, not the overriding continental ones, where this one was found. How did it get there? And, might such dangerous features lurk in other parts of the world?

The authors of the new paper believe the answer is the angle at which the ocean plate dives under the continental one; they say it has been gradually shallowing out over millions of years. "Most people would say it was the megathrust that caused the tsunami, but we and some others are saying there may have been something else at work on top of that," said Lamont Ph.D. student Bar Oryan, the paper's lead author. "What's new here is we explain the mechanism of how the fault developed."

The researchers say that long ago, the oceanic plate was moving down at a steeper angle, and could drop fairly easily, without disturbing the seafloor on the overriding continental plate. Any extensional faulting was probably confined to the oceanic plate behind the trench--the zone where the two plates meet. Then, starting maybe 4 million or 5 million years ago, it appears that the angle of subduction began declining. As a result, the oceanic plate began exerting pressure on sediments atop the continental plate. This pushed the sediments into a huge, subtle hump between the trench and Japan's shoreline. Once the hump got big and compressed enough, it was bound to break, and that was probably what happened when the megathrust quake shook things loose. The researchers used computer models to show how long-term changes in the dip of the plate could produce major changes in the short-term deformation during an earthquake.

There are multiple lines of evidence. For one, material taken from boreholes before the quake shows that sediments had been squeezed upward about midway between the land and the trench, while those closer to both the land and the trench had been subsiding--similar to what might happen if one laid a piece of paper flat on a table and then slowly pushed in on it from opposite sides. Also, recordings of aftershocks in the six months after the big quake showed scores of extensional-fault-type earthquakes carpeting the seabed over the continental plate. This suggests that the big extensional fault is only the most obvious one; strain was being released everywhere in smaller, similar quakes in surrounding areas, as the hump relaxed.

Furthermore, on land, Japan hosts numerous volcanoes arranged in a neat north-south arc. These are fueled by magma generated 50 or 60 miles down, at the interface between the subducting slab and the continental plate. Over the same 4 million to 5 million years, this arc has been migrating westward, away from the trench. Since magma generation tends to take place at a fairly constant depth, this adds to the evidence that the angle of subduction has gradually been growing shallower, pushing the magma-generating zone further inland.

Lamont geophysicist and coauthor Roger Buck said that the study and the earlier ones it builds on have global implications. "If we can go and find out if the subduction angle is moving up or down, and see if sediments are undergoing this same kind of deformation, we might be better able to say where this kind of risk exists," he said. Candidates for such investigation would include areas off Nicaragua, Alaska, Java and others in the earthquake zones of the Pacific Ring of Fire. "These are areas that matter to millions of people," he said.

Credit: 
Columbia Climate School

Technology to screen for higher-yielding crop traits is now more accessible to scientists

video: A team from the University of Illinois is working to revolutionize how scientists screen plants for key traits at the field level and to make this technology more accessible to other scientists to increase crop yields.

Image: 
RIPE project

As in many other industries, big data is driving innovation in agriculture. Scientists seek to analyze thousands of plants to pinpoint genetic tweaks that can boost crop production--historically, a Herculean task. To drive progress toward higher-yielding crops, a team from the University of Illinois is revolutionizing the ability to screen plants for key traits across an entire field. In two recent studies--published in the Journal of Experimental Botany (JExBot) and Plant, Cell & Environment (PC&E)--the team is making this technology more accessible.

"For plant scientists, this is a major step forward," said co-first author Katherine Meacham-Hensold, a postdoctoral researcher at Illinois who led the physiological work on both studies. "Now we can quickly screen thousands of plants to identify the most promising plants to investigate further using another method that provides more in-depth information but requires more time. Sometimes knowing where to look is the biggest challenge, and this research helps address that."

This work is supported by Realizing Increased Photosynthetic Efficiency (RIPE), an international research project that is creating more productive food crops by improving photosynthesis, the natural process all plants use to convert sunlight into energy and yields. RIPE is sponsored by the Bill & Melinda Gates Foundation, the U.S. Foundation for Food and Agriculture Research (FFAR), and the U.K. Government's Department for International Development (DFID).

The team analyzed data collected with specialized hyperspectral cameras that capture part of the light spectrum (much of which is invisible to the human eye) that is reflected off the surface of plants. Using hyperspectral analysis, scientists can tease out meaningful information from these bands of reflected light to estimate traits related to photosynthesis.

"Hyperspectral cameras are expensive and their data is not accessible to scientists who lack a deep understanding of computational analysis," said Carl Bernacchi, a research plant physiologist with the U.S. Department of Agriculture, Agricultural Research Service (USDA-ARS) at the Carl R. Woese Institute for Genomic Biology. "Through these studies, our team has taken a technology that was out of reach and made it more available to our research community so that we can unearth traits needed to provide farmers all over the world with higher-yielding crops."

The RIPE project analyzes hundreds of plants each field season. The traditional method used to measure photosynthesis requires as much as 30 minutes per leaf. While newer technologies have increased efficiency to as little as 15 seconds per plant, the study published in JExBot has increased efficiency by an order of magnitude, allowing researchers to capture the photosynthetic capacity of hundreds to thousands of plants in a research plot.

In the JExBot study, the team reviewed data from two hyperspectral cameras: one that captures spectra from 400-900 nanometers and another that captures 900-1800 nanometers. "Our previous work suggested that we should use both cameras to estimate photosynthetic capacity; however, this study suggests that only one camera, capturing 400-900 nanometers, is required," said co-first author Peng Fu, a RIPE postdoctoral researcher who led the computational work on both studies.

In the PC&E study, the team resolved to make hyperspectral information even more meaningful and accessible to plant scientists. Using just 240 bands of reflectance spectra and a radiative transfer model, the team teased out how to identify seven important leaf traits from the hyperspectral data that are related to photosynthesis and of interest to many plant scientists.

"Our results suggest we do not always need 'high-resolution' reflectance data to estimate photosynthetic capacity," Fu said. "We only need around 10 hyperspectral bands--as opposed to several hundred or even a thousand hyperspectral bands--if the data are carefully selected. This conclusion can help pave the way to make meaningful measurements with less expensive cameras."

Together, these studies will help researchers map photosynthesis across scales, from the leaf level to the field level, to identify plants with promising traits for further study.

The RIPE project and its sponsors are committed to ensuring Global Access and making the project's technologies available to the farmers who need them the most.

Credit: 
Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign

Selective killing of cancer cells by cluttering their waste disposal system

video: Time-lapse microscopy reveals the movements of lysosomes containing mixed-charge nanoparticle cargoes. Lysosomes in cancer cells (left) are slowed down and cluster around the cell center as compared to the directional trajectories of lysosomes in healthy cells (right).

Image: 
IBS

A team of researchers from the Center for Soft and Living Matter, within the Institute of Basic Science (IBS, South Korea) and affiliated with Ulsan National Institute of Science and Technology (UNIST) has discovered a novel approach to selectively target and kill several types of cancer cells.

Lysosomes are small sacs filled with enzymes and acid that break down and recycle damaged and unwanted cellular components. In other words, they are simultaneously a cell's waste container and its recycling center. Typically, lysosomes get rid of the byproducts of this degradation process by releasing them outside the cell. Releasing the rubbish outside makes obvious sense: imagine instead collecting all the rubbish in your house into a bin and then emptying that very same bin onto the kitchen floor, making your living conditions miserable. Similarly, puncturing the lysosomes and releasing their toxic contents inside the cell damages cellular components beyond repair, which, in extreme cases, can trigger cell death.

Since cancer lysosomes are easier to damage than healthy cells' lysosomes, scientists have been looking into using this strategy as a promising alternative for targeting cancers that are resistant to conventional treatments. However, only a handful of potential therapeutics can target lysosomes, and most of them lack cancer selectivity.

Published in Nature Nanotechnology, this study shows that nanoparticles covered with a mixture of positively [+] and negatively [-] charged molecules can selectively kill cancer cells by targeting their lysosomes. The death of cancer cells results from a remarkable succession of transport and aggregation phenomena, starting with the formation of small nanoparticle clusters at cell surfaces and culminating with the assembly of micron-sized nanoparticle crystals inside the cancer lysosomes. Nanoparticle crystals induce lysosomal swelling, gradual loss of the integrity of lysosomal membranes, and finally cell death.

"In this work, we have harnessed the deregulated waste management system of the cancer cells to act as a "nanoscale assembly line" for constructing high-quality nanoparticle crystals that destroy the very lysosome "reactors" that allowed them to grow in the first place," says Bartosz A. Grzybowski, co-leading author of the study.

The aggregation of mixed-charge nanoparticles is favored by the acidic environment typical of cancer cells. "Non-cancerous cells, however, also internalize mixed-charge nanoparticles, but nanoparticle aggregation is limited. The nanoparticles quickly transit through the recycling routes and are cleared from these cells," explains Kristiana Kandere-Grzybowska, co-leading author of the study.

"Our conclusions are based on a comparison of thirteen different sarcomas, melanoma, breast and lung carcinoma cell lines with four non-cancer cell types," adds the first author of the study, Magdalena Borkowska. "The nanoparticles were effective against all thirteen cancer lines, while not harming non-cancerous cells."

The aggregation of the nanoparticles as they transit through the endolysosomal system of cancer cells is a complex process. The team discovered that nanoparticles with a surface composition of about 80% [+] and 20% [-] ligands show optimal cancer selectivity. The fact that the negatively charged ligands are also pH-sensitive seems to be key to this selectivity. At the acidic pH found around cancerous cells and inside the lysosomes, these ligands are protonated and prone to interact with similar ligands on neighboring nanoparticles, thus promoting aggregation. The balance between attractive interactions - the bonds between [-] ligands and strong interactions between nanoparticle cores - and electrostatic repulsions between [+] ligands on neighboring particles determines the extent of nanoparticle aggregation. Overall, the interactions between the particles, serum proteins and the cells' internal environment work in concert to impair cancer lysosomes.

"The nanoparticle clusters may alter the lipid composition of the lysosome membrane, affect its integrity and render it less mechanically robust. Unexpectedly, our team also discovered that some proteins, such as the cell growth signaling molecules mTORC1, are displaced (and thus inhibited) from the surface of nanoparticle-containing cancer lysosomes. This is important because cancer cell growth and division require mTORC1, and nanoparticles are shutting it down only in cancer cells," explains Kandere-Grzybowska.

While single nanoparticles are approximately the same size as an average protein molecule, and thus too small to be seen with most dynamic live-cell microscopy approaches, the crystals composed of several nanoparticles can be observed. The team used a combination of complementary approaches, including dark-field microscopy, confocal reflection microscopy, and TEM, as well as biochemical and computational approaches to assess the full impact of mixed-charge nanoparticles on lysosomal organelles.

This study opens up new research directions. The mixed-charge strategy could be applied to other types of nanoparticles, such as polymer-based particles, dendrimers or iron oxide nanoparticles. Another important step will be testing the effectiveness of mixed-charge nanoparticles against tumors in animal models.

Credit: 
Institute for Basic Science

Peppered with gold

image: If a gallium-arsenide crystal is irradiated with short laser pulses, charge carriers are formed. These charges are accelerated by applying a voltage, which drives the generation of a terahertz wave.

Image: 
HZDR / Juniks

Terahertz waves are becoming ever more important in science and technology. They enable us to unravel the properties of future materials, test the quality of automotive paint and screen envelopes. But generating these waves is still a challenge. A team at Helmholtz-Zentrum Dresden-Rossendorf (HZDR), TU Dresden and the University of Konstanz has now made significant progress. The researchers have developed a germanium component that generates short terahertz pulses with an advantageous property: the pulses have an extreme broadband spectrum and thus deliver many different terahertz frequencies at the same time. As it has been possible to manufacture the component employing methods already used in the semiconductor industry, the development promises a broad range of applications in research and technology, as the team reports in the journal Light: Science & Applications (DOI: 10.1038/s41377-020-0265-4).

Just like light, terahertz waves are categorized as electromagnetic radiation. In the spectrum, they fall right between microwaves and infrared radiation. But while microwaves and infrared radiation have long since entered our everyday lives, terahertz waves are only just beginning to be used. The reason is that experts have only been able to construct reasonably acceptable sources for terahertz waves since the beginning of the 2000s. But these transmitters are still not perfect - they are relatively large and expensive, and the radiation they emit does not always have the desired properties.

One of the established generation methods is based on a gallium-arsenide crystal. If this semiconductor crystal is irradiated with short laser pulses, charge carriers are formed in the gallium arsenide. These charges are accelerated by applying a voltage, which drives the generation of a terahertz wave - basically the same mechanism as in a VHF transmitter mast, where moving charges produce radio waves.

However, this method has a number of drawbacks: "It can only be operated with relatively expensive special lasers," explains HZDR physicist Dr. Harald Schneider. "With standard lasers of the type we use for fiber-optic communications, it doesn't work." Another shortcoming is that gallium-arsenide crystals only deliver relatively narrowband terahertz pulses and thus a restricted frequency range - which significantly limits the application area.

Precious metal implants

That is why Schneider and his team are placing their bets on another material - the semiconductor germanium. "With germanium we can use less expensive lasers known as fiber lasers," says Schneider. "Besides, germanium crystals are very transparent and thus facilitate the emission of very broadband pulses." But, so far, there has been a problem: If you irradiate pure germanium with a short laser pulse, it takes several microseconds before the electrical charge in the semiconductor disappears. Only then can the crystal absorb the next laser pulse. Today's lasers, however, can fire off their pulses at intervals of a few dozen nanoseconds - a sequence of shots far too fast for germanium.

In order to overcome this difficulty, experts searched for a way of making the electrical charges in the germanium vanish more quickly. And they found the answer in a prominent precious metal - gold. "We used an ion accelerator to shoot gold atoms into a germanium crystal," explains Schneider's colleague, Dr. Abhishek Singh. "The gold penetrated the crystal to a depth of 100 nanometers." The scientists then heated the crystal for several hours at 900 degrees Celsius. The heat treatment ensured the gold atoms were evenly distributed in the germanium crystal.

Success kicked in when the team illuminated the peppered germanium with ultrashort laser pulses: instead of hanging around in the crystal for several microseconds, the electrical charge carriers disappeared again in under two nanoseconds - about a thousand times faster than before. Figuratively speaking, the gold works like a trap, helping to catch and neutralize the charges. "Now the germanium crystal can be bombarded with laser pulses at a high repetition rate and still function," Singh is pleased to report.

Inexpensive manufacture possible

The new method facilitates terahertz pulses with an extremely broad bandwidth: instead of 7 terahertz using the established gallium-arsenide technique, it is now ten times greater - 70 terahertz. "We get a broad, continuous, gapless spectrum in one fell swoop", Harald Schneider enthuses. "This means we have a really versatile source at hand that can be used for the most diverse applications." Another benefit is that, effectively, germanium components can be processed with the same technology that is used for microchips. "Unlike gallium arsenide, germanium is silicon compatible," Schneider notes. "And as the new components can be operated together with standard fiber-optic lasers, you could make the technology fairly compact and inexpensive."

This should turn gold-doped germanium into an interesting option not just for scientific applications, such as the detailed analysis of innovative two-dimensional materials such as graphene, but also for applications in medicine and environmental technology. One could imagine sensors, for instance, that trace certain gases in the atmosphere by means of their terahertz spectrum. Today's terahertz sources are still too expensive for the purpose. The new methods, developed in Dresden-Rossendorf, could help to make environmental sensors like this much cheaper in the future.

Credit: 
Helmholtz-Zentrum Dresden-Rossendorf

Researchers expose vulnerabilities of password managers

Some commercial password managers may be vulnerable to cyber-attack by fake apps, new research suggests.

Security experts recommend using a complex, random and unique password for every online account, but remembering them all would be a challenging task. That's where password managers come in handy.

Encrypted vaults accessed by a single master password or PIN, they store and autofill credentials for the user and come highly recommended by the UK's National Cyber Security Centre.

However, researchers at the University of York have shown that some commercial password managers may not be a watertight way to ensure cyber security.

After creating a malicious app to impersonate a legitimate Google app, they were able to fool two out of five of the password managers they tested into giving away a password.

The research team found that some of the password managers used weak criteria for identifying an app and, therefore, for deciding which username and password to suggest for autofill. This weakness allowed the researchers to impersonate a legitimate app simply by creating a rogue app with an identical name.
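A purely illustrative sketch of the weakness (not any vendor's real code): keying the autofill lookup on the claimed package name alone lets any app that reuses that name receive credentials, whereas also checking the app's signing-certificate fingerprint would not. The package name, vault entries and fingerprints below are all invented for the example.

```python
import hashlib

# Hypothetical credential vault.
VAULT = {
    "com.example.mail": {"username": "alice@example.com", "password": "s3cret"},
}

# Fingerprints of the signing certificates the vault trusts (assumed values).
TRUSTED_CERTS = {
    "com.example.mail": hashlib.sha256(b"legitimate-release-signing-key").hexdigest(),
}

def autofill_weak(package_name):
    """Weak policy: match on the app's claimed package name only."""
    return VAULT.get(package_name)

def autofill_strict(package_name, signing_cert_bytes):
    """Stricter policy: also require the app's signing certificate to match."""
    fingerprint = hashlib.sha256(signing_cert_bytes).hexdigest()
    if TRUSTED_CERTS.get(package_name) == fingerprint:
        return VAULT.get(package_name)
    return None

# A rogue app can copy a legitimate package name, but not its signing key.
print(autofill_weak("com.example.mail"))                        # credentials leak
print(autofill_strict("com.example.mail", b"attacker-key"))     # None
```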

Senior author of the study, Dr Siamak Shahandashti from the Department of Computer Science at the University of York, said: "Vulnerabilities in password managers provide opportunities for hackers to extract credentials, compromising commercial information or violating employee information. Because they are gatekeepers to a lot of sensitive information, rigorous security analysis of password managers is crucial.

"Our study shows that a phishing attack from a malicious app is highly feasible - if a victim is tricked into installing a malicious app it will be able to present itself as a legitimate option on the autofill prompt and have a high chance of success."

"In light of the vulnerabilities in some commercial password managers our study has exposed, we suggest they need to apply stricter matching criteria that is not merely based on an app's purported package name."

The researchers also discovered that some password managers did not limit the number of times a master PIN or password could be entered. This means that if hackers had access to an individual's device, they could launch a "brute force" attack, guessing a four-digit PIN in around 2.5 hours.
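The 2.5-hour figure is consistent with exhaustively trying the entire four-digit space at roughly one guess per second (a rate inferred here, not stated by the researchers):

```latex
\frac{10^4 \ \text{possible PINs}}{2.5 \times 3600 \ \text{s}} \approx 1.1 \ \text{guesses per second}
```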

As well as these new vulnerabilities, the researchers also drew up a list of previously disclosed vulnerabilities identified in a previous study and tested whether they had been resolved. They found that while the most serious of these issues had been fixed, many had not been addressed.

The researchers disclosed these vulnerabilities to the password managers.

Lead author of the study, Michael Carr, who carried out the research while studying for his MSc in Cyber Security at the Department of Computer Science, University of York, said: "New vulnerabilities were found through extensive testing and responsibly disclosed to the vendors. Some were fixed immediately while others were deemed low priority.

"More research is needed to develop rigorous security models for password managers, but we would still advise individuals and companies to use them as they remain a more secure and useable option. While it's not impossible, hackers would have to launch a fairly sophisticated attack to access the information they store."

Revisiting Security Vulnerabilities in Commercial Password Managers will be presented at the 35th International Conference on ICT Systems Security and Privacy Protection (IFIP SEC 2020) in September, 2020.

Credit: 
University of York

Unraveling the puzzle of Madagascar's forest cats

image: A forest cat trips a motion-activated 'camera trap' in Madagascar's Bezà Mahafaly Special Reserve.

Image: 
Michelle Sauther

In her 30 years working as a researcher in Madagascar, CU Boulder Anthropology Professor Michelle Sauther has had a number of chance encounters with a strange forest creature: a wild, oversized cat with a characteristic tabby-like coloring.

"When I first started working in Madagascar, I noticed that these cats all seemed to look the same," said Sauther, whose research focuses on primates. "They were big, and they were always the same color."

Scientists had no idea where they came from--the island nation has no native cats of its own.

Now, in a study published in Conservation Genetics, Sauther and her colleagues have drawn on genetic data from dozens of these wild cats to narrow in on an answer. According to their findings, the animals may not be newcomers to Madagascar at all. Instead, the cats appear to have hitched a ride to the island on trade ships from as far away as Kuwait, hundreds or even more than 1,000 years ago.

The results offer a first step toward better understanding the threat that these weird felines might pose to Madagascar's native species, which include the fossa, a forest predator that looks feline but is more closely related to the mongoose.

The case of the forest cat also points to a global, and hairball-rich, phenomenon that Sauther calls the "cat diaspora."

"Cats have essentially gone with us everywhere we've gone," Sauther said. "We can see that journey of humans and their pets going back pretty deep in time."

Tracking cats

That journey has brought pitfalls, too. Over the last century, hungry cats have run wild on islands around the world. They've even hunted local birds, mammals and reptiles to extinction in places like Hawaii, the West Indies and New Zealand.

What's happening in Madagascar, however, is less clear.

In part, that's because researchers don't know much about the island's forest cats. Many Malagasy are familiar with the beasts, which often sneak into their villages to eat their chickens. They call them "ampaha," "fitoaty" and "kary" among other names and distinguish them from the island's pet cat population.

Still, Sauther said that she and other researchers have seen forest cats stalking lemurs, and that has her worried. She has a soft spot in her heart for these "underdogs" of the primate world, and many lemur populations are already in deep trouble in Madagascar.

"The real worry is: What are these cats doing?" Sauther said. "Are they posing a threat to animals in Madagascar? Maybe they're just part of the local ecology."

To find out, Sauther and her colleagues analyzed the DNA from 30 forest cats from sites in the north and south of Madagascar.

And, to their surprise, the cats seemed to have traveled to the island from far away--really far away.

"They were probably part of the maritime ships that came to Madagascar along these Arab routes," Sauther said.

Furry stowaways

The team's DNA results identified the cats as belonging to Felis catus, the same domestic species that curls up on the laps of people worldwide. But the animals also seemed to have originated from the Arabian Sea region around modern-day Dubai, Oman and Kuwait. Sauther said that the cats may have stowed away on merchant ships following trade routes that have existed for more than 1,000 years.

"They would come down along the East Coast of Africa. They would stop at the islands of Lamu and Pate, and then it's just barely a jump to go over to Madagascar," Sauther said.

While the team can't pinpoint exactly when the cats arrived on the island, Sauther thinks they may have been residents for a while--and have possibly become a normal part of the local forests.

"That's not to say they're not a threat, but we need to understand their biology and their history to understand how we proceed in terms of conservation policy," Sauther said.

For now, she's just happy to have the answer to a question that's bugged her for several decades.

"This study has answered a mystery that not just me but a lot of researchers in Madagascar have wondered about," Sauther said. "We now know that these mysterious cats are domestic cats with a really interesting backstory."

Credit: 
University of Colorado at Boulder

NASA finds Gretel becoming extra-tropical

image: On March 16, the MODIS instrument that flies aboard NASA's Terra satellite took this image of Tropical Cyclone Gretel and showed a transitioning storm northwest of New Zealand.

Image: 
NASA Worldview

NASA's Terra satellite passed over the Southern Pacific Ocean and captured an image of Tropical Storm Gretel as it was transitioning into an extra-tropical cyclone, northwest of New Zealand.

Tropical Cyclone 23P formed on March 14 at 4 p.m. EDT (2100 UTC) between Australia and New Caledonia. Once it intensified into a tropical storm, it was renamed Gretel.

On March 16, the Moderate Resolution Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Terra satellite provided forecasters with a visible image of Tropical Cyclone Gretel. The bulk of Gretel's clouds and storms were south and southeast of the center of circulation. Clouds associated with Gretel extended to northern New Zealand, despite the storm's center being hundreds of miles away.

At 11 p.m. EDT on March 15 (0300 UTC on March 16), the Joint Typhoon Warning Center or JTWC issued the final warning on Gretel. At that time, the center of Tropical Cyclone Gretel was located near latitude 26.6 degrees south and longitude 169.7 degrees east, about 675 nautical miles north-northwest of Auckland, New Zealand. Maximum sustained winds were near 50 knots (58 mph/93 kph) and Gretel was speeding southeast at 25 knots (29 mph/46 kph).

The Joint Typhoon Warning Center or JTWC noted that Gretel will continue to move southeast and is now becoming extra-tropical.

Often, a tropical cyclone will transform into an extra-tropical cyclone as it recurves toward the poles (north or south, depending on the hemisphere the storm is located in). An extra-tropical cyclone is a storm system that primarily gets its energy from the horizontal temperature contrasts that exist in the atmosphere.

Tropical cyclones have their strongest winds near the earth's surface, while extra-tropical cyclones have their strongest winds near the tropopause - about 8 miles (12 km) up. Tropical cyclones, in contrast, typically have little to no temperature differences across the storm at the surface and their winds are derived from the release of energy due to cloud/rain formation from the warm moist air of the tropics.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

By Rob Gutro
NASA's Goddard Space Flight Center

Credit: 
NASA/Goddard Space Flight Center

Biophysicists blend incompatible components in one nanofiber

image: EDX analysis of the fibers made of PLA (top row), BSA (middle), and the PLA-BSA blend in equal proportions (bottom). The color mapping corresponds to the chemical elements, with carbon, oxygen, and nitrogen shown in green, yellow, and red, respectively. The presence of nitrogen indicates the protein component.

Image: 
Elizaveta Pavlova et al./RSC Advances

Russian researchers from the Federal Research Clinical Center of Physical-Chemical Medicine, the Moscow Institute of Physics and Technology, and Lomonosov Moscow State University showed the possibility of blending two incompatible components -- a protein and a polymer -- in one electrospun fiber. Published in RSC Advances, the study also demonstrates that the resulting mat can gradually release the protein. Blended mats containing proteins are promising for biomedical applications as burn and wound dressings, matrices for drug delivery and release, and in tissue engineering.

Electrospinning

Electrospun mats consisting of ultrafine fibers have numerous applications. They can be used for liquid and gas filtering, cell culturing, drug delivery, as sorbents and catalytic matrices, in protective clothing, antibacterial wound dressing, and tissue engineering.

Electrospinning is a method for fabricating micro- and nanofibers from polymers that involves the use of an electrostatic field. Under a high voltage of about 20 kilovolts, a drop of polymer solution becomes electrified and stretches out into a thin fiber once the Coulomb repulsion overcomes surface tension.
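One standard way to quantify when charge overcomes surface tension for an isolated droplet is the Rayleigh limit, where $\varepsilon_0$ is the vacuum permittivity, $\gamma$ the surface tension and $r$ the droplet radius. This is a related textbook criterion quoted here for context, not a result of the study:

```latex
q_R = 8\pi \sqrt{\varepsilon_0 \gamma r^3}
```

The same competition between Coulomb repulsion and surface tension is what deforms the electrified drop and draws out the thin jet in electrospinning.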

The technique is fairly flexible and enables a range of components to be incorporated into electrospun mats: micro- and nanoparticles of different nature, carbon nanotubes, fluorescent dyes, drugs and antibacterial agents, polymer and biopolymer mixtures. That way the properties of the mats can be fine-tuned to fit a specific practical application.

Polymer-protein mats

An electrospun mat is often manufactured with a carrier polymer, which ensures stable fiber formation and can incorporate additional components. For biomedical applications, biodegradable and biocompatible polymers are usually required, and polylactic acid (PLA) is among the most common. PLA is used to produce degradable packaging, surgical threads, screws, and pins.

The main problem with using PLA in biology and medicine is its hydrophobic nature, and therefore poor cell adhesion. To address this, the polymer is blended with proteins, because they are nontoxic, hydrophilic, naturally metabolized, and can act as therapeutic agents.

The researchers studied blended mats consisting of the water-insoluble PLA and a water-soluble globular protein called bovine serum albumin, or BSA. Experiments in a water medium showed the protein component to be released from the mat into the solution gradually. Specifically, about half of the protein in the mat was dissolved over a week. This effect suggests possible applications in prolonged release of protein-based drugs.

To predict the properties of the blended mats, the team had to study protein distribution in them. The caveat is that most polymers do not mix well. In a polymer-protein-solvent system, the components tend to separate into two solutions. Although this does apply to PLA and BSA solutions, electrospinning allowed the researchers to overcome phase separation in mats. They showed both components to be present in every fiber (fig. 1) with three independent analytic methods: fluorescence microscopy, EDX spectroscopy, and Raman spectroscopy.

"Electrospun polymer-protein blended mats have many possible applications. By varying the amount of protein, you can tune how fast mat biodegradation happens. The protein's numerous functional groups enable us to modify the mat surface by attaching chemical compounds to it. Protein-based blended mats could also be used as selective filters or for prolonged drug release, for example, in burn and wound dressings," study co-author Dmitry Klinov commented. He is a researcher at MIPT's Molecular and Translational Medicine Department and the head of the Laboratory of Medical Nanotechnologies at the Federal Research Clinical Center of Physical-Chemical Medicine of the Federal Medical and Biological Agency of Russia.

Credit: 
Moscow Institute of Physics and Technology

App detects harsh side effect of breast cancer treatment

image: An app and a camera attachment are now all that are needed to check for early onset lymphedema in breast cancer survivors.

Image: 
LymphaTech / Georgia Tech

Some 20 percent of breast cancer survivors will suffer from lymphedema, a potentially severe side effect of treatment that makes arms swell with lymph. The disease is often overlooked, but commercially available app-based technology now makes early detection easier, allowing for proactive treatment.

The lymphedema monitoring technology originated through research at the Georgia Institute of Technology and was further developed for market by the company LymphaTech, which also emerged from Georgia Tech. Now, a new study has benchmarked the technology, finding that it effectively detects early arm swelling associated with lymphedema in breast cancer patients.

The detection technology is intended to improve not only patients' physical health but also their peace of mind and finances.

Severe depression

"The most immediate awful consequence of lymphedema is seen in mental health. Severe depression is very high," said Brandon Dixon, who co-led the study and is an associate professor in Georgia Tech's George W. Woodruff School of Mechanical Engineering. "If you detect it early, managing it could cost as little as $2,500 in a patient's lifetime. If you catch it too late, the costs can rise as high as $200,000."

"Lymphedema is under-researched, so we don't know directly how it may lead to deadly health conditions, but there are more cases than AIDS, Parkinson's disease, and Alzheimer's disease combined, and it diminishes patients' health," Dixon said.

The researchers published the detector's test results in the journal Physical Therapy on February 10, 2019. Dixon and Georgia Tech graduates founded LymphaTech through the initiative TI:GER, Technology Innovation: Generating Economic Results at Georgia Tech's Scheller College of Business. The startup received early funding from the Georgia Research Alliance.

No cure

Lymphedema can strike breast cancer survivors if surgery includes the removal of a lymph node, slowing the flow of lymph. The liquid waste can congest the arm, at first subtly but later so drastically that patients may no longer fit into their clothing.

"It makes the stigma of cancer stick out," Dixon said. "And it is a very underappreciated disorder in medical treatment, so patients can feel stuck with it with no way out."

A German device called a perometer accurately detects arm swelling caused by lymphedema, but perometers are seldom available in the U.S. The research team could find only one in metropolitan Atlanta to benchmark the LymphaTech system against. It was located at TurningPoint Breast Cancer Rehabilitation, a non-profit center that co-led the new study in collaboration with Dixon.

The advantages of the new technology over perometers are cost and convenience. Perometers are bulky, costly machines, while the LymphaTech system runs on an iPhone or iPad and requires only a $400 camera attachment and a paid smartphone app. Both devices simply determine the total volume of the arm for swelling diagnosis.

The new system performed comparably in its accuracy to the perometer in the study.

Awareness barriers

Developing LymphaTech has had two components: a more challenging one - spreading lymphedema awareness - and a less challenging one - arriving at the technology itself.

"In the past 20 years, depth-sensor cameras have become significantly cheaper and better. Video games, self-driving cars, robotics - they have all required better depth sensors, and we took advantage of that by using a commercially available lens attachment," Dixon said.

The camera attachment creates point clouds, 3D representations of objects, in this case of human arms, which the app uses to calculate the total arm volume. Usually, only one arm is afflicted with lymphedema, allowing clinicians to compare it with the unaffected arm for easier gauging of disease severity.

As with perometers, the LymphaTech technology avoids human error that creeps in when recording arm volume with a tape measure, a current method to assess lymphedema.
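For context, arm volume from tape-measure readings is conventionally computed by treating the limb as a stack of truncated cones and summing their volumes from circumference measurements. The sketch below shows that generic calculation with made-up numbers, including the inter-limb comparison mentioned above; it is not LymphaTech's point-cloud algorithm.

```python
import math

def limb_volume_ml(circumferences_cm, segment_length_cm=4.0):
    """Estimate limb volume (mL) from circumferences (cm) taken at regular
    intervals, modeling each segment between readings as a truncated cone."""
    volume_cm3 = 0.0
    for c1, c2 in zip(circumferences_cm, circumferences_cm[1:]):
        # Frustum volume from circumferences: V = h * (C1^2 + C1*C2 + C2^2) / (12 * pi)
        volume_cm3 += segment_length_cm * (c1**2 + c1 * c2 + c2**2) / (12 * math.pi)
    return volume_cm3  # 1 cm^3 == 1 mL

# Hypothetical readings every 4 cm along the affected and unaffected arms.
affected   = [34.0, 33.0, 31.5, 29.0, 27.0, 25.5]
unaffected = [31.0, 30.0, 28.5, 26.5, 25.0, 24.0]

v_aff, v_unaff = limb_volume_ml(affected), limb_volume_ml(unaffected)
print(f"inter-limb volume difference: {100 * (v_aff - v_unaff) / v_unaff:.1f} %")
```

A point-cloud system can perform the equivalent calculation by slicing the 3D arm model into thin cross-sections and summing their volumes, which removes the manual measurement step where human error creeps in.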

"The real battle has been to convince a medical market that has not much cared about lymphedema in the past or sought solutions to care," Dixon said. "Hopefully, the high accessibility of our solution will make it easier to care."

In a separate study involving the LymphaTech system, a research team traveled to Sri Lanka to measure lymphedema in legs, Dixon said. And in Germany, the technology is catching on with medical garment manufacturers to help them custom-fit compression sleeves to treat lymphedema.

Credit: 
Georgia Institute of Technology

Sobering new data on drinking and driving: 15% of US alcohol-related motor vehicle fatalities involve alcohol under the legal limit

image: The number of lower blood alcohol concentration fatalities is substantial. States with more restrictive alcohol policies have lower odds of motor vehicle crashes involving blood alcohol concentrations below the legal limit than states with weaker policies.

Image: 
Lira MC, et al. American Journal of Preventive Medicine, 2020.

Ann Arbor, March 16, 2020 - A new study in the American Journal of Preventive Medicine, published by Elsevier, found that motor vehicle crashes involving drivers with blood alcohol concentrations (BACs) below the legal limit of 0.08 percent accounted for 15 percent of alcohol-involved crash deaths in the United States. Of these deaths, 55 percent of fatalities were individuals other than the drinking driver, and these crashes were more likely to result in youth fatalities compared with crashes above the legal BAC limit.

Alcohol-involved motor vehicle accidents remain a leading cause of injury-related death in the US. Most research on crashes and alcohol focuses on alcohol above the legal limit of 0.08 percent, but cognitive impairment can begin at BACs as low as 0.03 percent. The National Transportation Safety Board and the National Academies of Sciences, Engineering, and Medicine have recommended reducing the legal blood alcohol concentration limit from 0.08 percent to 0.05 percent. In 2018, Utah became the first state to do so. Other countries have adopted this limit already and have seen decreases in motor vehicle crashes.

"Our study challenges the popular misconception that alcohol-involved crashes primarily affect drinking drivers, or that BACs below the legal limit don't matter," explained lead investigator Timothy S. Naimi, MD, MPH, Section of General Internal Medicine, Boston Medical Center, and Department of Community Health Sciences, Boston University School of Public Health, Boston, MA, USA.

The study analyzed sixteen years of US motor vehicle crash data from the Fatality Analysis Reporting System together with the Alcohol Policy Scale, a measure of state alcohol policies. From 2000 to 2015, 37 percent of more than 600,000 motor vehicle deaths occurred in crashes involving at least one driver with a positive BAC. Of these, 15 percent were from crashes involving drivers testing below the legal alcohol limit.
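In rough absolute terms, the percentages translate as follows (a back-of-the-envelope reading of the figures above, not numbers reported separately by the authors):

```latex
0.37 \times 600{,}000 \approx 222{,}000 \ \text{alcohol-involved deaths}, \qquad
0.15 \times 222{,}000 \approx 33{,}000 \ \text{deaths involving drivers below the 0.08\% limit}
```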

The results of this study also showed that more restrictive alcohol policies were associated with a 9 percent decrease in the odds that a crash involved alcohol at levels below the legal limit. This relationship was consistent for multiple subgroups (e.g., men only, women only) and at a blood alcohol cutoff of 0.05 percent.

"Lower alcohol crashes have been underestimated as a public health problem. Our research suggests that stringent alcohol policies reduce the likelihood of fatal accidents involving drivers with all levels of alcohol blood concentration," noted Dr. Naimi. The study identified a number of policy approaches that could lead to a decrease in crash deaths involving alcohol at all levels, including increased alcohol taxes, required keg registration, and limited alcohol availability in grocery stores.

"Policies restricting impaired driving increase freedom from worry of injury or death for the majority of people on public roadways who are not drinking," observed lead author Marlene Lira, Section of General Internal Medicine, Boston Medical Center, Boston, MA, USA.

Credit: 
Elsevier