Tech

75% of US workers can't work exclusively from home, face greater risks during pandemic

About three-quarters of U.S. workers, or 108 million people, are in jobs that cannot be done from home during a pandemic, putting these workers at increased risk of exposure to disease. These workers are also at higher risk for other job disruptions such as layoffs, furloughs or hours reductions, a University of Washington study shows.

Such job disruptions can cause stress, anxiety and other mental health outcomes that could persist even as the United States reopens its economic and social life, said author Marissa Baker, an assistant professor in the UW Department of Environmental & Occupational Health Sciences.

These workers also represent some of the lowest paid workers in the U.S. workforce, Baker emphasized.

The remaining 25% of U.S. workers, or 35.6 million people, are in jobs that can be done at home. These jobs are typically in highly paid occupational sectors such as finance, administration, computing, engineering and technology. Even as the economy begins to reopen, these workers will continue to be better shielded from exposure to the virus and from reduced hours, furloughs or joblessness, and will retain a greater ability to care for a child at home -- further growing the disparity between the top quarter of the workforce and the rest, the study found.

"This pandemic has really exacerbated existing vulnerabilities in American society, with workers most affected by the pandemic and stay-at-home orders being significantly lower paid and now also at increased risk for mental health outcomes associated with job insecurity and displacement, in addition to increased risk of exposure to COVID-19 if they keep going to work," said Baker.

"The most privileged workers will have a job that can be done at home, reducing their risk of exposure, and enabling them to continue to work even as office buildings were closed. Unfortunately, only a quarter of the U.S. workforce falls into this category. The fact that these are some of the highest paid workers in the U.S. is no surprise," Baker added.

In the study, published June 18 in the American Journal of Public Health, Baker examined 2018 Bureau of Labor Statistics data characterizing the importance of interacting with the public and the importance of using a computer at work to understand which workers could work from home during a pandemic event, and which workers would experience work disruptions due to COVID-19.

Using these two characteristics of work and how important they are in different types of jobs, Baker's analysis determined four main groups of occupations:

Work that relies on the use of computers but not as much on interaction with the public -- jobs in business and finance, software development, architecture, engineering and the sciences, for instance -- made up 25% of the workforce, or 35.6 million workers. These workers had a median income of nearly $63,000.

Work that relies on both interaction with the public and computer use -- such as positions in management, healthcare, policing and education, most classified as essential during the pandemic -- comprised 36.4% of the workforce, or 52.7 million workers. These workers had a median income of roughly $57,000.

Jobs in which neither interaction with the public nor computer use is important -- construction, maintenance, production, farming or forestry -- made up 20.1% of the workforce, or 29 million workers, with a median income of $40,000.

Lastly, jobs in which computer work is not important but interacting with the public is -- retail, food and beauty services, protective services and delivery of goods -- made up 18.9% of the workforce, or 27.4 million workers, with a median income of $32,000.
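
A minimal sketch of this two-axis grouping, assuming hypothetical importance scores on a 0-100 scale and a 50-point cutoff (the study's actual BLS importance measures and thresholds are not given here):

```python
# Illustrative sketch of the four-quadrant grouping described above.
# The occupations, scores, and the 50-point cutoff are hypothetical;
# the study's actual BLS importance scales and cutoffs may differ.

def classify(computer_importance, public_importance, cutoff=50):
    """Assign an occupation to one of the four groups described in the study."""
    if computer_importance >= cutoff and public_importance < cutoff:
        return "computer-based, low public interaction (can work from home)"
    if computer_importance >= cutoff and public_importance >= cutoff:
        return "computer-based, high public interaction (largely essential)"
    if computer_importance < cutoff and public_importance < cutoff:
        return "neither computer nor public-facing (construction, production, farming)"
    return "public-facing, little computer use (retail, food service, delivery)"

# Hypothetical example occupations with (computer, public) importance scores
examples = {
    "software developer": (90, 30),
    "registered nurse": (70, 95),
    "farm laborer": (20, 25),
    "food server": (15, 90),
}

for job, (comp, pub) in examples.items():
    print(f"{job}: {classify(comp, pub)}")
```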

"The workers for whom computer use is not important at work but interactions with the public is are some of the lowest paid workers," Baker said. "And during this pandemic, they face compounding risks of exposure to COVID-19, job loss and adverse mental health outcomes associated with job loss."

As the economy reopens, some workers who were unable to work at home but continued to go to work during the pandemic -- such as some healthcare workers, security guards or bus drivers -- may now face layoffs as organizations adjust to reduced demand and economic pressures, Baker explains. On the upside, construction, manufacturing, production and freight transport, where many workers were laid off or furloughed during the pandemic, will likely be among the first industries to rebound and hire workers back.

However, the 18.9% of workers in occupations such as retail or food services, many of whom were laid off during the pandemic, may not have a job to go back to, further extending their job displacement and increasing adverse health effects associated with job loss. Those who are able to go back to work face a higher risk of exposure to the novel coronavirus still active in populations across the country.

Given the relationship between job insecurity or job displacement and mental health outcomes including stress, depression or anxiety, there could be a large burden of mental health outcomes among these workers.

"These results underscore the important role that work plays in public health. Workplace policies and practices enacted during a pandemic event or other public health emergency should aim to establish and maintain secure employment and living wages for all workers and consider both physical and mental health outcomes, even after the emergency subsides," Baker said.

Credit: 
University of Washington

Tropical forest loss

image: UD Assistant Professor Kyle Davis has published a new study that looks at which types of large-scale land investments may be associated with increases in tropical deforestation. Davis is pictured here doing fieldwork in central Mozambique when members of the research group were visiting forest concessions and large-scale agricultural investments. This particular photo is from inside a Eucalyptus plantation.

Image: 
Photo courtesy of Kyle Davis

In recent years, there has been a rise in foreign and domestic large-scale land acquisitions -- defined as covering at least roughly one square mile -- in Latin America, Asia, and Africa, where investing countries and multinational investors take out long-term contracts to use the land for various enterprises.

In some cases, this leads to the creation of new jobs for local communities, and governments often welcome these investments as a means to promote the transfer of technologies and the inflow of capital. But the investments can also have adverse outcomes for local people, who rely on the acquired areas for food and income but have no legal claim to the land, and for the environment, as the land will likely need to be converted to serve its intended use.

An international group of researchers led by the University of Delaware's Kyle Davis has recently published a study in Nature Geoscience to see which types of large-scale land investments may be associated with increases in tropical deforestation. They found that investment types focusing on establishing new tree plantations -- where an area is cleared of existing trees and planted with a single tree species that is harvested for timber -- as well as plantations for producing palm oil and wood fiber, consistently had higher rates of forest loss than surrounding non-investment areas.

The study's findings show that large-scale land acquisitions can lead to elevated deforestation of tropical forests and highlight the role of local policies in the sustainable management of these ecosystems.

Analyzing land deals, forest cover

Researchers combined a georeferenced database of more than 82,000 land deals -- covering 15 countries in Latin America, sub-Saharan Africa and Southeast Asia -- with global data on annual forest cover and loss between 2000 and 2018.

They found that since the start of the century, 76% of all large-scale land acquisitions in the Global South -- an emerging term which refers to the regions of Latin America, Asia, Africa and Oceania -- can be attributed to foreign land investment. These land acquisitions covered anywhere from 6% to 59% of a particular country's land area and 2% to 79% of its forests.

The information came from the Global Forest Watch database run by the World Resources Institute, as well as other sources such as government ministries, which together provide information on thousands of individual investments, showing the exact area, boundary and intended use of each.

"This collection of datasets on individual land investments provided me with information on the exact area, boundary, and intended use of each deal. I then combined these data with satellite information on forest cover and forest loss to understand whether large-scale land investments are associated with increased rates of forest loss," said Davis, assistant professor in the Department of Geography and Spatial Sciences in UD's College of Earth, Ocean and Environment and the Department of Plant and Soil Sciences in UD's College of Agriculture and Natural Resources.

Environmentally damaging, globalized industries

With regard to the environmental damage done by oil palm, wood fiber and tree plantations, Davis said a lot of it has to do with the ways in which those products are grown.

"Investments to establish new oil palm or tree plantations seem to consistently have higher rates of forest loss, and that makes sense because basically, you have to completely clear the land in order to convert it to that intended use," said Davis. "If you want to establish a tree plantation or a palm oil plantation in place of natural vegetation, you've first got to cut down the forest."

For the other investment types, such as logging and mining, however, the results were much more mixed. Logging investments, in fact, served a small protective role: rates of forest loss in logging concessions were slightly lower than rates of forest loss in surrounding, comparable areas. Davis attributed this to the specific requirements of the logging industry, where often only trees of a certain size or species can be harvested.

These large-scale land acquisitions are now widespread across the planet, a trend driven largely by rising globalization and the world's increasing interconnectedness.

"There's been a rapid increase in land investments in recent decades due to growing global demands for food, fuel, and fiber," said Davis.

He pointed to the global food crisis in 2008, when many import-reliant countries realized they were vulnerable to food or resource shortages. To help offset that vulnerability, many have since pursued investments abroad to expand the pool of resources available to them in case another large-scale shock occurs.

Government information

Davis emphasized the importance of governments providing detailed information on land investments, to ensure that these deals are carried out transparently and to allow researchers to objectively assess their effects.

He also said that performing this comparison across different countries makes it possible to start identifying specific policies that are more effective in protecting forests.

"If you see deals in one country that aren't leading to enhanced forest loss but the same type of investment in another country is accelerating deforestation, then this suggests that there are opportunities to compare the policies in both places and leverage what's working in one country and adapt that to another context," said Davis. "But it also clearly shows that countries will inevitably experience deforestation should they seek to promote certain investments such as palm oil, wood fiber, and tree plantations, which we found were consistently associated with increased forest loss."

Credit: 
University of Delaware

Research determines financial benefit from driving electric vehicles

Motorists can save as much as $14,500 on fuel costs over 15 years by driving an electric vehicle instead of a similar one fueled by gasoline, according to a new analysis conducted by researchers at the U.S. Department of Energy's (DOE's) National Renewable Energy Laboratory (NREL) and Idaho National Laboratory (INL).

Previous studies assumed a single value for the cost to charge an electric vehicle (EV), but this new work provides an unprecedented state-level assessment of the cost of EV charging that considers when, where, and how a vehicle is charged, drawing on thousands of electricity retail tariffs and real-world charging equipment and installation costs. The cost of charging is compared against the price of gasoline to estimate total fuel cost savings over a vehicle's lifetime.

"Finding out the purchase price of a vehicle is relatively simple, but the savings related to fuel aren't readily available, especially since electricity cost varies greatly for different locations and charging options," said Matteo Muratori, a senior systems engineer at NREL and co-author of the article, "Levelized Cost of Charging Electric Vehicles in the United States." The research appears in Joule and is led by Brennan Borlaug from NREL and co-authored by Shawn Salisbury and Mindy Gerdes from INL.

The researchers developed a baseline scenario based on current vehicle use and charging behavior to estimate the average levelized cost of charging (LCOC) for electric vehicles.

The cost to charge an EV varies widely. The key factors include differences in the price of electricity, the types of equipment used (slow or fast charging), the cost of installation, and vehicle use (miles driven). Nationally, the cost to charge a battery EV ranges from 8 cents per kilowatt-hour (kWh) to 27 cents, with an average of 15 cents. That corresponds to lifetime fuel cost savings of $3,000 to $10,500.

In addition to this variation, considering state-by-state differences can push savings to $14,500 (in Washington state) or, in the case of four states (Alabama, Hawaii, Mississippi, and Tennessee), fail to provide any savings compared with a conventional gasoline vehicle under certain scenarios. The researchers compared vehicles of the same class and size, driven the same number of miles per year.

In calculating costs, the researchers also considered the nature of the charging stations. For a slow charge, a motorist can use a traditional outlet at home without any special equipment. Upgrading to a higher-powered residential charger costs about $1,800, including installation. But charging at home can be done at night when electricity prices are currently at their lowest, which is considered the best-case scenario from a cost perspective.

The average cost of 15 cents per kWh assumes 81% of charging is done at home, 14% at the workplace or a public station, and 5% with a DC fast charger (DCFC), in line with current empirical data. Exclusively charging at DCFC stations increases the national LCOC to 18 cents per kWh, while the price falls to 11 cents per kWh for motorists who charge their EV only from a dedicated household outlet. The cost can be further reduced to 8 cents by charging during off-peak periods.
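
A minimal worked example of how a blended per-kWh charging cost and the resulting lifetime savings could be computed. The charging shares follow the article; the per-location prices, vehicle efficiencies, annual mileage, and gasoline price below are illustrative assumptions, not values from the study:

```python
# Blend per-kWh charging costs by charging location, then estimate 15-year
# fuel-cost savings versus a gasoline car. Charging shares follow the
# article; all other numbers are assumptions for illustration only.

charging_mix = [
    # (share of charging, assumed cost in $/kWh)
    (0.81, 0.13),   # at home
    (0.14, 0.15),   # workplace or public station
    (0.05, 0.18),   # DC fast charging
]

lcoc = sum(share * cost for share, cost in charging_mix)   # blended $/kWh

ev_efficiency = 0.30      # kWh per mile (assumed)
gas_efficiency = 30.0     # miles per gallon (assumed)
gas_price = 2.75          # $ per gallon (assumed)
miles_per_year = 12000
years = 15

ev_cost_per_mile = lcoc * ev_efficiency
gas_cost_per_mile = gas_price / gas_efficiency

savings = (gas_cost_per_mile - ev_cost_per_mile) * miles_per_year * years
print(f"Blended LCOC: ${lcoc:.3f}/kWh")
print(f"Estimated 15-year fuel-cost savings: ${savings:,.0f}")
```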

Credit: 
DOE/National Renewable Energy Laboratory

NASA analyzes the newest Atlantic Ocean subtropical depression

image: On June 23 at 2:10 a.m. EDT (0610 UTC), NASA's Aqua satellite found several fragmented and disorganized thunderstorms circling the center of circulation of Subtropical Depression 4, where cloud top temperatures were as cold as or colder than minus 50 degrees Fahrenheit (minus 45.5 Celsius).

Image: 
NASA/NRL

NASA's Aqua satellite used infrared light to analyze the strength of storms in the North Atlantic Ocean's newly formed Subtropical Depression 4. Infrared data provides temperature information used to find the strongest thunderstorms, which reach high into the atmosphere and have the coldest cloud top temperatures.

By 5 p.m. EDT on Monday, June 22, the non-tropical low-pressure system that the National Hurricane Center had been following for the past couple of days off the U.S. east coast had developed enough organized convection near the center to be classified as a subtropical depression. It was then that Subtropical Depression 4 was born.

The Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite captured infrared data on June 23 at 2:10 a.m. EDT (0610 UTC). The MODIS data showed several fragmented and disorganized thunderstorms circling the center of circulation, where cloud top temperatures were as cold as or colder than minus 50 degrees Fahrenheit (minus 45.5 Celsius). Cloud top temperatures that cold indicate strong storms with the potential to generate heavy rainfall.
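
A minimal sketch of the kind of cloud-top temperature thresholding described here, using a small made-up grid of temperatures; the actual MODIS infrared processing is more involved:

```python
# Flag pixels whose cloud-top temperature is at or below -45.5 C (-50 F),
# the threshold the article associates with strong storms capable of heavy
# rainfall. The temperature grid below is made up for illustration.
import numpy as np

cloud_top_temp_c = np.array([
    [-30.0, -48.0, -52.0],
    [-46.0, -55.0, -60.0],
    [-20.0, -35.0, -47.0],
])

threshold_c = -45.5                       # minus 50 degrees Fahrenheit
strong_storms = cloud_top_temp_c <= threshold_c

print("Strong-storm pixels:\n", strong_storms)
print("Fraction of scene with strong storms:", strong_storms.mean())
```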

Just three hours after the MODIS image, at 5 a.m. EDT as daylight broke, NOAA's National Hurricane Center (NHC) said satellite data showed an increase in deep convection (rising air that forms the thunderstorms that make up tropical cyclones) since the Aqua satellite passed overhead. Those thunderstorms still appeared disorganized in this image.

NHC said, "The depression is situated beneath an upper-level low [pressure area], and the system has a large radius-of-maximum winds, so it is still subtropical. While it is possible the depression could become a storm later today, rapidly cooling sea surface temperatures should cause the system to weaken on Wednesday [June 24]."

What is a Subtropical Storm?

NOAA's National Hurricane Center defines subtropical storms as "A non-frontal low-pressure system that has characteristics of both tropical and extratropical cyclones. Like tropical cyclones, they are non-frontal, synoptic-scale cyclones that originate over tropical or subtropical waters, and have a closed surface wind circulation about a well-defined center. In addition, they have organized moderate to deep convection, but lack a central dense overcast. Unlike tropical cyclones, subtropical cyclones derive a significant proportion of their energy from baroclinic sources, and are generally cold-core in the upper troposphere, often being associated with an upper-level low or trough. In comparison to tropical cyclones, these systems generally have a radius of maximum winds occurring relatively far from the center (usually greater than 60 nautical miles), and generally have a less symmetric wind field and distribution of convection."

Subtropical Depression 04L's Status

At 5 a.m. EDT (0900 UTC) on June 23, the center of Subtropical Depression Four was located near latitude 39.3 degrees north and longitude 63.4 degrees west. That is about 365 miles (590 km) south of Halifax, Nova Scotia, Canada.

The depression is moving toward the northeast near 13 mph (20 km/h), and this motion is expected to continue for the next couple of days with some increase in forward speed. The estimated minimum central pressure is 1008 millibars. Maximum sustained winds are near 35 mph (55 kph) with higher gusts.

Little change in strength is forecast during the next day or so, with the system likely weakening and transitioning into a post-tropical cyclone on Wednesday, June 24.

What is a Post-tropical Cyclone?

NHC defines a post-tropical cyclone as a former tropical cyclone. This generic term describes a cyclone that no longer possesses sufficient tropical characteristics to be considered a tropical cyclone. Post-tropical cyclones can continue carrying heavy rains and high winds. Note that former tropical cyclones that have become extratropical, as well as remnant lows, are two classes of post-tropical cyclones.

Hurricanes/tropical cyclones are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

NOAA/NASA's Suomi NPP satellite captures 63-mile smoke trail from bush fire

image: Bush Fire in Arizona

Image: 
NASA Worldview

NOAA/NASA's Suomi NPP satellite captured this image of the Bighorn Fire on June 22, 2020, showing clouds of smoke pouring off the fire that is plaguing Arizona. By June 24, the fire had grown to 65,536 acres and was 33 percent contained. On June 5, 2020, a lightning strike started the Bighorn Fire in the Catalina Mountains northwest of Tucson, Arizona, on the Coronado National Forest. Due to dry, windy weather conditions, the fire has pushed closer to communities, forcing evacuations. Dry conditions also add to the fuel load of tinder-dry tall grass, brush, dormant brush and hardwood slash.

The fire is burning in steep and rugged terrain in the Pusch Ridge Wilderness. Catalina State Park and several popular trails in the area are closed including Romero Canyon, Pima Canyon, Finger Rock, Pontatoc, Pontatoc Ridge and Linda Vista. Smoke impacts to surrounding communities are being carefully monitored.

The Suomi NPP image below illustrates the flow of smoke from the Bighorn Fire, which extends 63 miles to the southeast and 41 miles to the northeast. The ruler is a tool available through the Worldview website for measuring distances.

NASA's satellite instruments are often the first to detect wildfires burning in remote regions, and the locations of new fires are sent directly to land managers worldwide within hours of the satellite overpass. Together, NASA instruments detect actively burning fires, track the transport of smoke from fires, provide information for fire management, and map the extent of changes to ecosystems, based on the extent and severity of burn scars. NASA has a fleet of Earth-observing instruments, many of which contribute to our understanding of fire in the Earth system. Satellites in orbit around the poles provide observations of the entire planet several times per day, whereas satellites in a geostationary orbit provide coarse-resolution imagery of fires, smoke and clouds every five to 15 minutes. For more information visit: www.nasa.gov/mission_pages/fires/main/missions/index.html

Credit: 
NASA/Goddard Space Flight Center

Researcher develops tool to protect children's online privacy

image: Dr. Kanad Basu, assistant professor of electrical and computer engineering in the Erik Jonsson School of Engineering and Computer Science at The University of Texas at Dallas.

Image: 
The University of Texas at Dallas

A University of Texas at Dallas study of 100 mobile apps for kids found that 72 violated a federal law aimed at protecting children's online privacy.

Dr. Kanad Basu, assistant professor of electrical and computer engineering in the Erik Jonsson School of Engineering and Computer Science and lead author of the study, along with colleagues elsewhere, developed a tool that can determine whether an Android game or other mobile app complies with the federal Children's Online Privacy Protection Act (COPPA).

The researchers introduced and tested their "COPPA Tracking by Checking Hardware-Level Activity," or COPPTCHA, tool in a study published in the March edition of IEEE Transactions on Information Forensics and Security. The tool was 99% accurate. Researchers continue to improve the technology, which they plan to make available for download at no cost.

Basu said games and other apps that violate COPPA pose privacy risks that could make it possible for someone to determine a child's identity and location. He said the risk is heightened as more people are accessing apps from home, rather than public places, due to the COVID-19 pandemic.

"Suppose the app collects information showing that there is a child on Preston Road in Plano, Texas, downloading the app. A trafficker could potentially get the user's email ID and geographic location and try to kidnap the child. It's really, really scary," Basu said.

Apps can access personally identifiable information, including names, email addresses, phone numbers, location, audio and visual recordings, and unique device identifiers such as the international mobile equipment identity (IMEI), media access control (MAC) address, Android ID and Android advertising ID. The advertising ID, for example, allows app developers to collect information on users' interests, which they can then sell to advertisers.

"When you download an app, it can access a lot of information on your cellphone," Basu said. "You have to keep in mind that all this info can be collected by these apps and sent to third parties. What do they do with it? They can pretty much do anything. We should be careful about this."

The researchers' technique accesses a device's special-purpose register, a type of temporary data-storage location within a microprocessor that monitors various aspects of the microprocessor's function. Whenever an app transmits data, the activity leaves footprints that can be detected by the special-purpose register.
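
The paper's hardware-level detection pipeline is not spelled out in this summary; as a loose illustration only, the sketch below uses hypothetical helpers to show the general idea of sampling register or counter snapshots while an app runs and flagging windows whose activity pattern looks like a data transmission. Every function name and threshold here is invented for illustration and is not part of COPPTCHA's published interface.

```python
# Loose illustration of hardware-footprint-based detection of data
# transmission. read_registers and TRANSMIT_SIGNATURE are hypothetical
# stand-ins, not the authors' actual implementation.
from typing import Callable, Dict, List

TRANSMIT_SIGNATURE = {"network_bytes": 1024, "crypto_ops": 10}  # invented thresholds

def looks_like_transmission(delta: Dict[str, int]) -> bool:
    """Flag a sampling window whose counter deltas resemble an outbound send."""
    return all(delta.get(key, 0) >= value for key, value in TRANSMIT_SIGNATURE.items())

def monitor(read_registers: Callable[[], Dict[str, int]], windows: int) -> List[int]:
    """Sample the (hypothetical) special-purpose registers repeatedly and
    return the indices of windows that look like data transmissions."""
    flagged = []
    previous = read_registers()
    for i in range(windows):
        current = read_registers()
        delta = {key: current[key] - previous[key] for key in current}
        if looks_like_transmission(delta):
            flagged.append(i)
        previous = current
    return flagged
```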

COPPA requires that websites and online services directed to children obtain parental consent before collecting personal information from anyone younger than 13; however, as Basu's research found, many popular apps do not comply. He found that many popular games designed specifically for young children revealed users' Android IDs, Android advertising IDs and device descriptions.

Basu recommends that parents use caution when downloading or allowing children to download apps.

"If your kid asks you to download a popular game app, you're likely to download it," Basu said. "A problem with our society is that many people are not aware of -- or don't care about -- the threats in terms of privacy."

Basu advises keeping downloads to a minimum.

"I try to limit my downloading of apps as much as possible," Basu said. "I don't download apps unless I need to."

Credit: 
University of Texas at Dallas

St. Jude creates resource for pediatric brain tumor research

image: Martine Roussel, Ph.D., of the St. Jude Tumor Cell Biology department.

Image: 
St. Jude Children's Research Hospital

Scientists at St. Jude Children's Research Hospital have created orthotopic patient-derived xenograft (O-PDX) models representing a variety of pediatric brain tumor types. The models are molecularly characterized and available through a cloud-based data portal. Acta Neuropathologica recently published a report detailing these models.

Brain tumors are the most common solid tumors affecting children. O-PDXs are research models created, with the consent of patients and parents, by implanting cancerous cells orthotopically, or in the same tissue as the original tumor, in immune-compromised mice. These models have recently emerged as a useful way to test new therapies because they remain faithful to the biology of the original tumors from which they derive.

"We started out by researching medulloblastoma and needing a good model that we could use to screen for novel therapies," said corresponding and co-senior author Martine Roussel, Ph.D., St. Jude Tumor Cell Biology. "Our current database of models is the result of many years of work by our lab as well as with many collaborators in surgery, oncology, pathology and computational biology."

St. Jude researchers have created 37 O-PDX models generated from samples of pediatric medulloblastoma, ependymoma, atypical teratoid rhabdoid tumor and embryonal tumors donated by patient families. Scientists have thoroughly characterized these models using a combination of histopathology, whole-genome and whole-exome sequencing, RNA-sequencing and DNA methylation analysis. The O-PDXs provide a novel modeling strategy based upon individual genomes.

"To effectively treat childhood brain tumors, we need to have additional treatment strategies in our toolkit," said author Frederick Boop, M.D., St. Jude Pediatric Neurosurgery Division chief. "It takes many different scientific and medical specialties working together to create these types of models and conduct essential preclinical research that paves the way for new clinical trials."

At St. Jude, work done in some of these models provided support to launch three clinical trials for pediatric brain tumors (SJMB12, SJDAWN and SJELIOT).

The O-PDX models and their associated data are available through an interactive web-based portal as part of St. Jude Cloud. St. Jude Cloud provides data and analysis resources for pediatric cancer and other pediatric catastrophic diseases through a cloud-based infrastructure.

Credit: 
St. Jude Children's Research Hospital

Slow-growing rotavirus mutant reveals early steps of viral assembly

image: This transmission electron micrograph (TEM) shows intact rotavirus particles.

Image: 
CDC

Rotavirus is responsible for more than 130,000 deaths every year among infants and young children under five years of age. The virus causes severe, dehydrating diarrhea as it replicates in viral factories called viroplasms that form inside infected cells. Viroplasms have been difficult to study because they normally form very quickly, but a serendipitous observation led researchers at Baylor College of Medicine to uncover new insights into the formation of viroplasms.

The researchers created a mutant rotavirus that unexpectedly replicated much slower than the original virus, allowing them to observe the first steps of viral assembly. The findings, published in the Journal of Virology, open new possibilities for treating and preventing this viral disease and for understanding how similar factories of other viruses work.

"The formation of viroplasms is indispensable for a successful rotavirus infection. They form quickly inside infected cells and are made of both viral and cellular proteins that interact with lipid droplets, but the details of how the parts are put together are still not clear," said first author Dr. Jeanette M. Criglar, a former postdoctoral trainee and now staff scientist in the Department of Molecular Virology and Microbiology at Baylor in Dr. Mary Estes's lab.

To get new insights into the formation of viroplasms, Criglar and her colleagues studied NSP2, one of the viral proteins that is required for the virus to replicate. Without it, neither viroplasms nor new viruses would form.

Like all proteins, NSP2 is made of amino acids strung together like beads on a necklace. 'Bead' 313 is the amino acid serine. Importantly, serine 313 is phosphorylated - it has a phosphate chemical group attached to it. Protein phosphorylation is a mechanism cells use to regulate protein activity. It works like an on-and-off switch, activating or deactivating a protein. Here, the researchers evaluated the role that phosphorylation of NSP2's serine 313 plays in viroplasm formation.

A serendipitous finding

Using a recently developed reverse genetics system, Criglar and her colleagues generated a rotavirus carrying an NSP2 protein with a mutation in amino acid 313, called a phosphomimetic mutation, by changing serine to aspartic acid. The name phosphomimetic indicates that the mutant protein mimics the phosphorylated protein in the original rotavirus. Reverse genetics starts with a protein and works backward to make the mutant gene, which then is made part of the virus to study the function of the protein on viral behavior.
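
A minimal sketch of the substitution itself, treating the protein as a string of one-letter amino acid codes; apart from the serine-to-aspartate change at residue 313 described above, the sequence and helper below are made up for illustration:

```python
# Introduce the phosphomimetic S313D change: replace serine (S) at
# position 313 with aspartic acid (D). The example sequence is a made-up
# placeholder, not the real NSP2 sequence.

def phosphomimetic_mutation(protein_seq: str, position: int, new_residue: str = "D") -> str:
    """Return the sequence with the residue at 1-based `position` replaced."""
    index = position - 1
    if protein_seq[index] != "S":
        raise ValueError(f"Expected serine at position {position}, found {protein_seq[index]}")
    return protein_seq[:index] + new_residue + protein_seq[index + 1:]

# Made-up sequence long enough to contain position 313, with serine placed there
fake_nsp2 = "A" * 312 + "S" + "G" * 4
mutant = phosphomimetic_mutation(fake_nsp2, 313)
print(mutant[310:316])   # prints "AADGGG" -- the S313D change is visible at position 313
```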

"In laboratory experiments, our phosphomimetic mutant protein crystalized faster than the original, within hours as opposed to days," Criglar said. "But surprisingly, when compared to non-mutant rotavirus, the phosphomimetic virus was slow to make viroplasms and to replicate."

"This is not what we expected. We thought that rotavirus with the mutant protein also would replicate faster," said Estes, Cullen Foundation Endowed Chair and Distinguished Service Professor of molecular virology and microbiology at Baylor. "We took advantage of the delay in viroplasm formation to observe very early events that have been difficult to study."

Early steps: NSP2 and lipid droplets come together

The researchers discovered that one of the first steps in viroplasm formation is the association of NSP2 with lipid droplets, indicating that NSP2 phosphorylated on position 313 alone can interact with the droplets, without interacting with other components of the viroplasm.

Lipid droplets are an essential part of viroplasms. It is known that rotavirus coaxes infected cells to produce the droplets, but how it does so is unknown. The new findings suggest that rotavirus may be using phosphorylated NSP2 to trigger lipid droplet formation.

"It was very exciting to see that just changing a single amino acid in the NSP2 protein affected the replication of the whole virus," Criglar said. "The phosphomimetic change altered the dynamics of viral replication without killing the virus. We can use this mutant rotavirus to continue investigating the sequence of events leading to viroplasm formation, including a long-standing question in cell biology about how lipid droplets form."

"This is the first study in our lab that has used the reverse genetics system developed for rotavirus by Kanai and colleagues in Japan, and that's very exciting for me," Estes said. "There have been very few papers that use the system to ask a biological question, and ours is one of them."

Credit: 
Baylor College of Medicine

Innovative smartphone-camera adaptation images melanoma and non-melanoma

image: Fig. 1 Two dermascope implementations. The USB-camera-based PMSI and PWLI dermascope is shown in (a) and (b). (a) Various components of the handheld imaging module (the USB camera is hidden behind the imaging polarizer) and (b) the imaging module paired with the smartphone camera. The smartphone-camera-based PMSI and PWLI dermascope is shown in (c), (d), and (e). (c) The smartphone-based system's side opposite the smartphone screen with the imaging annulus removed, where the LED PCB and smartphone camera are visible and other components are highlighted; (d) the system with the imaging annulus attached; and (e) the smartphone installed in the dermascope.

Image: 
SPIE

BELLINGHAM, Washington, USA - An article published in the Journal of Biomedical Optics (JBO), "Point-of-care, multispectral, smartphone-based dermascopes for dermal lesion screening and erythema monitoring," shows that standard smartphone technology can be adapted to image skin lesions, providing a low-cost, accessible medical diagnostic tool for skin cancer.

Skin lesions are diagnosed by a simple system of color, size, asymmetry, and surface appearance: the way in which the lesions are illuminated during imaging reveals differences between normal and malignant lesions. For their study, the authors developed two dermascopes: one using a smartphone-based camera and one using a USB-based camera. Both dermascopes, integrating LED-based polarized white-light imaging (PWLI), polarized multispectral imaging (PMSI) and image-processing algorithms, successfully mapped and delineated the dermal chromophores indicative of melanoma and the general skin redness known as erythema.
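
The article does not detail the image-processing algorithms; as one common way such chromophore maps are produced, the sketch below performs a least-squares spectral unmixing of per-pixel multispectral measurements into assumed chromophore components. The wavelengths, absorption values, and image data are all illustrative assumptions, not the authors' method.

```python
# Illustrative least-squares unmixing of multispectral skin measurements into
# chromophore maps (e.g., melanin vs. hemoglobin/erythema). The endmember
# spectra and image values are made up; this is not the published algorithm.
import numpy as np

# Assumed relative absorption of two chromophores at three imaged wavelengths
endmembers = np.array([
    # melanin, hemoglobin
    [0.9, 0.2],   # wavelength 1
    [0.6, 0.7],   # wavelength 2
    [0.3, 0.9],   # wavelength 3
])

# Fake multispectral image: 3 wavelengths x 4 pixels of measured absorbance
image = np.array([
    [0.85, 0.30, 0.55, 0.20],
    [0.65, 0.60, 0.65, 0.35],
    [0.40, 0.80, 0.60, 0.45],
])

# Solve image = endmembers @ abundances for each pixel (least squares)
abundances, *_ = np.linalg.lstsq(endmembers, image, rcond=None)
melanin_map, hemoglobin_map = abundances
print("Melanin abundance per pixel:   ", np.round(melanin_map, 2))
print("Hemoglobin abundance per pixel:", np.round(hemoglobin_map, 2))
```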

According to JBO Editor-in-Chief, SPIE Fellow, and MacLean Professor of Engineering at Dartmouth Brian W. Pogue, leveraging a smartphone camera to image skin lesions improves both the efficiency and efficacy of skin-lesion diagnostics. "The functionality and performance for the detection of skin cancers is very important," he said. "The cellphone camera approaches proposed and tested in this paper introduce key steps in a design that will provide simple, accurate diagnoses. While there are always ways to make medical imaging systems work, they can often be too expensive to be a viable commercial success. Similarly, devices can be made so complex that they aren't readily adopted. Here, the already-familiar platform is both low cost and intuitive to use. The simplicity of the approach here successfully combines the need for simple diagnostic tools with high precision. This is the kind of innovation that will potentially allow for easier adoption in clinical use."

Credit: 
SPIE--International Society for Optics and Photonics

Machine learning has a flaw; it's gullible

Artificial intelligence and machine learning technologies are poised to supercharge productivity in the knowledge economy, transforming the future of work. But they're far from perfect.

Machine learning (ML) - technology in which algorithms "learn" from existing patterns in data to make statistically driven predictions and facilitate decisions - has been found in multiple contexts to exhibit bias. One example is Amazon.com, which came under fire for a hiring algorithm that showed gender and racial bias. Such biases often result from slanted training data or skewed algorithms.

And in other business contexts, there's another potential source of bias. It arises when outside individuals stand to benefit from biased predictions and work to strategically alter the inputs. In other words, they're gaming the ML systems. Two of the most common contexts are job applications and insurance claims. ML algorithms are built for these contexts: they can review resumes faster than any recruiter and comb through insurance claims faster than any human processor. But people who submit resumes and insurance claims have a strategic interest in getting positive outcomes - and some of them know how to outthink the algorithm.

Rajshree Agarwal and Evan Starr, researchers at the University of Maryland's Robert H. Smith School of Business, worked with Prithwiraj Choudhury at Harvard Business School to answer the question: "Can ML correct for such strategic behavior?"

They found that two critical attributes of humans serve as important complements to machine learning in correcting for these biases. The more obvious one is vintage-specific skills, which ensure that humans can properly interface with machine learning and guide it appropriately. The other important attribute is domain-specific expertise: the knowledge that humans can provide machines on how to correct for the incompleteness of inputs.

Their work is forthcoming in Strategic Management Journal as "Machine Learning and Human Capital Complementarities: Experimental Evidence on Bias Mitigation."

Prior research in so-called "adversarial" ML looked closely at attempts to "trick" ML technologies, and generally concluded that it's extremely challenging to prepare the ML technology to account for every possible input and manipulation. In other words, ML is trickable.

What should firms do about it? Can they limit ML prediction bias? And, is there a role for humans to work with ML to do so? Agarwal, Choudhury and Starr honed their focus on patent examination, a context rife with potential trickery. "Patent examiners face a time-consuming challenge of accurately determining the novelty and nonobviousness of a patent application by sifting through ever-expanding amounts of 'prior art,'" or inventions that have come before, the researchers explain. It's challenging work.

Compounding the challenge: patent applicants are permitted by law to create hyphenated words and assign new meaning to existing words to describe their inventions. It's an opportunity, the researchers explain, for applicants to write their applications in a strategic, ML-targeting way. The U.S. Patent and Trademark Office is generally wise to this. It has brought in ML technology that "reads" the text of applications, with the goal of spotting the most relevant prior art more quickly and reaching more accurate decisions. "Although it is theoretically feasible for ML algorithms to continually learn and correct for ways that patent applicants attempt to manipulate the algorithm, the potential for patent applicants to dynamically update their writing strategies makes it practically impossible to train an ML algorithm to correct for this behavior," the researchers write.

In their study, the team conducted observational and experimental research. They found that patent language changes over time, making it highly challenging for any ML tool to operate perfectly on its own. The ML benefited strongly, they found, from human collaboration.

People with skills and knowledge accumulated through prior learning within a domain complement ML in mitigating bias stemming from applicant manipulation, the researchers found, because domain experts bring relevant outside information to correct for strategically altered inputs. And individuals with vintage-specific skills - skills and knowledge accumulated through prior familiarity with tasks involving the technology - are better able to handle the complexities of ML technology interfaces.

They caution that although the provision of expert advice and vintage-specific human capital increases initial productivity, it remains unclear whether constant exposure and learning-by-doing by workers would cause the relative differences between the groups to grow or shrink over time. They encourage further research into the evolution in the productivity of all ML technologies, and their contingencies.

Credit: 
University of Maryland

Using chaos as a tool, scientists discover new method of making 3D-heterostructures

image: Ames Laboratory's technique for making heterostructured solids involves smashing the pristine materials to build new ones. Called mechanochemistry, the technique uses ball milling to take apart structurally incommensurate solids and reassemble them.

Image: 
U.S. Department of Energy, Ames Laboratory

Scientists at the U.S. Department of Energy's Ames Laboratory and their collaborators from Iowa State University have developed a new approach for generating layered, difficult-to-combine, heterostructured solids. Heterostructured materials, composed of layers of dissimilar building blocks, display unique electronic transport and magnetic properties that are governed by quantum interactions between their structurally different building blocks, and open new avenues for electronic and energy applications.

The technique for making them is simple and counterintuitive -- it involves smashing the pristine materials to build new ones. Called mechanochemistry, the technique uses ball milling to take apart structurally incommensurate solids -- ones that don't have matching atomic arrangements -- and reassemble them into unique three-dimensional (3D) "misfit" hetero-assemblies. Smashing things together by milling seems like the least plausible way to achieve atomic ordering, but it has turned out to be more successful than the scientists themselves imagined.

"A colleague of mine remarked that our ideas would be either naive or brilliant," said Viktor Balema, Ames Laboratory Senior Scientist. "Some time ago we discovered stochastic reshuffling of layered metal dichalcogenides (TMDCs) into 3D hetero-assemblies during mechanical milling. It came as a complete surprise to us and triggered our curiosity about the possibility of atomic ordering through mechanochemical processing."

Metal chalcogenides are often unique in their properties and uses. They can display remarkable electron transport behaviors ranging from complete lack of electrical conductivity to superconductivity, photo- and thermoelectric properties, mechanical pliability and, especially, the ability to form stable two-dimensional monolayers, three dimensional heterostructures, and other nano-scaled quantum materials.

"Nanostructures of misfit layered compounds (MLC) in the form of nanotubes, nanofilms (ferecrystals) and exfoliated sheets have been investigated for over a decade and offer a rich field of research and possibly also exciting applications in renewable energy, catalysis and optoelectronics, said Reshef Tenne of the Weizmann Institute of Science, Israel, and an expert in nanostructure synthesis. "One obstacle for their large-scale application is the high temperature and lengthy growth processes, which are prohibitive for large scale applications. The mechanochemical process developed by the Balema group at Ames Lab, besides being stimulating scientifically, brings us one step closer to realize down-to-earth applications for these intriguing materials."

Typically, these complex materials, especially ones with the most unusual structures and properties, are made using two different synthetic approaches. The first, known as top-down synthesis, employs two-dimensional (2D) building blocks to assemble them, using additive manufacturing techniques. The second approach, broadly defined as bottom-up synthesis, uses stepwise chemical reactions involving pure elements or small molecules that deposit individual monolayers on top of each other. Both are painstaking and have other disadvantages such as poor scalability for use in real-world applications.

The Ames Laboratory team combined these two methods into one mechanochemical process that simultaneously exfoliates, disintegrates and recombines starting materials into new heterostructures even though their crystal structures do not fit each other well (i.e. misfit). Theoretical (DFT) calculations, supported by the results of X-ray diffraction, scanning transmission electron microscopy, Raman spectroscopy, electron transport studies and, for the first time ever, solid state nuclear magnetic resonance (NMR) experiments, explained the mechanism of the reorganization of precursor materials and the driving forces behind the formation of novel 3D heterostructures during mechanical processing.

"Solid-state NMR spectroscopy is an ideal technique for the characterization of powdered materials that are obtained from mechanochemistry," said Aaron Rossini, Ames Laboratory scientist and professor of chemistry at Iowa State University. "By combining information obtained from solid-state NMR spectroscopy with other characterization techniques we are able to obtain a complete picture of the 3D heterostructures."

Credit: 
DOE/Ames National Laboratory

Laser allows solid-state refrigeration of a semiconductor material

image: University of Washington researchers used an infrared laser to cool a solid semiconductor material -- labeled here as "cantilever" -- by at least 20 degrees C, or 36 F, below room temperature.

Image: 
Anupum Pant

To the general public, lasers heat objects. And generally, that would be correct.

But lasers also show promise to do quite the opposite -- to cool materials. Lasers that can cool materials could revolutionize fields ranging from bio-imaging to quantum communication.

In 2015, University of Washington researchers announced that they could use a laser to cool water and other liquids below room temperature. Now that same team has used a similar approach to refrigerate something quite different: a solid semiconductor. As the team shows in a paper published June 23 in Nature Communications, they used an infrared laser to cool the solid semiconductor by at least 20 degrees C, or 36 F, below room temperature.

The device is a cantilever -- similar to a diving board. Like a diving board after a swimmer jumps off into the water, the cantilever can vibrate at a specific frequency. But this cantilever doesn't need a diver to vibrate. It can oscillate in response to thermal energy, or heat energy, at room temperature. Devices like these could make ideal optomechanical sensors, where their vibrations can be detected by a laser. But that laser also heats the cantilever, which dampens its performance.

"Historically, the laser heating of nanoscale devices was a major problem that was swept under the rug," said senior author Peter Pauzauskie, a UW professor of materials science and engineering and a senior scientist at the Pacific Northwest National Laboratory. "We are using infrared light to cool the resonator, which reduces interference or 'noise' in the system. This method of solid-state refrigeration could significantly improve the sensitivity of optomechanical resonators, broaden their applications in consumer electronics, lasers and scientific instruments, and pave the way for new applications, such as photonic circuits."

The team is the first to demonstrate "solid-state laser refrigeration of nanoscale sensors," added Pauzauskie, who is also a faculty member at the UW Molecular Engineering & Sciences Institute and the UW Institute for Nano-engineered Systems.

The results have wide potential applications due to both the improved performance of the resonator and the method used to cool it. The vibrations of semiconductor resonators have made them useful as mechanical sensors to detect acceleration, mass, temperature and other properties in a variety of electronics -- such as accelerometers to detect the direction a smartphone is facing. Reduced interference could improve performance of these sensors. In addition, using a laser to cool the resonator is a much more targeted approach to improve sensor performance compared to trying to cool an entire sensor.

In their experimental setup, a tiny ribbon, or nanoribbon, of cadmium sulfide extended from a block of silicon -- and would naturally undergo thermal oscillation at room temperature.

At the end of this diving board, the team placed a tiny ceramic crystal containing a specific type of impurity, ytterbium ions. When the team focused an infrared laser beam at the crystal, the impurities absorbed a small amount of energy from the crystal, causing it to glow in light that is shorter in wavelength than the laser color that excited it. This "blueshift glow" effect cooled the ceramic crystal and the semiconductor nanoribbon it was attached to.

"These crystals were carefully synthesized with a specific concentration of ytterbium to maximize the cooling efficiency," said co-author Xiaojing Xia, a UW doctoral student in molecular engineering.

The researchers used two methods to measure how much the laser cooled the semiconductor. First, they observed changes to the oscillation frequency of the nanoribbon.

"The nanoribbon becomes more stiff and brittle after cooling -- more resistant to bending and compression. As a result, it oscillates at a higher frequency, which verified that the laser had cooled the resonator," said Pauzauskie.

The team also observed that the light emitted by the crystal shifted on average to longer wavelengths as they increased laser power, which also indicated cooling.

Using these two methods, the researchers calculated that the resonator's temperature had dropped by as much as 20 degrees C below room temperature. The refrigeration effect took less than 1 millisecond and lasted as long as the excitation laser was on.

"In the coming years, I will eagerly look to see our laser cooling technology adapted by scientists from various fields to enhance the performance of quantum sensors," said lead author Anupum Pant, a UW doctoral student in materials science and engineering.

Researchers say the method has other potential applications. It could form the heart of highly precise scientific instruments, using changes in oscillations of the resonator to accurately measure an object's mass, such as a single virus particle. Lasers that cool solid components could also be used to develop cooling systems that keep key components in electronic systems from overheating.

Credit: 
University of Washington

COVID-19 news from Annals of Internal Medicine

Below please find a summary and link(s) of new coronavirus-related content published today in Annals of Internal Medicine. The summary below is not intended to substitute for the full article as a source of information. A collection of coronavirus-related content is free to the public at http://go.annals.org/coronavirus.

Bereavement Care in the Wake of COVID-19: Offering Condolences and Referrals

The COVID-19 pandemic is causing deaths with forced separations that deny final goodbyes and traditional mourning rituals. These conditions threaten survivors' mental health, leaving them vulnerable to enduring psychological distress. Authors from Memorial Sloan Kettering Cancer Center and Cornell Center for Research on End-of-Life Care at Weill Cornell Medicine suggest words clinicians can say to bereaved family members and guidance on when to make referrals, particularly when time for bereavement care is limited, to offset the risks that the pandemic has posed. Read the full text: https://www.acpjournals.org/doi/10.7326/M20-2526.

Media contacts: A PDF for this article is not yet available. Please click the link to read full text. The lead author, Wendy G. Lichtenthal, PhD, FT, can be reached by contacting Rebecca Williams at williamr@mskcc.org.

Credit: 
American College of Physicians

NASA satellite gives a hello to tropical storm Dolly

image: NASA's Terra satellite provided a visible image of Tropical Storm Dolly in the western North Atlantic Ocean on June 23, 2020 at 1:30 p.m. EDT.

Image: 
NASA Worldview

During the morning of June 23, the fourth system in the Northern Atlantic Ocean was a subtropical depression. By the afternoon, the subtropical depression took on tropical characteristics and was renamed Dolly. NASA's Terra satellite greeted Tropical Storm Dolly by taking an image of the new tropical storm.

At 1 p.m. EDT (1700 UTC), the National Hurricane Center (NHC) classified Dolly as a tropical storm. The NHC Discussion said, "A 1348 UTC (9:48 a.m. EDT) ASCAT-A scatterometer [a satellite instrument that measures wind speed and direction] pass, arriving just after the previous advisory was issued, indicates that the cyclone is producing winds of 35 to 40 knots (40 to 46 mph/65 to 74 kph) in its southern semicircle. In addition, the radius of maximum winds has contracted to about 40 nautical miles. This, along with the current convective pattern, suggests that the system has made a transition from a subtropical to a tropical cyclone, and it has been designated as Tropical Storm Dolly."

The center of Tropical Storm Dolly was located near latitude 39.4 degrees north and longitude 61.7 degrees west. That is about 370 miles (600 km) south-southeast of Halifax, Nova Scotia, Canada. Dolly was moving toward the east-northeast near 13 mph (20 kph). A turn toward the northeast with an increase in forward speed is expected tonight and on Wednesday, June 24.

Satellite-derived wind data indicate that maximum sustained winds have increased to near 45 mph (75 kph) with higher gusts. Tropical-storm-force winds extend outward up to 70 miles (110 km) to the south of the center. The estimated minimum central pressure is 1002 millibars.

The Moderate Resolution Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Terra satellite captured a visible image of Tropical Storm Dolly in the western North Atlantic Ocean on June 23, 2020 at 1:30 p.m. EDT. The image showed a thick band of thunderstorms wrapping around the center from the southern to the eastern quadrant of the storm. The image was created by NASA Worldview at NASA's Goddard Space Flight Center in Greenbelt, Md.

The National Hurricane Center said, "Weakening is forecast during the next day or two as Dolly moves over colder waters, and the system is expected to become post-tropical on Wednesday. The low should then dissipate by early Thursday."

NASA's Terra satellite is one in a fleet of NASA satellites that provide data for hurricane research.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

Tongue microbes provide window to heart health

image: Clinically, there are differences in tongue images, including tongue coating and tongue colour, between chronic heart failure (CHF) patients and healthy individuals. Recent reports have suggested alterations in the tongue microbiota, which may play a critical role in disease. CHF-associated tongue coating microbiome dysbiosis has not yet been clearly defined.

Image: 
@European Society of Cardiology 2020

Sophia Antipolis - 23 June 2020: Microorganisms on the tongue could help diagnose heart failure, according to research presented today on HFA Discoveries, a scientific platform of the European Society of Cardiology (ESC).1

"The tongues of patients with chronic heart failure look totally different to those of healthy people," said study author Dr. Tianhui Yuan, No.1 Hospital of Guangzhou University of Chinese Medicine. "Normal tongues are pale red with a pale white coating. Heart failure patients have a redder tongue with a yellow coating and the appearance changes as the disease becomes more advanced."

"Our study found that the composition, quantity and dominant bacteria of the tongue coating differ between heart failure patients and healthy people," she said.

Previous research has shown that microorganisms in the tongue coating could distinguish patients with pancreatic cancer from healthy people.2 The authors of that study proposed this as an early marker to diagnose pancreatic cancer. And, since certain bacteria are linked with immunity, they suggested that the microbial imbalance could stimulate inflammation and disease. Inflammation and the immune response also play a role in heart failure.3

This study investigated the composition of the tongue microbiome in participants with and without chronic heart failure. The study enrolled 42 patients in hospital with chronic heart failure and 28 healthy controls. None of the participants had oral, tongue or dental diseases; had suffered an upper respiratory tract infection in the past week; had used antibiotics or immunosuppressants in the past week; or were pregnant or lactating.

Stainless steel spoons were used to take samples of the tongue coating in the morning, before participants had brushed their teeth or eaten breakfast. A technique called 16S rRNA gene sequencing was used to identify bacteria in the samples.

The researchers found that heart failure patients shared the same types of microorganisms in their tongue coating. Healthy people also shared the same microbes. There was no overlap in bacterial content between the two groups.

At the genus level, five categories of bacteria distinguished heart failure patients from healthy people with an area under the curve (AUC) of 0.84 (where 1.0 is a 100% accurate prediction and 0.5 is a random finding).
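
A minimal sketch of how such a discrimination score can be computed from genus-level abundances, assuming a simple logistic-regression classifier and made-up data; the study's actual statistical pipeline is not described in this summary.

```python
# Illustrative computation of an area under the ROC curve (AUC) from
# genus-level tongue-microbiome abundances. Data and model are made up;
# the study's own analysis may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_patients, n_controls, n_genera = 42, 28, 5

# Fake relative abundances for five discriminating genera
X = np.vstack([
    rng.normal(0.30, 0.10, size=(n_patients, n_genera)),   # heart failure
    rng.normal(0.20, 0.10, size=(n_controls, n_genera)),   # healthy controls
])
y = np.array([1] * n_patients + [0] * n_controls)

# Cross-validated predicted probabilities, then AUC
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, probs), 2))
```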

In addition, there was a downward trend in levels of Eubacterium and Solobacterium with increasingly advanced heart failure.

Dr. Yuan said: "More research is needed, but our results suggest that tongue microbes, which are easy to obtain, could assist with wide-scale screening, diagnosis, and long-term monitoring of heart failure. The underlying mechanisms connecting microorganisms in the tongue coating with heart function deserve further study."

Credit: 
European Society of Cardiology