Tech

Asteroid's scars tell stories of its past

image: This image shows four views of asteroid Bennu along with a corresponding global mosaic. The images were taken on Dec. 2, 2018, by the OSIRIS-REx spacecraft's PolyCam camera, which is part of the OCAMS instrument suite designed by UArizona scientists and engineers.

Image: 
NASA/Goddard/University of Arizona

By studying impact marks on the surface of asteroid Bennu - the target of NASA's OSIRIS-REx mission - a team of researchers led by the University of Arizona has uncovered the asteroid's past and revealed that despite forming hundreds of millions of years ago, Bennu wandered into Earth's neighborhood only very recently.

The study, published in the journal Nature, provides a new benchmark for understanding the evolution of asteroids, offers insights into a poorly understood population of space debris hazardous to spacecraft, and enhances scientists' understanding of the solar system.

The researchers used images and laser-based measurements taken during a two-year surveying phase in which the van-sized OSIRIS-REx spacecraft orbited Bennu, setting the record for the smallest celestial object ever orbited by a spacecraft.

Presented at the opening day of the American Astronomical Society's Division for Planetary Sciences meeting on Oct. 26, the paper details the first observations and measurements of impact craters on individual boulders on an airless planetary surface since the Apollo missions to the moon 50 years ago, according to the authors.

The publication comes just a few days after a major milestone for NASA's University of Arizona-led OSIRIS-REx mission. On Oct. 20, the spacecraft successfully descended to asteroid Bennu to grab a sample from its boulder-scattered surface - a first for NASA. The sample has now been successfully stowed and will be returned to Earth for study in 2023, where it could give scientists insight into the earliest stages of the formation of our solar system.

Impact Craters on Rocks Tell a Story

Although Earth is being pelted with more than 100 tons of space debris each day, it is virtually impossible to find a rockface pitted by impacts from small objects at high velocities. Courtesy of our atmosphere, we get to enjoy any object smaller than a few meters as a shooting star rather than having to fear being struck by what essentially amounts to a bullet from outer space.

Planetary bodies lacking such a protective layer, however, bear the full brunt of a perpetual cosmic barrage, and they have the scars to show for it. High-resolution images taken by the OSIRIS-REx spacecraft during its two-year survey campaign allowed researchers to study even tiny craters, with diameters ranging from a centimeter to a meter, on Bennu's boulders.

On average, the team found boulders of 1 meter (3 feet) or larger to be scarred by anywhere from one to 60 pits - impacted by space debris ranging in size from a few millimeters to tens of centimeters.

"I was surprised to see these features on the surface of Bennu," said the paper's lead author, Ronald Ballouz, a postdoctoral researcher in the UArizona Lunar and Planetary Laboratory and a scientist with the OSIRIS-REx regolith development working group. "The rocks tell their history through the craters they accumulated over time. We haven't observed anything like this since astronauts walked on the moon."

For Ballouz, who grew up during the 1990s in post-civil war Beirut, Lebanon, the image of a rock surface pitted with small impact craters evoked childhood memories of building walls riddled with bullet holes in his war-torn home country.

"Where I grew up, the buildings have bullet holes all over, and I never thought about it," he said. "It was just a fact of life. So, when I looked at the images from the asteroid, I was very curious, and I immediately thought these must be impact features."

The observations made by Ballouz and his team bridge a gap between previous studies of space debris larger than a few centimeters, based on impacts on the moon, and studies of objects smaller than a few millimeters, based on observations of meteors entering Earth's atmosphere and impacts on spacecraft.

"The objects that formed the craters on Bennu's boulders fall within this gap that we don't really know much about," Ballouz said, adding that rocks in that size range are an important field of study, mainly because they represent hazards for spacecraft in orbit around Earth. "An impact from one of these millimeter to centimeter-size objects at speeds of 45,000 miles per hour can be dangerous."

Ballouz and his team developed a technique to quantify the strength of solid objects using remote observations of craters on the surfaces of boulders - a mathematical formula that allows researchers to calculate the maximum impact energy that a boulder of a given size and strength could endure before being smashed. In other words, the crater distribution found on Bennu today keeps a historical record of the frequency, size and velocity of impact events the asteroid has experienced throughout its history.

"The idea is actually pretty simple," Ballouz said, using a building exposed to artillery fire as an analogy to boulders on an asteroid. "We ask, 'What is the largest crater you can make on that wall before the wall disintegrates?' Based on observations of multiple walls of the same size, but with different sized craters, you can get some idea of the strength of that wall."

The same holds true for a boulder on an asteroid, said Ballouz, who added that the approach could be applied to any other asteroid or airless body that astronauts or spacecraft may visit in the future.

"If a boulder gets hit by something larger than an object that would leave a certain size cater, it would just disappear," he explained. In other words, the size distribution of boulders that have persisted on Bennu serve as silent witnesses to its geologic past.

A Newcomer to Earth's Neighborhood

Applying the technique to boulders ranging in size from pebbles to parking garages, the researchers were able to make inferences about the sizes and types of impactors to which the boulders were exposed, and for how long.

The authors conclude that the largest craters on Bennu's boulders were created while Bennu resided in the asteroid belt, where impact speeds are lower than in the near-Earth environment but impacts are more frequent and often near the limit of what the boulders could withstand. Smaller craters, on the other hand, were acquired more recently, during Bennu's time in near-Earth space, where impact speeds are higher but potentially disruptive impactors are much less common.

Based on these calculations, the authors determine that Bennu is a relative newcomer to Earth's neighborhood. Although it is thought to have formed in the main asteroid belt more than 100 million years ago, it is estimated that it was kicked out of the asteroid belt and migrated to its current territory only 1.75 million years ago. Extending the results to other near-Earth objects, or NEOs, the researchers also suggest that these objects likely come from parent bodies that fall in the category of asteroids, which are mostly rocky with little or no ice, rather than comets, which have more ice than rock.

While theoretical models suggest that the asteroid belt is the reservoir for NEOs, no observational evidence of their provenance was available other than meteorites that fell to Earth and were collected, Ballouz said. With these data, researchers can validate their models of where NEOs come from, according to Ballouz, and get an idea of how strong and solid these objects are - crucial information for any potential missions targeting asteroids in the future for research, resource extraction or protecting Earth from impact.

Credit: 
University of Arizona

Novel adoptive cell transfer method shortens timeline for T-cell manufacture

image: A faster approach for T-cell therapy against cancer. Knochelmann and colleagues show a subset of CD4 T-cells, Th17 cells, can effectively kill tumor cells when they are grown in cell culture for only four days. Th17 cells grown for shorter or longer periods of time cannot successfully eradicate the tumor. Additionally, the cytokine IL-6 is produced by the day-four Th17 cells, which supports the tumor-eradicating functions of the Th17 cells.

Image: 
MUSC Hollings Cancer Center

Adoptively transferred T-cells can prolong survival and sometimes cure patients with advanced solid tumors. While promising, the approach can take months to generate the T-cells needed to help these patients. Such slow speed makes this therapy impractical for most patients who need immediate treatment.

In the September issue of Cancer Research, Hannah Knochelmann, a student in the Medical Scientist Training Program at the Medical University of South Carolina (MUSC) and researcher in Chrystal Paulos' laboratory, teamed up with investigators at three different NCI-designated cancer centers -- MUSC Hollings Cancer Center, Emory Winship and the James at The Ohio State University -- to report a new approach to generate T-cells faster for patients in the near future.

The human immune system contains two main types of T-cells: CD4 and CD8. This team cut down the time needed to manufacture T-cells from several months to less than one week by using a remarkably potent CD4 T-cell subset, called Th17 cells.

"In fact," Knochelmann explained, "very few Th17 cells were needed to eradicate multiple different types of tumors effectively. This new milestone could widen inclusion criteria to promote access to T-cell therapy for more patients with metastatic disease."

Adoptive T-cell transfer therapy, which is the transfer of therapeutic T-cells into a patient, is used in only a handful of institutions around the world. This makes a potent therapy inaccessible for the general population. These protocols often use billions of CD8 T-cells, which have cytotoxic properties that allow them to kill cancerous cells. However, it takes weeks of growth in cell culture to grow enough CD8 T-cells to be used in a single treatment. Paulos, who is Knochelmann's mentor and director of Translational Research for Cutaneous Malignancies at Emory University, said, "What is most remarkable about this finding is that we can build on this platform to bring T-cells to patients all over the world."

The best effector Th17 cells are grown in cell culture for only four days before being infused into the host; any shorter or longer in culture reduced the efficacy of the treatment. While the team could generate more Th17 cells over several weeks, those larger doses were equally or less effective than the smaller number of Th17 cells expanded for only four days. This finding highlights the potential for T-cell therapy to be administered to patients sooner, a discovery with immediate clinical implications.

Another limitation of conventional T-cell therapy is that patients can relapse -- cancer can return even after seemingly successful treatment. Therefore, Knochelmann and the team sought to develop a therapy that was long-lived while understanding factors that can prevent relapse. They found that day-four Th17 cell therapy provides a long-lasting response. Interestingly, IL-6 was a key cytokine in fueling these T-cells to prevent relapse after treatment. This cytokine destabilized the regulatory T-cells, the brakes of the immune system, which empowered the Th17 cells to kill cancer cells.

Paulos said the researchers want this data to inspire physicians with a new way of thinking about immunotherapy. "This treatment has the potential to be very versatile. If the tumor can be targeted, meaning that a unique identifier for the tumor is known, this treatment can be effective. Thus, this therapy can be used to treat patients with either liquid or solid tumors."

Knochelmann said the core facilities and research environment at MUSC were a critical piece for the success of this work. "Many colleagues gave me key advice on this discovery. In fact, this work is a great example of what can be accomplished when different minds come together. It has been inspiring and rewarding to work on improving medicine for the future."

This work was funded by the National Institutes of Health and the Melanoma Research Foundation, as independent grants were awarded to both Knochelmann and Paulos. The research team is now collaborating with surgeons and medical oncologists to develop their findings into applicable treatments for patients. "Our vision is that T-cell products will be generated for patients within a few days," Knochelmann said, "so these therapies can help all patients in need, especially those needing treatment quickly."

Credit: 
Medical University of South Carolina

Giving the immune system a double boost against cancer

image: A highly specialized cell, the fibroblastic reticular cell, coordinates immune responses to cancer cells. In this image, a single fibroblastic reticular cell is identified by staining: the cell nucleus (blue), markers that identify fibroblast cells (red), and a molecule that attracts immune cells (green).

Image: 
CSHL, 2020

Cancer immunotherapies, which empower patients' immune systems to eliminate tumors, are revolutionizing cancer treatment. Many patients respond well to these treatments, sometimes experiencing long-lasting remissions. But some cancers remain difficult to treat with immunotherapy, and expanding the impact of the approach is a high priority.

In the October 30 issue of the Proceedings of the National Academy of Sciences, a team led by Cold Spring Harbor Laboratory scientists Tobias Janowitz and Douglas Fearon, together with Duncan Jodrell at the Cancer Research UK Cambridge Institute and Centre, University of Cambridge, reports on a clinical trial of a drug that induces an integrated immune response in the tumors of patients with cancer types that do not usually respond to immunotherapy. The researchers hope the potential treatment might make such tumors more responsive to the class of drugs known as immune checkpoint inhibitors.

Checkpoint inhibitors release natural brakes on the immune system, freeing it to find and destroy cancer cells. But they generally have not been effective against cancer cells with low levels of genetic mutation. Janowitz said:

"Those tumors often do not seem to be visible to the immune system and do not seem to be unmasked by these therapies that are currently available. And we have reasons to believe that that is because they can engage an immune suppressive pathway that keeps most of the immune cells out of the cancer cell nest."

In this clinical trial, the research team interrupted that immunosuppressive pathway with a drug called plerixafor. The drug was administered continuously by I.V. for one week to 24 patients with either pancreatic cancer or colorectal cancer with a low tumor mutational burden. All patients had advanced disease, and biopsies were collected from metastatic tumors before and after treatment.

When the team analyzed those patient samples, they found that critical immune cells had infiltrated the tumors during the time patients received plerixafor, including a cell type known to summon and organize key players in the anti-cancer response. The finding was encouraging because the team detected changes that have also been observed in patients whose cancers responded well to checkpoint inhibitors.

Jodrell, who led the planning and patient recruitment for the clinical study, said, "I am delighted that the work of this multi-disciplinary team has translated important laboratory findings into patients, with the potential to make a difference in these hard-to-treat cancers." A clinical trial based on this study is about to start recruitment and will test the effects of combining plerixafor with an approved checkpoint inhibitor.

Credit: 
Cold Spring Harbor Laboratory

Carbon-releasing 'zombie fires' in peatlands could be dampened by new findings

Imperial College London researchers have simulated for the first time how soil moisture content affects the ignition and spread of smouldering peat fires, which can release up to 100 times more carbon into the atmosphere than flaming fires. They also simulated how several smaller peat fires can merge into one large blaze, and tracked the interplay between smouldering and flaming fires.

The findings could help scientists, authorities, and landowners to manage the clearing of vegetation in peatlands in the safest way possible. The study is published today in Proceedings of the Combustion Institute.

First author Dwi Purnomo of Imperial's Department of Mechanical Engineering said: "Peat fires are a devastating yet chronically under-researched phenomenon that spurts millions of tonnes of carbon into the atmosphere every year. If we can use scientific evidence to help people manage them more effectively, we can perhaps dampen their impact on people and the environment."

Peat fires, which occur in regions like Southeast Asia, North America, and Siberia, are driven by the burning of soils rich in organic content. When peat - which is a natural reservoir of carbon - burns, it releases up to 100 times more carbon per burn area into the atmosphere than non-peat fires. Worldwide, peat fires account for millions of tonnes of carbon released into the atmosphere each year.

Unlike smoke from flaming fires, which reaches high into the atmosphere, smoke from smouldering stays close to the ground, causing haze which damages human health and is associated with excess deaths in Southeast Asia.

Peat fires can start naturally by lightning strikes or by human activities, but often begin accidentally from controlled burns - flaming fires that are intentionally started to remove excess vegetation on the surface of forests or plantations.

However, because they're driven by smouldering, these fires are notoriously difficult to extinguish once they get out of control. Even when flames are extinguished, the fire can continue by smouldering underground and reigniting flames much later on - hence the name 'zombie fires'.

Senior author on the paper Professor Guillermo Rein, of Imperial's Department of Mechanical Engineering, said: "Although people have been using controlled burns in agriculture for centuries, starting them on peat soils can be particularly dangerous. Peat draws the fire underground, which then hides there before coming back like zombies, making detection and extinction very challenging. The effects are felt in plantations, forests, homes, residents' health and the environment."

The new research demonstrates that burning vegetation on peaty soils with a high moisture content is less likely to sustain smouldering, lessening the likelihood of losing control of blazes. The study is the first to examine the interplay between smouldering peat and flaming vegetation.

The computer model could help authorities and landowners to manage the clearing of vegetation in peatlands in the safest way possible, by for example finding the right soil moisture content to avoid the ignition or spread of smouldering.

Dwi said: "It might seem trivial that drier soils sustain faster and larger smouldering fires, but this work can predict the critical moisture values for ignition."

The researchers used advanced computer simulations of smouldering and flaming fires in peatlands, and validated the simulations by comparing them to experiments. Then, they applied the model to a control burn in Southeast Asia (see video).
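
The Imperial model itself is not described in detail here; as a minimal sketch of the core idea, that smouldering only spreads where soil moisture falls below a critical value, the toy cellular automaton below (Python) ignites one cell and lets smouldering propagate to neighbouring cells whose moisture is under an assumed threshold. The grid, threshold and spread rule are illustrative assumptions, not the validated simulations described above.

```python
import numpy as np

# Toy illustration, not the Imperial College simulations: smouldering spreads to
# a neighbouring peat cell only if that cell's moisture content is below an
# assumed critical value. Periodic boundaries are used purely for simplicity.
rng = np.random.default_rng(0)
n = 50
moisture = rng.uniform(0.3, 1.6, size=(n, n))   # assumed moisture map (dry-mass fraction)
critical_moisture = 1.0                          # assumed ignition/spread threshold

burning = np.zeros((n, n), dtype=bool)
burning[n // 2, n // 2] = moisture[n // 2, n // 2] < critical_moisture  # ignition point

for _ in range(200):
    neighbours = (np.roll(burning, 1, 0) | np.roll(burning, -1, 0) |
                  np.roll(burning, 1, 1) | np.roll(burning, -1, 1))
    new_burning = burning | (neighbours & (moisture < critical_moisture))
    if np.array_equal(new_burning, burning):
        break                                    # fire has stopped spreading
    burning = new_burning

print(f"Smouldered fraction of the plot: {burning.mean():.1%}")
```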

Dwi was inspired to study peat fires because of their abundance in his home country of Indonesia. He said: "I've seen the devastation they can cause and want to help my country and others like it which are affected by peat fires."

Next the researchers will build on their models to look for other factors that affect uncontrolled fires and in other affected regions like the Arctic.

Dwi added: "As well as soil moisture content we will look at the way rain, wind and fire-fighting affect peat fires."

Credit: 
Imperial College London

Evolution of consumption: A psychological ownership framework

Researchers from Boston University, Rutgers University, University of Washington, Cornell University, and University of Pennsylvania published a new paper in the Journal of Marketing that proposes that preserving psychological ownership in the technology-driven evolution of consumption underway should be a priority for marketers and firm strategy.

The study, forthcoming in the Journal of Marketing, is titled "Evolution of Consumption: A Psychological Ownership Framework" and is authored by Carey Morewedge, Ashwani Monga, Robert Palmatier, Suzanne Shu, and Deborah Small.

Why does--and what happens when--nothing feels like it is MINE?

Technological innovations are rapidly changing the consumption of goods and services. Consumption is evolving in modern capitalist societies from a model in which people legally own private material goods to access-based models in which people purchase temporary rights to experiential goods owned by and shared with others. For example, many urban consumers have replaced car ownership with car- and ride-sharing services. Physical pictures occupying frames, wallets, and albums have been replaced with digital photographs that can be viewed at any time, and songs, books, movies, or magazines can be pulled down from the cloud. Half the world's population now buys, sells, generates, and consumes goods and information online through connected devices, generating vast quantities of personal data about their consumption patterns and private lives.

The researchers say that technological innovations such as digitization, platform markets, and the exponential expansion of the generation and collection of personal data are driving an evolution in consumption along two major dimensions. The first dimension is from a model of legal ownership, where consumers purchase and consume their own private goods, to a model of legal access, in which consumers purchase temporary access rights to goods and services owned and used by others. The second dimension is from consuming solid material goods to liquid experiential goods. The benefits of these consumption changes, from convenience to lower economic cost to greater sustainability to better preference matching, make legal ownership of many physical private goods undesirable and unnecessary. But their commensurate reduction in psychological ownership--the feeling that a thing is "MINE"--may have profoundly negative consequences for consumers and firms.

Morewedge explains that "Psychological ownership is not legal ownership, but is, in many ways, a valuable asset for consumers and firms. It satisfies important consumer motives and is value-enhancing. The feeling that a good is MINE enhances how much we like the good, strengthens our attachment to it, and increases how much we think it is worth." Downstream consequences to firms include increased consumer demand for goods and services offered by the firm, willingness to pay for goods, word of mouth, and loyalty.

The researchers propose that the consumption changes underway can have three effects on psychological ownership--threaten it, cause it to transfer to other targets, and create new opportunities to preserve it. Fractional ownership models and the impermanence and intangibility of access-based experiential goods stunt the development of psychological ownership for streamed, rented, and cloud-based goods. In many cases, this results in a loss of psychological ownership, but sometimes it will transfer to the brands (e.g., Disney, Uber, MyChart) and devices through which goods and services are accessed (e.g., smartphones) or transfer to the community of consumers who use them (e.g., Facebook groups, followers, and forums). The greater choice and new channels for self-expression provided by this evolution of consumption, however, also offer new opportunities for consumers to feel as much psychological ownership for these access-based experiential goods and services they consume as they would for privately owned material goods.

These consumption changes and their effects on psychological ownership appear in a framework that is examined across three macro marketing trends: (1) the growth of the sharing economy; (2) the digitization of goods and services; and (3) the expansion of personal data. Exemplary cases explored include ride sharing, the digitization of music, and the expansion of health and wellness data. Each case illustrates why each of these trends is eroding psychological ownership, how it is being transformed, and the new opportunities being created for firms to recapture and preserve it--whether in goods and services, intermediary devices like a phone, or at the brand level.

This psychological ownership framework generates future research opportunities and actionable marketing strategies for firms seeking to preserve the value-enhancing consequences of psychological ownership and navigate cases where it is a liability. It highlights many ways in which psychological ownership will continue to be a valuable lens through which to view, understand, forecast, and manage the consumer experience.

Full article and author contact information available at: https://doi.org/10.1177%2F0022242920957007

Credit: 
American Marketing Association

Smart tablecloth can find fruit and help with watering the plants

image: The Capacitivo smart fabric can identify fruit and find lost objects. Overall, the system achieved a 94.5% accuracy in testing.

Image: 
Figure courtesy of XDiscovery Lab.

HANOVER, N.H. - October 29, 2020 - Researchers have designed a smart fabric that can detect non-metallic objects ranging from avocados to credit cards, according to a study from Dartmouth College and Microsoft Research.

The fabric, named Capacitivo, senses shifts in electrical charge to identify items of varying shapes and sizes.

A study and demonstration video describing the sensing system were presented at the ACM Symposium on User Interface Software and Technology (UIST 2020).

"This research has the potential to change the way people interact with computing through everyday soft objects made of fabrics," said Xing-Dong Yang, an assistant professor of computer science and senior researcher for the study.

Existing sensing techniques using fabrics typically rely on inputs such as user touch. The new interactive system relies on an "implicit input" technique in which the fabric does not require action from the object it is sensing.

The fabric system recognizes objects based on shifts to electrical charge in its electrodes caused by changes to an object's electrical field. The difference in charge can relate to the type of material, size of the object and shape of the contact area.

Information detected on the electrical charge is compared to data stored in the system using machine learning techniques.
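
The article does not spell out Capacitivo's actual learning pipeline, so the sketch below (Python) only illustrates the general pattern it describes: flatten a grid of per-electrode capacitance shifts into a feature vector and match it against stored examples with an off-the-shelf classifier. The grid size, object labels, synthetic 'footprints' and choice of a nearest-neighbour classifier are all assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative sketch, not the published Capacitivo pipeline: each sample is a
# grid of per-electrode capacitance shifts, flattened into a feature vector and
# matched against stored examples with a nearest-neighbour classifier.
rng = np.random.default_rng(42)
GRID = (10, 10)                      # assumed electrode grid size
labels = ["empty", "avocado", "kiwi", "mug_full", "mug_empty"]

def synthetic_reading(label_idx):
    """Fake capacitance-shift map: each object gets its own crude 'footprint'."""
    base = np.zeros(GRID)
    base[2:2 + label_idx + 1, 3:6] = 0.5 + 0.1 * label_idx
    return (base + rng.normal(0, 0.05, GRID)).ravel()

# Build a small synthetic training set: 20 noisy readings per object class.
X = np.array([synthetic_reading(i) for i in range(len(labels)) for _ in range(20)])
y = np.array([lbl for lbl in labels for _ in range(20)])

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([synthetic_reading(1)]))   # expect: ['avocado']
```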

The ability to recognize non-metallic objects such as food items, liquids, kitchenware, plastic, and paper products makes the system unique.

"Being able to sense non-metallic objects is a breakthrough for smart fabrics because it allows users to interact with a wide variety of everyday items in entirely new ways," said Te-Yen Wu a PhD student at Dartmouth and lead author of the study.

Twenty objects were tested on the "smart tablecloth" as part of the study. The objects varied in size, shape and material. The team also included a water glass and a bowl to test how reliably the system could recognize the fullness of a container.

Overall, the system achieved a 94.5% accuracy in testing.

The system was particularly accurate for distinguishing between different fruits, such as kiwis and avocados. The status of a liquid container was also relatively simple for the system to determine.

In a supplemental study, the system was able to distinguish between different types of liquids such as water, milk, apple cider and soda.

The system was less accurate for objects that don't create firm footprints on the fabric, such as credit cards.

The design prototype features a grid of diamond-shaped electrodes made from conductive fabric attached to a sheet of cotton. The size of the electrodes and the distance between them were designed to maximize the sensing area and sensitivity.

When an object or an object's status is identified by the fabric--such as when a potted plant needs watering--the smart fabric can trigger a desired action or prompt.

Researchers expect that the system can serve a variety of functions including helping to find lost objects, providing alerts or notifications, and providing information to other smart systems such as diet trackers.

The system can even assist with cooking by making recipe suggestions and giving preparation instructions.

Teddy Seyed from Microsoft Research, Lu Tan from Wuhan University, and Yuji Zhang from Southeast University also contributed to this research.

Credit: 
Dartmouth College

Curbing COVID-19 hospitalizations requires attention to construction workers

Construction workers have a much higher risk of becoming hospitalized with the novel coronavirus than non-construction workers, according to a new study from researchers with The University of Texas at Austin COVID-19 Modeling Consortium.

Analyzing data from mid-March to mid-August on hospitalizations in Austin, Texas, the researchers found that construction workers there were five times as likely to be hospitalized with the coronavirus as workers in other occupations. The finding closely matches forecasts the team made in April.
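
For readers who want the arithmetic behind a statement like "five times as likely," the sketch below (Python) computes a crude hospitalization rate ratio and an approximate 95% confidence interval from counts; the numbers are placeholders for illustration, not the Austin data reported in the study.

```python
import math

# Placeholder counts for illustration only - not the Austin figures.
construction_hosp, construction_workers = 110, 50_000
other_hosp, other_workers = 600, 1_400_000

rate_construction = construction_hosp / construction_workers
rate_other = other_hosp / other_workers
rate_ratio = rate_construction / rate_other

# Approximate 95% CI for a rate ratio using the log-scale standard error.
se_log_rr = math.sqrt(1 / construction_hosp + 1 / other_hosp)
ci_low = math.exp(math.log(rate_ratio) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rate_ratio) + 1.96 * se_log_rr)
print(f"Rate ratio: {rate_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
```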

The current study is, to the authors' knowledge, the first to compare COVID-19 hospitalizations of construction workers to non-construction workers. An earlier study by the CDC reported that the construction sector was ranked number two in frequency of workplace outbreaks in Utah.

According to the researchers, the higher vulnerability for construction workers probably stems from the continuation of construction work throughout the pandemic, even during stay-home orders and other community-wide mitigation measures. The nature of the work exacerbated the risks due to close contact with others, practices by employers and demographic factors.

"It doesn't necessarily mean we need to stop construction work," said Lauren Ancel Meyers, a professor of integrative biology and director of the consortium. "It means we need to go to great lengths to ensure the health and safety of workers when they do go to work."

Encouraging basic precautions such as mask wearing and physical distancing on the work site would help, the authors note, as would having governments or employers offer workers paid sick leave and other incentives to stay home when they have a known exposure or have mild symptoms, to help mitigate risk. In addition, regular work site-based surveillance COVID-19 testing (with effective tracing and isolation of detected cases) can help prevent spread.

In central Texas, construction workers are disproportionately Hispanic, and many of them are uninsured or in close contact with people who have limited access to health care. Compared with the general population, they also experience more underlying health conditions linked to severe cases of COVID-19, are more likely to have more people in the home and may feel pressured to work even when they don't feel well due to socioeconomic pressures.

In Texas, COVID-19 has disproportionately affected Hispanics, who account for about 40% of the state's population but 56% of its COVID-19 fatalities, according to the latest data from the Texas Department of State Health Services.

"These workers face many overlapping risks and are being exposed at a time when less vulnerable populations are able to stay home," Meyers said.

Across the U.S., construction workers are disproportionately Hispanic: 17.6% of all workers are Hispanic or Latino, yet 30% of construction workers are Hispanic or Latino, according to the U.S. Bureau of Labor Statistics.

The study's other authors are Remy Pasco, graduate student in the Meyers lab; Spencer Fox, the consortium's associate director; Clay Johnston, dean of the Dell Medical School and vice president of medical affairs at UT Austin; and Michael Pignone, chair of the Department of Internal Medicine and interim chair of the Department of Population Health at Dell Med.

The results are published in the peer-reviewed journal JAMA Network Open, an open-access journal of the American Medical Association's JAMA Network.

In their earlier study delivered in the spring, at the request of the City of Austin, the team analyzed the risks of allowing construction work to continue during the pandemic. (On March 31, Texas Gov. Greg Abbott declared all construction essential and permissible statewide, overriding earlier local restrictions.) At the time, the team projected that construction workers would have a 4 to 5 times higher rate of hospitalization than non-construction workers -- a prediction the new paper bears out.

"From mid-March to mid-August, the elevated risk of COVID hospitalization among construction workers matched our model predictions almost to a T," Pasco said. "The rise in COVID-19 hospitalizations among construction workers suggest that the virus has been spreading at work sites, and more should be done to protect the health and safety of the workers."

Their model also predicted that continued construction work would increase the rates of hospitalizations among the general public because of increased transmission from construction workers, but with current levels of contact tracing, that is much harder to measure and validate, Meyers noted.

Credit: 
University of Texas at Austin

Escaping the 'Era of Pandemics': experts warn worse crises to come; offer options to reduce risk

image: Convened by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) for an urgent virtual workshop about the links between degradation of nature and increasing pandemic risks, the experts agree that escaping the era of pandemics is possible, but that this will require a seismic shift in approach from reaction to prevention.

Image: 
IPBES

Future pandemics will emerge more often, spread more rapidly, do more damage to the world economy and kill more people than COVID-19 unless there is a transformative change in the global approach to dealing with infectious diseases, warns a major new report on biodiversity and pandemics by 22 leading experts from around the world.

Convened by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) for an urgent virtual workshop about the links between degradation of nature and increasing pandemic risks, the experts agree that escaping the era of pandemics is possible, but that this will require a seismic shift in approach from reaction to prevention.

COVID-19 is at least the sixth global health pandemic since the Great Influenza Pandemic of 1918, and although it has its origins in microbes carried by animals, like all pandemics its emergence has been entirely driven by human activities, says the report released on Thursday. It is estimated that another 1.7 million currently 'undiscovered' viruses exist in mammals and birds - of which up to 850,000 could have the ability to infect people.

"There is no great mystery about the cause of the COVID-19 pandemic - or of any modern pandemic", said Dr. Peter Daszak, President of EcoHealth Alliance and Chair of the IPBES workshop. "The same human activities that drive climate change and biodiversity loss also drive pandemic risk through their impacts on our environment. Changes in the way we use land; the expansion and intensification of agriculture; and unsustainable trade, production and consumption disrupt nature and increase contact between wildlife, livestock, pathogens and people. This is the path to pandemics."

Pandemic risk can be significantly lowered by reducing the human activities that drive the loss of biodiversity, by greater conservation of protected areas, and through measures that reduce unsustainable exploitation of high biodiversity regions. This will reduce wildlife-livestock-human contact and help prevent the spillover of new diseases, says the report.

"The overwhelming scientific evidence points to a very positive conclusion," said Dr. Daszak. "We have the increasing ability to prevent pandemics - but the way we are tackling them right now largely ignores that ability. Our approach has effectively stagnated - we still rely on attempts to contain and control diseases after they emerge, through vaccines and therapeutics. We can escape the era of pandemics, but this requires a much greater focus on prevention in addition to reaction."

"The fact that human activity has been able to so fundamentally change our natural environment need not always be a negative outcome. It also provides convincing proof of our power to drive the change needed to reduce the risk of future pandemics - while simultaneously benefiting conservation and reducing climate change."

The report says that relying on responses to diseases after their emergence, such as public health measures and technological solutions, in particular the rapid design and distribution of new vaccines and therapeutics, is a "slow and uncertain path", underscoring both the widespread human suffering and the tens of billions of dollars in annual economic damage to the global economy of reacting to pandemics.

The report points to the likely cost of COVID-19, estimated at $8-16 trillion globally by July 2020, with costs in the United States alone possibly reaching as high as $16 trillion by the fourth quarter of 2021. The experts estimate the cost of reducing risks to prevent pandemics to be 100 times less than the cost of responding to such pandemics, "providing strong economic incentives for transformative change."

The report also offers a number of policy options that would help to reduce and address pandemic risk. Among these are:

- Launching a high-level intergovernmental council on pandemic prevention to provide decision-makers with the best science and evidence on emerging diseases; predict high-risk areas; evaluate the economic impact of potential pandemics and to highlight research gaps. Such a council could also coordinate the design of a global monitoring framework.

- Countries setting mutually-agreed goals or targets within the framework of an international accord or agreement - with clear benefits for people, animals and the environment.

- Institutionalizing the 'One Health' approach in national governments to build pandemic preparedness, enhance pandemic prevention programs, and to investigate and control outbreaks across sectors.

- Developing and incorporating pandemic and emerging disease risk health impact assessments in major development and land-use projects, while reforming financial aid for land-use so that benefits and risks to biodiversity and health are recognized and explicitly targeted.

- Ensuring that the economic cost of pandemics is factored into consumption, production, and government policies and budgets.

- Enabling changes to reduce the types of consumption, globalized agricultural expansion and trade that have led to pandemics - this could include taxes or levies on meat consumption, livestock production and other forms of high pandemic-risk activities.

- Reducing zoonotic disease risks in the international wildlife trade through a new intergovernmental 'health and trade' partnership; reducing or removing high disease-risk species in the wildlife trade; enhancing law enforcement in all aspects of the illegal wildlife trade and improving community education in disease hotspots about the health risks of wildlife trade.

- Valuing Indigenous Peoples and local communities' engagement and knowledge in pandemic prevention programs, achieving greater food security, and reducing consumption of wildlife.

- Closing critical knowledge gaps such as those about key risk behaviors, the relative importance of illegal, unregulated, and the legal and regulated wildlife trade in disease risk, and improving understanding of the relationship between ecosystem degradation and restoration, landscape structure and the risk of disease emergence.

Speaking about the workshop report, Dr. Anne Larigauderie, Executive Secretary of IPBES said: "The COVID-19 pandemic has highlighted the importance of science and expertise to inform policy and decision-making. Although it is not one of the typical IPBES intergovernmental assessments reports, this is an extraordinary peer-reviewed expert publication, representing the perspectives of some of the world's leading scientists, with the most up-to-date evidence and produced under significant time constraints. We congratulate Dr. Daszak and the other authors of this workshop report and thank them for this vital contribution to our understanding of the emergence of pandemics and options for controlling and preventing future outbreaks. This will inform a number of IPBES assessments already underway, in addition to offering decision-makers new insights into pandemic risk reduction and options for prevention."

Credit: 
Terry Collins Assoc

Radical diagnostic could save millions of people at risk of dying from blood loss

image: Fibrinogen diagnostic developed by researchers at Monash University

Image: 
Monash University / BioPRIA

- In a world-first, engineers at Monash University in Australia have developed a diagnostic that can help deliver urgent treatment to people at risk of dying from rapid blood loss.

- This simple, cheap and portable diagnostic measures fibrinogen concentration in blood. Fibrinogen is needed for clotting and stops people bleeding to death from traumatic injury and major surgery.

- The diagnostic can be performed using a fresh whole blood sample, not just plasma, and can be performed in under four minutes.

Engineers at Monash University in Australia have developed a fast, portable and cheap diagnostic that can help deliver urgent treatment to people at risk of dying from rapid blood loss.

In a world-first outcome that could save more than two million lives globally each year, researchers have developed a diagnostic using a glass slide, Teflon film and a piece of paper that can test for levels of fibrinogen concentration in blood in less than four minutes.

Fibrinogen is a protein found in blood that is needed for clotting. When a patient experiences traumatic injury, such as a serious car accident, or major surgery and childbirth complications, fibrinogen is required in their blood to prevent major haemorrhaging and death from blood loss.

Typically, heavily bleeding patients must be transported to a hospital or emergency centre where they undergo diagnostic tests before being treated. These tests are time consuming and costly as they require expensive equipment, specialised/trained personnel and can take up to half an hour.

This new development by researchers at Monash University's Department of Chemical Engineering and BioPRIA (Bioresource Processing Institute of Australia), in collaboration with Haemokinesis, removes the need for centralised hospital equipment to detect, monitor and treat fibrinogen levels - something never achieved until now.

Additionally, this diagnostic can be upscaled into a point-of-care tool and placed in ambulances and other first responder vehicles, in regional and remote locations, and in GP clinics. It takes just four minutes to complete.

Findings were published in the prestigious journal, ACS Sensors.

Professor Gil Garnier, Director of BioPRIA, said this diagnostic will allow emergency doctors and paramedics to quickly and accurately diagnose low levels of fibrinogen in patients, giving them faster access to life-saving treatment to stop critical bleeding.

"When a patient is bleeding heavily and has received several blood transfusions, their levels of fibrinogen drop. Even after dozens of transfusions, patients keep bleeding. What they need is an injection of fibrinogen. However, if patients receive too much fibrinogen, they can also die," Professor Garnier said.

"There are more than 60 tests that can measure fibrinogen concentration. However, these tests require importable machinery on hospital table tops to use. This means that critical time has to be spent transporting heavily bleeding patients to a hospital - before they even undergo a 30 minute diagnosis."

PhD candidate in the Department of Chemical Engineering and research co-author, Marek Bialkower, said the implications for this diagnostic are significant.

"Our diagnostic can eliminate the preparation time, labour and transportation difficulties of traditional techniques used in the hospital, Mr Bialkower said.

"It can diagnose hypofibrinogenemia in critically bleeding patients anywhere in the world, and can drastically reduce the time to treatment needed for fibrinogen replacement therapy. The test can take less than four minutes, about five times faster than the current gold standard methods."

The test works by placing a pre-mixed droplet of a blood sample and an enzyme solution onto a solid surface, allowing it to clot, and then dropping a paper strip on top. The further that blood moves down the strip of paper, the lower the fibrinogen concentration.
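
The article does not describe how the wicking distance is converted into a fibrinogen reading, so the sketch below (Python) is a generic calibration example: fit a line to distance versus log concentration for known standards, then invert it for an unknown sample. The calibration points and the log-linear form are assumptions for illustration, not values from the Monash study.

```python
import numpy as np

# Hypothetical calibration data (not from the Monash study): wicking distance
# in mm for known fibrinogen concentrations in g/L. Assumes distance varies
# roughly linearly with log10(concentration), purely for illustration.
conc_g_per_l = np.array([0.5, 1.0, 2.0, 4.0])
distance_mm  = np.array([42.0, 33.0, 24.0, 15.0])

slope, intercept = np.polyfit(np.log10(conc_g_per_l), distance_mm, 1)

def estimate_fibrinogen(measured_distance_mm):
    """Invert the fitted line: wicking distance -> estimated concentration (g/L)."""
    return 10 ** ((measured_distance_mm - intercept) / slope)

print(f"Estimated fibrinogen: {estimate_fibrinogen(28.0):.2f} g/L")  # unknown sample
```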

The diagnostic can work with a variety of blood conditions. Furthermore, diluting blood samples not only increases the test's sensitivity, but also eliminates the effect of interfering substances in the blood.

Hypofibrinogenemia (insufficient fibrinogen to enable effective clotting) in critical bleeding is common. More than 20 per cent of major trauma patients have hypofibrinogenemia.

Dr Clare Manderson, Research Fellow in the Monash Department of Chemical Engineering and co-author of the study, said the early diagnosis of hypofibrinogenemia could stop bleeding in these patients and save their lives.

"The development of the world's first handheld fibrinogen diagnostic is a game changer for the millions of people who die each year from critical blood loss. It will also ease pressure on emergency departments knowing that this life-saving treatment can be delivered on site and in quick time," Dr Manderson said.

"Our capacity to develop this diagnostic using cheap and readily available materials means it can be easily commercialised for use across the world."

Credit: 
Monash University

Scientists discover new structures in the smallest ice cube

image: Scientists discovered new structures in the smallest ice cube.

Image: 
LI Gang and LI Qinming

The freezing of water is one of the most common processes. However, understanding the microstructure of ice and its hydrogen-bonding networks has been a challenge.

The low-energy structure of the water octamer is predicted to be nominally cubic, with eight tri-coordinated water molecules at the eight corners of the cube. Such tri-coordinated water molecules have been identified at the surface of ice.

Only a few gas-phase studies have experimentally characterized the water octamer, finding two nearly isoenergetic structures with D2d and S4 symmetry.

This understanding has now changed. A research team led by Prof. JIANG Ling and Prof. YANG Xueming from the Dalian Institute of Chemical Physics (DICP) of the Chinese Academy of Sciences, in collaboration with Prof. LI Jun from Tsinghua University, revealed the coexistence of five cubic isomers in the smallest ice cube, including two with chirality.

The study was published in Nature Communications on October 28.

Prof. JIANG and Prof. YANG developed a method of infrared spectroscopy of neutral clusters based on a tunable vacuum ultraviolet free electron laser (VUV-FEL). This method created a new paradigm for the study of vibrational spectra of a wide variety of neutral clusters that could not be studied before.

"We measured infrared spectra of size-selected neutral water octamer using the VUV-FEL-based infrared scheme," said Prof. JIANG.

"We observed the distinct features in the spectra, and identified additional cubic isomers with C2 and Ci symmetry, which coexisted with the global-minimum D2d and S4 isomers at finite temperature of the experiment," said Prof. YANG.

Prof. LI's team conducted quantum chemical studies to understand the electronic structure of the water octamer. They found that the relative energies of these structures reflect topology-dependent, delocalized multi-center hydrogen-bonding interactions.

The study demonstrated that even with a common structural motif, the degree of cooperativity among the hydrogen-bonding network created a hierarchy of distinct species. It provided crucial information for a fundamental understanding of the formation processes of clouds, aerosols, and ice, especially under rapid cooling.

Their findings provide a benchmark for accurate description of the water intermolecular potentials to understand the macroscopic properties of water, and stimulate further study of intermediate-ice structures formed in the crystallization process of ice.

Credit: 
Chinese Academy of Sciences Headquarters

Chemical scissors snip 2D transition metal dichalcogenides into nanoribbons

image: Schematic view of scissoring 2D sheets into nanoribbons.

Image: 
KAIST

One of the biggest challenges in making hydrogen production clean and cheap has been finding an alternative catalyst necessary for the chemical reaction that produces the gas, one that is much cheaper and abundant than the very expensive and rare platinum that is currently used. Researchers in Korea have now found a way to 'snip' into tiny nanoribbons a cheap and plentiful substance that fits the bill, boosting its catalytic efficiency to at least that of platinum.

Researchers have identified a potential alternative catalyst - and an innovative way to produce it using chemical 'scissors' - that could make hydrogen production more economical.

The research team led by Professor Sang Ouk Kim at the Department of Materials Science and Engineering published their work in Nature Communications.

Hydrogen is likely to play a key role in the clean transition away from fossil fuels and other processes that produce greenhouse gas emissions. There is a raft of transportation sectors such as long-haul shipping and aviation that are difficult to electrify and so will require cleanly produced hydrogen as a fuel or as a feedstock for other carbon-neutral synthetic fuels. Likewise, fertilizer production and the steel sector are unlikely to be "de-carbonized" without cheap and clean hydrogen.

The problem is that the cheapest method by far of producing hydrogen gas is currently from natural gas, a process that itself produces the greenhouse gas carbon dioxide - which defeats the purpose.

Alternative techniques of hydrogen production are well established, such as electrolysis, which passes an electric current between two electrodes plunged into water to overcome the chemical bonds holding water together, splitting it into its constituent elements, oxygen and hydrogen. But one of the factors contributing to the high cost, beyond being extremely energy-intensive, is the need for the very expensive and relatively rare precious metal platinum. The platinum is used as a catalyst - a substance that kicks off or speeds up a chemical reaction - in the hydrogen production process.

As a result, researchers have long been on the hunt for a substitute for platinum -- another catalyst that is abundant in the earth and thus much cheaper.

Transition metal dichalcogenides, or TMDs, in a nanomaterial form, have for some time been considered a good candidate as a catalyst replacement for platinum. These are substances composed of one atom of a transition metal (the elements in the middle part of the periodic table) and two atoms of a chalcogen element (the elements in the third-to-last column in the periodic table, specifically sulfur, selenium and tellurium).

What makes TMDs a good bet as a platinum replacement is not just that they are much more abundant, but also their electrons are structured in a way that gives the electrodes a boost.

In addition, a TMD that is a nanomaterial is essentially a two-dimensional super-thin sheet only a few atoms thick, just like graphene. The ultrathin nature of a 2-D TMD nanosheet allows for a great many more TMD molecules to be exposed during the catalysis process than would be the case in a block of the stuff, thus kicking off and speeding up the hydrogen-making chemical reaction that much more.

However, even here the TMD molecules are only reactive at the four edges of a nanosheet. In the flat interior, not much is going on. In order to increase the chemical reaction rate in the production of hydrogen, the nanosheet would need to be cut into very thin, almost one-dimensional strips, thereby creating many more edges.

In response, the research team developed what are in essence a pair of chemical scissors that can snip TMD into tiny strips.

"Up to now, the only substances that anyone has been able to turn into these 'nano-ribbons' are graphene and phosphorene," said Sang Professor Kim, one of the researchers involved in devising the process.

"But they're both made up of just one element, so it's pretty straightforward. Figuring out how to do it for TMD, which is made of two elements was going to be much harder."

The 'scissors' involve a two-step process: first, lithium ions are inserted into the layered structure of the TMD sheets, and then ultrasound is used to cause a spontaneous 'unzipping' in straight lines.

"It works sort of like how when you split a plank of plywood: it breaks easily in one direction along the grain," Professor Kim continued. "It's actually really simple."

The researchers then tried it with various types of TMDs, including those made of molybdenum, selenium, sulfur, tellurium and tungsten. All worked just as well, with catalytic efficiency matching that of platinum.

Because of the simplicity of the procedure, this method could be used not just for the large-scale production of TMD nanoribbons, but also to make similar nanoribbons from other multi-elemental 2D materials for purposes beyond hydrogen production.

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Advanced facade material for urban heat island mitigation

image: Surface appearance and structure of RR plate.

Image: 
COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY. ALL RIGHTS RESERVED.

Overview:

A joint research team led by Asst. Prof. Jihui Yuan at the Dept. of Architecture and Civil Eng. of Toyohashi University of Technology, in collaboration with Osaka City University, has proposed two analytical models to evaluate the reflection directional characteristics of retro-reflective (RR) materials applied to building envelopes for urban heat island (UHI) mitigation, based on the measured data of optical experiments. It was shown that the prediction result of the anisotropic body of rotation of the normal distribution function (AND) model is more accurate than that of the original analytical model.

Details:

Currently, countermeasures for UHI mitigation have been implemented widely. It has also been reported that the solar reflectivity of a building's exterior wall surfaces and pavement is an important index affecting the air-conditioning loads of buildings, which are directly related to their energy use. Rooftops covered with diffuse highly reflective (DHR) materials (i.e., highly reflective paints) can reflect solar radiation to the sky if there are no higher buildings around them. However, if there are taller buildings nearby, much radiation can be reflected onto neighboring buildings and roads. Consequently, this radiation is absorbed by the neighboring buildings and roads, aggravating the UHI phenomenon. To solve this problem in applying DHR materials to building facades, RR materials have been recommended to replace DHR materials to mitigate the UHI phenomenon and reduce the energy consumption of buildings.

However, RR materials are still in the research stage and have not been used in practice. Before applying RR materials to a building envelope, predicting their reflection directional characteristics has become one of the most urgent issues. Regarding prediction methods, optical experiments have mainly been adopted to evaluate the reflection directional characteristics of RR materials; however, there is as yet almost no accurate model-based prediction method.

Thus, the joint research team proposed two analytical models to evaluate the reflection directional characteristics of three RR samples produced by the team, comparing the prediction results of the two analytical models to those of the optical experiment. The results of this research were published in the Elsevier journal "Energy & Buildings" on September 15, 2020.

This research mainly consists of three parts: the production of the RR samples, the optical measurement of the samples, and the proposal of analytical models based on the optical measurement data.

In this study, two analytical models are introduced to evaluate the reflection directional characteristics of RR materials. To identify an appropriate model, the angular distributions of reflection intensity predicted by the two analytical models are compared for three types of RR plates. Applying the AND model to evaluate the angular distribution of reflection intensity of RR materials achieved more favorable results than the original analytical model.
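As a rough illustration of this kind of model-versus-measurement comparison, the Python sketch below fits a simple normal-distribution (Gaussian) angular intensity profile, peaked at the retro-reflection direction, to hypothetical measured data and scores two candidate models by their root-mean-square error. The AND and original models from the paper are not reproduced here; the profile, angles and intensities are assumptions for illustration only.

```python
# Illustrative only: a toy comparison of modeled vs. measured angular
# reflection-intensity distributions for an RR plate. A Gaussian profile
# centred on the retro-reflection direction stands in for an analytical
# model; the "measured" data are hypothetical.
import numpy as np

def gaussian_model(theta_deg, theta_incident_deg, peak, sigma_deg):
    """Reflection intensity vs. reflection angle, peaked at the
    retro-reflection direction (back towards the incident angle)."""
    return peak * np.exp(-0.5 * ((theta_deg - theta_incident_deg) / sigma_deg) ** 2)

def rmse(predicted, measured):
    """Root-mean-square error between model prediction and measurement."""
    return float(np.sqrt(np.mean((predicted - measured) ** 2)))

# Hypothetical measurement: reflection angles (degrees) and intensities
# for light incident at 30 degrees on an RR plate.
angles = np.linspace(-80, 80, 33)
measured = gaussian_model(angles, 30.0, peak=1.0, sigma_deg=12.0)
measured += np.random.default_rng(0).normal(0.0, 0.02, angles.size)

# Two candidate models with different assumed spreads; the one with the
# lower RMSE reproduces the measured angular distribution more closely.
model_a = gaussian_model(angles, 30.0, peak=1.0, sigma_deg=12.0)
model_b = gaussian_model(angles, 30.0, peak=1.0, sigma_deg=25.0)
print("RMSE model A:", rmse(model_a, measured))
print("RMSE model B:", rmse(model_b, measured))
```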

Future Outlook:

Future work should aim to improve the evaluation accuracy of the analytical models, focusing on other approaches to evaluating the reflection directional characteristics of RR materials, such as an artificial neural network (ANN) methodology. To evaluate the three-dimensional reflective directional performance of RR materials, forthcoming work will also aim to develop a three-dimensional model and the three-dimensional optical systems needed to continue this research.

Credit: 
Toyohashi University of Technology (TUT)

A new method to measure optical absorption in semiconductor crystals

image: (a) A scheme of temperature-variable ODPL spectroscopy. The ODPL and SPL spectra, as well as r (ODPL intensity divided by SPL intensity), measured at (b) T = 300 K and (c) T = 12 K.

Image: 
Kazunobu Kojima

Tohoku University researchers have revealed more details about omnidirectional photoluminescence (ODPL) spectroscopy - a method for probing semiconducting crystals with light to detect defects and impurities.

"Our findings confirm the accuracy of ODPL measurements and show the possibility to measure optical absorption of crystals by the ODPL method, making the process much easier," says Tohoku University materials scientist Kazunobu Kojima.

Huge strides have been made in the development of highly efficient electronic and optical devices -- for example ultraviolet, blue and white light-emitting diodes (LEDs) and high-frequency transistors -- that use nitride semiconductors, specifically aluminum gallium nitride (AlGaN), indium gallium nitride (InGaN) and gallium nitride (GaN).

GaN is a suitable material for power devices on account of its large bandgap energy, high breakdown field and high saturation electron velocity.

There is a strong need for manufacturers to be able to detect crystal defects and test their efficiency. Within such high-quality crystals, the concentration of nonradiative recombination centers (NRCs) serves as a good predictor of crystal quality.

Annihilation spectroscopy, deep-level transient spectroscopy and photoluminescence (PL) spectroscopy are among the techniques used to detect the point defects that are the source of NRCs. PL spectroscopy is attractive because it requires no electrodes or contacts.

First proposed by Kojima and his research team in 2016, ODPL is a novel form of PL spectroscopy that measures PL intensity with an integrating sphere to quantify the quantum efficiency of radiation in sample semiconductor crystals. It is non-contact, non-destructive and well suited to the large GaN wafers used for room-lighting LEDs and for transistors in electric vehicles. Yet the origin of the two-peak structure that forms in ODPL spectra had remained elusive until now.

Kojima and his team combined ODPL and standard PL (SPL) spectroscopy experiments on a GaN crystal at various temperatures (T) between 12 K and 300 K. The intensity ratio (r) of the ODPL spectra to the SPL spectra for the near-band-edge (NBE) emission of GaN showed a linearly decreasing slope for photon energies (E) below a fundamental absorption edge energy (Eabs). The slope obtained in r corresponded to the so-called Urbach-Martienssen (U-M) absorption tail, which is observed in many semiconductor crystals.
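The release does not give the team's fitting procedure, but the standard Urbach-Martienssen tail is exponential in photon energy, so a minimal sketch of extracting an Urbach energy from the ratio r(E) might look like the following. The spectra, the edge energy and the Urbach energy used here are hypothetical values chosen only to show the idea.

```python
# Illustrative only: estimating an Urbach energy from the exponential slope
# of the ODPL/SPL intensity ratio r(E) below the absorption edge, assuming
# the standard Urbach form r ~ exp((E - Eabs)/E_U) in the tail, so that
# ln(r) varies linearly with photon energy E.
import numpy as np

def urbach_energy(photon_energy_eV, ratio_r, e_abs_eV):
    """Fit ln(r) vs E over the sub-edge region and return E_U in eV."""
    mask = photon_energy_eV < e_abs_eV           # keep only the tail region
    slope, _ = np.polyfit(photon_energy_eV[mask], np.log(ratio_r[mask]), 1)
    return 1.0 / abs(slope)                      # E_U is the inverse slope magnitude

# Hypothetical spectra: photon energies around the GaN band edge (~3.4 eV).
E = np.linspace(3.30, 3.45, 60)
E_abs, E_U_true = 3.41, 0.010                    # assumed edge and Urbach energy (eV)
r = np.where(E < E_abs, np.exp((E - E_abs) / E_U_true), 1.0)

print(f"Estimated Urbach energy: {urbach_energy(E, r, E_abs) * 1000:.1f} meV")
```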

The two-peak structure in the ODPL spectra around the NBE emission of the GaN crystal therefore originates from the U-M tail.

Credit: 
Tohoku University

How Twitter takes votes away from Trump but not from Republicans

image: Carlo Schwarz (Bocconi University)

Image: 
Paolo Tonato

A popular narrative holds that social media network Twitter influenced the outcome of the 2016 presidential elections by helping Republican candidate Donald Trump spread partisan content and misinformation. In a recent interview with CBS News, Trump himself stated he "would not be here without social media."

A new study by Carlo Schwarz (Bocconi University) with Thomas Fujiwara and Karsten Müller (both Princeton University) casts doubt on this hypothesis by comparing electoral results in American counties with similar characteristics but differences in Twitter usage in the run-up to the 2016 presidential, House, and Senate elections. Their conclusion is that Twitter disadvantaged Donald Trump, by making independent voters less likely to vote for him.

"We estimate that doubling a county's number of Twitter users would have lowered the vote share of Trump by roughly 2 percentage points," Prof. Schwarz says. "On the other hand, we find no effect of Twitter on the 2016 House and Senate elections. Although voters had the choice to vote for Trump and other Republicans on the same day, Twitter only affected who they wanted for President."

Survey data on individuals' voting decisions also suggest that Twitter usage has no significant effect on voters with strong Democratic or Republican views, but it can persuade centrist voters to stay away from more extreme candidates. In 2016, the effect was strongest on independent voters, with little evidence that those feeling strongly aligned with the Republican or Democratic party were persuaded. Similarly, Twitter likely affected voting decisions in swing counties more than in counties with a consistent streak of Republican or Democratic wins.

Importantly, Twitter usage is not associated with a uniform shift away from Republican candidates. Instead, it is linked to a pronounced shift towards higher approval of Hillary Clinton at the expense of Donald Trump, especially among independents, those who are most likely to be persuaded by social media content.

The findings are consistent with the idea of a predominantly liberal atmosphere on Twitter. For example, people using Twitter are disproportionately likely to be young, well-educated liberals living in urban areas, while Trump's broadest support came from older whites without a college education in rural areas, who are among the least likely to actively use social media. As the authors document, Democratic politicians are considerably more popular on Twitter than Republicans and, in 2016, three quarters of the tweets mentioning Trump were sent by users who most likely opposed his politics.

Prof. Schwarz concludes: "While our study does not speak to the effect of social media platforms other than Twitter, such as Facebook, or to the potential role of foreign government interventions or misinformation, our findings suggest that social media may indeed be able to affect the outcomes of elections."

Credit: 
Bocconi University

Mouse studies link some autism to brain cells that guide sociability and platonic love

image: Oxytocin neurons in a mouse brain.

Image: 
Gül Dölen, M.D., Ph.D.

Johns Hopkins Medicine researchers report that new experiments with genetically engineered mice have found clear connections between a range of autism types and abnormalities in brain cells whose chemical output forges platonic (non-sexual) feelings of love and sociability.

The findings, the researchers say, could eventually fuel the development of autism therapies that target disease symptoms spurred on by abnormalities in parvocellular oxytocin neurons, which are brain cells in the hypothalamus of mammals.

A report on the experiments was published online Oct. 27 in Neuron.

The investigators pursued evidence of the connections because of long-known variations in the forms and symptoms of autism spectrum disorders, and because Fragile X syndrome -- an inherited disorder that occurs in one in 4,000 males and one in 6,000 females -- is frequently characterized by an inability to form close social bonds.

"Autism is defined by impaired social behaviors, but not all social behaviors are the same," says Gül Dölen, M.D., Ph.D., associate professor of neuroscience at the Johns Hopkins University School of Medicine. "People with autism generally have less difficulty with developing very close, family bonds than with friendships. Our experiments provide evidence that these two types of affection are encoded by different types of oxytocin neurons, and that disruption of one of these types of neurons is responsible for the characteristic social impairments seen in autism."

For more than a century, Dölen says, scientists have known there are two types of neurons in the hypothalamus. The neurons release the so-called "love hormone" oxytocin, which induces contractions during childbirth, reduces stress and fosters bonding among animals across mammalian species, including humans.

A magnocellular oxytocin neuron, which is one type of oxytocin-releasing neuron, releases huge quantities of oxytocin to the brain and body -- 500 times or more than is released by parvocellular oxytocin neurons, which limit their scope and avoid flooding the body with all-consuming feelings of love.

As their name suggests, magnocellular oxytocin neurons are larger than other neurons and can send their arm-like axons beyond the blood-brain barrier. Among their functions, magnocellular oxytocin neurons stir filial love -- what Dölen calls "mad love" -- and bonding between infants and mothers, and between sexual partners.

Dölen's research shows that parvocellular oxytocin neurons -- whose name comes from the Latin word for "small" -- also encode social behaviors, but of a different kind than those the magnocellular neurons encode. While magnocellular oxytocin neurons encode social behaviors related to reproduction (pair bonding and parental bonding), parvocellular oxytocin neurons encode social behaviors related to what Dölen calls "love in moderation," or the platonic love that is important to communities (friends and colleagues).

To study whether and how autism symptoms are associated with disruptions in magnocellular or parvocellular neurons, or both, Dölen and her team first genetically engineered mice so that all oxytocin neurons, magnocellular and parvocellular alike, glowed with a fluorescent marker. Then, knowing that magnocellular neurons project their axons and chemical output beyond the blood-brain barrier, the research team used dyes that stay within the barrier to mark only the parvocellular neurons, which are rarer, smaller and harder to detect.

Next, Dölen enlisted the help of Johns Hopkins scientist Loyal Goff, Ph.D., an expert in charting the genetic profile of individual cells. The technique, called single cell sequencing, specifically reads an individual cell's RNA -- a genetic cousin to DNA -- which indicates how the cell's genetic code is being read and which proteins are being produced. The way our genetic code is read makes one cell type different from another.

"This study is a comprehensive characterization of two types of closely-related neurons involved in the regulation of social behavior," says Goff, assistant professor of genetic medicine at the Johns Hopkins University School of Medicine. "One of the things that makes this study so unique is the multi-modal aspect of this characterization; relating anatomical, morphological, electrophysiological, transcriptional, genetic, and behavioral features to fully define the relevant and important differences between these two types of neurons."

The research team used single cell sequencing and other gene-tracking tools and techniques to ensure that the subpopulations of magnocellular and parvocellular neurons were, indeed, distinct, so that they could genetically alter each group to determine if a change would induce autism-like behaviors in mice. What the researchers measured included how much the mice liked their social interactions and how much they preferred things associated with those social interactions (such as bedding).

To re-create a model of autism in mice, the scientists turned to the FMR1 gene, which is linked to Fragile X -- an inherited disorder characterized by intellectual disability and also one of the most commonly identified causes of autism, occurring in about five percent of people with the condition.

In humans, the FMR1 gene is silenced through a cellular process that adds chemicals called methyl groups to the gene. This same process does not occur in mice, so to replicate the FMR1 gene abnormality, the scientists genetically engineered the mice to have no functioning FMR1 gene either throughout the brain or only in parvocellular neurons.

The researchers studied how mice without FMR1 valued the rewards from forming a social bond with an adult female mouse serving as a surrogate parent. These mice learned to like bedding associated with the surrogate parent, but not bedding associated with social interactions with peer mice -- evidence that mutations in genes that cause autism selectively disrupt platonic love, but spare filial love.

When the scientists deleted the FMR1 gene in parvocellular cells only, not magnocellular cells, the mice reacted the same way: their affinity for things associated with the surrogate parent remained intact, while their affinity for things associated with peer mice did not. The scientists found no such selective preference in mice lacking FMR1 only in magnocellular oxytocin cells.

In a further set of experiments to pin down the specificity of their findings with the oxytocin-producing neurons, the scientists studied how certain genes linked to risk for autism were turned on or off, or expressed, among the two types of oxytocin neurons. They found that significantly more autism risk genes had higher expression levels in parvocellular neurons compared with magnocellular neurons. However, when the scientists looked at genes for schizophrenia, Alzheimer's disease and diabetes, there were no such differences in gene expression between the two oxytocin neuron types.
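As a purely illustrative sketch of this kind of gene-set comparison, the Python snippet below tests, gene by gene, whether expression is higher in parvocellular than in magnocellular cells and reports the fraction of significant genes in a "risk" set versus a control set. The expression values and gene sets are synthetic; the authors' actual single-cell analysis is not reproduced here.

```python
# Illustrative only: asking whether genes in a risk set are more highly
# expressed in parvocellular than in magnocellular oxytocin neurons, using
# synthetic expression matrices and hypothetical gene sets.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
n_parvo, n_magno, n_genes = 80, 200, 500
genes = [f"gene_{i}" for i in range(n_genes)]

# Synthetic log-expression: a subset of "risk" genes is shifted upward in parvo cells.
risk_genes = set(genes[:50])
parvo = rng.normal(1.0, 0.5, (n_parvo, n_genes))
magno = rng.normal(1.0, 0.5, (n_magno, n_genes))
for i, g in enumerate(genes):
    if g in risk_genes:
        parvo[:, i] += 0.6          # assumed enrichment in parvocellular cells

def frac_higher_in_parvo(gene_set, alpha=0.05):
    """Fraction of genes in the set expressed significantly higher in parvo cells."""
    hits = 0
    for i, g in enumerate(genes):
        if g not in gene_set:
            continue
        _, p = mannwhitneyu(parvo[:, i], magno[:, i], alternative="greater")
        if p < alpha:
            hits += 1
    return hits / len(gene_set)

control_genes = set(genes[50:100])   # e.g. genes linked to an unrelated disease
print("risk set   :", frac_higher_in_parvo(risk_genes))
print("control set:", frac_higher_in_parvo(control_genes))
```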

"This tells us that the difference we are seeing between the two types of oxytocin neurons relates to the disease that is characterized by impaired social behaviors, but not diseases where this behavior is not a defining symptom," says Dölen.

She also notes, "What may be happening in the brain is that even though all brain cells may carry a particular mutation associated with autism, some neurons are more vulnerable to the symptoms related to social bonding."

Dölen plans to conduct similar studies on genes associated with other types of autism. She says her work may indicate that drugs currently being tested for autism -- such as intranasal oxytocin -- could prove ineffective because these treatments target magnocellular neurons, which the new study indicates are not central to the disease. Instead, she says, the evidence suggests that parvocellular oxytocin neurons should be the focus of drug development for autism.

Credit: 
Johns Hopkins Medicine