Tech

Warming, acidic oceans may nearly eliminate coral reef habitats by 2100

SAN DIEGO--Rising sea surface temperatures and acidic waters could eliminate nearly all existing coral reef habitats by 2100, suggesting restoration projects in these areas will likely meet serious challenges, according to new research presented here today at the Ocean Sciences Meeting 2020.

Scientists project 70 to 90 percent of coral reefs will disappear over the next 20 years as a result of climate change and pollution. Some groups are attempting to curb this decline by transplanting live corals grown in a lab to dying reefs. They propose that these new, young corals will boost the reefs' recovery and bring them back to a healthy state.

But new research mapping where such restoration efforts would be most successful over the coming decades finds that by 2100, few to zero suitable coral habitats will remain. The preliminary findings suggest sea surface temperature and acidity are the most important factors in determining if a site is suitable for restoration.

"By 2100, it's looking quite grim," said Renee Setter, a biogeographer at the University of Hawaii Manoa who will present the new findings.

The results highlight some of the devastating impacts Earth's warming climate will have on marine life, according to the researchers. Although pollution poses numerous threats to ocean creatures, the new research suggests corals are most at risk from emission-driven changes in their environment.

"Trying to clean up the beaches is great and trying to combat pollution is fantastic. We need to continue those efforts," Setter said. "But at the end of the day, fighting climate change is really what we need to be advocating for in order to protect corals and avoid compounded stressors."

Projecting the future of coral reefs

Coral reefs around the globe face uncertain futures as ocean temperatures continue to climb. Warmer waters stress corals, causing them to release symbiotic algae living inside them. This turns typically vibrant-colored communities of corals white, a process called bleaching. Bleached corals are not dead, but they are at higher risk of dying, and these bleaching events are becoming more common under climate change.

In the new study, Setter and her colleagues mapped what areas of the ocean would be suitable for coral restoration efforts over the coming decades. The researchers simulated ocean environment conditions like sea surface temperature, wave energy, acidity of the water, pollution, and overfishing in areas where corals now exist. To factor in pollution and overfishing, the researchers considered human population density and land cover use to project how much waste would be released into the surrounding waters.
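
As a rough illustration of the kind of suitability mapping described above, the sketch below flags ocean grid cells as suitable only if projected sea surface temperature and pH stay within coral tolerances. The thresholds and sample values are hypothetical placeholders, not the study's calibrated parameters, and the real model also weighed wave energy, pollution and fishing pressure.

```python
import numpy as np

# Illustrative tolerance limits for reef-building corals (assumed values,
# not those used in the study).
SST_MAX = 30.0   # maximum tolerable sea surface temperature, deg C
PH_MIN = 7.9     # minimum tolerable ocean pH

def suitable(sst, ph):
    """Mark grid cells habitable only where both conditions hold."""
    return (sst <= SST_MAX) & (ph >= PH_MIN)

# Toy 2x2 grid of projected 2100 conditions.
sst_2100 = np.array([[29.1, 30.6],
                     [31.2, 29.8]])
ph_2100 = np.array([[8.0, 7.8],
                    [7.9, 8.0]])

print(suitable(sst_2100, ph_2100))  # only cells within both limits remain
```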

The researchers found most parts of the ocean where coral reefs exist today won't be suitable habitats for corals by 2045, and the situation worsened as the simulation extended to 2100.

"Honestly, most sites are out," Setter said. The few sites that are viable by 2100 included only small portions of Baja California and the Red Sea, which are not ideal locations for coral reefs because of their proximity to rivers.

Rising temperatures and ocean acidification are mostly to blame for diminishing coral habitats, according to the researchers. Projected increases in human pollution have only a minor contribution to the future elimination of reef habitat, because humans have already caused such extensive damage to coral reefs that there aren't many locations left to impact, Setter said.

Credit: 
American Geophysical Union

Road salt harmful to native amphibians, new research shows

image: George Meindl, visiting assistant professor of environmental studies at Binghamton University, introduces frog eggs into the treatment solutions.

Image: 
George Meindl

BINGHAMTON, N.Y. - The combined effects of chemical contamination by road salt and invasive species can harm native amphibians, according to researchers at Binghamton University, State University of New York. 

During the winter, Binghamton and similar areas throughout the United States use a lot of salt to clear icy roads, but what effect does it have on wildlife? George Meindl, visiting assistant professor of environmental studies at Binghamton University, worked with a team of undergraduate students in his plant ecology course to examine how water chemistry changes due to invasive plant leaf litter leachates and road salt, and how it influences the development and survival of the native Northern leopard frog, Lithobates pipiens, and the non-native African clawed frog, Xenopus laevis.

They discovered that the non-native amphibians were more tolerant to chemical changes than the native amphibians, suggesting that the non-native amphibian species might have a competitive advantage when introduced into a new, disturbed environment.

Recently, there has been widespread chemical alteration of natural environments due to human activity, especially with the use of road salt. While road salt is commonly used to keep roads safe during the winter months, it also has negative effects on the surrounding environment and animals, according to Meindl's research. Aquatic ecosystems, where frogs tend to reside, are especially susceptible to chemical changes, including those caused by road salt runoff and invasive plant species. 

"People are changing natural environments in many ways, so it is important that we understand how these changes affect wild populations of plants and animals, in order to better protect them," Meindl said. "Using natural areas on campus, my students and I can ask how people are altering natural ecosystems, and then think of better management strategies to make sure these places aren't completely degraded."

To test the environmental effects, researchers exposed the frog eggs to metal treatment solutions (i.e., calcium, potassium and manganese), which mimicked documented differences between native and invasive wetland plant species' leaf tissues. Researchers first measured the amount of time it took for the eggs to hatch in solutions representing native and invasive plant leachates, and then they exposed the tadpoles to a lethal concentration of sodium chloride and recorded tadpole survival. 

Essentially, the researchers determined that increased metal concentrations made the non-native tadpoles less susceptible to salt. In the native tadpoles, however, increased metal concentrations raised susceptibility to salt, accelerating time to death. 

Meindl and his students chose to focus on the native Northern leopard frog and the non-native African clawed frog for this research to determine whether the salt tolerance of native and non-native animals was differently affected by environmental changes caused by invasive plants. 

"In addition to studying how chemical contamination can affect amphibians generally, we also wanted to know if native vs. non-native amphibians would respond differently to these stressors," he said. "For example, if non-native species are less likely to be negatively impacted by chemical contamination compared to native species, then contaminants might actually encourage the spread of invasive species by giving them a competitive advantage over native species," Meindl said. "Perhaps not surprisingly, we found that the non-native African clawed frog was more tolerant to chemical changes compared to the Northern leopard frog, suggesting chemical contamination (e.g., due to road salts or invasive plant species) may facilitate future invasions by non-native species in aquatic ecosystems."

However, invasive species and road salts aren't the only factors causing negative environmental effects, Meindl said. He hopes that the results from this study will influence people to focus more on the safety of the environment and the steps they can take to improve it. 
 

"Invasive species and road salts are just some of many ways that people are modifying the chemistry of the environment, along with extraction and burning of fossil fuels, plastic pollution, disposal of pharmaceuticals, excessive fertilizer use, etc.," Meindl said. "A great challenge is understanding how all of these stressors affect the natural environment, and then using this information to guide policy development that protects our planet's natural resources. Studies like this will help to generate data that can guide more responsible resource use and behavior by people."

Credit: 
Binghamton University

New chip brings ultra-low power Wi-Fi connectivity to IoT devices

image: A set of ultra-low power Wi-Fi radios integrated in small chips, each measuring 1.5 square millimeters in area (grain of rice shown for scale).

Image: 
David Baillot/UC San Diego Jacobs School of Engineering

More portable, fully wireless smart home setups. Lower power wearables. Batteryless smart devices. These could all be made possible thanks to a new ultra-low power Wi-Fi radio developed by electrical engineers at the University of California San Diego.

The device, which is housed in a chip smaller than a grain of rice, enables Internet of Things (IoT) devices to communicate with existing Wi-Fi networks using 5,000 times less power than today's Wi-Fi radios. It consumes just 28 microwatts of power. And it does so while transmitting data at a rate of 2 megabits per second (a connection fast enough to stream music and most YouTube videos) over a range of up to 21 meters.
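
Those reported numbers imply an energy cost of roughly 14 picojoules per transmitted bit, and a multi-year lifetime on a coin cell. A quick back-of-envelope check (the CR2032 capacity is an assumption, not a figure from the article):

```python
p_active = 28e-6            # watts while transmitting (reported)
bitrate = 2e6               # bits per second (reported)
print(p_active / bitrate)   # ~1.4e-11 J/bit, i.e. about 14 pJ per bit

# Lifetime on a typical coin cell, assuming a CR2032 (~225 mAh at 3 V):
energy_j = 0.225 * 3 * 3600         # ~2,430 joules stored
seconds = energy_j / p_active       # ~8.7e7 s of continuous transmission
print(seconds / (365 * 24 * 3600))  # ~2.8 years, consistent with "years"
```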

The team will present their work at the ISSCC 2020 conference Feb. 16 to 20 in San Francisco.

"You can connect your phone, your smart devices, even small cameras or various sensors to this chip, and it can directly send data from these devices to a Wi-Fi access point near you. You don't need to buy anything else. And it could last for years on a single coin cell battery," said Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering.

Commercial Wi-Fi radios typically consume hundreds of milliwatts to connect IoT devices with Wi-Fi transceivers. As a result, Wi-Fi compatible devices need either large batteries, frequent recharging or other external power sources to run.

"This Wi-Fi radio is low enough power that we can now start thinking about new application spaces where you no longer need to plug IoT devices into the wall. This could unleash smaller, fully wireless IoT setups," said UC San Diego electrical and computer engineering professor Patrick Mercier, who co-led the work with Bharadia.

Think of a portable Google Home device that you can carry around the house and that lasts for years, rather than just hours, when unplugged.

"It could also allow you to connect devices that are not currently connected--things that cannot meet the power demands of current Wi-Fi radios, like a smoke alarm--and not have a huge burden on battery replacement," Mercier said.

The Wi-Fi radio runs on extremely low power by transmitting data via a technique called backscattering. It takes incoming Wi-Fi signals from a nearby device (like a smartphone) or Wi-Fi access point, modifies the signals and encodes its own data onto them, and then reflects the new signals onto a different Wi-Fi channel to another device or access point.
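
A minimal numpy sketch of the frequency-shifting idea at the heart of backscatter: toggling the antenna's reflection with a square wave mixes the incoming carrier to mirror tones offset by the toggle rate, landing the reflection on a different channel. All frequencies here are illustrative stand-ins, not the chip's actual design values.

```python
import numpy as np

fs = 100e6                      # simulation sample rate (illustrative)
f_in = 10e6                     # incoming carrier, a stand-in for a Wi-Fi channel
f_shift = 2e6                   # rate at which the backscatter switch toggles
t = np.arange(0, 1e-4, 1 / fs)  # 10,000 samples

carrier = np.cos(2 * np.pi * f_in * t)
# Toggling the antenna impedance multiplies the carrier by a square wave,
# creating mirror tones at f_in +/- f_shift -- effectively a new channel.
switch = np.sign(np.cos(2 * np.pi * f_shift * t))
reflected = carrier * switch

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(reflected))
print(freqs[spectrum.argmax()])  # peaks near 8 MHz and 12 MHz (f_in -/+ f_shift)
```

Data would then be encoded by modulating that switching waveform; the chip's contribution is doing all of this within a microwatt-scale power budget.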

This work builds on low-power Wi-Fi radio technology that Bharadia helped develop as a Ph.D. student at Stanford. In this project, he teamed up with Mercier to develop an even lower-power Wi-Fi radio. They accomplished this by building in a component called a wake-up receiver. This "wakes up" the Wi-Fi radio only when it needs to communicate with Wi-Fi signals, so it can stay in low-power sleep mode the rest of the time, during which it consumes only 3 microwatts of power.

The UC San Diego team's improvements to the technology also feature a custom integrated circuit for backscattering data, which makes the whole system smaller and more efficient, and thus enables their Wi-Fi radio to operate over a longer communication range (21 meters). This is a practical distance for operating in a smart home environment, the researchers said.

"Here, we demonstrate the first pragmatic chip design that can actually be deployed in a small, low-power device," Mercier said.

Credit: 
University of California - San Diego

University of Illinois researchers demonstrate new capability for cooling electronics

image: This is a diagram of air jet impingement cooling of electronic devices using additively manufactured nozzles.

Image: 
Bill King, Mechanical Science & Engineering, Grainger Engineering

For decades, researchers have considered the potential for cooling hot electronic devices by blowing on them with high-speed air jets. However, air jet cooling systems are not widely used today. Two of the biggest obstacles preventing the use of these systems are their complexity and weight. Air jet systems must be made of metal to handle the pressure associated with air jets whose speeds can exceed 200 miles per hour. And the air handling system can be complex, with many discrete components that manage the air flow and direct it onto the hot spots where cooling is required.
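
For a sense of the mechanical loads involved, the dynamic pressure of a 200 mile-per-hour jet follows from the standard formula q = ½ρv² (a back-of-envelope estimate, not a figure from the researchers):

```python
rho = 1.2              # air density at room conditions, kg/m^3
v = 200 * 0.44704      # 200 mph converted to m/s (~89 m/s)
q = 0.5 * rho * v**2   # dynamic pressure in pascals
print(f"{q:.0f} Pa")   # ~4,800 Pa; supply pressure behind the nozzles is higher still
```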

Now, researchers at the University of Illinois at Urbana-Champaign have demonstrated a new type of air jet cooler that overcomes previous barriers to jet cooling systems. Using additive manufacturing, the researchers created an air jet cooling system in a single component that can direct high-speed air onto multiple electronics hot spots. The researchers manufactured the cooling system from strong polymer materials that can withstand the harsh conditions associated with high-speed air jets.

"The design freedom of additive manufacturing allows us to create cooling solutions that have sizes and shapes not previously possible," said William King, Andersen Chair and Professor of Mechanical Science and Engineering. "This really opens up a new world of opportunities for thermal management."

The research focused on heat removal from high-power electronic devices. "The acute thermal management problems of high-power electronic devices appear in a host of applications, especially in modern data centers as well as electric vehicles including aircraft, automotive, and off-road vehicles," said Nenad Miljkovic, Associate Professor of Mechanical Science and Engineering at Illinois and co-author on the published research.

The applications of high-power electronic devices are growing rapidly--in electric cars, solar power systems, 5G communications, and high-power computing using graphics processing units (GPUs), to name a few. The electronic devices in these systems generate heat that must be removed for effective and reliable operation. In general, higher power results in higher performance. Unfortunately, higher power also makes it more difficult to remove the heat. New cooling technologies are required to support the growth of these electric systems.

Credit: 
University of Illinois Grainger College of Engineering

Study identifies risk factors for endometrial cancer

An analysis of 149 scientific studies has identified 24 genetic variants which predispose women to endometrial cancer.

The systematic review, led by Professor Emma Crosbie from The University of Manchester, could help scientists develop targeted screening and prevention strategies for women at greatest risk of the disease.

Although each genetic variant changes cancer risk by only a small amount, when all 24 variants are combined into a so-called polygenic risk score, women who score within the top 1% have a risk of endometrial cancer 3.16 times higher than the mean, the researchers say.
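
A polygenic risk score is, at its core, a weighted sum of risk-allele counts. The sketch below shows the mechanics with simulated effect sizes and genotypes; the numbers are placeholders, and the study's 3.16-fold figure comes from the real published effect estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_snps, n_women = 24, 10_000

# Simulated per-SNP effect sizes (log odds ratios) and risk-allele
# frequencies; real values would come from the association studies reviewed.
betas = rng.normal(0.05, 0.03, n_snps)
freqs = rng.uniform(0.1, 0.5, n_snps)

# Genotypes: 0, 1 or 2 copies of each risk allele per woman.
genotypes = rng.binomial(2, freqs, size=(n_women, n_snps))

# Polygenic risk score: weighted sum of risk-allele counts.
prs = genotypes @ betas

# Relative risk of the top 1% of scorers versus the cohort mean, assuming
# the score acts multiplicatively on disease odds.
top1 = prs >= np.quantile(prs, 0.99)
print(np.exp(prs[top1].mean() - prs.mean()))
```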

Published in the Journal of Medical Genetics, the study is the most comprehensive systematic review carried out to critically appraise the evidence on genetic variants implicated in predisposition to endometrial cancer.

The team was based at The University of Manchester, University of Cambridge and Manchester University NHS Foundation Trust and was funded by the NIHR Manchester Biomedical Research Centre.

The research is an important milestone in the study of endometrial cancer, the sixth most commonly occurring cancer in women and the 15th most commonly occurring cancer overall according to the World Cancer Research Fund.

In the UK there are about 8,600 new cases per year according to Cancer Research UK.

The team searched 2,674 abstracts, narrowing them down to 149 papers which were eligible for inclusion in the study.

Specifically, genetic variants in HNF1B, KLF, EIF2AK, CYP19A1, SOX4 and MYC were strongly associated with the cancer.

Nineteen variants were reported with genome-wide significance and a further five with suggestive significance.

They found no convincing evidence for the widely studied MDM2 variant rs2279744 as a risk factor.

"Because many of the studies carried out to date have been of variable quality, we felt it was important to understand more fully genetic predispositions to endometrial cancer," said Professor Crosbie.

"Our work, we hope, will facilitate personalized risk assessment so that prevention and screening could be targeted more efficiently."

She added: "These genetic variants linked to endometrial cancer risk are involved in cell survival, oestrogen metabolism and transcriptional control - when the information stored in our DNA is converted into instructions for making proteins or other molecules.

"We think studies with larger cohorts are needed to identify more variants with genome-wide significance.

"But until data from larger and more diverse cohorts are available, these twenty-four SNPs are the most robust common genetic variants that affect endometrial cancer risk."

Credit: 
University of Manchester

Mediterranean diet promotes gut bacteria linked to 'healthy ageing' in older people

Eating a Mediterranean diet for a year boosts the types of gut bacteria linked to 'healthy' ageing, while reducing those associated with harmful inflammation in older people, indicates a five-country study, published online in the journal Gut.

As ageing is associated with deteriorating bodily functions and increasing inflammation, both of which herald the onset of frailty, this diet might act on gut bacteria in such a way as to help curb the advance of physical frailty and cognitive decline in older age, suggest the researchers.

Previous research suggests that a poor/restrictive diet, which is common among older people, particularly those in long term residential care, reduces the range and types of bacteria (microbiome) found in the gut and helps to speed up the onset of frailty.

The researchers therefore wanted to see if a Mediterranean diet might maintain the microbiome in older people's guts, and promote the retention or even proliferation of bacteria associated with 'healthy' ageing.

They analysed the gut microbiome of 612 people aged 65 to 79, before and after 12 months of either eating their usual diet (n = 289) or a Mediterranean diet (n = 323), rich in fruits, vegetables, nuts, legumes, olive oil and fish and low in red meat and saturated fats, and specially tailored to older people (NU-AGE diet).

The participants, who were either frail (n=28), on the verge of frailty (n=151), or not frail (n=433) at the beginning of the study, lived in five different countries: France, Italy, the Netherlands, Poland, and the UK.

Sticking to the Mediterranean diet for 12 months was associated with beneficial changes to the gut microbiome.

It was associated with stemming the loss of bacterial diversity; an increase in the types of bacteria previously associated with several indicators of reduced frailty, such as walking speed and hand grip strength, and improved brain function, such as memory; and with reduced production of potentially harmful inflammatory chemicals.

More detailed analysis revealed that the microbiome changes were associated with an increase in bacteria known to produce beneficial short chain fatty acids and a decrease in bacteria involved in producing particular bile acids, overproduction of which is linked to a heightened risk of bowel cancer, insulin resistance, fatty liver and cell damage.

What's more, the bacteria that proliferated in response to the Mediterranean diet acted as 'keystone' species, meaning they were critical for a stable 'gut ecosystem,' pushing out those microbes associated with indicators of frailty.

The changes were largely driven by an increase in dietary fibre and associated vitamins and minerals--specifically, vitamins C, B6 and B9, and the minerals copper, potassium, iron, manganese, and magnesium.

The findings were independent of the person's age or weight (body mass index), both of which influence the make-up of the microbiome.

And while the make-up of participants' gut microbiomes differed somewhat by country of origin at the start of the study, the response to the Mediterranean diet after 12 months was similar and consistent, irrespective of nationality.

The study findings can't establish a causative role for the microbiome in health, added to which some of the implications are inferred rather than directly measured, say the researchers.

"The interplay of diet, microbiome and host health is a complex phenomenon influenced by several factors," they emphasise.

"While the results of this study shed light on some of the rules of this three-way interplay, several factors such as age, body mass index, disease status and initial dietary patterns may play a key role in determining the extent of success of these interactions," they explain.

Older people may have dental problems and/or difficulty swallowing, so it may be impractical for them to eat a Mediterranean diet, they add. But the beneficial bacteria implicated in healthy ageing found in this study might yet prove useful therapeutic agents to ward off frailty, they suggest.

Credit: 
BMJ Group

Twenty-four genetic variants linked to heightened womb cancer risk

Twenty-four common variations in genes coding for cell growth and death, the processing of oestrogen, and gene control factors may be linked to a heightened risk of developing womb (endometrial) cancer, indicates the most comprehensive review of the published evidence so far, published in the Journal of Medical Genetics.

Women with all or most of these variants may run two to three times the risk of developing the disease, which is the most common gynaecological cancer in the developed world, say the researchers.

In the US women have an estimated 1 in 35 lifetime risk of being diagnosed with womb cancer, and the death toll from the disease has risen by 2% every year since 2008.

Inherited susceptibility accounts for only a small proportion of cases (3-5%), with most cases thought to be spontaneous or related to lifestyle factors.

There are two types of womb cancer: type I, which is usually diagnosed at an early stage, and is treatable; and type II, which is aggressive, often diagnosed late, and has a poor outlook.

Trying to pinpoint genetic factors that may increase the risk is therefore important. Previous research has implicated variations in single DNA building blocks (nucleotides), known as single nucleotide polymorphisms, or SNPs for short.

SNPs are common, with around 4 to 5 million in an individual's complete genetic code (genome). These variations may be unique to that person or occur in many people.

Most SNPs don't affect health or development. But when they arise in a gene or in a regulatory region near one, they may affect that gene's function and boost the risk of disease.

Previous research has implicated SNPs in womb cancer risk, but the quality of these studies varies considerably and they were mostly carried out before the advent of genome-wide association studies.

The researchers therefore systematically reviewed relevant research, published between 2007 and 2018, to see if there was any association between SNPs and endometrial cancer risk and came up with 149 studies out of an initial 453.

They found that 24 common SNPs in or around six genes, coding for cell growth and death, the processing of oestrogen, and gene control factors (transcription control), were strongly associated with the development of womb cancer.

Nineteen of these genetic variants were statistically significant, while five were suggestive of disease risk. But no convincing evidence was found for the most widely studied (to date) genetic variant in womb cancer.

Women with most or all of these 24 genetic variants would be expected to be two to three times as likely to develop womb cancer as those without, the researchers calculate.

There are likely to be more SNPs involved in womb cancer development, they add: hundreds have been implicated in breast cancer, for example.

But these 24 could be used to come up with a personalised risk score for women most at risk of womb cancer, to boost early detection and/or preventive treatment efforts, they suggest.

Credit: 
BMJ Group

Our digital afterlife

Social media pages and accounts often turn into memorials when someone dies, giving people a chance to still feel connected to those they've lost. But after we're gone, who owns the information on our pages? Who can access them?

Faheem Hussain, a clinical assistant professor in the School for the Future of Innovation in Society (SFIS) at Arizona State University (ASU), will explore this topic in his discussion at the annual meeting of the American Association for the Advancement of Science (AAAS).

"It's certain we're going to be dead, so where's the design for that?" said Hussain. "There's a huge design disconnection."

The dilemma of what happens to your digital self after you're gone is something Hussain has seen first-hand. He's witnessed family and friends struggling to gain access to a loved one's social media page after that person passed away, and he's gotten Facebook friend suggestions for a person who had died.

"We have normalized talking about safety and security of our data and privacy, but we should also start including the conversation of how to manage data afterwards," said Hussain. "It's a bit tricky because it involves death and no one wants to talk about it."

Hussain has spent several years researching technology in society, including the digital afterlife, social media and digital rights. He's been documenting the changes companies have made in terms of managing the data of people who have died, along with digital afterlife provisions.

In his research, Hussain and his colleagues looked at digital afterlife policies, cases and user feedback, specifically in developing countries. They found that people in these countries are more vulnerable to the challenges associated with the digital afterlife, including privacy issues, digital ownership and legal frameworks. Hussain and his colleagues concluded that more needs to be done to narrow the gap in digital afterlife policies between developed and developing countries, to ensure solutions are inclusive and truly global.

Problems surrounding the digital afterlife are not going away anytime soon. A recent study found that Facebook could have nearly 5 billion dead users by 2100. Hussain will discuss the policies that need to be in place regarding digital products and platforms and what needs to be considered in their design.

"It's important for us to talk about the digital afterlife," said Hussain. "You need to manage what will happen when you are not here anymore."

In recent years, many digital platforms, including Facebook and Google, have been making changes when it comes to the death of users. Facebook will turn your page into a memorial, and you can appoint a legacy contact to look after your account. With Google, you can set up a trusted contact who will get access to parts of your account if it has been inactive for a period of time. But much of this is in the hands of the user, who has to configure these settings in preparation for death. Hussain said we need to talk about our concerns and communicate with the companies providing these digital services.

"I think it's important that we have a say in it," said Hussain.

Credit: 
Arizona State University

Facial expressions don't tell the whole story of emotion

Interacting with other people is almost always a game of reading cues and volleying back. We think a smile conveys happiness, so we offer a smile in return. We think a frown shows sadness, and maybe we attempt to cheer that person up.

Some businesses are even working on technology to determine customer satisfaction through facial expressions.

But facial expressions might not be reliable indicators of emotion, new research suggests. In fact, it might be more accurate to say we should never trust a person's face.

"The question we really asked is: 'Can we truly detect emotion from facial articulations?'" said Aleix Martinez, a professor of electrical and computer engineering at The Ohio State University.

"And the basic conclusion is, no, you can't."

Martinez, whose work has focused on building computer algorithms that analyze facial expressions, and his colleagues presented their findings today (Feb. 16, 2020) at the annual meeting of the American Association for the Advancement of Science in Seattle.

The researchers analyzed the kinetics of muscle movement in the human face and compared those muscle movements with a person's emotions. They found that attempts to detect or define emotions based on a person's facial expressions were almost always wrong.

"Everyone makes different facial expressions based on context and cultural background," Martinez said. "And it's important to realize that not everyone who smiles is happy. Not everyone who is happy smiles. I would even go to the extreme of saying most people who do not smile are not necessarily unhappy. And if you are happy for a whole day, you don't go walking down the street with a smile on your face. You're just happy."

It is also true, Martinez said, that sometimes people smile out of an obligation to social norms. This would not inherently be a problem, he said -- people are certainly entitled to put on a smile for the rest of the world -- but some companies have begun developing technology to recognize facial muscle movements and assign emotion or intent to those movements.

The research group that presented at AAAS analyzed some of those technologies and, Martinez said, largely found them lacking.

"Some claim they can detect whether someone is guilty of a crime or not, or whether a student is paying attention in class, or whether a customer is satisfied after a purchase," he said. "What our research showed is that those claims are complete baloney. There's no way you can determine those things. And worse, it can be dangerous."

The danger, Martinez said, lies in the possibility of missing the real emotion or intent in another person, and then making decisions about that person's future or abilities.

For example, consider a classroom environment, and a teacher who assumes that a student is not paying attention because of the expression on the student's face. The teacher might expect the student to smile and nod along if the student is paying attention. But maybe that student, for reasons the teacher doesn't understand -- cultural reasons, perhaps, or contextual ones -- is listening intently, but not smiling at all. It would be, Martinez argues, wrong for the teacher to dismiss that student because of the student's facial expressions.

After analyzing data about facial expressions and emotion, the research team -- which included scientists from Northeastern University, the California Institute of Technology and the University of Wisconsin -- concluded that it takes more than expressions to correctly detect emotion.

Facial color, for example, can help provide clues.

"What we showed is that when you experience emotion, your brain releases peptides -- mostly hormones -- that change the blood flow and blood composition, and because the face is inundated with these peptides, it changes color," Martinez said.

The human body offers other hints, too, he said: body posture, for example. And context plays a crucial role as well.

In one experiment, Martinez showed study participants a picture cropped to display just a man's face. The man's mouth is open in an apparent scream; his face is bright red.

"When people looked at it, they would think, wow, this guy is super annoyed, or really mad at something, that he's angry and shouting," Martinez said. "But when participants saw the whole image, they saw that it was a soccer player who was celebrating a goal."

In context, it's clear the man is very happy. But isolate his face, Martinez said, and he appears almost dangerous.

Cultural biases play a role, too.

"In the U.S., we tend to smile a lot," Martinez said. "We are just being friendly. But in other cultures, that means different things -- in some cultures, if you walked around the supermarket smiling at everyone, you might get smacked."

Martinez said the research group's findings could indicate that people -- from hiring managers to professors to criminal justice experts -- should consider more than just a facial expression when they evaluate another person.

And while Martinez said he is "a big believer" in developing computer algorithms that try to understand social cues and the intent of a person, he added that two things are important to know about that technology.

"One is you are never going to get 100 percent accuracy," he said. "And the second is that deciphering a person's intent goes beyond their facial expression, and it's important that people -- and the computer algorithms they create -- understand that."

Credit: 
Ohio State University

Factories reimagined

Factories of the future will look very different from those of today. As the fourth industrial revolution transforms manufacturing from mass production to mass customization, factory workers will increasingly need to apply new ICT to work remotely, collaborate with robots or use AI-based assistants, increasing their performance while further developing their creative, innovative and improvisational skills. Advanced technologies offer factory workers unprecedented opportunities to organize their jobs in a more autonomous way. Industrial work, jobs and skills are therefore being radically rethought.

For its 2020 conference in Seattle, AAAS invited researchers from Europe, along with NIST, to present their visions and findings on how future factories may provide both tempting new career options for skilled young people and concrete support to current workers in acquiring new skills. This discussion is timely. The European Union is committed to and invests in a thoroughly human-centric approach to AI. In addition, it will be important to pave the way for next-generation robots capable of smooth interactions with humans.

Upon taking office on December 1, European Commission President Ursula von der Leyen announced March 9 as the deadline to deliver a series of new policy plans on AI, climate change and a new Industrial Strategy in which research and innovation will take center stage. This follows the ambitious 'Green Deal' putting Europe on track to reach net-zero global warming emissions by 2050. Europe wants to be a front-runner in climate-friendly industries and clean technologies; future factory work will be highlighted in this session from the viewpoints of technology and business, psychology and sociology.

The session will include discussions on new skills required, the nature of work, future work's cognitive demands, labor processes and work organization, as well as their impact on workers' wellbeing. The session's presenters include European and US experts, with Dr. Eija Kaasinen from the VTT Technical Research Centre of Finland as chair. Research activities in Europe have been supported under the European Union's Research & Innovation Framework Program "Horizon 2020", under a focused cluster on Factories of the Future - Human-Centered Factories, contributing to upskilling the workforce and making it fit for the digital transformation.

K. C. Morris will provide examples of new skill sets factory workers need, many of which invite creativity and allow workers to avoid tedious tasks. Morris will explain how automation improves production efficiency. AI-based analysis of informal machine maintenance logs, for instance, helps identify previously hidden obstacles related to machine malfunctions. As a congressional fellow on Capitol Hill working with Congressman Tom Reed, Co-chair of the House Manufacturing Caucus, Morris will also outline Congress's role in helping to ensure that automated manufacturing becomes a reality in America. Large, strong, high-speed robots traditionally kept behind physical safeguards may soon be working more collaboratively with factory workers, thanks to advanced sensor technologies. The potential impacts of such collaborations on the psychological safety of workers are unknown. System designers currently have no available tools for evaluating such changes.

Professor Sarah Fletcher of Cranfield University in the UK will share research investigating the key psychological effects on humans working collaboratively with, or in close proximity to, large industrial robots. Fletcher's team is developing practical tools to enhance robot design and evaluating how these new designs impact human trust in large robots. These efforts are also helping companies evaluate their readiness for the introduction of collaborative systems. As collaborations between factory workers and robots increase, questions about how tasks should be organized in 'smart factories' remain largely unexplored. Who gets the profits of different material and immaterial outputs also remains an open question.

Dr. Anu-Hanna Anttila, a researcher from Finland, will address these questions, which are complicated by the fact that robots, unlike their human counterparts, make decisions based on data alone and in the absence of ethics. The resulting situations can take a toll on the psyche of factory workers, whose rights and needs must be brought to the forefront.

Credit: 
European Commission Joint Research Centre

Energized by enzymes -- nature's catalysts

image: Pacific Northwest National Laboratory chemist Joe Laureanti designed this artificial enzyme that converts carbon dioxide to formate, a kind of fuel.

Image: 
Joe Laureanti, Pacific Northwest National Laboratory

With millions of years to experiment, nature solved the problem of efficiently converting raw materials into usable energy. Scientists at Pacific Northwest National Laboratory are using nature's energy converters - enzymes - as a model to develop more efficient and less polluting energy sources. They are exploring the essential features of enzymes that allow them to convert abundant raw materials like carbon dioxide into usable fuel. Harnessing that knowledge to create industrial-scale synthetic enzymes could help usher in a renewable energy future. For example, PNNL scientists used a custom virtual reality app to design an artificial enzyme that converts carbon dioxide to formate, a kind of fuel. PNNL's Wendy Shaw and Aaron Appel organized a session at the 2020 AAAS Annual Meeting and invited colleagues from across the nation to share what they've learned.

Credit: 
DOE/Pacific Northwest National Laboratory

New technologies, strategies expanding search for extraterrestrial life

Emerging technologies and new strategies are opening a revitalized era in the Search for Extraterrestrial Intelligence (SETI). New discovery capabilities, along with the rapidly-expanding number of known planets orbiting stars other than the Sun, are spurring innovative approaches by both government and private organizations, according to a panel of experts speaking at a meeting of the American Association for the Advancement of Science (AAAS) in Seattle, Washington.

New approaches will not only expand upon but also go beyond the traditional SETI technique of searching for intelligently-generated radio signals, pioneered by Frank Drake's Project Ozma in 1960. Scientists now are designing state-of-the-art techniques to detect a variety of signatures that can indicate the possibility of extraterrestrial technologies. Such "technosignatures" can range from the chemical composition of a planet's atmosphere to laser emissions to structures orbiting other stars.

The National Radio Astronomy Observatory (NRAO) and the privately-funded SETI Institute announced an agreement to collaborate on new systems to add SETI capabilities to radio telescopes operated by NRAO. The first project will develop a system to piggyback on the National Science Foundation's Karl G. Jansky Very Large Array (VLA) that will provide data to a state-of-the-art technosignature search system.

"As the VLA conducts its usual scientific observations, this new system will allow for an additional and important use for the data we're already collecting," said NRAO Director Tony Beasley. "Determining whether we are alone in the Universe as technologically capable life is among the most compelling questions in science, and NRAO telescopes can play a major role in answering it," Beasley continued.

"The SETI Institute will develop and install an interface on the VLA permitting unprecedented access to the rich data stream continuously produced by the telescope as it scans the sky," said Andrew Siemion, Bernard M. Oliver Chair for SETI at the SETI Institute and Principal Investigator for the Breakthrough Listen Initiative at the University of California, Berkeley. "This interface will allow us to conduct a powerful, wide-area SETI survey that will be vastly more complete than any previous such search," he added.

Siemion highlighted the singular role the $100-million Breakthrough Listen Initiative has played in reinvigorating the field of SETI in recent years. Siemion also announced the latest scientific results from Listen: a SETI survey in the direction of stars where a distant civilization could observe the Earth's passage across the sun, and the availability of nearly 2 petabytes of data from the Listen Initiative's international network of observatories.

Other indicators of possible technologies include laser beams, structures built around stars to capture the star's power output, atmospheric chemicals produced by industries, and rings of satellites similar to the ring of geosynchronous communication satellites orbiting above Earth's equator.

"Such indicators are becoming detectable as our technology advances, and this has renewed interest in SETI searches at both government agencies and private foundations," Siemion said.

Life forms, whether intelligent or not, also can produce detectable indicators. These include the presence of large amounts of oxygen, smaller amounts of methane, and a variety of other chemicals. Victoria Meadows, Principal Investigator for NASA's Virtual Planetary Laboratory at the University of Washington, described how scientists are developing computer models to simulate extraterrestrial environments and to help support future searches for habitable planets and life beyond the Solar System.

"Upcoming telescopes in space and on the ground will have the capability to observe the atmospheres of Earth-sized planets orbiting nearby cool stars, so it's important to understand how best to recognize signs of habitability and life on these planets," Meadows said, "These computer models will help us determine whether an observed planet is more or less likely to support life."

As new programs implement the expanding technical capabilities for detecting extraterrestrial life and intelligence, it's important to define what constitutes compelling, credible evidence, according to Jill Tarter, of the SETI Institute.

"How strong does the evidence need to be to justify claiming a discovery? Can we expect to find smoking guns? If the evidence requires many caveats, how do we responsibly inform the public," Tarter asked.

Tarter pointed out that projects such as the University of California at San Diego's PANOSETI visible-light and infrared search, and the SETI Institute's Laser SETI search are being built with co-observing sites to reduce false positives. Such measures, she said, will boost confidence in reported detections, but also add to the expense of the project.

The news media also share responsibility for communicating accurately with the public, Tarter emphasized. She cited cases in recent years of "exuberant reporting" of bogus claims of SETI detections. "A real detection of extraterrestrial intelligence would be such an important milestone in our understanding of the Universe that journalists need to avoid uncritical reporting of obviously fake claims," she said.

"As continuing discoveries show us that planets are very common components of the Universe, and we are able to study the characteristics of those planets, it's exciting that at the same time, technological advances are giving us the tools to greatly expand our search for signs of life. We look forward to this new realm of discovery," said Beasley, who organized the AAAS panel.

"We also look forward to the coming decade, when we hope to build a next-generation Very Large Array, which will be able to search a volume of the Universe a thousand times larger than that accessible to current telescopes -- making it the most powerful radio technosignature search machine humanity has ever constructed," Beasley added.

Credit: 
National Radio Astronomy Observatory

Breakthrough Listen releases 2 petabytes of data from SETI survey of Milky Way

The Breakthrough Listen Initiative today (Friday, Feb. 14) released data from the most comprehensive survey yet of radio emissions from the plane of the Milky Way Galaxy and the region around its central black hole, and it is inviting the public to search the data for signals from intelligent civilizations.

At a media briefing today in Seattle as part of the annual meeting of the American Association for the Advancement of Science (AAAS), Breakthrough Listen principal investigator Andrew Siemion of the University of California, Berkeley, announced the release of nearly 2 petabytes of data, the second data dump from the four-year-old search for extraterrestrial intelligence (SETI). A petabyte of radio and optical telescope data was released last June, the largest release of SETI data in the history of the field.

The data, most of it fresh from the telescope and not yet studied in detail by astronomers, comes from a survey of the radio spectrum between 1 and 12 gigahertz (GHz). About half of the data comes via the Parkes radio telescope in New South Wales, Australia, which, because of its location in the Southern Hemisphere, is perfectly situated and instrumented to scan the entire galactic disk and galactic center. The telescope is part of the Australia Telescope National Facility, owned and managed by the country's national science agency, CSIRO.

The remainder of the data was recorded by the Green Bank Telescope in West Virginia, the world's largest steerable radio dish, and by an optical telescope called the Automated Planet Finder, built and operated by UC Berkeley and located at Lick Observatory outside San Jose, California.

"Since Breakthrough Listen's initial data release last year, we have doubled what is available to the public," said Breakthrough Listen's lead system administrator, Matt Lebofsky. "It is our hope that these data sets will reveal something new and interesting, be it other intelligent life in the universe or an as-yet-undiscovered natural astronomical phenomenon."

The National Radio Astronomy Observatory (NRAO) and the privately-funded SETI Institute in Mountain View, California, also announced today an agreement to collaborate on new systems to add SETI capabilities to radio telescopes operated by NRAO. The first project will develop a system to piggyback on the National Science Foundation's Karl G. Jansky Very Large Array (VLA) in New Mexico and provide data to state-of-the-art digital backend equipment built by the SETI Institute.

"The SETI Institute will develop and install an interface on the VLA, permitting unprecedented access to the rich data stream continuously produced by the telescope as it scans the sky," said Siemion, who, in addition to his UC Berkeley position, is the Bernard M. Oliver Chair for SETI at the SETI Institute. "This interface will allow us to conduct a powerful, wide-area SETI survey that will be vastly more complete than any previous such search."

"As the VLA conducts its usual scientific observations, this new system will allow for an additional and important use for the data we're already collecting," said NRAO Director Tony Beasley. "Determining whether we are alone in the universe as technologically capable life is among the most compelling questions in science, and NRAO telescopes can play a major role in answering it."

"For the whole of human history, we had a limited amount of data to search for life beyond Earth. So, all we could do was speculate. Now, as we are getting a lot of data, we can do real science and, with making this data available to general public, so can anyone who wants to know the answer to this deep question," said Yuri Milner, the founder of Breakthrough Listen.

Earth transit zone survey

In releasing the new radio and optical data, Siemion highlighted a new analysis of a small subset of the data: radio emissions from 20 nearby stars that are aligned with the plane of Earth's orbit such that an advanced civilization around those stars could see Earth pass in front of the sun (a "transit" like those focused on by NASA's Kepler space telescope). Conducted by the Green Bank Telescope, the Earth transit zone survey observed in the radio frequency range between 4 and 8 gigahertz, the so-called C-band. The data were then analyzed by former UC Berkeley undergraduate Sofia Sheikh, now a graduate student at Pennsylvania State University, who looked for bright emissions at a single radio wavelength or a narrow band around a single wavelength. She has submitted the paper to the Astrophysical Journal.

"This is a unique geometry," Sheikh said. "It is how we discovered other exoplanets, so it kind of makes sense to extrapolate and say that that might be how other intelligent species find planets, as well. This region has been talked about before, but there has never been a targeted search of that region of the sky."

While Sheikh and her team found no technosignatures of civilization, the analysis and other detailed studies the Breakthrough Listen group has conducted are gradually putting limits on the location and capabilities of advanced civilizations that may exist in our galaxy.

"We didn't find any aliens, but we are setting very rigorous limits on the presence of a technologically capable species, with data for the first time in the part of the radio spectrum between 4 and 8 gigahertz," Siemion said. "These results put another rung on the ladder for the next person who comes along and wants to improve on the experiment."

Sheikh noted that her mentor, Jason Wright at Penn State, estimated that if the world's oceans represented every place and wavelength we could search for intelligent signals, we have, to date, explored only a hot tub's worth of it.

"My search was sensitive enough to see a transmitter basically the same as the strongest transmitters we have on Earth, because I looked at nearby targets on purpose," Sheikh said. "So, we know that there isn't anything as strong as our Arecibo telescope beaming something at us. Even though this is a very small project, we are starting to get at new frequencies and new areas of the sky."

Beacons in the galactic center?

The so-far unanalyzed observations from the galactic disk and galactic center survey were a priority for Breakthrough Listen because of the higher likelihood of observing an artificial signal from that region of dense stars. If artificial transmitters are not common in the galaxy, then searching for a strong transmitter among the billions of stars in the disk of our galaxy is the best strategy, Siemion said.

On the other hand, putting a powerful, intergalactic transmitter in the core of our galaxy, perhaps powered by the 4 million-solar-mass black hole there, might not be beyond the capabilities of a very advanced civilization. Galactic centers may be so-called Schelling points: likely places for civilizations to meet up or place beacons, given that they cannot communicate among themselves to agree on a location.

"The galactic center is the subject of a very specific and concerted campaign with all of our facilities because we are in unanimous agreement that that region is the most interesting part of the Milky Way galaxy," Siemion said. "If an advanced civilization anywhere in the Milky Way wanted to put a beacon somewhere, getting back to the Schelling point idea, the galactic center would be a good place to do it. It is extraordinarily energetic, so one could imagine that if an advanced civilization wanted to harness a lot of energy, they might somehow use the supermassive black hole that is at the center of the Milky Way galaxy."

Visit from an interstellar comet

Breakthrough Listen also released observations of the interstellar comet 2I/Borisov, which had a close encounter with the sun in December and is now on its way out of the solar system. The group had earlier scanned the interstellar rock 'Oumuamua, which passed through the center of our solar system in 2017. Neither exhibited technosignatures.

"If interstellar travel is possible, which we don't know, and if other civilizations are out there, which we don't know, and if they are motivated to build an interstellar probe, then some fraction greater than zero of the objects that are out there are artificial interstellar devices," said Steve Croft, a research astronomer with the Berkeley SETI Research Center and Breakthrough Listen. "Just as we do with our measurements of transmitters on extrasolar planets, we want to put a limit on what that number is."

Regardless of the kind of SETI search, Siemion said, Breakthrough Listen looks for electromagnetic radiation that is consistent with a signal that we know technology produces, or some anticipated signal that technology could produce, and inconsistent with the background noise from natural astrophysical events. This also requires eliminating signals from cellphones, satellites, GPS, internet, Wi-Fi and myriad other human sources.

In Sheikh's case, she trained the Green Bank Telescope on each star for five minutes, pointed it away for another five minutes and repeated that cycle twice more. She then threw out any signal that didn't disappear when the telescope pointed away from the star. Ultimately, she whittled an initial 1 million radio spikes down to a couple hundred, which she was able to eliminate as Earth-based human interference. The last four unexplained signals turned out to be from passing satellites.
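
A schematic version of that ON/OFF rejection step might look like the sketch below, where each scan is reduced to per-frequency-bin signal-to-noise ratios. The threshold and array shapes are illustrative, not Sheikh's actual pipeline.

```python
import numpy as np

def cadence_filter(on_scans, off_scans, snr_threshold=10.0):
    """Keep candidate frequency bins that are bright in every ON scan
    (telescope pointed at the star) and absent from every interleaved
    OFF scan (telescope pointed away).

    on_scans, off_scans: arrays of shape (n_scans, n_bins) holding the
    signal-to-noise ratio in each frequency bin of each scan."""
    present_on = (on_scans > snr_threshold).all(axis=0)
    absent_off = (off_scans <= snr_threshold).all(axis=0)
    return np.flatnonzero(present_on & absent_off)

# Toy example: 3 ON / 3 OFF scans over 5 frequency bins.
on = np.array([[12, 1, 15, 2, 30]] * 3, dtype=float)
off = np.array([[11, 1, 0, 2, 29]] * 3, dtype=float)
print(cadence_filter(on, off))  # [2]: only bin 2 vanishes when pointing away
```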

Siemion emphasized that the Breakthrough Listen team intends to analyze all the data released to date and to do it systematically and often.

"Of all the observations we have done, probably 20% or 30% have been included in a data analysis paper," Siemion said. "Our goal is not just to analyze it 100%, but 1000% or 2000%. We want to analyze it iteratively."

Credit: 
University of California - Berkeley

The Lancet Oncology: Young cancer survivors face higher risk of severe health problems in later life than the general population, study suggests

Cancer survivors diagnosed during adolescence or early adulthood (age 15-20 years) are at increased risk of premature death compared with the general population, and have an elevated risk of severe or life-threatening conditions compared with a sibling control group, according to an observational study of more than 10,000 cancer patients from 27 academic institutions in the USA and Canada, published in The Lancet Oncology journal on International Childhood Cancer Day (15 February).

The study involved people diagnosed with cancer between 1970 and 1999, so it is not clear whether young adults diagnosed today can expect similar outcomes as treatments have improved in the intervening decades. Nevertheless, the authors say their findings highlight the need for long-term screening of young cancer survivors throughout their lives.

Childhood cancer survivors are known to have a higher risk of early death and long-term health problems related to their treatment, including heart conditions and circulatory problems. Little is known about the long-term consequences of cancer treatment on adolescents and young adults, however.

Dr Eugene Suh, Loyola University Chicago Health Sciences, USA, explains: "While previous studies have examined long-term outcomes for people who survive five years or more after cancer diagnosis in childhood or adulthood, our study is the first to specifically investigate health consequences for those diagnosed in adolescence and young adulthood. These survivors have the potential to live long, healthy lives but they remain at risk of health problems as a consequence of their previous cancer treatments." [1]

In the new study, the researchers examined data from the Childhood Cancer Survivor Study, which has been tracking the health outcomes of more than 24,000 people in the USA and Canada who were diagnosed between 1970 and 1999 and survived for five or more years after diagnosis.

The team focused on study participants who were aged between 15 and 21 years at the time of their initial cancer diagnosis (5,804 people). For comparison, the team randomly selected a cohort of childhood cancer survivors diagnosed with the same cancers before age 15 from the larger study database (5,804 people). A third group of similarly aged siblings was also recruited for comparison (5,059 people).

The researchers tracked the death rates and causes of death for each group. They also examined medical records for occurrences of cancer, severe health problems associated with the heart, lungs, and musculoskeletal systems, as well as metabolic and neurological conditions. These are common consequences of cancer treatments such as radiotherapy and chemotherapy.

Based on their analysis, the authors estimate that the likelihood of an adolescent or young adult cancer survivor developing a severe health condition by age 45 was 39% (one in 2.6), compared with 12% for siblings of the same age (one in 8). These risks were nevertheless lower than for childhood cancer survivors diagnosed before age 15, for whom the likelihood of developing a severe condition by age 45 was 56% (more than one in two).

The risk of mortality from any cause among young adult survivors was almost six times higher than would be expected in people of the same age and sex in the general population (standardized mortality ratio [SMR] 5.9; 1,357 deaths compared with an expected 231.9). The SMR for childhood cancer survivors was similarly elevated over the general population (SMR 6.2; 963 deaths compared with an expected 155.7).
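
Both ratios follow directly from the numbers quoted: a standardized mortality ratio is simply observed deaths divided by the deaths expected in an age- and sex-matched general population. A quick check in Python, restating only the figures given above:

```python
# SMR = observed deaths / expected deaths in a matched general population.
def smr(observed, expected):
    return observed / expected

print(round(smr(1357, 231.9), 1))   # 5.9 -- young adult survivors
print(round(smr(963, 155.7), 1))    # 6.2 -- childhood survivors

# The "one in N" phrasings for the cumulative risks quoted earlier:
for risk in (0.39, 0.12, 0.56):
    print(f"{risk:.0%} ~ one in {1 / risk:.1f}")
# 39% ~ one in 2.6, 12% ~ one in 8.3, 56% ~ one in 1.8
```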

Adolescent and young adult survivors were 1.6 times as likely as childhood cancer survivors to die from recurrence or progression of their primary cancer (492 deaths compared with 325).

The differences in health outcomes between childhood survivors and adolescent and young adult survivors were most notable twenty years after diagnosis. The researchers say this underlines the importance of long-term health screening for both groups of cancer survivors.

Dr Tara Henderson, Associate Professor at the University of Chicago, USA, said: "Focused efforts are needed to ensure young adult cancer survivors receive long-term health monitoring, with a focus on cancer screening, to reduce their risk of health problems and early death. Studies have shown that adherence to such programmes is poor, so we need to do more to highlight the importance of lifelong care to survivors and their families, as well as primary health care providers. We also need further research to understand how best to deliver risk-based care for survivors that incorporates both cancer specialists and primary care providers in the community." [1]

The researchers caution that their findings may not be generalisable to patients treated today, who are likely to receive different treatment regimens than in previous decades.

An additional limitation is that the cohort of survivors included in the study did not include the full range of cancers typically seen in this age group. Notably, those with gonadal tumours, melanomas and thyroid cancers were not included, yet these account for almost 40% of cancers diagnosed between the ages of 15 and 20 years.

Writing in a linked Comment, Dr Paivi Lahteenmaki of Turku University, Finland, who was not involved in the study, said: "Accurately characterising individuals at high risk who would benefit from a tailored screening programme is most important. In the future, identifying underlying genetic or molecular factors that might define patients at high risk of late sequelae would help planning approaches to survivorship."

Credit: 
The Lancet

Earth's cousins: Upcoming missions to look for 'biosignatures' in exoplanet atmospheres

Scientists have discovered thousands of exoplanets, including dozens of terrestrial -- or rocky -- worlds in the habitable zones around their parent stars. A promising approach to search for signs of life on these worlds is to probe exoplanet atmospheres for "biosignatures" -- quirks in chemical composition that are telltale signs of life. For example, thanks to photosynthesis, our atmosphere is nearly 21% oxygen, a much higher level than expected given Earth's composition, orbit and parent star.

Finding biosignatures is no straightforward task. Scientists learn about an exoplanet's atmosphere from data on how it interacts with light from its parent star. But the information, or spectra, that today's ground- and space-based telescopes can gather is too limited to measure those atmospheres directly or detect biosignatures.

Exoplanet researchers such as Victoria Meadows, a professor of astronomy at the University of Washington, are focused on what forthcoming observatories, like the James Webb Space Telescope, or JWST, could measure in exoplanet atmospheres. On Feb. 15 at the American Association for the Advancement of Science's annual meeting in Seattle, Meadows, a principal investigator of the UW's Virtual Planetary Laboratory, will deliver a talk to summarize what kind of data these new observatories can collect and what they can reveal about the atmospheres of terrestrial, Earth-like exoplanets. Meadows sat down with UW News to discuss the promise of these new missions to help us view exoplanets in a new light.

Q: What changes are coming to the field of exoplanet research?

In the next five to 10 years, we'll potentially get our first chance to observe the atmospheres of terrestrial exoplanets. This is because new observatories are set to come online, including the James Webb Space Telescope and ground-based observatories like the Extremely Large Telescope. A lot of our recent work at the Virtual Planetary Laboratory, as well as by colleagues at other institutions, has focused on simulating what Earth-like exoplanets will "look" like to the JWST and ground-based telescopes. That allows us to understand the spectra that these telescopes will pick up, and what those data will and won't tell us about those exoplanet atmospheres.

Q: What types of exoplanet atmospheres will the JWST and other missions be able to characterize?

Our targets are actually a select group of exoplanets that are nearby -- within 40 light years -- and orbit very small, cool stars. For reference, the Kepler mission identified exoplanets around stars that are more than 1,000 light years away. The smaller host stars also help us work out what the planetary atmospheres are made of, because a planet's thin shell of atmosphere blocks a larger fraction of a smaller star's light.
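
To see why star size matters so much: in a transit, the extra dimming contributed by an atmosphere scales roughly as the area of the atmospheric annulus over the area of the stellar disk. Here is a back-of-envelope Python sketch; the scale height, annulus thickness and stellar radii are rough textbook-level assumptions, not figures from the interview.

```python
# Rough transit-spectroscopy signal: an atmosphere adds an annulus a few
# scale heights thick around the planet's silhouette. All numbers here
# are textbook-level approximations for illustration.
R_SUN, R_EARTH = 6.96e8, 6.37e6        # meters

def atmo_signal(r_star, r_planet, scale_height_m, n_heights=5):
    """Fractional extra dimming from an annulus n_heights scale heights thick."""
    return 2.0 * r_planet * n_heights * scale_height_m / r_star ** 2

H_EARTH = 8.5e3                        # Earth's scale height, ~8.5 km
sun_like = atmo_signal(R_SUN, R_EARTH, H_EARTH)
trappist = atmo_signal(0.12 * R_SUN, R_EARTH, H_EARTH)  # TRAPPIST-1 ~ 0.12 R_sun

print(f"Sun-like host:   {sun_like:.2e}")   # ~1.1e-6 -- hopelessly small
print(f"TRAPPIST-1 host: {trappist:.2e}")   # ~7.8e-5 -- roughly 70x larger
```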

So there are a handful of exoplanets we're focusing on to look for signs of habitability and life. All were identified by ground-based surveys like TRAPPIST and its successor, SPECULOOS -- both run by the University of Liège -- as well as the MEarth Project run by Harvard. The most well-known exoplanets in this group are probably the seven terrestrial planets orbiting TRAPPIST-1. TRAPPIST-1 is an M-dwarf star -- one of the smallest you can have and still be a star -- and its seven exoplanets span orbits from interior to the habitable zone to beyond it, with three inside the habitable zone.

We've identified TRAPPIST-1 as the best system to study because this star is so small that we can get fairly large and informative signals from the atmospheres of these worlds. These are all cousins to Earth, but with a very different parent star, so it will be very interesting to see what their atmospheres are like.

Q: What have you learned so far about the atmospheres of the TRAPPIST-1 exoplanets?

The astronomy community has taken observations of the TRAPPIST-1 system, but we haven't seen anything but "non-detections." That can still tell us a lot. For example, observations and models suggest that these exoplanet atmospheres are less likely to be dominated by hydrogen, the lightest element. That means they either don't have atmospheres at all, or they have relatively high-density atmospheres like Earth.
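
The reason hydrogen-dominated atmospheres are the first thing ruled out is scale height, H = kT / (μ m_H g): a light, hydrogen-rich envelope is puffy and leaves large, easy-to-spot transit features, while heavier Earth-like air is compact and subtle. A minimal Python sketch, assuming roughly Earth-like temperature and gravity:

```python
# Scale height H = k_B * T / (mu * m_H * g). A low mean molecular weight
# (hydrogen-dominated) atmosphere is far more extended, so its absence
# from transit spectra is the easiest case to establish. Temperature and
# gravity below are assumed, roughly Earth-like values.
K_B, M_H = 1.381e-23, 1.673e-27        # J/K, kg

def scale_height_km(temp_k, mu, g=9.8):
    return K_B * temp_k / (mu * M_H * g) / 1e3

for label, mu in [("H2-dominated", 2.3),
                  ("N2, Earth-like", 28.0),
                  ("CO2, Venus/Mars-like", 44.0)]:
    print(f"{label:22s} H ~ {scale_height_km(300.0, mu):5.1f} km")
# H2-dominated           H ~ 109.9 km
# N2, Earth-like         H ~   9.0 km
# CO2, Venus/Mars-like   H ~   5.7 km
```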

Q: No atmospheres at all? What would cause that?

M-dwarf stars have a very different history than our own sun. After their infancy, sun-like stars brighten over time as they undergo fusion.

M-dwarfs start out big and bright, as they gravitationally collapse to the size they will then keep for most of their lifetimes. So, M-dwarf planets could be subjected to long periods of time -- perhaps as long as a billion years -- of high-intensity radiation. That could strip a planet of its atmosphere, but volcanic activity can also replenish atmospheres. Based on their densities, we know that many of the TRAPPIST-1 worlds are likely to have reservoirs of compounds -- at much higher levels than Earth, actually -- that could replenish the atmosphere. The first significant JWST results for TRAPPIST-1 will be: Which worlds retained atmospheres? And what types of atmospheres are they?

I'm quietly optimistic that they do have atmospheres because of those reservoirs, which we're still detecting. But I'm willing to be surprised by the data.

Q: What types of signals will the JWST and other observatories look for in the atmospheres of TRAPPIST-1 exoplanets?

Probably the easiest signal to look for will be the presence of carbon dioxide.

Q: Is CO2 a biosignature?

Not on its own, and not just from a single signal. I always tell my students -- look right, look left. Both Venus and Mars have atmospheres with high levels of CO2, but no life. In Earth's atmosphere, CO2 levels adjust with our seasons. In spring, levels draw down as plants grow and take CO2 out of the atmosphere. In autumn, plants break down and CO2 rises. So if you see seasonal cycling, that might be a biosignature. But seasonal observations are very unlikely with JWST.

Instead, the JWST can look for another potential biosignature: methane gas in the presence of CO2. Methane normally has a short lifetime in an atmosphere that also contains CO2. So if we detect both together, something is probably actively producing methane. On Earth, most of the methane in our atmosphere is produced by life.
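
The underlying argument is a steady-state budget: the methane you observe equals its source flux times its short photochemical lifetime, so any persistent methane implies active replenishment. A back-of-envelope version in Python, using round, textbook-level Earth values (both numbers are assumptions, not figures from the interview):

```python
# Steady state: source flux ~ atmospheric burden / photochemical lifetime.
# Both numbers below are round, textbook-level Earth values.
ch4_burden_tg = 5000.0     # ~5 Gt of CH4 resident in Earth's atmosphere
ch4_lifetime_yr = 10.0     # CH4 is chemically destroyed in about a decade

flux_tg_per_yr = ch4_burden_tg / ch4_lifetime_yr
print(f"~{flux_tg_per_yr:.0f} Tg of CH4 per year needed to hold levels steady")
# ~500 Tg/yr -- close to Earth's actual, largely biological, emissions.
# Without such a source, detectable CH4 alongside CO2 would quickly vanish.
```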

Q: What about detecting oxygen?

Oxygen alone is not a biosignature. It depends on its levels and what else is in the atmosphere. You could have an oxygen-rich atmosphere from the loss of an ocean, for example: light splits water molecules into hydrogen and oxygen, the hydrogen escapes into space, and the oxygen builds up in the atmosphere.

The JWST likely won't directly pick up oxygen from oxygenic photosynthesis -- the biosphere we're used to now. The Extremely Large Telescope and related observatories might be able to, because they'll be observing at different wavelengths than the JWST, where they will have a better chance of seeing oxygen. The JWST will be better for detecting biospheres similar to what we had on Earth billions of years ago, and for differentiating between different types of atmospheres.

Q: What are some of the different types of atmospheres that TRAPPIST-1 exoplanets might possess?

The M-dwarf's high-luminosity phase might drive a planet toward an atmosphere with a runaway greenhouse effect, like Venus. As I said earlier, you could lose an ocean and have an oxygen-rich atmosphere. A third possibility is to have something more Earth-like.

Q: Let's talk about that second possibility. How could JWST reveal an oxygen-rich atmosphere if it can't detect oxygen directly?

The beauty of the JWST is that it can pick up processes happening in an exoplanet's atmosphere. It will pick up the signatures of collisions between oxygen molecules, which happen more often in an oxygen-rich atmosphere. So we likely can't see oxygen at the levels associated with a photosynthetic biosphere. But if a much larger amount of oxygen was left behind by ocean loss, we can probably see those oxygen collisions in the spectrum -- a sign that the exoplanet has lost an ocean.
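
Collision-induced absorption is a density-squared effect, which is why it flags the enormous oxygen inventories of ocean loss rather than the modest levels photosynthesis builds up. A toy comparison in Python (the 10x oxygen enhancement is purely an assumed illustration):

```python
# O2-O2 collision signatures scale as the square of the O2 abundance,
# so doubling the oxygen quadruples the dimer signal. The 10x oxygen
# enhancement below is an assumed, illustrative figure.
def o2_dimer_ratio(o2_enhancement):
    return o2_enhancement ** 2

print(o2_dimer_ratio(10))   # 100 -- an ocean-loss atmosphere stands out
print(o2_dimer_ratio(1))    # 1   -- photosynthetic levels stay out of reach
```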

So, JWST is unlikely to give us conclusive proof of biosignatures but may provide some tantalizing hints, which require further follow-up and -- moving forward -- thinking about new missions beyond the JWST. NASA is already considering new missions. What would we like their capabilities to be?

That also brings me to a very important point: Exoplanet science is massively interdisciplinary. Understanding the environment of these worlds requires considering orbit, composition, history and host star -- and requires the input of astronomers, geologists, atmospheric scientists and stellar scientists. It really takes a village to understand a planet.

Credit: 
University of Washington