Tech

Edouard now post-tropical in NASA-NOAA satellite imagery

image: On July 6, NASA-NOAA's Suomi NPP satellite provided a visible image of Post-Tropical Cyclone Edouard merging with a frontal boundary in the Northern Atlantic Ocean.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

When NASA-NOAA's Suomi NPP satellite passed over the western North Atlantic Ocean on July 6, it provided forecasters with a visible image of Edouard after it transitioned into a post-tropical cyclone.

The National Hurricane Center (NHC) defines a post-tropical cyclone as a former tropical cyclone. This generic term describes a cyclone that no longer possesses sufficient tropical characteristics to be considered a tropical cyclone. Post-tropical cyclones can continue carrying heavy rains and high winds. Two classes of post-tropical cyclones are extratropical cyclones and remnant lows.

The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP provided a visible image showing that Edouard's center of circulation had merged with a frontal boundary. The system was therefore classified as extratropical.

On July 6 at 5 p.m. EDT (2100 UTC), NOAA's National Hurricane Center (NHC) said that Edouard had become post-tropical. At that time, the center of Post-Tropical Cyclone Edouard was located near latitude 42.7 degrees north and longitude 46.0 degrees west, about 445 miles (715 km) southeast of Cape Race, Newfoundland, Canada. The post-tropical cyclone was moving quickly toward the northeast near 38 mph (61 kph). Maximum sustained winds were near 45 mph (75 kph) with higher gusts, and the estimated minimum central pressure was 1005 millibars.

The National Hurricane Center forecast the post-tropical cyclone to continue moving quickly northeastward for the next day or so until it is absorbed into a larger frontal zone over the North Atlantic late on July 7 or early on Wednesday, July 8.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

Study: Troubling connection between workplace pregnancy discrimination and health of mothers, babies

image: Kaylee Hackney, Ph.D., assistant professor of management, Baylor University's Hankamer School of Business

Image: 
Kaylee Hackney

WACO, Texas (July 7, 2020) - Perceived pregnancy discrimination indirectly relates to increased levels of postpartum depressive symptoms for mothers and lower birth weights, lower gestational ages and increased numbers of doctor visits for babies, according to a management study led by Baylor University.

The study - "Examining the Effects of Perceived Pregnancy Discrimination on Mother and Baby Health" - is published in the Journal of Applied Psychology.

"Despite being illegal, pregnancy discrimination still takes place in the workplace," said lead author Kaylee Hackney, Ph.D., assistant professor of management in Baylor University's Hankamer School of Business. "Obviously, this is troublesome. Our research highlights the negative impact that perceived pregnancy discrimination can have on both the mother's and the baby's health."

The researchers surveyed 252 pregnant employees over the course of two studies. They measured perceived pregnancy discrimination, perceived stress, demographics and postpartum depressive symptoms. The second study included the measurements of the babies' health outcomes, including gestational age (number of weeks of pregnancy when the baby was delivered), Apgar score (heart rate, respiration, muscle tone, reflex response and color), birth weight and visits to the doctor.

Sample survey statements and questions used to measure perceived discrimination, perceived stress and postpartum depressive symptoms included: "Prejudice toward pregnant workers exists where I work," "In the last month, how often have you felt nervous or stressed?" and "I am so unhappy that I cry." Mothers also logged their babies' health outcomes.

"I think the biggest surprise from this research is that pregnancy discrimination not only negatively impacted the mother, but also negatively impacted the baby she was carrying while experiencing the discrimination," Hackney said. "This just shows the far-reaching implications of workplace discrimination and highlights the importance of addressing it."

More than 50,000 discrimination claims in a decade

The study noted that over the last decade, more than 50,000 pregnancy discrimination claims were filed with the Equal Employment Opportunity Commission and Fair Employment Practices Agencies in the United States.

Pregnancy discrimination is defined as unfavorable treatment of women at work due to pregnancy, childbirth or medical conditions related to pregnancy or childbirth, Hackney said. Pregnant women perceive discrimination when they experience subtly hostile behaviors such as social isolation, negative stereotyping and negative or rude interpersonal treatment.

Examples might include lower performance expectations, transfers to less-desirable shifts or assignments, or inappropriate jokes and intrusive comments.

Practical steps for managers

Given that pregnancy discrimination led to adverse health outcomes through increased stress, the researchers believe managers are in a unique position to provide the support that pregnant employees need to reduce stress.

Some steps managers might take include:

Providing flexible schedules

Keeping information channels open and the employee in the loop, particularly with regard to work-family benefits and expectations leading up to leave and upon returning from leave

Accommodating prenatal appointments

Helping to plan maternity leave arrangements

Normalizing breastfeeding in the workplace

"Overall, I would suggest that managers 1) strive to create a workplace culture where discrimination does not take place and 2) not make assumptions about what pregnant employees want," Hackney said. "The best approach would be to have an open dialogue with their employees about what types of support are needed and desired."

Healthcare partnerships

In addition, Hackney said the findings suggest that healthcare organizations may find opportunities to provide guidance and outreach to workplaces to help pregnant workers reduce stress via reduced pregnancy discrimination and enhanced work-family support for pregnant women.

Some steps may include training managers to be more family supportive and less biased against expectant mothers, she said.

Credit: 
Baylor University

NASA finds powerful storms around Tropical Storm Cristina's center

image: On July 7 at 4:10 a.m. EDT (0810 UTC) the MODIS instrument that flies aboard NASA's Aqua satellite revealed two areas of very powerful thunderstorms (yellow) around Cristina's center where cloud top temperatures were as cold as minus 80 degrees Fahrenheit (minus 62.2 Celsius).

Image: 
NASA/NRL

A low-pressure area strengthened quickly and became Tropical Storm Cristina in the Eastern Pacific Ocean and infrared imagery from NASA revealed the powerful thunderstorms fueling that intensification.

The tropical depression that would become Cristina developed by 5 p.m. EDT on Monday, July 6, according to the National Hurricane Center in Miami, Fla. Six hours later it strengthened into a tropical storm and was named Cristina.

On July 7 at 4:10 a.m. EDT (0810 UTC), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Aqua satellite used infrared light to analyze the strength of storms within Cristina. NASA researches these storms to determine how they rapidly intensify, develop and behave.

Tropical cyclones are made up of hundreds of thunderstorms, and infrared data can show where the strongest storms are located. That is because infrared data provides temperature information, and the strongest thunderstorms that reach highest into the atmosphere have the coldest cloud top temperatures.

MODIS found those strongest storms in two areas around Cristina's center of circulation where cloud top temperatures were as cold as minus 80 degrees Fahrenheit (minus 62.2 Celsius). NASA research has found that cloud top temperatures that cold indicate strong storms with the potential to generate heavy rainfall.
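
To illustrate the thresholding idea in the preceding paragraphs, here is a minimal, hypothetical Python sketch (not NASA's actual MODIS processing; the array values are invented) that converts the quoted minus 80 degrees Fahrenheit threshold to Kelvin and flags the coldest cloud tops:

```python
import numpy as np

# Invented brightness-temperature grid (Kelvin) standing in for an infrared channel.
cloud_top_temps_k = np.array([
    [220.0, 215.5, 212.3],
    [210.1, 205.7, 214.8],
    [218.9, 209.4, 221.6],
])

THRESHOLD_F = -80.0  # threshold quoted in the article
threshold_k = (THRESHOLD_F - 32.0) * 5.0 / 9.0 + 273.15  # about 210.9 K (-62.2 C)

# Pixels at or below the threshold mark the strongest convection.
strong_storm_mask = cloud_top_temps_k <= threshold_k
print(f"Threshold: {threshold_k:.1f} K")
print("Coldest-cloud-top pixels:")
print(strong_storm_mask)
```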

At 11 a.m. EDT (1500 UTC) on July 7, the National Hurricane Center (NHC) said the center of Tropical Storm Cristina was located near latitude 12.2 degrees north and longitude 102.8 degrees west, about 480 miles (770 km) south-southeast of Manzanillo, Mexico. The estimated minimum central pressure was 1005 millibars, and maximum sustained winds were near 40 mph (65 kph) with higher gusts.

Cristina was moving toward the west-northwest near 13 mph (20 kph), and the NHC expected that general motion to continue for the next few days, keeping the cyclone well away from the coast of Mexico.

NHC forecaster David Zelinsky noted in the July 7 Discussion, "The [vertical wind] shear and some nearby dry air that appear to have inhibited Cristina's organization so far are not expected to persist as negative factors for much longer. All of the models still forecast strengthening, and given the very favorable environment that the cyclone will encounter in a day or two, a period of rapid intensification at some point would not be surprising."

Strengthening is anticipated and Cristina is forecast to become a hurricane in a day or two.

Typhoons/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

For updated forecasts, visit: http://www.nhc.noaa.gov

By Rob Gutro
NASA's Goddard Space Flight Center

Credit: 
NASA/Goddard Space Flight Center

Survey: 7 in 10 respondents worry poor health will limit their life experiences

DALLAS and ARLINGTON, Va., July 7, 2020 -- Seven in 10 U.S. adults worry poor health will prevent them from doing all the things they'd like to do in life, according to a new survey[1] from the American Heart Association and American Diabetes Association.

The research was conducted by OnePoll for Know Diabetes by Heart™, a joint initiative of the American Heart Association and the American Diabetes Association which combats two of the most persistent U.S. health threats - type 2 diabetes and cardiovascular disease - and the devastating link between them.

The survey asked 2,000 U.S. adults how the COVID-19 pandemic has impacted their views on time with friends and family, and generally, the role health plays in experiencing a full life.

Missing out on milestones and time with loved ones is a reality for millions of people in the U.S. living with type 2 diabetes. In addition to being at a higher risk of death from COVID-19 if blood glucose is poorly controlled,[2] people with type 2 diabetes are at double the risk of developing and dying from heart disease and stroke.[3],[4],[5] For adults at age 60, having type 2 diabetes and cardiovascular disease such as heart attacks, heart failure and strokes shortens life expectancy by an average of 12 years,[6] but there is a lot people can do to lower their risk.

The survey found respondents with type 2 diabetes, heart disease or stroke are more worried that health will limit their experiences (89%, 90% and 87%, respectively) compared to respondents who don't have those conditions (58%).

Generation Comparison Reveals Differences

About two in three (65%) respondents are worried their loved ones won't be healthy enough to experience various life moments with them. Millennials (ages 24-39) and Generation X (ages 40-55) were most worried, 73% and 69% respectively, compared to 59% for Generation Z (ages 18-23) and 58% for baby boomers (ages 56+).

Gen Z respondents are most worried about health preventing them from experiencing everything they'd like to do in life (75%), while baby boomers are least worried overall (63%). Baby boomers, however, report the highest percentage of prioritizing their health more as they've gotten older, 68%, compared to 34% for Gen Z, 48% for millennials and 65% for Gen X.

COVID-19 Pandemic Created Greater Appreciation for Daily Moments with Loved Ones

Survey results revealed the COVID-19 pandemic has changed the way many think about daily moments, and how respondents view their experiences with others. Eight in 10 respondents said the pandemic has made daily moments with their loved ones more special. Even more, 85%, said the pandemic has made them more grateful for the time they spend with their loved ones.

Eduardo Sanchez, M.D., MPH, FAAFP, American Heart Association chief medical officer for prevention, said COVID-19 shines a direct spotlight on chronic health conditions and the additional health risks they present.

"Controlling blood glucose and managing and modifying risk factors for heart disease and stroke has never been more important," Sanchez said. "If there's a silver lining in all of this, perhaps it's a new appreciation for wellness and emphasis on controlling the controllable, the existing threats to our health that we know more about and have more tools to manage."

Returning to Routine Medical Care

Robert H. Eckel, M.D., American Diabetes Association president of medicine and science and an endocrinologist at the University of Colorado School of Medicine, emphasized the need for regular, routine medical care and expressed concern that many patients canceled or postponed doctor appointments during the pandemic.

"If you want to have the full life you are hoping for on the other side of COVID-19, then resume your doctor appointments, check your health numbers, like blood glucose - and if you have diabetes your hemoglobin A1c - cholesterol and blood pressure, and get a plan for preventing heart disease and stroke," said Eckel. "Taking medications as prescribed is also an important thing you can do for yourself and the people you love."

Credit: 
American Heart Association

New study sparks fresh call for seagrass preservation

image: Known as 'Blue Carbon', seagrass meadows have been estimated to store CO2 in their soils about 30 times faster than most terrestrial forests.

Image: 
Centre for Marine Ecosystems Research at Edith Cowan University

The loss of seagrass meadows around the Australian coastline since the 1950s has caused an increase in carbon dioxide emissions equivalent to the annual output of 5 million cars.

The stark finding was made possible by new modelling done by marine scientists at the Centre for Marine Ecosystems Research at Edith Cowan University (ECU) in Western Australia.

PhD student Cristian Salinas calculated that around 161,150 hectares of seagrass have been lost from Australian coasts since the 1950s, resulting in a 2 per cent increase in annual carbon dioxide emissions from land-use change.

The figures derive from Mr Salinas's research into the current carbon stocks of Cockburn Sound off the coast of Western Australia.

Cockburn Sound lost around 23 square kilometres of seagrass between the 1960s and 1990s due to nutrient overflow caused by urban, port and industrial development.

Mr Salinas said the finding is significant because seagrass meadows play such a vital role in mitigating the impacts of climate change.

"Known as 'Blue Carbon', seagrass meadows have been estimated to store CO2 in their soils about 30 times faster than most terrestrial forests," he said.

"Seagrass meadows have been under constant threat in Australia through coastal development and nutrient run off since the 1960s. On top of that climate change is causing marine heatwaves that are catastrophic to the seagrasses.

"This study serves as a stark reminder of how important these environments are."

Mr Salinas said the study provided a clear baseline for carbon emissions from seagrass losses in Australia and warned of the need to preserve and restore the meadows. Including seagrass in the Australian Emission Reduction Fund could help achieve this goal, he said.

Carbon flushed away

The ECU researchers assessed how environmental factors such as water depth, hydrodynamic energy, soil accumulation rates and soil grain size related to changes in soil carbon storage following seagrass loss.

Results showed that the degradation and loss of seagrass alone was not enough to cause the carbon loss from the soil -- hydrodynamic energy from waves, tides and currents also played a significant role.

"Without seagrass acting as a buffer, the hydrodynamic energy from the ocean releases the carbon by moving the seabed sand around," Mr Salinas Zapata said.

The researchers found that hydrodynamic energy from water movement was much higher in shallow water, and correspondingly low levels of carbon were recorded in these bare areas.

However, seagrass meadows established in shallow waters were found to have significantly more carbon stored compared to those growing in deeper areas.

"This means that nearshore meadows are particularly important to preserve," Mr Salinas said.

Credit: 
Edith Cowan University

Repurposing public health systems to decode COVID-19

Existing public health monitoring systems in the UK could improve understanding of the risk factors associated with severe COVID-19.

Research published in the journal Microbial Genomics describes how national surveillance systems can be linked with the UK Biobank. This pooled data could then be used to understand how genetics and other epidemiological factors impact risk of developing severe infection.

The UK Biobank (UKB) is an international health resource which enables researchers to understand the genetic and lifestyle determinants of common diseases. The researchers linked UKB with Public Health England's Second-Generation Surveillance System (SGSS), a centralised microbiology database used for national disease surveillance in England. SGSS holds data collected in clinical diagnostic laboratories in England, including test results for SARS-CoV-2.

Large cohorts such as UKB are a useful resource for understanding how a disease behaves in different groups, according to Dr Danny Wilson, Associate Professor at the Big Data Institute, University of Oxford (UK). He said: "Large datasets are helpful for detecting risk factors, including those that have modest effects or vary from person-to-person, and for providing a sound footing for conclusions by reducing statistical noise. These discoveries help scientists better understand the disease and could inspire efforts aimed at improving treatment."

By linking the two systems, researchers hope to facilitate research into the risk factors for severe COVID-19. Repurposing public health systems in this way can provide near-to-real-time data on SARS-CoV-2, and allow researchers to understand the spread, testing and disease characteristics of the virus.

This new computerised system will provide weekly linkage of test results to UKB and other cohorts. The UK Biobank database consists of around 500,000 men and women in the UK, aged 50+. This group is particularly appropriate for the study of COVID-19, as severity of disease increases with age. Further data is also being released by UKB, according to Dr Wilson: "UK Biobank are releasing, or have released other data relevant to COVID-19, like mortality records, and they plan to release hospital episode statistics and primary care data soon too".
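
As a purely illustrative sketch of what such a record linkage looks like in code (the identifiers, column names and data below are invented; the real SGSS-UK Biobank linkage runs on pseudonymised identifiers under strict governance), a weekly join might resemble:

```python
import pandas as pd

# Invented surveillance test results (stand-in for SGSS extracts).
sgss_results = pd.DataFrame({
    "pseudo_id": ["A001", "A002", "A003"],
    "specimen_date": ["2020-04-02", "2020-04-10", "2020-05-01"],
    "sars_cov_2_result": ["positive", "negative", "positive"],
})

# Invented cohort register (stand-in for UK Biobank participants).
cohort = pd.DataFrame({
    "pseudo_id": ["A001", "A003", "A004"],
    "year_of_birth": [1951, 1948, 1955],
})

# Attach any test results to cohort participants; those never tested keep NaN.
linked = cohort.merge(sgss_results, on="pseudo_id", how="left")
print(linked)
```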

The linked data enable in-depth analysis of disease severity, symptoms and risk in people from the UKB database. Researchers hope that these data can reveal additional risk factors for severe infection and improve understanding of the disease. "By providing information about COVID-19 to large cohorts including UK Biobank, INTERVAL, COMPARE, Genes & Health, Genomics England and the National Institute for Health Research (NIHR) Biorepository, this work facilitates research into lifestyle, medical and genetic risk factors," said Dr Wilson.

Credit: 
Microbiology Society

Microplastic pollution harms lobster larvae, study finds

image: Accumulated microplastic fibers are visible under this larval lobster's carapace. New research shows that microplastic fiber pollution impacts larval lobsters at each stage of their development.

Image: 
Madelyn Woods

Microplastic fiber pollution in the ocean impacts larval lobsters at each stage of their development, according to new research. A study published in the Marine Pollution Bulletin reports that the fibers affect the animals' feeding and respiration, and they could even prevent some larvae from reaching adulthood.

"In today's ocean, organisms are exposed to so many environmental factors that affect how many make it to the next stage of life," said Paty Matrai, a study author and senior research scientist at Bigelow Laboratory for Ocean Sciences. "Lobsters play a fundamental role in the Gulf of Maine ecosystem as well as the state's economy, and it is important that we understand how pollutants impact their development."

Young lobsters grow to adulthood through four distinct developmental stages, and the researchers found that the physiology of each stage determined how the animals interacted with plastic fibers. The youngest lobsters didn't consume them - but they were plagued by fibers accumulating under the shells that protect their gills. In experiments where the larvae were exposed to high levels of fibers, the youngest larvae were the least likely to survive.

More mobile and agile, the older lobster larvae did not accumulate fibers under their shells - but they did ingest the particles and keep them in their digestive systems. This could be problematic for lobster larvae coming of age in the ocean. Fresh plastics often leach chemicals, and their surfaces can foster potentially toxic sea life.

"Plastic particles have been found in almost every animal in the ocean," said David Fields, another study author and a senior research scientist at Bigelow Laboratory. "If an animal can fit something in its tiny little piehole, it's probably going to - and that can have repercussions for the animal and potentially for the food web."

Microplastic fibers enter the ocean from sources including wastewater, and they can also be created in the ocean as larger materials degrade. Plastics tend to float at the surface, where they are exposed to sunlight and wave action that eventually break them down into small particles.

Though the levels of microplastic fibers in coastal Maine waters are relatively low, they can still present a serious challenge to the animals that encounter them. In addition, some animals are predisposed to encounter any fibers that are in the area. Because microplastic fibers tend to remain at the ocean's surface, animals that inhabit surface waters are more likely to come into contact with them - including larval lobsters.

"Even relatively low levels of plastics can be harmful for the animals that encounter them, and where an animal lives in the water column can amplify the problem," Fields said. "A lobster larva that eats a plastic fiber is just like us eating a candy wrapper - it's not great, but it will probably just pass though. But if all you're eating is candy wrappers, it's certainly going to have other repercussions for your health." 

With ocean acidification and rising temperatures already challenging lobsters and other sea life, the researchers are particularly interested in how this plastic pollution may compound with the other environmental stressors that ocean animals are facing. They are interested in conducting future experiments that could probe how animals are impacted when challenged by all three of these factors simultaneously.

Matrai and Fields previously studied the impact of microplastic fibers on mussels with Madelyn Woods, a recent Bigelow Laboratory intern and the lead author of this paper. Fellow authors Theresa Hong, Donaven Baughman, and Grace Andrews also all studied with Matrai and Fields as Research Experience for Undergraduates interns during the summer of 2019.

"As a global community, we are just becoming aware of the impact of plastics in the ocean, and the reality that this pollution is superimposed on other changes in the environment," Matrai said. "By working together to reduce the amount of microplastic fibers in the ocean, we can all help protect our important marine resources."

Credit: 
Bigelow Laboratory for Ocean Sciences

Black patients have higher rates of death after PCI

Black patients who undergo percutaneous coronary intervention (PCI) are at an increased risk for major adverse outcomes, including death, compared to white patients, according to a study published today in JACC: Cardiovascular Interventions. The study underscores the high rates of cardiovascular disease and risk factors in minorities and the continued need for further research on race-based outcomes after cardiovascular procedures, including PCI, to understand and alleviate these differences.

It has been several decades since racial differences in cardiovascular outcomes were first reported. Black patients consistently have higher rates of cardiovascular risk factors and adverse outcomes compared with non-Hispanic white patients, and both Black and Hispanic patients have an increased risk of cardiovascular disease.

"The variables that may contribute to the greater cardiovascular risks in minorities are multifactorial," said Gregg W. Stone, MD, senior author of the study and professor of cardiology at the Icahn School of Medicine at Mount Sinai in New York. "The variables include risk factors (e.g., greater hypertension, smoking and diabetes, and differences in body mass index), differences in cardiac structure (e.g., coronary artery size and left ventricular mass) and socioeconomic considerations (e.g., reduced access to care and/or insurance coverage, lack of preventative care, disease awareness and education, delayed presentation when ill, and in some studies varying provision of care)."

While previous studies have suggested that short- and long-term outcomes after PCI vary by race and cannot be fully explained by different baseline risk factors and treatment characteristics, other studies have failed to show a significant association. Furthermore, since current race-based PCI outcomes have been acquired from registries and single-center studies lacking central monitoring and event adjudication, the authors of this study sought to assess the presence of racial disparities in clinical characteristics and outcomes from a large patient-level pooled database.

Researchers in this study used data from the Cardiovascular Research Foundation to analyze 10 prospective, randomized controlled trials with a combined total of 22,638 patients classified by race who underwent PCI. There were 20,585 (90.9%) white patients (reference group), 918 (4.1%) Black patients, 404 (1.8%) Asian patients, and 473 (2.1%) Hispanic patients. Other races were excluded from the analysis due to small sample sizes. Baseline characteristics such as body mass index (BMI) and PCI outcomes at 30 days, one year and five years were assessed. Principal outcomes of interest were all-cause death, myocardial infarction (MI), and major adverse cardiac events (MACE), defined as the composite of cardiac death, MI or ischemia-driven target lesion revascularization.

Study findings showed that at the five-year follow-up after PCI, the MACE rates were 23.9% in Black patients compared with 18.8% in white patients, 21.5% in Hispanic patients and 11.2% in Asian patients. Multivariable analysis demonstrated an independent association between Black race and five-year risk of MACE following PCI. Black patients also had higher rates of death at both the one- and five-year follow-ups after PCI compared to any other group. Results from the analysis showed that at enrollment, both Black and Hispanic patients had a greater burden of cardiovascular risk factors, including diabetes, hypertension and hyperlipidemia, than white and Asian patients.

"I am inherently optimistic, and I am hopeful that the increased societal attention to racial disparities prompted by recent social injustices will translate to improve health care and outcomes for minorities," Stone said. "But it won't just happen without active concerted efforts to promote change and opportunity, a task for government, regulators, payors, hospital administrators, physicians and all health care providers."

This study has several limitations: Black and Hispanic patients were underrepresented in the trials, so there is potential for some selection bias; socioeconomic factors, which may influence differences in the risk of adverse outcomes after PCI, were not captured; and the term "Hispanic" was designated as a race rather than an ethnicity, potentially obscuring differences in outcomes between Hispanic patients and other groups. Because this study examined racial disparities only in the three largest minority groups, further studies are required to address other minority groups and ethnicities/subgroups.

"Achieving representative proportions of minorities in clinical trials is essential but has proven challenging," Stone said. "We must ensure that adequate numbers of hospitals and providers that are serving these patients participate in multicenter trials, and trust has to be developed so that minority populations have confidence to enroll in studies. Language can be a barrier, requiring translation services and extra time and effort. Involvement of the families and primary care physicians who have an established relationship with patients in clinical trial discussions is useful....Finally, enrollment of non-Hispanic whites can be capped at a certain proportion of the total to ensure adequate enrollment of other racial and ethnic groups."

In an accompanying editorial comment, Eric D. Peterson, MD, MPH, professor of medicine in the Division of Cardiology at Duke University School of Medicine, said the study findings serve as a reminder of how large the racial gaps in cardiovascular disease treatments and outcomes are and continue to be.

"Solving racial health disparities is a crucial and pressing priority for all in health care," Peterson said. "We must all commit to reversing racial health disparities for tomorrow's generation. Talking is no longer enough; it is our responsibility to finally deliver effective solutions."

Credit: 
American College of Cardiology

Behind the dead-water phenomenon

image: What makes ships mysteriously slow down or even stop as they travel, even though their engines are working properly? This was first observed in 1893 and was described experimentally in 1904 without all the secrets of this "dead water" being understood. A French team has explained this phenomenon for the first time.

Image: 
Morgane Parisi - www.StudioBrou.com

What makes ships mysteriously slow down or even stop as they travel, even though their engines are working properly? This was first observed in 1893 and was described experimentally in 1904 without all the secrets of this "dead water" being understood. An interdisciplinary team from the CNRS and the University of Poitiers has explained this phenomenon for the first time: the speed changes in ships trapped in dead water are due to waves that act like an undulating conveyor belt on which the boats move back and forth. This work was published in PNAS on July 6, 2020.

In 1893, the Norwegian explorer Fridtjof Nansen experienced a strange phenomenon while travelling north of Siberia: his ship was slowed by a mysterious force, and he could barely manoeuvre, let alone pick up normal speed. In 1904, the Swedish physicist and oceanographer Vagn Walfrid Ekman showed in the laboratory that waves forming beneath the surface, at the interface between the saltwater and freshwater layers that make up the upper portion of this area of the Arctic Ocean, interact with the ship and generate drag.
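
For orientation, a standard textbook result (not a formula taken from the PNAS paper) gives the speed of the long interfacial waves responsible for dead-water drag in a two-layer fluid:

$$c = \sqrt{g'\,\frac{h_1 h_2}{h_1 + h_2}}, \qquad g' = g\,\frac{\rho_2 - \rho_1}{\rho_2},$$

where $h_1$ and $h_2$ are the thicknesses of the lighter upper layer and the denser lower layer and $\rho_1 < \rho_2$ are their densities. The drag is strongest when the ship's speed approaches this wave speed $c$.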

This phenomenon, called dead water, is seen in all seas and oceans where waters of different densities (because of salinity or temperature) mix. It denotes two drag phenomena observed by scientists. The first, Nansen wave-making drag, causes a constant, abnormally low speed. The second, Ekman wave-making drag, is characterized by speed oscillations in the trapped boat. The cause of this was unknown. Physicists, fluid mechanics experts, and mathematicians at the CNRS' Institut Pprime and the Laboratoire de Mathématiques et Applications (CNRS/Université de Poitiers) have attempted to solve this mystery. They used a mathematical classification of different internal waves and analysis of experimental images at the sub-pixel scale, a first.

This work showed that these speed variations are due to the generation of specific waves that act as an undulating conveyor belt on which the ship moves back and forth. The scientists have also reconciled the observations of both Nansen and Ekman. They have shown that the Ekman oscillating regime is only temporary: the ship ends up escaping and reaches the constant Nansen speed.

This work is part of a major project[1] investigating why, during the Battle of Actium (31 BC), Cleopatra's large ships lost when they faced Octavian's weaker vessels. Might the Bay of Actium, which has all the characteristics of a fjord, have trapped the Queen of Egypt's fleet in dead water? This offers another hypothesis to explain the resounding defeat, which in antiquity was attributed to remoras, 'suckerfish' said to have attached themselves to the ships' hulls.

Credit: 
CNRS

A 'breath of nothing' provides a new perspective on superconductivity

Zero electrical resistance at room temperature? A material with this property, i.e. a room-temperature superconductor, could revolutionize power distribution. But so far, the origin of superconductivity at high temperature is only incompletely understood. Scientists from Universität Hamburg and the Cluster of Excellence "CUI: Advanced Imaging of Matter" have now succeeded for the first time in observing strong evidence of superfluidity in a central model system, a two-dimensional gas cloud. The scientists report their experiments, which allow key issues of high-temperature superconductivity to be investigated in a very well-controlled model system, in the renowned journal "Science".

There are things that aren't supposed to happen. For example, water cannot flow from one glass to another through the glass wall. Surprisingly, quantum mechanics allows this, provided the barrier between the two liquids is thin enough. Due to the quantum mechanical tunneling effect, particles can penetrate the barrier, even if the barrier is higher than the level of the liquids. Even more remarkably, this current can flow even when the level on both sides is the same, or when the current must flow slightly uphill. For this, however, the fluids on both sides must be superfluids, i.e. they must be able to flow around obstacles without friction.

This striking phenomenon was predicted by Brian Josephson during his doctoral thesis, and it is of such fundamental importance that he was awarded the Nobel Prize for it. The current is driven only by the wave nature of the superfluids and can, among other things, ensure that the superfluid begins to oscillate back and forth between the two sides - a phenomenon known as Josephson oscillations.

The Josephson effect was first observed in 1962 between two superconductors. In the experiment - in direct analogy to the water flow without level difference - an electric current could flow through a tunnel contact without an applied voltage. With this discovery, an impressive proof had been provided that the wave nature of matter in superconductors can be observed even at the macroscopic level.
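
For reference, the two textbook Josephson relations for such a junction (standard results, not equations quoted from the Science paper) are

$$I = I_c \sin\varphi, \qquad \frac{d\varphi}{dt} = \frac{2eV}{\hbar},$$

so a supercurrent up to the critical current $I_c$ can flow at zero voltage, while a fixed voltage $V$ makes the phase difference $\varphi$ advance in time and the current oscillate at the frequency $f = 2eV/h$.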

Now, for the first time, the scientists in Prof. Henning Moritz's group have succeeded in observing Josephson oscillations in a two-dimensional (2D) Fermi gas. These Fermi gases consist of a "breath of nothing", namely a gas cloud of only a few thousand atoms. If they are cooled to a few millionths of a degree above absolute zero, they become superfluid. They can then be used to study superfluids in which the particles interact strongly with each other and live in only two dimensions - a combination that seems to be central to high-temperature superconductivity, but which is still only incompletely understood.

"We were amazed at how clearly the Josephson oscillations were visible in our experiment. This is clear evidence of phase coherence in our ultracold 2D Fermi gas," says first author Niclas Luick. "The high degree of control we have over our system has also allowed us to measure the critical current above which the superfluidity breaks down."

"This breakthrough opens up many new opportunities for us to gain insights into the nature of strongly correlated 2D superfluids", says Prof. Moritz, "These are of outstanding importance in modern physics, but very difficult to simulate theoretically. We are pleased to contribute to a better understanding of these quantum systems with our experiment."

Credit: 
University of Hamburg

2D semiconductors found to be close-to-ideal fractional quantum Hall platform

image: A monolayer semiconductor is found to be a close-to-ideal platform for fractional quantum Hall state—a quantum liquid that emerges under large perpendicular magnetic fields. The image illustrates monolayer WSe2 hosting "composite fermions," a quasi-particle that forms due to the strong interactions between electrons and is responsible for the sequence of fractional quantum Hall states.

Image: 
Cory Dean/Columbia University

New York, NY--July 6, 2020--Columbia University researchers report that they have observed a quantum fluid known as the fractional quantum Hall state (FQHS), one of the most delicate phases of matter, for the first time in a monolayer 2D semiconductor. Their findings demonstrate the excellent intrinsic quality of 2D semiconductors and establish them as a unique test platform for future applications in quantum computing. The study was published online today in Nature Nanotechnology.

"We were very surprised to observe this state in 2D semiconductors because it has generally been assumed that they are too dirty and disordered to host this effect," says Cory Dean, professor of physics at Columbia University. "Moreover, the FQHS sequence in our experiment reveals unexpected and interesting new behavior that we've never seen before, and in fact suggests that 2D semiconductors are close-to-ideal platforms to study FQHS further."

The fractional quantum Hall state is a collective phenomenon that comes about when researchers confine electrons to move in a thin two-dimensional plane and subject them to large magnetic fields. First discovered in 1982, the fractional quantum Hall effect has been studied for nearly 40 years, yet many fundamental questions still remain. One of the reasons for this is that the state is very fragile and appears only in the cleanest materials.
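
For context, two standard textbook expressions (not results specific to this study) describe this regime: the Landau-level filling factor and the principal composite-fermion sequence of fractional states,

$$\nu = \frac{n h}{e B}, \qquad \nu = \frac{p}{2p \pm 1} \quad (p = 1, 2, 3, \dots),$$

where $n$ is the 2D electron density and $B$ is the perpendicular magnetic field; fractional quantum Hall states appear at fractional fillings such as $\nu = 1/3,\ 2/5,\ 3/7,\ \dots$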

"Observation of the FQHS is therefore often viewed as a significant milestone for a 2D material--one that only the very cleanest electronic systems have reached," notes Jim Hone, Wang Fong-Jen Professor of Mechanical Engineering at Columbia Engineering.

While graphene is the best known 2D material, a large group of similar materials have been identified over the past 10 years, all of which can be exfoliated down to a single layer thickness. One class of these materials is transition metal dichalcogenides (TMD), such as WSe2, the material used in this new study. Like graphene, they can be peeled to be atomically thin, but, unlike graphene, their properties under magnetic fields are much simpler. The challenge has been that the crystal quality of TMDs was not very good.

"Ever since TMD came on the stage, it was always thought of as a dirty material with many defects," says Hone, whose group has made significant improvement to the quality of TMDs, pushing it to a quality near to graphene--often considered the ultimate standard of purity among 2D materials.

In addition to sample quality, studies of 2D semiconductor materials have been hindered by the difficulty of making good electrical contact. To address this, the Columbia researchers have been developing the capability to measure electronic properties by capacitance, rather than by the conventional method of flowing a current and measuring the resistance. A major benefit of this technique is that the measurement is less sensitive both to poor electrical contact and to impurities in the material. The measurements for this new study were performed under very large magnetic fields--which help to stabilize the FQHS--at the National High Magnetic Field Lab.

"The fractional numbers that characterize the FQHS we observed--the ratios of the particle to magnetic flux number--follow a very simple sequence," says Qianhui Shi, the paper's first author and a postdoctoral researcher at the Columbia Nano Initiative. "The simple sequence is consistent with generic theoretical expectations, but all previous systems show more complex and irregular behavior. This tells us that we finally have a nearly ideal platform for the study of FQHS, where experiments can be directly compared to simple models."

Among the fractional numbers, one has an even denominator. "Observing the fractional quantum Hall effect was itself surprising; seeing the even-denominator state in these devices was truly astonishing, since previously this state has only been observed in the very best of the best devices," says Dean.

Fractional states with even denominators have received special attention since their first discovery in the late 1980s, since they are thought to represent a new kind of particle, one with quantum properties different from any other known particle in the universe. "The unique properties of these exotic particles," notes Zlatko Papic, associate professor in theoretical physics at the University of Leeds, "could be used to design quantum computers that are protected from many sources of errors."

So far, experimental efforts to both understand and exploit the even denominator states have been limited by their extreme sensitivity and the extremely small number of materials in which this state could be found. "This makes the discovery of the even denominator state in a new--and different--material platform, really very exciting," Dean adds.

The two Columbia University laboratories--the Dean Lab and the Hone Group--worked in collaboration with the National Institute for Materials Science (NIMS) in Japan, which supplied some of the materials, and with Papic, whose group performed computational modelling of the experiments. Both Columbia labs are part of the university's Materials Research Science and Engineering Center. This project also used clean room facilities at both the Columbia Nano Initiative and City College. Measurements at large magnetic fields were made at the National High Magnetic Field Laboratory, a user facility funded by the National Science Foundation and headquartered at Florida State University in Tallahassee, Florida.

Now that the researchers have very clean 2D semiconductors as well as an effective probe, they are exploring other interesting states that emerge from these 2D platforms.

Credit: 
Columbia University School of Engineering and Applied Science

Highest peak power and excellent stability

image: The average pulse energy E_mean = 52.5 mJ was measured over a period of 120 minutes. The standard deviation is 0.23%; the pulse-to-pulse energy fluctuations are 2.1%. Left inset: beam profile (far-field intensity distribution). Right inset: autocorrelation trace of the re-compressed 52.5 mJ pulses, measured and simulated.

Image: 
MBI

Power-scalable ultrafast laser sources in the midwave-infrared (MWIR) are a key element for basic research and for applications in material processing and medicine. Optical amplifiers based on chirped pulse amplification (CPA) are used to generate high-intensity pulses, a technique recognized with the Nobel Prize in Physics in 2018. In the CPA scheme, a weak, temporally stretched seed pulse is amplified to high energy in a laser amplifier and finally re-compressed, resulting in an ultrashort pulse of very high intensity. Applying this concept, a new system developed at MBI delivers few-ps pulses at 2 μm wavelength with peak power beyond 10 GW (10 billion watts) at a 1 kHz repetition rate. The emitted pulses are characterized by excellent stability and brilliant beam quality. The results are reported in the latest issue of Optics Letters.

The main amplifiers of the 2-μm CPA system are based on Ho:YLF crystals and consist of a highly stable regenerative amplifier and two booster amplifiers. All of them are operated at room temperature and pumped by continuous-wave Tm:fiber lasers with a total power of 270 W. Starting from a 2-μm supercontinuum source, the seed pulses are stretched and pre-amplified and subsequently fed into the Ho:YLF amplifier chain. The re-compressed pulse energy of the Ho:YLF CPA amounts to 52.5 mJ and reveals an excellent pulse-to-pulse stability of 93%. The recorded autocorrelation trace exhibits a FWHM of 4.1 ps. This corresponds to a main-pulse duration of 2.4 ps (FWHM) with an estimated energy content of 85%, translating into 17 GW peak power. The peak power and the pulse energy of >50 mJ represent the highest values achieved to date for few-ps pulses at 2 μm wavelength.
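
As a rough consistency check, and assuming a Gaussian temporal profile (the pulse shape is not stated explicitly above), the quoted peak power follows from the pulse energy, the main-pulse energy content and the pulse duration:

$$P_{\text{peak}} \approx 0.94\,\frac{\eta E}{\tau_{\text{FWHM}}} = 0.94 \times \frac{0.85 \times 52.5\ \text{mJ}}{2.4\ \text{ps}} \approx 17\ \text{GW}.$$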

This source is currently being applied as the pump in a system for the generation of few-cycle pulses around 5 μm with multi-millijoule energies. Applications in nonlinear optics, spectroscopy and materials processing are underway.

Credit: 
Forschungsverbund Berlin

The electrified brain

image: Fiber tracts within the target area for deep brain stimulation: The image depicts electrode contacts in 50 patients with obsessive-compulsive disorder and the nerve fibers associated with positive (red) and negative (blue) outcomes.

Image: 
Horn/Charité

A group of researchers from Charité - Universitätsmedizin Berlin have further refined the use of deep brain stimulation in the treatment of obsessive-compulsive disorder. By accurately localizing electrode placement in the brains of patients, the researchers were able to identify a fiber tract which is associated with the best clinical outcomes following deep brain stimulation. The researchers' findings, which have been published in Nature Communications, may be used to improve the treatment of obsessive-compulsive disorder.

A person with obsessive-compulsive disorder (OCD) experiences unwanted thoughts and behaviors, the urge for which they find difficult or impossible to resist. More than 2 percent of people are affected by obsessive thoughts and compulsive behaviors that severely impair daily activities. A treatment option for severe cases is deep brain stimulation, a technique which is also used in other disorders, such as Parkinson's disease. Deep brain stimulation involves the implantation of tiny electrodes into structures deep inside the brain. After implantation, these electrodes deliver very weak electric currents to help rebalance brain activity. By stimulating different areas of the brain, such as a fiber tract within the internal capsule or the subthalamic nucleus, this technique can help improve clinical symptoms in some cases. Treatment success depends on the accurate placement of electrodes and requires millimeter-level precision. The optimal stimulation target for patients with obsessive-compulsive disorder had not previously been identified.

For the first time, a team of researchers - led by Dr. Andreas Horn of Charité's Department of Neurology with Experimental Neurology - has been able to identify a specific nerve bundle which appears to be the optimal target for stimulation. The researchers studied 50 patients with obsessive-compulsive disorder who received treatment at a number of centers around the world. Using magnetic resonance imaging both before and after electrode placement, the researchers were able to visualize the surrounding fiber tracts and test which of these the electrodes were selectively stimulating. "Our analysis shows that optimal results are linked to a very specific nerve bundle. Reliable evidence for this link was found across the cohorts of patients examined in Cologne, Grenoble, London and Madrid," explains Dr. Horn.

The researchers initially examined two cohorts of patients, both of which received deep brain stimulation to the internal capsule or the subthalamic nucleus. These brain structures have a variety of connections to other areas of the brain. And yet, a specific tract situated between the prefrontal cortex and the subthalamic nucleus was identified as a suitable target for stimulation in both of these groups. Precise electrode localizations allowed the researchers to reliably predict treatment outcomes in both of these groups. These results were then replicated in two further, independent cohorts. When comparing their results with other studies, the researchers showed that the target areas described were also located within the tract-target identified in this study.

Describing the way in which these findings could help with electrode implantation, the study's first author, Ningfei Li, says: "Our results do not alter the original target area, they simply helped us to define it more precisely. What this means is that, so far, we have had to steer our boat toward an island which was shrouded in fog. Now, we can make out the island itself and perhaps even the pier, so we can aim for it with greater accuracy." All 3D structural analysis data have been made publicly available to researchers around the world. Charité does not currently treat patients with obsessive-compulsive disorder using this invasive method of deep brain stimulation. However, the participating research centers continue to share their knowledge and are developing protocols for additional studies to test the newly defined target areas.

Credit: 
Charité - Universitätsmedizin Berlin

Increased risk of injury in contact sports after prolonged training restrictions

As professional sports look to make a phased return behind closed doors across much of Europe, researchers from the University of Bath caution that the prolonged individual training players have been exposed to for months is insufficient to help athletes maintain the physical fitness and mental strength they need for competition.

Writing in the International Journal of Sports Medicine the researchers and sports physicians express their fears that injuries could increase once competitions resume and make recommendations for resuming training.

Most athletes are attempting to overcome the current coronavirus crisis by undertaking individual training within their own four walls to stay fit. But this might not be enough for those involved in contact sports, writes Professor Keith Stokes.

This is because, in addition to physical fitness, such sporting activities require training in evasive manoeuvres and contact situations. It is also nearly impossible to practice and hone game-strategy skills when working alone. In addition, the researchers suggest, the restrictions imposed on training and games also affect players' morale, which negatively impacts their mental health.

In the paper the researchers draw parallels with what happened in American football in 2011. That year, the National Football League had a 20-week lockout when clubs and players could not agree on payment. On returning to competition, injuries were more frequent, especially injuries to the Achilles tendon.

Professor Keith Stokes from the University of Bath's Department for Health and also England Rugby explains: "After months out of the game, without access to proper training facilities for much of that time, the return to playing matches must be carefully managed.

"Clubs must balance the need to prepare players for high levels of performance, the risk of injury after such a long lay-off, and the risk of infection with SARS-CoV-2. The key will be to build appropriate progression into training to give players the safest and most effective possible return."

In their paper the authors give practical advice on how athletes can protect themselves from injury once they resume sports activities suggesting that:

- Athletes should work on their individual weaknesses during the period of training restriction.

- Before return to full training a sports medical examination should be undertaken to inform training progression.

- Athletes who had COVID-19 themselves should be very carefully managed. Strength and muscle mass might be impacted, but there are also potential impacts of the infection on the heart.

- Reintroduction to training requires an individualised approach in these athletes.

In addition to their athletic abilities, players' nutritional condition and mental health may suffer during training restrictions. These two aspects would therefore also have to be taken into account when planning the return to training and games. The authors recommend a high-protein diet, supplemented with vitamins D and C and probiotics as appropriate.

They also point out that a forced, abrupt cessation of activity is often even more stressful for athletes than it is for other people. It is common for athletes to develop what is known as "detraining syndrome", which is characterised by insomnia, anxiety and depression; it can have a direct effect on their physical fitness and can delay their resumption of training.

Despite this, the authors are confident that most players will be able to play competitively again after a roughly six-week preparatory period. However, a great deal depends on how long the forced stop of competition has lasted and on what conditions training and games can resume.

Credit: 
University of Bath

Electrically focus-tuneable ultrathin lens for high-resolution square subpixels

image: (a) Illustration of applying the ETF-USSL in a display. The USSL enables multifocusing, allowing implementation of glassless 3D and multiview displays, and the ETF characteristics enable a variable viewing angle. (b) The spot intensity depends on the distance along the z-axis and the lateral distance between the focal spots of peaks 1 and 2. At a fixed focal-length position, the maximum intensity of the focal spot decreases as the focal length of the USSL becomes longer owing to the driving voltage. (c) Schematic of the tuneable focal length when a DC voltage bias is applied to graphene in the in-plane direction. In the graphene ribbon, the centre area (C) absorbs the light, while the carriers are concentrated on the left (L) and right (R) sides due to the DC bias; there, the Fermi level is far from the Dirac point, and light is not absorbed but transmitted. Consequently, changing the nanoribbon width via an external electric field effectively modulates the FZP topology, thereby changing the focal length of the lens.

Image: 
Sehong Park, Gilho Lee, Byeongho Park, Youngho Seo, Chae bin Park, Young Tea Chun, Chulmin Joo, Junsuk Rho, Jong Min Kim, James Hone & Seong Chan Jun

Traditional tuneable lenses, which combine complex lens assemblies with mechanical manipulation systems, are limited in their design by the space they occupy, which ultimately confines their applications in advanced pixel-based devices such as flat-panel displays. Graphene can be patterned into nanoribbons, and a graphene-based Fresnel zone plate (FZP) lens can then be an ideal combination of near and far optical fields, because the optical conductivity of graphene can be tuned by adjusting the Fermi level or by varying the geometry. In the past, the lenticular lenses and parallax barriers used in multiview autostereoscopic displays were considered infeasible owing to their thickness, low transmittance, high aberration and low resolution. An original device with high optical performance and advantageous physical properties was therefore in constant demand.

As published in Light: Science & Applications in June 2020, a research team led by Professor Seong Chan Jun at Yonsei University, Korea, together with fellow researchers from POSTECH, the University of Cambridge (UK) and Columbia University (USA), has developed a graphene-based ultrathin square subpixel lens that works by controlling the carrier distribution, and hence the Fermi level, to alter the absorption characteristics. The Fresnel lens made of graphene enables electrically tuneable focusing based on the difference in absorption depending on the position of the Fermi level. Because the lens is designed as an arc-ribbon pattern, the effective spacing of the arc ribbons is controlled by the difference in the carrier distribution depending on the position of the electric field. Accordingly, the diffraction characteristics of the slits vary such that the focal length can be adjusted in the visible regime without any change in the design. Furthermore, the lens can be customized according to the wavelength of each subpixel in the display device without any additional light source or device. Thus, a multifunctional display using an ultrathin square subpixel lens with high transmittance and high resolution can be realized.
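
For orientation, the standard Fresnel-zone-plate geometry (a textbook relation, not a formula quoted from the paper) links the zone radii $r_m$, the wavelength $\lambda$ and the focal length $f$:

$$r_m \approx \sqrt{m\,\lambda\,f} \quad\Longrightarrow\quad f \approx \frac{r_m^{2}}{m\,\lambda},$$

so electrically shifting the effective boundaries of the absorbing arc ribbons changes the effective zone radii and hence the focal length for a given subpixel wavelength.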

This ultrathin graphene lens is uniquely designed for the user's field of view (FOV) in multiview autostereoscopic displays. Composed of 5 layers of graphene, the electrically focus-tuneable ultrathin device exhibits 82% transmittance and above 60% focusing efficiency. Furthermore, it shows a 19.42% shift in focal length, which provides multifocusing according to the observer's FOV. This ultrathin focusing device therefore allows the realization of multiview autostereoscopic displays without an additional calibration system. The scientists summarize the working principle of the device as follows:

"The electric field normal to the plane due to the DC bias concentrates the carrier density at the edges of the arc ribbon. The arc ribbon absorbs light in the central area (C), but the Fermi level on the left (L) and right (R) sides shifts away from the Dirac point due to the increase in the carriers. This results in a longer focal length because the decrease in the size of the arc ribbon increases the linearity of the diffraction by the arc ribbon"

"This subpixel lens can be uniquely designed according to the wavelength of each RGB subpixels. Therefore, the chromatic aberration that frequently occurs in conventional lenticular devices can be eliminated, and each individual wavelength of light can be focused into a single focal spot."

"The device's structural advantage within the subpixel scale can be embedded into each individual pixel in glassless 3D displays, privacy displays, and multiview displays for display applications. In addition, this design can be customised for 3D hologram displays, acoustic applications, and optical devices comprising metasurfaces, as expected by the researchers."

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS