Tech

MTU and Argonne engineers improve signal processing for small fiber optic cables

image: New research from a collaboration between Michigan Tech and Argonne National Laboratory further improves optical signal transmission, which could lead to the fabrication of even smaller fiber-optic devices. The image was taken on Michigan Tech's FEI 200kV Titan Themis Scanning Transmission Electron Microscope.

Image: 
Miguel Levy/Michigan Tech

Optical signals produced by laser sources are extensively used in fiber-optic communications, which work by pulsing information packaged as light through cables, even at great distances, from a transmitter to a receiver. Through this technology it is possible to transmit telephone conversations, internet messages and cable television images. The great advantage of this technology over electrical signal transmission is its bandwidth -- namely, the amount of information that can be broadcast.

New research from a collaboration between Michigan Technological University and Argonne National Laboratory further improves optical signal processing, which could lead to the fabrication of even smaller fiber-optic devices.

The article, unveiling an unexpected mechanism in optical nonreciprocity -- developed by the research group of Miguel Levy, professor of physics at Michigan Tech -- has been published in the journal Optica. "Boosting Optical Nonreciprocity: Surface Reconstruction in Iron Garnets" explains the quantum and crystallographic origins of a novel surface effect in nonreciprocal optics that improves the processing of optical signals.

An optical component called the magneto-optic isolator appears ubiquitously in these optical circuits. Its function is to protect the laser source -- the place where light is generated before transmission -- from unwanted light that might be reflected back from downstream. Any such light entering the laser cavity endangers the transmitted signal because it creates the optical equivalent of noise.

"Optical isolators work on a very simple principle: light going in the forward direction is allowed through; light going in the backwards direction is stopped," Levy said. "This appears to violate a physical principle called time-reversal symmetry. The laws of physics say that if you reverse the direction of time -- if you travel backwards in time -- you end up exactly where you started. Therefore, the light going back should end up inside the laser."

But the light doesn't. Isolators achieve this feat by being magnetized. North and south magnetic poles in the device do not switch places for light coming back.

"So forward and backward directions actually look different to the traveling light. This phenomenon is called optical nonreciprocity," Levy said.

Optical isolators need to be miniaturized for on-chip integration into optical circuits, a process similar to the integration of transistors into computer chips. But that integration requires the development of materials technologies that can produce more efficient optical isolators than presently available.

Recent work by Levy's research group has demonstrated an order-of-magnitude improvement in the physical effect responsible for isolator operation. This finding, observable in nanoscale iron garnet films, opens up the possibility of much tinier devices. New materials technology development of this effect hinges on understanding its quantum basis.

The research group's findings provide precisely this type of understanding. This work was done in collaboration with physics graduate student Sushree Dash, Applied Chemical and Morphological Analysis Laboratory staff engineer Pinaki Mukherjee and Argonne National Laboratory staff scientists Daniel Haskel and Richard Rosenberg.

The Optica article explains the role of the surface in the electronic transitions responsible for the observed enhanced magneto-optic response. These were observed with the help of Argonne's Advanced Photon Source. Mapping the surface reconstruction underlying these effects was made possible through the state-of-the-art scanning transmission electron microscope acquired by Michigan Tech two years ago.

The new understanding of magneto-optic response provides a powerful tool for the further development of improved materials technologies to advance the integration of nonreciprocal devices in optical circuits.

Credit: 
Michigan Technological University

Can pumping up cold water from deep within the ocean halt coral bleaching?

image: Part of an experimental setup (heat plus AU from 50m depth) in a wet lab at the Bermuda Institute of Ocean Sciences (BIOS). The cold, deep water is supplied once a day through the silicone tubing. It is allowed to mix with the ambient water in each tank before overflowing onto the concrete table and draining out.

Image: 
Yvonne Sawall

The risk of severe coral bleaching--a condition in which corals lose their symbiotic algae, called zooxanthellae--is five times more frequent today than it was forty years ago. Coral bleaching is a direct result of global warming, where rising temperatures cause marine heat waves, which place stress on the living coral animals, as well as the photosynthetic algae on which they depend for energy. This heat stress causes the algae to malfunction, at which point they are expelled by the corals, causing the organisms to lose their color and appear white (thus the term coral "bleaching").

Due to the increasing pressure of global warming on highly valuable coral reef ecosystems, scientists are now seeking novel ways to decrease heat stress on corals. A new study led by Yvonne Sawall, assistant scientist at the Bermuda Institute of Ocean Sciences (BIOS), is showing potential for the use of artificial upwelling (AU)--or the application of cooler, deep water--as a way to mitigate the thermal stress on corals.

Upwelling is a natural oceanographic process in which winds push surface waters away from a region, such as a coastline, allowing the uplift of deep, cold waters to the surface. These waters are typically rich in nutrients and form the basis of productive marine ecosystems which, in turn, support many of the world's most important commercial fisheries. AU is a geoengineering method that uses pumps to bring deep-ocean water to the surface. Originally designed to fertilize surface waters to increase fish stocks or carbon dioxide (CO2) sequestration, AU may also be used to cool surface waters during heat waves, if the depth and intensity of AU are chosen wisely.

"Ocean warming and the occurrence of heat waves will increase in frequency and intensity over the coming decades and we need to consider rather unconventional solutions to protect and sustain coral reefs," Sawall said.

With funding from the German Research Foundation (DFG, with principal investigator Yuming Feng, doctoral student at the GEOMAR Helmholtz Center for Ocean Research in Kiel, Germany), Sawall and her co-authors studied three shallow-water reef-building coral species in Bermuda: Montastrea cavernosa (great star coral), Porites astreoides (mustard hill coral), and Pseudodiploria strigosa (symmetrical brain coral).

After collecting fragments from living corals on Sea Venture Shoals, Bermuda, at a depth of 15 feet (5 meters), the research team placed the colonies in aquaria at BIOS to test the effects of deep cold-water pulses (AU) during thermal stress. Fragments were treated with various temperature conditions, including an average summer temperature (28°C); a heat stress treatment known to cause bleaching (31°C); a heat stress treatment with daily pulses of cooler deep water from a depth of 164 feet (50 m, 24°C); and a heat stress treatment with daily pulses of cooler deep water from a depth of 300 feet (100 m, 20°C). The deep water used for the experiment was collected aboard the BIOS-operated research vessel (R/V) Atlantic Explorer approximately 2 miles (3 km) off the Bermuda Platform.

The results of the study showed that even short intrusions of cooler deep water (less than two hours per day) can mitigate thermal stress in corals. This was evident in higher levels of zooxanthellae performance in corals exposed to heat stress and AU compared to corals that were exposed to heat stress only, and this effect appeared stronger in the treatments using water from greater depths.

"Our study shows the potential benefits of pulsed AU during heat waves. The next steps now are to find suitable AU settings to maximize the benefits, while minimizing potential harmful side effects of AU for corals and the ecosystem they support," Sawall said.

The Bermuda Institute of Ocean Sciences is an independent U.S. not-for-profit marine research and educational organization with 501(c)(3) status and a Bermuda Registered Charity (#116).

Credit: 
Bermuda Institute of Ocean Sciences

The accident preventers

Before autonomous vehicles participate in road traffic, they must demonstrate conclusively that they do not pose a danger to others. New software developed at the Technical University of Munich (TUM) prevents accidents by predicting different variants of a traffic situation every millisecond.

A car approaches an intersection. Another vehicle darts out of the cross street, but it is not yet clear whether it will turn right or left. At the same time, a pedestrian steps into the lane directly in front of the car, and there is a cyclist on the other side of the street. People with road traffic experience will in general assess the movements of other traffic participants correctly.

"These kinds of situations present an enormous challenge for autonomous vehicles controlled by computer programs," explains Matthias Althoff, Professor of Cyber-Physical Systems at TUM. "But autonomous driving will only gain acceptance among the general public if you can ensure that the vehicles will not endanger other road users - no matter how confusing the traffic situation."

Algorithms that peer into the future

The ultimate goal when developing software for autonomous vehicles is to ensure that they will not cause accidents. Althoff, who is a member of the Munich School of Robotics and Machine Intelligence at TUM, and his team have now developed a software module that continuously analyzes and predicts events while driving. Vehicle sensor data are recorded and evaluated every millisecond. The software can calculate all possible movements for every traffic participant - provided they adhere to the road traffic regulations - allowing the system to look three to six seconds into the future.

Based on these future scenarios, the system determines a variety of movement options for the vehicle. At the same time, the program calculates potential emergency maneuvers in which the vehicle can be moved out of harm's way by accelerating or braking without endangering others. The autonomous vehicle may only follow routes that are free of foreseeable collisions and for which an emergency maneuver option has been identified.

Streamlined models for swift calculations

This kind of detailed traffic situation forecasting was previously considered too time-consuming and thus impractical. But now, the Munich research team has shown not only the theoretical viability of real-time data analysis with simultaneous simulation of future traffic events: They have also demonstrated that it delivers reliable results.

The quick calculations are made possible by simplified dynamic models. So-called reachability analysis is used to calculate potential future positions a car or a pedestrian might assume. When all characteristics of the road users are taken into account, the calculations become prohibitively time-consuming. That is why Althoff and his team work with simplified models. These allow a greater range of motion than the real road users, yet are mathematically easier to handle. The extra freedom of movement means the models cover a larger set of possible positions - one that includes, as a subset, every position an actual road user could reach.
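The idea behind such an over-approximating reachability check can be sketched in a few lines. The following is a hypothetical, heavily simplified one-dimensional illustration (not the TUM software): a point-mass model with bounded acceleration over-approximates where a road user could be at each future instant, and a planned route is rejected if it ever intersects that reachable set.

```python
# Minimal 1-D reachability sketch (illustrative only).
# A point-mass model with bounded acceleration over-approximates
# the positions a road user could occupy at time t.

def reachable_interval(x0, v0, a_max, t):
    """Over-approximate the positions reachable after t seconds,
    starting at x0 with speed v0 and acceleration |a| <= a_max."""
    lo = x0 + v0 * t - 0.5 * a_max * t * t
    hi = x0 + v0 * t + 0.5 * a_max * t * t
    return lo, hi

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def plan_is_safe(ego_positions, ped_x0, ped_v0, ped_a_max, dt=0.1):
    """Check the ego vehicle's planned position interval at each time
    step against the pedestrian's reachable set over the horizon."""
    for k, ego in enumerate(ego_positions):
        t = k * dt
        ped = reachable_interval(ped_x0, ped_v0, ped_a_max, t)
        if intervals_overlap(ego, ped):
            return False  # foreseeable collision: reject this route
    return True
```

Because the reachable set only grows relative to the true dynamics, a plan declared safe here is also safe for the real road user - the conservatism is the price of the speedup.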

Real traffic data for a virtual test environment

For their evaluation, the computer scientists created a virtual model based on real data they had collected during test drives with an autonomous vehicle in Munich. This allowed them to craft a test environment that closely reflects everyday traffic scenarios. "Using the simulations, we were able to establish that the safety module does not lead to any loss of performance in terms of driving behavior, the predictive calculations are correct, accidents are prevented, and in emergency situations the vehicle is demonstrably brought to a safe stop," Althoff sums up.

The computer scientist emphasizes that the new safety software could simplify the development of autonomous vehicles because it can be combined with all standard motion control programs.

Credit: 
Technical University of Munich (TUM)

NASA finds wind shear not letting up on Tropical Storm Vicky

image: On Sept. 16, 2020 at 8:40 a.m. EDT (1240 UTC), NASA's Terra satellite provided a visible image of Tropical Storm Vicky battling wind shear in the eastern North Atlantic Ocean.

Image: 
NASA/NRL

NASA's Terra satellite obtained visible imagery of Tropical Storm Vicky as it continued moving through the eastern North Atlantic Ocean fighting strong wind shear. Outside winds are pushing at the storm and weakening it.

Terra Sees Wind Shear Tearing Vicky Apart

On Sept. 16, 2020 at 8:40 a.m. EDT (1240 UTC), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Terra satellite captured a visible image of Tropical Storm Vicky battling strong southwesterly wind shear. The image showed wispy clouds had surrounded the center of circulation, while the wind shear was blowing the bulk of clouds and showers to the northeast of the center.

NASA's Terra satellite is one in a fleet of NASA satellites that provide data for hurricane research.

About Wind Shear

The shape of a tropical cyclone provides forecasters with an idea of its organization and strength. When outside winds, known as wind shear, batter a storm, they can change its shape by pushing much of the associated clouds and rain to one side. Vicky has been in an area with strong southwesterly wind shear.

Dr. Michael Brennan, Branch Chief of NOAA's National Hurricane Center noted in the 11 a.m. EDT (1500 UTC) Discussion, "Hostile vertical shear of 50 to 60 knots has finally taken a toll on Vicky. A 1227 UTC ASCAT-B overpass showed peak winds of 35 knots north of the center, and that is the basis for the advisory intensity. The strong shear is expected to continue while Vicky moves over marginal 26-27 degrees Celsius sea surface temperatures, so additional weakening is forecast. Vicky should become a tropical depression in around 24 hours before weakening to a remnant low in about 2 days, with dissipation expected by day 3."

Vicky on Sept. 16

NOAA's National Hurricane Center (NHC) noted at 5 a.m. EDT (0900 UTC), the center of Tropical Storm Vicky was located near latitude 21.6 degrees north and longitude 33.9 degrees west. Vicky is centered about 755 miles (1,215 km) west-northwest of the Cabo Verde Islands. Vicky is moving toward the west-northwest near 9 mph (15 km/h). Maximum sustained winds are near 50 mph (85 km/h) with higher gusts. The estimated minimum central pressure is 1004 millibars.

Weakening Forecast for Vicky

A westward motion is expected to begin later today, followed by a west-southwestward motion by late Thursday [Sept. 17]. Gradual weakening is forecast over the next few days, and the system could become a remnant low on Thursday or Friday.

NASA Researches Earth from Space

For more than five decades, NASA has used the vantage point of space to understand and explore our home planet, improve lives and safeguard our future. NASA brings together technology, science, and unique global Earth observations to provide societal benefits and strengthen our nation. Advancing knowledge of our home planet contributes directly to America's leadership in space and scientific exploration.

For updated forecasts, visit: http://www.nhc.noaa.gov

By Rob Gutro 
NASA's Goddard Space Flight Center

Credit: 
NASA/Goddard Space Flight Center

Controlled dynamics of colloidal rods

image: Model of the colloidal rods of different lengths, which move like chess pieces on a magnetized chip.

Image: 
Adrian Ernst

Colloidal particles have become increasingly important for research as vehicles of biochemical agents. In the future, it will be possible to study their behaviour much more efficiently than before by placing them on a magnetised chip. A research team from the University of Bayreuth reports on these new findings in the journal Nature Communications. The scientists have discovered that colloidal rods can be moved on a chip quickly, precisely, and in different directions, almost like chess pieces. A pre-programmed magnetic field even enables these controlled movements to occur simultaneously.

For the recently published study, the research team, led by Prof. Dr. Thomas Fischer, Professor of Experimental Physics at the University of Bayreuth, worked closely with partners at the University of Poznań and the University of Kassel. To begin with, individual spherical colloidal particles constituted the building blocks for rods of different lengths. These particles were assembled in such a way as to allow the rods to move in different directions on a magnetised chip like upright chess pieces - as if by magic, but in fact determined by the characteristics of the magnetic field.

In a further step, the scientists succeeded in eliciting individual movements in various directions simultaneously. The critical factor here was the "programming" of the magnetic field with the aid of a mathematical code, which, in encoded form, specifies all the movements to be performed by the pieces. When these movements are carried out simultaneously, they take up to one tenth of the time needed if they are carried out one after the other like the moves on a chessboard.

"The simultaneity of differently directed movements makes research into colloidal particles and their dynamics much more efficient," says Adrian Ernst, doctoral student in the Bayreuth research team and co-author of the publication. "Miniaturised laboratories on small chips measuring just a few centimetres in size are being used more and more in basic physics research to gain insights into the properties and dynamics of materials. Our new research results reinforce this trend. Because colloidal particles are in many cases very well suited as vehicles for active substances, our research results could be of particular benefit to biomedicine and biotechnology," says Mahla Mirzaee-Kakhki, first author and Bayreuth doctoral student.

Credit: 
Universität Bayreuth

A new species of spider

image: Various views of a male specimen (upper row) and a female specimen (lower row) of the newly discovered spider species Ocrepeira klamt.

Image: 
Charlotte Hopfe

During a research stay in the highlands of Colombia conducted as part of her doctorate, Charlotte Hopfe, PhD student under the supervision of Prof. Dr. Thomas Scheibel at the Biomaterials research group at the University of Bayreuth, has discovered and zoologically described a new species of spider. The previously unknown arachnids are native to the central cordillera, not far from the Pacific coast, at an altitude of over 3,500 meters above sea level. In the journal PLOS ONE, the scientist from Bayreuth presents the spider she has called Ocrepeira klamt.

"I chose the zoological name Ocrepeira klamt in honour of Ulrike Klamt, my German teacher at high school. The enthusiasm with which she pursues her profession and the interest she shows in her students and in literature are an inspiration to me," says Charlotte Hopfe.

The cordillera in Colombia is famous for its unusually large variety of species. The habitats of these species are distributed across altitudes with very different climatic conditions, vegetation, and ecosystems. The Bayreuth researcher has collected and zoologically determined specimens of more than 100 species of spider in these habitats, working mainly in a region that has only been accessible to researchers since the end of the civil war in Colombia in 2016. She discovered the new spider, which differs from related species in the striking structure of its reproductive organs, at altitudes of over 3,500 meters above sea level. In identifying this and many other spider specimens, Hopfe received valuable support from researchers at Universidad del Valle in Cali, Colombia, with which the University of Bayreuth has a research cooperation. Colombia is a priority country in the University of Bayreuth's internationalization strategy, which is why the university maintains close connections with several Colombian universities.

The study of spiders from regions of such great climatic and ecological variety may also offer a chance to answer two as yet unexplored questions. It is not yet known whether temperatures, precipitation, or other climatic factors influence the evolution of spiders, or the properties of their silk. For example, is the proportion of species with extremely elastic silk higher in the lowland rainforest than in the semi-desert? It is also still unclear whether the properties of the silk produced by a species of spider are modified by climatic factors. Would a spider living in the high mountains, such as Ocrepeira klamt, produce the same silk if it were native to a much lower region of the cordillera? The answers to these questions could provide important clues as to the conditions under which unusual spider silks develop.

Along similar lines, it would also be interesting to explore whether there are spider silk proteins which, due to their properties, are even more suitable for certain applications in biomedicine and biotechnology than silk proteins currently known. "The greater the variety of spider silks whose structures and properties we know, the greater the potential to optimize existing biomaterials and to develop new types of biomaterials on the basis of silk proteins," Hopfe explains.

Credit: 
Universität Bayreuth

Perfectionists may be more prone to helicopter parenting, study finds

Perfectionists often have high standards, not only for themselves but for their children. Yet, in their quest for perfection, they might find themselves with a less-than-ideal label: helicopter parent.

So-called helicopter parents engage in what's known as "over-parenting" - hovering over their young adult children and taking care of tasks that the children should be able to do themselves, such as cooking, cleaning or paying bills.

"Over-parenting is when you apply what we call developmentally inappropriate parenting or guidance structure for the child," said University of Arizona researcher Chris Segrin, who studies the parenting style.

"By developmentally inappropriate, we mean we're providing to the child that which the child could easily do him or herself. People who engage in over-parenting are not adjusting their parenting and letting the child have greater autonomy; they still want to control all the child's outcomes."

The negative effects of over-parenting are well documented. Researchers have found it can lead to psychological distress, narcissism, poor adjustment, alcohol and drug use, and a host of other behavioral problems in emerging adults ages 18 to 25.

Yet, far less is known about why certain people become helicopter parents in the first place.

In a new study, Segrin and co-authors Tricia Burke from Texas State University and Trevor Kauer from the University of Nebraska find that perfectionism might be one driver of over-parenting.

"Perfectionism is a psychological trait of wanting to be perfect, wanting success, wanting to have positive accolades that you can point to," said Segrin, professor and head of the UArizona Department of Communication in the College of Social and Behavioral Sciences.

Perfectionist parents may see their children's success as a reflection on them, Segrin said, and they may engage in over-parenting in an effort to achieve "perfect" results.

"They want to live vicariously through their children's achievements. They want to see their children achieve because it makes them look good," he said. "I'm not saying they don't care about their children; of course they do. But they measure their self-worth by the success of their children. That's the yardstick that they use to measure their own success as a parent."

Segrin and his collaborators conducted two studies looking at the link between perfectionism and over-parenting, the results of which are published together in the journal Couple and Family Psychology: Research and Practice.

In the first study, 302 parents of young adults were asked to rate a series of statements designed to measure their levels of engagement in over-parenting and their levels of perfectionism. In the second, the researchers surveyed 290 parent-young adult pairs. The young adults responded to statements designed to measure their perception of their parent's parenting style.

The findings from both studies confirmed that perfectionism is indeed associated with helicopter parenting.

'Anxious Parents' May Also be Prone to Helicoptering

It's important to understand what motivates over-parenting in order to determine how to intervene in the potentially harmful behavior, Segrin says.

"All the research thus far on helicopter parenting, or over-parenting, has focused on what are the outcomes for the children who are the recipients of over-parenting, and no one has been looking at who does this in the first place," he said. "We think knowing more about the motivations of the parents has important implications for understanding what happens to the children."

Although he doesn't specifically address it in the study, Segrin suspects that middle-aged moms and dads who grew up in the "self-esteem era" of the 1970s and 1980s might be especially prone to perfectionism that can lead to over-parenting. In that era, children's bad behavior was often blamed on low self-esteem, and the remedy for low self-esteem was lots of praise, Segrin said.

"We started giving kids trophies at the end of the season just for being on the team, not because they actually achieved anything," he said. "Fast-forward 35, 40 years and these people are now adults who have children who are entering into adulthood. They were raised in a culture of 'you're special, you're great, you're perfect,' and that fuels perfectionistic drives. 'If I really am special, if I really am great, then my kids better be special and great, too, or it means I'm not a good parent.'"

Perfectionism isn't the only characteristic that can lead to over-parenting. Previous research by Segrin showed there's also a link between over-parenting and its close cousin: anxious parenting.

Anxious parents tend to worry a lot and ruminate on bad things that could happen to their child, so they parent with risk aversion in mind, Segrin said. His previous work showed that parents who have many regrets in their own lives may engage in this type of parenting as they try to prevent their children from repeating similar mistakes.

Just because someone engages in anxious parenting doesn't mean they engage in over-parenting, but anxious parenting is "one of the ingredients in the over-parenting stew," Segrin said, adding that anxious parenting can sometimes lead to over-parenting.

More Moms Than Dads Fall in the Over-parenting Trap

The parents in the study were mostly moms, and there's an explanation for that, Segrin said.

"When we recruit young people into the study and ask them to get a parent to also fill out the survey for us, we let them pick the parent, with the understanding that they will naturally lead us to the helicopter parent among their parents," Segrin said. "The one who's super involved in the child's life is, of course, going to want to participate in the research project with their child. So, like a moth to the flame, these young adults draw us right to the parent who delivers the most over-parenting, and we're finding that it is the mothers, usually."

That's not to say that dads can't be helicopter parents. They certainly can and in some cases are, Segrin said, but it seems to be less common.

"We know that in our culture, for better or worse, women end up getting strapped with child-rearing responsibilities to a much greater extent than men, so it stands to reason that as the child matures and gets older, the mother sort of stays on board with that job," he said.

Segrin hopes his research illuminates the hazards of helicopter parenting, not only for the young adults on the receiving end, but the parents themselves.

For perfectionism-driven helicopter parents to change their ways, they first need to recognize their own value, independent of their children, Segrin said.

"I sometimes see, especially in mothers, that they define their whole universe as 'mother' - not spouse, not wife, not worker, not hobbyist but 'mother.' I think those blurred boundaries between parent and child can be harmful to the psychological landscape of the parent," Segrin said. "We need the parents to realize they have some element of their own life - whether it's their career, their personal relationships, their hobbies - that's independent of their role as a parent, so they don't get caught up in this trap of wanting to just keep parenting their children until they're 40 years old."

Avoiding that trap is also important for the well-being of emerging adults, as a growing body of research shows.

"Parents need to learn to accept their children's own goals and give them the chance to explore," Segrin said. "Young adults need the room to go out and explore and find their own life and their own ambitions."

Credit: 
University of Arizona

Fast calculation dials in better batteries

image: A graph that maps the capacity of batteries to cathode thickness and porosity shows that a laborious search based on numerical simulations (black square) and a new Rice University algorithm (red dot) return nearly the same result. Rice researchers say their calculations are at least 100,000 times faster.

Image: 
Fan Wang/Rice University

HOUSTON - (Sept. 16, 2020) - A simpler and more efficient way to predict performance will lead to better batteries, according to Rice University engineers.

That their method is 100,000 times faster than current modeling techniques is a nice bonus.

The analytical model developed by materials scientist Ming Tang and graduate student Fan Wang of Rice University's Brown School of Engineering doesn't require complex numerical simulation to guide the selection and design of battery components and how they interact.

The simplified model developed at Rice -- freely accessible online -- does the heavy lifting with an accuracy within 10% of more computationally intensive algorithms. Tang said it will allow researchers to quickly evaluate the rate capability of batteries that power the planet.

The results appear in the open-access journal Cell Reports Physical Science.

There was a clear need for the updated model, Tang said.

"Almost everyone who designs and optimizes battery cells uses a well-established approach called P2D (for pseudo-two dimensional) simulations, which are expensive to run," Tang said. "This especially becomes a problem if you want to optimize battery cells, because they have many variables and parameters that need to be carefully tuned to maximize the performance.

"What motivated this work is our realization that we need a faster, more transparent tool to accelerate the design process, and offer simple, clear insights that are not always easy to obtain from numerical simulations," he said.

Battery optimization generally involves what the paper calls a "perpetual trade-off" between energy density (the amount a battery can store) and power density (the rate at which that energy can be released), both of which depend on the materials, their configurations and internal structures such as porosity.

"There are quite a few adjustable parameters associated with the structure that you need to optimize," Tang said. "Typically, you need to make tens of thousands of calculations and sometimes more to search the parameter space and find the best combination. It's not impossible, but it takes a really long time."
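The kind of brute-force parameter sweep Tang describes can be sketched in a few lines. The capacity function below is a toy stand-in written for illustration only, not the published Rice model, and the parameter ranges are assumed:

```python
# Hypothetical illustration of the parameter sweep described above:
# a brute-force search over electrode thickness and porosity.
# The objective function is a made-up trade-off, NOT the Rice model.

def usable_capacity(thickness_um, porosity):
    """Toy trade-off: thicker electrodes store more active material
    but become transport-limited, reducing utilization."""
    stored = thickness_um * (1.0 - porosity)       # active material loaded
    transport = porosity ** 1.5 / thickness_um     # ease of ion transport
    utilization = transport / (transport + 0.01)   # fraction actually usable
    return stored * utilization

# Sweep assumed ranges: thickness 20-200 micrometers, porosity 0.20-0.60.
best = max(
    ((t, p) for t in range(20, 201, 2) for p in (x / 100 for x in range(20, 61))),
    key=lambda tp: usable_capacity(*tp),
)
print(best)
```

Even this tiny sweep evaluates thousands of combinations; an analytical model that answers the same question in one closed-form evaluation is what eliminates that cost.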

He said the Rice model could be easily implemented in such common software as MATLAB and Excel, and even on calculators.

To test the model, the researchers let it search for the optimal porosity and thickness of an electrode in common full- and half-cell batteries. In the process, they discovered that electrodes with "uniform reaction" behavior such as nickel-manganese-cobalt and nickel-cobalt-aluminum oxide are best for applications that require thick electrodes to increase the energy density.

They also found that battery half-cells (with only one electrode) have inherently better rate capability, meaning their performance is not a reliable indicator of how electrodes will perform in the full cells used in commercial batteries.

The study is related to the Tang lab's attempts at understanding and optimizing the relationship between microstructure and performance of battery electrodes, the topic of several recent papers that showed how defects in cathodes can speed lithium absorption and how lithium cells can be pushed too far in the quest for speed.

Credit: 
Rice University

Immune 'cloaking' in cancer cells and implications for immunotherapy

Researchers at Queen Mary University of London, The Institute of Cancer Research, London, and the Moffitt Cancer Centre have created a mathematical model that can determine the impact of the immune system on tumour evolution. The information gained from using this model may be able to be used to predict whether immunotherapy is likely to be effective for a patient's cancer, helping to guide treatment decisions.

A battle between cancer cells and immune cells

Immune cells recognise tumours by detecting molecules present on the surface of cancer cells, known as neoantigens. Neoantigens are 'faulty' molecules that are made as the result of changes to the genetic code (mutations) within cancer cells. When immune cells scan the body, they recognise these faulty molecules as being non-self, which triggers an immune response against the cancer.

In order to overcome destruction by the immune system, the cancer cells fight back by developing 'cloaking' mechanisms which they can use to hide from immune cells. Consequently, the cancer can continue to grow undetected within the body.

The team, led by Dr Eszter Lakatos and Professor Trevor Graham from Queen Mary University of London, and Professor Andrea Sottoriva from The Institute of Cancer Research (ICR), set out to develop a computational model to map the arms race that occurs between cancer cells and immune cells as tumours evolve. Using genomic data from bowel, stomach and endometrial cancers available from the Cancer Genome Atlas, the team developed a model that works out the interplay between the immune system and cancer cells using the genetic codes of cancer cells as its input. By calculating the number of neoantigens present in a tumour and looking at how fast they have accumulated, the model is able to predict when the cancer is likely to activate its cloaking mechanisms against the immune system.
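The depletion logic at the heart of this kind of inference can be illustrated with a deliberately simplified sketch. The neutral neoantigen fraction and the scoring rule here are hypothetical, not the published method:

```python
# Toy illustration (not the published model): infer immune "cloaking" from
# neoantigen depletion. Under neutral evolution a roughly fixed fraction of
# mutations should be neoantigens; an actively hunting immune system removes
# cells carrying them, so the observed fraction drops below that expectation.

NEUTRAL_NEOANTIGEN_FRACTION = 0.3   # assumed fraction of mutations that are immunogenic

def cloaking_score(total_mutations, observed_neoantigens):
    """Ratio of observed to expected neoantigens: near 1 suggests cloaking
    (no selection against neoantigens); well below 1 suggests active
    immune pressure is still editing the tumour."""
    expected = total_mutations * NEUTRAL_NEOANTIGEN_FRACTION
    return observed_neoantigens / expected

# A tumour carrying nearly the neutral load of neoantigens has likely cloaked:
print(cloaking_score(1000, 290))   # close to 1.0
print(cloaking_score(1000, 120))   # well below 1.0: immune editing ongoing
```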

Dr Lakatos from Queen Mary University of London said: "Mathematical modelling can help us re-create biological processes even when only a small part of the picture is available. It is especially useful in determining how a cancer developed, as we typically only see a single snapshot of that process. We developed our model to characterise all possible developing cancers - including the ones that are successfully destroyed by the immune system - to highlight what measurements are most informative about their future evolution."

The research was published today in Nature Genetics and primarily funded by Cancer Research UK, with additional support from Wellcome and the National Institutes of Health.

Considerations for immunotherapy

The predictions from the model may have important implications for immunotherapy sensitivity in patients with cancer. Immunotherapies are a group of drugs which boost the activity of the patient's immune system to help fight their cancer. Such drugs have been shown to be extremely effective against some cancer types, even achieving cure in some cases. However, this is not the case for all patients and not all tumours respond to immunotherapy. Being able to know ahead of time if immunotherapy is likely to be effective would be extremely valuable for patient management.

Professor Graham from Queen Mary University of London said: "Boosting the immune system is an incredibly effective way to treat cancer, but it has been difficult to understand why some patients respond to immunotherapy and others don't. Our study is a step towards understanding the interplay between immune cells and tumour cells, and we hope that this understanding might prove useful for guiding treatment decisions in the future."

The new mathematical model may help researchers make predictions about people's cancers, informing on whether immunotherapy is likely to be an effective treatment option for a patient based on the cancer cells' immune cloaking status. The model predicts that immunotherapy may be most effective when it is administered after a cancer has cloaked itself, as at this point the therapy could be used to reinvigorate the immune system to recognise and fight the cancer. The model also predicts that if the cancer has not activated immune cloaking mechanisms early on in its development, immunotherapy may be effective initially but the long-term response to the treatment may not be so successful.

Professor Sottoriva from The Institute of Cancer Research (ICR) said: "Our model helps us understand, through mathematics, the arms race that takes place within the body between a tumour and the patient's immune system. Cancers are constantly adapting and evolving - and can often dodge the effects of treatment or hide themselves from the immune system. Our study gives us a valuable tool for understanding and predicting how cancers will evolve and interact with the immune system, so we can anticipate cancer's next move, and devise new treatment strategies for patients."

The team has developed a model that expands the understanding of the action of the immune system on a tumour just from looking at the genetic codes of cancers. Because genetic analysis is becoming routine in the NHS, knowing how to use the genetic data to select patients for treatment and improve understanding of why immunotherapies are less effective in some patients than in others is very useful. The team are now keen to apply their model to cancers that have been treated with immunotherapy to see if the predictions are accurate.

Credit: 
Queen Mary University of London

NASA imagery reveals Paulette became a strong extratropical cyclone 

image: On Sept. 16 at 10:16 a.m. EDT (1416 UTC), the MODIS instrument aboard NASA's Terra satellite provided a visible image of Paulette that showed the storm had transitioned into an extra-tropical cyclone in the North Atlantic Ocean.

Image: 
NASA/NRL

Tropical cyclones can become post-tropical before they dissipate, meaning they can become sub-tropical, extra-tropical or a remnant low-pressure area. As Hurricane Paulette transitioned into an extra-tropical storm, NASA's Terra satellite provided a visible image of the powerful storm, and the National Hurricane Center issued their final advisory on the system.

What is a Post-tropical Storm? 

A post-tropical storm is a generic term for a former tropical cyclone that no longer possesses sufficient tropical characteristics to be considered a tropical cyclone. Post-tropical cyclones fall into three classes: fully extratropical systems, subtropical systems and remnant lows. In any case, post-tropical cyclones can continue to carry heavy rains and high winds.

What is an Extra-tropical Storm?

Often, a tropical cyclone will transform into an extra-tropical cyclone as it recurves toward the poles (north or south, depending on the hemisphere the storm is located in). An extra-tropical cyclone is a storm system that primarily gets its energy from the horizontal temperature contrasts that exist in the atmosphere.

Tropical cyclones have their strongest winds near the earth's surface, while extra-tropical cyclones have their strongest winds near the tropopause - about 8 miles (12 km) up. Tropical cyclones, in contrast, typically have little to no temperature difference across the storm at the surface; their winds are derived from the release of energy due to cloud and rain formation from the warm, moist air of the tropics.

Visible NASA Imagery Shows the Transition

Visible imagery from NASA's Terra satellite revealed Paulette's extra-tropical transition.

On Sept. 16 at 10:16 a.m. EDT (1416 UTC), the Moderate Resolution Imaging Spectroradiometer, or MODIS, instrument aboard NASA's Terra satellite provided a visible image of the storm. The MODIS image showed Paulette still had a closed surface wind circulation about a well-defined center, but the storm had become asymmetric, with the bulk of the clouds north of the center.

U.S. Navy Hurricane Specialist Dave Roberts at NOAA's National Hurricane Center (NHC) in Miami, Fla. noted, "Conventional GOES-16 [satellite] visible and enhanced BD-curve satellite imagery show that Paulette has merged with the large baroclinic zone extending over the north-central Atlantic. Deep convection just to the north of the surface center that was noted on earlier microwave images has dissipated.  Therefore, the system is now classified as extratropical cyclone and this is the last NHC advisory."

According to NOAA, a Baroclinic Zone is a region in which a temperature gradient exists on a constant pressure surface. Baroclinic zones are favored areas for strengthening and weakening systems; barotropic systems, on the other hand, do not exhibit significant changes in intensity. Also, wind shear is characteristic of a baroclinic zone.

Paulette's Final Advisory

At 11 a.m. EDT (1500 UTC), the center of Post-Tropical Cyclone Paulette was located near latitude 43.3 degrees north and longitude 45.2 degrees west. That is about 450 miles (725 km) east-southeast of Cape Race Newfoundland, Canada. The post-tropical cyclone is moving toward the east-northeast near 35 mph (56 kph), and this general motion is expected through Thursday. Maximum sustained winds have decreased to near 85 mph (140 kph) with higher gusts. The estimated minimum central pressure is 973 millibars.

Paulette's Final Forecast

Further weakening is forecast during the next couple of days. The cyclone is forecast to slow down and turn toward the southeast and south late Thursday and Friday.

Meanwhile, ocean swells generated by Paulette will continue to affect Atlantic Canada, Bermuda, the Bahamas, and portions of the east coast of the United States through tonight. These swells are likely to cause life-threatening surf and rip current conditions.

NASA Researches Earth from Space

For more than five decades, NASA has used the vantage point of space to understand and explore our home planet, improve lives and safeguard our future. NASA brings together technology, science, and unique global Earth observations to provide societal benefits and strengthen our nation. Advancing knowledge of our home planet contributes directly to America's leadership in space and scientific exploration.

Credit: 
NASA/Goddard Space Flight Center

Turbulence affects aerosols and cloud formation

image: A sonic temperature sensor and cloud droplets in the laser sheet inside Michigan Tech's cloud chamber.

Image: 
Will Cantrell/Michigan Tech

Chat with an atmospheric scientist for more than a few minutes, and it's likely they'll start advocating for a planetary name change. Planet Ocean-Cloud is much more fitting than Earth, they'll say, when so much of our planet's life systems are affected by the interactions of clouds and the oceans.

The ability to predict the behavior of clouds gives meteorologists, climate scientists, physicists and others a better understanding of precipitation (currently one of the most difficult aspects of weather to forecast) and improves climate modeling.

Last week, Prasanth Prabhakaran, Will Cantrell and Raymond Shaw, along with several coauthors, published "The role of turbulent fluctuations in aerosol activation and cloud formation" in the journal Proceedings of the National Academy of Sciences. Their article asks: Under what environmental conditions do cloud droplets form? Does turbulence -- the chaotic air motion that results in a bumpier ride on an airplane -- affect the properties of clouds, such as how many cloud droplets they have and whether they will produce precipitation?

"There are very few absolutes in life and I'm about to give you one of them: When you look up in the sky, every cloud droplet you see formed on a preexisting speck of dust. But not every speck of dust will give you a cloud droplet," said Will Cantrell, professor of physics.

"If you give me the atmospheric conditions, I can give you a pretty good idea of whether the speck of dust will form a cloud droplet. So far in atmospheric science, what we haven't accounted for is the fact that the atmosphere is turbulent," Cantrell said. "If the dust particles were identical, but they are exposed to different conditions, that will play a role in whether they become cloud droplets."

The role of turbulence in cloud formation is the gap Cantrell's research steps into. Traditionally, the mechanics of cloud formation have not accounted for turbulence. Prabhakaran and coauthors have developed a framework, backed with experiments from Tech's cloud chamber, to explain how preexisting aerosol (dust) particles -- the seeds of cloud droplets -- make the transition to becoming droplets (and thus become eligible to start the process of falling on your garden).

Michigan Tech's cloud chamber is one of only two in the world capable of performing such experiments. Shaw, distinguished professor of physics and director of Michigan Tech's atmospheric sciences PhD program, is also affiliated with the other: the LACIS-T chamber in Leipzig, Germany, at the Institute for Tropospheric Research. Clouds can be maintained for hours in Michigan Tech's chamber, a huge advantage over in situ experiments in a jet outfitted with measurement equipment traveling a hundred meters a second through a cloud.

"Under controlled conditions we investigated the aspects of cloud formation," said Prabhakaran, who is a postdoctoral research scholar in Michigan Tech's department of physics. "Modeling under different regimes shows how cloud droplets form and the significance of formation of cloud droplets under the conditions we have, whether it's a highly polluted environment or out in a relatively clean environment like out over the ocean."

Atmospheric conditions matter: In clean conditions, mean values such as average water vapor concentration and average temperature give researchers enough information to predict whether dust specks will become cloud droplets. Under more polluted conditions, the exact conditions each particle is exposed to become more important.
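The distinction can be illustrated with a toy Monte Carlo sketch (the threshold and fluctuation values are assumed, not taken from the paper): when the supersaturation a particle sees fluctuates about a mean that sits below its activation threshold, the tail of the fluctuations still activates some droplets, while the mean alone would predict none.

```python
# Toy Monte Carlo (not the published theory): droplet activation under
# turbulent supersaturation fluctuations. All numbers here are assumed.
import random

random.seed(42)

S_CRIT = 0.35   # assumed critical supersaturation (%) for the aerosol
MEAN_S = 0.30   # assumed mean supersaturation (%) in the chamber

def activated_fraction(fluctuation_sigma, n=20_000):
    """Fraction of particles whose local supersaturation exceeds S_CRIT,
    sampling Gaussian fluctuations about the chamber mean."""
    hits = sum(random.gauss(MEAN_S, fluctuation_sigma) > S_CRIT for _ in range(n))
    return hits / n

print(activated_fraction(0.0))    # no turbulence: mean below threshold -> 0
print(activated_fraction(0.10))   # turbulent fluctuations activate a tail
```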

"The way that clouds interact with sunlight and whether they precipitate will depend a lot on how many droplets and how big they are," Cantrell said. "Understanding the transition from dust to cloud droplets is a key part of understanding whether you'll have a lot or few droplets. Our theory adds a way to understand whether the turbulent mixing in the atmosphere will affect the number of droplets you get, and that cascades into other properties of cloud formation."

To conduct the experiment, the researchers created a turbulent environment inside the 3.14-cubic-meter cloud chamber by heating the chamber's lower plate and cooling the top plate to create a turbulent, convective flow. Into the flow the team introduced 130-nanometer sodium chloride aerosol particles. By varying the temperature differential between the top and bottom plates and the number of aerosol particles in the chamber, the researchers saw differences in how clouds formed.

Based on those observations, the research team developed a semiquantitative theory to describe the conditions. Whether aerosol particles become droplets has a tremendous effect on the properties of clouds, and the Michigan Tech experiments and model provide a framework to categorize droplet formation in numerical models.

Cantrell said turbulence has not been a part of the cloud physics curriculum until very recently.

"Our measurements in the chamber show that turbulence can mimic the behaviors that have been attributed to particle variation, primarily size and composition. This experiment changes our understanding of the properties of the clouds and we become better able to represent those processes in climate models," he said.

The researchers said their model will help forecasters predict the fluctuations Planet Ocean-Cloud will experience as the climate changes.

"Hopefully within a few years, this will improve the observations of climate models for predicting long-term climate change," Prabhakaran said.

Credit: 
Michigan Technological University

NASA finds coldest cloud tops on hurricane Teddy's western side

image: On Sept. 16 at 12:53 a.m. EDT (0453 UTC), NASA's Aqua satellite analyzed Hurricane Teddy's cloud top temperatures using the AIRS instrument. AIRS showed the strongest storms had cloud top temperatures as cold as or colder than 210 Kelvin (purple), about minus 81 degrees Fahrenheit (minus 63.1 degrees Celsius).

Image: 
NASA JPL/Heidar Thrastarson

NASA analyzed the cloud top temperatures in Hurricane Teddy using infrared light to determine the strength of the storm. Infrared imagery revealed that the strongest storms were on Teddy's western side.

An Infrared View of Teddy

One of the ways NASA researches tropical cyclones is using infrared data that provides temperature information. Cloud top temperatures identify where the strongest storms are located. The stronger the storms, the higher they extend into the troposphere, and the colder the cloud top temperatures.

On Sept. 16 at 12:53 a.m. EDT (0453 UTC), NASA's Aqua satellite analyzed the storm using the Atmospheric Infrared Sounder, or AIRS, instrument. The AIRS imagery showed the strongest storms west of Teddy's center of circulation, with the coldest cloud top temperatures as cold as or colder than 210 Kelvin, about minus 81 degrees Fahrenheit (minus 63.1 degrees Celsius). NASA research has shown that cloud top temperatures that cold indicate strong storms with the capability to create heavy rain. The eye was barely visible in the infrared imagery.
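The unit conversions quoted above follow from the standard formulas:

```python
# Check the cloud-top threshold conversions: Kelvin -> Celsius -> Fahrenheit.

def kelvin_to_celsius(k):
    return k - 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

c = kelvin_to_celsius(210)    # -63.15 degrees Celsius
f = celsius_to_fahrenheit(c)  # -81.67 degrees Fahrenheit
print(round(c, 1), round(f, 1))
```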

NASA then provides data to tropical cyclone meteorologists so they can incorporate it in their forecasts.

Over 10 hours later, at 11 a.m. EDT on Sept. 16, Andrew Latto, hurricane specialist at NOAA's National Hurricane Center in Miami, Fla., noted, "Teddy's overall appearance has changed little over the past several hours. Microwave and infrared satellite images depict a well-defined inner core with an eye evident in the microwave imagery. However, visible imagery reveals that the eye remains cloud filled. Over the past few hours, the coldest cloud tops have become confined to the western portion of the circulation, which could be the early signs of the cyclone experiencing some westerly wind shear."

Teddy's Status on Sept. 16

At 11 a.m. EDT (1500 UTC), the center of Hurricane Teddy was located near latitude 16.5 degrees north and longitude 49.7 degrees west. Teddy was centered about 775 miles (1,245 km) east of the Lesser Antilles. Teddy was moving toward the northwest near 12 mph (19 kph) and this general motion is forecast to continue for the next few days. Maximum sustained winds are near 100 mph (155 kph) with higher gusts. The estimated minimum central pressure is 978 millibars.

Teddy's Forecast

Additional strengthening is expected over the next couple of days, and Teddy could become a major hurricane by late tonight, Sept. 16.

In addition, large swells generated by Teddy are expected to reach the Lesser Antilles and the northeastern coast of South America today and should spread westward to the Greater Antilles, the Bahamas, and Bermuda by Friday, Sept. 18. These swells are likely to cause life-threatening surf and rip current conditions.

Credit: 
NASA/Goddard Space Flight Center

Better communication helps translate molecular tools

image: In order to effectively develop, test, validate and standardize novel monitoring tools, researchers and policymakers need to establish a robust, solid network.

Image: 
© 2020 KAUST; Xavier Pita

A sustained dialogue must be established between molecular ecologists, policymakers and other stakeholders for DNA-based approaches to be adopted in marine monitoring and assessment, according to KAUST scientists and colleagues.

New tools able to solve some of the challenges facing this field aren't getting the attention they most likely deserve, explains KAUST molecular ecologist Eva Aylagas, the article's corresponding author. "This is because it is common practice for researchers, policymakers and other stakeholders involved in marine environmental management to act independently," says Aylagas.

DNA barcoding and metabarcoding are molecular techniques used to identify species by comparing small fragments of their DNA against a reference database. Traditionally, assessing the health of a marine ecosystem involves identifying organisms from samples based on their morphological characteristics. This requires the involvement of specialized taxonomists and is often very expensive and time consuming. DNA barcoding and metabarcoding could save monitoring programs a lot of time and money.
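The matching step can be illustrated with a deliberately minimal sketch. The sequences and species names below are invented; real pipelines query curated reference databases with proper alignment tools:

```python
# Highly simplified DNA-barcode matching (hypothetical sequences, not a real
# reference database): assign a query fragment to the reference species with
# the highest sequence identity.

REFERENCE = {   # species -> barcode fragment (made up for illustration)
    "Polychaete worm A": "ACGTACGGTCAATCGG",
    "Amphipod B":        "ACGGACGTTCAAACGG",
    "Bivalve C":         "TTGTACGGTGAATCGA",
}

def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign(query):
    """Return the reference species most similar to the query fragment."""
    return max(REFERENCE, key=lambda sp: identity(query, REFERENCE[sp]))

print(assign("ACGTACGGTCAATCGA"))  # one mismatch from species A
```

Metabarcoding applies the same idea to thousands of fragments from a bulk environmental sample at once, which is where the time and cost savings come from.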

Aylagas and her colleagues propose a roadmap for developing meaningful collaboration between stakeholders with the aim of implementing molecular approaches in marine monitoring. The roadmap was based on lessons from several successful projects.

For example, DNA metabarcoding is being tested in New Zealand for the purpose of monitoring the impacts of the country's extensive aquaculture farms on the surrounding marine environment. Aquaculture can cause environmental damage through the accumulation of organic matter from fish excretions and nutrients from uneaten food, causing low-oxygen conditions for animals and plants that inhabit the marine sediment, while also generating toxic conditions for aquaculture fish.

The New Zealand government has funded a multiyear project to compare traditional and DNA-based approaches for monitoring marine sediment in the vicinity of a large number of aquaculture farms in several regions of the country. This involved extensive collaboration between government, monitoring agencies, industry and researchers.

Having aquaculture farmers and relevant government agencies directly involved from the outset was critical for helping scientists develop a protocol that resulted in a product that satisfied everyone involved. "Currently, DNA metabarcoding is in its final phase of validation and will be established in environmental legislation in New Zealand for routinely monitoring the effects of aquaculture activities. The approach provided reliable, faster and ultimately cheaper results than the methods previously used," says KAUST co-author and marine ecologist Susana Carvalho.

Another example comes from the European DEVOTES project that developed innovative tools and indicators for assessing the impacts of human activities on marine biodiversity. A large number of stakeholders were involved in comparing traditional taxonomic methods with DNA metabarcoding approaches for monitoring macroinvertebrates living in marine sediment, such as small crustaceans and worms. The diversity of these organisms in the sediment is considered a robust indicator of marine ecosystem health. DNA metabarcoding yielded very positive results in this effort as well, and the technique is proposed for improving ecological assessments within Europe.

"The main lesson learned from this and other projects is the need to establish robust and solid networking between researchers and policymakers to effectively develop, test, validate and standardize novel monitoring tools," says Aylagas.

KAUST's researchers and their colleagues recommend a roadmap that encourages interaction and engagement, communication and commitment, and, finally, decision framing, all needed for the successful integration of new molecular methods into routine use.

On the home front, KAUST researchers have been in discussions with representatives from governmental agencies in the Kingdom of Saudi Arabia and with other stakeholders to present the potential of DNA-based tools for enhancing marine monitoring in the Red Sea region.

Credit: 
King Abdullah University of Science & Technology (KAUST)

Energy harvesting goes organic, gets more flexible

image: A group of researchers has explored peptide-based nanotubes and, in the Journal of Applied Physics, reports using a combination of ultraviolet and ozone exposure to generate a wettability difference and an applied field to create horizontally aligned polarization of nanotubes on flexible substrates with interlocking electrodes. The work will enable the use of organic materials more widely.

This image shows optical (a-c) and lateral piezoresponse force microscopy (LPFM) phase images (d-f) of the peptide nanotubes on interlocking electrode substrates: (a, d) without alignment, (b, e) aligned using both electric field and UV/ozone, and (c, f) aligned PNTs with graphene oxide (GO) using both electric field and UV/ozone.

Image: 
Sawsan Almohammed

WASHINGTON, September 15, 2020 -- Nanogenerators capable of converting mechanical energy into electricity are typically made from metal oxides and lead-based perovskites. But these inorganic materials aren't biocompatible, so the race is on to create natural biocompatible piezoelectric materials for energy harvesting, electronic sensing, and stimulating nerves and muscles.

University College Dublin and University of Texas at Dallas researchers decided to explore peptide-based nanotubes, because they would be an appealing option for use within electronic devices and for energy harvesting applications.

In the Journal of Applied Physics, from AIP Publishing, the group reports using a combination of ultraviolet and ozone exposure to generate a wettability difference and an applied field to create horizontally aligned polarization of nanotubes on flexible substrates with interlocking electrodes.

"The piezoelectric properties of peptide-based materials make them particularly attractive for energy harvesting, because pressing or bending them generates an electric charge," said Sawsan Almohammed, lead author and a postdoctoral researcher at University College Dublin.

There's also an increased demand for organic materials to replace inorganic materials, which tend to be toxic and difficult to make.

"Peptide-based materials are organic, easy to make, and have strong chemical and physical stability," she said.

In the group's approach, the physical alignment of nanotubes is achieved by patterning a wettability difference onto the surface of a flexible substrate. This creates a chemical force that pushes the peptide nanotube solution from the hydrophobic region (which repels water and has a high contact angle) to the hydrophilic region (which attracts water and has a low contact angle).

Not only did the researchers improve the alignment of the tubes, which is essential for energy harvesting applications, but they also improved the conductivity of the tubes by making composite structures with graphene oxide.

"It's well known that when two materials with different work functions come into contact with each other, an electric charge flows from low to high work function," Almohammed said. "The main novelty of our work is that controlling the horizontal alignment of the nanotubes by electrical field and wettability-assisted self-assembly improved both the current and voltage output, and further enhancement was achieved by incorporating graphene oxide."

The group's work will enable the use of organic materials, especially peptide-based ones, more widely within electronic devices, sensors, and energy harvesting applications, because two key limitations of peptide nanotubes -- alignment and conductivity -- have been improved.

"We're also exploring how charge transfer processes from bending and electric field applications can enhance Raman spectroscopy-based detection of molecules," Almohammed said. "We hope these two efforts can be combined to create a self-energized biosensor with a wide range of applications, including biological and environmental monitoring, high-contrast imaging, and high-efficiency light-emitting diodes."

Credit: 
American Institute of Physics

From star to solar system: How protoplanetary rings form in primordial gas clouds

WASHINGTON, September 15, 2020 -- Four hundred fifty light-years from Earth, a young star is glowing at the center of a system of concentric rings made from gas and dust, and it is producing planets, one for each gap in the rings.

Its discovery has shaken solar system origin theories to their core. Mayer Humi, a scientist from the Worcester Polytechnic Institute, believes it provides an apt study target for theories about protoplanetary rings around stars. The research is published in the Journal of Mathematical Physics, by AIP Publishing.

The star, HL Tauri, is located in the constellation Taurus and reawakened interest in Pierre-Simon Laplace's 1796 conjecture that celestial clouds of gas and dust around new stars condense to form rings and then planets. The image of HL Tauri captured in 2014 by the Atacama Large Millimeter Array marked the first time planetary rings had been photographed in such crisp detail, an observational confirmation of Laplace's conjecture.

"We can observe many gas clouds in the universe that can evolve into a solar system," Humi said. "Recent observational data shows solar systems are abundant in the universe, and some of them might harbor different types of life."

Humi, alongside some of the greatest astronomers throughout history, wondered about the creation of solar systems and their evolution in the universe. How do they form and what trajectory will they follow in the future?

"The basic issue was and is how a primordial cloud of gas can evolve under its own gravitation to create a solar system," Humi said.

Humi uses the Euler-Poisson equations, which describe the evolution of gas clouds, and reduces them from six to three model equations that apply to axisymmetric rotating gas clouds.
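For reference, the Euler-Poisson system for a self-gravitating gas takes the standard form below; the paper's reduced axisymmetric equations are a specialization of it and are not reproduced here:

```latex
% Standard Euler--Poisson system for a self-gravitating gas
% (general form; Humi's reduced axisymmetric equations differ):
\begin{align}
  \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) &= 0
    && \text{(mass conservation)} \\
  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
    &= -\frac{1}{\rho}\,\nabla p - \nabla\phi
    && \text{(momentum)} \\
  \nabla^2 \phi &= 4\pi G \rho
    && \text{(Poisson equation for gravity)}
\end{align}
```

Here $\rho$ is the gas density, $\mathbf{u}$ the velocity field, $p$ the pressure, $\phi$ the gravitational potential and $G$ the gravitational constant.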

In the paper, Humi considers the fluid in the primordial gas cloud to be an incompressible, stratified fluid flow and derives time dependent solutions to study the evolution of density patterns and oscillations in the cloud.

Humi's work shows that, with the right set of circumstances, rings could form from the cloud of dust and gas, and it lends credence to Laplace's 1796 hypothesis that our solar system formed from a similar dust and gas cloud around the sun.

"I was able to present three analytical solutions that demonstrate rings can form, insight that cannot be obtained from the original system of equations," Humi said. "The real challenge is to show that the rings can evolve further to create the planets."

Credit: 
American Institute of Physics