
Ritual suffering improves psychological well-being

image: Photos depict different stages of ritual intensity.

Image: 
Dimitris Xygalatas

According to a new study published in Current Anthropology, an extreme ritual involving bodily mutilation has no detectable long-term harmful effects on participants and actually has a positive effect on psychological well-being. In "Effects of Extreme Ritual Practices on Psychophysiological Well-Being," Dimitris Xygalatas and his team investigate the effects of participation in the kavadi attam, a ritual performed annually by millions of Tamil Hindus around the world, on physical and psychological well-being.

The research is particularly important in the context of developing societies, where biomedical and folk health interventions often co-exist. "Our results stress the importance and utility of traditional cultural practices for health management," he writes. "Although these practices are not meant to substitute biomedical interventions, their complementary utility should not be overlooked, especially in contexts where psychiatric or other medical interventions are not widely available or are associated with stigma."

The kavadi is part of the longer festival of Thaipusam, which involves preparation through fasting and prayer. On the day of the ritual, devotees pierce their bodies with numerous metal objects, including needles, hooks and rods impaled through both cheeks. Once these piercings are in place, devotees embark on a several-hour pilgrimage to the temple of Lord Murugan, the most popular deity among Tamil Hindus, carrying portable altars on their shoulders. These structures are often over three meters (10 feet) tall and can weigh up to 60 kg (130 lbs).

The study followed 37 participants from the Tamil Hindu community in the town of Quatre Bornes in Mauritius, an island nation in the Indian Ocean. For three one-week periods (before, during, and after the ritual), participants wore portable monitoring devices that recorded their stress levels, sleep efficiency, and physical activity, and their heart rate was recorded daily. Clinically and cross-culturally validated surveys were administered before and after the ritual to assess psychological well-being. The researchers also recorded participants' health and socio-economic status and examined whether these factors predicted whether a participant chose low- or high-intensity engagement in the ritual.

Results showed that participating in the ritual had no detrimental effects on physiological health, and actually had positive effects on psychological well-being, with those who engaged in a higher number of body piercings experiencing the greatest improvements in perceived health and quality of life. Additionally, people who had been experiencing health problems or were of low socioeconomic status sought more painful levels of engagement.
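The dose-response pattern reported (more piercings, larger gains in perceived health and quality of life) is the kind of relationship a simple correlation check makes concrete. The sketch below uses invented numbers purely to illustrate the analysis pattern; these are not the study's data:

```python
import statistics

# Invented numbers purely to illustrate the analysis pattern; NOT the
# study's data. Paired lists: number of piercings, and post-minus-pre
# change in a well-being score for the same participant.
piercings        = [0, 2, 5, 10, 20, 40, 60, 80]
wellbeing_change = [0.1, 0.0, 0.4, 0.5, 0.9, 1.1, 1.4, 1.6]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A dose-response pattern like the one reported shows up as a strong
# positive correlation between ritual intensity and improvement.
print(f"r = {pearson_r(piercings, wellbeing_change):.2f}")
```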

The authors offer several possible explanations for the observed benefits of performing the kavadi, ranging from neurochemical processes to social factors related to participation. First, there is evidence that the sensory, physiological, and emotional hyperarousal involved in strenuous ordeals can affect the levels of neurotransmitters such as endorphins and endocannabinoids, resulting in feelings of euphoria.

There is also a great deal of evidence that shows that extreme rituals, when performed collectively, strengthen communal bonds and provide a sense of belonging. Additionally, participating in the kavadi allows participants--who are viewed as more devout and trustworthy than non-participants--to improve their social standing within the community. "Multiple lines of research suggest that individuals are strongly motivated to engage in status-seeking efforts, and that there is a strong positive relationship between social rank and subjective well-being," the researchers write. "Indeed, we found that individuals of lower socioeconomic status were more motivated to invest in the painful activities that can function as costly signals of commitment."

Whether the positive psychological effects of kavadi participation are primarily biological, social, or a combination of the two should be a focus of further research, Xygalatas says. He also suggests extending the study period to cover many kavadi events over participants' lifetimes, among other ways to deepen understanding of this widespread ritual.

Credit: 
University of Chicago Press Journals

Autism study stresses importance of communicating with all infants

A new language-skills study that included infants later diagnosed with autism suggests that all children can benefit from exposure to more speech from their caregivers.

Dr. Meghan Swanson, assistant professor at The University of Texas at Dallas, is the corresponding author of the study, published online June 28 in Autism Research. It is the first to extend research about the relationship between caregiver speech and infant language development from typically developing children to those with autism. The findings could inform guidelines for earlier action in cases of developmental difficulties.

"You can diagnose autism at 24 months at the earliest; most people are diagnosed much later. Early intervention, from birth to age 3, has been shown to be effective at supporting development in various cohorts of children," said Swanson, who joined the School of Behavioral and Brain Sciences in January as the director of the Infant Neurodevelopment & Language Research Lab, known as the Baby Brain Lab.

She said there has been a push to identify autism earlier or demonstrate that the same techniques that help most children develop language skills also benefit those eventually diagnosed with autism.

The study involved 96 babies, 60 of whom had an older sibling with autism. Swanson said that this "baby-sibling" research design was necessary.

"How do you study autism in infancy when you can't diagnose it until the kids are age 2 at least?" she asked. "The answer relies on the fact that autism tends to run in families. These younger siblings have about a 20% chance of being diagnosed eventually with autism."

Indeed, 14 children from the high-risk subset of 60 were diagnosed with autism at 24 months.

The study results directly tied the number of words an infant hears, as well as the conversational turns he or she takes, to performance on the 24-month language evaluation -- both for typical children and those with autism.

"One conclusion we've come to is that parents should be persistent in talking with their babies even if they aren't getting responses," Swanson said.

Swanson emphasized how important large, longitudinal studies -- tracking the same individuals across an extended period -- like this one are in her field.

"You have to follow the same children for years to learn anything conclusive about development," she said. "You can't simply shift from a group of 2-year-olds to a different group of 3-year-olds and so on."

Correcting the misunderstanding of parents' influence in autism has been a gradual fight against outdated conceptions, Swanson said.

"When parents receive an autism diagnosis for a child, some might wonder, 'What could I have done differently?'" she said. "There is no scientific backing for them to think in these terms. But there is a dark history in autism where parents were wrongly blamed, which reinforced these thoughts. To do research involving mothers as we have, you must approach that topic with sensitivity but also firmly reinforce that the logic that parenting style can cause autism is flawed."

The children's interactions with caregivers were recorded over two days -- once at nine months and again at 15 months -- via a LENA (Language Environment Analysis) audio recorder. The children's language skills were then assessed at 24 months.

"The LENA software counts conversational turns anytime an adult vocalizes and the infant responds, or vice versa," Swanson said. "The definition is not related to the content of the speech, just that the conversation partner responds. We believe that responding to infants when they talk supports infant development, regardless of eventual autism diagnosis."
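The content-free turn-counting rule Swanson describes can be sketched in a few lines. The event format and the 5-second response window below are illustrative assumptions, not LENA's actual parameters:

```python
# Toy illustration of conversational-turn counting from timestamped
# vocalization events. The event format and the 5-second response
# window are illustrative assumptions, not LENA's specification.

def count_turns(events, max_gap=5.0):
    """Count adult<->infant exchanges: a turn is scored whenever the
    speaker changes and the response starts within max_gap seconds."""
    turns = 0
    prev_speaker, prev_end = None, None
    for speaker, start, end in sorted(events, key=lambda e: e[1]):
        if (prev_speaker is not None
                and speaker != prev_speaker
                and start - prev_end <= max_gap):
            turns += 1
        prev_speaker, prev_end = speaker, end
    return turns

# (speaker, start_seconds, end_seconds)
events = [
    ("adult", 0.0, 1.5),
    ("infant", 2.0, 2.8),    # responds within 5 s -> turn
    ("adult", 3.5, 4.5),     # responds within 5 s -> turn
    ("adult", 20.0, 21.0),   # same speaker -> no turn
    ("infant", 40.0, 40.5),  # 19 s gap -> no turn
]
print(count_turns(events))  # -> 2
```

Note that, as in the quoted description, only the timing and alternation of vocalizations matter, never their content.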

The project was undertaken by the Infant Brain Imaging Study (IBIS) network, a consortium of eight universities in the United States and Canada funded by the National Institutes of Health as an Autism Center of Excellence. Before joining UT Dallas, Swanson was a postdoctoral fellow at the University of North Carolina at Chapel Hill, one of IBIS' study sites. The other study sites are Children's Hospital of Philadelphia, Washington University in St. Louis, the University of Washington in Seattle and the University of Minnesota Twin Cities campus.

Dr. Joseph Piven, the IBIS network's principal investigator, is the director of the Carolina Institute for Developmental Disabilities at UNC-Chapel Hill. For parents, the results should highlight the long-term effect of initiating conversations from an early age, he said.

"Talking to your kids makes a big difference," Piven said. "Any impact on early language skills will almost certainly have an impact on a wide range of later abilities in school-age children and significantly enhance their probability of success."

Swanson said the most important takeaway from this work is that parents can make a significant difference in language development, even in children who are eventually diagnosed with autism.

"Parents can be amazing agents of change in their infants' lives from as early as 9 months old," she said. "If we teach parents how to provide their children with a rich communication environment, it helps support their children's development. I find that incredibly hopeful -- the power that parents have to be these positive role models."

Credit: 
University of Texas at Dallas

New research offers solution to reduce organ shortage crisis

Key takeaways from a new study in the INFORMS journal Management Science:

Combining the donor-priority rule with a freeze period increases quality organ donations by 12.8%.

Choosing an appropriate freeze-period length can reduce the number of low-quality organs, a problem that arises when the donor-priority rule is used alone.

Healthy organs can increase patient life expectancy by 18 years at a value of $50,000 a year.

CATONSVILLE, MD, September 4, 2019 - Eighteen people die every day waiting for transplants, and a new patient is added to the organ transplant list every 10 minutes. Much of the problem stems from the lack of registered donors. New research in the INFORMS journal Management Science proposes incentives that could lead to a solution and ultimately save lives.

The study looks at national transplant data from the Organ Procurement and Transplantation Network (OPTN). Ultimately, the researchers--Tinglong Dai of Johns Hopkins University, Ronghuo Zheng of The University of Texas at Austin, and Katia Sycara of Carnegie Mellon University--concluded that the solution is a two-part method: a combination of the donor-priority rule and a freeze period. To test their solution, the researchers created a simulated organ market.

The donor-priority rule allows registered organ donors to cut in line and move to the top of the list should they need a transplant in the future. The researchers found an unintended consequence of this rule: people with a higher risk of needing an organ transplant were more likely to sign up as donors. Under a freeze period, individuals are not entitled to a higher spot on the donor list until they have been on the registry for a specified length of time.

"When the donor-priority rule and freeze period are imposed together, the average quality of donated organs is restored. The freeze period makes it more time-consuming to get on the list, deterring people from registering solely to reach the top of the list should they need a transplant, thereby reducing the number of unhealthy donations," said Dai, a professor in the Carey Business School at Johns Hopkins University.

"When a stronger incentive is given to high-risk individuals, it results in a reduction of organ quality," added Dai. "Such problems can outweigh the potential gain for the registry. These people are pressured into donating because they need a donation themselves and there is a slim chance they'll get it if they don't register."

The biggest benefit of the proposed combination would come from increasing the average patient's life expectancy. In one simulation, an organ from a healthy donor is estimated to add an average of 18 years to someone's life, at a value of $50,000 a year; an organ from a sick donor is estimated to add only 10 years.
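A back-of-the-envelope version of this incentive effect can be sketched as a toy market. Every parameter below (the risk mix of the population and the registration rates under each policy) is a hypothetical placeholder, not the authors' simulated market:

```python
import random

# Toy illustration of the paper's intuition, NOT the authors' model:
# under the priority rule alone, high-risk individuals (who supply
# lower-quality organs) register disproportionately; a freeze period
# blunts that incentive. All parameters are hypothetical.

random.seed(1)
LIFE_YEARS = {"healthy": 18, "sick": 10}  # added life-years per organ
VALUE_PER_YEAR = 50_000                    # dollars, from the study

def avg_organ_value(p_register_healthy, p_register_sick, n=100_000):
    donors = []
    for _ in range(n):
        if random.random() < 0.5:  # assume half the population is high-risk
            if random.random() < p_register_sick:
                donors.append("sick")
        else:
            if random.random() < p_register_healthy:
                donors.append("healthy")
    years = sum(LIFE_YEARS[d] for d in donors) / len(donors)
    return years * VALUE_PER_YEAR

# Priority rule alone: strong extra incentive for high-risk registration.
rule_only = avg_organ_value(p_register_healthy=0.3, p_register_sick=0.6)
# With a freeze period, the high-risk incentive is damped (assumed effect).
with_freeze = avg_organ_value(p_register_healthy=0.3, p_register_sick=0.35)
print(f"avg value per organ, rule only:   ${rule_only:,.0f}")
print(f"avg value per organ, with freeze: ${with_freeze:,.0f}")
```

The point of the sketch is only the direction of the effect: shifting the donor pool away from high-risk registrants raises the average life-years, and hence dollar value, delivered per organ.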

"Adding the freeze-period restriction helps rebalance the incentive structure and can guarantee a boost in organ supply without compromising organ quality. The change would boost social welfare by $235 million a year," continued Dai.

Credit: 
Institute for Operations Research and the Management Sciences

An examination of prosecutorial staff, budgets, caseloads and the need for change

We decided to examine the state of prosecutor funding and caseloads after recent local debates on the issue. Prosecutors contend they need more staff to ensure due process and expand diversion options, while others worry that adding prosecutors would reverse justice-reform efforts, on the assumption that more prosecutors mean more convictions. As a result, we, the Center for Justice Research (CJR), have released a research brief comparing the budgets, caseloads and staffing levels of the country's largest county prosecutor offices. Below are some of the study's results, which highlight the challenges faced by prosecutors, note the concerns of policymakers, and acknowledge the trepidations of justice reformers.

Prosecutor caseloads should not be examined in a vacuum; they must be understood through comparative analysis: how do they compare with other, similarly situated prosecutor offices? Our findings extend the positions of experts in the field. Prosecutors can be fairly assessed only when they have an optimal level of resources, with proper controls to ensure they do not erode the gains of the current justice-reform movement. Any additions of prosecutors must be paired with commitments not to further exacerbate racial, ethnic or class inequities.

Large workloads and inadequate funding create an obvious problem for prosecutors and the communities they serve. Research demonstrates that overworked prosecutors are more prone to extended case-processing times, errors, plea bargains, stress-related burnout and turnover. Significant trial delays, longer case-processing times, and excessive use of plea bargains are just some of the consequences Americans face as a result. The minority community bears the brunt of these consequences, as Blacks and Hispanics are more likely to face conviction in a criminal justice system that often disadvantages those unable to afford effective representation.

Although there are 2,400 district attorneys in this country, no one in the last 40 years has sought to determine the average number of hours it takes to process a case through a district attorney's office, and even simple comparisons of prosecutor budgets, caseloads and staffing are scarce. Given the traditional scarcity of prosecutor data and the recent dismantling of that convention, we took up this challenge and put forth some practical recommendations.

Here at the Center for Justice Research, we aim to create a more equitable criminal justice system. By proposing formulas that can establish caseload standards for district attorney offices, we are better able to address this widespread dilemma. One solution is a caseload determination matrix by which prosecutors can determine their appropriate caseload while accounting for local nuances. At the same time, there is a need for more collaborative decision-making protocols and a commitment to the continued use of pre-charge diversion programs.
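One common way to build such a caseload matrix is weighted-caseload arithmetic: multiply each case type's annual filings by the average attorney-hours that type requires, then divide by the case-related hours one prosecutor can supply in a year. The case weights and available hours below are hypothetical placeholders, not CJR's figures:

```python
# Weighted-caseload sketch. The case weights and hours are hypothetical,
# shown only to illustrate the arithmetic behind a caseload matrix.

# Average attorney-hours needed per case, by case type (assumed values).
HOURS_PER_CASE = {"homicide": 120.0, "felony": 20.0, "misdemeanor": 4.0}

# Annual case-related hours available per prosecutor (assumed: ~2,080
# work hours minus training, administration, and leave).
CASE_HOURS_PER_PROSECUTOR = 1_500.0

def prosecutors_needed(annual_filings):
    """Staffing level implied by the office's filings and case weights."""
    total_hours = sum(HOURS_PER_CASE[t] * n for t, n in annual_filings.items())
    return total_hours / CASE_HOURS_PER_PROSECUTOR

filings = {"homicide": 100, "felony": 9_000, "misdemeanor": 40_000}
print(round(prosecutors_needed(filings), 1))  # -> 234.7
```

In practice the weights themselves would be set from time studies of similarly situated offices, which is exactly the comparative data the brief argues is missing.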

Prosecutors, in this era of 'mass decarceration', must ensure that they exercise their broad discretion in an unbiased manner. At the same time, society cannot afford for prosecutors to be understaffed, overworked, underfunded or misaligned with the least restrictive, most effective approaches.

By finding a balance between justice reform-oriented prosecutors and protectors of public safety, Harris County could transform its criminal justice system into a more equitable design and serve as a model for the body politic.

It's not just a matter of hiring more prosecutors; DA's offices must also consider the costs of the additional courtroom work group that such augmentation would require. At the same time, critics of this approach must offer substantiated alternatives after comparing similarly situated prosecutor offices.

At the end of the day, reducing prosecutor caseloads is not just about funding, but about ensuring that constitutional protections are afforded to everyone.

Credit: 
Center for Justice Research at Texas Southern University

Receptor protein in brain promotes resilience to stress

image: Seema Bhatnagar, PhD, leads the Stress Neurobiology Program at Children's Hospital of Philadelphia.

Image: 
Children's Hospital of Philadelphia

Scientists have discovered that a receptor on the surface of brain cells plays a key role in regulating how both animals and people respond to stress. The research suggests that the receptor may represent an important biomarker of post-traumatic stress disorder (PTSD) in humans and may offer a new target for future, more effective treatments for stress and anxiety.

"We have found that a specific cell receptor promotes resilience to the adverse effects of stress in animals," said study leader Seema Bhatnagar, PhD, a neuroscientist in the Department of Anesthesiology and Critical Care at Children's Hospital of Philadelphia (CHOP). "Because we found links to the same receptor in patients with PTSD, we may have insights into developing more effective treatments for human psychiatric disorders."

The research appeared online July 17, 2019 in Nature Communications.

Bhatnagar leads CHOP's Stress Neurobiology Program, which includes first author Brian F. Corbett, PhD, of CHOP, who performed much of the laboratory analysis. Other key collaborators were psychiatrists Philip Gehrman, MD, and Richard Ross, MD, of the Perelman School of Medicine at the University of Pennsylvania, who are attending physicians at the Corporal Michael J. Crescenz Veterans Affairs Medical Center in Philadelphia.

The researchers focused on sphingosine-1-phosphate receptor 3 (S1PR3), a receptor on cell membranes that binds a lipid signaling molecule and is involved in many cellular processes, including inflammation, cell migration and proliferation. It is one of a broader family of molecules called sphingolipid receptors. Scientists previously knew little about S1PR3's function in the brain. Bhatnagar said the current study points to this receptor as important in neural signaling, and added, "We found that manipulating S1PR3 levels affected how well animals cope with stress."

Because current psychiatric treatments succeed in only a subset of patients with stress-related psychiatric disorders, neurobiologists often model stress in laboratory animals, such as rats, to understand what makes some animals vulnerable to stress and others more resilient.

Social hierarchies and territoriality are sources of stress in rats. Bhatnagar's team used validated behavioral tools, such as a forced swim test or a social defeat test, to investigate how rats use coping strategies to deal with stress. Rats that cope more passively, showing anxiety- and depressive-type behaviors, are classified as vulnerable; those that cope more actively are classified as resilient.

In the current study, the researchers detected higher levels of the S1PR3 protein in resilient rats and lower levels in the vulnerable group. The study team then adjusted the expression of the S1PR3 gene to raise or reduce the gene's product--the S1PR3 protein. Their results confirmed that increasing the protein levels increased stress-resilient behaviors, while "knocking down" or reducing protein levels raised vulnerable behaviors.

The scientists also measured S1PR3 levels in the blood of patients at the Veterans Affairs hospital, all of whom had experienced combat. The veterans with PTSD had lower levels of S1PR3 than those without PTSD. Furthermore, those with more severe PTSD symptoms had lower levels of S1PR3. "Our findings in both laboratory models and patients suggest that this protein is a potential blood-based biomarker for PTSD," said Bhatnagar.

She added that follow-up studies in larger patient samples will be necessary to validate these initial findings. "If we can establish that S1PR3 or related sphingolipid receptors are valid biomarkers for PTSD and other stress-related disorders, we may have a new tool to predict a person's risk for PTSD, or to predict the severity of a patient's symptoms. It may help us to better evaluate potential treatments, and perhaps to design better treatments," she added.

Credit: 
Children's Hospital of Philadelphia

New whale species discovered along the coast of Hokkaido

image: Dorsal, ventral, and lateral views (from left) of the B. minimus skull. The rostrum is smaller than that of other Berardius species.

Image: 
Tadasu K. Yamada et al., Scientific Reports. August 30, 2019

A new beaked whale species, Berardius minimus, whose existence had long been postulated by local whalers in Hokkaido, Japan, has been confirmed.

In a collaboration between the National Museum of Nature and Science, Hokkaido University, Iwate University, and the United States National Museum of Natural History, a beaked whale species which has long been called Kurotsuchikujira (black Baird's beaked whale) by local Hokkaido whalers has been confirmed as the new cetacean species Berardius minimus (B. minimus).

Beaked whales prefer deep ocean waters and can dive for long periods, making them hard to observe and poorly understood. The Stranding Network Hokkaido, a research group founded and managed by Professor Takashi F. Matsuishi of Hokkaido University, collected six stranded, unidentified beaked whales along the coast of the Okhotsk Sea.

The whales shared characteristics of B. bairdii (Baird's beaked whale) and were classified as belonging to the same genus Berardius. However, a number of distinguishable external characteristics, such as body proportions and color, led the researchers to investigate whether these beaked whales belong to a currently unclassified species.

"Just by looking at them, we could tell that they have a remarkably smaller body size, more spindle-shaped body, a shorter beak, and darker color compared to known Berardius species," explained Curator Emeritus Tadasu K. Yamada of the National Museum of Nature and Science from the research team.

In the current study, the specimens of this unknown species were studied in terms of their morphology, osteology, and molecular phylogeny. The results, published in the journal Scientific Reports, showed that the body length of physically mature individuals is distinctly smaller than that of B. bairdii (6.2-6.9 m versus 10.0 m). Detailed cranial measurements and DNA analyses further emphasized the significant difference from the other two known species in the genus Berardius. Because it has the smallest body size in the genus, the researchers named the new species B. minimus.

"There are still many things we don't know about B. minimus," said Takashi F. Matsuishi. "We still don't know what adult females look like, and there are still many questions related to species distribution, for example. We hope to continue expanding what we know about B. minimus."

Local Hokkaido whalers also refer to some whales in the region as Karasu (crow). It is still unclear whether B. minimus (or Kurotsuchikujira) and Karasu are the same species, and the research team speculates that Karasu could be yet another species.

This study was conducted in collaboration with multiple institutions. Dr. Shino Kitamura and Dr. Shuichi Abe of Iwate University carried out the DNA analyses while Dr. Tadasu K. Yamada and Dr. Yuko Tajima of the National Museum of Nature and Science made osteological specimens, morphological observations and detailed measurements to depict systematic uniqueness. Dr. Takashi F. Matsuishi and Dr. Ayaka Matsuda of Hokkaido University made the multivariate analyses. Dr. James G. Mead of Smithsonian Institution contributed to discussions related to systematic comparison.

Credit: 
Hokkaido University

Spreading light over quantum computers

image: Jan-Åke Larsson and his co-workers have also supplemented their theoretical simulations with a physical version built with electronic components. The gates are similar to those used in quantum computers, and the toolkit simulates how a quantum computer works.

Image: 
Karl Ofverstrom

Scientists at Linköping University have shown how a quantum computer really works and have managed to simulate quantum computer properties in a classical computer. "Our results should be highly significant in determining how to build quantum computers", says Professor Jan-Åke Larsson.

The dream of superfast and powerful quantum computers has again been brought into focus, and large resources have been invested in research in Sweden, Europe and the world. A Swedish quantum computer is to be built within ten years, and the EU has designated quantum technology one of its flagship projects.

At the moment, few useful algorithms are available for quantum computers, but it is expected that the technology will be hugely significant in simulations of biological, chemical and physical systems that are far too complicated for even the most powerful computers currently available. A bit in a classical computer can take only the value one or zero, but a quantum bit can exist in a superposition of both. Simply put, this means that quantum computers do not need as many operations for each calculation they carry out.

Professor Jan-Åke Larsson and his doctoral student Niklas Johansson, in the Division for Information Coding at the Department of Electrical Engineering, Linköping University, have come to grips with what happens in a quantum computer and why it is more powerful than a classical computer. Their results have been published in the scientific journal Entropy.

"We have shown that the major difference is that quantum computers have two degrees of freedom for each bit. By simulating an additional degree of freedom in a classical computer, we can run some of the algorithms at the same speed as they would achieve in a quantum computer", says Jan-Åke Larsson.

They have constructed a simulation tool, Quantum Simulation Logic, QSL, that enables them to simulate the operation of a quantum computer in a classical computer. The simulation tool contains one, and only one, property that a quantum computer has that a classical computer does not: one extra degree of freedom for each bit that is part of the calculation.

"Thus, each bit has two degrees of freedom: it can be compared with a mechanical system in which each part has two degrees of freedom - position and speed. In this case, we deal with computation bits - which carry information about the result of the function, and phase bits - which carry information about the structure of the function", Jan-Åke Larsson explains.

They have used the simulation tool to study some of the quantum algorithms that manage the structure of the function. Several of the algorithms run as fast in the simulation as they would in a quantum computer.
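A minimal sketch of how such a simulation can work follows from the two-bits-per-bit description above: each simulated qubit carries a computation bit and a phase bit, a Hadamard gate swaps the two, and a CNOT acts forward on computation bits and backward on phase bits. Applied to Deutsch's problem (deciding with one oracle query whether a one-bit function is constant or balanced), this classical simulation matches the quantum speed. This is an illustrative reading of the published description, not the authors' QSL code:

```python
import random

# Minimal sketch of Quantum Simulation Logic (QSL) applied to Deutsch's
# problem: decide with one oracle query whether f: {0,1} -> {0,1} is
# constant or balanced. Each "qubit" is a pair [computation_bit, phase_bit].
# Gate rules follow the two-degrees-of-freedom description; this is an
# illustrative reading, not the authors' code.

def H(q):
    """Hadamard: swap the computation and phase bits."""
    q[0], q[1] = q[1], q[0]

def oracle(q1, q2, f):
    """U_f built from classical gates: x2 ^= f(x1), with CNOT's
    phase back-action p1 ^= p2 when f is balanced."""
    if f(0) != f(1):   # balanced part acts like a CNOT
        q2[0] ^= q1[0]
        q1[1] ^= q2[1]
    if f(0) == 1:      # constant-1 part acts like X on the target
        q2[0] ^= 1

def deutsch(f):
    # Prepare computation bits |0> and |1>; phase bits start random.
    q1 = [0, random.randint(0, 1)]
    q2 = [1, random.randint(0, 1)]
    H(q1); H(q2)
    oracle(q1, q2, f)
    H(q1)
    return "balanced" if q1[0] else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```

Despite the random initial phase bits, the answer is deterministic: the phase bit of the query register picks up the oracle's structure, and the final Hadamard swaps it back into the measurable computation bit.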

"The result shows that the higher speed in quantum computers comes from their ability to store, process and retrieve information in one additional information-carrying degree of freedom. This enables us to better understand how quantum computers work. Also, this knowledge should make it easier to build quantum computers, since we know which property is most important for the quantum computer to work as expected", says Jan-Åke Larsson.

Jan-Åke Larsson and his co-workers have also supplemented their theoretical simulations with a physical version built with electronic components. The gates are similar to those used in quantum computers, and the toolkit simulates how a quantum computer works. With its help, students can, for example, simulate and understand how quantum cryptography and quantum teleportation work, as well as some of the most common quantum computing algorithms, such as Shor's algorithm for factorisation. (The algorithm runs in the current version of the simulation, but only as fast - or slow - as on a classical computer.)

Credit: 
Linköping University

Extracting clean fuel from sunlight

image: Image of the experimental photoelectrosynthetic cell described in the new study. Technologies of this kind combine light-gathering semiconductors and catalytic materials capable of chemical reactions that produce clean fuel.

Image: 
Biodesign Institute at Arizona State University

Securing enough energy to meet human needs is one of the greatest challenges society has ever faced. Previously reliable sources--oil, gas and coal--are degrading air quality, devastating land and ocean, and altering the fragile balance of the global climate through the release of CO2 and other greenhouse gases. Meanwhile, Earth's rapidly industrializing population is projected to reach 10 billion by 2050. Clean alternatives are a matter of urgent necessity.

Researchers at ASU's Biodesign Center for Applied Structural Discovery are exploring new technologies that could pave the way to clean, sustainable energy to help meet daunting global demand.

In new research appearing in the Journal of the American Chemical Society (JACS), the flagship journal of the ACS, lead author Brian Wadsworth, along with colleagues Anna Beiler, Diana Khusnutdinova, Edgar Reyes Cruz, and corresponding author Gary Moore describe technologies that combine light-gathering semiconductors and catalytic materials capable of chemical reactions that produce clean fuel.

The new study explores the subtle interplay of the primary components of such devices and outlines a theoretical framework for understanding the underlying fuel-forming reactions. The results suggest strategies for improving the efficiency and performance of such hybrid technologies, bringing them a step closer to commercial viability.

The production of hydrogen and reduced forms of carbon by these technologies could one day supplant fossil fuel sources for a broad range of reduced carbon commodities, including fuels, plastics and building materials.

"In this particular work we've been developing systems that integrate light capture and conversion technologies with chemical-based energy storage strategies," says Moore, who is an assistant professor in ASU's School of Molecular Sciences. Rather than direct generation of electricity from sunlight, this new breed of technology uses solar energy to drive chemical reactions capable of producing fuels, which store the sun's energy in chemical bonds. "That's where catalysis becomes extremely important. It's the chemistry of controlling both the selectivity of reactions and the overall energy requirements for driving those transformations," Moore says.

Something new under the sun

One of the most attractive sources for sustainable, carbon-neutral energy production is both ancient and abundant: sunlight. Indeed, adoption of solar energy technologies has gained significant momentum in recent years.

Photovoltaic (PV) devices, or solar cells, gather sunlight and transform the energy directly into electricity. Improved materials and lowered costs have made photovoltaics an attractive energy option, particularly in sun-drenched states like Arizona, with large solar arrays covering multiple acres capable of powering thousands of homes.

"But just having access to solar power using photovoltaics is not enough," Moore notes. Many renewables like sunlight and wind power are not always available, so storage of intermittent sources is a key part of any future technology to meet global human energy demands on a large scale.

As Moore explains, borrowing a page from Nature's handbook may help researchers harness the sun's radiant energy to generate sustainable fuels. "One thing is clear," Moore says. "We are likely to continue using fuels as part of our energy infrastructure for the foreseeable future, especially for applications involving ground and air transportation. That's where the bioinspired part of our research becomes particularly relevant--looking to Nature for hints as to how we might develop new technologies for producing fuels that are carbon free or neutral."

Solar flair

One of Nature's more impressive tricks involves the use of sunlight to produce energy-rich chemicals, a process mastered billions of years ago by plants and other photosynthetic organisms. "In this process, light is absorbed, and the energy is used to drive a series of complex biochemical transformations that ultimately produce the foods we eat and, over long geological time scales, the fuels that run our modern society," Moore says.

In the current study, the group analyzed key variables governing the efficiency of chemical reactions used to produce fuel through various artificial devices. "In this paper, we've developed a kinetic model to describe the interplay between light absorption at the semiconductor surface, charge migration within the semiconductor, charge transfer to our catalyst layer and then the chemical catalysis step," said Wadsworth.

The model the group developed is based on a similar framework governing enzyme behavior, known as Michaelis-Menten kinetics, which describes the relationship between enzymatic reaction rates and the medium in which the reaction takes place (or substrate). Here, this model is applied to technological devices combining light-harvesting semiconductors and catalytic materials for fuel formation.

"We describe the fuel-forming activities of these hybrid materials as a function of light intensity and also the potential," Wadsworth says. (Similar Michaelis-Menten-type kinetic models have proven useful in analyzing such phenomena as antigen-antibody binding, DNA-DNA hybridization, and protein-protein interaction.)

In modeling the dynamics of the system, the group made a surprising discovery. "In this particular system we are not limited by how fast the catalyst can drive the chemical reaction," Moore says. "We're limited by the ability to deliver electrons to that catalyst and activate it. That is related to the light intensity striking the surface. Brian, Anna, Diana, and Edgar have shown in their experiments that increasing the light intensity increases the rate of fuel formation."

The discovery has implications for the future design of such devices with an eye toward maximizing their efficiencies. "Simply adding more catalyst to the surface of the hybrid material does not result in greater rates of fuel production. We need to consider the light-absorbing properties of the underpinning semiconductor, which in turn forces us to think more about the selection of the catalyst and how the catalyst interfaces with the light-absorbing component."

Ray of hope

Much work remains to be done before such solar-to-fuels solutions are ready for prime time. Making technologies like these practical for human demands requires efficiency, affordability and stability. "Biological assemblies have the ability to self-repair and reproduce; technological assemblies have been limited in this aspect. It's one area where we can learn more from biology," Moore says.

The task could hardly be more urgent. Global demand for energy is projected to swell from around 17 terawatts today to a staggering 30 terawatts by mid-century. In addition to significant scientific and technological hurdles, Moore stresses that profound policy changes will also be essential. "There's a real question of how we're going to meet our future energy demands. If we're going to do it in an environmentally conscious and egalitarian manner, it's going to take a serious political commitment."

The new research is a step on the long pathway to a sustainable future. The group notes that their findings are important because they are likely relevant to a wide range of chemical transformations involving light-absorbing materials and catalysts. "The key principles, particularly the interplay between illumination intensity, light absorption and catalysis, should apply to other materials as well," Moore says.

Credit: 
Arizona State University

Cometh the hourglass: Why do men prefer a low waist-to-hip ratio?

Male turkeys famously will attempt to mate with a head on a stick. In fact, gobblers prize a snug snood over the whole hen. How far, then, can a man's ideal of a sexual partner be stripped down?

As hopeless romantics we practice a more esoteric eroticism. Nevertheless, there are patterns.

Waist-to-hip ratio (WHR) is a strong predictor of women's physical attractiveness. The 'ideal' value varies, but it is always low relative to men's WHR or the average female WHR. Writing in Frontiers in Psychology, one researcher asks: why?

Euphemisms

Over the last 25 years, research on WHR as an indicator of women's attractiveness has flourished. But its link to female mate value - i.e. how WHR preferences influence a man's reproductive success - is rarely expressed beyond euphemisms like "health" and "fertility" of low-WHR women. (Note: a low or 'narrow' WHR means hips relatively larger than the waist.)
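For concreteness, WHR is simply waist circumference divided by hip circumference; a minimal sketch (function name and example units are this sketch's, not the paper's):

```python
def waist_to_hip_ratio(waist, hip):
    """Waist circumference divided by hip circumference (same units).

    A low or 'narrow' WHR means hips relatively larger than the waist.
    """
    if hip <= 0:
        raise ValueError("hip circumference must be positive")
    return waist / hip
```

A waist of 70 cm on 100 cm hips gives a WHR of 0.7, a typically 'low' value; equal waist and hip measurements give 1.0.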

It is a classic example of "just-so storytelling" in evolutionary explanations of human behavior, says Dr. Jeanne Bovet of Stony Brook University (SUNY).

After combing the literature, Bovet defined specific traits that could link WHR with mate value, to be subjected to empirical scrutiny. She asked: can a man select this trait in a mate, based on her WHR? And will he have more and higher-quality descendants as a consequence?

Sex, age, pregnancy and parity

Most of the mate value-related information provided by WHR is relatively basic, suggests Bovet: "Sex, age, pregnancy and number of children can all be reliably inferred".

WHR is high in children and men. In women, though, WHR drops from around the onset of puberty until early adulthood, then rises again with age and with number of children. A temporary increase in waist size is the only reliable visual cue of current pregnancy. As such, WHR tracks reproductive potential, which is null in prepubertal, pregnant and postmenopausal women; peaks in the twenties; and is unreliable in women with many children or none.

Baby fat

One surprising WHR-related trait enjoys particularly compelling evidence of mate value, however.

That a wider pelvis facilitates delivery of big-brained offspring is a widely accepted idea. Perhaps it was the first to enter your mind in relation to WHR.

Alas though, the mechanical demands of bipedal locomotion strictly limit pelvis size - so that most WHR variance is in fact due to fat storage on the hips and waist. But it appears that the distribution of this fat is likewise a major gatekeeper of brain development.

Fat on the hips, thighs and buttocks is special in women. Even with restricted food intake, the body avoids burning it. But in late pregnancy and lactation, the same fat becomes freely available as the main source of long-chain polyunsaturated fatty acids critical for early brain development. Abdominal fat interferes with this: it inhibits production of the enzyme Δ-5 desaturase, required for synthesis of the fatty acids.

In keeping with this, one study has shown that women with lower WHRs and their children have significantly higher cognitive test scores - and IQ is negatively correlated with birth order, following the loss of gluteofemoral fat with each child.

A moving target

Evolution of preferences for a low WHR in female mates likely involved a number of these traits. Still more - including WHR as a hormone-driven indicator of sexual and maternal behavior, or a warning of abdominal parasites - remain untested.

WHR's correlation with attractiveness might even prove to be an artifact, with some related physical characteristic like hip size alone, or waist/stature ratio, the real object of men's desires.

But whether rapid cultural evolution and reproductive technologies will relax men's preferences for a narrow female WHR - or if 'runaway selection' for once-useful traits, and pursuit of gene-propagating 'sexy daughters', will intensify them - is a story that will unfold deep into the future.

Credit: 
Frontiers

Single atoms as catalysts

image: This is Gareth Parkinson (left) and Zdenek Jakub.

Image: 
TU Wien

They make our cars more environmentally friendly and they are indispensable for the chemical industry: catalysts make certain chemical reactions possible - such as the conversion of CO into CO2 in car exhaust gases - that would otherwise happen very slowly or not at all. Surface physicists at the TU Wien have now achieved an important breakthrough; metal atoms can be placed on a metal oxide surface so that they show exactly the desired chemical behavior. Promising results with iridium atoms have just been published in the renowned journal Angewandte Chemie.

Smaller and smaller - all the way down to the single atom

For car exhaust gases, solid catalysts such as platinum are used. The gas comes into contact with the metal surface, where it reacts with other gas components. "Only the outermost layer of metal atoms can play a role in this process. The gas can never reach the atoms inside the metal so they are basically wasted," says Prof. Gareth Parkinson from the Institute of Applied Physics at TU Wien. It therefore makes sense to construct the catalyst not as a single large block of metal, but in the form of fine granules. This makes the number of active atoms as high as possible. Since many important catalyst materials (such as platinum, gold or palladium) are very expensive, cost is a major issue.
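The cost argument can be made concrete with a toy model: treat a particle as an n × n × n cube of atoms and count how many sit on the surface. The cubic geometry is a simplification of real nanoparticle shapes, but it shows why finer granules waste fewer atoms:

```python
def surface_fraction(n):
    """Fraction of atoms on the surface of an n x n x n cubic cluster.

    Interior atoms form an (n-2)^3 core that the gas can never reach,
    so only the outer shell contributes to catalysis.
    """
    total = n ** 3
    interior = max(n - 2, 0) ** 3
    return (total - interior) / total
```

In this model a 10-atom-wide cluster already wastes about half its atoms (surface fraction 0.488), a 100-atom-wide one wastes over 94 percent, while at n = 1 -- the single-atom limit discussed below -- every atom is active.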

For years, efforts have been made to turn the catalysts into finer and finer particles. In the best case scenario, the catalyst could be made up of individual catalyst atoms, and all would be active in just the right way. This is easier said than done, however. "When metal atoms are deposited on a metal oxide surface, they usually have a very strong tendency to clump together and form nanoparticles," explained Gareth Parkinson.

Instead of attaching the active metal atoms to a surface, it is also possible to incorporate them into a molecule with cleverly selected neighboring atoms. The molecules and reactants are then dissolved into a liquid, and the chemical reactions happen there.

Both variants have advantages and disadvantages. Solid metal catalysts have a higher throughput, and can be run in continuous operation. With liquid catalysts, on the other hand, it is easier to tailor the molecules as required, but the product and the catalyst have to be separated again afterwards.

The best of both worlds

Parkinson's team at TU Wien is working to combine the advantages of both variants: "For years we have been working on processing metal oxide surfaces in a controlled manner and imaging them under the microscope," says Gareth Parkinson. "Thanks to this experience, we are now one of the few laboratories in the world that can incorporate metal atoms into a solid surface in a well-defined way."

In much the same way that liquid catalyst molecules are designed, it is becoming possible to choose the neighboring atoms in the surface that are most favorable from a chemical point of view - and special surface-physics tricks make it possible to incorporate the active metal atoms into a solid matrix on a special iron oxide surface. This can be used, for example, to convert carbon monoxide into carbon dioxide.

Optimal control

"Single atom catalysis is a new, extremely promising field of research," says Gareth Parkinson. "There have already been exciting measurements with such catalysts, but so far it was not really known why they worked so well. Now, for the first time, we have full control over the atomic properties of the surface and can clearly prove this by means of images from the electron microscope".

Credit: 
Vienna University of Technology

European guidelines on lipid control advocate 'lower is better' for cholesterol levels

Paris, France - 31 Aug 2019: Low-density lipoprotein (LDL) cholesterol levels should be lowered as much as possible to prevent cardiovascular disease, especially in high and very high risk patients. That's one of the main messages of the European Society of Cardiology (ESC) and European Atherosclerosis Society (EAS) Guidelines on dyslipidaemias published online today in European Heart Journal,(1) and on the ESC website.(2)

Cardiovascular disease (CVD) is responsible for more than four million deaths in Europe each year. Clogged arteries, known as atherosclerotic CVD, are the main type of disease. The guidelines provide recommendations on how to modify plasma lipid levels through lifestyle and medication to reduce the risk of atherosclerotic CVD.

"There is now overwhelming evidence from experimental, epidemiological, genetic studies, and randomised clinical trials, that higher LDL cholesterol is a potent cause of heart attack and stroke," said Professor Colin Baigent, Chairperson of the guidelines Task Force and director of the MRC Population Health Research Unit, University of Oxford, UK. "Lowering LDL cholesterol reduces risk irrespective of the baseline concentration. It means that in people at very high risk of heart attack or stroke, reducing LDL cholesterol is effective even if they have below average starting levels."

There is no lower limit of LDL cholesterol that is known to be unsafe. The guidelines aim to ensure that the available drugs (statins, ezetimibe, PCSK9 inhibitors) are used as effectively as possible to lower levels in those most at risk. It is recommended that such patients should achieve both a target LDL cholesterol level and a minimum 50% relative reduction.

"This is to ensure that high- or very high-risk patients receive intensive LDL cholesterol lowering therapy irrespective of their baseline level," said Professor Alberico L. Catapano, Chairperson of the guidelines Task Force and professor of pharmacology at the Department of Pharmacological and Biomolecular Sciences, University of Milan, Italy. "Patients who are already close to their target on current treatment will be offered additional treatment that provides a further minimum 50% reduction."

"Statins are very well tolerated, and true 'statin intolerance' is uncommon. Most patients can take a statin regimen," noted Professor François Mach, Chairperson of the guidelines Task Force and head of the Cardiology Department, Geneva University Hospital, Switzerland. "Statins have very few side-effects. These include an increased risk of developing diabetes, and they may rarely cause myopathy. But the benefits of statins greatly outweigh their hazards, even among those at low risk of atherosclerotic CVD."

However, statins are not recommended in pre-menopausal women considering pregnancy or not using adequate contraception. "Although these drugs have not been shown to cause foetal malformations when unintentionally used in the first trimester of pregnancy, women needing a statin should avoid them during any period when they might conceive, as no formal study to address this question has been performed," said Prof Catapano.

The evidence for statin therapy is more limited in patients over 75, though is still consistent with a benefit. The guidelines advise taking level of risk, baseline LDL cholesterol, health status, and the risk of drug interactions into account when deciding whether statins are appropriate in those aged 75 or over.

Revisions have been made to the risk stratification categories so that patients with atherosclerotic CVD, diabetes with target organ damage, familial hypercholesterolaemia, and severe chronic kidney disease are all categorised as very high-risk (and so will be offered intensive LDL-lowering therapy). Treatment goals for a particular risk category apply regardless of whether or not patients have had a heart attack or stroke.

Evidence since the 2016 guidelines suggests that raised Lp(a) is a cause of atherosclerotic CVD, and patients with genetically elevated Lp(a) can have similar lifetime risk of heart attack or stroke as those with familial hypercholesterolaemia. Since Lp(a) is largely genetically determined, the guidelines recommend measuring it at least once in adulthood. "Assessment should be around 40 years of age to identify people before they have a heart attack or stroke," said Prof Baigent.

Fish oil supplements (particularly icosapent ethyl) are recommended, in combination with a statin, for patients with hypertriglyceridaemia despite statin treatment. In these patients, supplements reduce the risk of atherosclerotic CVD events, including heart attack and stroke, by about one quarter.

The guidelines advocate a lifetime approach to cardiovascular risk. This means that people of all ages and risk levels should be encouraged to adopt and sustain a healthy lifestyle. "The main requirements are healthy diet, avoidance of cigarette smoking, and regular exercise," said Prof Mach. "There is no evidence that fish oil supplements prevent first heart attacks and strokes, so we did not recommend them for healthy people."

Credit: 
European Society of Cardiology

FEFU scientists developed brand-new rapid strength eco-concrete

image: Cincinnati, United States

Image: 
Ali Morshedlou on Unsplash

The compressive strength of the concrete -- reached 28 days after pouring -- is 2.7 to 3.3 times higher (class B60) than that of traditional concrete mixtures made from similar components. Frost resistance is tripled, from F200 to F600. Water resistance (the pressure under which water permeates the concrete) is more than quadrupled, from W4 to W18.

The new concrete is also more environmentally friendly than traditional samples. No steam-heat treatment is required during pouring, so this stage of construction releases no extra heat to the atmosphere, and energy costs are cut by up to 70 percent.

The technology for manufacturing the new concrete could be implemented at existing plants with minimal expense.

'When designing the composition of the new concrete, we applied the fundamental principles of the modern science of geonics (geomimetics). It studies the similarity of construction materials to natural ones - their nature-likeness. Professor Valery Lesovik of the Belgorod State Technological University, a corresponding member of the Russian Academy of Architecture and Building Sciences, laid the foundation for this science. So far, engineers have failed to match the strength of mountain conglomerates and sandstones: these natural stones are about ten times stronger than concrete, although they have almost the same composition and structure. Our task is to improve the strength of new building materials, bringing their characteristics closer to natural ones through new technologies. Right now we are capable of creating concrete several times stronger than that obtained with old technologies', said Lieutenant Colonel Roman Fediuk, associate professor of the Training Military Center of FEFU and winner of the XIII All-Russian contest "Engineer of the Year 2018".

Fediuk went on to explain that the components for the new concrete were selected for similarity in their chemical composition and physical and mechanical characteristics. According to the principles of geonics, this similarity is best achieved when the stone, sand, cement, and water - all traditional components of concrete - are obtained in the same geographical area. It is therefore cost-effective to produce the components of the concrete mixture in the same region where the concrete itself will be produced.

The engineers also abandoned the excessive use of water in producing the new concrete. Water usually increases the fluidity of the concrete mix; however, as it dries it leaves cracks that reduce the concrete's strength. In the new composition, all additional water is replaced with fifth-generation superplasticizers. These substances cause the particles of the concrete mixture to repel one another, which increases fluidity, workability and other qualities useful for construction engineering.

The next important step in manufacturing the new concrete is mechanochemical activation: the components are mixed and ground at high speed in a rotary pulsation apparatus, a special concrete mixer. Thanks to the fine grinding of the particles, a greater amount of artificial stone is obtained from each unit volume of the mixture.

The rapid early strength of the new concrete makes it possible to remove the formwork from cast structures in three to seven days, instead of the usual 28, and reuse it at new stages of pouring. The concrete itself still takes 28 days to reach its full design strength.

Roman Fediuk summed up that rapid-strength concrete similar to the new one can currently be designed using traditional methods, but with such drawbacks as cost-inefficiency and harm to the environment: achieving the same rapid strength traditionally requires a larger amount of expensive higher-quality cement, and cement manufacturing ranks second in the world in terms of greenhouse gas emissions.

The composition of the new concrete was developed through inter-university collaboration between FEFU and KGASU. Ruslan Ibragimov, Head of the Department of Construction Technology at KGASU, took part in the project.

Credit: 
Far Eastern Federal University

Scientists discover evidence for past high-level sea rise

image: A closeup of the bulbous stalactitic feature of a phreatic overgrowth on speleothems (POS).

Image: 
University of New Mexico

An international team of scientists, studying evidence preserved in speleothems in a coastal cave, has shown that more than three million years ago - a time when the Earth was two to three degrees Celsius warmer than in the pre-industrial era - sea level was as much as 16 meters higher than today. The findings have significant implications for understanding and predicting the pace of present-day sea level rise amid a warming climate.

The scientists, including Professor Yemane Asmerom and Sr. Research Scientist Victor Polyak from The University of New Mexico, the University of South Florida, Universitat de les Illes Balears and Columbia University, published their findings in today's edition of the journal Nature. The analysis of deposits from Artà Cave on the island of Mallorca in the western Mediterranean Sea produced sea levels that serve as a target for future studies of ice sheet stability, ice sheet model calibrations and projections of future sea level rise, the scientists said.

Sea level rises as a result of melting ice sheets, such as those that cover Greenland and Antarctica. However, how much and how fast sea level will rise during warming is a question scientists have worked to answer. Reconstructing ice sheet and sea-level changes during past periods when the climate was naturally warmer than today provides an Earth-scale laboratory experiment for studying this question, according to USF Ph.D. student Oana Dumitru, the lead author, who did much of her dating work at UNM under the guidance of Asmerom and Polyak.

"Constraining models for sea level rise due to increased warming critically depends on actual measurements of past sea level," said Polyak. "This study provides very robust measurements of sea level heights during the Pliocene."

"We can use knowledge gained from past warm periods to tune ice sheet models that are then used to predict future ice sheet response to current global warming," said USF Department of Geosciences Professor Bogdan Onac.

The project focused on cave deposits known as phreatic overgrowths on speleothems. These deposits formed in coastal caves at the interface between brackish water and cave air each time the ancient caves were flooded by rising sea levels. In Artà Cave, which lies within 100 meters of the coast, the water table is - and was in the past - coincident with sea level, says Professor Joan J. Fornós of Universitat de les Illes Balears.

The scientists discovered, analyzed, and interpreted six of the geologic formations found at elevations of 22.5 to 32 meters above present sea level. Careful sampling and laboratory analyses of 70 samples yielded ages ranging from 4.4 to 3.3 million years before present, indicating that the cave deposits formed during the Pliocene epoch. The ages were determined using uranium-lead radiometric dating in UNM's Radiogenic Isotope Laboratory.
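The uranium-lead method rests on the standard radioactive decay equation: the ratio of radiogenic daughter to remaining parent grows as e^(λt) − 1. Below is a simplified single-decay-chain sketch; real U-Pb dating also uses the 235U/207Pb pair and corrects for any initial lead:

```python
import math

LAMBDA_238U = 1.55125e-10  # decay constant of 238U, per year

def u_pb_age(pb206_u238_ratio):
    """Age in years from a measured radiogenic 206Pb/238U atomic ratio.

    Inverts the decay equation D/P = exp(lambda * t) - 1.
    A simplified illustration, not the lab's full procedure.
    """
    return math.log(pb206_u238_ratio + 1.0) / LAMBDA_238U
```

For example, a measured 206Pb/238U ratio of about 5.12 × 10⁻⁴ corresponds to an age of roughly 3.3 million years, the young end of the range reported here.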

"This was a unique convergence between an ideally-suited natural setting worked out by the team of cave scientists and the technical developments we have achieved over the years in our lab at The University of New Mexico," said Asmerom. "Judicious investments in instrumentation and techniques result in these kinds of high-impact dividends."

"Sea level changes at Artà Cave can be caused by the melting and growing of ice sheets or by uplift or subsidence of the island itself," said Columbia University Assistant Professor Jacky Austermann, a member of the research team. She used numerical and statistical models to carefully analyze how much uplift or subsidence might have happened since the Pliocene and subtracted this from the elevation of the formations they investigated.

One key interval of particular interest during the Pliocene is the mid-Piacenzian Warm Period - some 3.264 to 3.025 million years ago - when temperatures were 2 to 3 degrees Celsius higher than pre-industrial levels. "The interval also marks the last time the Earth's atmospheric CO2 was as high as today, providing important clues about what the future holds in the face of current anthropogenic warming," Onac says.

This study found that during this period, global mean sea level was as high as 16.2 meters (with an uncertainty range of 5.6 to 19.2 meters) above present. This means that even if atmospheric CO2 stabilizes around current levels, the global mean sea level would still likely rise at least that high, if not higher, the scientists concluded. In fact, it is likely to rise higher because of the increase in the volume of the oceans due to rising temperature.

"Considering the present-day melt patterns, this extent of sea level rise would most likely be caused by a collapse of both Greenland and the West Antarctic ice sheets," Dumitru said.

The authors also measured sea level at 23.5 meters above present about four million years ago, during the Pliocene Climatic Optimum, when global mean temperatures were up to 4°C higher than pre-industrial levels. "This is a possible scenario if active and aggressive reduction of greenhouse gas emissions is not undertaken," Asmerom said.

Credit: 
University of New Mexico

UCI scientist identifies cone snail's strike as one of the quickest in the animal kingdom

image: This is Ecology and Evolutionary Biology Associate Professor Emanuel Azizi in his research lab.

Image: 
Shannon Cottrell, University of California, Irvine

Irvine, Calif., Aug. 30, 2019: Using ultra-high-speed videography, Ecology and Evolutionary Biology Associate Professor Emanuel Azizi and colleagues from Occidental College in Los Angeles have shed light on the hunting mechanism of the cone snail Conus catus. In findings published online in Current Biology, the researchers identify the snail's hydraulically propelled feeding structure as producing the quickest movement among mollusks by an order of magnitude.

Most people may not equate snails with speed, but members of the aquatic species C. catus possess some of the quickest movements in the animal kingdom. While many land snails use their radula, or feeding structure, to munch on plants, C. catus uses its chitinous radula to catch fast-moving fish and other marine animals with remarkable speed. Professor Azizi and his colleagues were interested in determining just how fast this harpooning radula could function.

"When studying movement in animals, we found that latch and muscular sphincter structures like the one found in the cone snail's hydraulically propelled radula are capable of producing movements at remarkable speeds. By evaluating the anatomy and functional limits of these structures, we hope to uncover insights into how they evolve and how their design could inspire new forms for robots or medical devices," said Professor Azizi.

When searching for food, cone snails use the radula as both a projectile and a conduit for delivering powerful venom. Scientists believe the high speed of the movement is necessary to deliver the venom quickly enough to beat the escape time of potential prey, which include fast-swimming fish. Using high-speed videography, the researchers determined that the radular harpoon can be propelled into prey within 100 microseconds, with a peak acceleration exceeding 280,000 m/s2 and a maximal acceleration exceeding 400,000 m/s2. These accelerations are comparable to those of a fired bullet.
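A back-of-the-envelope kinematics check puts these numbers in scale. Assuming, purely for simplicity, a constant acceleration (the true acceleration profile is certainly not constant):

```python
def constant_accel_kinematics(a, t):
    """Velocity (m/s) and distance (m) reached from rest under a
    constant acceleration a (m/s^2) sustained for time t (s).

    A rough order-of-magnitude check, not the strike's real profile.
    """
    v = a * t            # final velocity
    d = 0.5 * a * t ** 2  # distance covered
    return v, d
```

At 280,000 m/s2 for 100 microseconds, this gives a final velocity of about 28 m/s reached over only about 1.4 millimeters of travel -- far slower than a bullet in flight, but with bullet-like acceleration.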

"We are still somewhat puzzled by the fact that cone snails are so darn fast despite the fact that their prey are two orders of magnitude slower," says Professor Azizi. "We are continuing to work on the species, and are following up on potential reasons for such extraordinary speeds."

Credit: 
University of California - Irvine

MIT's fleet of autonomous boats can now shapeshift

image: MIT's fleet of robotic boats has been updated with new capabilities to 'shapeshift,' by autonomously disconnecting and reassembling into different configurations to form various floating platforms in the canals of Amsterdam. In experiments in a pool, the boats rearranged themselves from a connected straight line into an 'L' (shown here) and other shapes.

Image: 
Courtesy of the researchers/MIT

MIT's fleet of robotic boats has been updated with new capabilities to "shapeshift," by autonomously disconnecting and reassembling into a variety of configurations, to form floating structures in Amsterdam's many canals.

The autonomous boats -- rectangular hulls equipped with sensors, thrusters, microcontrollers, GPS modules, cameras, and other hardware -- are being developed as part of the ongoing "Roboat" project between MIT and the Amsterdam Institute for Advanced Metropolitan Solutions (AMS Institute). The project is led by MIT professors Carlo Ratti, Daniela Rus, Dennis Frenchman, and Andrew Whittle. In the future, Amsterdam wants the roboats to cruise its 165 winding canals, transporting goods and people, collecting trash, or self-assembling into "pop-up" platforms -- such as bridges and stages -- to help relieve congestion on the city's busy streets.

In 2016, MIT researchers tested a roboat prototype that could move forward, backward, and laterally along a preprogrammed path in the canals. Last year, researchers designed low-cost, 3-D-printed, one-quarter scale versions of the boats, which were more efficient and agile, and came equipped with advanced trajectory-tracking algorithms. In June, they created an autonomous latching mechanism that let the boats target and clasp onto each other, and keep trying if they fail.

In a new paper presented last week at the IEEE International Symposium on Multi-Robot and Multi-Agent Systems, the researchers describe an algorithm that enables the roboats to smoothly reshape themselves as efficiently as possible. The algorithm handles all the planning and tracking that enables groups of roboat units to unlatch from one another in one set configuration, travel a collision-free path, and reattach at their appropriate spots in the new set configuration.

In demonstrations in an MIT pool and in computer simulations, groups of linked roboat units rearranged themselves from straight lines or squares into other configurations, such as rectangles and "L" shapes. The experimental transformations only took a few minutes. More complex shapeshifts may take longer, depending on the number of moving units -- which could be dozens -- and differences between the two shapes.

"We've enabled the roboats to now make and break connections with other roboats, with hopes of moving activities on the streets of Amsterdam to the water," says Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. "A set of boats can come together to form linear shapes as pop-up bridges, if we need to send materials or people from one side of a canal to the other. Or, we can create pop-up wider platforms for flower or food markets."

Joining Rus on the paper are: Ratti, director of MIT's Senseable City Lab, and, also from the lab, first author Banti Gheneti, Ryan Kelly, and Drew Meyers, all researchers; postdoc Shinkyu Park; and research fellow Pietro Leoni.

Collision-free trajectories

For their work, the researchers had to tackle challenges with autonomously planning, tracking, and connecting groups of roboat units. Giving each unit the ability to, for instance, locate the others, negotiate how to break apart and re-form, and then move around freely would require complex communication and control techniques that could make movement inefficient and slow.

To enable smoother operations, the researchers developed two types of units: coordinators and workers. One or more workers connect to one coordinator to form a single entity, called a "connected-vessel platform" (CVP). All coordinator and worker units have four propellers, a wireless-enabled microcontroller, and several automated latching mechanisms and sensing systems that enable them to link together.

Coordinators, however, also come equipped with GPS for navigation, and an inertial measurement unit (IMU), which computes localization, pose, and velocity. Workers only have actuators that help the CVP steer along a path. Each coordinator is aware of and can wirelessly communicate with all connected workers. Structures comprise multiple CVPs, and individual CVPs can latch onto one another to form a larger entity.
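The coordinator/worker split described above can be pictured with a minimal sketch. The class names, fields, and `broadcast` method here are hypothetical illustrations, not the project's actual software; the sketch only mirrors the division of hardware the article describes (GPS and IMU on the coordinator, actuators on the workers):

```python
from dataclasses import dataclass, field

# Hypothetical sketch: only the coordinator carries GPS and an IMU;
# workers contribute actuators and take wireless commands from it.

@dataclass
class Worker:
    propellers: int = 4                  # actuators used to steer the CVP

@dataclass
class Coordinator:
    propellers: int = 4
    has_gps: bool = True                 # navigation
    has_imu: bool = True                 # localization, pose, velocity
    workers: list = field(default_factory=list)

    def broadcast(self, command):
        """Relay a command wirelessly to every linked worker."""
        return [(w, command) for w in self.workers]

# A "connected-vessel platform" (CVP): one coordinator plus its workers.
cvp = Coordinator(workers=[Worker(), Worker()])
```

Because only the coordinator senses its pose, a CVP behaves as a single vessel: the workers never plan, they only actuate.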

During shapeshifting, the connected CVPs in a structure compare the geometric differences between the structure's initial shape and its new shape. Each CVP then determines whether it stays in the same spot or needs to move, and each moving CVP is assigned a time to disassemble and a new position in the new shape.
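The assignment step above can be sketched in a few lines. This is an illustrative simplification, not the paper's algorithm: it assumes each shape is given as a set of grid slots, pairs leftover units with vacant slots in sorted order, and staggers departure times so units leave one at a time (all names hypothetical):

```python
# Hypothetical sketch of the shape-difference step: CVPs occupying a
# slot in both shapes stay put; the rest are paired with vacant target
# slots and given staggered disassembly times.

def plan_reconfiguration(initial, target, step=1.0):
    """initial, target: sets of (x, y) slots occupied by CVPs."""
    stay = initial & target                 # CVPs already in place
    movers = sorted(initial - target)       # CVPs that must relocate
    vacant = sorted(target - initial)       # open slots in the new shape
    plan = []
    for i, (src, dst) in enumerate(zip(movers, vacant)):
        plan.append({"from": src, "to": dst, "depart_t": i * step})
    return stay, plan

stay, plan = plan_reconfiguration(
    initial={(0, 0), (1, 0), (2, 0)},       # connected straight line
    target={(0, 0), (1, 0), (1, 1)},        # "L" shape
)
```

In this toy run, two units stay latched and only the end unit detaches and swings around to form the "L", echoing the pool experiments described below.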

Each CVP uses a custom trajectory-planning technique to compute a way to reach its target position without interruption, while optimizing the route for speed. To do so, each CVP precomputes all collision-free regions around the moving CVP as it rotates and moves away from a stationary one.

After precomputing those collision-free regions, the CVP finds the shortest trajectory to its final destination that still keeps it from hitting the stationary unit. Notably, optimization techniques make the whole trajectory-planning process very efficient, with the precomputation taking little more than 100 milliseconds to find and refine safe paths. Using data from the GPS and IMU, the coordinator then estimates its pose and velocity at its center of mass and wirelessly controls the propellers of each unit to move the CVP into its target location.
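A heavily simplified version of this planning step can be written as follows. This is not the researchers' optimizer; it is a sketch under the assumption that the stationary CVP is modeled as a single safety disc, with the planner taking the straight route when it is clear and otherwise detouring through one offset waypoint (all function names hypothetical):

```python
import math

# Hypothetical sketch: model the stationary CVP as a safety disc, check
# whether the straight start->goal segment clears it, and if not, detour
# via a waypoint offset perpendicular to the segment.

def segment_clears(p, q, center, radius):
    """True if segment p-q stays outside the disc (center, radius)."""
    px, py = p; qx, qy = q; cx, cy = center
    dx, dy = qx - px, qy - py
    if dx == 0 and dy == 0:
        return math.hypot(px - cx, py - cy) > radius
    # Parameter of the closest point on the segment to the disc center.
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)))
    nx, ny = px + t * dx, py + t * dy
    return math.hypot(nx - cx, ny - cy) > radius

def plan_path(start, goal, obstacle, radius):
    if segment_clears(start, goal, obstacle, radius):
        return [start, goal]              # direct route is collision-free
    ox, oy = obstacle
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    norm = math.hypot(dx, dy)
    px, py = -dy / norm, dx / norm        # unit normal to the segment
    for sign in (1, -1):                  # try both sides, keep the first clear one
        wp = (ox + sign * 1.5 * radius * px, oy + sign * 1.5 * radius * py)
        if segment_clears(start, wp, obstacle, radius) and \
           segment_clears(wp, goal, obstacle, radius):
            return [start, wp, goal]
    return None                           # no single-waypoint detour found
```

The real system additionally accounts for the moving CVP's rotation and hull footprint when carving out collision-free regions, and optimizes the resulting trajectory for speed rather than just length.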

In their experiments, the researchers tested three-unit CVPs, consisting of one coordinator and two workers, in several different shapeshifting scenarios. Each scenario involved one CVP unlatching from the initial shape and moving and relatching to a target spot around a second CVP.

Three CVPs, for instance, rearranged themselves from a connected straight line -- where they were latched together at their sides -- into a straight line connected at front and back, as well as an "L." In computer simulations, up to 12 roboat units rearranged themselves from, say, a rectangle into a square or from a solid square into a Z-like shape.

Scaling up

Experiments were conducted on quarter-scale roboat units, which measure about 1 meter long and half a meter wide. But the researchers believe their trajectory-planning algorithm will scale well to controlling full-sized units, which will measure about 4 meters long and 2 meters wide.

In about a year, the researchers plan to use the roboats to form a dynamic "bridge" across a 60-meter canal between the NEMO Science Museum in Amsterdam's city center and an area that's under development. The project, called RoundAround, will employ roboats that sail in a continuous circle across the canal, picking up and dropping off passengers at docks and stopping or rerouting when they detect anything in the way. Currently, walking around that waterway takes about 10 minutes, but the bridge could cut that time to around two minutes.

"This will be the world's first bridge comprised of a fleet of autonomous boats," Ratti says. "A regular bridge would be super expensive, because you have boats going through, so you'd need to have a mechanical bridge that opens up or a very high bridge. But we can connect two sides of canal [by using] autonomous boats that become dynamic, responsive architecture that float on the water."

To reach that goal, the researchers are further developing the roboats to ensure they can safely hold people, and are robust to all weather conditions, such as heavy rain. They're also making sure the roboats can effectively connect to the sides of the canals, which can vary greatly in structure and design.

Credit: 
Massachusetts Institute of Technology