Tech

No, Siri and Alexa are not making us ruder

Is the way we bark out orders to digital assistants like Siri, Alexa and Google Assistant making us less polite? Prompted by growing concerns, two Brigham Young University information systems researchers decided to ask.

"Hey Siri, is the way we talk to you making humans less polite?"

OK, OK, they didn't ask Siri. Or Alexa. Instead they asked 274 people, and after surveying and observing those people, they found some good news: Artificially intelligent digital assistants are not making adult humans ruder to other humans. Yet.

"Worried parents and news outlets alike have fretted about how the personification of digital assistants affects our politeness, yet we have found little reason to worry about adults becoming ruder as a result of ordering around Siri or Alexa," said James Gaskin, associate professor of information systems at BYU. "In other words, there is no need for adults to say "please" and "thank you" when using a digital assistant."

Gaskin and lead author Nathan Burton actually expected to find the opposite -- that the way people treat AIs would make a difference in their lives and interpersonal interactions. According to their assessment, digital assistants in their current form are not personified enough by adult users to affect human-to-human interactions.

But that may not be the case with children. Parental concerns have already prompted both Google and Amazon to make adjustments to their digital assistants, with both now offering features that thank and compliment children when they make requests politely.

Gaskin and Burton did not study children, but assessed young adults, who generally have already formed their behavioral habits. The researchers believe that if they repeated the study with kids, they would find different results.

They also say that as artificial intelligence becomes more anthropomorphic in form, such as the new Vector Robot -- which has expressive eyes, a moving head and arm-like parts -- the effects on human interactions will increase because people will be more likely to perceive the robots as having and understanding emotion.

"The Vector Robot appears to do a good job of embodying a digital assistant in a way that is easily personifiable," Burton said. "If we did the same type of study using a Vector Robot, I believe we would have found a much stronger effect on human interactions."

Credit: 
Brigham Young University

When human expertise improves the work of machines

image: A single crystal sample is loaded onto the measurement stage of a modified atomic force microscope (i.e. piezoresponse force microscope).

Image: 
Rob Felt, Georgia Tech

Machine learning algorithms can sometimes do a better job with a little help from human expertise, at least in the field of materials science.

In many specialized areas of science, engineering and medicine, researchers are turning to machine learning algorithms to analyze data sets that have grown much too large for humans to understand. In materials science, success with this effort could accelerate the design of next-generation advanced functional materials, where development now depends on old-fashioned trial-and-error.

By themselves, however, data analytics techniques borrowed from other research areas often fail to provide the insights needed to help materials scientists and engineers choose which of many variables to adjust -- and can't account for dramatic changes such as the introduction of a new chemical compound into the process. In some complex materials such as ferroelectrics, as many as 10 different factors can affect the properties of the resulting product.

In a paper published this week in the journal npj Computational Materials, researchers explain how to give the machines an edge at solving the challenge by intelligently organizing the data to be analyzed based on human knowledge of what factors are likely to be important and related. Known as dimensional stacking, the technique shows that human experience still has a role to play in the age of machine intelligence.

The research was sponsored by the National Science Foundation and the Defense Threat Reduction Agency, as well as the Swiss National Science Foundation. Measurements were performed, in part, at the Oak Ridge National Laboratory in Oak Ridge, Tennessee.

"When your machine accepts strings of data, it really does matter how you are putting those strings together," said Nazanin Bassiri-Gharb, the paper's corresponding author and a professor in the George W. Woodruff School of Mechanical Engineering at the Georgia Institute of Technology. "We must be mindful that the organization of data before it goes to the algorithm makes a difference. If you don't plug the information in correctly, you will get a result that isn't necessarily correlated with the reality of the physics and chemistry that govern the materials."

Bassiri-Gharb works on ferroelectrics, crystalline materials that exhibit spontaneous electrical polarizations switchable by an external electric field. Widely used for their piezoelectric properties -- which allow electrical inputs to generate mechanical outputs, and mechanical motion to generate electrical voltages -- their chemical formulas are usually complicated, including lead, manganese, niobium, oxygen, titanium, indium, bismuth and other elements.

Researchers, who have been working for decades to improve the materials, would like to develop advanced ferroelectrics that don't include lead. But trial-and-error design techniques haven't led to major breakthroughs, and Bassiri-Gharb is not alone in wanting a more direct approach -- one that could also more rapidly lead to improvements in other functional materials used in microelectronics, batteries, optoelectronic systems and other critical research fields.

"For materials science, things get really complicated, especially with the functional materials," said Bassiri-Gharb. "As materials scientists, it's very difficult to design the materials if we don't understand why a response is increased. We have learned that the functionalities are not compartmentalized. They are interrelated among many properties of the material."

The technique described in the paper involves a preprocessing step in which the large data sets are organized according to physical or chemical properties that make sense to materials scientists.

"As a scientist or engineer, you have an idea whether or not there are physical or chemical correlations," she explained. "You have to be cognizant of what kind of correlations could exist. The way you stack your data to be analyzed would have implications with respect to the physical or chemical correlations. If you do this correctly, you can get more information from any data analytics approach you might be using."

To test the technique, Bassiri-Gharb and collaborators Lee Griffin, Iaroslav Gaponenko, and Shujun Zhang studied samples of relaxor-ferroelectric materials used in advanced ultrasonic imaging equipment. Griffin, a Georgia Tech graduate research assistant and the paper's co-first author, did the experimental measurements. Zhang, a researcher at the University of Wollongong in Australia, provided samples for the study. Bassiri-Gharb and Gaponenko, a research affiliate in her group, developed the approach.

Using a conductive tip on an atomic force microscope, they examined the electromechanical response from a series of chemically related samples, generating as many as 2,500 time- and voltage-dependent measurements on a grid of points established on each sample. The process generated hundreds of thousands of data points and provided a good test for the stacking approach, known technically as concatenation.

"Instead of just looking at the chemical composition that provides the highest response, we looked at a range of compositions and tried to figure out the commonality," she said. "We figured out that if we applied this data stacking with some thought process behind it, we could learn more about these interesting materials."

Among their findings: Though the material is a single crystal, the functional response showed highly disordered behavior, reminiscent of a fully disordered material like glass. "This glassy behavior really is unexpectedly persisting beyond a small percentage of the material compositions," said Bassiri-Gharb. "It is persisting across all of the compositions that we have looked at."

She hopes the technique will ultimately lead to information that will improve many materials and their functionalities. Knowing which chemicals need to be included could allow the materials scientists to move to the next phase -- working with chemists to put the right atoms in the right places.

"The big goal for any materials' functionality is to find the guidelines that will provide the properties we want," she said. "We want to find the straight path to the best compositions for the next generation of these materials."

Credit: 
Georgia Institute of Technology

Researchers demonstrate three-dimensional quantum Hall effect for the first time

image: An illustration of the 3D quantum Hall effect. Under enhanced interaction effects, the electrons form a special charge density wave along the applied magnetic field. The interior becomes insulating, while the conduction is through the surface of the material.

Image: 
Wang Guoyan & He Cong

The quantum Hall effect (QHE), which was previously known for two-dimensional (2D) systems, was predicted to be possible for three-dimensional (3D) systems by Bertrand Halperin in 1987, but the theory was not proven until recently by researchers from the Singapore University of Technology and Design (SUTD) and their research collaborators from around the globe.

The Hall effect, a fundamental technique for material characterization, arises when a magnetic field deflects the flow of electrons sideways and leads to a voltage drop across the transverse direction. In 1980, a surprising observation was made when measuring the Hall effect for a two-dimensional (2D) electron gas trapped in a semiconductor structure - the measured Hall resistivity showed a series of completely flat plateaus, quantized to values with a remarkable accuracy of one part in 10 billion. This became known as the QHE.
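
For readers who want the textbook statement behind that quantization (standard background, not a result of this paper), the 2D plateaus occur at values set only by fundamental constants:

    \[
      \rho_{xy} = \frac{h}{\nu\, e^{2}}, \qquad \nu = 1, 2, 3, \ldots
    \]

Here h is Planck's constant and e is the electron charge, so the steps come in units of h/e^2, roughly 25.8 kilohms, divided by the integer ν, which is why the plateaus are so strikingly reproducible from sample to sample.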

QHE has since revolutionized our fundamental understanding of condensed matter physics, generating a vast field of physics research. Many new emerging topics, such as topological materials, can also be traced back to it.

Soon after its discovery, researchers pursued the possibility of generalizing the QHE from 2D systems to three dimensions (3D). In a seminal paper published in 1987, Bertrand Halperin predicted that such a generalized effect, called the 3D QHE, is indeed possible. From theoretical analysis, he gave signatures for the 3D QHE and pointed out that enhanced interactions between electrons under a magnetic field could be the key to driving a metallic material into the 3D QHE state.

More than 30 years have passed since Halperin's prediction, and despite continuing experimental efforts to realize the 3D QHE, clear evidence has remained elusive because of the stringent conditions required: the material needs to be very pure and have high mobility and low carrier density.

SUTD's experimental collaborator, the Southern University of Science and Technology (SUSTech) in China, has been working on a unique material known as ZrTe5 since 2014. This material is able to satisfy the required conditions and exhibit the signatures of 3D QHE.

In the research paper published in Nature, the researchers show that when the material is cooled to very low temperature while under a moderate magnetic field, its longitudinal resistivity drops to zero, indicating that the material transforms from a metal to an insulator. This is due to the electronic interactions where the electrons redistribute themselves and form a periodic density wave along the magnetic field direction (as illustrated in the image) called the charge density wave.

"This change would usually freeze the electron motion and the material becomes insulating, disallowing the electron to flow through the interior of the material. However, using this unique material, the electrons can move through the surfaces, giving a Hall resistivity quantized by the wavelength of the charge density wave," explained co-author Professor Zhang Liyuan from SUSTech. This in turn proves the first demonstration of the long speculating 3D QHE, pushing the celebrated QHE from 2D to 3D.

"We can expect that the discovery of 3D QHE will lead to new breakthroughs in our knowledge of physics and provide a cornucopia of new physical effects. This new knowledge, in one way or another, will also provide us new opportunities for practical technological development," said co-author, Assistant Professor Yang Shengyuan from SUTD.

Credit: 
Singapore University of Technology and Design

Router guest networks lack adequate security, according to researchers at Ben-Gurion University

BEER-SHEVA...August 15, 2019 - While many organizations and home networks use a host and guest network on the same router hardware to increase security, a new study by Ben-Gurion University indicates that routers from well-known manufacturers are vulnerable to cross-router data leaks through a malicious attack on one of the two separated networks.

According to Adar Ovadya, a master's student in BGU's Department of Software and Information Systems Engineering, "all of the routers we surveyed, regardless of brand or price point, were vulnerable to at least some cross-network communication once we used specially crafted network packets. A hardware-based solution seems to be the safest approach to guaranteeing isolation between secure and non-secure network devices."

The BGU research was presented at the 13th USENIX Workshop on Offensive Technologies (WOOT) in Santa Clara this week.

Most routers sold today offer consumers two or more network options--one for the family, which may connect all the sensitive smart home and IoT devices and computers, and the other for visitors or less sensitive data.

In an organization, data traffic sent may include mission-critical business documents, control data for industrial systems, or private medical information. Less sensitive data may include multimedia streams or environmental sensor readings. Network separation and network isolation are important components of the security policy of many organizations if not mandated as standard practice, for example, in hospitals. The goal of these policies is to prevent network intrusions and information leakage by separating sensitive network segments from other segments of the organizational network, and indeed from the general internet.

In the paper, the researchers demonstrated the existence of different levels of cross-router covert channels, which can be combined and exploited either to control a malicious implant or to exfiltrate or steal data. In some instances, these can be fixed with a simple software patch, but more pervasive covert cross-router communication is impossible to prevent unless the data streams are separated on different hardware.

The USENIX Workshop on Offensive Technologies (WOOT) aims to present a broad picture of offense and its contributions, bringing together researchers and practitioners in all areas of computer security. WOOT provides a forum for high-quality, peer-reviewed work discussing tools and techniques for attack.

All vulnerabilities were previously disclosed to the manufacturers. This research was supported by Israel Science Foundation grants 702/16 and 703/16. Adar Ovadya is co-supervised by Dr. Yossi Oren, a senior lecturer in BGU's Department of Software and Information Systems Engineering and head of the Implementation Security and Side-Channel Attacks Lab at Cyber@BGU, and Dr. Niv Gilboa, from BGU's Department of Communication Systems Engineering. Also contributing to the research were BGU graduate student Rom Ogen and undergraduate student Yakov Mallah.

For more information about this research, please visit: https://orenlab.sise.bgu.ac.il/publications/CrossRouter

Credit: 
American Associates, Ben-Gurion University of the Negev

Stronger graphene oxide 'paper' made with weaker units

Want to make a super strong material from nano-scale building blocks? Start with the highest quality building blocks, right?

Wrong -- at least when working with "flakes" of graphene oxide (GO).

A new study from Northwestern University researchers shows that better GO "paper" can be made by mixing strong, solid GO flakes with weak, porous GO flakes. The finding will aid the production of higher quality GO materials, and it sheds light on a general problem in materials engineering: how to build a nano-scale material into a macroscopic material without losing its desirable properties.

"To put it in human terms, collaboration is very important," said Jiaxing Huang, Northwestern Engineering professor of materials science and engineering, who led the study. "Excellent players can still make a bad team if they don't work well together. Here, we add some seemingly weaker players and they strengthen the whole team."

The research was a four-way collaboration. In addition to Huang's group, three others participated, led by Horacio Espinosa, professor of mechanical engineering at the McCormick School of Engineering; SonBinh Nguyen, professor of chemistry at Northwestern; and Tae Hee Han, a former postdoctoral researcher at Northwestern who is now a professor of organic and nano engineering at Hanyang University in South Korea.

The study was published today in Nature Communications.

High-tech paper

GO is a derivative of graphite that can be used to make the two-dimensional "super material" graphene. Since GO is easier to make, scientists study it as a model material. It generally comes as a dispersion of tiny flakes in water. From one end to the other, each flake is smaller than the width of a human hair and only one nanometer thick.

When a solution of GO flakes is poured onto a filter and the water removed, a thin "paper" is formed, usually a few inches in diameter with a thickness less than or equal to 40 micrometers. Intermolecular forces hold the flakes together, nothing more.

Strength from weakness

Scientists can make strong GO in single layers, but layering the flakes into a paper form doesn't work nearly as well. While testing the effect of holes on the strength of GO flakes, Huang and his collaborators discovered a solution.

Using a mixture of ammonia and hydrogen peroxide, the researchers chemically "etched" holes in the GO flakes. Flakes left soaking for one to three hours were drastically weaker than un-etched flakes. After five hours of soaking, flakes became so weak they couldn't be measured.

Then, the team found something surprising: Paper made from the weakened flakes was stronger than expected. At the single layer level, one-hour-etched porous flakes, for example, were 70 percent weaker than solid flakes, but paper made from those flakes was only 10 percent weaker than paper made from solid flakes.

Things got even more interesting when the team mixed solid and porous flakes together, Huang said. Instead of weakening the paper made solely from solid flakes, the addition of 10 or 25 percent of the weakest flakes strengthened it by about 95 and 70 percent, respectively.

Effective connection

If GO sheets can be likened to aluminum foil, Huang said, making a GO paper is just like stacking the foil up to make a thick aluminum slab. If you start with large sheets of aluminum foil, chances are good that many will wrinkle, impeding tight packing between sheets. On the other hand, smaller sheets don't wrinkle as easily. They pack together well but create tight stacks that don't integrate well with other tight stacks, creating voids within GO paper where it can easily break.

"Weak flakes warp to fill in those voids, which improves the distribution of forces throughout the material," Huang said. "It's a reminder that the strength of individual units is only part of the equation; effective connection and stress distribution is equally important."

This finding will be directly applicable to other two-dimensional materials, like graphene, Huang said, and will also lead to the design of higher quality GO products. He hopes to test it out on GO fibers next.

Credit: 
Northwestern University

Stressed plants must have iron under control

image: Plants have to absorb a large number of nutrients, but often these are available in varying amounts due to changing environmental conditions. They respond by activating different genetic programs in phases of nutrient scarcity. Researchers at HHU found that iron in particular is handled differently.

Image: 
HHU / Rumen Ivanov

Unlike animals, plants cannot move and tap into new resources when there is a scarcity or lack of nutrients. Instead, they have to adapt to the given situation. They do so by activating sets of stress-specific genes. As a result, they essentially reprogram their metabolism.

Plant researchers Dr. Tzvetina Brumbarova and PD Dr. Rumen Ivanov from the HHU Institute of Botany (head: Prof. Dr. Petra Bauer) looked at the stress programmes activated by the model plant Arabidopsis thaliana. They wanted to find out which response strategies are used by the plant, which is also known as 'thale cress'. Brumbarova and Ivanov used a computer-assisted approach to analyse extensive gene expression data from the scientific community.

Based on the analysis, the HHU researchers found that the data revealed some surprising strategies used by Arabidopsis. Firstly, three of the main regulators the authors identified as playing a role in stress responses were already known: they also control the plant's response to light. "This suggests that light can also control nutrient uptake in the subterranean parts of the plant like the roots," explains Tzvetina Brumbarova.

Even more important is the discovery that plants adapt their iron levels in particular when there is a lack of nutrient availability. Rumen Ivanov says: "In stress situations like these, iron can quickly turn from friend to foe. On the one hand, iron is vital for various processes in order for the plant to survive. On the other hand, however, iron can also result in reactive compounds that can cause irreversible damage to the plant when there is a scarcity of nutrients."

Yet another discovery took the researchers by surprise. "In the case of stress, the plant mainly tries to adapt iron import within the cell rather than internal iron redistribution," explains Brumbarova, and goes on to say: "From a cellular perspective, this means that controlling the 'external borders' matters most."

The study, which has now been published online in the journal iScience, is based entirely on available knowledge. Ivanov says: "All the primary data was already available. We simply examined it from a new angle to answer different questions." As a result, the biologists already have a large dataset that can be used to generate new findings.

Credit: 
Heinrich-Heine University Duesseldorf

Skoltech scientists found a way to create long-life fast-charging batteries

A group of researchers led by Skoltech Professor Pavel Troshin studied coordination polymers, a class of compounds with scarcely explored applications in metal-ion batteries, and demonstrated their possible future use in energy storage devices with a high charging/discharging rate and stability. The results of their study were published in the journal Chemistry of Materials.

The charging/discharging rate is one of the key characteristics of lithium-ion batteries. Most modern commercial batteries need at least an hour to get fully charged, which certainly limits the scope of their application, in particular for electric vehicles. The trouble with active materials such as graphite, the most popular anode material, is that their capacity decays significantly as the charging rate increases. To retain the battery capacity at high charging rates, the active electrode materials must have high electronic and ionic conductivity, which is the case with the newly discovered coordination polymers that are derived from aromatic amines and salts of transition metals, such as nickel or copper. Although these compounds hold great promise, their application in lithium-ion batteries remains virtually unexplored.

A recent study undertaken by a group of scientists from Skoltech and the Institute for Problems of Chemical Physics of RAS led by Professor P. Troshin in collaboration with the University of Cologne (Germany) and the Ural Federal University, focused on tetraaminobenzene-based linear polymers of nickel and copper. Although the linear polymers exhibited much lower initial electronic conductivity as compared to their two-dimensional counterparts, it transpired that they can be used as anode materials that get charged/discharged in less than a minute, because their conductivity increases dramatically after the first discharge due to lithium doping.

Additionally, it was found that these anode materials have excellent stability at high charging/discharging rates: they were demonstrated to retain up to 79% of their maximum capacity after as many as 20,000 charging-discharging cycles.

Furthermore, it was discovered that copper-based polymers can be used both as anode and high-capacity cathode materials. The authors point out that there is plenty of opportunity for structure optimization, even though the cathode cannot yet operate in a stable manner. "There are a lot of methods for fine-tuning the characteristics of coordination polymers," explains the first author of the study and Skoltech PhD student, Roman Kapaev. "Actually we deal here with a sort of a construction kit where the parts can be easily changed or replaced. We can modify both the amine structure and the transition metal cation, and by doing so, raise the capacity, increase or decrease the redox potential, improve stability and various other performances. This trail-blazing study touches upon an extensive research area, which, I am sure, has yet a lot to reveal."

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Political campaigns may influence acceptance of violence against women

During the 2016 presidential election, both major party candidates, Hillary Clinton and Donald Trump, ran on polarizing platforms focusing on a few central issues: immigration, Medicare, social issues (e.g., abortion, paid family leave), international trade, and sexism and violence against women.

The first major party nomination of a woman was a triumph, but much like the 2008 election highlighted racism in the U.S., the 2016 election highlighted the role of sexism. Both Clinton and Trump were subject to criticism, from their own parties as well as the opposing ones, regarding their personal histories involving violence against women, making the issue a cornerstone of the presidential election.

Nicole Johnson, assistant professor of counseling psychology at Lehigh University, set out to examine the influence, both positive and negative, of presidential campaigns, voting behavior, and candidate selection on social views of rape culture.

She found in her latest research that political campaigns may in fact influence the acceptance of violence against women.

In the new study led by Johnson, titled "Rape Culture and Candidate Selection in the 2016 U.S. Presidential Election" (10.1093/sp/jxz021) and published this week in the journal Social Politics, she and her colleagues collected and studied data from two samples of college students at the same university, one before and one after the 2016 U.S. presidential election, to determine the effect of political campaigns on how participants viewed the acceptance of violence against women.

Results of the study showed an increase in perceived acceptance of violence and a decrease in perceived acceptance of traditional feminine gender roles in the post-election sample compared with the pre-election sample.

"This means that following the 2016 U.S. Presidential Election participants perceived their peers as more accepting of violence, including violence against women, and less accepting of traditional feminine gender roles. We hypothesized that this may have been due to strong statements endorsing violence during the campaign, as well as the demonstration of a woman (Hillary Clinton) being successful in a perceived male sphere (i.e., Politics)."

Supporters of Democratic candidates--Clinton and Sanders--perceived less overall acceptance of rape culture compared to Trump supporters with specific differences on hostile sexism, hostility toward women, and acceptance of violence. Thus, Johnson says, "Trump supporters perceived their peers as being more accepting of attitudes contributing to violence against women, which has demonstrated predictive power of personal attitudes and actions."

"We expected to find an overall increase in perceived acceptance of rape culture from the pre- to post-election samples, however, the decrease in traditional feminine gender roles, potentially due to the first female major party candidate, may have dampened the overall effect," Johnson said.

She says this research is important because it highlights the potential influence of political campaigns on the acceptance of violence against women and in turn, a critical area for intervention and the creation of public policy.

"Policy makers would benefit from this information in order to inform the creation of policy surrounding political campaigns, particularly those involving women candidates."

She and her co-authors hope these findings will highlight the potential impact of political campaigns and candidate selection on acceptance of violence and prejudice and in turn will inform the development of public policy to increase fairness and safety within politics and culture.

Credit: 
Lehigh University

Early species developed much faster than previously thought, OHIO research shows

image: Building block model of the earth system that produced the Great Ordovician Biodiversification Event. Figure from Stigall et al., 2019.

Image: 
Christian Rasmussen

ATHENS, Ohio (Aug. 15, 2019) - When Earth's species were rapidly diversifying nearly 500 million years ago, that evolution was driven by complex factors including global cooling, more oxygen in the atmosphere, and more nutrients in the oceans. But it took a combination of many global environmental and tectonic changes occurring simultaneously and combining like building blocks to produce rapid diversification into new species, according to a new study by Dr. Alycia Stigall, Professor of Geological Sciences at Ohio University.

She and fellow researchers have narrowed in on a specific time during an era known as the Ordovician Radiation, showing that new species actually developed rapidly during a much shorter time frame than previously thought. The main pulse of the Great Ordovician Biodiversification Event, during which many new species developed, they argue, happened during the Darriwilian Stage about 465 million years ago. Their research, "Coordinated biotic and abiotic change during the Great Ordovician Biodiversification Event: Darriwilian assembly of early Paleozoic building blocks," was published in Palaeogeography, Palaeoclimatology, Palaeoecology as part of a special issue they are editing on the Great Ordovician Biodiversification Event.

New datasets have allowed them to show that what previously looked like species development widespread over time and geography was actually a diversification pulse.

Picture a world before the continents as we know them, when most of the land mass was south of the equator, with only small continents and islands in the vast oceans above the tropics. Then picture ice caps forming over the southern pole. As the ice caps form, the ocean recedes and local, isolated environments form around islands and in seas perched atop continents. In those shallow marine environments, new species develop.

Then picture the ice caps melting and the oceans rising again, with those new species riding the waves of global diversification to populate new regions. The cycle then repeats, producing waves of new species and new dispersals.

Lighting the Spark of Diversification

The early evolution of animal life on Earth is a complex and fascinating subject. The Cambrian Explosion (between about 540 and 510 million years ago) produced a stunning array of body plans, but very few separate species of each, notes Stigall. Nearly 40 million years later, during the Ordovician Period, this situation changed, with a rapid radiation of species and genera during the Great Ordovician Biodiversification Event.

The triggers of the GOBE and processes that promoted diversification have been subject to much debate, but most geoscientists haven't fully considered how changes like global cooling or increased oxygenation would foster increased diversification.

A recent review paper by Stigall and an international team of collaborators attempts to provide clarity on these issues. For this study, Stigall teamed up with Cole Edwards (Appalachian State University), a sedimentary geochemist, and fellow paleontologists Christian Mac Ørum Rasmussen (University of Copenhagen) and Rebecca Freeman (University of Kentucky) to analyze how changes to the physical earth system during the Ordovician could have promoted this rapid increase in diversity.

In their paper, Stigall and colleagues demonstrate that the main pulse of diversification during the GOBE is temporally restricted and occurred in the Middle Ordovician Darriwilian Stage (about 465 million years ago). Many changes to the physical earth system, including oceanic cooling, increased nutrient availability, and increased atmospheric oxygen, accumulated in the interval leading up to the Darriwilian.

These physical changes were necessary building blocks, but on their own were not enough to light the spark of diversification.

The missing ingredient was a method to alternately connect and isolate populations of species through cycles of vicariance and dispersal. That spark finally occurs in the Darriwilian Stage when ice caps form over the south pole of the Ordovician Earth. The waxing and waning of these ice sheets caused sea level to rise and fall (similar to the Pleistocene), which provided the alternate connection and disconnection needed to facilitate rapid diversity accumulation.

Stigall and her collaborators compared this to the assembly of building blocks required to pass a threshold.

Credit: 
Ohio University

Trauma begets trauma: Bullying associated with increased suicide attempts among 12-to-15-year-olds

Washington, DC, August 15, 2019 - A new study in the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP), published by Elsevier, reports that bullying victimization may increase the risk of suicide attempts among young adolescents approximately threefold worldwide.

"Globally, approximately 67,000 adolescents die of suicide each year and identifying modifiable risk factors for adolescent suicide is a public health priority," said lead author Ai Koyanagi, MD, and Research Professor at Parc Sanitari Sant Joan de Deu, Barcelona, Spain.

The findings are based on nationally representative data collected through the World Health Organization's (WHO) Global School-based Student Health Survey, which is a school-based survey conducted in multiple countries across the globe.

The study included 134,229 school-going adolescents aged between 12 and 15 years from 48 countries across five WHO regions, including Africa, the Americas, the Eastern Mediterranean, South-East Asia, and the Western Pacific. The sample comprised nine high-income, 33 middle-income, and six low-income countries.

The researchers found that more than 30 percent of the adolescents experienced bullying in the past 30 days. Adolescents who were bullied were approximately three times more likely to report having attempted suicide than those who were not bullied, regardless of region.

Dr. Koyanagi and her team also found that the greater number of days adolescents reported being bullied, the more likely they were to report a suicide attempt. When compared to participants who were not bullied, being bullied on more than 20 days in the past 30 was associated with a 5.51 times increased likelihood of reporting suicide attempts.

"The high prevalence of bullying victimization and the substantially heightened dose-dependent risk for suicide attempts among adolescent bullying victims, across multiple continents found in our study, point to the urgent need to implement effective and evidence-based interventions to address bullying for the prevention of adolescent suicides and suicide attempts worldwide," concluded Dr. Koyanagi.

Credit: 
Elsevier

New contrast agent could make MRIs safer

BOSTON - Researchers at Massachusetts General Hospital (MGH) have taken a key step forward in developing a new, possibly safer contrast agent for use in magnetic resonance imaging (MRI) exams. Contrast enhanced MRI is a widely used diagnostic tool with over 30 million procedures performed annually. Currently, gadolinium-based contrast agents (GBCAs) are used for this purpose, but recently concerns have been raised about the long-term safety of the gadolinium metal ion. The study's senior author is Eric M. Gale, PhD, assistant in Biomedical Engineering at MGH, and assistant professor in Radiology at Harvard Medical School (HMS).

Results of the study are available online in Investigative Radiology and describe how a novel manganese-based agent (Mn-PyC3A) provides tumor contrast enhancement comparable to that of state-of-the-art GBCAs. The new agent also may be safer than GBCAs, because manganese from Mn-PyC3A is much more quickly and thoroughly cleared from the body than gadolinium from even the 'safest' GBCA.

A key feature of the new agent is that the manganese is tightly bound to a chelator, which prevents it from interacting with cells or proteins in an adverse way and allows rapid elimination from the body after the imaging exam. "Without a chelator of sufficient strength, the manganese will be taken up by the liver and remain in the body," says Peter Caravan, PhD, one of the study's authors, co-director of the Institute for Innovation in Imaging (i3) at MGH, and associate professor of Radiology at HMS. With a strong chelator, the agent is distributed throughout the body and can reveal the location and size of lesions.

While the first GBCA was approved by the U.S. Food and Drug Administration (FDA) in 1988, there are lingering safety concerns about these agents. In 2007 it was determined that GBCAs can cause nephrogenic systemic fibrosis (NSF) when used in patients with kidney disease. NSF is a rare but devastating progressive condition that affects multiple organ systems. As a result, three GBCAs can no longer be used in patients with advanced kidney disease, while the use of other GBCAs is avoided. However, avoiding contrast enhanced MRI makes it harder to provide accurate diagnoses for these patients.

Also, it's been known for several years that residual gadolinium can stay in the body for a very long time after an imaging procedure. Studies have demonstrated that gadolinium levels in the brain and other organs increase with increased exposure to GBCAs. Concerns around gadolinium retention caused the European Medicines Agency to remove several GBCAs from the market in Europe. "No confirmed side-effects have yet been irrefutably linked to the long-term presence of gadolinium in the body. But, since some people are repeatedly exposed to GBCAs, doctors want to be cautious," says Gale. He and Caravan invented Mn-PyC3A as a gadolinium-free contrast agent to address these various safety concerns. They note that unlike gadolinium, manganese is an essential element that is naturally found in the body.

Derek Erstad, MD, clinical fellow in the Department of Surgery at MGH and lead author on the study, points out that "A number of conditions require multiple follow-up scans with GBCAs that result in increased gadolinium exposure. For instance, women with a high risk of breast cancer, brain cancer survivors, or patients with relapsing multiple sclerosis may receive frequent GBCA enhanced MRIs for surveillance."

For their study, the MGH team compared the efficacy of their novel contrast agent Mn-PyC3A to that of two state-of-the-art GBCAs for detecting tumors in mouse models of breast cancer and metastatic liver cancer. They concluded that the tumor contrast enhancement provided by Mn-PyC3A was comparable to the performance of the two GBCAs. They also measured fractional excretion and elimination of Mn-PyC3A in a rat model and compared them with the leading GBCA. In that study, Mn-PyC3A was more completely eliminated than the GBCA.

Credit: 
Massachusetts General Hospital

Fracking has less impact on groundwater than traditional oil and gas production

image: Oil and gas well with brine separator tank in background in southern Ontario, Canada.

Image: 
Jennifer McIntosh

Conventional oil and gas production methods can affect groundwater much more than fracking, according to hydrogeologists Jennifer McIntosh from the University of Arizona and Grant Ferguson from the University of Saskatchewan.

High-volume hydraulic fracturing, known as fracking, injects water, sand and chemicals under high pressure into petroleum-bearing rock formations to recover previously inaccessible oil and natural gas. This method led to the current shale gas boom that started about 15 years ago.

Conventional methods of oil and natural gas production, which have been in use since the late 1800s, also inject water underground to aid in the recovery of oil and natural gas.

"If we want to look at the environmental impacts of oil and gas production, we should look at the impacts of all oil and gas production activities, not just hydraulic fracturing," said McIntosh, a University of Arizona professor of hydrology and atmospheric sciences.

"The amount of water injected and produced for conventional oil and gas production exceeds that associated with fracking and unconventional oil and gas production by well over a factor of 10," she said.

McIntosh and Ferguson looked at how much water was and is being injected underground by petroleum industry activities, how those activities change pressures and water movement underground, and how those practices could contaminate groundwater supplies.

While groundwater use varies by region, about 30% of Canadians and more than 45% of Americans depend on the resource for their municipal, domestic and agricultural needs. In more arid regions of the United States and Canada, surface freshwater supplies are similarly important.

McIntosh and Ferguson found that, because of conventional production activities, there is likely more water in the petroleum-bearing formations now than there was initially.

To push the oil and gas toward extraction wells, the conventional method, known as enhanced oil recovery, injects water into petroleum-bearing rock formations. Saline water is produced as a by-product and is then re-injected, along with additional freshwater, to extract more oil and gas.

However, at the end of the cycle, the excess salt water is disposed of by injecting it into depleted oil fields or deep into geological formations that don't contain oil and gas. That injection of wastewater has changed the behavior of liquids underground and increased the likelihood of contaminated water reaching freshwater aquifers.

Some of the water injected as part of oil and gas production activities is freshwater from the surface or from shallow aquifers. McIntosh said that could affect groundwater and surface water supplies in water-stressed regions such as New Mexico or Texas.

"There's a critical need for long-term -- years to decades -- monitoring for potential contamination of drinking water resources not only from fracking, but also from conventional oil and gas production," McIntosh said.

The team published their paper, "Conventional Oil--The Forgotten Part of the Water-Energy Nexus," online June 30 in the journal Groundwater. Global Water Futures funded the research.

McIntosh has been involved in studies about the environmental impacts of hydraulic fracturing. She started wondering how those impacts compare to the impacts of the conventional methods of oil and gas production -- methods that have been used for about 120 years and continue to be used.

Both fracking and conventional practices use groundwater and surface water when there isn't enough water from other sources to continue petroleum production.

To see how all types of oil and gas production activities affected water use in Canada and the U.S., she and Ferguson synthesized data from a variety of sources. The published scientific studies that were available covered only a few regions. Therefore, the scientists also delved into reports from state agencies and other sources of information.

The researchers found information for the Western Canada Sedimentary Basin, the Permian Basin (located in New Mexico and Texas), the states of Oklahoma, California and Ohio, and the total amount of water produced by high-volume hydraulic fracturing throughout the U.S.

"What was surprising was the amount of water that's being produced and re-injected by conventional oil and gas production compared to hydraulic fracturing," McIntosh said. "In most of the locations we looked at -- California was the exception -- there is more water now in the subsurface than before. There's a net gain of saline water."

There are regulations governing the petroleum industry with regard to groundwater, but information about what is happening underground varies by province and state. Some jurisdictions keep excellent data while for others it's virtually nonexistent. Despite this, Ferguson said he and McIntosh can make some observations.

"I think the general conclusions about water use and potential for contamination are correct, but the details are fuzzy in some areas," Ferguson said. "Alberta probably has better records than most areas, and the Alberta Energy Regulator has produced similar numbers to ours for that region. We saw similar trends for other oil and gas producing regions, but we need better reporting, record keeping and monitoring."

Oil and gas production activities can have environmental effects far from petroleum-producing regions. For example, previous studies show that operating disposal wells can cause detectable seismic activity more than 90 kilometers away. Conventional activities inject lower volumes of water and at lower pressure but take place over longer periods of time, which may cause contamination over greater distances.

Another wild card is the thousands of active, dormant and abandoned wells across North America. Some are leaky or were improperly decommissioned, providing possible pathways for contamination of freshwater aquifers.

While there is some effort to deal with this problem through organizations such as Alberta's Orphan Well Association, there is little consensus as to the size of the problem. Ferguson said depending on which source is cited, the decommissioning price tag ranges from a few billion to a few hundred billion dollars.

A 2017 report from Canada's C.D. Howe Institute indicates that there are 155,000 wells in Alberta yet to be remediated. A 2014 paper by other researchers suggests Pennsylvania alone has at least 300,000 abandoned wells, many of which are "lost" because there are no records of their existence nor is there surface evidence that an oil well was once there.

"We haven't done enough site investigations and monitoring of groundwater to know what the liability really looks like," Ferguson said. "My guess is that some wells probably should be left as is and others are going to need more work to address migration of brines and hydrocarbons from leaks that are decades old."

Credit: 
University of Arizona

UNH technology helps map the way to solve mystery of pilot Amelia Earhart

image: The autonomous surface vehicle known as BEN (Bathymetric Explorer and Navigator), designed and built at the University of New Hampshire, is aboard the EV Nautilus helping researchers solve the disappearance of Amelia Earhart.

Image: 
UNH

DURHAM, N.H.-- Researchers from the University of New Hampshire's Marine School are part of the crew, led by National Geographic Explorer-at-Large Robert Ballard, that is setting out in hopes of finding answers to questions surrounding the disappearance of famed pilot Amelia Earhart. UNH's Center for Coastal and Ocean Mapping has developed an autonomous surface vehicle (ASV), or robot, that can explore the seafloor in waters that may be too deep for divers.

The UNH robot known as BEN, the Bathymetric Explorer and Navigator, provides Ballard and the crew aboard the EV Nautilus with a unique capability to map the seafloor in the shallow areas adjacent to the island where Earhart sent her last radio transmission. This area is too deep for divers and too shallow for safe navigation of the Nautilus to use its deep-water sonar systems. Maps of the ocean floor produced by BEN will be used by the Nautilus crew to target dives with remotely operated vehicles (ROV) in the search for remnants of the plane.

Evidence suggests Amelia Earhart made a successful landing, likely near the coral reef around the island of Nikumaroro in the western Pacific Ocean, and was able to transmit radio signals afterward. However, no plane was seen by Navy pilots surveying the islands several days after her disappearance, suggesting that the plane may have been pushed off the reef into deeper water.

BEN is equipped with state-of-the-art seafloor mapping systems including a Kongsberg EM2040P multibeam echo-sounder and Applanix POS/MV navigation system, which allow it to make 3D topographic and acoustic backscatter maps of the seafloor. The Center has developed mission planning and "back-seat-driver" control software designed specifically for piloting BEN for the seafloor mapping mission. BEN was manufactured by ASV Global, in a design collaboration with the Center.

The UNH crew consists of research engineers Val Schmidt, lead of this operation, KG Fairbarn, and Andy McLeod, who are all aboard the EV Nautilus, as well as Roland Arsenault, who is supporting the crew from shore. All are part of the UNH Marine School's Center for Coastal and Ocean Mapping, which develops and uses robotics for marine science and seafloor mapping.

The expedition will be featured in a two-hour special titled EXPEDITION AMELIA that will premiere October 20 on National Geographic.

Credit: 
University of New Hampshire

Physical and mental exercise lower chances for developing delirium after surgery

After having surgery, many older adults develop delirium, the medical term for sudden and severe confusion. In fact, between 10 and 67 percent of older adults experience delirium after surgery for non-heart-related issues, while 5 to 61 percent experience delirium after orthopedic surgery (surgery dealing with the bones and muscles).

Delirium can lead to problems with thinking and decision-making. It can also make it difficult to be mobile and perform daily functions and can increase the risk for illness and death. Because adults over age 65 undergo more than 18 million surgeries each year, delirium can have a huge impact personally, as well as for families and our communities.

Healthcare providers can use several tools to reduce the chances older adults will develop delirium. Providers can meet with a geriatrician before surgery, review prescribed medications, and make sure glasses and hearing aids are made available after surgery (since difficulty seeing or hearing can contribute to confusion). However, preventing delirium prior to surgery may be the best way to help older adults avoid it.

A team of researchers from Albert Einstein College of Medicine designed a study to see whether older adults who are physically active before having surgery had less delirium after surgery. The research team had previously found that people who enjoy activities such as reading, doing puzzles, or playing games experienced lower rates of delirium. The team published new findings on physical activity in the Journal of the American Geriatrics Society.

The researchers noted that several studies have shown that exercise and physical activity may reduce the risks of developing dementia (another medical condition affecting mental health, usually marked by memory problems, personality changes, and poor thinking ability). What's more, earlier studies have shown that regular exercise can lower the risk of developing delirium by 28 percent.

The participants in this study were adults over 60 years old who were undergoing elective orthopedic surgery. Most participants were around 70 years old. None had delirium, dementia, or severe hearing or vision problems.

The researchers asked participants the question "In the last month, how many days in a week did you participate in exercise or sport?" The researchers noted the type of physical activities the participants did, as well as whether and how often they read newspapers or books, knitted, played cards, board games, or computer games, used e-mail, sang, wrote, did crossword puzzles, played bingo, or participated in group meetings.

The participants said their physical exercise included walking, taking part in physical therapy, lifting weights, cycling, stretching, engaging in competitive sports, and dancing. The most commonly reported activity was walking. Though most participants were only active one day a week, nearly 26 percent were active five to six days a week and 31 percent were active five to seven days a week.

Of the 132 participants, 41 (31 percent) developed postoperative delirium.

The researchers reported that participants who were physically active six to seven days a week had a 73 percent lower chance of experiencing postoperative delirium (delirium that develops after surgery). They also reported that being mentally active was a strong factor in reducing chances of developing postoperative delirium. Participants who regularly read newspapers or books, knitted, played games, used e-mail, sang, wrote, worked crossword puzzles, played bingo, or participated in group meetings had an 81 percent lower chance of developing postoperative delirium.

"While our study was preliminary in nature, we found that modest regular physical activity, as well as performing stimulating mental activities, were associated with lower rates of delirium after surgery," said the researchers. The researchers also found that physical and cognitive activities seemed to offer benefits independent of each other. This suggests that people with activity-limiting injuries or conditions can still benefit from being mentally active, and people with mild cognitive impairment can still benefit from being physically active. The researchers noted that more research is needed to learn about the role of exercise and cognitive training in reducing delirium after surgery.

Credit: 
American Geriatrics Society

Revolutionising the CRISPR method

image: Genes and proteins in cells interact in many different ways. Each dot represents a gene; the lines are their interactions. For the first time, the new method uses biotechnology to influence entire gene networks in one single step.

Image: 
ETH Zurich / Carlo Cosimo Campa

Everyone's talking about CRISPR-Cas. This biotechnological method offers a relatively quick and easy way to manipulate single genes in cells, meaning they can be precisely deleted, replaced or modified. Furthermore, in recent years, researchers have also been using technologies based on CRISPR-Cas to systematically increase or decrease the activity of individual genes. The corresponding methods have become the worldwide standard within a very short time, both in basic biological research and in applied fields such as plant breeding.

To date, for the most part, researchers could modify only one gene at a time using the method. On occasion, they managed two or three in one go; in one particular case, they were able to edit seven genes simultaneously. Now, Professor Randall Platt and his team at the Department of Biosystems Science and Engineering at ETH Zurich in Basel have developed a process that - as they demonstrated in experiments - can modify 25 target sites within genes in a cell at once. As if that were not enough, this number can be increased still further, to dozens or even hundreds of genes, as Platt points out. At any rate, the method offers enormous potential for biomedical research and biotechnology. "Thanks to this new tool, we and other scientists can now achieve what we could only dream of doing in the past."

Targeted, large-scale cell reprogramming

Genes and proteins in cells interact in many different ways. The resulting networks comprising dozens of genes ensure an organism's cellular diversity. For example, they are responsible for differentiating progenitor cells to neuronal cells and immune cells. "Our method enables us, for the first time, to systematically modify entire gene networks in a single step," Platt says.

Moreover, it paves the way for complex, large-scale cell programming. It can be used to increase the activity of certain genes, while reducing that of others. The timing of this change in activity can also be precisely controlled.

This is of interest for basic research, for example in investigating why various types of cells behave differently, or for the study of complex genetic disorders. It will also prove useful for cell replacement therapy, which involves replacing damaged cells with healthy ones. In this case, researchers can use the method to convert stem cells into differentiated cells, such as neuronal cells or insulin-producing beta cells, or vice versa, to produce stem cells from differentiated skin cells.

The dual function of the Cas enzyme

The CRISPR-Cas method requires an enzyme known as a Cas and a small RNA molecule. Its sequence of nucleobases serves as an "address label", directing the enzyme with utmost precision to its designated site of action on the chromosomes. ETH scientists have created a plasmid, or a circular DNA molecule, that stores the blueprint of the Cas enzyme and numerous RNA address molecules, arranged in sequences: in other words, a longer address list. In their experiments, the researchers inserted this plasmid into human cells, thereby demonstrating that several genes can be modified and regulated simultaneously.

For the new technique, the scientists did not use the Cas9 enzyme that has featured in most CRISPR-Cas methods to date, but the related Cas12a enzyme. Not only can it edit genes, it can also cut the long "RNA address list" into individual "address labels" at the same time. Furthermore, Cas12a can handle shorter RNA address molecules than Cas9. "The shorter these addressing sequences are, the more of them we can fit onto a plasmid," Platt says.

Credit: 
ETH Zurich