Tech

Study finds specialty behavioral health establishments increased, but more needs to be done

Image: 
Indiana University

The number of specialty behavioral health establishments, their workforce and their wages increased steadily between 2011 and 2019, according to a new study by Indiana University and University of Michigan researchers.

The largest increases were in the number of outpatient establishments and the size of their workforce, along with a rise in the average wage at residential establishments.

Researchers say that while these increases help close gaps in needed treatment, more work is required to reduce behavioral health workforce deficits, especially in areas with elevated drug overdose mortality rates.

At the county level, they found that the growth of residential specialty behavioral health establishments was positively and significantly associated with the county's drug mortality rate. They did not observe a similar positive association in other settings, including outpatient clinics and hospitals.

"The good news is that this new data resource we've assembled documents a growth in the number of establishments and in the workforce, meaning an increase in treatment capacity," said Kosali Simon, co-author of the study and a Herman B. Wells Endowed Professor in IU's O'Neill School of Public and Environmental Affairs. "However, the growth in the need for treatment still outpaces available resources, as the number of overdose deaths continues to rise."

The study was published in the Journal of Substance Abuse Treatment. It is the first study to examine recent changes in the specialty behavioral health workforce and the job characteristics, specifically wages, of individuals working in these settings.

"Limited availability of specialty behavioral health providers is often reported as a key barrier to filling treatment gaps," said Thuy Nguyen, research assistant professor at the University of Michigan School of Public Health and lead author of the study. "Through our study, we found increases in the number of establishments, employees and average wages in the treatment sector in recent years, which may indicate that the specialty behavioral health workforce is responding to the increased need for treatment."

According to the study, the number of specialty behavioral health establishments increased 34 percent from 2011, with the largest increase in outpatient establishments (46 percent), compared with 29 percent for hospitals and 22 percent for residential establishments.

Within each type of specialty behavioral health establishment setting, the study found the number of employees also increased considerably: 33 percent in outpatient establishments, 23 percent in residential and 5 percent in hospitals. By comparison, the increase in the total employment in the health care sector was 20 percent.

Using a longitudinal dataset from the U.S. Census Bureau, the study quantified national and county-level changes in specialty behavioral health workforce outcomes and assessed associations between these measures and age-adjusted drug mortality rate. The study described specialty behavioral health workforce outcomes in 3,130 U.S. counties between 2011 and 2019.

The study stratified workforce outcomes, including the number of establishments, likelihood of having establishments, mean number of workers and average wage of workers per county by service settings: outpatient, residential, and hospital.

Credit: 
Indiana University

THOR: Driving collaboration in heavy-ion collision research

In the universe's earliest moments, particles existed in an unimaginably hot plasma, whose behaviour was governed by deeply complex webs of interaction between individual particles. Today, researchers can recreate these exotic conditions through high-energy collisions between heavy ions, whose products can tell us much about how hot, strongly-interacting matter behaves. Yet without extensive, highly coordinated collaborations between researchers across many different backgrounds, studies like this simply wouldn't be possible. This Topical Issue of EPJ A draws together a large collection of papers inspired by the theory of hot matter and relativistic heavy-ion collisions (THOR) European Cooperation in Science and Technology (COST) Action. Running between November 2016 and April 2021, THOR has provided a way for over 300 researchers involved in heavy-ion collision analysis to freely exchange their ideas, leading to exciting new advances in the wider field of particle physics.

Collisions between heavy ions have now been conducted extensively at facilities including CERN's Large Hadron Collider, and Brookhaven National Laboratory's Relativistic Heavy Ion Collider. Through the efforts of the THOR COST Action, collaborations between researchers encompassing a broad range of skill sets have led to innovative new ways to analyse the data gathered by these experiments. Interpreting the aftermath of a heavy-ion collision requires extremely precise statistical analysis - involving techniques ranging from foundational microscopic theories, to computer simulations of real collisions. Inevitably, such a wide array of approaches has generated an expansive network of interrelated results in literature.

The THOR COST Action was founded on the idea that the predictions provided by theories and simulations could be improved upon, by creating a platform for researchers across this network to freely communicate their results. This has enabled the physicists participating in the project to improve their methods, based on the outcomes of other studies taking completely different approaches. Over its 4-year run, THOR has supported 73 Short Term Scientific Missions, involving 301 researchers overall. In addition, the project has placed a strong emphasis on encouraging participation from students, providing unprecedented training opportunities for a future generation of experts.

Credit: 
Springer

DNA circuits

The myriad processes occurring in biological cells may seem unbelievably complex at first glance. And yet, in principle, they are merely a logical succession of events, and could even be used to form digital circuits. Researchers have now developed a molecular switching circuit made of DNA, which can be used to mechanically alter gels, depending on the pH. DNA-based switching circuits could have applications in soft robotics, say the researchers in their article in Angewandte Chemie.

DNA is a long molecule that can be folded and twisted in various ways. It has a backbone and bases that stick out from the backbone and pair up with counterparts in other DNA strands. When a series of these matching pairs comes together, they form a twisted, ladder-like double strand--the familiar DNA double helix. The flexibility of DNA, which makes it possible to produce bends, loops, and a wide variety of other shapes, has inspired researchers to build DNA switches. These switches change shape after receiving an input, and can then affect their surroundings.

Hao Pei from Shanghai Key Laboratory of Green Chemistry and Chemical Processes at the East China Normal University in Shanghai, China, and colleagues have now developed a configurable, multi-mode logic switching network that reacts differently with its surroundings depending on pH and DNA input. All the components of the switching circuit were produced from DNA.

The team developed a series of four DNA switches, each with slightly different lengths and combinations of bases. These variations meant they reacted differently with a single DNA strand depending on the pH of their surroundings. For example, at a slightly alkaline pH of 8, two of the switches formed triple-stranded DNA (triplexes), while the others remained loosely stretched out. These reactions and folds led to secondary reactions, which were utilized by the researchers as logic functions in the switching circuit. The result was, for example, a fluorescent signal that could be read as an output.

To demonstrate the use of the switching circuit in a real mechanical system, the team incorporated the DNA switches into polyacrylamide gels. The DNA acted as a crosslinker, joining the polymer molecules in the gel together. The shorter the crosslinker, or the more folded the DNA, the denser the gel became. Once a piece of DNA with matching bases was added as an input, a logic circuit was set in place, causing the DNA switches to unfold, form triplexes, or relax. The reaction circuit was also dependent on the pH. As a result, certain combinations of DNA input and pH range caused the DNA crosslinker to grow longer and the gel to swell up, in some cases nearly doubling in size.
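The pH- and input-dependent switching described above behaves like a small two-input logic function. The sketch below is purely illustrative: the input-to-state mapping and the named gel states are invented for clarity and are not taken from the Angewandte Chemie paper.

```python
def gel_response(ph_alkaline: bool, dna_input: bool) -> str:
    """Schematic two-input logic for a DNA-crosslinked gel.

    ph_alkaline: True for a slightly alkaline pH (~8), False otherwise.
    dna_input:   True once the matching input strand has been added.
    The output states are illustrative labels, not measured outcomes.
    """
    if ph_alkaline and dna_input:
        return "triplex forms, crosslinker shortens, gel densifies"
    if dna_input:
        return "switch unfolds, crosslinker lengthens, gel swells"
    return "no change"

# Enumerate the truth table of the toy circuit.
for ph in (False, True):
    for strand in (False, True):
        print(ph, strand, "->", gel_response(ph, strand))
```

The point of the sketch is only that pH and DNA input together select among distinct mechanical outcomes, which is what lets the gel act as a readable logic output.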

As DNA switches have almost infinite possibilities for combinations of twists and folds, the researchers consider their switching circuits to be a vital step toward soft matter robotics, where controllable, miniaturized logic functional networks are important.

Credit: 
Wiley

The best strawberries to grow in hot locations

It's strawberry season in many parts of the U.S., and supermarkets are teeming with these fresh heart-shaped treats. Although the bright red, juicy fruit can grow almost anywhere with lots of sunlight, production in some hot, dry regions is a challenge. Now, researchers reporting in ACS' Journal of Agricultural and Food Chemistry have identified five cultivars that are best suited for this climate, which could help farmers and consumers get the most fragrant, sweetest berries.

Most strawberries commercially grown in the U.S. come from California and Florida. With the expansion of local farmer's markets and people's excitement about fresh berries, growers in other states are trying to increase production. In Texas, for example, current commercial operations grow a few of the "day-neutral" and "spring-bearing" varieties that have a potentially high fruit output. But there are hundreds of options, including some that are more heat tolerant, and many factors to consider when choosing cultivars to grow that will produce strawberries appealing to consumers. So, Xiaofen Du and colleagues wanted to determine which ones grow well in Texas' semi-arid, hot environment and have the most desirable berry characteristics -- information that could help growers in similar climates.

The researchers grew 10 common strawberry cultivars in northwest Texas, comparing seven spring-bearing and three day-neutral varieties. First, they monitored plant growth and yields and found eight of the cultivars had plant survival rates of more than 96% before the first harvest. Overall, the day-neutral varieties had the lowest total berry weight per plant. Then, the team measured ripe berries' characteristics, including color, sugar content, acidity and aroma compounds. Their results showed red intensity was not linked to berry sweetness; in fact, the redder varieties had more citric acid, which made them taste more sour than sweet. Taste tests on berry purees showed that desirable flavors were related to the varieties' sugar content and 20 aroma compounds. Perhaps surprisingly, tasters ranked the two varieties that grew the fewest and smallest fruits as having the most intense flavors. The researchers concluded that five cultivars -- Albion, Sweet Charlie, Camarosa, Camino Real and Chandler -- can grow well in Texas' climate and have the best flavor and aroma.

Credit: 
American Chemical Society

Oncotarget: E6-specific inhibitors as therapeutics for HPV+ head and neck carcinomas

image: GA-OH activates p53 and Caspase 8 and induces apoptosis. (A) SCC19, SCC90 and SCC104 were seeded and treated with vehicle, 0.5 μM or 1 μM of GA-OH for 24 hours. Activation of p53 was tested by blotting for p53 expression and activation. Caspase 8 activation and apoptosis were tested by blotting for Caspase 8 expression and activation of downstream targets. (B) Various HPV+ and HPV- cell lines were seeded in 96-well plates and treated with 0.75 μM GA-OH. Caspase 3/7 activity was measured after 24 hours. HPV+ cell lines show high levels of Caspase 3/7 activity.

Image: 
Correspondence to - Penelope J. Duerksen-Hughes - pdhughes@llu.edu

Oncotarget published "A high-content AlphaScreen™ identifies E6-specific small molecule inhibitors as potential therapeutics for HPV+ head and neck squamous cell carcinomas" which reported that the incidence of human papillomavirus-positive head and neck squamous cell carcinoma has increased dramatically over the past decades due to an increase in infection of the oral mucosa by HPV.

The etiology of HPV+ HNSCC is linked to expression of the HPV oncoprotein E6, which influences tumor formation, growth and survival.

E6 effects this oncogenic phenotype in part through inhibitory protein-protein interactions and accelerated degradation of proteins with tumor suppressor properties, such as p53 and caspase 8. Interfering with the binding between E6 and its cellular partners may therefore represent a reasonable pharmacological intervention in HPV+ tumors.

In this study, the authors probed a small-molecule library using AlphaScreen™ technology to discover novel E6 inhibitors.

Further testing of this analog (GA-OH) in a panel of HPV+ and HPV- cell lines showed good potency and a large window of selectivity, as demonstrated by apoptosis induction and significant inhibition of cell growth and survival in HPV+ cells.

Dr. Penelope J. Duerksen-Hughes from Loma Linda University said, "Head and neck squamous cell carcinomas (HNSCC) are heterogeneous tumors that arise in the upper respiratory tract and are the 6th most common cancer worldwide by incidence."

In parallel, the incidence of HPV+ HNSCC has risen dramatically over the same period.

Compared to its HPV- counterpart, HPV+ HNSCC carries a more favorable prognosis and is more prevalent in younger and otherwise healthier patients.

HPV oncoproteins, particularly E6, represent a unique and potentially favorable therapeutic target for HPV+ HNSCC treatment.

These findings are corroborated by studies showing that the absence of p53 and caspase 8 in HNSCC correlates with attenuated sensitivity of HPV+ HNSCC to chemotherapy and radiation.

Consistent with this, genetic tools such as CRISPR and TALEN gene knockouts, RNAi and other agents that indirectly knock down E6 mRNA have demonstrated that depleting the protein abundance of E6 leads to anti-proliferative effects and enhances the response of HPV+ cells to chemotherapy agents and radiation.

The Duerksen-Hughes Research Team concluded in their Oncotarget Research Output, "the evidence we have gathered suggests that GA-OH can serve as a basis for better understanding the mechanism of E6 inhibition, and has the potential for further development to improve potency and drug-likeness. Tools such as computational modelling and medicinal chemistry can utilize this inhibitor as a good starting point for optimization and in understanding important interactions between E6 and its inhibitors."

Credit: 
Impact Journals LLC

The powerhouse future is flexoelectric

image: Kosar Mozaffari is a graduate student at the Cullen College of Engineering at the University of Houston.

Image: 
University of Houston

Researchers have demonstrated "giant flexoelectricity" in soft elastomers that could improve robot movement range and make self-powered pacemakers a real possibility. In a paper published this month in the Proceedings of the National Academy of Sciences, scientists from the University of Houston and Air Force Research Laboratory explain how to engineer ostensibly ordinary substances like silicone rubber into an electric powerhouse.

What do the following have in common: a self-powered implanted medical device, a soft human-like robot and how we hear sound? The answer as to why these disparate technologies and biological phenomena are similar lies in how the materials they are made of can significantly change in size and shape - or deform - like a rubber band, when an electrical signal is sent.

Some materials in nature can perform this function, acting as an energy converter that deforms when an electrical signal is sent through or supplies electricity when manipulated. This is called piezoelectricity and is useful in creating sensors and laser electronics, among several other end uses. However, these naturally occurring materials are rare and consist of stiff crystalline structures that are often toxic, three distinct drawbacks for human applications.

Man-made polymers offer steps toward alleviating these pain points by eliminating material scarcity and creating soft polymers capable of bending and stretching, known as soft elastomers, but previously those soft elastomers lacked significant piezoelectric attributes.

In a paper published this month in the Proceedings of the National Academy of Sciences, Kosar Mozaffari, graduate student at the Cullen College of Engineering at the University of Houston; Pradeep Sharma, M.D. Anderson Chair Professor & Department Chair of Mechanical Engineering at the University of Houston and Matthew Grasinger, LUCI Postdoctoral Fellow at the Air Force Research Laboratory, offer a solution.

"This theory engineers a connection between electricity and mechanical motion in soft rubber-like materials," said Sharma. "While some polymers are weakly piezoelectric, there are no really soft, rubber-like materials that are piezoelectric."

The term for this increased capability in multifunctional soft elastomers is "giant flexoelectricity." In other words, these scientists demonstrate how to boost flexoelectric performance in soft materials.

"Flexoelectricity in most soft rubber materials is quite weak," said Mozaffari, "but by rearranging the chains in unit cells on a molecular level, our theory shows that soft elastomers can attain a greater flexoelectricity of nearly 10,000 times the conventional amount."

The potential uses are profound. Human-like robots made with soft-elastomers that contain increased flexoelectric properties would be capable of a greater range of motion to perform physical tasks. Pacemakers implanted in human hearts and utilizing lithium batteries could instead be self-powered as natural movement generates electrical power.

The mechanics of soft elastomers generating and being manipulated by electrical signals replicates a similar function observed in human ears. Sounds hit the ear drum, which then vibrates and sends electrical signals to the brain, which interprets them. In this case, movement can manipulate soft elastomers and generate electricity to power a device on its own. This process of self-generating power by movement appears as a step up from a typical battery.

The advantages of this new theory stretch beyond just that. In the process of research, the capability to design a unit cell that is stretch invariant - or remains unchanged under unwanted stretch transformation - emerged.

"For some applications we require certain amounts of electricity to be generated regardless of the stretch deformation, whereas with other applications we desire as much electricity generation as possible, and we have designed for both of these cases," said Mozaffari.

"In our research, we discovered a method to make one unit cell stretch invariant. The tunable nature of the flexoelectric direction can be useful for producing soft robots and soft sensors."

In other words, the amount of electric power generated from various physical stimuli can be controlled so that devices perform directed actions. This can moderate the functioning of electronic devices that are self-sufficient.

Next steps include testing this theory in a lab using potential applications. Additionally, efforts to improve on the flexoelectric effect in soft elastomers will be the focus of further study.

Credit: 
University of Houston

Similarity of legs, wheels, tracks suggests target for energy-efficient robots

image: The Legged Locomotion and Movement Adaptation, or LLAMA, is an autonomous quadruped mobility research platform system patterned after a working dog and similar animals. Army researchers designed it to work alongside Soldiers, lighten physical workloads, and increase mobility, protection and lethality. (U.S. Army)

Image: 
U.S. Army Photo

ABERDEEN PROVING GROUND, Md. - A new formula from Army scientists is leading to new insights on how to build an energy-efficient legged teammate for dismounted warfighters.

In a recent peer-reviewed PLOS One paper, the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory's Drs. Alexander Kott, Sean Gart and Jason Pusey offer new insights on building autonomous military robotic legged platforms to operate as efficiently as any other ground mobile systems.

Its use could lead to potentially important changes in Army vehicle development. The scientists said they do not yet know exactly why legged, wheeled and tracked systems fit the same curve, but they are convinced their findings warrant further inquiry.

"If vehicle developers find a certain design would require more power than is currently possible given a variety of real-world constraints, the new formula could point to specific needs for improved power transmission and generation, or to rethink the mass and speed requirements of the vehicle," Gart said.

Inspired by a 1980s formula that shows relationships between the mass, speed and power expenditure of animals, the team developed a new formula that applied to a very broad range of legged, wheeled and tracked systems - such as motor vehicles and ground robots.

Although much of the data has been available for 30 years, this team believes they are the first to actually assemble it and study the relationships that emerge from this data. Their findings show that legged systems are as efficient as wheeled and tracked platforms.

"In the world of unmanned combat aerial vehicles and intelligent munitions, there is a growing role for dismounted infantry that can advance, often for multiple days, and attack in the most cluttered terrain such as mountains, dense forests and urban environments," said Kott, who serves as the laboratory's chief scientist. "That's because such terrain provides the greatest cover and concealment against the unmanned aerial vehicles. That, in turn, demands that dismounted infantry should be assisted by vehicles capable of moving easily in such broken terrain. Legged vehicles - possibly autonomous - would be very helpful."

One of the problems with legged robots, Kott said, is they seem to have poor energy efficiency, which limits teaming with Soldiers in austere battlefields.

"For the past 30 years, U.S. military scientists have addressed a number of challenges in developing autonomous vehicles," said Kott. "Ground vehicles that maneuver on wheels or tracks, and air vehicles that resemble small airplanes (fixed wing) or small helicopters (rotary wing), are now quieter and easier to integrate in troop formations. But for legged platforms, many hurdles remain, and a huge one is making them energy efficient."

Soldiers cannot afford to carry fuel or batteries for "energy-thirsty legged robots," he said.

The paper explores whether artificial ground-mobile systems exhibit a consistent trend among mass, power, and speed.

As a starting point, the team investigated a scaling formula proposed in the 1980s for estimating the mechanical power expended by an animal of a given mass to move at a given speed, and compared this to a range of artificial mechanical systems varying in size, weight and power that are autonomous or driven by humans.
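The scaling formula in question relates mechanical power to mass and speed. The sketch below uses a placeholder power-law model P = c * m^a * v^b with made-up coefficients (not the values from the 1980s formula or the PLOS One paper) to show how such exponents can be read off noiseless data in log space:

```python
import math

# Hypothetical power-law model P = c * m^a * v^b relating mechanical power
# to mass (kg) and speed (m/s). The coefficients are placeholders chosen
# for illustration, not the exponents fitted in the study.
def power(m, v, c=2.0, a=1.0, b=1.5):
    return c * m**a * v**b

# With noiseless data, each exponent falls out of a log-log ratio:
# hold speed fixed and vary mass to recover a, and vice versa for b.
a_hat = math.log(power(200, 5) / power(100, 5)) / math.log(200 / 100)
b_hat = math.log(power(100, 10) / power(100, 5)) / math.log(10 / 5)
print(round(a_hat, 3), round(b_hat, 3))  # recovers 1.0 and 1.5
```

On real measurements from many vehicles and animals, the same idea becomes a least-squares fit of log P against log m and log v, which is how a single curve can be compared across legged, wheeled and tracked systems.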

The team found the answer to their research question: a similar, consistent relationship does in fact also apply to ground-mobile systems, including vehicles of different types over a broad range of masses.

Kott said this relationship surprisingly turned out to be essentially the same for legged, wheeled and tracked systems. These findings suggest that human-made legged platforms should be as efficient as wheeled and tracked platforms, he said.

To conduct this study, the team collected diverse ground mobile system data from a literature review of previous studies and published data sets.

They studied wide ranges of sizes and morphologies within a data set that combined systems including, for example, a 17th-century British cannon, the Ford Model T, the M1 Abrams tank and an Acela train.

Gart said their research is relevant to designing ground mobile systems because it helps designers determine tradeoffs among power, speed and mass for future terrestrial robots for defense applications.

One Army goal is to develop new types of autonomous, or partly autonomous, ground vehicles to deliver supplies to Soldiers in challenging terrains, he said.

"To haul supplies, it must be able to carry a certain weight, or mass, at a certain speed," Gart said.

The formula can approximate the amount of power that vehicle will need, researchers said.

"The Army must develop feasible yet ambitious targets for tradeoffs among the power, speed, and mass of future terrestrial robots," Kott said. "It is undesirable to base such targets on current experience, because military hardware is often developed and used for multiple years and even decades; therefore, the specifiers and designers of such hardware must base their targets - competitive yet achievable - on future technological opportunities not necessarily fully understood at the time of design."

The formula developed in this paper gives such a target and could enable the Army to make predictions of future performance of ground platforms such as legged robots given design constraints like vehicle and motor weight and desired speed, he said.

Credit: 
U.S. Army Research Laboratory

Why short selling is good for the capital markets

image: New evidence shows that short selling has a disciplinary effect on opportunistic insider sales.

Image: 
Singapore Management University

SMU Office of Research & Tech Transfer - Short selling often gets a bad rap because it is a type of trade that bets against the success of a firm. In essence, short selling allows investors to borrow stock from a broker to sell into the market with the hope of buying the stock back at a cheaper price, thus, profiting on the difference between the sell and buy prices. Because of this practice, short selling is sometimes seen as a controversial tactic.
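The mechanics of a short sale described above reduce to simple arithmetic. All numbers in the sketch below are hypothetical:

```python
def short_sale_profit(shares, sell_price, buyback_price, borrow_fee=0.0):
    """Profit of a short sale: sell borrowed shares high, buy them back low.

    A positive result means the short seller gained; a negative result
    means the stock rose and the position lost money.
    """
    return shares * (sell_price - buyback_price) - borrow_fee

# Borrow 100 shares, sell at $50, buy back at $42, pay a $20 borrow fee.
print(short_sale_profit(100, 50.0, 42.0, borrow_fee=20.0))  # 780.0

# If the price rises instead, the short seller loses.
print(short_sale_profit(100, 50.0, 55.0))  # -500.0
```

The second call is why the trade is a bet against the firm: the position only pays off if the price falls.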

Furthermore, speculative short selling attacks are concerning, as they can put downward pressure on the entire stock market. It is for this reason that governments and regulators have stepped in to curtail or ban short selling during times of market stress such as the Global Financial Crisis or, more recently, at the onset of the COVID-19 pandemic.

Contrary to the negativity surrounding short selling, SMU Associate Professor Rencheng Wang told the Office of Research and Tech Transfer: "Quite honestly, there are many benefits of short selling. Short selling can drive market liquidity, price stocks more efficiently, mitigate market bubbles, as well as provide a check on upward market manipulations."

"Because of monetary gains, short sellers are motivated to detect and expose negative news, such as poor firm performance that investors have yet to be informed of, or unethical and opportunistic behaviours taken by managers at the cost of investors. In other words, short sellers are like detectives of the capital markets," Professor Wang adds.

Given that there are still so many unanswered questions about the positive benefits of short selling, Professor Wang and his collaborators decided to probe deeper into the issue. Specifically, Professor Wang wanted to understand how short selling affects the behaviours of the firm's managers and large shareholders.

Short sale regulation

The regulation of short selling had remained relatively intact since its introduction over 60 years earlier. It was believed that the short sale rule had become increasingly susceptible to abuse and inconsistent with market developments, which led the SEC (U.S. Securities and Exchange Commission) to adopt a modified version of the proposed Rule 202T in 2005.

As part of the review of the short sale rule, the SEC introduced a pilot programme for a select group of securities, which represented a third of the securities listed on the Russell 3000 index. Rule 202T referred to a temporary exemption from Rule 202 of Regulation SHO (short selling regulation) that suspended any short sale price test for the select group of securities.

The purpose of this pilot programme was to enable the SEC to study the effects of unrestricted short selling on, among other things, market volatility, price efficiency, and market liquidity.

The research

Given this scenario, Professor Wang was able to take advantage of this temporary rule exemption by designing a quasi-experiment to compare the performance and behaviours of the designated group of securities (pilot firms) with the rest of the securities (control firms) on the Russell 3000 index.

Professor Wang elaborated: "Our intent of conducting this research was not merely to observe the impact of Rule 202T on short selling. Rather, we expect the firm's managers, whom we call insiders, to adjust their behaviours to reflect the increased threat of short selling."

The sample included 974 pilot firms and 1,935 control firms listed on the Russell 3000 index. The research covered a total of 55,002 firm-quarters from the pre-period of Rule 202T (January 2002 - April 2005) to the post-period (May 2005 - July 2007).
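A pilot-versus-control comparison of this kind is typically evaluated as a difference-in-differences. The numbers below are made up solely to illustrate the arithmetic (chosen to echo, not reproduce, the reported reduction in opportunistic insider sales):

```python
# Difference-in-differences on made-up group means of opportunistic insider
# sales per firm-quarter; these numbers are illustrative, not from the study.
pilot_pre, pilot_post = 10.0, 8.9       # pilot firms: before vs. after Rule 202T
control_pre, control_post = 10.0, 10.0  # control firms over the same window

# The DiD estimate nets out the time trend shared by both groups.
did = (pilot_post - pilot_pre) - (control_post - control_pre)
print(round(did, 2))               # -1.1
print(round(did / pilot_pre, 2))   # -0.11, i.e. an 11% drop vs. the pre-period mean
```

Subtracting the control firms' change is what lets the design attribute the remaining drop to the increased short selling threat rather than to market-wide trends.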

In conveying the research findings, Professor Wang was pleased to inform the Office of Research and Tech Transfer: "We saw a reduction of 11 percent in opportunistic insider sales in the pilot firms, which means that short selling has a disciplinary effect on the behaviours of the insiders. And the threat of short selling was more pronounced in deterring insiders whose firms have higher litigation risks, greater reputation concerns and have more insiders with large stock-related holdings."

He added: "When we extended our research to the securities listed on the Chinese (Shanghai and Shenzhen) and Hong Kong Stock Exchanges, we also saw a similar pattern - short sellers can deter unethical insiders from engaging in high volume opportunistic sell offs in stock exchanges that differ in culture, market development, and legal environments - from the New York Stock Exchange to other exchanges in Asia. Thus, we provide new evidence to highlight the importance of short sellers in capital market development and governance reforms across different institutional environments."

Professor Wang's paper on short selling can be found here.

Credit: 
Singapore Management University

Quantum Hall effect and the third dimension

image: Hall resistivity as a function of the applied magnetic field at 2 K, in units of Planck's constant h, the elementary charge e and the Fermi wave vector along the applied magnetic field, kF,z. A sketch of the sample is shown at the top left. The three-dimensional Fermi surface of the electrons in ZrTe5 is shown at the bottom right.

Image: 
© MPI CPfS

The quantum Hall effect traditionally only plays a role in two-dimensional electron systems. Recently, however, a three-dimensional version of the quantum Hall effect was described in the Dirac semimetal ZrTe5. It has been suggested that this version results from a magnetic field-induced Fermi surface instability that transforms the original three-dimensional electron system into a stack of two-dimensional electron systems. Now scientists at the Max Planck Institute for Chemical Physics of Solids in Dresden, at the Technical University of Dresden, at the Brookhaven National Laboratory in New York, at the Helmholtz Center Dresden-Rossendorf, the Max Planck Institute for Microstructure Physics in Halle and at the Würzburg-Dresden Cluster of Excellence ct.qmat were able to show that the electron system of ZrTe5, contrary to the original explanation, remains three-dimensional even in strong magnetic fields and that the quasi-quantization of the Hall effect is nevertheless directly linked to quantum-Hall physics.

The findings from the study of quantum Hall physics in the third dimension can be universally applied to conventional metals and promise a unified explanation of the plateaus that have been observed in Hall measurements in many three-dimensional materials, which were often puzzling in the past. In addition, the concept can be directly applied to generalize the two-dimensional quantum anomalous Hall effect to generic three-dimensional magnets.

The results were published in Nature Communications.

Credit: 
Max Planck Institute for Chemical Physics of Solids

Pacific Northwest National Laboratory's shadow figment technology foils cyberattacks

video: Shadow Figment is a cybersecurity technology designed to protect critical infrastructure like buildings and the electric grid. The technology developed by Pacific Northwest National Laboratory is designed to lure hackers into an artificial world, then stop them from doing damage by feeding them illusory tidbits of success.

Image: 
Animation: Sara Levine | Pacific Northwest National Laboratory

RICHLAND, Wash.--Scientists have created a cybersecurity technology called Shadow Figment that is designed to lure hackers into an artificial world, then stop them from doing damage by feeding them illusory tidbits of success.

The aim is to sequester bad actors by captivating them with an attractive--but imaginary--world.

The technology is aimed at protecting physical targets--infrastructure such as buildings, the electric grid, water and sewage systems, and even pipelines. The technology was developed by scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory.

The starting point for Shadow Figment is an oft-deployed technology called a honeypot--something attractive to lure an attacker, perhaps a desirable target with the appearance of easy access.

But while most honeypots are used to lure attackers and study their methods, Shadow Figment goes much further. The technology uses artificial intelligence to deploy elaborate deception to keep attackers engaged in a pretend world--the figment--that mirrors the real world. The decoy interacts with users in real time, responding in realistic ways to commands.

"Our intention is to make interactions seem realistic, so that if someone is interacting with our decoy, we keep them involved, giving our defenders extra time to respond," said Thomas Edgar, a PNNL cybersecurity researcher who led the development of Shadow Figment.

Exploiting attackers' "success"

The system rewards hackers with false signals of success, keeping them occupied while defenders learn about the attackers' methods and take actions to protect the real system.

The credibility of the deception relies on a machine learning program that learns from observing the real-world system where it is installed. The program responds to an attack by sending signals that illustrate that the system under attack is responding in plausible ways. This "model-driven dynamic deception" is much more realistic than a static decoy, a more common tool that is quickly recognized by experienced cyberattackers.

Shadow Figment spans two worlds that years ago were independent but are now intertwined: the cyber world and the physical world, with elaborate structures that rely on complex industrial control systems. Such systems are in the crosshairs of hackers more than ever before. Examples include the takedown of large portions of the electric grid in Ukraine in 2015, an attack on a Florida water supply earlier this year, and the recent hacking of the Colonial Pipeline that affected gasoline supplies along the East Coast.

Physical systems are so complex and immense that the number of potential targets--valves, controls, pumps, sensors, chillers and so on--is boundless. Thousands of devices work in concert to bring us uninterrupted electricity, clean water and comfortable working conditions. False readings fed into a system maliciously could cause electricity to shut down. They could drive up the temperature in a building to uncomfortable or unsafe levels, or change the concentration of chemicals added to a water supply.

Shadow Figment creates interactive clones of such systems in all their complexity, behaving as experienced operators and cybercriminals would expect. For example, if a hacker turns off a fan in a server room in the artificial world, Shadow Figment responds by signaling that air movement has slowed and the temperature is rising. If a hacker changes a setting on a water boiler, the system adjusts the water flow rate accordingly.
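The article does not publish Shadow Figment's code, so purely as an illustration, here is a minimal toy sketch of a "model-driven" decoy in the spirit of the fan-and-temperature example above: it answers commands with believable replies and evolves its fake physical state plausibly over time. The class name, commands and thermal constants are all invented for this sketch.

```python
class DecoyServerRoom:
    """Toy decoy: answers commands and evolves a fake physical state plausibly."""

    def __init__(self):
        self.fan_on = True
        self.temperature_c = 21.0

    def handle(self, command):
        # Respond to an attacker's command with a believable acknowledgment.
        if command == "fan off":
            self.fan_on = False
            return "OK: fan stopped"
        if command == "fan on":
            self.fan_on = True
            return "OK: fan running"
        if command == "read temp":
            return f"TEMP {self.temperature_c:.1f} C"
        return "ERROR: unknown command"

    def step(self):
        # Simple thermal model: the room heats while unventilated,
        # and relaxes back toward 21 C when the fan runs.
        if self.fan_on:
            self.temperature_c += 0.5 * (21.0 - self.temperature_c)
        else:
            self.temperature_c += 0.8  # degrees per time step

decoy = DecoyServerRoom()
decoy.handle("fan off")       # attacker "succeeds" in stopping the fan
for _ in range(5):
    decoy.step()              # time passes in the figment
print(decoy.handle("read temp"))  # temperature has plausibly risen above 21 C
```

A real deployment, per the article, learns this response model from observing the genuine control system rather than hard-coding it.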

Shadow Figment: undermining ill intent

The intent is to distract bad actors from the real control systems, to funnel them into an artificial system where their actions have no impact.

"We're buying time so the defenders can take action to stop bad things from happening," Edgar said. "Even a few minutes is sometimes all you need to stop an attack. But Shadow Figment needs to be one piece of a broader program of cybersecurity defense. There is no one solution that is a magic bullet."

PNNL has applied for a patent on the technology, which has been licensed to Attivo Networks. Shadow Figment is one of five cybersecurity technologies created by PNNL and packaged together in a suite called PACiFiC.

"The development of Shadow Figment is yet another example of how PNNL scientists are focused on protecting the nation's critical assets and infrastructure," said Kannan Krishnaswami, a commercialization manager at PNNL. "This cybersecurity tool has far-reaching applications in government and private sectors--from city municipalities, to utilities, to banking institutions, manufacturing, and even health providers."

"The development of Shadow Figment illustrates how PNNL technology makes a difference in so many lives," said Kannan Krishnaswami, a commercialization manager at PNNL. "The Laboratory's research provides protection against an array of threats, including cyberattacks."

Credit: 
DOE/Pacific Northwest National Laboratory

Spiders can sniff out and avoid killer ants, SFU study finds

Spiders avoid building webs near European fire ants, their natural predators, by sensing the chemicals they give off in the environment, Simon Fraser University researchers have found.

The findings, published recently in Royal Society Open Science, give us a peek inside the enduring struggle between spiders and ants, and could lead to the development of natural repellents for homeowners worried about unwanted eight-legged guests.

Many ants prey on spiders, suggesting that web-building spiders may avoid locations near ant colonies or frequented by foraging ants. The research team, led by SFU biological sciences PhD candidate Andreas Fischer, hypothesized that spiders instinctively know to avoid building webs in these danger areas by sensing chemical cues left behind by predatory ants.

Researchers tested the theory by exposing filter paper to several species of ants and placing it in one chamber of a multi-chamber habitat housing four different species of spiders. Filter papers without ant semiochemicals were placed in another chamber to see which area the spiders would prefer.

They found that the chemical deposits of European fire ants specifically, which are known to be aggressive omnivorous scavengers and prey on many invertebrates, had a deterrent effect on all tested spider species. The spiders chose to stay in the chamber that had no chemical trace of ants nearby.
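The article does not report the study's sample sizes or statistics, but a chamber-preference result like this is commonly checked with an exact binomial test. As a toy illustration only, with invented counts, here is that calculation using just the standard library:

```python
from math import comb

def binom_p_one_sided(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least k
    spiders settle in the ant-free chamber if they had no preference at all."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts: 18 of 20 spiders settled in the chamber without ant cues.
p_value = binom_p_one_sided(18, 20)
print(f"one-sided p = {p_value:.5f}")  # a small p makes chance an unlikely explanation
```

With counts this lopsided, the p-value falls well below conventional significance thresholds, which is the kind of evidence a deterrence claim rests on.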

Given how much time and energy spiders put into building their webs, Fischer said it makes sense that spiders in the wild would pick locations that have fewer threats to their survival.

Meanwhile, people's fear of spiders has led to the development of many insecticides and other chemical products that claim to repel spiders. Most have proven largely ineffective, because spiders simply abandon their webs and rebuild elsewhere. Harnessing the chemicals given off by spiders' natural predators could help create more effective repellents for homeowners. The researchers warn, however, against using European fire ants themselves as pest control, since the species is invasive.

Credit: 
Simon Fraser University

Printing a better microgrid

image: An illustration of a printed silver microgrid that was featured on a supplemental cover of the journal ACS Applied Electronic Materials

Image: 
Randal McKenzie / LAMP Lab

The future of electronic displays will be thin, flexible and durable. One barrier, however, is that one of the most widely used transparent conductors for electronic displays--indium tin oxide (ITO)--doesn't perform as well over larger areas and can crack and break down with wear. Indium is also relatively scarce, and the process to create ITO requires high energy consumption and expensive equipment.

One emerging alternative is metal "microgrid" conductors. These microgrids can be customized to their application by varying the microgrid width, pitch and thickness, and they can be made with a variety of metals.

New research from the University of Pittsburgh Swanson School of Engineering investigates microgrids printed with particle-free silver inks, demonstrating their advantages over particle-based inks. The paper is published in ACS Applied Electronic Materials and is featured on a supplemental cover of the journal.

"Among the alternatives to ITO being explored, metal microgrids are an attractive option because of their low sheet resistance and high transparency, which is well suited to many optoelectronic applications," explained Paul Leu, Associate Professor of Industrial Engineering, whose Laboratory for Advanced Materials at Pittsburgh (LAMP) conducted the research. "However, because of the fabrication processes available, it's difficult to perfect. Our research focuses on addressing key issues in fabricating silver microgrids using particle-free silver ink, and we found it has some key advantages over particle-based inks."

The project is a continuation of the LAMP lab's collaboration with Electroninks, a technology company in Austin, Texas. The company produces a circuit drawing kit called Circuit Scribe, which uses conductive silver ink to allow users to create working lights with circuits drawn on paper. Circuit Scribe sparked Leu's initial interest in working with the company to develop their particle-free metal ink as a way to address some of the limitations of ITO.

The researchers found that the particle-free fabricated microgrids were more reliable than those printed with particle-based inks, showing better transparent electrode performance, lower roughness and better mechanical durability, which is necessary for flexible displays. To test their durability, the researchers performed several tests, including adhesion, bending and folding tests.

"These microgrids outperformed both particle-based ink-formed microgrids and ITO microgrids in all of our tests," said lead author and PhD student, Ziyu Zhou. "Our research paves the way for better performing, less expensive and more durable displays that don't rely on the mining of rare earth minerals."

In addition to evaluating the microgrids as a replacement for ITO in OLEDs, the team is evaluating them for transparent antennas and electromagnetic interference (EMI) shielding.

Credit: 
University of Pittsburgh

Declining fish biodiversity poses risks for human nutrition

ITHACA, N.Y. - All fish are not created equal, at least when it comes to nutritional benefits.

This truth has important implications for how declining fish biodiversity can affect human nutrition, according to a computer modeling study led by Cornell and Columbia University researchers.

The study, "Declining Diversity of Wild-Caught Species Puts Dietary Nutrient Supplies at Risk," published May 28 in Science Advances, focused on the Loreto region of the Peruvian Amazon, where inland fisheries provide a critical source of nutrition for the 800,000 inhabitants.

At the same time, the findings apply to fish biodiversity worldwide, as more than 2 billion people depend on fish as their primary source of animal-derived nutrients.

"Investing in safeguarding biodiversity can deliver both on maintaining ecosystem function and health, and on food security and fisheries sustainability," said the study's first author Sebastian Heilpern, a presidential postdoctoral scholar in the Department of Natural Resources and the Environment at Cornell University.

Practical steps could include establishing and enforcing "no-take zones" - areas set aside by the government where natural resources can't be extracted - in critical habitat; making sure that fishers adhere to fish size limits; and an increased investment in gathering species data to inform fisheries management policies, especially for inland fisheries.

In Loreto, people eat about 50 kilograms of fish per capita annually, rivaling the highest fish consumption rates in the world and equal to about half the amount of meat an average American consumes each year. Loreto residents eat a wide variety of fish, approximately 60 species, according to catch data. These include large predatory catfish that migrate more than 5,000 kilometers but whose numbers are dwindling due to overfishing and hydropower dams that block their paths. At the same time, the amount of fish caught has remained relatively consistent over time. This could be because people are spending more time fishing, and because smaller, more sedentary species and other predators are filling the voids left by the dwindling populations of larger predators.

"You have this pattern of biodiversity change but a constancy of biomass," Heilpern said. "We wanted to know: How does that affect nutrients that people get from the system?"

In the computer model, the researchers took all these factors into account and ran extinction scenarios, looking at which species are more likely to go extinct, and then which species are likely to replace those to compensate for a void in the ecosystem.

The model tracked seven essential animal-derived nutrients, including protein, iron, zinc, calcium and three omega-3 fatty acids, and simulated how changing fish stocks might affect nutrient levels across the population. While protein content across species is relatively equal, smaller, more sedentary fish have higher omega-3 content. Levels of micronutrients such as zinc and iron can also vary between species.

Simulations revealed risks in the system. For example, when small, sedentary species compensated for declines in large migratory species, fatty acid supplies increased while zinc and iron supplies decreased. The region already suffers from high rates of anemia caused by iron deficiency, which such outcomes could further exacerbate.
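The tradeoff the simulations revealed can be sketched with a toy version of such a model: remove a species, let another compensate the biomass, and recompute the aggregate supply of each nutrient. All species names and nutrient values below are invented for illustration; only the qualitative pattern (protein steady, iron down, omega-3 up) mirrors the study's finding.

```python
# Nutrient content per kg of fish (invented values, for illustration only).
nutrients_per_kg = {
    "large_catfish":   {"protein_g": 180, "iron_mg": 15, "omega3_g": 2},
    "small_sedentary": {"protein_g": 180, "iron_mg": 6,  "omega3_g": 9},
}

def total_supply(catch_kg):
    """Aggregate nutrient supply for a given catch composition (kg per species)."""
    totals = {}
    for species, kg in catch_kg.items():
        for nutrient, amount in nutrients_per_kg[species].items():
            totals[nutrient] = totals.get(nutrient, 0) + amount * kg
    return totals

before = total_supply({"large_catfish": 100, "small_sedentary": 100})
# Extinction scenario: catfish gone, total biomass held constant by the small species.
after = total_supply({"large_catfish": 0, "small_sedentary": 200})

for nutrient in before:
    print(f"{nutrient}: {after[nutrient] - before[nutrient]:+d}")
```

Even with biomass (and protein) unchanged, the micronutrient mix shifts, which is exactly the "constancy of biomass, change in nutrients" pattern Heilpern describes.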

"As you lose biodiversity, you have these tradeoffs that play out in terms of the aggregate quantity of nutrients," Heilpern said. "As you lose species, the system also becomes more and more risky to further shocks."

A related paper published March 19 in Nature Food considered whether other animal-based food sources, such as chicken and aquaculture, could compensate for the loss of biodiversity and dietary nutrients in the same region. The researchers found that those options were inadequate and could not replace the nutrients lost when fish biodiversity declines.

Credit: 
Cornell University

Researchers learn how swimming ducks balance water pressure in their feathers while diving

A team of students working with Jonathan Boreyko, associate professor in mechanical engineering at Virginia Tech, has discovered the method ducks use to suspend water in their feathers while diving, allowing them to shake it out when surfacing. The discovery opens the door for applications in marine technology. Findings were published in ACS Applied Materials & Interfaces.

Boreyko has a well-established body of work in the area of fluid mechanics, including the invention of a fog harp and the use of contained, recirculated steam as a cooling device. As his research has progressed throughout the past decade, the mechanics of duck de-wetting has been one of his longest-running projects.

"I got this idea when I was at Duke University," said Boreyko. "I had a really bad parking spot, but my walk took me right through the scenic Duke Gardens. I passed by ponds with lots of ducks, and I noticed that when a duck comes out of the water, they'd shake their feathers and water would fly off. I realized that what they were doing was a de-wetting transition, releasing water that was partially inside of their feathers. That was the germ of the idea. In my research, purely by coincidence, I was studying the same kind of thing. I realized that these transitions work only if the water isn't allowed to get all the way to the bottom of the porous feather structure."

Boreyko remained intrigued with how the balance was struck, curious about the mechanisms that allow a duck to hold water in its feathers without sinking completely. He brought Farzad Ahmadi into his lab in 2014 as a graduate student, sharing that intrigue in one of their early meetings. Ahmadi picked up the project and dove into the finer details. Their first approach was simple - they attempted to force a single drop of water through a natural duck feather.

"It didn't work," said Ahmadi. "Then we had the idea to build a pressure chamber to force a pool of water through several layers of feathers."

Under pressure

The team first needed to ensure that water could penetrate only directly through the feathers, rather than simply leaking around their outer edges. To achieve this, they sealed each feather, leaving a small area exposed in the same place on each one, creating a column of exposed feather surfaces through the stack. A thin pool of water was poured over the top exposed surface, the stack was placed in a pressure chamber, and gas pressure was used to push the water downward through the feathers. A camera placed at the bottom observed the water as it passed through the layers.

Feathers have micro-sized openings in them, tiny slots that allow pressurized water to pass through. A duck sitting on the surface of a pond isn't encountering any water pressure, so the water penetration is negligible. A duck diving downward, however, encounters a steady increase in hydrostatic pressure, something familiar to anyone taking a dive into the deep end of a pool.

Ahmadi discovered that as the number of feather layers increases, the pressure required to push water through all the layers must also increase. This establishes a kind of baseline, a maximum pressure up to which feathers hold the water entering them, but do not allow the water to reach a duck's skin.

"Our hypothesis was to use multiple layers of feathers so that the water only comes in part way, but there are air pockets under that," Boreyko explained. "As long as those air pockets are present, it prevents something called irreversible wetting. As long as the wetting is only partial, they can shake it out when they surface."

Ahmadi also discovered that duck species tend to have exactly the number of feather layers needed to avoid irreversible wetting during their dives. A mallard, for instance, has four layers of feathers. The maximum depth to which a typical mallard dives corresponds to a hydrostatic pressure that invades a three-feather stack but not a four-feather one. In this way, at least one layer of feathers remains dry after a dive, allowing the duck to shake out the water when it emerges.

Designing synthetic feathers

Having established the foundational mechanics of duck de-wetting, Boreyko's team set out to create a synthetic material that works in a similar way. The team made bio-inspired feathers from a thin sheet of aluminum foil, laser cutting an array of slots one-tenth of a millimeter wide to mimic the barbules of a duck feather. They also re-created the hairy nanostructure of feathers by adding an aluminum nanostructure to the aluminum barbules.

The synthetic feathers produced nearly identical results during testing, a credit to the strength of nature's design. Application and scaling of this technology is a logical next step for Boreyko, and he has a few ideas.

This layer effect may be helpful for trapping air pockets in desalination membranes, mechanisms that remove salt from seawater. Boreyko also thinks there is potential for applying layered synthetic feathers to the exterior of a boat, to make the boat travel more easily through the water and reduce the amount of barnacle-like organisms that cling to the hull.

"If we think of a ship moving over the water as an engineered bird, right now it's swimming naked," Boreyko says. "We wonder if clothing the ship in feathers could impart the same enhancements that waterfowl benefit from."

Credit: 
Virginia Tech

World's smallest, best acoustic amplifier emerges from 50-year-old hypothesis

image: Scientists Matt Eichenfield, left, and Lisa Hackett led the team at Sandia National Laboratories that created the world's smallest and best acoustic amplifier.

Image: 
Bret Latter, Sandia National Laboratories

ALBUQUERQUE, N.M. -- Scientists at Sandia National Laboratories have built the world's smallest and best acoustic amplifier. And they did it using a concept that was all but abandoned for almost 50 years.

According to a paper published May 13 in Nature Communications, the device is more than 10 times more effective than the earlier versions. The design and future research directions hold promise for smaller wireless technology.

Modern cell phones are packed with radios to send and receive phone calls, text messages and high-speed data. The more radios in a device, the more it can do. While most radio components, including amplifiers, are electronic, they can potentially be made smaller and better as acoustic devices. This means they would use sound waves instead of electrons to process radio signals.

"Acoustic wave devices are inherently compact because the wavelengths of sound at these frequencies are so small -- smaller than the diameter of human hair," Sandia scientist Lisa Hackett said. But until now, using sound waves has been impossible for many of these components.
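Hackett's point about small wavelengths is easy to check with λ = v/f. Assuming a typical surface-acoustic-wave velocity of roughly 4,000 m/s (an assumed figure; the actual material in the Sandia device may differ):

```python
def acoustic_wavelength_um(velocity_m_s, frequency_hz):
    """Acoustic wavelength in micrometers: lambda = v / f."""
    return velocity_m_s / frequency_hz * 1e6

v = 4000.0  # m/s, assumed typical acoustic velocity in a piezoelectric material
for f in (276e6, 2e9):
    # Both results come out well under the ~70 um diameter of a human hair.
    print(f"{f / 1e6:.0f} MHz -> {acoustic_wavelength_um(v, f):.2f} um")
```

At 276 MHz the wavelength is around 14 µm, and at 2 GHz around 2 µm, which is why acoustic components can be so much smaller than their electromagnetic counterparts at the same frequency.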

Sandia's acoustic, 276-megahertz amplifier, measuring a mere 0.0008 square inch (0.5 square millimeter), demonstrates the vast, largely untapped potential for making radios smaller through acoustics. To amplify 2 gigahertz frequencies, which carry much of modern cell phone traffic, the device would be even smaller, 0.00003 square inch (0.02 square millimeter), a footprint that would comfortably fit inside a grain of table salt and is more than 10 times smaller than current state-of-the-art technologies.

The team also created the first acoustic circulator, another crucial radio component that separates transmitted and received signals. Together, the petite parts represent an essentially uncharted path toward making all technologies that send and receive information with radio waves smaller and more sophisticated, said Sandia scientist Matt Eichenfield.

"We are the first to show that it's practical to make the functions that are normally being done in the electronic domain in the acoustic domain," Eichenfield said.

Resurrecting a decades-old design

Scientists tried making acoustic radio-frequency amplifiers decades ago, but the last major academic papers from these efforts were published in the 1970s.

Without modern nanofabrication technologies, their devices performed too poorly to be useful. Boosting a signal by a factor of 100 with the old devices required 0.4 inch (1 centimeter) of space and 2,000 volts of electricity. They also generated lots of heat, requiring more than 500 milliwatts of power.

The new amplifier improves on the versions built in the 1970s by more than a factor of 10 in several respects. It can boost signal strength by a factor of 100 within 0.008 inch (0.2 millimeter), using only 36 volts of electricity and 20 milliwatts of power.
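The improvement factors follow directly from the figures quoted above; a quick calculation makes the comparison concrete:

```python
# Specifications quoted in the article for boosting a signal 100-fold.
old = {"length_mm": 10.0, "volts": 2000.0, "power_mw": 500.0}  # 1970s devices
new = {"length_mm": 0.2,  "volts": 36.0,   "power_mw": 20.0}   # Sandia's amplifier

for key in old:
    print(f"{key}: {old[key] / new[key]:.0f}x improvement")
```

The length shrinks about 50-fold, the voltage about 56-fold and the power about 25-fold, each comfortably exceeding the "more than 10 times" headline figure.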

Previous researchers hit a dead end trying to enhance acoustic devices, which are not capable of amplification or circulation on their own, by using layers of semiconductor materials. For their concept to work well, the added material must be very thin and very high quality, but scientists only had techniques to make one or the other.

Decades later, Sandia developed techniques to do both in order to improve photovoltaic cells by adding a series of thin layers of semiconducting materials. The Sandia scientist leading that effort happened to share an office with Eichenfield.

"I had some pretty heavy peripheral exposure. I heard about it all the time in my office," Eichenfield said. "So fast forward probably three years later, I was reading these papers out of curiosity about this acousto-electric amplifier work and reading about what they tried to do, and I realized that this work that Sandia had done to develop these techniques for essentially taking very, very thin semiconductors and transferring them onto other materials was exactly what we would need to make these devices realize all their promise."

Sandia made its amplifier with semiconductor materials that are 83 layers of atoms thick -- 1,000 times thinner than a human hair.

Fusing an ultrathin semiconducting layer onto a dissimilar acoustic device took an intricate process of growing crystals on top of other crystals, bonding them to yet other crystals and then chemically removing 99.99% of the materials to produce a perfectly smooth contact surface. Nanofabrication methods like this are collectively called heterogeneous integration and are a research area of growing interest at Sandia's Microsystems Engineering, Science and Applications complex and throughout the semiconductor industry.

Amplifiers, circulators and filters are normally produced separately because they are dissimilar technologies, but Sandia produced them all on the same acousto-electric chip. The more technologies that can be made on the same chip, the simpler and more efficient manufacturing becomes. The team's research shows that the remaining radio signal processing components could conceivably be made as extensions of the devices already demonstrated.

Work was funded by Sandia's Laboratory Directed Research and Development program and the Center for Integrated Nanotechnologies, a user facility jointly operated by Sandia and Los Alamos national laboratories.

So how long until these petite radio parts are inside your phone? Probably not for a while, Eichenfield said. Converting mass-produced, commercial products like cell phones to all acousto-electric technology would require a massive overhaul of the manufacturing infrastructure, he said. But for small productions of specialized devices, the technology holds more immediate promise.

The Sandia team is now exploring whether they can adapt their technology to improve all-optical signal processing, too. They are also interested in finding out if the technology can help isolate and manipulate single quanta of sound, called phonons, which would potentially make it useful for controlling and making measurements in some quantum computers.

Credit: 
DOE/Sandia National Laboratories