
Clarifying the economic value of adjusting power consumption

Since the output of renewable energy sources such as photovoltaic generation tends to fluctuate, the power system can be viewed as a large-scale complex system with uncertainty. Stabilizing the balance of electricity supply and demand requires an energy management system. In recent years, energy management systems have been actively researched against the background of electricity market liberalization and the spread of smart meters that visualize power consumption. Koichi Kobayashi, associate professor at Hokkaido University, Shun-ichi Azuma, professor at Nagoya University, Nobuyuki Yamaguchi, associate professor at Tokyo University of Science, and their colleagues developed demand response analysis and control technologies focusing on time-varying power generation costs.

Demand response is one method used in energy management systems. It is defined as consumers reducing or shifting their power consumption, when the supply-demand balance is tight, in response to electricity prices or incentive (reward) payments. Its cost-effectiveness, however, has not been clarified.

The introduction of "aggregators" that control the power consumption of consumers has attracted much attention. In this framework, aggregators mediate between electric power companies and consumers instead of the two trading directly. Aggregators manage hundreds of consumers and control their power consumption in response to requests from electric companies. Introducing aggregators makes control of the whole power system easier.

During a day, the cost-effectiveness of demand response fluctuates depending on the demand and supply of electricity, and this fluctuation is expected to grow as renewable energy spreads. Because demand response is aimed at maintaining the balance between supply and demand, its cost-effectiveness has received little attention. In the future, however, it will be important to evaluate the economic value of demand response by focusing on the power generation cost and the adjustment cost (the cost required to adjust power consumption) at each time, and to develop control strategies that maximize that economic value.

For demand response to produce economic value, the unit price of power generation must fluctuate greatly during the day. If the difference between the highest and lowest generation costs is large compared to the adjustment cost, then demand response produces economic value. More specifically, the researchers derived the condition that demand response produces economic value when the difference between the highest price and the lowest price is more than twice the adjustment cost. Because the condition is simple, it can also serve as a guide for calculating rewards to consumers.
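The derived condition reduces to a one-line check. The sketch below is only an illustration; the prices and adjustment cost are hypothetical numbers, not figures from the study:

```python
def dr_is_profitable(p_max, p_min, adj_cost):
    """Demand response produces economic value when the gap between the
    highest and lowest generation price exceeds twice the adjustment cost."""
    return (p_max - p_min) > 2 * adj_cost

# Hypothetical prices: a gap of 17 beats 2 * 6 = 12, but not 2 * 10 = 20.
print(dr_is_profitable(25.0, 8.0, 6.0))   # True
print(dr_is_profitable(25.0, 8.0, 10.0))  # False
```

Under this condition, the reward paid to a consumer for shifting consumption can be at most half the price gap before the adjustment stops paying off.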

Next, to maximize the economic value, the researchers developed a control method for demand response based on model predictive control, in which the optimal control strategy is found by prediction via a mathematical model. In simulation, the effectiveness of the proposed method was demonstrated using data from the Japan Electric Power Exchange as forecast values of the power generation cost and power consumption.
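A minimal receding-horizon sketch conveys the flavor of the approach. This is not the authors' controller: it assumes a hypothetical flat baseline load, a made-up per-unit adjustment cost, and perfect price forecasts over a short horizon, and it reuses the profitability condition above as the shift rule:

```python
def receding_horizon(prices, horizon, adj_cost, shift=0.1):
    """At each time step, look ahead `horizon` slots and move a small amount
    of load from the priciest upcoming slot to the cheapest one, but only
    when the price gap beats twice the adjustment cost."""
    load = [1.0] * len(prices)  # flat baseline consumption
    for t in range(len(prices)):
        window = range(t, min(t + horizon, len(prices)))
        hi = max(window, key=lambda k: prices[k])
        lo = min(window, key=lambda k: prices[k])
        if prices[hi] - prices[lo] > 2 * adj_cost:
            load[hi] -= shift  # conserve total consumption:
            load[lo] += shift  # what leaves one slot enters another
    return load

prices = [10.0, 50.0, 10.0, 50.0]  # hypothetical generation-cost forecast
load = receding_horizon(prices, horizon=2, adj_cost=5.0)
# Total consumption is conserved, while generation cost drops vs. baseline.
```

A real model predictive controller would re-solve an optimization over the whole horizon at every step and update the forecast from new data; the sketch keeps only the receding-horizon structure and the price-gap condition.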

Credit: 
Japan Science and Technology Agency

Dramatic change in ancient nomad diets coincides with expansion of networks across Eurasia

image: Map of millet and wheat/barley consumption over time: a) 1000-500 cal BC, b) 500-200 cal BC, and c) 200 BC-AD 400.

Image: 
I. Reese and A. R. Ventresca Miller, 2017

A meta-analysis of dietary information recorded in the bones of ancient animals and humans recovered from sites scattered across the Eurasian steppe, from the Caucasus region to Mongolia, demonstrates that pastoralists spread domesticated crops across the steppe through their trade and social networks. Researchers from Kiel University sifted through previously published stable isotopic data and applied new quantitative analyses that calibrate human dietary intake against environmental inputs. The results have allowed them to better isolate the timing of the incorporation of agricultural products into the diets of pastoral nomads and, crucially, link burgeoning socio-political networks to this dietary transformation.

Through a big data project that explored over a thousand stable isotope data points, researchers were able to find evidence for an early transition to agriculture - based on dietary intake across Eurasia. "Our understanding of the pace of crop transmission across the Eurasian steppe has been surprisingly unclear due in part to a focus on the excavation of cemeteries, rather than settlements where people threw out their food," says Alicia Ventresca Miller, lead author, formerly of Kiel University and currently at the Max Planck Institute for the Science of Human History. "Even when settlement sites are excavated, the preservation of carbonized seed remains is often poor. This is what makes stable isotope analyses of human remains from this region so valuable - it provides direct insights into the dietary dynamics of ancient pastoralists who inhabited diverse environments."

Millet spreads across the Eurasian steppe

Millet, originally domesticated in China, appears to have been occasionally consumed at low levels by pastoralists inhabiting the far-flung regions of Siberia and southeastern Kazakhstan, possibly as early as the late third millennium BC. This initial uptake of millet coincided with the expansion of trans-regional networks across the steppe, when objects and ideas were first regularly exchanged over long distances.

However, it was not until a thousand years later that millet became a regular feature of pastoralist diets. This timing coincides with the intensification of complex political structures at the transition to the Iron Age. Burgeoning socio-political confederations drove a marked increase in the exchange of costly prestige goods, which strengthened political networks - and facilitated the transfer of cultigens.

Wheat and Barley in the Trans-Urals

Despite taking part in these political networks, groups in the Trans-Urals invested in wheat and barley farming rather than millet. A dietary focus on wheat and barley may have been due to different farming techniques, greater water availability, or a higher value placed on these cultigens. "Our research suggests that cultigens were converted from a rare luxury during the Bronze Age to a medium demarcating elite participation in political networks during the Iron Age," states Cheryl Makarewicz of Kiel University.

Regional variation in millet consumption

While herding of livestock was widespread, not all regions adopted millet. In southwest Siberia, dietary intake was focused on pastoral animal products and locally available wild plants and fish. In contrast, the delayed adoption of millet by populations in Mongolia during the Late Iron Age coincides with the rise of the Xiongnu nomadic empire. "This is particularly interesting because it suggests that communities in Mongolia and Siberia opted out of the transition to millet agriculture, while continuing to engage with neighboring groups," explains Ventresca Miller.

This study shows the great potential of using the available isotope record to provide evidence for human dietary intake in areas where paleobotany is understudied. Further research should clarify which grains, for example broomcorn or foxtail millet, were fundamental to the shift in dietary intake, and how networks of exchange linked different regions.

Credit: 
Kiel University

'Green Revolution' in RNAi tools and therapeutics

During the "Green Revolution" that began in the 1950s, the extensive cultivation of dwarf rice varieties helped solve the food problem in developing countries. At present, chronic infection with hepatitis B virus (HBV) is a major public health problem. According to the World Health Organization (WHO), an estimated 257 million people are chronically infected with hepatitis B. Recent studies have shown that the expression level of the hepatitis B virus surface antigen gene (HBsAg) is correlated with the occurrence of hepatocellular carcinoma (HCC) and with fibrosis severity in transgenic mice and HBV-infected patients. HBsAg has therefore become a rising target for drug design in the treatment of hepatitis B.

In a study recently published in Biomaterials, Dr. Zhi Hong and Dr. Chen-Yu Zhang from Nanjing University and their collaborators report that small silencing RNA sequences against HBsAg, generated in edible lettuce (Lactuca sativa L.), can specifically bind their target and inhibit gene expression in p21-HBsAg knock-in transgenic mice at a relatively low dose compared to synthetic siRNAs. More importantly, continuous administration of the amiRNA-containing decoction relieves liver injury in the transgenic mice without additional adverse effects, even after 15 months of treatment.

This work utilizes the plant's endogenous microRNA biogenesis machinery to produce methylated short interfering sequences, increasing the stability of the target siRNAs while reducing the cost of production. It therefore not only provides an affordable treatment strategy for chronic hepatitis B patients in developing countries, but also reduces the required dose of RNAi drugs, minimizing the potential side effects of RNAi therapy and allowing administration over a relatively long period or in conjunction with other antiviral drugs.

For patients in the immune-tolerant phase, or those resistant to conventional antiviral treatment, this RNAi-based therapy may effectively reduce the risk of liver injury through daily consumption of a vegetable decoction containing HBsAg-silencing RNAs.

Taking a longer view, this method may also be applicable to the treatment of hepatitis C and other infectious diseases, given that engineered plants offer an effective, less toxic and financially viable way to produce short interfering sequences. Plant-derived siRNAs may well bring a "green revolution" in RNAi tools and therapeutics.

Looking back, the Green Revolution brought us a richer food supply. At the same time, we should recognize that our daily food is also changing us, and the small RNAs we take in from food may play an important role in that process.

Credit: 
Nanjing University School of Life Sciences

What's your poison? Scrupulous scorpions tailor venom to target

Replenishing venom takes time and energy - so it pays to be stingy with stings.

According to researchers at the Australian Institute of Tropical Health and Medicine, scorpions adapt their bodies, their behavior and even the composition of their venom for efficient control of prey and predators.

Writing in Frontiers in Ecology and Evolution, they say it's not just the size of the stinger, but also how it's used that matters.

Stingy stingers

"Scorpions can store only a limited volume of venom, which takes time and energy to replenish after use," says lead author Edward Evans. "Meanwhile the scorpion has a reduced capacity to capture prey or defend against predators, so the costs of venom use are twofold."

As a result, over 400 million years of evolution, scorpions have developed a variety of strategies to minimize venom use.

The most obvious of these is to avoid using venom at all.

"Research has shown the lighter, faster male specimens of one species are more likely to flee from danger compared to the heavier-bodied females, rather than expend energy using toxins," notes Evans. "Others - particularly burrowing species - depend instead on their large claws or 'pedipalps', and have a small, seldom-used stinging apparatus."

When immobility, threat or lively prey forces venom use, scorpions can adjust the volume they inject - both within each sting and through the application of multiple stings.

"Scorpions can hold prey in their pedipalps and judiciously apply stings, just until it stops struggling."

At the other extreme, when the survival stakes are high some species abandon precision and spray their venom through the air.

"Spraying venom defensively is potentially wasteful but can avoid dangerous close contact with predators such as grasshopper mice, which disarm scorpions by biting off their tails."

Venom versatility

Scorpions can also tailor the composition of their venom to a target - both on the fly, and more precisely over weeks of exposure.

For starters, any given sting has three levels: dry, prevenom or venom.

As a light deterrent, a scorpion may sting with no venom at all. A 'wet' sting begins with clear, salty prevenom - essentially a "stun" setting - and might go no further.

"Research on prevenom suggests it contains an extremely high potassium salt concentration, which may cause quick paralysis in insects and pain in vertebrates," says Evans. "It seems to regenerate quickly and presumably at a low metabolic cost."

If things get heavy, the scorpion can go on to inject or spray a thick, milky, protein-rich venom.

"Venom injection is reserved for more active, persistent or sizeable targets. It is more toxic, but once spent can take weeks to replenish - leaving the scorpion vulnerable and with limited prey options."

Recent work by the James Cook University group suggests that scorpions can make more personalized changes to venom composition, in response to extended periods of predator exposure.

"Repeated encounters with a surrogate vertebrate predator - a taxidermied mouse - over a six week period led the scorpion Hormurus waigiensis to produce a higher relative abundance of a particular group of toxins, including some with vertebrate predator-specific activity," explains senior author Dr. David Wilson.

How exactly the change occurs remains unknown, however.

"Future work is needed to investigate how far observed changes in venom composition and use are due to adaptive responses - and to identify the precise stimuli for change," Wilson and Evans conclude.

Credit: 
Frontiers

The common wisdom about marketing cocreated innovations is wrong

Researchers from the University of Hong Kong, University of Tennessee, University of British Columbia, and Arizona State University published a new paper in the Journal of Marketing that seeks the optimal strategy for communicating the value of cocreated innovations in order to drive consumer purchase and acceptance in the marketplace.

The study forthcoming in the July issue of the Journal of Marketing, titled "Successfully Communicating a Cocreated Innovation," is authored by Helen Si Wang, Charles Noble, Darren Dahl, and Sungho Park.

Online platforms make it easy and inexpensive for companies to run contests, gather customers' ideas, and commercialize the most promising ideas into finished products. This is one reason cocreation has been adopted as a key innovation strategy by nearly 78% of large companies. Thus far, however, the strategy has yielded disappointing results. One of the most heralded cocreation firms, Quirky, withdrew 70% of its 500-plus cocreated innovations between 2009 and 2014 because of stagnant sales and filed for bankruptcy thereafter. And at Apple's App Store, 80% of apps do not generate enough revenue to survive for more than a few months.

Is the cocreation model a legitimate strategy to drive innovation and adoption of the resulting products--or is it flawed by design? Marketing communications is often regarded as one of the major influences on innovation adoption, and creators typically take two approaches to marketing new products. They either share a consumer creation or genesis story (also called user-generated content, or UGC) or use more traditional, firm-generated content (FGC) that often stresses a product's features and benefits. This research shows that it is wise to combine these strategies, but with an interesting twist on conventional advertising wisdom.

When sharing a genesis story, creators tend to take one of two tacks: 1) an approach-oriented message about how they achieved new or desired outcomes; or 2) an avoidance-oriented message that promises to help users avoid unpleasant or undesirable outcomes they themselves experienced. Advertising best practice stresses that a firm should use consistent messaging to communicate with customers.

This practice does not hold up to scrutiny in the area of cocreated products, however. Instead, the researchers found that a mixed or "mismatch" communication strategy works best to speed individual and mass consumer adoption. A mismatch communication strategy means that if the product creator's claim is approach-oriented, the firm should use an avoidance-oriented message, and vice versa.

As an example, for the cocreated Starbucks® Doubleshot Energy Mexican Mocha Coffee Drink, the creators' authentic message was approach-oriented and focused on "Embracing winter... fueling me with all of the winter warmth and energy I want." When the researchers combined this with an avoidance firm message, "What the world can't miss this winter... say bye-bye to the winter chill and blues" to make a mismatch strategy, adoption levels increased compared to when the approach firm message was used--"What the world desires this winter... makes you embrace all the winter warmth and joy."

Key findings from five studies include:

Products using a mismatch strategy were adopted 56.1% of the time compared to 26.3% of those using matching communication strategies.

This approach works best with low-expertise consumers who reference their own life stories when buying and using goods. High-expertise consumers are less motivated by this approach.

Firms using a mismatch communication strategy are 10% more likely to experience early takeoff, which is critical to the mass adoption of the innovation.

"This research offers important implications for managers and companies seeking to leverage the creative power of the crowd in developing innovations," says Wang. Noble adds, "Our findings challenge the conventional wisdom in many marketing campaigns. If you want takeoff, mismatch your message with the innovator creator's message."

Credit: 
American Marketing Association

Cyber of the fittest: Researchers develop first cyber agility framework to measure attacks

image: Cyber of the fittest: UTSA develops 1st framework to measure the evolution of cyber attacks.

Image: 
Photo courtesy of UTSA

(June 7, 2019) -- For more than a year, GozNym, a gang of five Russian cyber criminals, stole login credentials and emptied bank accounts of unsuspecting Americans. To detect and quickly respond to escalating cyber attacks like these, researchers at The University of Texas at San Antonio (UTSA) have developed the first framework to score the agility of cyber attackers and defenders. The cyber agility project was funded by the Army Research Office.

“Cyber agility isn’t just about patching a security hole, it’s about understanding what happens over time. Sometimes when you protect one vulnerability, you expose yourself to 10 others,” said computer science alumnus Jose Mireles ’17, who now works for the U.S. Department of Defense and co-developed this first known framework as part of his UTSA master’s thesis. “In car crashes, we understand how to test for safety using the rules of physics. It is much harder to quantify cybersecurity because scientists have yet to figure out what are the rules of cybersecurity. Having formal metrics and measurement to understand the attacks that occur will benefit a wide range of cyber professionals.”

To develop a quantifiable framework, Mireles collaborated with fellow UTSA student Eric Ficke, researchers at Virginia Tech, U.S. Air Force Research Laboratory, and the U.S. Army Combat Capabilities Development Command Army Research Laboratory (CCDC ARL). The project was conducted under the supervision of UTSA Professor Shouhuai Xu, who serves as the director of the UTSA Laboratory for Cybersecurity Dynamics.

Together, they used a honeypot—a computer system that lures real cyber-attacks—to attract and analyze malicious traffic according to time and effectiveness. As both the attackers and the defenders created new techniques, the researchers were able to better understand how a series of engagements transformed into an adaptive, responsive and agile pattern or what they called an evolution generation.

The framework proposed by the researchers will help government and industry organizations visualize how well they out-maneuver attacks. This groundbreaking work will be published in an upcoming issue of IEEE Transactions on Information Forensics and Security, a top cybersecurity journal.

“The cyber agility framework is the first of its kind and allows cyber defenders to test out numerous and varied responses to an attack,” said Xu. “This is an outstanding piece of work as it will shape the investigation and practice of cyber agility for the many years to come.”

"The DoD and US Army recognize that the Cyber domain is as important a battlefront as Ground, Air and Sea," said Purush Iyer, Ph.D. division chief, network sciences at Army Research Office, an element of CCDC ARL. "Being able to predict what the adversaries will likely do provides opportunities to protect and to launch countermeasures."

Mireles added, “A picture or graph in this case is really worth more than 1,000 words. Using our framework, security professionals will recognize if they’re getting beaten or doing a good job against an attacker.”

UTSA is home to the nation's top cybersecurity program, an interdisciplinary approach that spans three colleges: the College of Business, College of Engineering and College of Sciences. Research centers and outreach programs provide UTSA students and faculty with additional opportunities to explore the various facets of this high-demand and ever-changing field.

The Department of Computer Science, housed in the UTSA College of Sciences, offers bachelor’s, master’s and doctoral degree programs that support more than 1,360 undergraduate students and 68 graduate students. Its major research units include the UTSA Institute for Cyber Security, which operates the FlexCloud and FlexFarm laboratories dedicated to both basic and applied cybersecurity research, and the UTSA Center for Infrastructure Assurance and Security (CIAS), which focuses on the cybersecurity maturity of cities and communities while conducting national cyber defense competitions for high school and college students.

San Antonio is home to one of the largest concentrations of cybersecurity experts and industry leaders outside Washington, D.C., which uniquely positions the city and UTSA to lead the nation in cybersecurity research and workforce development.

Credit: 
University of Texas at San Antonio

Exposure to videos of race-based violence online may be spurring mental-health issues

Social media-based movements like #BlackLivesMatter and #SayHerName have taken off over the past decade as a response to highly scrutinized police shootings of African American people. Recordings from body cameras or bystanders are frequently posted online and shared by activists and others as a way to press for police accountability.

But those videos may also have deleterious effects on the mental health of young members of the same racial communities as the victims in those shootings, suggests a new study published today in the Journal of Adolescent Health.

Previous research has linked exposure to violent media with trauma, and other research has connected actual police killings in a given region to poor mental health in same-race communities. The study authors say this study is the first to explore the relationship between repeated youth exposure to traumatic events online and mental health.

"Increased exposure to traumatic events online, whether they involve members of one's own racial-ethnic group or those of other racial-ethnic groups, is related to poor mental health outcomes," said lead author Brendesha Tynes, an associate professor of education and psychology at the USC Rossier School of Education.

Data were collected from a nationally representative sample of 302 Black and Hispanic adolescents ages 11-19. African American and Hispanic participants were asked about police shootings, immigrants being detained by federal border agents, and beatings.

Study participants reported the frequency of their exposure to traumatic events online, depressive symptoms, PTSD symptoms, and other demographic information.

Though not establishing causality, the researchers' findings showed that Hispanic participants reported significantly more depressive symptoms than African American participants, and female participants reported significantly more depressive and PTSD symptoms than male participants. This was true for teens who viewed violence involving both African Americans and Hispanic individuals.

"The study shows that the increase in depressive and PTSD symptoms crosses racial and ethnic lines - in other words, the mental health of both African American and Latinx teens may be linked to viewing any racial violence, not just that which depicts their own racial or ethnic group," Tynes said.

Pew Internet Research's most recent (2018) survey of adolescent technology usage shows that 45 percent of youth report they are online "almost constantly."

Given such high internet use, the researchers suggested that mental health professionals and educators have conversations with young people of color about their exposure to racial violence online, and that those professionals should also take steps to improve their own cultural competency.

"The videos of these injustices should be public and people should continue to record and post them," Tynes said. "The findings show that mental health problems are exacerbated with exposure, so viewers should be mindful of their viewing practices, auto-play settings and how they think about the event after they've seen it. They should exhaust all technological, personal and community resources to protect themselves and thrive in the face of these seemingly ubiquitous events."

Credit: 
University of Southern California

What if dark matter is lighter? Report calls for small experiments to broaden the hunt

image: Junsong Lin, an affiliate in Berkeley Lab's Physics Division and UC Berkeley postdoctoral researcher, holds components of a low-mass dark matter detector that is now in development at UC Berkeley.

Image: 
Marilyn Chung/Berkeley Lab

The search for dark matter is expanding. And going small.

While dark matter abounds in the universe - it is by far the most common form of matter, making up about 85 percent of the universe's total matter - it also hides in plain sight. We don't yet know what it's made of, though we can witness its gravitational pull on known matter.

Theorized weakly interacting massive particles, or WIMPs, have been among the cast of likely suspects for dark matter, but they haven't yet shown up where scientists had expected them.

Casting many small nets

So scientists are now redoubling their efforts by designing new and nimble experiments (https://newscenter.lbl.gov/2019/06/10/small-dark-matter-experiments-broaden-hunt/) that can look for dark matter in previously unexplored ranges of particle mass and energy, and using previously untested methods. The new approach, rather than relying on a few large experiments' "nets" to try to snare one type of dark matter, is akin to casting many smaller nets with much finer mesh.

Dark matter could be much "lighter," or lower in mass and slighter in energy, than previously thought. It could be composed of theoretical, wavelike ultralight particles known as axions. It could be populated by a wild kingdom filled with many species of as-yet-undiscovered particles. And it may not be composed of particles at all.

Momentum has been building for low-mass dark matter experiments, which could expand our current understanding of the makeup of matter as embodied in the Standard Model of particle physics, noted Kathryn Zurek, a senior scientist and theoretical physicist at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab).

Zurek, who is also affiliated with UC Berkeley, has been a pioneer in proposing low-mass dark matter theories and possible ways to detect it.

"What experimental evidence do we have for physics beyond the Standard Model? Dark matter is one of the best ones," she said. "There are these theoretical ideas that have been around for a decade or so," Zurek added, and new developments in technology - such as new advances in quantum sensors and detector materials - have also helped to drive the impetus for new experiments.

"The field has matured and blossomed over the last decade. It's become mainstream - this is no longer the fringe," she said. Low-mass dark matter discussions have moved from small conferences and workshops to a component of the overall strategy in searching for dark matter.

She noted that Berkeley Lab and UC Berkeley, with their particular expertise in dark matter theories, experiments, and cutting-edge detector and target R&D, are poised to make a big impact in this emerging area of the hunt for dark matter.

Report highlights need to search for "light," low-mass dark matter

Dark matter-related research by Zurek and other Berkeley Lab researchers is highlighted in a DOE report, "Basic Research Needs for Dark Matter Small Projects New Initiatives", based on an October 2018 High Energy Physics Workshop on Dark Matter. Zurek and Dan McKinsey, a Berkeley Lab faculty senior scientist and UC Berkeley physics professor, served as co-leads on a workshop panel focused on dark matter direct-detection techniques, and this panel contributed to the report.

The report proposes a focus on small-scale experiments - with project costs ranging from $2 million to $15 million - to search for dark matter particles that have a mass smaller than a proton's. Protons are subatomic particles within every atomic nucleus, each weighing about 1,836 times more than an electron.

This new, lower-mass search effort will have "the overarching goal of finally understanding the nature of the dark matter of the universe," the report states.

In a related effort, the U.S. Department of Energy this year solicited proposals for new dark matter experiments, with a May 30 deadline, and Berkeley Lab participated in the proposal process, McKinsey said.

"Berkeley is a dark matter mecca" that is primed for participating in this expanded search, he said. McKinsey has been a participant in large direct-detection dark matter experiments including LUX and LUX-ZEPLIN and is also working on low-mass dark matter detection techniques.

3 priorities in the expanded search

The report highlights three major priority research directions in searching for low-mass dark matter that "are needed to achieve broad sensitivity and ... to reach different key milestones":

1. Create and detect dark matter particles below the proton mass and associated forces, leveraging DOE accelerators that produce beams of energetic particles. Such experiments could potentially help us understand the origins of dark matter and explore its interactions with ordinary matter, the report states.

2. Detect individual galactic dark matter particles - down to a mass measuring about 1 trillion times smaller than that of a proton - through interactions with advanced, ultrasensitive detectors. The report notes that there are already underground experimental areas and equipment that could be used in support of these new experiments.

3. Detect galactic dark matter waves using advanced, ultrasensitive detectors with emphasis on the so-called QCD (quantum chromodynamics) axion. Advances in theory and technology now allow scientists to probe for the existence of this type of axion-based dark matter across the entire spectrum of its expected ultralight mass range, providing "a glimpse into the earliest moments in the origin of the universe and the laws of nature at ultrahigh energies and temperatures," the report states.

This axion, if it exists, could also help to explain properties associated with the universe's strong force, which is responsible for holding most matter together - it binds particles together in an atom's nucleus, for example.

Searches for the traditional WIMP form of dark matter have increased in sensitivity about 1,000-fold in the past decade.

Berkeley scientists are building prototype experiments

Berkeley Lab and UC Berkeley researchers will at first focus on liquid helium and gallium arsenide crystals in searching for low-mass dark matter particle interactions in prototype laboratory experiments now in development at UC Berkeley.

"Materials development is also part of the story, and also thinking about different types of excitations" in detector materials, Zurek said.

Besides liquid helium and gallium arsenide, the materials that could be used to detect dark matter particles are diverse, "and the structures in them are going to allow you to couple to different dark matter candidates," she said. "I think target diversity is extremely important."

The goal of these experiments, which are expected to begin within the next few months, is to develop the technology and techniques so that they can be scaled up for deep-underground experiments at other sites that will provide additional shielding from the natural shower of particle "noise" raining down from the sun and other sources.

McKinsey, who is working on the prototype experiments at UC Berkeley, said that the liquid helium experiment there will seek out any signs of dark matter particles causing nuclear recoil - a process through which a particle interaction gives the nucleus of an atom a slight jolt that researchers hope can be amplified and detected.

One of the experiments seeks to measure excitations from dark matter interactions that lead to the measurable evaporation of a single helium atom.

"If a dark matter particle scatters (on liquid helium), you get a blob of excitation," McKinsey said. "You could get millions of excitations on the surface - you get a big heat signal."

He noted that atoms in liquid helium and crystals of gallium arsenide have properties that allow them to light up or "scintillate" in particle interactions. Researchers will at first use more conventional light detectors, known as photomultiplier tubes, and then move to more sensitive, next-generation detectors.

"Basically, over the next year we will be studying light signals and heat signals," McKinsey said. "The ratio of heat to light will give us an idea what each event is."

These early investigations will determine whether the tested techniques can be effective in low-mass dark matter detection at other sites that provide a lower-noise environment. "We think this will allow us to probe much lower energy thresholds," he said.

New ideas enabled by new technology

The report also notes a wide variety of other approaches to the search for low-mass dark matter.

"There are tons of different, cool technologies out there" even beyond those covered in the report that are using or proposing different ways to find low-mass dark matter, McKinsey said. Some of them rely on the measurement of a single particle of light, called a photon, while others rely on signals from a single atomic nucleus or an electron, or a very slight collective vibration in atoms known as a phonon.

Rather than ranking existing proposals, the report is intended to "marry the scientific justification to the possibilities and practicalities. We have motivation because we have ideas and we have the technology. That's what's exciting."

He added, "Physics is the art of the possible."

Credit: 
DOE/Lawrence Berkeley National Laboratory

Last-ditch attempt to warn of coalmine harm

image: Scientists warn the Doongmabulla Springs Complex could be permanently damaged if the mine goes ahead.

Image: 
Coast and Country

Groundwater experts from around Australia have repeated calls for further investigations into the potential effects on heritage groundwater reserves in central Queensland if the giant Adani Carmichael coalmine gets the final regulatory go-ahead.

Concerns that the ancient Doongmabulla Springs face a 'reasonable threat of extinction' from Adani's proposed Galilee Basin coalmine are raised in a new position paper, which echoes previous research by CSIRO and Geoscience Australia.

The Queensland Government is due to rule on the groundwater hurdle this week after clearing the way to another environmental concern, supporting Adani's proposed management plan for the endangered black-throated finch.

Experts from Flinders University, RMIT, Monash and La Trobe universities say their report, 'Deficiencies in the scientific assessment of the Carmichael Mine impacts to the Doongmabulla Springs' - now before the Queensland Government - highlights problems with Adani's own claims that the springs are safeguarded by "an impervious layer, restricting water from flowing between the underground aquifers".

"Adani has not properly examined the link between the mine's groundwater drawdown and impacts to the Doongmabulla Springs, which is a fundamental requirement of the Carmichael mine's approvals," says Flinders University Professor of Hydrogeology Adrian Werner, a founding member of the National Centre for Groundwater Research and Training.

Instead, Professor Werner, together with Flinders Associate Professor Andy Love, Dr Eddie Banks and Dr Dylan Irvine, Associate Professor Matthew Currell from RMIT University, Professor Ian Cartwright from Monash University and Associate Professor John Webb from La Trobe University, warns the springs face a "plausible threat of extinction".

"Six years of advice from experts that the science is flawed does not seem to have overcome critical shortcomings with the science that have persisted despite several iterations of Adani's environmental management plans," says Professor Werner.

"With the deadline for approval approaching, we are compelled to reiterate concerns that flaws in Adani's scientific methods, modelling results, and proposed 'adaptive management' approach have the potential to seriously mislead decision-makers," he says, pointing to the 2013 Independent Expert Scientific Committee report, the Land Court case of 2014-15 and this year's CSIRO review.

Professor Werner says: "We hope that our report assists the Queensland Government by highlighting the significant risk that the Carmichael Mine will cause the Doongmabulla Springs to become extinct, and will impact other groundwater-dependent ecosystems and water users to a greater degree than has so far been suggested by Adani."

The report pinpoints four areas where Adani's investigation and environmental management strategies do "not stack up against the science":

Adani appears likely to have significantly underestimated future impacts to the Doongmabulla Springs Complex.

Should the Carmichael Mine cause springs within the Doongmabulla complex to cease flowing, the impact could be permanent.

Adani's safeguard against the impacts, namely 'adaptive management', is unsuitable and unlikely to protect the springs from extinction.

Cumulative impacts to the Springs that may result from other mining activities in the Galilee Basin have not been adequately considered.

Credit: 
Flinders University

Engineers use graph networks to accurately predict properties of molecules and crystals

image: This is a schematic illustration of MEGNet models.

Image: 
Chi Chen/Materials Virtual Lab

Nanoengineers at the University of California San Diego have developed new deep learning models that can accurately predict the properties of molecules and crystals. By enabling almost instantaneous property predictions, these deep learning models provide researchers the means to rapidly scan the nearly-infinite universe of compounds to discover potentially transformative materials for various technological applications, such as high-energy-density Li-ion batteries, warm-white LEDs, and better photovoltaics.

To construct their models, a team led by nanoengineering professor Shyue Ping Ong at the UC San Diego Jacobs School of Engineering used a new deep learning framework called graph networks, developed by Google DeepMind, the brains behind AlphaGo and AlphaZero. Graph networks have the potential to expand the capabilities of existing AI technology to perform complicated learning and reasoning tasks with limited experience and knowledge--something that humans are good at.

For materials scientists like Ong, graph networks offer a natural way to represent bonding relationships between atoms in a molecule or crystal, and enable computers to learn how these relationships relate to the materials' chemical and physical properties.

The new graph network-based models, which Ong's team dubbed MatErials Graph Network (MEGNet) models, outperformed the state of the art in predicting 11 out of 13 properties for the 133,000 molecules in the QM9 data set. The team also trained the MEGNet models on about 60,000 crystals in the Materials Project. The models outperformed prior machine learning models in predicting the formation energies, band gaps and elastic moduli of crystals.

The team also demonstrated two approaches to overcome data limitations in materials science and chemistry. First, the team showed that graph networks can be used to unify multiple free energy models, resulting in a multi-fold increase in training data. Second, they showed that their MEGNet models can effectively learn relationships between elements in the periodic table. This machine-learned information from a property model trained on a large data set can then be transferred to improve the training and accuracy of property models with smaller amounts of data--this concept is known in machine learning as transfer learning.
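The core idea behind graph-network property models like these can be illustrated with a toy message-passing step: atoms become nodes, bonds become edges, node states are updated from their neighbours, and a graph-level "readout" pools the node states into a single descriptor for property prediction. The sketch below is purely illustrative (it is not the MEGNet implementation; the atom and bond features, update rule, and example molecule are all hypothetical):

```python
# Toy sketch of graph-network message passing, in the spirit of models
# like MEGNet. NOT the actual MEGNet code: features, update rules, and
# the example molecule below are illustrative assumptions only.

def message_pass(node_feats, edges):
    """One round of message passing: each node's new state is its own
    feature plus the mean of neighbour features, each weighted by the
    scalar bond feature on the connecting edge."""
    n = len(node_feats)
    messages = [0.0] * n
    counts = [0] * n
    for i, j, bond in edges:  # undirected edge i--j with a bond feature
        messages[i] += bond * node_feats[j]
        messages[j] += bond * node_feats[i]
        counts[i] += 1
        counts[j] += 1
    return [x + (m / c if c else 0.0)
            for x, m, c in zip(node_feats, messages, counts)]

def readout(node_feats):
    """Graph-level readout: mean-pool node states into one descriptor
    that a downstream regressor could map to a property value."""
    return sum(node_feats) / len(node_feats)

# A hypothetical 3-atom "molecule": scalar atom features and two bonds.
atoms = [1.0, 6.0, 8.0]              # e.g. H, C, O encoded as atomic numbers
bonds = [(0, 1, 1.0), (1, 2, 2.0)]   # (atom_i, atom_j, bond order)

updated = message_pass(atoms, bonds)
descriptor = readout(updated)
```

Real models stack several such update rounds with learned neural-network update functions (and, in MEGNet's case, a global state for the whole structure), but the graph-in, descriptor-out flow is the same.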

Credit: 
University of California - San Diego

Marijuana and fertility: Five things to know

For patients who smoke marijuana and their physicians, "Five things to know about ... marijuana and fertility" provides useful information for people who may want to conceive. The practice article is published in CMAJ (Canadian Medical Association Journal).

Five things to know about marijuana and fertility:

1. The active ingredient in marijuana, tetrahydrocannabinol (THC), acts on the receptors found in the hypothalamus, pituitary and internal reproductive organs in both males and females.

2. Marijuana use can decrease sperm count. Smoking marijuana more than once a week was associated with a 29% reduction in sperm count in one study.

3. Marijuana may delay or prevent ovulation. In a small study, ovulation was delayed in women who smoked marijuana more than 3 times in the 3 months before the study.

4. Marijuana may affect the ability to conceive in couples with subfertility or infertility but does not appear to affect couples without fertility issues.

5. More, and better quality, research is needed into the effects of marijuana on fertility.

Credit: 
Canadian Medical Association Journal

How to improve care for patients with disabilities? We need more providers like them

image: Bonnielin Swenor, Ph.D., M.P.H.

Image: 
Johns Hopkins Medicine

It is common for patients to prefer seeking care from a clinician similar to them -- such as of the same gender, ethnicity and culture -- who can relate to their experiences and make treatment plans that work better for their lives. To meet these preferences and improve quality of care, a clinician workforce that matches the diversity of the general population is needed. However, when it comes to patients with disabilities, the chance of finding a clinician "like them" is extremely low, which may make patients reluctant to seek care or follow prescribed interventions and treatments. Meanwhile, without enough scientists with disabilities bringing their perspectives to patient-centered research, the ability to improve care for patients with disabilities is limited.

Why is the representation of people with disabilities so limited in the biomedical workforce?

Bonnielin Swenor, Ph.D., M.P.H., associate professor of ophthalmology at the Johns Hopkins Wilmer Eye Institute and associate professor of epidemiology at the Johns Hopkins Bloomberg School of Public Health, is working to solve this disparity.

Living with low vision herself, Swenor experiences difficulties in many aspects of her life, but devotes her time to researching how to help patients like herself, and assuring those patients that there are ways to overcome the hardships and pursue their goals. Swenor sees herself as more a patient than a researcher, and uses her unique perspective to formulate patient-centered research questions to bring better care to people with visual impairment. She believes that more people with disabilities like her are needed in the biomedical workforce.

In a new editorial published on May 30 in the New England Journal of Medicine, Swenor and Lisa Meeks, a collaborator from University of Michigan Medical School, address barriers to an inclusive workforce and propose a roadmap to guide academic medical institutions toward creating a work environment more inclusive for people with disabilities.

"Although more institutions are embracing diversity and inclusion, people with disabilities still face barriers in pursuing and getting support in their careers," says Swenor. "We are providing employers with recommendations to enhance inclusion of persons with disabilities in these settings."

In the NEJM piece, Swenor and Meeks recommend that academic medical centers include in their diversity efforts people with disabilities, develop centralized ways to pay for accommodations that might be required and other actions that would encourage more students with disabilities to pursue careers in medicine.

Credit: 
Johns Hopkins Medicine

Courts' sentencing of Hispanic defendants differs by destination, citizenship, year

In the United States, heightened hostility toward Hispanic immigrants is common in contemporary public discourse, as are fears about Hispanic immigrants and crime. We know that the treatment of Hispanic immigrants differs depending on whether they come to areas of the United States that have historically welcomed Hispanic immigrants or to new destinations that have recently started welcoming Hispanics.

A new study examined whether federal courts in areas where Hispanics have historically immigrated handed out sentences differently than federal courts in areas that are new destinations for Hispanic immigration, and how those sentences differed by citizenship. It found that disparities were lowest in areas that have traditionally welcomed Hispanic immigrants and where Hispanic immigrants were numerous, and greatest in areas with few new Hispanic immigrants and small Hispanic immigrant populations. Disparities were also substantial in areas that were new immigrant destinations in the early 2000s.

The study, by researchers at the Pennsylvania State University, appears in Justice Quarterly, a publication of the Academy of Criminal Justice Sciences.

"Our research points to how perceptions of immigration and immigrants might shape ethnic and citizenship disparities in criminal sentencing, and raises questions about the fairness of criminal punishment," notes Jeffrey Ulmer, professor of sociology and criminology at Penn State, who led the study.

Prior to the 1990s, Hispanics immigrated primarily to California, Chicago, Florida, New York City, and Texas (now considered traditional destinations). In the 1990s and throughout the 2000s, they immigrated to new destinations throughout the United States, such as Georgia, Indiana, Kansas, North Carolina, Oregon, and Pennsylvania. To address these changes, this study looked at how different immigration destinations might foster different patterns of disparity among Hispanic defendants in federal courts.

The researchers examined categories of Hispanic immigration from 1990 to 2000, and from 2000 to 2010, comparing federal sentencing of Hispanic defendants in these areas as well as in areas that are not typically destinations for Hispanic immigrants. The study also looked at sentencing differences between Hispanics who were U.S. citizens and those who were not, and whether citizens were documented or undocumented. The analysis focused on sentencing differences that were not explained by the influence of many legally relevant factors like type and severity of crime, criminal history, factors related to sentencing guidelines, and other characteristics of defendants.

The study used data from the U.S. Sentencing Commission for 90 federal districts, as well as county-level data from the U.S. Census, American Community Survey, and Uniform Crime Reports, aggregated to the federal court district level. Researchers excluded cases in which the most serious charge was an immigration violation because these are handled differently than other crimes. The study took into consideration the poverty rate of Hispanics by district, caseloads in immigration courts, and overall crime rates for eight crimes listed by the Federal Bureau of Investigation.

In traditional destinations, the study found little or no disparity between Hispanics and non-Hispanics in federal sentencing from 1990 to 2000. But in new destinations and in areas with little Hispanic immigration, Hispanics who were U.S. citizens and noncitizens received longer sentences in 2000.

Moreover, by the early 2010s, in new destinations, federal courts didn't sentence Hispanics who were U.S. citizens significantly differently than federal courts in traditional destinations. However, courts in new destinations sentenced Hispanics who were noncitizens more harshly (e.g., they were given longer average sentences), especially those who were undocumented. This was also true in areas with comparatively little Hispanic immigration by 2010.

The authors note that their study relied on post-conviction sentencing information and did not include information on pre-conviction processes or decisions, such as arrests and initial charges. Thus, they did not consider prosecutorial discretion in charging, plea bargaining, or other pre-conviction decisions.

"Our findings suggest that harsher sentences for non-immigration federal crimes among Hispanic non-citizens, especially those undocumented, are shaped by recent historical contexts of Hispanic immigration," according to Brandy Parker, Ph.D. candidate at Penn State, who coauthored the study.

Credit: 
Crime and Justice Research Alliance

Undetected diabetes linked to heart attack and gum disease

People with undetected glucose disorders run a higher risk of both myocardial infarction and periodontitis, according to a study published in the journal Diabetes Care by researchers at Karolinska Institutet in Sweden. The results demonstrate the need for greater collaboration between dentistry and healthcare, say the researchers, and possibly for screening for diabetes at dental clinics.

Severe periodontitis is already known to be associated with a higher risk of myocardial infarction and lowered glucose tolerance, and diabetes to be more common in people who have suffered a heart attack.

The researchers behind these earlier findings have now studied whether undetected glucose disorders (dysglycaemia) - that is, a reduced ability to metabolise sugar - are linked to both these conditions: myocardial infarction and periodontitis. The results are published in the journal Diabetes Care.

The study was a collaboration between cardiologists and dentists at Karolinska Institutet and was based on data from a previous study called PAROKRANK. It included 805 myocardial infarction patients from 17 Swedish cardiology clinics and 805 controls, who were matched by age, sex and post code. The patients' periodontal status was assessed with X-rays and dysglycaemic status with glucose load tests.

Participants with a diabetes diagnosis were excluded from the study, which left 712 patients and 731 controls with data on both periodontal status and glucose status, the latter of which was divided into three categories: normal, reduced glucose tolerance and newly detected diabetes. Comparisons were made after adjusting for age, sex, smoking habits, education and civil status.

The study shows that previously undetected glucose disorders, which include diabetes and impaired glucose tolerance, were linked to myocardial infarction. It was roughly twice as common for myocardial infarction patients to have undetected dysglycaemia as for healthy controls, confirming the research group's earlier findings. Myocardial infarction affects approximately 30,000 people in Sweden annually.
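In a matched case-control design like this, "roughly twice as common" is typically quantified with an odds ratio from a 2x2 table of exposure (here, undetected dysglycaemia) in patients versus controls. The sketch below shows that arithmetic only; the counts are hypothetical, not the PAROKRANK data:

```python
# Illustrative odds-ratio calculation for a case-control comparison.
# The counts below are HYPOTHETICAL, not the PAROKRANK results; they
# only demonstrate the arithmetic behind "roughly twice as common".

def odds_ratio(cases_exposed, cases_unexposed,
               controls_exposed, controls_unexposed):
    """Odds ratio from a 2x2 table: (a/b) / (c/d), where a,b are
    exposed/unexposed cases and c,d are exposed/unexposed controls."""
    return (cases_exposed / cases_unexposed) / \
           (controls_exposed / controls_unexposed)

# Hypothetical example: 200 of 712 MI patients and 120 of 731 controls
# with undetected dysglycaemia.
or_dysglycaemia = odds_ratio(200, 712 - 200, 120, 731 - 120)
```

An odds ratio near 2 would correspond to the paper's finding that undetected dysglycaemia was about twice as common among infarction patients as among healthy controls (the published analysis additionally adjusts for age, sex, smoking and other covariates).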

Undetected diabetes was also found to be linked to severe periodontitis. When myocardial infarction patients and controls were analysed separately, the association was clearer in the patients than in the controls, which is possibly because many of the controls were very healthy and few had severe periodontitis and undetected diabetes.

"Our findings indicate that dysglycaemia is a key risk factor in both severe periodontitis and myocardial infarction and that the combination of severe periodontitis and undetected diabetes further increases the risk of myocardial infarction," says the study's lead author Anna Norhammar, cardiologist and Associate Professor at Karolinska Institutet's Department of Medicine in Solna.

The results substantiate previously known links between periodontitis and diabetes and show that such an association also exists with previously undetected diabetes.

According to the researchers, the findings should make diabetes specialists consider their patients' dental health and the need for closer collaboration with dentists.

"The PAROKRANK study is a good example of such collaboration," says the present study's senior author Lars Rydén, Professor at Karolinska Institutet's Department of Medicine in Solna and chair of the academically initiated PAROKRANK study.

"Our study shows that undetected glucose disorders are common in two major diseases - myocardial infarction and periodontitis," says Dr Norhammar. "Many people visit the dentist regularly and maybe it's worth considering taking routine blood-sugar tests in patients with severe periodontitis to catch these patients."

One of the study's limitations is that despite the large number of participants, the number of patients and controls with severe periodontitis and undetected diabetes was low. The observed differences in the links between undetected diabetes and severe periodontitis in patients and controls can therefore be attributable either to the low number of patients or to genuine differences in correlation.

Credit: 
Karolinska Institutet

Survey: Majority of current gun owners support the sale of personalized guns

A new study led by researchers at the Johns Hopkins Bloomberg School of Public Health found that almost four out of five current gun owners support the sale of both traditional and personalized guns through licensed dealers. However, only 18 percent of gun owners reported being likely to purchase a personalized gun for themselves when considering the additional costs.

Personalized guns--sometimes referred to as "smart" guns--include safety features to help prevent use by unauthorized individuals, including children. Some personalized guns require unique biometric information, like an individual's fingerprint, to function. Others use radio frequency identification (RFID) to connect a gun to a wearable device, like a bracelet, watch or ring, to activate the firearm much like a keyless entry. While not currently available for sale in the U.S., personalized guns have been promoted as a way to help prevent gun-related injuries.

The findings, published online June 10 in the American Journal of Preventive Medicine, explore factors that may influence the purchase of a personalized gun, such as added cost and safe storage practices. The research suggests that low to moderate interest in purchasing a personalized gun could limit the intended safety benefits the technology offers.

"This is an important time in discussing gun policy and other solutions to reduce the burden of gun-related deaths," says lead study author Cassandra Crifasi, PhD, MPH, deputy director of the Johns Hopkins Center for Gun Policy and Research and assistant professor in the Bloomberg School's Department of Health Policy and Management. "Our results show that while the technology is designed to reduce gun-related injuries, people who reported being likely to purchase a personalized gun already engaged in safe gun storage behaviors."

Advocates of personalized gun technology often highlight the context of school shootings or children and adolescents gaining access to a firearm. If the guns were personalized, children or adolescents who find the firearm would be unable to use it without authorization. However, personalized guns would do little to prevent the leading cause of firearm death: adult suicide.

The researchers used a nationally representative sample of current gun owners to examine knowledge and attitudes of purchasing a personalized gun. The internet-based survey was fielded by the research firm GfK Knowledge Networks between March 15 and April 13, 2016. Results of the 1,444 survey responses were analyzed in 2018. Respondents were asked questions about availability, how likely they would be to purchase a personalized gun if the cost increased by $300, and concerns about the technology.

The survey found that almost half, or 48 percent, of current gun owners had heard of personalized guns and 79 percent believe both traditional and personalized guns should be available for purchase from licensed dealers. Researchers then asked about the likelihood of purchasing a personalized gun if the safety features add $300 to the original price. Only five percent of respondents reported being extremely likely and 14 percent reported being somewhat likely to purchase a personalized gun for an additional $300. Overall, more than half, or 56 percent, of respondents had concerns about cost.

Researchers also found that respondents who reported safe storage practices, meaning all their guns are stored in a locked gun safe, cabinet or case, or locked into a gun rack or stored with a trigger lock or other lock, had a 50 percent higher likelihood of purchasing a personalized gun. Respondents who had attended a safety training course had a slightly higher likelihood--52 percent--of purchasing a personalized gun.

The survey also found that 70 percent of respondents were concerned about the technology working when the device was needed.

"Designers of personalized guns appear to be targeting individuals who would not have otherwise bought a gun but may consider doing so if they thought the gun was safe," says Crifasi. "The introduction of personalized guns in the U.S. market could increase the number of firearms in a home. Given what we know about exposure to firearms and risk of suicide, the potential unintended consequences of personalized guns warrant further examination."

Credit: 
Johns Hopkins Bloomberg School of Public Health