Tech

New findings on the largest natural sulfur source in the atmosphere

image: Laboratory set-up of the free jet experiment at TROPOS in Leipzig, which allows the investigation of the early phase of oxidation reactions under atmospheric conditions without the walls influencing the reaction behaviour.

Image: 
Torsten Berndt, TROPOS

Leipzig. An international research team has experimentally demonstrated in the laboratory a completely new reaction pathway for the largest natural sulfur source in the atmosphere. The team from the Leibniz Institute for Tropospheric Research (TROPOS), the University of Innsbruck and the University of Oulu reports in the Journal of Physical Chemistry Letters on the new degradation mechanism for dimethyl sulfide (DMS), which is released mainly by the oceans. The findings show that important steps in the Earth's sulfur cycle are not yet properly understood: they call into question the previously assumed formation pathways for sulfur dioxide (SO2), methanesulfonic acid (MSA) and carbonyl sulfide (OCS) from DMS degradation, species that strongly influence the Earth's climate through the formation of natural particles and clouds.

In the laboratory studies, a free-jet flow system was used at TROPOS in Leipzig, which allows the investigation of oxidation reactions under atmospheric conditions without disturbing wall effects. The products of the reactions were measured with state-of-the-art mass spectrometers using different ionization methods. The investigations showed that the degradation of dimethyl sulfide (DMS; CH3SCH3) proceeds predominantly via a two-step radical isomerization process, in which HOOCH2SCHO is formed as a stable intermediate product along with hydroxyl radicals. This reaction pathway had been the subject of theoretical speculation for four years, but the German-Austrian-Finnish team has only now been able to prove it. "The interaction of optimal reaction conditions and highly sensitive detection methods allows us to look almost directly into a reaction system," reports Dr. Torsten Berndt from TROPOS, who is in charge of the investigations. The new reaction pathway is significantly faster than the traditional bimolecular radical reactions with nitrogen monoxide (NO), hydroperoxy (HO2) and peroxy radicals (RO2). "Further investigations on the degradation of the intermediate HOOCH2SCHO will hopefully give us clarity about the formation channels, especially of sulfur dioxide (SO2) and carbonyl sulfide (OCS)," Berndt continued about the upcoming investigations.
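To make the mechanism easier to follow, the scheme below sketches a two-step peroxy-radical isomerization of this kind (radical dots omitted for readability). The text above states only the reactant, the stable product HOOCH2SCHO and the recycled hydroxyl radical; the intermediate steps shown here follow the published discussion of this pathway and are meant as an illustrative sketch rather than the authors' exact scheme.

$$
\begin{aligned}
\mathrm{CH_3SCH_3 + OH} &\rightarrow \mathrm{CH_3SCH_2 + H_2O}\\
\mathrm{CH_3SCH_2 + O_2} &\rightarrow \mathrm{CH_3SCH_2OO}\\
\mathrm{CH_3SCH_2OO} &\rightarrow \mathrm{CH_2SCH_2OOH} &&\text{(first isomerization step, H-shift)}\\
\mathrm{CH_2SCH_2OOH + O_2} &\rightarrow \mathrm{OOCH_2SCH_2OOH}\\
\mathrm{OOCH_2SCH_2OOH} &\rightarrow \mathrm{HOOCH_2SCHO + OH} &&\text{(second isomerization step, then decomposition)}
\end{aligned}
$$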

Dimethyl sulfide (DMS) is a sulfur-containing organic gas that occurs almost everywhere: as a bacterial degradation product, for example, it contributes to human bad breath. More importantly for the climate, large quantities of DMS are produced during decomposition processes in the ocean and outgassed: an estimated 10 to 35 million metric tons are released from seawater into the atmosphere every year. DMS is thus the largest natural source of sulfur for the atmosphere. Its reaction with hydroxyl radicals ultimately yields sulfur dioxide (SO2), from which sulfuric acid (H2SO4) is formed, as well as methanesulfonic acid (MSA); both play a major role in the formation of natural particles (aerosols) and clouds over the oceans. Carbonyl sulfide (OCS) is also important, as its low reactivity in the atmosphere allows it to be mixed into the stratosphere, where it contributes to the formation of sulfuric acid aerosols and thus to the cooling of the Earth's atmosphere.

The new findings about the degradation pathways of DMS help to improve our understanding of how natural aerosols form. The contribution of aerosols and the resulting clouds remains the greatest uncertainty in climate models: in contrast to greenhouse gases such as carbon dioxide, cloud formation processes are far more complex and difficult to model.

Credit: 
Leibniz Institute for Tropospheric Research (TROPOS)

High-protein diets may harm your kidneys

A high-protein diet is believed to be healthy. It is suggested that it keeps you fit, helps you to lose fat and to retain lean muscle mass. Avoiding carbohydrates and substituting them with proteins has become a leading dogma for all those who care for their looks and health.

Kamyar Kalantar-Zadeh, Holly M Kramer and Denis Fouque [1] now consider it necessary to question this belief and to put a tough warning label on our modern eating habits. "We may save calories, but we may also risk the health of our kidneys." The promise of saving calories and losing weight is why a high-protein diet is very often recommended to people who suffer from diabetes or who are obese. But the crux of the matter is that these groups of people are especially vulnerable to the kidney-harming effects of a high protein intake. "A high-protein diet induces glomerular hyperfiltration, which, according to our current state of knowledge, may boost a pre-existing low-grade chronic kidney disease, which, by the way, is often prevalent in people with diabetes. It might even increase the risk of de novo kidney diseases", explains Prof Fouque, past-chair of the European Renal Nutrition Working Group. "To put it in a nutshell: To recommend a high-protein diet to an overweight diabetes patient may indeed result in loss of weight, but also in a severe loss of kidney function. We want one, but we also get the other."

In view of the rising number of people affected by type-2 diabetes, and the fact that at least 30% of patients with diabetes suffer from an underlying chronic kidney disease, the experts believe it is high time that the diabetes population and the general public are warned. "By advising people - especially those with a high risk for chronic kidney disease, namely patients with diabetes, obese people, people with a solitary kidney and probably even elderly people - to eat a protein-rich diet, we are ringing the death bell for their kidney health and bringing them a big step closer to needing renal replacement therapy", says Fouque. This is the essence of the editorial by the three authors mentioned above, which has been published along with two new studies on the topic in the current issue of NDT. The analysis of a Dutch cohort [2] showed a strictly linear association between daily protein intake and decline in kidney function: the higher the intake, the faster the decline. The result of an epidemiological study conducted in South Korea [3] points in the same direction: persons with the highest protein intake had a 1.3-fold higher risk of faster GFR loss. The finding itself is not new. Many previous studies have shown that a high-protein diet may harm kidney function, which is why patients with known early-stage chronic kidney disease are advised by their nephrologists to follow a low-protein diet. As long as it is unclear whether it makes any difference if the proteins are animal- or plant-based, the recommendation is to abstain in general from a high protein intake.

However, as Fouque and his colleagues point out, the problem lies with people who have a mild chronic kidney disease they are totally unaware of and who follow the trend of eating a protein-rich diet because they believe it is healthy. "These people do not know that they are taking the fast lane to irreversible kidney failure." Prof Fouque and his ERA-EDTA colleagues want to start an information campaign and raise awareness of this problem among the general population. "It is essential that people know there is another side to high-protein diets, and that incipient kidney disease should always be excluded before one changes one's eating habits and adopts a high-protein diet."

Credit: 
ERA – European Renal Association

Nitrous oxide levels are on the rise

Nitrous oxide is a greenhouse gas and one of the main stratospheric ozone depleting substances on the planet. According to new research, we are releasing more of it into the atmosphere than previously thought.

Most of us know nitrous oxide (N2O) as "laughing gas", used for its anesthetic effects. It is, however, actually the third most important long-lived greenhouse gas, after carbon dioxide (CO2) and methane. According to new research by scientists from IIASA, the Norwegian Institute for Air Research (NILU), and several other institutions across Europe and the US, agricultural practices and the use of fertilizers containing nitrogen have greatly exacerbated N2O emissions to the atmosphere over the last two decades.

In their paper published in Nature Climate Change, the authors present estimates of N2O emissions determined from three global atmospheric inversion frameworks spanning 1998-2016, based on atmospheric N2O observations from global networks. IIASA contributed N2O emission inventory data from the Greenhouse Gas and Air Pollution Interactions and Synergies (GAINS) model, including those from industrial processes needed for data alignment.

"We see that the N2O emissions have increased considerably during the past two decades, but especially from 2009 onwards," says study lead author Rona Thompson, a senior scientist at NILU. "Our estimates show that the emission of N2O has increased faster over the last decade than estimated by the Intergovernmental Panel on Climate Change (IPCC) emission factor approach."

The authors state that N2O in the atmosphere has risen steadily since the mid-20th century. This rise is strongly linked to an increase in nitrogen substrates released to the environment. Since the mid-20th century, the production of nitrogen fertilizers, the widespread cultivation of nitrogen-fixing crops (such as clover, soybeans, alfalfa, lupins, and peanuts), and the combustion of fossil fuels and biofuels have enormously increased the availability of nitrogen substrates in the environment.

"While the increased nitrogen availability has made it possible to produce a lot more food, the downside is of course the environmental problems associated with it, such as rising N2O levels in the atmosphere," Thompson explains.

The results of the study indicate that N2O emissions increased globally by 1.6 (1.4-1.7) Tg N yr⁻¹ (approximately 10% of the global total) between 2000-2005 and 2010-2015. This is about twice the increment reported to the United Nations Framework Convention on Climate Change based on the amount of nitrogen fertilizer and manure used, and the default emission factor specified by the IPCC. The authors argue that this discrepancy is due to an increase in the emission factor (that is, the amount of N2O emitted relative to the amount of nitrogen fertilizer used) associated with a growing nitrogen surplus. This suggests that the IPCC method, which assumes a constant emission factor, may underestimate emissions when the rate of nitrogen input and the nitrogen surplus are high.

"We will have to adjust our emission inventories in light of these results, including those in the GAINS model," says study coauthor Wilfried Winiwarter, a researcher in the IIASA Air Quality and Greenhouse Gases Program. "Future increments in fertilizer use may trigger much larger additional emissions than previously thought - emission abatement, as is already reflected in GAINS results, will therefore become even more prominent and also cost efficient for such situations."

From their inversion-based emissions, the researchers estimate a global emission factor of 2.3 ± 0.6%, which is significantly larger than the IPCC default for combined direct and indirect emissions of 1.375%. The larger emission factor and the accelerating emission increase found from the inversions suggest that N2O emissions may respond non-linearly, at global and regional scales, to high levels of nitrogen input. The authors recommend using more complex algorithms and region-specific emission factors to estimate N2O going forward.
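To make the emission-factor arithmetic concrete, the sketch below contrasts a bottom-up estimate obtained with the IPCC default factor (1.375%) against one using the inversion-based factor (2.3%) quoted above. The nitrogen-input figure is hypothetical, and the calculation is a deliberately simplified illustration rather than the study's inversion framework.

```python
# Simplified sketch of bottom-up N2O accounting: emissions = nitrogen input x emission factor.
# The emission factors (1.375% IPCC default, 2.3% inversion-based) come from the text;
# the global nitrogen-input figure below is hypothetical and purely illustrative.

def n2o_emissions_tg_n(n_input_tg_n: float, emission_factor: float) -> float:
    """Return N2O emissions in Tg N per year for a given nitrogen input and emission factor."""
    return n_input_tg_n * emission_factor

n_input = 120.0  # hypothetical fertilizer + manure nitrogen input, Tg N per year

ipcc_estimate = n2o_emissions_tg_n(n_input, 0.01375)     # constant default factor
inversion_estimate = n2o_emissions_tg_n(n_input, 0.023)  # factor implied by the inversions

print(f"Constant IPCC factor:   {ipcc_estimate:.2f} Tg N/yr")
print(f"Inversion-based factor: {inversion_estimate:.2f} Tg N/yr")
# The same nitrogen input yields roughly two-thirds more N2O with the larger factor,
# which is why a constant-factor inventory can understate emissions where the
# nitrogen surplus (and hence the effective factor) is growing.
```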

The outcome of the study implies that to ultimately help lower global N2O emissions, the use of nitrogen fertilizer in regions where there is already a large nitrogen surplus will have to be reduced, as this would result in larger than proportional reductions in N2O emissions. This is particularly relevant in regions like East Asia, where nitrogen fertilizer could be used more efficiently without reducing crop yields.

Credit: 
International Institute for Applied Systems Analysis

Standard treatment programmes for OCD are not always enough

They wash their hands until the skin hangs in tatters, are in a state of panic about bacteria and infections - and are unable to use common sense and distance themselves from the stressful thoughts that are controlling their lives.

Teenagers with the contamination and washing variant of OCD are not generally more ill than children and adolescents with other forms of disabling obsessive thoughts and compulsive behaviour. However, if they have poor insight into their condition, they find it more difficult to recover and become healthy again as a result of the 14-week cognitive behavioural therapy, which is the standard form of treatment in Denmark for OCD, Obsessive Compulsive Disorder.

This is one of the conclusions of a newly published scientific study led by Professor and Department Chair Per Hove Thomsen and PhD student Sanne Jensen, both of Aarhus University and Aarhus University Hospital, Psychiatry, in Denmark.

"The research project shows that in the longer term, some of the patients who initially appear to react positively to cognitive behavioural therapy unfortunately turn out not to have received the help that they need. This is particularly true of young people with cleanliness rituals and reduced insight into their condition," says Sanne Jensen.

"The tricky thing is that they initially react positively to the cognitive behavioural therapy, and they therefore leave the mental health services again after the 14-week period of treatment. But when we contact them again after three years, we can see they demonstrate a worrying development - they have gotten worse," says Sanne Jensen about the research results, which have just been published in the Journal of Child Psychology and Psychiatry.

Both she and the study's senior researcher, Per Hove Thomsen, emphasise that the research result does not in any way undermine the value of cognitive behavioural therapy, which is psychiatry's modern active form of treatment. It is characterised by patients receiving help from a practitioner to practise doing more of whatever it is that they are afraid of, while simultaneously training a realistic relationship with the outside world. The mental health services' standard course of treatment lasts 14 weeks, with a possible extension.

"Part of the overall picture is that almost eighty per cent of those we studied were so well-functioning following the cognitive behavioural therapy that after three years they no longer had OCD to a degree that required treatment," says Per Hove Thomsen.

He refers to the finding that, after the three-year period, researchers measured the same low level of symptoms as they did following completion of the treatment in no fewer than 210 of the 269 children and adolescents aged 7-17 who participated in the study. Only 59, or approximately one in five, of the young people were in a worrying situation where there was fear of a relapse after the three years had elapsed.

"We're fortunate that the study very precisely identifies the group which we should be keeping a close eye on after the end of the treatment, namely teenagers with cleanliness rituals/contamination anxiety and poor insights into their condition. This knowledge now needs to be disseminated to both clinicians and relatives," says Per Hove Thomsen - well aware that the research results may lead to despondency among particularly vulnerable patients and their relatives. However, as he puts it:

"The conclusion isn't that you're doomed to a life-long disabling OCD if you're a teenager with cleanliness rituals and poor insights into your condition. There are also young people from this patient group who don't suffer a relapse. On the contrary, the conclusion is that we need to become better at following up on precisely these patients, because otherwise we risk leaving them in the lurch. Perhaps the treatment needs to be repeated, or perhaps there's a need to supplement the treatment with SSRI medicine," says Per Hove Thomsen.

Fact box: Many forms of OCD

Up to four per cent of all children and adolescents struggle with disabling obsessive thoughts and compulsive behaviour to such an extent that they can be classified as suffering from OCD. Rituals and compulsive behaviour have many forms - for example:

Cleanliness rituals/contamination anxiety where the patient feels that they are gross or they are afraid of picking up a dangerous infection by touching something which someone else has already touched. Typical characteristics include avoidance behaviour (such as using an elbow to open doors) and attempts to remove the potential sources of the 'contamination'.

Fear of causing harm by being somehow 'wrong' where the patient is afraid of doing harm to him- or herself or to others (by being a pyromaniac, murderer, sadist), or is afraid that they are e.g. a paedophile. Typical characteristics include anxiety that whatever it is the person fears is actually a disguised wish with the patient often uncertain as to whether his or her own impulses can be controlled.

Symmetry/hoarding where the patient must place objects or make movements in a specific order or form of symmetry, or has some other form of specific order such as repeating an action a specific number of times. The rituals often have to be performed in a specific way and are often so time-consuming that they are almost exhausting. Hoarding is characterised by a fear of throwing something away which should not have been discarded - ranging from letters to advertising and cast-off clothing.

The research results - more information

The Nordic Long-term OCD Treatment Study (NordLOTS) is the largest clinical treatment study thus far of children and adolescents with OCD aged 7-17. The study included 269 patients from Denmark, Norway and Sweden in the period 2008 to 2012. Initially, all participants received 14 sessions of cognitive behavioural therapy, after which patients with a certain level of continued clinical symptoms were randomly allocated to receive either additional cognitive behavioural therapy or medication of the SSRI type. All patients were examined after 6 months and after 1, 2 and 3 years.

Credit: 
Aarhus University

New assessment finds EU electricity decarbonization discourse in need of overhaul

It's well known that the EU is focusing its efforts on decarbonizing its economy. In many respects, Germany's Energiewende is the poster child of that effort. Unfortunately, substantial investments in the Energiewende have not yet yielded significant reductions in GHG emissions, and political disillusionment has emerged as an unwelcome result. Decarbonization efforts in other European countries risk similar blunders unless the contemporary EU policy discourse is thoroughly cross-examined.

A new paper, published last week in Energy Research & Social Science, calls into question the credibility of electricity decarbonization narratives in Europe. The work, authored by researchers Ansel Renner and Mario Giampietro of ICTA-UAB in Barcelona, proposes a novel methodological approach for the study of complex issues such as electricity decarbonization.

"Quantitative story-telling," explains Renner, "is a rejection of our shared Reductionist upbringings. It embraces the discipline of complex systems analysis and, in the context of European electricity decarbonization, brings to light a number of serious causes for concern."

Defibrillating the socio-technical discourse

Renner and Giampietro's paper identifies the Achilles' heel of the EU decarbonization policy discourse as that discourse's hyperfixation on structural change. It is highly unlikely, the study finds, that the "EU's heroic transition" will prove successful unless it engages substantially more with functional societal change. Such a change implies moving from questioning what technologies are "made of" to questioning what technologies are "made for". For example, it may not be wise to assume that advances in solar photovoltaic technologies--coupled with the highly anticipated advent of electric vehicles and smart grids--will suffice to realize radical decarbonization in the immediate future. Instead, it may be wise to spend equal effort on policies that motivate changes in use, including, for example, reductions in long-distance travel as well as the sharing of cars and apartments.

"When I was an undergraduate," remarks Giampietro, "universities had entire departments tasked with the study of energetics. Unfortunately for us, that field has since been systematically eliminated. Our study attempts to reinvigorate the contemporary discourse on decarbonization by identifying a number of 'elephants in the room' inspired by energetics."

The paper is also part of a greater cause and mission statement--that of the EU Horizon 2020 project Moving Towards Adaptive Governance in Complexity (MAGIC). The MAGIC project has been working closely with European policymakers over the past three years, building a strong résumé of heterodox analysis informing the resource nexus. More information and studies about water-energy-food resource security in Europe are available on the project website.

Credit: 
Universitat Autonoma de Barcelona

Quantum computers learn to mark their own work

Quantum computers can potentially answer questions beyond the capabilities of classical computing - but their answers might not be reliable

University of Warwick scientists have developed a protocol for quantum computers to measure how close their answers are to the correct ones

Checking whether these answers are correct using classical methods is extremely resource-intensive

The protocol could be used to confirm whether a quantum computer has outperformed classical computers, achieving so-called quantum supremacy

A new test to check whether a quantum computer is giving correct answers to questions beyond the scope of traditional computing could help realise the first quantum computer capable of outperforming a classical computer.

By creating a protocol that allows a quantum computer to check its own answers to difficult problems, the scientists from the University of Warwick have provided a means to confirm that a quantum computer is working correctly without excessive use of resources.

Samuele Ferracin, Theodoros Kapourniotis and Dr Animesh Datta from the University's Department of Physics have recently tackled this problem in a paper for the New Journal of Physics, published today (18 November).

The researchers have developed a protocol to quantify the effects of noise on the outputs of quantum computers. Noise is defined as anything that affects a quantum machine's hardware but is beyond the user's control, such as fluctuations in temperature or flaws in the fabrication. This can affect the accuracy of a quantum computer's results.

When applied, the researchers' test produces two percentages: how close it estimates the quantum computer is to the correct result and how confident a user can be of that closeness.

The test will help the builders of quantum computers to determine whether their machine is performing correctly and to refine its performance, a key step in establishing the usefulness of quantum computing in the future.

Dr Animesh Datta from the University of Warwick Department of Physics said: "A quantum computer is only useful if it does two things: first, that it solves a difficult problem; the second, which I think is less appreciated, is that it solves the hard problem correctly. If it solves it incorrectly, we had no way of finding out. So what our paper provides is a way of deciding how close the outcome of a computation is to being correct."

Determining whether a quantum computer has produced a correct answer to a difficult problem is a significant challenge as, by definition, these problems are beyond the scope of an existing classical computer. Checking that the answer it has produced is correct typically involves using a large number of classical computers to tackle the problem, something that is not feasible to do as they tackle ever more challenging problems.

Instead, the researchers have proposed an alternative method that involves using the quantum computer to run a number of easy calculations that we already know the answer to and establishing the accuracy of those results. Based on this, the researchers can put a statistical boundary on how far the quantum computer can be from the correct answer in the difficult problem that we want it to answer, known as the target computation.

It is a similar process to that which computer programmers use to check large computer programs, by putting in small functions with known answers. If the program answers enough of these correctly then they can be confident that the whole program is correct.
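As a rough illustration of this "trap"-style reasoning (a generic sketch, not the Warwick protocol itself), one can run many small computations whose correct outputs are known, count how often the device gets them wrong, and turn that count into a statistical bound that holds with a chosen confidence. The numbers and the simulated noisy device below are hypothetical.

```python
import math
import random

def trap_failure_bound(trap_results, delta=0.05):
    """Estimate the trap failure rate and a one-sided upper bound that holds
    with probability at least 1 - delta (via Hoeffding's inequality)."""
    n = len(trap_results)
    failures = sum(1 for ok in trap_results if not ok)
    p_hat = failures / n
    margin = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return p_hat, min(1.0, p_hat + margin)

# Simulate 400 trap runs on a hypothetical noisy device that fails each trap
# with probability 0.03.
random.seed(0)
traps = [random.random() > 0.03 for _ in range(400)]

p_hat, p_upper = trap_failure_bound(traps, delta=0.05)
print(f"Observed trap failure rate: {p_hat:.3f}")
print(f"With 95% confidence, the true failure rate is below {p_upper:.3f}")
```

In the actual protocol, a bound of this kind on the easy, checkable runs is then translated into a statement about how close the output of the target computation is to the correct one, together with the confidence of that statement - the two percentages described above.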

Dr Datta adds: "The whole point of having a quantum computer is to not spend an exponential amount of time solving problems, so taking an exponential amount of time to check whether it's correct or not defeats the point of it. So our method is efficient in that it doesn't require an exponential amount of resources.

"We do not need a classical computer to check our quantum computer. Our method is self-contained within a quantum system that can be used independently of large servers."

Lead author Samuele Ferracin has been developing ways for scientists working on quantum computers to incorporate the test into their work. He said: "We have spent the last few years thinking about new methods to check the answers of quantum computers and proposing them to experimentalists. The first methods turned out to be too demanding for the existing quantum computers, which can only implement 'small' computations and perform restricted tasks. With our latest work we have successfully developed a method that suits existing quantum computers and encompasses all their main limitations. We are now collaborating with experimentalists to understand how it performs on a real machine."

Quantum computing harnesses the unusual properties of quantum physics to process information in a wholly different way to conventional computers. Taking advantage of the behaviour of quantum systems, such as existing in multiple different states at the same time, this radical form of computing is designed to process data in all of those states simultaneously, lending it a huge advantage over classical computing. Certain kinds of problems, like those found in codebreaking and in chemistry, are particularly suited to exploiting this property.

The last few years have seen unprecedented experimental advances. The largest quantum computers are doubling in size every six months and now seem very close to achieving quantum supremacy. Quantum supremacy refers to a milestone in the development of quantum computers, where a quantum computer first performs a function that would require an unreasonably large amount of time using a classical computer.

Dr Datta adds: "What we are interested in is designing or identifying ways of using these quantum machines to solve hard problems in physics and chemistry, to design new chemicals and materials, or identify materials with interesting or exotic properties. And that is why we are particularly interested in the correctness of the computation."

Credit: 
University of Warwick

Kick-starting Moore's Law? New 'synthetic' method for making microchips could help

Researchers at Johns Hopkins University have developed a new method for producing atomically-thin semiconducting crystals that could one day enable more powerful and compact electronic devices.

By using specially-treated silicon surfaces to tailor the crystals' size and shape, the researchers have found a potentially faster and less expensive way to produce next-generation semiconductor crystals for microchips. The crystalline materials produced this way could in turn enable new scientific discoveries and accelerate technological developments in quantum computing, consumer electronics, and higher efficiency solar cells and batteries.

The findings are described in a paper published today in Nature Nanotechnology.

"Having a method to sculpt crystals at the nanoscale precisely, quickly, and without the need for traditional top-down processes, presents major advantages for widespread utilization of nanomaterials in technology applications," said Thomas J. Kempa, a chemistry professor at Johns Hopkins University who directed the research.

Kempa's team first doused silicon substrates - the supports used widely in industrial settings to process semiconductors into devices - with phosphine gas. When crystals were coaxed to grow on the phosphine-treated silicon supports, the authors discovered that they grew into structures that were far smaller and of higher quality than crystals prepared through traditional means.

The researchers discovered that the reaction of phosphine with the silicon support caused the formation of a new "designer surface." This surface spurred the crystals to grow as horizontal "ribbons" as opposed to the planar and triangularly-shaped sheets that are typically produced. Moreover, the uniform complexion and clean-edged structure of these ribbons rivaled the quality of nanocrystals prepared through industry-standard patterning and etching processes, which are often laborious, lengthy, and expensive, Kempa said.

The nanocrystals prepared in this study are called "transition metal dichalcogenides" or TMDs. Like graphene, TMDs have enjoyed widespread attention for possessing powerful properties that are a unique consequence of their "two-dimensional" scale. But conventional processing methods struggle to readily alter the texture of TMDs in ways that suit new discoveries and the development of better-performing technologies.

Notably, the versions of TMDs that Kempa and his team were able to create were so small that they dubbed them "one-dimensional" to differentiate them from the usual two-dimensional sheets most researchers are familiar with.

Materials processing limitations are one reason why Moore's Law has been slowing in recent years. The rule, posed in 1965 by Intel co-founder Gordon E. Moore, states that the number of transistors, and their performance, in a dense integrated circuit will double approximately every two years. Packing so many micron-sized transistors into microchips, or integrated circuits, is the reason that consumer electronics have gotten steadily smaller, faster, and smarter over the past few decades.

However, the semiconductor industry is now struggling to maintain that pace.

Notable features of the crystals prepared by Kempa and his team include:

1. Their highly uniform atomic structure and quality stem from the fact that they were synthesized rather than fabricated through the traditional methods of patterning and etching. The elegant quality of these crystals could render them more efficient at conducting and converting energy in solar cells or catalysts.

2. Researchers were able to directly grow the crystals to their precise specifications by changing the amount of phosphine.

3. The "designer substrate" is "modular," meaning that academic and industrial labs could use this technology in conjunction with other existing crystal growth processes to make new materials.

4. The "designer substrates" are also reusable, saving money and time on processing.

5. The resulting ribbon-shaped, one-dimensional crystals emit light whose color can be tuned by adjusting the ribbon width, indicating their potential promise in quantum information applications.

"We are contributing a fundamental advance in rational control of the shape and dimension of nanoscale materials," Kempa said.

This method can "sculpt nanoscale crystals in ways that were not readily possible before," he added. "Such precise synthetic control of crystal size at these length scales is unprecedented."

"Our method could save substantial processing time and money," he said. "Our ability to control these crystals at will could be enabling of applications in energy storage, quantum computing and quantum cryptography."

Credit: 
Johns Hopkins University

Money spent on beer ads linked to underage drinking

AMES, Iowa - Advertising budgets and strategies used by beer companies appear to influence underage drinking, according to new research from Iowa State University.

The findings show that the amount of money spent on advertising strongly predicted the percentage of teens who had heard of, preferred and tried different beer brands. For example, 99% of middle school and high school students surveyed for the study had heard of Budweiser and Bud Light - the top spender on advertising - and 44% said they had used the brand.

The study, published by the journal Addictive Behaviors Reports, is one of the first to examine the relationship among advertising budgets, underage drinking and brand awareness. The study was led by Iowa State professor Douglas Gentile, assistant professors Brooke Arterberry and Kristi Costabile, and Aalborg University assistant professor Patrick Bender.

Gentile says advertisers use cognitive and affective strategies - humor, animation, funny voices, special effects - that often appeal to youth. To test this, researchers looked at money spent on beer ads to determine the relationship with brand awareness, preference, loyalty and use among teens. They then compared advertising strategies with teens' intention to drink as an adult and current alcohol consumption.

Of the 1,588 middle and high school students surveyed, more than half (55%) had at least one alcoholic drink in the past year, 31% had one or more drinks at least once a month and 43% engaged in heavy drinking. When asked to name their two favorite TV commercials, alcohol-related ads had the highest recall (32%) followed by soft drinks (31%), fashion (19%), automotive (14%) and sports (9%). A quarter of those surveyed said they owned alcohol-related products.

"We can't say from this one study that advertisers are specifically targeting youth, but they are hitting them," Gentile said. "If you look at beer ads, advertisers are using all the tricks we know work at grabbing children's attention."

Research has shown teens are heavy consumers of media and therefore exposed to more advertising. Costabile, who studies entertainment narratives, says advertisers - beer companies or any brand - know that the message is more persuasive when delivered as a story.

"Viewers or readers aren't thinking about the message through a critical lens," Costabile said. "Instead, audiences become immersed in a compelling story and identify with the characters, a process which leads them to unintentionally be persuaded by the messages of the story."

Return on investment

According to the Federal Trade Commission, 14 alcohol companies spent $3.45 billion on marketing in 2011. Of that amount, 26% was spent on advertising. Spending has grown since 1999, when Iowa State researchers collected the survey data. At that time, the top five advertised brands (Budweiser/Bud Light, Miller Genuine Draft/Miller Lite, Coors/Coors Light, Corona/Corona Extra and Heineken) spent just over $1 billion.

"Not much has changed since we collected the data, other than amount spent on advertising," Gentile said. "Underage drinking is still a problem, beer companies still advertise and the psychological mechanisms of how ads work and the way teens learn are all the same."

ISU researchers also asked teens about their intentions to drink as an adult. Advertising and parent and peer approval of drinking were all significant predictors of intention to drink. Arterberry, who studies issues related to substance use, says that, with a growing number of young adults reporting substance use disorders, this study offers insight as to why some may start drinking at a young age.

"By understanding what influences behavior we can design more effective prevention and intervention programs to reduce underage drinking, which in turn could lessen the likelihood that alcohol use becomes a problem," Arterberry said.

Credit: 
Iowa State University

Philadelphia had 46 neighborhood mass shootings over 10 years, Temple-led team finds

image: Dr. Jessica H. Beard, MD, MPH, Assistant Professor of Surgery in the Division of Trauma and Surgical Critical Care at the Lewis Katz School of Medicine at Temple University, the first and corresponding author on the article, "Examining mass shootings from a neighborhood perspective: An analysis of multiple-casualty events and media reporting in Philadelphia, United States," published in Preventive Medicine.

Image: 
Temple University Health System

(Philadelphia, PA) - The definition of mass shooting varies widely depending upon the information sources that are used. In a new study published online in the journal Preventive Medicine, a research team led by Temple's Jessica H. Beard, MD, MPH, standardized the defining features of neighborhood mass shootings using police department data, then examined media coverage of the shooting incidents.

The team, led by Dr. Beard, Assistant Professor of Surgery in the Division of Trauma and Surgical Critical Care at the Lewis Katz School of Medicine at Temple University (LKSOM), examined 15,672 firearm assaults reported to the Philadelphia Police Department between January 1, 2006 and December 31, 2015, looking at the time, date, and location of the incidents as well as the demographics and mortality of the victims. The researchers determined that three variables were most relevant to a standardized definition of "neighborhood mass shooting": distance between shootings, time between shootings, and total number of victims. Specifically, they defined a neighborhood mass shooting as one that involves four or more victims shot within a one-hour window and within 100 meters (about a city block).
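The definitional criteria translate naturally into a simple screening procedure. The sketch below applies the reported thresholds (four or more victims, one hour, 100 meters) to a list of incident records; the greedy grouping and the record format are assumptions made for illustration rather than the researchers' actual method, and coordinates are taken to be in meters on a local grid.

```python
from datetime import timedelta
from math import hypot

def neighborhood_mass_shootings(incidents, max_meters=100, max_hours=1, min_victims=4):
    """Flag clusters of incidents meeting the study's definitional thresholds.

    incidents: list of dicts with keys 'time' (datetime), 'x', 'y' (meters on a
    local grid) and 'victims' (number of people shot). Returns a list of clusters.
    """
    incidents = sorted(incidents, key=lambda rec: rec["time"])
    events, used = [], set()
    for i, seed in enumerate(incidents):
        if i in used:
            continue
        cluster = [i]
        for j in range(i + 1, len(incidents)):
            other = incidents[j]
            within_hour = other["time"] - seed["time"] <= timedelta(hours=max_hours)
            within_block = hypot(other["x"] - seed["x"], other["y"] - seed["y"]) <= max_meters
            if within_hour and within_block:
                cluster.append(j)
        if sum(incidents[k]["victims"] for k in cluster) >= min_victims:
            used.update(cluster)
            events.append([incidents[k] for k in cluster])
    return events
```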

Based on those criteria, during the 10 study years, Philadelphia experienced 46 neighborhood mass shootings that injured or killed 212 individuals. Other findings associated with those 46 incidents included:

The victims tended to be black (85.9%) and male (76.4%), with an average age of 24.

Of the 212 individuals who were shot in these incidents, 29 (13.7%) died.

The events took place in 41 distinct census block groups (of 1,337 in the city).

"There is no standard definition of a mass shooting," Dr. Beard said. "Our team sought to devise one that acknowledges conventional definitions but is more inclusive and also considers the impact on communities. We found that there were 46 neighborhood mass shootings in Philadelphia from 2006-2015; for comparison's sake, there were 41 traditionally defined mass shooting events in the country during that time period, according to Mother Jones."

The research team examined how the 46 mass shooting events were reported in public news media, using three full-text newspaper and television transcript databases of local, regional and national media outlets. Examination of reporting within four days of each mass shooting event revealed:

There were 183 total media reports.

Seven (15%) of these incidents received no identifiable media coverage.

The 31 victims of the no-coverage events tended to be black (84%) and 25 years old or younger (61%).

None of these seven unreported events had any fatalities.

Of the remaining 39 incidents that did receive coverage, that coverage generally came from local and regional print publications.

Nine of these 39 reported incidents (23%) attracted national media attention and involved either youth or female victims.

Only two headlines used the term "mass shooting," and both were for one incident.

"The media have a profound impact on society based on both what they choose to cover and how they choose to cover it," Dr. Beard added. "Neighborhood mass shootings are more common than the public - and policymakers - likely know. Going forward, we would suggest coverage focus more extensively on the complexity of the events, share victims' and survivors' stories, and explore root causes and solutions."

Credit: 
Temple University Health System

Paper: Outcomes vary for workers who 'lawyer up' in employment arbitration disputes

image: A worker who retains legal counsel to litigate a workplace dispute in arbitration doesn't account for the potentially countervailing effect of employers hiring their own legal counsel, says new research co-written by U. of I. labor professor Ryan Lamare.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- Conventional wisdom dictates that workers who "lawyer up" in workplace disputes improve their chances of securing a better outcome, but in an alternative dispute resolution context such as arbitration, employers can all but cancel out those positive effects, says a new paper by a University of Illinois expert who studies labor and employment arbitration.

An employee who retains legal counsel to litigate a workplace conflict doesn't account for the potentially countervailing force of employers hiring their own representatives or for differences in attorney characteristics, which tend to favor the deep-pocketed employer, said J. Ryan Lamare, a professor of labor and employment relations at Illinois.

"There's this idea that employers are sometimes perceived as unleveling the playing field by taking advantage of the institutional structure of arbitration," Lamare said. "One of the counterarguments to that is employees don't have to go it alone in arbitration. They think they can hire an attorney who can essentially level the playing field. But there are pitfalls to that strategy, too."

The question of whether employees can level the playing field in arbitration by hiring attorneys is of particular interest since employees are often required to waive their right to sue their employers in court and are frequently forced to go to arbitration - even when dealing with issues as severe as discrimination or sexual harassment on the job, Lamare said.

"These policies have led to mass protests at companies like Google, and states such as California have considered banning arbitration for these types of employment disputes," he said. "If employees can use lawyers to shield them from the negative aspects of arbitration, this might calm the nerves of those who see the system as unfair. Alternatively, if lawyers are ineffectual in arbitration, this would lend support to those who want to limit arbitration usage."

Lamare analyzed employment arbitration awards rendered under the Financial Industry Regulatory Authority system for cases filed between 1986 and 2007. He found that hiring a lawyer benefits employees only in the rare instances when employers do not retain an attorney.

Conversely, when employers used an attorney in arbitration but employees did not, the employer benefited substantially. When both sides retained attorneys, however, the effects were statistically identical to those cases in which neither side hired lawyers, according to the paper.
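A small synthetic example helps show how "the two sides' lawyers roughly cancel each other out" can appear in a regression. The data-generating process, variable names and effect sizes below are invented for illustration and are not the paper's specification; the point is only that equal and opposite representation effects leave "both represented" statistically similar to "neither represented".

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration: awards rise when the employee has counsel, fall when
# the employer does, and the two effects offset when both sides are represented.
rng = np.random.default_rng(42)
n = 2000
emp_attorney = rng.integers(0, 2, n)    # 1 = employee retained counsel (hypothetical variable)
firm_attorney = rng.integers(0, 2, n)   # 1 = employer retained counsel (hypothetical variable)
award = (
    50_000
    + 20_000 * emp_attorney             # hypothetical employee-counsel effect
    - 20_000 * firm_attorney            # hypothetical employer-counsel effect
    + rng.normal(0, 10_000, n)          # noise
)

df = pd.DataFrame({"award": award,
                   "emp_attorney": emp_attorney,
                   "firm_attorney": firm_attorney})
model = smf.ols("award ~ emp_attorney * firm_attorney", data=df).fit()
print(model.params.round(0))
# The fitted main effects are roughly equal and opposite and the interaction is
# near zero, so predicted awards for "both represented" and "neither represented"
# coincide - the offsetting pattern described above.
```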

"What people forget is that employers can also hire lawyers, and they can act in ways that offset the effects of the employee's attorney," Lamare said. "They roughly cancel each other out - and the employee has paid a lot of money out of their own pocket for representation. That alone is a negative outcome that's detrimental to employees."

Firms also have far more experience going through arbitration than employees, giving them a huge advantage, Lamare said.

"The firm goes through arbitration many times, and as a part of that experience, the firm becomes better or is able to game the system better than the employee who only goes through it once," he said.

For employees, hiring any random lawyer isn't enough to level the playing field. Attorney skill and specialization matter, but there are no guarantees - even with a "good" attorney, Lamare said.

"I find that higher-skill attorneys produce better outcomes in arbitration, but it may be the case that higher-skill attorneys attach themselves to better claims, more winnable cases," he said.

To control for that, Lamare accounted for as many different factors as possible in the types of claims that went to arbitration by examining lawyers' biographical records to determine attorney quality differences and their effects on outcomes conditional on both sides having legal counsel.

He found that employee and employer attorney characteristics differ and the contrast has grown more pronounced over time. The difference can affect awards, particularly for employees.

"The bottom line is: Simply hiring an attorney won't redress systematic imbalances within employment arbitration," he said. "Lawyers are certainly important to the system and certain types of representatives can affect the outcomes of arbitration. But inequalities persist, and attorneys vary - sometimes greatly - in the substantive value they add when they represent employees.

"Employees should not assume that they can overcome systemic power inequalities simply by hiring an attorney."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Structure of a mitochondrial ATP synthase

video: This is a cryo-EM map of the membrane region, with a close-up view of a cardiolipin linking the dimer.

Image: 
A. Muhleip

ATP synthase is a universal molecular machine for energy conversion. By coupling to cellular respiration in mitochondria, it catalyzes the conversion of energy into the cell's chemical energy currency.

Mitochondrial ATP synthase is composed of dimers that, when they come together, induce the membrane curvature essential for efficient energy conversion. While the mitochondrial signature lipid cardiolipin and its interactions with proteins are believed to contribute to this process, this had not been directly visualized before. In addition, it was unclear to what extent the ATP synthase has diverged across different species.

Alexander Mühleip from Alexey Amunts' lab used the single-cell photosynthetic organism Euglena gracilis, which belongs to a phylum that also includes human parasites, to extract the mitochondrial ATP synthase. Its structure was then determined using cryo-EM, allowing reconstruction of the atomic model. The high resolution of the cryo-EM density map allowed identification of 29 different protein subunits and 25 cardiolipin molecules. Some of the cardiolipins appear to modulate the critical channel for the proton transfer that fuels the machine, which is the first evidence for their direct involvement.

The model shows that the Euglena mitochondrial ATP synthase is highly divergent, with 13 protein subunits specific to this system. The new subunits contribute to the formation of dimers glued together by lipids, providing experimental evidence that the key functional property of membrane curvature induction has likely arisen independently in evolution. Furthermore, a membrane-embedded subcomplex was found at the periphery of the ATP synthase, displacing the membrane. Molecular dynamics simulations performed by Sarah McComas support the conclusion that this newly found subcomplex contributes to the membrane curvature.

The atomic model of the mitochondrial ATP synthase with native lipids provides a new perspective for its functional analysis. The different type of molecular organization of this essential energy-production machinery re-establishes the defining features of the complex and opens the door to reconstructing the evolution of the mitochondrial ATP synthase. Finally, the identified similarities to the human parasites offer a new perspective for therapeutic exploration.


Credit: 
Science For Life Laboratory

Army project may lead to new class of high-performance materials

image: Synthetic biologists working on a U.S. Army project have developed a set of design rules that guide how ribosomes, the cell structures that make protein, can incorporate new kinds of monomers, which could lead to a new class of synthetic polymers and to new high-performance materials and therapeutics for Soldiers.

Image: 
Courtesy Northwestern University

RESEARCH TRIANGLE PARK, N.C. -- Synthetic biologists working on a U.S. Army project have developed a process that could lead to a new class of synthetic polymers that may create new high-performance materials and therapeutics for Soldiers.

Nature Communications published research conducted by Army-funded researchers at Northwestern University, who developed a set of design rules to guide how ribosomes, a cell structure that makes protein, can incorporate new kinds of monomers, which can be bonded with identical molecules to form polymers.

"These findings are an exciting step forward to achieving sequence-defined synthetic polymers, which has been a grand challenge in the field of polymer chemistry," said Dr. Dawanne Poree, program manager, polymer chemistry at the Army Research Office. "The ability to harness and adapt cellular machinery to produce non-biological polymers would, in essence, bring synthetic materials into the realm of biological functions. This could render advanced, high-performance materials such as nanoelectronics, self-healing materials, and other materials of interest for the Army."

Biological polymers, such as DNA, have precise building-block sequences that provide for a variety of advanced functions such as information storage and self-replication. This project looked at how to re-engineer biological machinery to work with non-biological building blocks, which would offer a route to creating synthetic polymers with the precision of biology.

"These new synthetic polymers may enable the development of advanced personal protective gear, sophisticated electronics, fuel cells, advanced solar cells and nanofabrication, which are all key to the protection and performance of Soldiers," Poree said.

"We set out to expand the range of ribosomal monomers for protein synthesis to enable new directions in biomanufacturing," said Michael Jewett, the Charles Deering McCormick Professor of Teaching Excellence, professor of chemical and biological engineering, and director of the Center for Synthetic Biology at Northwestern's McCormick School of Engineering. "What's so exciting is that we learned the ribosome can accommodate more kinds of monomers than we expected, which sets the stage for using the ribosome as a general machine to create classes of materials and medicines that haven't been synthesized before."

Recombinant protein production by the ribosome has transformed the lives of millions of people through the synthesis of biopharmaceuticals, like insulin, and industrial enzymes that are used in laundry detergents. In nature, however, the ribosome only incorporates natural amino acid monomers into protein polymers.

To expand the repertoire of monomers used by the ribosome, Jewett's team set out to identify design rules for linking monomers to transfer ribonucleic acids, known as tRNAs. That is because getting the ribosome to use a new monomer is not as simple as introducing the monomer to the ribosome. The monomers must be attached to tRNAs, which are the molecules that carry them into the ribosome. Many current processes for attaching monomers to tRNAs are difficult and time-consuming, but a relatively new process called flexizyme enables easier and more flexible attachment of monomers.

To develop the design rules for using flexizyme, the researchers created 37 monomers that were new to the ribosome from a diverse repertoire of scaffolds. Then, they showed that the monomers that could be attached to tRNAs could be used to make tens of new peptide hybrids. Finally, they validated their design rules by predictably guiding the search for even more new monomers.

"With the new design rules, we show that we can avoid the trial-and-error approaches that have been historically associated with developing new monomers for use by the ribosome," Jewett said.

These new design rules should accelerate the pace in which researchers can incorporate new monomers, which ultimately will lead to new bioproducts synthesized by the ribosome. For example, materials made of protease-resistant monomers could lead to antimicrobial drugs that combat rising antibiotic resistance.

The research is part of the Department of Defense's Multidisciplinary University Research Initiatives program, supported by ARO, in which Jewett is working with researchers from three other universities to reengineer the ribosome as a biological catalyst to make novel chemical polymers. ARO is an element of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory.

"It's amazing that the ribosome can accommodate the breadth of monomers we showed," Jewett said. "That's really encouraging for future efforts to repurpose ribosomes."

Credit: 
U.S. Army Research Laboratory

Implementing no-till and cover crops in Texas cotton systems

Healthy soil leads to productive and sustainable agriculture. Farmers who work with, not against, the soil can improve the resiliency of their land. Because of this, practices such as no-till and cover crops and topics such as regenerative agriculture and soil biology have become increasingly important in the agricultural conversation.

While producers of many major crops in the United States have adopted conservation agriculture practices, cotton producers have lagged behind. In 2018, conservation tillage (which includes no-till, strip-till, and mulch tillage) was used in 70% of soybean acres, 67% of wheat acres, and 65% of corn acres but only 40% of cotton acres.

Many cotton producers are interested in conservation agriculture but question how conservation agriculture practices will fit into their farming operations. Paul DeLaune, of Texas A&M AgriLife Research, addresses these concerns in the webcast "Implementing No-Till and Cover Crops in Texas Cotton Systems." The webcast is directed specifically at stakeholders who are considering adopting practices such as no-till and cover crops.

DeLaune outlines the impact of conservation tillage and cover crop practices on cotton yields, economic returns, soil water storage, and soil health in dryland and irrigated cotton systems. He concludes that no-till, with or without a cover crop, produces much higher net returns than conventional tillage and that, over the long term, no-till systems have produced improved yields. He also offers advice on which cover crops are most effective and explains why higher seeding rates don't necessarily translate into higher biomass production.

This 40-minute presentation is available through the "Focus on Cotton" resource on the Plant Management Network. This resource contains more than 75 webcasts, along with presentations from six conferences, on a broad range of aspects of cotton crop management: agronomic practices, diseases, harvest and ginning, insects, irrigation, nematodes, precision agriculture, soil health and crop fertility, and weeds. These webcasts are available to readers open access (without a subscription).

Credit: 
American Phytopathological Society

Helicopter parents and 'hothouse children' -- exploring the high stakes of family dynamics

image: WVU's Kristin Moilanen is researching the effects of helicopter parenting on young adults.

Image: 
West Virginia University

True helicopter parents talk a good game in making their actions all about their children, but according to one West Virginia University researcher, what they're doing is reaping--and heaping--the rewards for themselves.

Kristin Moilanen, associate professor of child development and family studies, said the phenomenon of helicopter parenting most often occurs in middle- to upper-class families where the stakes are high for parents to be able to show off their children’s success. Her research, which focuses on young adults 18 to 24 years old, indicates that high levels of helicopter parenting lead to “low mastery, self-regulation and social competence.”

“Unfortunately, I think the term for those children is ‘hothouse children,’” Moilanen said. “I think they’ve been raised to be these sort of delicate flowers under these very well-controlled conditions and —just like a tropical plant— they’re vulnerable whenever those conditions are exceeded, which is a scary thought.”

The college admissions scandal, which led to the arrest and incarceration of two Hollywood actresses who had bribed high-profile universities to admit their children by falsifying admissions test scores or outright lying about athletic abilities, might be the most currently-famous example of helicopter parenting gone wrong.

“Their stakes were different than, maybe for average people, but maybe [the fear was] they wouldn’t have access to the spotlight or that the college wouldn’t be prestigious enough, maybe that it wouldn’t be in keeping with their lifestyle they were accustomed to,” Moilanen said.

The push for “the right” college or university extends to the helicopter parents’ career guidance as well - for example, forcing a choice of medicine when the child may want to be an artist, she continued. Helicopter parenting, Moilanen said, isn’t done for what the child wants; it can be done for what the parent wants for the child.

The dichotomy does more harm than just breeding resentment toward an interfering parent. Moilanen said children take parents’ repeated over-involvement in their decisions to heart, undermining their sense of self-concept and their ability to self-regulate.

Moilanen said that when those students come to college, where their parents have a financial stake, they face struggles they don’t necessarily know how to manage. Some of them handle the pressure with dangerous behaviors, including episodic drinking that they hide from their parents.

“It can get messy for those kids really fast,” she said. “In a sense, they get caught between their parents’ desires, even if [the child] knows what’s best for themselves.”

Moilanen said children might figure out problems on their own, but the parent swoops in before they have the opportunity to learn for themselves. Collateral side effects of the child’s continued lack of autonomy could include heightened anxiety and internalizing problems, as well as the belief that they are incapable of living independently and that their outcomes are shaped primarily by external forces rather than by their own decisions, the research said.

Moilanen noted that some children may need more oversight than others, and those situations vary from family-to-family and even from child-to-child within a family. Also, she said, “most kids turn out just fine and learn to ‘adult’ on their own.”

There’s no research yet that shows what kind of parents these “hothouse children” are or will be, Moilanen said.

“We do know that people tend to repeat the parenting that they receive, so I would say the chances are good that those children who were raised by helicopter parents would probably act in kind,” she said.

Credit: 
West Virginia University

Borderline personality disorder has strongest link to childhood trauma

People with Borderline Personality Disorder are 13 times more likely to report childhood trauma than people without any mental health problems, according to University of Manchester research.

The analysis of data from 42 international studies of over 5,000 people showed that 71.1% of people who were diagnosed with the serious health condition reported at least one traumatic childhood experience.

The study was carried out by researchers at The University of Manchester in collaboration with Greater Manchester Mental Health NHS Foundation Trust. It is published in the journal Acta Psychiatrica Scandinavica.

In the latest of a series of meta-analyses by the team on the effects of childhood trauma on adult mental health, they show that childhood trauma is much more likely to be associated with BPD than with mood disorders, psychosis or other personality disorders.

The most common form of adverse experience reported by people with BPD was physical neglect at 48.9%, followed by emotional abuse at 42.5%, physical abuse at 36.4%, sexual abuse at 32.1% and emotional neglect at 25.3%.

BPD is often a debilitating mental health problem that makes it hard for someone to control their emotions and impulses.

The disorder, often linked to self-harm and substance abuse, is hard to treat and associated with significant costs to sufferers and society as a whole.

Some of the characteristics of this condition - such as experiencing extreme, overwhelming emotions over what might be seen by others as a minor issue - are common, but they become chronic and exaggerated after childhood trauma.

Dr Filippo Varese, from The University of Manchester, said: "During childhood and adolescence, our brain is still undergoing considerable development and we are also refining strategies to deal with the challenges of everyday life, and the negative feelings that come with them.

"In some people who have experienced chronic, overwhelming stress in childhood, it is likely that these responses do not develop in the same way. People can become more sensitive to 'normal' stress. They are sometimes unable to deal with intense negative thoughts and feelings, and they might resort to dangerous or unhelpful measures to feel better, such as taking drugs or self-harming. This can lead to various mental health difficulties, including the problems commonly seen in people who receive a diagnosis of BPD.

"We found a strong link between childhood trauma and BPD, which is particularly large when emotional abuse and neglect was involved."

He added: "Borderline is a slightly misleading term - as it implies that this condition only has a mild impact. Far from that, BPD can be very distressing and difficult to treat.

"The term BPD was originally used to indicate mental health problems that were not a psychosis nor an anxiety or depressive disorder - but something in the middle. Another term used in modern times is 'emotionally unstable personality disorder', which perhaps gives a clearer picture of the kind of problems typically described by these people.

"We hope these findings underline the importance of trauma informed care for people accessing mental health services, where prevalence rates of BPD are high.

"But further research is needed to explore the complex factors also likely to be involved such as biology, experiences in later life, and psychological processes."

Credit: 
University of Manchester