Skin cancer mystery revealed in yin and yang protein

image: Scientists are using powerful supercomputers to uncover the mechanism that activates cell mutations found in about 50 percent of melanomas. Molecular dynamics simulations on TACC's Stampede2 supercomputer tested the stability of the structure of B-Raf:14-3-3 complex, which when mutated is linked to skin cancer. The study authors compare the B-Raf dimer to the Chinese yin-yang circular symbol of interconnected opposites joined at the tail.

Image: 
Karandur et al., TACC

It starts off small, just a skin blemish. The most common moles stay just that way -- harmless clusters of skin cells called melanocytes, which give us pigment. In rare cases, what begins as a mole can turn into melanoma, the most serious type of human skin cancer because it can spread throughout the body.

Scientists are using powerful supercomputers to uncover the mechanism that activates cell mutations found in about 50 percent of melanomas. The scientists say they're hopeful their study can help lead to a better understanding of skin cancer and to the design of better drugs.

In 2002, scientists found a link between skin cancer and mutations of B-Raf (Rapidly Accelerated Fibrosarcoma) kinase, a protein that's part of the signal chain that starts outside the cell and goes inside to direct cell growth. This signal pathway, called the Ras/Raf/Mek/Erk kinase pathway, is important for cancer research, which seeks to understand out-of-control cell growth. According to the study, about 50 percent of melanomas carry a specific single mutation in B-Raf, a substitution of glutamate for the valine at residue 600, known as V600E.

B-Raf V600E thus became an important drug target, and specific inhibitors of the mutant were developed in the following years. The drugs inhibited the mutant, but something strange happened: paradoxically, quieting the mutant activated the un-mutated, wild-type B-Raf protein kinases, which again triggered melanoma.

"With this background, we worked on studying the structure of this important protein, B-Raf," said Yasushi Kondo, a postdoctoral researcher in the John Kuriyan Lab at UC Berkeley. Kondo is the co-author of an October 2019 study in the journal Science that determined the structure of the complex of proteins that make up B-Raf and also found how the paradoxical B-Raf activation happens.

"We aimed to study the more native-like state of the protein to understand how it's regulated in the cells, because most of the studies have been focused on the isolated kinase domain and how the drugs bind to the kinase domain," Kondo said.

The full-length B-Raf protein is made of several domains linked by disordered regions, making it too unwieldy for scientists to image directly. Kondo's technique was to use intein chemistry to make smaller fragments, then stitch them together to obtain the full-length protein.

"As a result, we obtained an active form of the full-length B-Raf dimer co-purified with a 14-3-3 dimer, a scaffolding protein bound to the phosphorylated B-Raf C-terminal tail," Kondo said.

Kondo's group used cryo-electron microscopy (cryo-EM) to determine the structure of the B-Raf:14-3-3 complex, basically cryogenically freezing the protein complex, which kept it in a chemically-active, near-natural environment. Next they flashed it with electron beams to obtain thousands of 'freeze frames.' They sifted out background noise and reconstructed three-dimensional density maps that showed previously unknown details in the shape of the molecule. And for proteins, form follows function.

Kondo explained that the structure revealed an asymmetric organization of the complex, formed by two sets of internally symmetrical dimers, or pairs of bonded molecules. "We propose that this unexpected arrangement enables asymmetric activation of the B-Raf dimer, which is a mechanism that provides an explanation of the origin of the paradoxical activation of B-Raf by small molecule inhibitors," Kondo said.

Detailed analysis of the asymmetrical B-Raf:14-3-3 complex structure showed another unexpected structural feature, described as the distal tail segment, or DTS, of one B-Raf molecule. Kondo said the tail of one molecule is bound to the active site of the other, blocking its activity by competing with ATP binding. The blocked B-Raf molecule is stabilized in the active conformation. "We interpreted this structure to mean that the blocked B-Raf molecule functions as an activator and stabilizes the other B-Raf molecule, the receiver, through the dimer interface," Kondo said.

Curiously enough, the authors compare the B-Raf dimer to the Chinese yin-yang circular symbol of interconnected opposites joined at the tail. "From looking at the subject, it's very clear that one is not capable of phosphorylating the downstream molecule, which is necessary for cell growth. The other molecule is clearly the one to do the job. In this set of two molecules, we clearly see one is doing the supporting job, and the other one is doing the actual work. It really does look like Yin and Yang in this B-Raf 14-3-3 complex we solved," Kondo said.

Looks, though, can be deceiving. Scientists used computer simulations to help verify that they were really onto something. "We ran molecular dynamics simulations of this complex of the B-Raf dimer bound to a 14-3-3 dimer to test the stability of the asymmetric conformation," said study co-author Deepti Karandur, also a postdoctoral researcher at the John Kuriyan Lab of UC Berkeley; she's also a postdoctoral fellow at the Howard Hughes Medical Institute. "We didn't know why the conformation was asymmetric, or what role it played in maintaining the active state of the enzyme," Karandur said.

They started the simulations using the structure that Kondo had solved by cryo-EM, with the DTS segment running from one kinase into the active site of the other. Then they ran a second set of simulations with the DTS segment removed.

"What we found was that in the system without the distal tail segment, the entire complex is not stable," Karandur explained. "The kinase domains move with respect to the scaffolding, the 14-3-3 dimer. In one of our simulations, the dimer state of B-Raf itself, which experiments have shown is necessary to maintain the active state of this kinase, fell apart, indicating that this distal tail segment, DTS, is necessary to actually maintain this complex in this asymmetric conformation, which in turn is necessary to maintain the kinase dimer in the stable asymmetric dimer active state."

One of the main results of the study was finding the mechanism of action that switches on the B-Raf kinase complex of two B-Raf kinases and two 14-3-3 scaffolding proteins, where one B-Raf kinase is the activator and the other is the receiver.

"The tail of the receiver molecule is inside the active site of the activator, so the activator cannot work as an enzyme," Kondo said. "Instead, the activator molecule stabilizes the active conformation of the receiver molecule. The 14-3-3 scaffold protein facilitates this arrangement, so that the tail insertion only happens to one kinase molecule. We hypothesize that when there is no 14-3-3 binding, both kinases can be blocked by the insertion of the DTS, but this needs to be tested."

The study's computational challenges involved molecular dynamics simulations that modeled the protein at the atomic level, determining the forces of every atom on every other atom for a system of about 200,000 atoms at time steps of two femtoseconds.

"For small systems, we can see what's happening relatively quickly, but for large systems like these, especially large biomolecular systems, these changes happen on like nanosecond timescales, microsecond timescales, or even millisecond timescales," Karandur said.

Karandur and colleagues turned to XSEDE, the NSF-funded Extreme Science and Engineering Discovery Environment, for allocation time on the Stampede2 supercomputer at the Texas Advanced Computing Center (TACC) to do the simulations, as well as the Bridges system at the Pittsburgh Supercomputing Center to investigate other proteins in the pathway. Stampede2's Skylake processor nodes, networked with Intel Omni-Path, made quick work of the NAMD molecular dynamics simulations, a code optimized for supercomputers.

"Stampede2 runs very, very fast, and it's very efficient. We generated a total of about 1.5 microseconds of trajectories for our systems in about four to six weeks. Whereas, if we ran it on our own in-house cluster it would have taken us months or longer," Karandur said.
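Taken together, the figures quoted above give a sense of the scale. The tally below is a back-of-the-envelope sketch using only the numbers in this article; production codes like NAMD avoid the naive all-pairs force calculation through cutoffs and particle-mesh methods:

```python
# Rough scale of the simulation described above (figures from the article;
# the arithmetic itself is only illustrative).
n_atoms = 200_000      # atoms in the simulated B-Raf:14-3-3 system
dt_fs = 2              # femtoseconds per timestep
total_us = 1.5         # total trajectory length generated, in microseconds

steps = int(total_us * 1e9 / dt_fs)        # 1 microsecond = 1e9 femtoseconds
pairs = n_atoms * (n_atoms - 1) // 2       # naive all-pairs interactions per step

print(f"{steps:,} timesteps")              # 750,000,000 timesteps
print(f"{pairs:,} atom pairs per step")    # 19,999,900,000 atom pairs per step
```

Three quarters of a billion timesteps, each touching a system of 200,000 atoms, is why weeks on Stampede2 replaced months on an in-house cluster.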

About XSEDE, Karandur commented: "I think it's an amazing resource. I've been running simulations starting from when I was a graduate student. XSEDE made it possible for us to access timescales that are biologically relevant. Everything that happens in a cell happens on microsecond timescales, to millisecond timescales, to longer. When I was starting, we could not run this simulation on any system anywhere. I mean, it would have taken five years, or more. To be able to do it in weeks and say, okay, we now understand why this is important so we can now start to gain real understanding into how the biology happens, is just amazing."

And there remains a lot to be discovered about B-Raf. It's just one link in the signal chain that governs cell growth and cancer.

"The structure that was resolved in this paper is part of a large, multi-domain system," Karandur explained. "We don't know what this complete protein looks like. We don't see it in the structure. We don't know what its dynamics look like, and how all these other parts of the protein play a role in maintaining the active state, or converting it from the inactive state to the active state."

She added that as the system gets bigger, the pertinent structural changes happen over longer timescales, and bigger supercomputers are needed to handle the complexity, such as the NSF-funded Frontera supercomputer, also at TACC.

"Frontera is getting there. We're very excited about this. We are in the process of getting an allocation on Frontera," Karandur said.

For non-scientists, this fundamental research could yield insight leading to better drugs for skin cancer.

"The paradoxical activation of Raf kinase by these B-Raf-specific inhibitors turns normal cells into tumors during skin cancer treatment," Kondo said. "Understanding the mechanism of this phenomenon will allow us to design better drugs. Hopefully, our study can contribute to the understanding of this step. In addition, we found mutations in the link between the kinase domain and the 14-3-3 binding element of the B-Raf molecule, which was never shown before. These mutations reduce the activity of B-Raf in the cells, indicating that this part of the kinase domain can be a target point to develop new kinds of B-Raf inhibitors."

Said Karandur: "There's a lot of dynamics happening in the cell. We are, largely because of XSEDE, only starting to be able to look at things like that. Going forward, the only way we can continue to look at things is by using very, very large supercomputers, because the calculations require a lot of computational power. It's really exciting to be able to actually see these things happen and to say, here is how things change at the atomic level; here is how interactions between two atoms form or break, and how that translates into a huge change at the global level in the overall structure of the protein, and in how it interacts with other proteins, or other molecules in the cell. We're very excited about where it will go in the future."

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

Large study links sustained weight loss to reduced breast cancer risk

A large new study finds that women who lost weight after age 50 and kept it off had a lower risk of breast cancer than women whose weight remained stable, helping answer a vexing question in cancer prevention. The reduction in risk increased with the amount of weight lost and was specific to women not using postmenopausal hormones. The study appears in JNCI.

In the United States, more than two in three adult women are overweight or obese. And while high body mass index (BMI) is an established risk factor for postmenopausal breast cancer, there has not been adequate evidence to determine if that risk is reversible by losing excess weight.

To learn more, investigators from the American Cancer Society, Harvard T.H. Chan School of Public Health, and others used the Pooling Project of Prospective Studies of Diet and Cancer (DCPP) to estimate the association of sustained weight loss in middle or later adulthood on subsequent breast cancer risk. Their analysis included more than 180,000 women aged 50 and older from ten prospective studies. The new analysis is the first with a large enough sample size to examine the important question of whether sustained weight loss can impact breast cancer risk with statistical precision. Weight was assessed three times over approximately 10 years: at study enrollment; after about five years; then again about four years later.

The results showed women with sustained weight loss had a lower risk of breast cancer than women whose weight remained stable, and the larger the amount of sustained weight loss, the lower the risk of breast cancer. Women who lost 2 to 4.5 kg (about 4.4 to 10 lbs.) had a 13% lower risk (HR=0.87, 95% CI: 0.77-0.99) than women with stable weight. Women who lost 4.5 to 9 kg (10 to 20 lbs.) had a 16% lower risk (HR=0.84, 95% CI: 0.73-0.96). Women who lost 9 kg or more (20+ lbs.) had a 26% lower risk (HR=0.74, 95% CI: 0.58-0.94).
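The percent figures above follow directly from the hazard ratios: an HR below 1 corresponds to a (1 − HR) × 100% lower risk. A quick check of the quoted numbers (a sketch, not the study's analysis code):

```python
# Percent risk reduction implied by each hazard ratio quoted above.
def pct_reduction(hazard_ratio):
    return round((1 - hazard_ratio) * 100)

for weight_lost, hr in [("2-4.5 kg", 0.87), ("4.5-9 kg", 0.84), ("9+ kg", 0.74)]:
    print(weight_lost, f"-> {pct_reduction(hr)}% lower risk")
# 2-4.5 kg -> 13% lower risk
# 4.5-9 kg -> 16% lower risk
# 9+ kg -> 26% lower risk
```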

In addition, women who lost 9 kg or more and gained some (but not all) of the weight back had a lower risk of breast cancer compared with those whose weight remained stable (HR=0.77, 95% CI: 0.62-0.97).

"Our results suggest that even a modest amount of sustained weight loss is associated with lower breast cancer risk for women over 50," said Lauren Teras, PhD, lead author of the study. "These findings may be a strong motivator for the two-thirds of American women who are overweight to lose some of that weight. Even if you gain weight after age 50, it is not too late to lower your risk of breast cancer."

Credit: 
American Cancer Society

Fine-tuning thermoelectric materials for cheaper renewable energy

Researchers from Queen Mary University of London have developed new thermoelectric materials, which could provide a low-cost option for converting heat energy into electricity.

Materials known as halide perovskites have been proposed as affordable alternatives to existing thermoelectric materials; however, research into their suitability for thermoelectric applications has so far been limited.

In this study, published in Nature Communications, scientists conducted a series of experiments on thin films of the halide perovskite, caesium tin iodide, to test its ability to create electrical current from heat. The researchers found they were able to improve the material's thermoelectric properties through a combination of methods, which involved partial oxidation and the introduction of additional elements into the material.

Dr Oliver Fenwick, lead Royal Society University Research Fellow and Lecturer in Materials Science at Queen Mary University of London, said: "For many years halide perovskites have been suggested as promising thermoelectric materials. But whilst simulations have suggested good thermoelectric properties, real experimental data hasn't met these expectations.

"In this study, we successfully used 'doping' techniques, where we intentionally introduce impurities into the material, to tweak and improve the thermoelectric properties of caesium tin iodide, opening up options for its use in thermoelectric applications."

Thermoelectric materials use temperature differences to generate electrical energy. They have been suggested as a promising sustainable approach to both energy production and recycling, as they can be used to convert waste heat into useful electricity. However, current widely-used thermoelectric materials are costly to produce and process, which has limited the uptake of this greener technology.
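For context, thermoelectric performance is conventionally summarized by the dimensionless figure of merit ZT = S²σT/κ, combining the Seebeck coefficient S, electrical conductivity σ, absolute temperature T, and thermal conductivity κ. The article does not quote ZT values, so the numbers in the sketch below are purely illustrative:

```python
def figure_of_merit(seebeck, sigma, kappa, temperature):
    """ZT = S^2 * sigma * T / kappa; dimensionless when SI units are used."""
    return seebeck**2 * sigma * temperature / kappa

# Illustrative values only (not measurements from the study):
# S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.2 W/(m*K), T = 300 K
zt = figure_of_merit(200e-6, 1e5, 1.2, 300)
print(round(zt, 2))  # 1.0
```

Doping of the kind described in the study is one lever on σ and S in this expression, which is why it can shift a material's thermoelectric performance.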

Dr Fenwick, said: "With the heightened global awareness of climate change and realisation that a number of renewable energy solutions will be needed to meet our energy demands, thermoelectric generators are now at the centre stage in today's "green technology" debate.

"The thermoelectric materials we currently have are expensive, and some even contain toxic components. One of the largest growth areas for thermoelectric technology is for domestic, commercial or wearable applications, so there's a need to find cheaper, non-toxic materials that can also operate well at low temperatures, for these applications to be fully realised. Our research suggests the halide perovskites could, with some fine-tuning, fill this void."

Credit: 
Queen Mary University of London

Agricultural parasite avoids evolutionary arms race, shuts down genes of host plants

image: Dodder can parasitize a variety of plant species, including some of agricultural importance, like tomatoes. In addition to reducing yield, its dense vine-like structure can interfere with harvesting machinery.

Image: 
Claude dePamphilis, Penn State

A parasitic plant has found a way to circumvent an evolutionary arms race with the host plants from which it steals nutrients, allowing the parasite to thrive on a variety of agriculturally important plants. The parasite dodder, an agricultural pest found on every continent, sends genetic material into its host to shut down host defense genes.

According to a new study by researchers at Penn State, dodder targets host genes that are evolutionarily conserved and sends many slightly different versions of its genetic weaponry to ensure effectiveness. This strategy, described in a paper appearing online in the journal eLife on December 17, 2019, restricts the host's ability to respond.

Instead of making its own energy through photosynthesis, dodder wraps itself around a host plant, using special structures to siphon off water and nutrients. Dodder can parasitize a variety of species, including some of agricultural importance like tomatoes, and its dense vine-like structure can interfere with harvesting machinery. The research team, led by Penn State Professor of Biology Michael Axtell, previously determined that dodder sends microRNAs--short segments of nucleic acids whose sequence matches a segment of a host gene--into its host. These microRNAs bind to the host's protein-coding messenger RNAs, preventing host proteins from being made.

"If this process were detrimental to the host plant, we would expect the targeted host genes to change over time, due to natural selection or even due to chance," said Axtell. "This kind of process often leads to what we call an evolutionary arms race, where host and parasite alternate changing the sequence of their genes slightly in order to up the ante. We wanted to know if this was actually the case."

The research team identified microRNAs implicated in this cross-species gene regulation within four different species of dodder. Surprisingly, the microRNAs often differed from species to species, and even from plant to plant. The team grouped microRNAs that share some sequence similarity into about 18 "superfamilies" of three to five microRNAs each.

The researchers then investigated the targets of these superfamilies across a range of host species, and found that targeted genes are highly conserved, meaning that they are generally very similar between species and do not change much over time. This is often the case in genes that code for important proteins, because any evolutionary changes to these genes could disrupt their function.

"The targeted amino acids are the most conserved amino acids within the protein chain," said Nathan Johnson, graduate student in plant biology at Penn State and first author of the paper. "So we assume that sequence can't change due to natural selection, or else the protein breaks. Because the host can't change its sequence without a negative effect on its own function, the parasite completely avoids an arms race on the genetic level."

The researchers found that, where there was variation within a microRNA superfamily, it matched up perfectly with variation in the host's target genes. Amino acids within a protein are coded by a set of three nucleic acids, the third of which can often be changed without affecting the resulting amino acid. Where variations were seen in the host sequence--and the corresponding microRNAs--they generally occurred in this third position.
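The third-position degeneracy described above can be illustrated with a toy snippet of the standard genetic code (the codon assignments are real; the snippet itself is only an illustration, not the authors' analysis):

```python
# Third-base ("wobble") degeneracy in the standard genetic code. All codons in
# each row encode the same amino acid; only the third base differs, so a change
# there leaves the protein intact -- the variation the microRNA superfamilies track.
CODON_TABLE = {
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # glycine
    "CCU": "Pro", "CCC": "Pro", "CCA": "Pro", "CCG": "Pro",  # proline
}

def synonymous_third_base(codon_a, codon_b):
    """True if two codons differ only in the third base yet encode the same residue."""
    return codon_a[:2] == codon_b[:2] and CODON_TABLE[codon_a] == CODON_TABLE[codon_b]

print(synonymous_third_base("GGU", "GGA"))  # True: both encode glycine
```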

"It seems that dodder creates several iterations of its microRNA in order to account for the natural variation within the host's targeted genes," said Johnson. "This shotgun strategy likely also helps the parasite be successful against a wide variety of host species."

Next the researchers hope to explore the evolutionary origins of these microRNAs, as well as the cellular and molecular mechanisms of their delivery from parasite to host.

"The microRNAs in these superfamilies have undergone natural selection to target these conserved sites," said Axtell. "We're looking at the knives that are already sharpened, but what are their origins? There have been studies of cross-species gene regulation by small RNAs in the past, but this is the first evidence that these processes have been subject to natural selection."

Credit: 
Penn State

Researchers uncover genetic mystery of infertility in fruit flies

(Boston)--Researchers have discovered a novel parasitic gene in fruit flies that is responsible for destroying the eggs in the ovaries of their daughters.

Just like fruit fly genomes, human genomes are filled with mobile parasitic genes called transposons, and, like fruit flies, humans use small RNA molecules to silence these genetic parasites so that they can generate proper germ cells for reproduction.
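The silencing mentioned above rests on base-pairing: a small RNA recognizes a transposon transcript through sequence complementarity. A minimal sketch follows; the sequences are invented for illustration, not taken from the study, and real pathways (such as piRNA silencing) involve much more machinery:

```python
# Toy illustration of small-RNA silencing by complementarity. A small RNA
# "matches" a transcript if its reverse complement appears in the transcript.
COMPLEMENT = str.maketrans("AUGC", "UACG")

def targets(small_rna, transcript):
    """True if the reverse complement of the small RNA occurs in the transcript."""
    return small_rna.translate(COMPLEMENT)[::-1] in transcript

transposon_mrna = "GGGAUCCAUGGCAUUCCGAAUUG"   # hypothetical transposon transcript
small_rna = "CGGAAUGCCAU"                     # hypothetical silencing small RNA

print(targets(small_rna, transposon_mrna))  # True
```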

The researchers focused on one parent fly that originated from Harwich, Mass., with the mobile parasitic gene called the P-element. They then generated hybrid offspring between the Harwich fly and a "clean" fly called ISO1 to determine which offspring still caused the infertility syndrome in their daughters and which did not.

They then compared the genomes of these two different hybrids and found that Harwich fathers and the sons that still cause infertility in their daughters all had a special, hyper-mobile version of the P-element that they named the Har-P. "Our discovery of the Har-P element showed that it moves around so extensively in fly germ cells that it causes catastrophic ovary collapse," explained corresponding author Nelson Lau, PhD, associate professor of biochemistry at Boston University School of Medicine (BUSM).

According to the researchers, human infertility from the incompatibility of two different genomes from the mother and father could be modeled by the infertility syndrome of the Harwich fly fathers mating with ISO1 mothers to cause all their daughters to be infertile. "More than 45 percent of the human genome is made up of remnants of transposons and most of them are properly silenced, but there are still a few active transposons that can move each time a new human is conceived, changing our genomes in a way that is completely different from the general mixing of our fathers' and mothers' genes during the process of meiosis, when sperm and egg are generated."

By studying the simpler system of fruit flies, where genetic manipulations are easier, the researchers hope to achieve a better understanding of how human genomes are shaped by the multitude of transposons lurking in our genomes and the small RNA molecules we depend upon to keep the transposons in check. They also hope to harness the hyper-mobile Har-P element to turn it into a new tool for genetically marking animal cells for developmental biology studies.

Credit: 
Boston University School of Medicine

How bovine creatures cow-moonicate through their lives

image: PhD student Alexandra Green recording Holstein-Friesian cows at Mayfarm, Camden Campus, The University of Sydney, Australia.

Image: 
Lynne Gardner/University of Sydney

Farmers might finally be able to answer the question: How now brown cow?

Research at the University of Sydney has shown that cows maintain individual voices in a variety of emotional situations.

Cows 'talk' to one another and retain individual identity through their lowing.

Studying a herd of 18 Holstein-Friesian heifers over five months, PhD student Alexandra Green from the School of Life and Environmental Sciences determined that the cows gave individual voice cues in a variety of positive and negative situations. This helps them to maintain contact with the herd and express excitement, arousal, engagement or distress.

The study recorded 333 samples of cow vocalisations and analysed them using acoustic analyses programs with assistance from colleagues in France and Italy. The paper was published this month in Scientific Reports.

The conclusion of the research is that farmers should integrate knowledge of individual cow voices into their daily farming practices.

"We found that cattle vocal individuality is relatively stable across different emotionally loaded farming contexts," Ms Green said.

Positive contexts were during oestrus and anticipation of feeding. Negative contexts were when cows were denied feed access and during physical and visual isolation from the rest of the herd.

"We hope that through gaining knowledge of these vocalisations, farmers will be able to tune into the emotional state of their cattle, improving animal welfare," Ms Green said.

She said that by understanding these vocal characteristics, farmers will be able to recognise individual animals in the herd that might require individual attention.

"Ali's research is truly inspired. It is like she is building a Google translate for cows," said Associate Professor Cameron Clark, Ms Green's academic supervisor.

It was previously known that cattle mothers and offspring could communicate by maintaining individuality in their lowing. Ms Green's research confirms that cows maintain this individual voicing through their lives and across a herd.

"Cows are gregarious, social animals. In one sense it isn't surprising they assert their individual identity throughout their life and not just during mother-calf imprinting," Ms Green said. "But this is the first time we have been able to analyse voice to have conclusive evidence of this trait."

Ms Green travelled to Saint-Etienne, France, to work with some of the best bioacousticians in the world, including co-authors Professor David Reby and Dr Livio Favaro, to analyse the vocal traits of the cattle.

The study will be incorporated into her doctorate, which investigates cattle vocal communication and use in welfare assessment on dairy farms.

Credit: 
University of Sydney

There is no 'I' in team -- or is there?

There is no 'I' in team, as the saying goes. But new research suggests it is important for individuals to feel personal ownership of a team project in order to be more creative.

The study, led by Dr Ieva Martinaityte of the University of East Anglia (UEA)'s Norwich Business School, suggests that this also drives each team member to invest more time and effort into the project.

At the same time though, managers should be aware that individual ownership minimizes collective effort. That is, teams with high levels of individual ownership are less collectively engaged, which in turn decreases team creativity.

Published today in the Journal of Occupational and Organizational Psychology, the study also involved Prof Kerrie Unsworth at the University of Leeds and Dr Claudia Sacramento from Aston University.

The researchers investigated two types of psychological ownership - personal ('This is my project') and collective ('This is our project') and how these influence individual and team behaviour in a project that required creative output.

The results show for the first time that although collective psychological ownership has positive effects on engagement and subsequently on creativity, for both individuals and teams, personal psychological ownership drives individual engagement and creativity, but has the opposite effect on team outcomes.

Dr Martinaityte, a lecturer in business and management, said: "The human urge to possess can be a powerful motivation to enhance employee engagement and creativity.

"Managers should invest time in making each team member feel like a project owner to maximize individual outputs, but equally focus on teams developing a feeling of collective ownership, 'our project' rather than 'my project', if they expect higher team dedication and more creative project outcomes. Without team members experiencing collective ownership, there is a risk that team performance will be lost.

"For employees, it is about being aware of psychological ownership as a powerful driver to engage and perform in the team project. If they are not willing to put effort into the project, perhaps they should consider whether they feel they own it."

Kerrie Unsworth, professor of organisational behaviour at Leeds University Business School, added: "It may sound trite, but a team is more than just a collection of individuals. When team members only think of themselves as individually owning the project without collective ownership, then creativity drops. There has to be an 'us' as well as an 'I' in a successful team."

The study analysed data from 39 teams and 186 individuals - including team members and project managers - working at international organisations based in the United States, United Kingdom, Lithuania, and China.

Examples of projects they worked on included developing mobile software, creating and implementing a building design and launching an event.

In an initial questionnaire team members reported their personal psychological ownership and collective psychological ownership towards the specific project. In a second questionnaire three weeks later they reported their levels of individual engagement in the project and their own creativity. At the same time, project managers rated the team's engagement in the project. Finally, three weeks later managers reported team creativity.

Credit: 
University of East Anglia

Special issue of Educational Researcher examines the nature and consequences of null findings

WASHINGTON, D.C., December 17, 2019--A newly released special issue of Educational Researcher, titled "Randomized Controlled Trials Meet the Real World: The Nature and Consequences of Null Findings," focuses on important questions raised by the prevalence of null findings--the absence of expected or measurable results--particularly in randomized controlled trials. In the issue, leading researchers address what it means when an evaluation produces null findings, why null findings are so prevalent, and how they can be used to advance knowledge. Educational Researcher is a peer-reviewed journal of the American Educational Research Association.

In their introduction, special issue editors Carolyn D. Herrington (Florida State University) and Rebecca Maynard (University of Pennsylvania) write that the growing emphasis on evaluating the effectiveness of education programs, policies, and practices, along with the expanded use of randomized controlled trials for those evaluations, "has contributed to growing angst among some in the research, policy, and funder communities that so many experimental evaluations are producing null findings."

Herrington and Maynard note that "much of this angst arises from confusion over the meaning of a null finding," especially since "commonly, a null finding is interpreted as evidence that the tested strategy did not work or the study design was flawed." However, as examined in the special issue, null results are actually an expected and valuable product of evaluation research.

"This special issue goes a long way in clarifying how to understand and interpret null results, especially in the context of education interventions," said AERA Executive Director Felice J. Levine. "AERA is encouraging our journal editors to publish studies that have null results and to explore other ways to encourage researchers to share the knowledge that comes from them. This ultimately will enhance our collective understanding of what programs work, why they do, and how they can be implemented elsewhere to improve education outcomes."

In addition to the editors' introduction, the special issue--which is provided open access--includes the following research articles and commentaries.

"A Framework for Learning from Null Results," Robin T. Jacob (University of Michigan), Fred Doolittle (MDRC), James Kemple (New York University), and Marie-Andree Somers (MDRC)

In this article, the authors propose a framework for defining null results and interpreting them. They also propose a method for systematically examining a set of potential reasons for a study's null findings that would provide a more nuanced and useful way to understand them. The authors also argue that if studies were designed in preparation for weak, null, or even negative findings, they would be better situated to add useful information to the field.

"Using Implementation Fidelity to Aid in Interpreting Program Impacts: A Brief Review," Heather C. Hill (Harvard University) and Anna Erickson (University of Michigan)

Hill and Erickson examine the relationship between poor program implementation (low "fidelity") and null results in trials of educational interventions. As expected, better implementation fidelity correlates with better program outcomes; they also find that the presence of new curriculum materials positively predicts fidelity. However, their results also suggest that the quality of program implementation is a partial but not complete explanation for null results.

"Making Every Study Count: Learning from Replication Failure to Improve Intervention Research," James S. Kim (Harvard University)

Kim draws on case studies of a particular intervention whose impact findings varied across studies--null in one and confirmatory in another--to illustrate how such findings can be used to understand the role of context in determining impact. He advocates for greater attention to context by evaluators in their designs and reporting, and for researchers to see replication failure as an opportunity to improve intervention research and explore new research questions.

"Commentary on the Null Results Special Issue," Carolyn J. Hill (MDRC)

In her commentary on the research articles in the special issue, Hill notes that null effects remain relatively under-examined, yet often contain important knowledge. She writes, "I share the authors' general view that the presence of null results is not in itself reason for despair." Instead of downplaying null results, "We have an obligation to anticipate their occurrence, interrogate their presence, and support continued, healthy attention to them."

"Expecting and Learning From Null Results," Jeffrey Valentine (University of Louisville)

In his commentary, Valentine argues that (1) it is critical that conversations about replication efforts begin with an agreed-upon definition of what it means to say that a study did or did not replicate the results of another study; (2) if a replication failure has been identified, using the surface similarity of the studies to reverse-engineer an explanation is unlikely to help; and (3) researchers and consumers should expect small and differing effects, and this fact points to the need to think across broad bodies of research evidence.

Credit: 
American Educational Research Association

Standard of care chemoradiation for Stage III NSCLC is superior to two tested alternatives

PHILADELPHIA, PA - Lung cancer is the leading cause of cancer death in the United States and approximately 75-80% of all cases are non-small cell lung cancer (NSCLC). Of these, 30-40% are considered locally advanced and are categorized as either Stage IIIA or IIIB. The currently accepted standard of care for patients with locally-advanced NSCLC is radiation plus chemotherapy, which is known as chemoradiation. In recent years, most research has focused on which chemotherapy drugs to use in chemoradiation, and how to properly integrate them with the radiation component of therapy. Less attention has been given to optimizing the radiation therapy component. Indeed, the nationally accepted standard radiation prescription dose has remained at the same level (60-63 Gy) for more than 30 years.

In light of this, a research team led by Jeffrey D. Bradley, MD, of the Department of Radiation Oncology at Winship Cancer Institute of Emory University, set out to test whether higher doses of radiation would kill more cancer cells, and thus result in better patient survival. The researchers also explored whether a benefit would be gained by adding the drug cetuximab, which acts on a validated therapeutic target in NSCLC, to the chemoradiation regimen, as a previous study indicated that the drug could extend survival in certain patients with NSCLC. To their surprise, the researchers found that neither of the tested alternatives - increased doses of radiation or the addition of cetuximab - was superior to standard of care chemoradiation.

The study, "Long-Term Results of NRG Oncology RTOG 0617: Comparing Standard Versus High Dose Chemoradiotherapy +/- Cetuximab for Unresectable Stage III Non-Small Cell Lung Cancer," published in the Journal of Clinical Oncology, compared standard-dose (SD) (60 Gy) versus high-dose (HD) (74 Gy) radiation with concurrent chemotherapy and determined the efficacy of cetuximab for Stage III NSCLC. This 2x2 factorial design, with radiation dose as one factor and cetuximab as the other, had overall survival as the primary endpoint.
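In a 2x2 factorial design, the two factors are crossed so that every combination defines a study arm - four in total. A minimal sketch of the layout (the arm numbering is illustrative, not taken from the trial protocol):

```python
# Enumerate the four arms produced by crossing two two-level factors:
# radiation dose and cetuximab assignment.
from itertools import product

doses = ["standard dose (60 Gy)", "high dose (74 Gy)"]
cetuximab = ["without cetuximab", "with cetuximab"]

arms = list(product(doses, cetuximab))
for i, (dose, drug) in enumerate(arms, start=1):
    print(f"Arm {i}: {dose}, {drug}")
```

This crossing is what lets a single trial estimate the effect of each factor, and of their combination, on overall survival.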

In an analysis of the study's 496 patients, the 5-year overall survival estimate for the standard radiation dose arm of the study, with or without cetuximab delivery, was 32.1%. "This is amongst the highest overall survival results of any phase III trial for patients with Stage III NSCLC," wrote the study's lead author Dr. Bradley, along with his colleagues from NRG Oncology. "These results argue strongly that the current standard of care radiation dose should be 60 Gy given in 2 Gy daily fractions to a target volume directed at tumor plus margin based on CT and PET/CT, excluding elective nodal irradiation." The authors also note that "the use of cetuximab confers no survival benefit at the expense of increased toxicity" and that a prior indication of a benefit to adding cetuximab to a chemoradiation regimen in NSCLC "is no longer apparent."

Credit: 
NRG Oncology

New way to make biomedical devices from silk yields better products with tunable qualities

image: Raw product in the form of silk powder can be easily stored, transported, and molded into various forms with superior properties to many other materials used in medical implants.

Image: 
Chunmei Li & David Kaplan, Tufts University

MEDFORD/SOMERVILLE, Mass. (December 16, 2019) -- Researchers led by engineers at Tufts University have developed a novel, significantly more efficient fabrication method for silk that allows them to heat and mold the material into solid forms for a wide range of applications, including medical devices. The end products have superior strength compared to other materials, have physical properties that can be "tuned" for specific needs, and can be functionally modified with bioactive molecules, such as antibiotics and enzymes. The thermal molding of silk, described in Nature Materials, overcomes several hurdles and gives silk the manufacturing flexibility common to many plastics.

"We and others have explored the development of many silk-based devices over the years using solution-based manufacturing," said David Kaplan, Stern Family Professor of Engineering at the Tufts University School of Engineering and corresponding author of the study. "But this new solid-state manufacturing approach can significantly cut the time and cost of producing many of them and offer even greater flexibility in their form and properties. Further, this new approach avoids the complications with solution-based supply chains for the silk protein, which should facilitate scale up in manufacturing."

Silk is a natural protein-based biopolymer that has long been recognized for its superior mechanical properties in fiber and textile form, producing durable fabrics and used in clinical sutures for thousands of years. Over the past 65 years, scientists have devised ways to break down the fibers and reconstitute the silk protein, called fibroin, into gels, films, sponges and other materials for applications that range from electronics to orthopedic screws, and devices for drug delivery, tissue engineering, and regenerative medicine. However, breaking down and reconstituting fibroin requires a number of complex steps. Additionally, the instability of the protein in aqueous soluble form sets limits on storage and supply chain requirements, which in turn impacts the range and properties of materials that can be created.

The researchers reported that they have overcome these limitations by developing a method for solid-state thermal processing of silk, which allows the protein polymer to be molded directly into bulk parts and devices with tunable properties. The new method - similar to a common practice in plastics manufacturing - involves the fabrication of nanostructured 'pellets' with diameters from 30 nanometers to 1 micrometer, produced by freeze-drying an aqueous silk fibroin solution. The nanopellets are then heated to between 97 and 145 degrees Celsius under pressure, at which point they begin to fuse. The pleated structure of the silk protein chains becomes more amorphous, and the fused pellets form bulk materials that are not only stronger than solution-derived silk materials but also superior to many natural materials, such as wood, and to synthetic plastics, according to the researchers. The pellets are an excellent starting material because they are stable over long periods and can be shipped to manufacturing sites without bulk water, resulting in significant savings in time and cost.

The properties of the heat molded silk, such as flexibility, tensile and compression strength, can be tuned to specific ranges by altering the conditions in the molding process, such as temperature and pressure, while the bulk materials can be further machined into devices, such as bone screws and ear tubes, or imprinted with patterns during or after the initial molding. Adding molecules such as enzymes, antibiotics or other chemical dopants allows for the modification of the bulk materials into functional composites.

To demonstrate applications, the researchers tested the bone screws developed with solid state molding in vivo and found they showed biocompatibility as implanted devices, where they supported the formation of new bone structure on the screw surfaces without inflammation. The silk screws were also able to resorb as they were being replaced by bone tissue. Resorption rate can be tuned by preparing screws at different temperatures, ranging from 97 degrees to 145 degrees Celsius, which alters the crystallinity of the bulk material, and therefore its ability to absorb water.

The researchers also manufactured ear tubes -- devices used to help drain infected ear canals -- doped with a protease, which breaks down the silk polymer to accelerate degradation as needed after the tube has served its function.

"The thermal molding process is made possible because the amorphous silk has a well-defined melting point at 97 degrees Celsius, which earlier solution-based preparations did not exhibit," said Chengchen Guo, post-doctoral scholar in the Kaplan lab and co-first author of the study. "That gives us a lot of control over the structural and mechanical properties of what we make." Chunmei Li, Tufts research assistant professor who teamed up with Guo as first author, added that "the starting material - the nanopellets - are also very stable and can be stored over long periods. These are significant advances that can improve the application and scalability of silk product manufacturing."

Credit: 
Tufts University

Plastic biosensor finds sweet success

image: This is a schematic of the all-polymer biofuel cell, which draws energy from the glucose naturally present in saliva.

Image: 
© 2019 KAUST; Heno Hwang

An electronic biosensor powered using the glucose in bodily fluids has been developed by KAUST researchers. The device pairs an electron-transporting polymer with an enzyme that extracts electrons from its reaction with glucose to drive its circuitry. The plastic biosensor could act as a continuous monitor of key health indicators, such as blood sugar levels in diabetes patients.

"Quick, accurate and early detection of abnormalities in metabolism is of paramount importance to monitor, control and prevent many diseases, including diabetes," says David Ohayon, a Ph.D. student in Sahika Inal's lab who led the research with postdoctoral colleague Georgios Nikiforidis. "Today's glucose monitors are mainly limited to finger-pricking devices, which are often painful," he says. Implantable glucose-sensing devices are being developed, but their batteries complicate implantation and must eventually be recharged or replaced.

An ideal alternative technology would be implantable polymer biosensors that are able to power themselves using molecules around them.

Inal and her team have hit upon a polymer--synthesized by Iain McCulloch's team at KAUST--that appears perfectly suited to the task. "The polymer is an n-type semiconductor, meaning that it can accept and transport electrons along its backbone," Ohayon says. The polymer is coupled with the glucose oxidase enzyme, which oxidatively extracts electrons from its reaction with glucose.

Usually, a third component is required to shuttle the electrons from enzyme to polymer. "These mediators are often toxic and need to be immobilized onto the electrode surface, which complicates device miniaturization and shortens lifetime," Ohayon says.

The new polymer needs no such mediator. "Our polymer seems to be able to host the enzyme in such proximity that it enables efficient electrical communication between the active center and the polymer backbone." The polymer's ethylene glycol side chains are probably the key to the interaction, a hypothesis currently under investigation in collaboration with Enzo Di Fabrizio's group at KAUST.

The team used this n-type polymer material in a transistor to sense glucose levels in saliva and also as one half of an all-polymer fuel cell that uses glucose as an energy source to drive the device. "This fuel cell is the first demonstration of a completely plastic, enzyme-based electrocatalytic energy generation device operating in physiologically relevant media," Inal says.

"Glucose sensing and power generation are only two examples of the applications possible when a synthetic polymer communicates effectively with a catalytic enzyme like glucose oxidase," Inal adds. "Our main aim was to show the versatile chemistry and novel applications of this special class of water-stable polymers, which exhibits mixed conduction (ionic and electronic)."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Heat or eat? How one energy conservation strategy may hurt vulnerable populations

COLUMBUS, Ohio - Any economic and conservation benefits associated with time-of-use electricity billing could be achieved at the expense of some of the most vulnerable citizens in our society: people with disabilities and the elderly, new research suggests.

Under a time-of-use system, energy prices are higher during high-demand "on-peak" times, a practice intended in part to create incentive for people to reduce their electricity use when it's more expensive.
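A toy bill calculation makes the incentive - and the penalty for households that cannot shift their usage - concrete. The rates and usage figures below are invented for illustration; actual tariffs vary by utility.

```python
# Compare a flat electricity rate with a time-of-use (TOU) rate for a
# household whose usage is split between off-peak and on-peak hours.
FLAT_RATE = 0.13      # $/kWh at all hours (illustrative)
TOU_OFF_PEAK = 0.09   # $/kWh outside peak hours (illustrative)
TOU_ON_PEAK = 0.30    # $/kWh during peak hours (illustrative)

def bill(off_peak_kwh, on_peak_kwh, flat=False):
    """Monthly bill in dollars under the flat or the TOU tariff."""
    if flat:
        return FLAT_RATE * (off_peak_kwh + on_peak_kwh)
    return TOU_OFF_PEAK * off_peak_kwh + TOU_ON_PEAK * on_peak_kwh

# A household that must run equipment during peak hours (for example,
# medically necessary devices) cannot avoid the on-peak price.
print(f"Flat rate: ${bill(500, 300, flat=True):.2f}")
print(f"TOU rate:  ${bill(500, 300):.2f}")
```

Here the same 800 kWh costs more under the TOU tariff because 300 kWh falls in the peak window; a household able to shift that usage off-peak would instead save money.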

The study showed that two vulnerable populations - people with disabilities, who may be using life-saving equipment, and elderly people, who are more sensitive to temperature changes - saw the largest increases in their bills on the time-of-use rates.

Time-of-use rates also were linked to worse health outcomes in households occupied by ethnic minorities and people with disabilities compared to their non-vulnerable counterparts.

"For people with disabilities in particular, there may be a forced choice. Either you're using your medically necessary equipment, which can obviously be critical for maintaining health, or you're saving money. You don't get to do both," said Nicole Sintov, senior author of the study and an assistant professor of behavior, decision making and sustainability at The Ohio State University. "It's a bad choice for people to have to make."

The findings suggest that time-of-use electricity rates should be adopted on a large scale only after they're tested and designed to ensure they don't increase hardship for the most vulnerable energy users, Sintov said.

The study is published today (Dec. 16) in the journal Nature Energy.

Time-of-use billing rates are becoming more and more common as utilities try to shift residential energy use to times of day when demand on the power grid is lower or when the utilities can incorporate renewable sources, such as solar or wind, into the power supply - or both.

The policies themselves, as potential mitigators of climate change, have merit, said Sintov, a faculty member in Ohio State's School of Environment and Natural Resources. But she said this research suggests time-of-use rates come with consequences for people already more likely to experience injustices identified in previous research: people with disabilities and the elderly, who tend to be disregarded by decision-makers, and residents in low-income households who may experience energy poverty.

"Households suffering from energy poverty are forced to make trade-offs between paying for electricity bills versus other necessities, such as food and medicine," Sintov and first author Lee White, a former postdoctoral researcher at Ohio State now with Australian National University, wrote in the paper. "Time-of-use and other forms of demand-side response measures may worsen this trade-off pressure, often termed 'the heat or eat dilemma.'"

Sintov and White obtained data from a utility that was surveying participants in a time-of-use rate pilot program implemented during the summer in a hot climate in the southwestern United States. Participants were randomly assigned to either one of two time-of-use rate plans with different peak times and varying on-peak rates or to remain on the existing flat rate.

Based on demographic data the utility collected, the Ohio State researchers created six vulnerability indicators to which participants were assigned as appropriate: low-income, elderly, young children, people with disabilities, and residents identifying as Hispanic or African American. The final sample for the analysis comprised 7,487 respondents.

Sintov and White then applied statistical and mathematical modeling to the data to determine whether and how the time-of-use rates affected costs and health outcomes for residents from vulnerable and non-vulnerable populations.

Both time-of-use rates resulted in bill increases for all participants. But the bill increases from baseline to pilot year were higher for people with disabilities and the elderly than for their non-vulnerable counterparts.

In effect, these groups were penalized for a lack of flexibility in electricity use that is beyond their control, Sintov said, noting the affected households were less likely than others to reduce on-peak energy use and curtail use of air conditioning.

The analysis also showed that Hispanic households and people with disabilities experienced worse health outcomes on time-of-use rates, a finding based on these groups' more frequent reports that they sought medical attention for heat-related conditions.

In some cases, specific vulnerable populations on the time-of-use rates fared better than non-vulnerable participants: Low-income and Hispanic households had lower bill increases compared to non-vulnerable counterparts, and households with young children experienced better health outcomes.

These effects on vulnerable populations aren't just theoretical. A 2015 California Public Utilities Commission ruling ordered the state's investor-owned utilities to establish default time-of-use rates for residential customers beginning earlier this year. All affected residential customers are expected to default to time-of-use rates by October 2020.

"There are also utilities that already have time-of-use rates as a default. It's possible to opt out, but people tend to stick with the default," Sintov said. "If you do an overall evaluation of the effects of these changes, which many utilities are doing, you will get a general answer. But when you start slicing it up and looking at subpopulations, and particularly populations that are vulnerable to energy injustices and already experiencing them, we see very different results.

"One-size-fits-all is not going to work."

Credit: 
Ohio State University

Unveiling a new map that reveals the hidden personalities of jobs

image: Researchers say they were able to successfully recommend an occupation aligned to people's personality traits with over 70 per cent accuracy.

Image: 
Associate Professor Peggy Kern

Thousands of Australian students will get their Higher School Certificates this week - how many will choose the 'right career'?

According to new research published today in the Proceedings of the National Academy of Sciences, understanding the hidden personality dimensions of different roles could be the key to matching a person and their ideal occupation.

The findings of "Social media-predicted personality traits and values can help match people to their ideal jobs" point to the benefit of not only identifying the skills and experience in a particular industry, but also being aware of personality traits and values that characterise jobs - and how they align with your own.

Lead researcher Associate Professor Peggy Kern of the University of Melbourne's Centre for Positive Psychology notes that "it's long been believed that different personalities align better with different jobs. For example, sales roles might better suit an extraverted individual, whereas a librarian role might better suit an introverted individual. But studies have been small-scale in nature. Never before has there been such large-scale evidence of the distinctive personality profiles that occur across occupations."

The research team looked at over 128,000 Twitter users, representing over 3,500 occupations to establish that different occupations tended to have very different personality profiles. For instance, software programmers and scientists tended to be more open to experience, whereas elite tennis players tended to be more conscientious and agreeable.

Remarkably, many similar jobs were grouped together - based solely on the personality characteristics of users in those roles. For example, one cluster included many different technology jobs such as software programmers, web developers, and computer scientists.

The research used a variety of advanced artificial intelligence, machine learning and data analytics approaches to create a data-driven 'vocation compass' - a recommendation system that finds the career that is a good fit with our personality.
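The article does not spell out the recommendation algorithm, but one simple way such a 'vocation compass' can work is nearest-profile matching: represent each occupation by an average personality trait vector and recommend the occupation closest to an individual's own vector. The profiles below are invented for illustration, not taken from the study's data.

```python
# Match a person's Big Five trait vector to the nearest occupation
# profile by Euclidean distance (a minimal nearest-centroid sketch).
import math

# (openness, conscientiousness, extraversion, agreeableness, neuroticism)
occupation_profiles = {
    "software developer": (0.8, 0.6, 0.3, 0.5, 0.4),
    "sales manager":      (0.5, 0.6, 0.9, 0.6, 0.3),
    "librarian":          (0.6, 0.7, 0.2, 0.7, 0.4),
}

def recommend(person):
    """Return the occupation whose profile is nearest to the person's traits."""
    return min(
        occupation_profiles,
        key=lambda job: math.dist(person, occupation_profiles[job]),
    )

# An open, introverted individual lands closest to the developer profile.
print(recommend((0.9, 0.5, 0.2, 0.5, 0.5)))
```

In the study itself, the occupation profiles were inferred at scale from Twitter data; the matching step is the part this sketch illustrates.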

Co-author Dr Marian-Andrei Rizoiu of the University of Technology Sydney said they were able to "successfully recommend an occupation aligned to people's personality traits with over 70 per cent accuracy."

"Even when the system was wrong it was not too far off, pointing to professions with very similar skill sets," he said. "For instance, it might suggest a poet becomes a fictional writer, not a petrochemical engineer."

With work taking up most of our waking hours, Professor Kern said many people want an occupation that "aligns with who they are as an individual."

"We leave behind digital fingerprints online as we use different platforms," said Professor Kern. "This creates the possibility for a modern approach to matching one's personality and occupation with an excellent accuracy rate."

Co-author, Professor Paul X McCarthy of the University of New South Wales in Sydney, said finding the perfect job was a lot like finding the perfect mate.

"At the moment we have an overly simplified view of careers, with a very small number of visible, high-status jobs as prizes for the hardest-working, best connected and smartest competitors.

"What if instead - as our new vocation map shows - the truth was closer to dating, where there are in fact a number of roles ideally suited for everyone?

"By better understanding the personality dimensions of different jobs we can find more perfect matches."

The researchers noted that while the study used publicly available data from Twitter, the underlying vocation compass map could be used to match people using information about their personality traits from social media, online surveys or other platforms.

"Our analytic approach potentially provides an alternative for identifying occupations that might interest a person, as opposed to relying upon extensive self-report assessments," said Dr Rizoiu.

"We have created the first detailed, evidence-based multidimensional universe of the personality of careers - like the map makers of the 19th century, we can always improve and evolve this over time."

Credit: 
University of Melbourne

Oil-catching sponge could soak up residue from offshore drilling

Drilling and fracking for oil under the seabed produce around 100 billion barrels of oil-contaminated wastewater every year, as these processes release tiny oil droplets into the surrounding water.

Most efforts to remove oil from water focus on clearing large oil slicks from industrial spills, but these methods aren't suitable for removing tiny droplets. Instead, scientists are looking for new ways to clean the water.

Now, researchers at the University of Toronto (U of T) and Imperial College London have developed a sponge that removes over 90 per cent of oil microdroplets from wastewater within ten minutes.

After capturing oil from wastewater, the sponge can be treated with a solvent that releases the oil. The oil can then be recycled, and the sponge is ready to be used again.

The sponge improves upon a previous concept: lead author Dr Pavani Cherukupally, now of Imperial's Department of Chemical Engineering, had developed an early version of the sponge during her PhD at the U of T. Although the previous sponge removed more than 95 per cent of the oil in the samples tested, it took three hours to do so - far longer than would be useful in industry.

Acidity and alkalinity also presented an issue, as the pH of contaminated wastewater dictated how well the sponge worked. Dr Cherukupally said: "The optimal pH for our system was 5.6, but real-life wastewater can range in pH from four to ten. As we got toward the top of that scale, we saw oil removal drop off significantly, down to just six or seven per cent."

Now, Dr Cherukupally, together with U of T and Imperial academics, has chemically modified the sponge to be of potential use to industry. The new sponge works faster, and over a much wider pH range than the previous version.

The results are published today in Nature Sustainability.

Spongey secrets

To create the original sponge, Dr Cherukupally used ordinary polyurethane foams -- similar to those found in couch cushions -- to separate tiny droplets of oil from wastewater. The team carefully tweaked pore size, surface chemistry, and surface area, to create a sponge that attracts and captures oil droplets - a process known as 'adsorption' - while letting water flow through.

To improve the sponge's properties in the new study, Dr Cherukupally's team worked with U of T chemists to add tiny particles of a material known as nanocrystalline silicon to the foam surfaces. They could then better control the sponge's surface area and surface chemistry, improving its ability to capture and retain oil droplets - a concept known as critical surface energy.

After use, the sponge could be removed from the water and treated with a solvent, releasing the oil from its surface.

Dr Cherukupally said: "The critical surface energy concept comes from the world of biofouling research -- trying to prevent microorganisms and creatures like barnacles from attaching to surfaces like ship hulls.

"Normally, you want to keep critical surface energy in a certain range to prevent attachment, but in our case, we manipulated it to get droplets to cling on tight.

"It's all about strategically selecting the characteristics of the pores and their surfaces. Commercial sponges already have tiny pores to capture tiny droplets. Polyurethane sponges are made from petrochemicals, so they already have chemical groups that make them good at capturing droplets.

"The problem was that we had fewer chemical groups than what was needed to capture all the droplets. I therefore worked with U of T chemists to increase the number of chemical groups, and with Imperial's Professor Daryl Williams to get the right amount of coating."

Oil cleanup

Co-author Professor Amy Bilton from U of T said: "Current strategies for oil spill cleanup are focused on the floating oil slick, but they miss the microdroplets that form in the water."

"Though our sponge was designed for industrial wastewater, adapting it for freshwater or marine conditions could help reduce environmental contamination from future spills."

Dr Cherukupally will continue to improve the sponge's performance for oil applications and has teamed up with Dr Huw Williams at Imperial's Department of Life Sciences to investigate how the sponges could remove bacteria from saltwater.

She also wants to use the sponges to treat contamination from gas, mining, and textile industries, and wants to make the technology affordable for use in developing countries - mainly for ridding contaminated rivers of organics, heavy metals, and pathogens.

Credit: 
Imperial College London

Big step in producing carbon-neutral fuel: Silver diphosphide

image: Scott Geyer, corresponding author of "Colloidal Silver Diphosphide Nanocrystals as Low Overpotential Catalysts for CO2 Reduction to Tunable Syngas," published online Dec. 16 in Nature Communications.

Image: 
WFU / Ken Bennett

A new chemical process described in the journal Nature Communications does in the lab what trees do in nature - it converts carbon dioxide into usable chemicals or fuels.

This new, carbon-neutral process, created by researchers at Wake Forest University, uses silver diphosphide (AgP2) as a novel catalyst that takes carbon dioxide pollution from manufacturing plants and converts it to a material called syngas, from which the liquid fuel used in manufacturing is made. The new catalyst allows the conversion of carbon dioxide into fuel with minimal energy loss compared to the current state-of-the-art process, according to the Wake Forest researchers.

"This catalyst makes the process much more efficient," said Scott Geyer, corresponding author of "Colloidal Silver Diphosphide Nanocrystals as Low Overpotential Catalysts for CO2 Reduction to Tunable Syngas," published online Dec. 16 in Nature Communications. "Silver diphosphide is the key that makes all the other parts work. It reduces energy loss in the process by a factor of three."

Silver has been considered the best catalyst for this process to date. Adding phosphorus removes electron density from the silver, making the process more controllable and reducing energy waste.
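For context, syngas is a mixture of carbon monoxide and hydrogen, and a "tunable syngas" catalyst steers the balance between two competing reactions at the cathode. These are the standard textbook half-reactions for electrochemical CO2 reduction to syngas, not equations taken from the paper itself:

```latex
% CO2 reduction supplies the carbon monoxide component of syngas
\mathrm{CO_2 + 2H^+ + 2e^- \rightarrow CO + H_2O}
% Hydrogen evolution supplies the hydrogen component
\mathrm{2H^+ + 2e^- \rightarrow H_2}
```

The overpotential is the extra voltage, beyond the thermodynamic minimum, needed to drive these reactions at a practical rate; a lower-overpotential catalyst wastes less electrical energy, which is the efficiency gain the researchers describe.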

In the future, Geyer sees being able to power this process with solar energy, directly converting sunlight into fuel. The more efficient the chemical conversion process becomes, the more likely solar energy - instead of coal or other non-renewable energy sources - can be used to make fuel.

"People make syngas out of coal all the time," Geyer said. "But we're taking something you don't want, carbon dioxide pollution, and turning it into something you want, fuel for industry."

Geyer, whose lab focuses on understanding the role phosphorus plays in chemical reactions, is an assistant professor of chemistry at Wake Forest. The team that produced this paper includes Hui Li, who led the work as a Ph.D. student in Geyer's lab; former Wake Forest undergraduate Zachary Hood; chemistry Ph.D. student Shiba Adhikari; and physics Ph.D. student Chaochao Dun, all of whom have stayed connected with the program through their professional posts.

"The ability to collaborate with a network of outstanding Wake Forest University graduates who are now at top universities and national laboratories across the United States has been essential in preparing this work as it allows us to access one-of-a-kind instrumentation facilities at their current institutions," Geyer said.

Credit: 
Wake Forest University