Culture

Optimal archery feather design depends on environmental conditions

image: Impacts of arrows with different feathers on a target - INSEP, France.

Image: 
Tom Maddalena

SEATTLE, November 23, 2019 - When it comes to archery, choosing the right feathers for an arrow is the key to winning. This need for precision makes it crucial to understand how environment and design affect arrows in flight.

Scientists from the Laboratoire d'Hydrodynamique at the Ecole Polytechnique will explain the physics behind optimal arrow design at the 72nd Annual Meeting of the APS Division of Fluid Dynamics in Seattle on Saturday, Nov. 23. The presentation is part of a session on biological fluid dynamics in flight.

The researchers said the effect of feather size and shape on archery accuracy has not yet been studied in depth. To discover the optimal feather design, Tom Maddalena, Caroline Cohen and Christophe Clanet first shot arrows with various feathers using a throwing machine. They then used a wind tunnel to observe the aerodynamic forces on the arrow. These experiments were compared with theoretical models of arrow flight.
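As a rough illustration of the physics such models capture, the stabilizing effect of fletching can be sketched as a restoring torque: when the arrow yaws, air drag on the feathers pushes the tail back into line. The relation below is a back-of-the-envelope sketch, not the authors' model, and all symbols are assumptions introduced here.

```latex
% Illustrative restoring torque on a yawed arrow (not the authors' model):
%   rho : air density, v : arrow speed, S_f : total fletching area,
%   C_N : normal-force coefficient of the fletching at yaw angle alpha,
%   l_f : lever arm from the arrow's centre of mass to the fletching.
\tau_{\mathrm{restoring}} \;\approx\; \tfrac{1}{2}\,\rho\, v^{2}\, S_f\, C_N(\alpha)\, l_f
```

Larger feathers increase S_f and hence the restoring torque, which is consistent with the researchers' finding that very large feathers work best in still air, while also presenting more area for crosswinds to act on.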

"We found that the best size depends on the environmental conditions. If there is no wind, a shooter must use very large feathers. The limit of the size is actually mostly dictated by geometrical constraints of the bow," said Maddalena.

The authors plan to further investigate environmental effects on the arrows. Although large feathers provide more stability, they're also affected more easily by the wind.

"We collaborated with the French Archery Federation to conduct this research. Currently, the choice of the feathers is based on intuition and comes directly from the athletes and their coaches. With our work, we hope able to tell them which feathers are the best, so that they can fully trust their equipment," said Maddalena.

Credit: 
American Physical Society

Secretome of pleural effusions associated with non-small cell lung cancer (NSCLC) and malignant...

image: Comparison of cytokine levels grouped by function. Cytokine concentrations were standardized such that the geometric mean control value in the benign PE group was 0 with a standard deviation of 1 and grouped by function (Table 1).

Image: 
Correspondence to - Albert D. Donnenberg - donnenbergad@upmc.edu and Vera S. Donnenberg - donnenbergvs@upmc.edu

Cryopreserved cell-free PE fluid from 101 NSCLC patients, 8 mesothelioma and 13 with benign PE was assayed for a panel of 40 cytokines/chemokines using the Luminex system.
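The standardization described in the figure caption (geometric mean of the benign PE group set to 0 with a standard deviation of 1) amounts to a z-score computed on log-transformed concentrations. The sketch below is a minimal illustration of that idea; the column layout and file names are hypothetical, not taken from the study.

```python
import numpy as np
import pandas as pd

def standardize_to_benign(conc: pd.DataFrame, group: pd.Series) -> pd.DataFrame:
    """Z-score cytokine concentrations on the log scale so that the benign-PE
    group has geometric mean 0 and standard deviation 1 (one reading of the
    figure caption; not the authors' code)."""
    logged = conc.apply(np.log)              # geometric mean -> arithmetic mean
    benign = logged[group == "benign"]
    return (logged - benign.mean()) / benign.std(ddof=1)

# Hypothetical usage: rows = patients, columns = the 40 cytokines/chemokines.
# conc = pd.read_csv("luminex_panel.csv", index_col=0)
# group = pd.Series(..., index=conc.index)  # "NSCLC", "mesothelioma" or "benign"
# z = standardize_to_benign(conc, group)
```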

Comparing NSCLC PE and published plasma levels of CAR-T recipients, both were dominated by sIL-6R and IL-6 but NSCLC PE had more VEGF, FGF2 and TNF, and less IL-2, IL-4, IL-13, IL-15, MIP1 and IFN.

A dampened effector response was detected in NSCLC PE, but not mesothelioma or benign PE.

Dr. Albert D. Donnenberg and Dr. Vera S. Donnenberg said, "The pleura represent a common site of metastasis for non-small cell lung cancer (NSCLC) and are the primary site of tumorigenesis in malignant mesothelioma."

These conditions are accompanied by the development of pleural effusions, accumulations of serous fluid rich in tumor cells, mesothelial cells, immune cells, as well as the cytokines and chemokines which they secrete.

In this report the authors compare levels of 40 cytokines and chemokines in non-small cell lung cancer, mesothelioma and benign pleural effusions, with the goal of identifying druggable targets and interventions that can change the pleural environment from one that suppresses immunity and promotes tumor invasiveness, to one conducive to anti-tumor effector responses.

The Donnenberg/Donnenberg Research Team concluded that despite this hostile environment, there is evidence in NSCLC PE of a nascent immune effector response that may be harnessed therapeutically by modifying the local pleural immune environment with antibody-based therapeutics, or by ex vivo activation and reinstillation of pleural T cells or engineered T cells.

The authors go on to speculate that conditioning the immune environment of the pleura will greatly increase the chances of success.

Credit: 
Impact Journals LLC

El Nino swings more violently in the industrial age, compelling hard evidence says

image: On the right, a satellite composite of El Nino in 1997, and on the left, El Nino in 2015. Both were extreme El Nino events that new hard evidence says are part of a new and odd climate pattern.

Image: 
NOAA

El Ninos have become more intense in the industrial age, which stands to worsen storms, drought, and coral bleaching in El Nino years. A new study has found compelling evidence in the Pacific Ocean that the stronger El Ninos are part of a climate pattern that is new and strange.

It is the first known time that enough physical evidence spanning millennia has come together to allow researchers to say definitively that El Ninos, La Ninas, and the climate phenomenon that drives them have become more extreme in the era of human-induced climate change.

"What we're seeing in the last 50 years is outside any natural variability. It leaps off the baseline. Actually, we even see this for the entire period of the industrial age," said Kim Cobb, the study's principal investigator and professor in the Georgia Institute of Technology's School of Earth and Atmospheric Sciences. "There were three extremely strong El Nino-La Nina events in the 50-year period, but it wasn't just these events. The entire pattern stuck out."

The study's first author, Pam Grothe, compared temperature-dependent chemical deposits from present-day corals with those of older coral records representing relevant sea surface temperatures from the past 7,000 years. With the help of collaborators from Georgia Tech and partner research institutions, Grothe identified patterns in the El Nino Southern Oscillation (ENSO), swings of heating and cooling of equatorial Pacific waters that, every few years, spur El Ninos and La Ninas respectively.

The team found the industrial age ENSO swings to be 25% stronger than in the pre-industrial records. The researchers published their results in the journal Geophysical Research Letters in October 2019. The work was funded by the National Science Foundation.

Slumbering evidence

The evidence had slumbered in and around shallow Pacific waters, where ENSO and El Ninos originate, until Cobb and her students plunged hollow drill bits into living coral colonies and fossil coral deposits to extract it. In more than 20 years of field expeditions, they collected cores that contained hundreds of records.

The corals' recordings of sea surface temperatures proved to be astonishingly accurate when benchmarked. Coral records from 1981 to 2015 matched sea surface temperatures measured via satellite in the same period so exactly that, on a graph, the jagged lines of the coral record covered those of the satellite measurements, obscuring them from view.

"When I present it to people, I always get asked, 'Where's the temperature measurement?' I tell them it's there, but you can't see it because the corals' records of sea surface temperatures are that good," said Grothe, a former graduate research assistant in Cobb's lab and now an associate professor at the University of Mary Washington.

First red flag?

By 2018, enough coral data had accumulated to distinguish ENSO's recent activity from its natural preindustrial patterns.

To stress-test the data, Grothe left out chunks to see if the industrial age ENSO signal still stuck out. She removed the record-setting 1997/1998 El Nino-La Nina and examined industrial age windows of time between 30 and 100 years long.

The signal held in all windows, but the data needed the 97/98 event to be statistically significant. This could mean that changes in the ENSO activities have just now reached a threshold that makes them detectable.
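The stress test can be pictured as computing the spread of the coral-derived ENSO index in many overlapping windows and checking whether the industrial-age windows still stand out when particular events are masked. The sketch below is a simplified, hypothetical illustration of that procedure, not the study's analysis; the data in the usage comment are stand-ins.

```python
import numpy as np

def windowed_variance(anomalies, years, window, exclude=None):
    """Variance of an ENSO-like anomaly series in sliding windows of `window`
    years, optionally excluding a span of years (e.g. the 1997/98 event).
    A simplified illustration of the stress test, not the study's code."""
    out = []
    for start in range(int(years.min()), int(years.max()) - window + 2):
        mask = (years >= start) & (years < start + window)
        if exclude is not None:
            mask &= ~((years >= exclude[0]) & (years <= exclude[1]))
        out.append((start, float(np.var(anomalies[mask], ddof=1))))
    return out

# Hypothetical usage with a monthly coral-derived anomaly series (stand-in data):
# years = np.arange(1900, 2016, 1 / 12)
# anomalies = np.random.default_rng(0).normal(size=years.size)
# industrial = windowed_variance(anomalies, years, window=30, exclude=(1997, 1999))
```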

What is El Nino?

Every two to seven years in spring, an El Nino is born when the warm phase of the ENSO swells into a long heat blob in the tropical Pacific, typically peaking in early winter. It blows through oceans and air around the world, ginning up deluges, winds, heat, or cold in unusual places.

Once El Nino passes, the cycle reverses into La Nina by the following fall, when airstreams push hot water westward and dredge up frigid water in the equatorial Pacific. This triggers a different set of global weather extremes.

Tropical Pacific corals record the hot-cold oscillations by absorbing less of an oxygen isotope (O18) during ENSO's hot phases, and progressively more of it during ENSO's cool phases. As corals grow, they create layers of oxygen isotope records, chronicles of temperature history.
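The temperature dependence behind this proxy is roughly linear: warmer water shifts coral aragonite toward lower oxygen-isotope values. A commonly cited approximate calibration is shown below; the exact slope varies by site and species, so treat it as an illustration rather than the study's own calibration.

```latex
% Approximate coral oxygen-isotope / sea-surface-temperature relation
% (illustrative; not the study's calibration, and the slope varies by site):
\Delta\!\left(\delta^{18}\mathrm{O}\right) \;\approx\; -0.2\ \text{‰ per } 1\,^{\circ}\mathrm{C} \text{ of warming}
```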

Waves, repairs, contortions

Extracting them is adventurous: A research diver guides a chest-high pneumatic drill under the ocean. Its pressure hose connects to a motor on the boat that powers the drill after the diver has taken off her fins and weighed herself down on the reef.

She carefully angles the bit down the axis of coral growth to get a core with layers that can be accurately counted back in time. On occasion, waves put her and her safety diver through washing machine cycles.

"Doing this all underwater adds an extra level of difficulty, even from the simplest tasks like working with wrenches," Grothe said. "But the drill slices through underwater corals like butter. Fossil corals are drilled on land, and the drill constantly seizes up and overheats."

Blowing models away

The physical proof taken from three islands that dot the heart of the ENSO zone has also thrown down scientific gauntlets, starkly challenging computer models of ENSO patterns and causes. A prime example: Previously unknown to science, the study showed that in a period from 3,000 to 5,000 years ago, the El Nino-La Nina oscillations were extremely mild.

"Maybe there's no good explanation for a cause. Maybe it just happened," Cobb said. "Maybe El Nino can just enter a mode and get stuck in it for a millennium."

Credit: 
Georgia Institute of Technology

Is parents' use of marijuana associated with greater likelihood of kids' substance use?

Bottom Line: Recent and past use of marijuana by parents was associated with increased risk of marijuana, tobacco and alcohol use by adolescent or young adult children living in the same household in this survey study. Researchers examined data for 24,900 parent-child pairs from National Surveys on Drug Use and Health from 2015-2018. Parental marijuana use was a risk factor for marijuana and tobacco use by adolescent and young adult children and for alcohol use by adolescent children when researchers accounted for a variety of potential family and environmental factors. When those factors were considered, parental marijuana use wasn't associated with opioid misuse by their children. The study has limitations, including that the surveys cannot provide a complete picture of family substance use.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

Authors: Bertha K. Madras, Ph.D., Harvard Medical School, Boston, and McLean Hospital, Belmont, Massachusetts, and coauthors

(doi:10.1001/jamanetworkopen.2019.16015)

Editor's Note: The article contains conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Firearm violence impacts young people disproportionately

(Boston)--Although the magnitude of firearm deaths has remained constant since 2001, a new study has found that these deaths have become increasingly premature since 2014.

An examination of years of potential life lost to guns showed a slow increase since 1999, with a sudden and large increase since 2014. "A constant mortality rate with increasing percentage of years of potential life lost indicates that although the magnitude of firearm deaths remained the same, the deaths were increasingly premature or among younger people across time," explained corresponding author Bindu Kalesan, PhD, MPH, assistant professor of medicine at Boston University School of Medicine (BUSM).
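Years of potential life lost (YPLL) is a standard premature-mortality measure: each death before a chosen reference age contributes the difference between that reference age and the age at death. The sketch below illustrates the usual tally; the reference age of 75 is a common convention, not necessarily the one used in this study.

```python
def years_of_potential_life_lost(ages_at_death, reference_age=75):
    """Sum of (reference_age - age) over deaths occurring before the reference
    age. 75 is a common convention; the study's exact choice is not stated here."""
    return sum(max(0, reference_age - age) for age in ages_at_death)

# Example: deaths at ages 20 and 80 -> only the first counts, 55 years lost.
# years_of_potential_life_lost([20, 80])  # -> 55
```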

Researchers from BUSM, Boston University School of Public Health (BUSPH), University of Florida and Columbia University used national and state data from the Injury Statistics Query and Reporting System database to explore the patterns of gun deaths and the life years lost.

Previous studies showed that certain populations, particularly men and non-Hispanic black populations, bear a greater burden of firearm violence. This new study additionally showed that the jump in national rates was driven principally by increases among men, non-Hispanic black populations and Hispanic white populations, and by homicides. This study also found that 21 U.S. states mirrored the national increase. In contrast, gun deaths declined over the 18 years in only two states, New York and Arizona. Additionally, the shift in the burden of gun deaths toward younger Americans was noted in 14 states.

According to the researcher, these alarming shifts indicate a wide variation in emerging patterns of gun deaths in America. "Not only did the amount of gun deaths vary by states and within different population groups, there were also unique changes in burden of gun deaths with time. The public health crisis of gun deaths in this country is therefore not unidimensional, but is complex and multidimensional with distinctive risk profiles," added Kalesan, who is also assistant professor of community health sciences at BUSPH.

The researchers believe that in an era of increasingly complex patterns of gun deaths, it is imperative to understand state-specific and subgroup-specific changes, in addition to national changes, in order to combat them in the future based on their unique features. "Future interventions, programs and policies should be created to address this shifting burden locally and should bear in mind the populations that are being most affected by shifts in firearm death," said Kalesan.

Credit: 
Boston University School of Medicine

New Cochrane Review assesses different HPV vaccines & vaccine schedules in adolescent girls and boys

New evidence published in the Cochrane Library today provides further information on the benefits and harms of different human papillomavirus (HPV) vaccines and vaccine schedules in young women and men.

HPV is the most common viral infection of the reproductive tract in both women and men globally (WHO 2017). Most people who have sexual contact will be exposed to HPV at some point in their life. In most people, their own immune system will clear the HPV infection.

HPV infection can sometimes persist if the immune system does not clear the virus. Persistent infection with some 'high-risk' strains of HPV can lead to the development of cancer. High-risk HPV strains cause almost all cancers of the cervix and anus, and some cancers of the vagina, vulva, penis, and head and neck. Other 'low-risk' HPV strains cause genital warts but do not cause cancer. Development of cancer due to HPV happens gradually, over many years, through a number of pre-cancer stages, called intra-epithelial neoplasia. In the cervix (neck of the womb) these changes are called cervical intraepithelial neoplasia (CIN). High-grade CIN changes have a 1 in 3 chance of developing into cervical cancer, but many CIN lesions regress and do not develop into cancer. HPV-related cancers accounted for an estimated 4.5% of cancers worldwide in 2012 (de Martel 2017).

Vaccination aims to prevent future HPV infection and the cancers caused by high-risk HPV infection. HPV vaccines are mainly targeted towards adolescent girls because cancer of the cervix is the most common HPV-associated cancer. For the prevention of cervical cancer, the World Health Organization recommends vaccinating girls aged 9-14 years with HPV vaccine using a two-dose schedule (0, 6 months) as the most effective strategy. A three-dose schedule is recommended for older girls ≥15 years of age or for people with human immunodeficiency virus (HIV) infection or other causes of immunodeficiency (WHO 2017).

Three HPV vaccines are currently in use: a bivalent vaccine that is targeted at the two most common high-risk HPV types; a quadrivalent vaccine targeted at four HPV types, and a nonavalent vaccine targeted at nine HPV types. In women, the bivalent and quadrivalent vaccines have been shown to protect against pre-cancer of the cervix caused by the HPV types contained in the vaccine if given before natural infection with HPV (Arbyn 2018).

This Cochrane Review summarizes the results from 20 randomized controlled trials involving 31,940 people conducted across all continents. In most studies, the outcome reported was the production of HPV antibodies by the vaccine recipient's immune system. HPV antibody responses predict protection against the HPV-related diseases and cancers the vaccines are intended to prevent. Antibody response is often used as a surrogate in HPV vaccine studies because it takes many years for pre-cancer to develop after HPV infection, so it is difficult for studies to follow participants over such long periods of time. Moreover, because trial participants were tested for HPV infection and offered treatment, if HPV-related precancer was found, progression to cervical cancer in this group would be expected to be very low, even without vaccination.

Four studies compared a two-dose vaccine schedule with a three-dose schedule in 2,317 adolescent girls and three studies compared different time intervals between the first two vaccine doses in 2,349 girls and boys. Antibody responses were similar after two-dose and three-dose HPV vaccine schedules in girls. Antibody responses in girls and boys were stronger when the interval between the first two doses of HPV vaccine was longer.

There was evidence from one study of 16- to 26-year-old men that the quadrivalent HPV vaccine reduces the incidence of external genital lesions and genital warts compared with a group who did not receive the HPV vaccine.

There was also evidence from a study of 16- to 26-year-old women comparing the nonavalent and quadrivalent vaccines, showing that they provide a similar level of protection against cervical, vaginal, and vulval pre-cancerous lesions.

There was evidence from seven studies about HPV vaccines in people living with HIV. HPV antibody responses in children living with HIV were higher after vaccination with either bivalent or quadrivalent vaccine than with a non-HPV control vaccine. These antibody responses against HPV could be maintained up to two years. The evidence about clinical outcomes and harms for HPV vaccines in people with HIV was very limited.

Evidence suggested that up to 90% of males and females who received an HPV vaccine experienced local minor adverse events such as redness, swelling and pain at the injection site. Due to the low rates of serious adverse events in quadrivalent and nonavalent vaccine groups, and the broad definition of these events used in the trials, we cannot really determine the relative safety of different vaccine schedules.

The lead editor of this review and Consultant in Gynaecological Oncology, Musgrove Park Hospital, Somerset, UK, Dr. Jo Morrison said: "We need long-term population-level studies to provide data on the effects of dosing intervals, schedules and vaccines on HPV-related cancers, as well as giving us a more complete picture of rare harms. However, with fewer doses having a similar antibody response, and more extensive evidence from vaccine studies in boys, policy makers are now in a better position to determine how local vaccination programmes can be designed. It would be interesting to see how different schedules and vaccines influence immunisation coverage, but this review, and the studies within it, were not designed to be able to answer that question."

Credit: 
Wiley

Industrial scale production of layer materials via intermediate-assisted grinding

image: (a) Schematic of the decomposition of the macroscopic compressive forces Fc and Fc′ into much smaller microscopic forces fi and fi′ that are loaded onto the layer materials by force intermediates. (b) Exfoliation mechanism of layer materials. fi and fi′ transfer into sliding frictional forces ffi and ffi′ as the intermediates and layer materials slip relative to one another due to the rotation of the bottom container. (c) Atomic force microscopy image of 2D flakes. (d) Photos of several bottles of 2D MoS2 flakes in aqueous solution.

Image: 
©Science China Press

The large family of 2D materials, including graphene, hexagonal boron nitride (h-BN), transition metal dichalcogenides (TMDCs) such as MoS2 and WSe2, metal oxides (MxOy), black phosphorene (b-P), etc., provides a wide range of properties and numerous potential applications.

In order to realize their commercial use, the prerequisite is large-scale production. Bottom-up strategies like chemical vapor deposition (CVD) and chemical synthesis have been extensively explored, but only small quantities of 2D materials have been produced so far. Another important strategy for obtaining 2D materials is the top-down path of exfoliating bulk layer materials into monolayer or few-layer 2D materials, for example by ball milling or liquid phase exfoliation. Top-down strategies seem the most likely to be scaled up; however, they are only suitable for specific materials. So far, only graphene and graphene oxide can be prepared at the ton scale, while other 2D materials remain at the laboratory stage because of low yields. It is therefore necessary to develop a high-efficiency, low-cost preparation method for 2D materials so that they can progress from the laboratory into daily life.

Solid lubricants fail because of slip between the layers of bulk materials, and this slip peels the bulk material into thinner layers. Based on this understanding, in a new research article published in the Beijing-based journal National Science Review, the Low-Dimensional Materials and Devices lab led by Professor Hui-Ming Cheng and Professor Bilu Liu from Tsinghua University proposed an exfoliation technology named interMediate-Assisted Grinding Exfoliation (iMAGE). The key to this exfoliation technology is the use of intermediate materials that increase the coefficient of friction of the mixture and effectively apply sliding frictional forces to the layer material, resulting in a dramatically increased exfoliation efficiency.
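A minimal way to see why raising the friction coefficient matters is to compare the frictional shear stress transferred to a flake with the crystal's interlayer shear strength; exfoliation proceeds when the former wins. The criterion below is an illustrative sketch under that assumption, not the authors' quantitative model.

```latex
% Illustrative exfoliation criterion (a sketch, not the authors' model):
%   mu : effective friction coefficient of the intermediate/layer-material mixture,
%   p  : local normal pressure, tau_interlayer : interlayer shear strength.
\tau_{\mathrm{friction}} \;\approx\; \mu\, p \;\gtrsim\; \tau_{\mathrm{interlayer}}
```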

Considering the case of 2D h-BN, the production rate and energy consumption can reach 0.3 g h⁻¹ and 3.01×10⁶ J g⁻¹, respectively, both of which are one to two orders of magnitude better than previous results. The resulting exfoliated 2D h-BN flakes have an average thickness of 4 nm and an average lateral size of 1.2 μm. In addition, the iMAGE method has been extended to exfoliate a series of layer materials with different properties, including graphite, Bi2Te3, b-P, MoS2, TiOx, h-BN, and mica, covering 2D metals, semiconductors with different bandgaps, and insulators.

It is worth mentioning that, in cooperation with Luoyang Shenyu Molybdenum Co. Ltd., molybdenite concentrate, a naturally occurring, cheap and earth-abundant mineral, was used as a demonstration case for the industrial-scale exfoliation production of 2D MoS2 flakes.

"This is the very first time that 2D materials other than graphene have been produced with a yield of more than 50% and a production rate of over 0.1g h-1. And an annual production capability of 2D h-BN is expected to be exceeding 10 tons by our iMAGE technology." Prof. Bilu Liu, one of the leading authors of this study, said, "Our iMAGE technology overcomes a main challenge in 2D materials, i.e., their mass production, and is expected to accelerate their commercialization in a wide range of applications in electronics, energy, and others."

Credit: 
Science China Press

United in musical diversity

image: In ethnomusicology, universality became something of a dirty word. But new research promises to once again revive the search for deep universal aspects of human musicality.

Image: 
© Tecumseh Fitch

Is music really a "universal language"? Two articles in the most recent issue of Science support the idea that music all around the globe shares important commonalities, despite many differences. Researchers led by Samuel Mehr at Harvard University have undertaken a large-scale analysis of music from cultures around the world. Cognitive biologists Tecumseh Fitch and Tudor Popescu of the University of Vienna suggest that human musicality unites all cultures across the planet.

The many musical styles of the world are so different, at least superficially, that music scholars are often sceptical that they have any important shared features. "Universality is a big word - and a dangerous one", the great Leonard Bernstein once said. Indeed, in ethnomusicology, universality became something of a dirty word. But new research promises to once again revive the search for deep universal aspects of human musicality.

Samuel Mehr at Harvard University found that all cultures studied make music, and use similar kinds of music in similar contexts, with consistent features in each case. For example, dance music is fast and rhythmic, and lullabies soft and slow - all around the world. Furthermore, all cultures showed tonality: building up a small subset of notes from some base note, just as in the Western diatonic scale. Healing songs tend to use fewer notes, and more closely spaced, than love songs. These and other findings indicate that there are indeed universal properties of music that likely reflect deeper commonalities of human cognition - a fundamental "human musicality".

In a Science perspective piece in the same issue, University of Vienna researchers Tecumseh Fitch and Tudor Popescu comment on the implications. "Human musicality fundamentally rests on a small number of fixed pillars: hard-coded predispositions, afforded to us by the ancient physiological infrastructure of our shared biology. These 'musical pillars' are then 'seasoned' with the specifics of every individual culture, giving rise to the beautiful kaleidoscopic assortment that we find in world music", Tudor Popescu explains.

"This new research revives a fascinating field of study, pioneered by Carl Stumpf in Berlin at the beginning of the 20th century, but that was tragically terminated by the Nazis in the 1930s", Fitch adds.

As humanity comes closer together, so does our wish to understand what it is that we all have in common - in all aspects of behaviour and culture. The new research suggests that human musicality is one of these shared aspects of human cognition. "Just as European countries are said to be 'United In Diversity', so too the medley of human musicality unites all cultures across the planet", concludes Tudor Popescu.

Credit: 
University of Vienna

Increased use of antibiotics may predispose to Parkinson's disease

Higher exposure to commonly used oral antibiotics is linked to an increased risk of Parkinson's disease, according to a recently published study by researchers from Helsinki University Hospital, Finland.

The strongest associations were found for broad-spectrum antibiotics and those that act against anaerobic bacteria and fungi. The timing of antibiotic exposure also seemed to matter.

The study suggests that excessive use of certain antibiotics can predispose to Parkinson's disease with a delay of up to 10 to 15 years. This connection may be explained by their disruptive effects on the gut microbial ecosystem.

"The link between antibiotic exposure and Parkinson's disease fits the current view that in a significant proportion of patients the pathology of Parkinson's may originate in the gut, possibly related to microbial changes, years before the onset of typical Parkinson motor symptoms such as slowness, muscle stiffness and shaking of the extremities. It was known that the bacterial composition of the intestine in Parkinson's patients is abnormal, but the cause is unclear. Our results suggest that some commonly used antibiotics, which are known to strongly influence the gut microbiota, could be a predisposing factor," says research team leader, neurologist Filip Scheperjans MD, PhD from the Department of Neurology of Helsinki University Hospital.

In the gut, pathological changes typical of Parkinson's disease have been observed up to 20 years before diagnosis. Constipation, irritable bowel syndrome and inflammatory bowel disease have been associated with a higher risk of developing Parkinson's disease. Exposure to antibiotics has been shown to cause changes in the gut microbiome and their use is associated with an increased risk of several diseases, such as psychiatric disorders and Crohn's disease. However, these diseases or increased susceptibility to infection do not explain the now observed relationship between antibiotics and Parkinson's.

"The discovery may also have implications for antibiotic prescribing practices in the future. In addition to the problem of antibiotic resistance, antimicrobial prescribing should also take into account their potentially long-lasting effects on the gut microbiome and the development of certain diseases," says Scheperjans.

The possible association of antibiotic exposure with Parkinson's disease was investigated in a case-control study using data extracted from national registries. The study compared antibiotic exposure during the years 1998-2014 in 13,976 Parkinson's disease patients with that of 40,697 unaffected persons matched for age, sex and place of residence.

Antibiotic exposure was examined over three different time periods: 1-5, 5-10, and 10-15 years prior to the index date, based on oral antibiotic purchase data. Exposure was classified based on number of purchased courses. Exposure was also examined by classifying antibiotics according to their chemical structure, antimicrobial spectrum, and mechanism of action.
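In code, that classification amounts to binning each purchase by how many years it precedes the person's index date and counting courses per window. The sketch below is a hypothetical illustration of the bookkeeping; the column names and registry layout are assumptions, not the study's actual data model.

```python
import pandas as pd

def classify_exposure(purchases: pd.DataFrame, index_dates: pd.Series) -> pd.DataFrame:
    """Count purchased oral antibiotic courses per person in the 1-5, 5-10 and
    10-15 year windows before that person's index date. Column names
    ('person_id', 'purchase_date') and the registry layout are hypothetical."""
    df = purchases.merge(index_dates.rename("index_date").to_frame(),
                         left_on="person_id", right_index=True)
    years_before = (df["index_date"] - df["purchase_date"]).dt.days / 365.25
    df["window"] = pd.cut(years_before, bins=[1, 5, 10, 15],
                          labels=["1-5y", "5-10y", "10-15y"])
    return (df.groupby(["person_id", "window"], observed=True)
              .size()
              .unstack(fill_value=0))
```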

Credit: 
University of Helsinki

Filaments that structure DNA

image: After the cell has been treated with a messenger substance (right), the green actin molecules in the red cell nucleus (left) form actin filaments that structure the genome.

Image: 
Robert Grosse

Actin filaments play a leading role not only in muscle cells: they are among the most abundant proteins in all mammalian cells. The filigree structures form an important part of the cytoskeleton and locomotor system. Cell biologists at the University of Freiburg are now using cell cultures to show how receptor proteins in the plasma membrane of these cells transmit signals from the outside to actin molecules inside the nucleus, which then form threads. In a new study, the team led by pharmacologist Professor Robert Grosse uses physiological messengers to control the assembly and disassembly of actin filaments in the cell nucleus and shows which signaling molecules control the process. The results of their study have been published in the latest issue of Nature Communications.

"It was previously unknown just how a hormone or agent induces the cell to begin filament formation in the intact cell nucleus," Grosse says. Back in 2013, he discovered that actin threads were formed in the nucleus when he exposed cells to serum components. In the nucleus, actin usually occurs as a single protein. It only forms filaments when a signal is given. Actin filaments resemble a double chain of beads and create possible anchor points or pathways for the structures in the cell nucleus. They give the DNA structure - for example, determining how densely packed the chromosomes in the form of chromatin are. This influences the readability of the genetic material. "What we have here is a generally valid mechanism that shows how external, physiological signals can control the cytoskeleton in the nucleus and reorganize the genome in a very short time," Grosse explains.

Grosse is familiar with the signaling pathway that reaches into the nucleus: the G-protein-coupled receptor pathway. Agents, hormones, or signal transmitters bind to these receptors at the cell membrane, a receptor family that is the target of a large number of clinical drugs. The receptor initiates a calcium release in the cell via a signaling cascade. The team then shows that intracellular calcium causes filaments to form on the inner membrane of the cell nucleus. In the study, they used fluorescence microscopy and genetic engineering methods to show how actin filaments appeared in the nucleus after physiological messengers such as thrombin and LPA bound to G-protein-coupled receptors.

In the research project within the University of Freiburg excellence cluster CIBSS - Centre for Integrative Biological Signalling Studies, Grosse is now investigating the more exact processes in the cell nucleus. "My team in Freiburg wants to find out in detail how filament formation influences the readability of the genetic material and what role the inner cell nucleus membrane plays in this process."

Credit: 
University of Freiburg

DNA repeats -- the genome's dark matter

image: The sequencing device has tiny nanopores that can determine both the DNA sequence and the epigenetic signature.

Image: 
MPI f. Molecular Genetics/ Pay Gießelmann

Expansions of DNA repeats are very hard to analyze. A method developed by researchers at the Max Planck Institute for Molecular Genetics in Berlin allows for a detailed look at these previously inaccessible regions of the genome. It combines nanopore sequencing, stem cell, and CRISPR-Cas technologies. The method could improve the diagnosis of various congenital diseases and cancers in the future.

Large parts of the genome consist of monotonous regions where short sections of the genome repeat hundreds or thousands of times. But expansions of these "DNA repeats" in the wrong places can have dramatic consequences, like in patients with Fragile X syndrome, one of the most commonly identifiable hereditary causes of cognitive disability in humans. However, these repetitive regions are still regarded as an unknown territory that cannot be examined appropriately, even with modern methods.

A research team led by Franz-Josef Müller at the Max Planck Institute for Molecular Genetics in Berlin and the University Hospital of Schleswig-Holstein in Kiel recently shed light on this inaccessible region of the genome. Müller's team was the first to successfully determine the length of genomic tandem repeats in patient-derived stem cell cultures. The researchers additionally obtained data on the epigenetic state of the repeats by scanning individual DNA molecules. The method, which is based on nanopore sequencing and CRISPR-Cas technologies, opens the door for research into repetitive genomic regions, and the rapid and accurate diagnosis of a range of diseases.

A gene defect on the X chromosome

In Fragile X syndrome, a repeat sequence has expanded in a gene called FMR1 on the X chromosome. "The cell recognizes the repetitive region and switches it off by attaching methyl groups to the DNA," says Müller. These small chemical changes have an epigenetic effect because they leave the underlying genetic information intact. "Unfortunately, the epigenetic marks spread over to the entire gene, which is then completely shut down," explains Müller. The gene is known to be essential for normal brain development. He states: "Without the FMR1 gene, we see severe delays in development leading to varying degrees of intellectual disability or autism."

Female individuals are, in most cases, less affected by the disease, since the repeat region is usually located on only one of the two X chromosomes. Since the unchanged second copy of the gene is not epigenetically altered, it is able to compensate for the genetic defect. In contrast, males have only one X chromosome and one copy of the affected gene and display the full range of clinical symptoms. The syndrome is one of about 30 diseases that are caused by expanding short tandem repeats.

First precise mapping of short tandem repeats

In this study, Müller and his team investigated the genome of stem cells that were derived from patient tissue. They were able to determine the length of the repeat regions and their epigenetic signature, a feat that had not been possible with conventional sequencing methods. The researchers also discovered that the length of the repetitive region could vary to a large degree, even among the cells of a single patient.

The researchers also tested their process with cells derived from patients that contained an expanded repeat in one of the two copies of the C9orf72 gene. This mutation is one of the most common monogenic causes of frontotemporal dementia and amyotrophic lateral sclerosis. "We were the first to map the entire epigenetics of extended and unchanged repeat regions in a single experiment," says Müller. Furthermore, the region of interest on the DNA molecule remained physically wholly unaltered. "We developed a unique method for the analysis of single molecules and for the darkest regions of our genome - that's what makes this so exciting for me."

Tiny pores scan single molecules

"Conventional methods are limited when it comes to highly repetitive DNA sequences. Not to mention the inability to simultaneously detect the epigenetic properties of repeats," says Björn Brändl, one of the first authors of the publication. That's why the scientists used Nanopore sequencing technology, which is capable of analyzing these regions. The DNA is fragmented, and each strand is threaded through one of a hundred tiny holes ("nanopores") on a silicon chip. At the same time, electrically charged particles flow through the pores and generate a current. When a DNA molecule moves through one of these pores, the current varies depending on the chemical properties of the DNA. These fluctuations of the electrical signal are enough for the computer to reconstruct the genetic sequence and the epigenetic chemical labels. This process takes place at each pore and, thus, each strand of DNA.

Genome editing tools and bioinformatics illuminate "dark matter"

Conventional sequencing methods analyze the entire genome of a patient. Now, the scientists designed a process to look at specific regions selectively. Brändl used the CRISPR-Cas system to cut DNA segments from the genome that contained the repeat region. These segments went through a few intermediate processing steps and were then funneled into the pores on the sequencing chip.

"If we had not pre-sorted the molecules in this way, their signal would have been drowned in the noise of the rest of the genome," says bioinformatician Pay Giesselmann. He had to develop an algorithm specifically for the interpretation of the electrical signals generated by the repeats: "Most algorithms fail because they do not expect the regular patterns of repetitive sequences." While Giesselmann's program "STRique" does not determine the genetic sequence itself, it counts the number of sequence repetitions with high precision. The program is freely available on the internet.

Numerous potential applications in research and the clinic

"With the CRISPR-Cas system and our algorithms, we can scrutinize any section of the genome - especially those regions that are particularly difficult to examine using conventional methods," says Müller, who is heading the project. "We created the tools that enable every researcher to explore the dark matter of the genome," says Müller. He sees great potential for basic research. "There is evidence that the repeats grow during the development of the nervous system, and we would like to take a closer look at this."

The physician also envisions numerous applications in clinical diagnostics. After all, repetitive regions are involved in the development of cancer, and the new method is relatively inexpensive and fast. Müller is determined to take the procedure to the next level: "We are very close to clinical application."

Credit: 
Max-Planck-Gesellschaft

Increase in cannabis cultivation or residential development could impact water resources

Cannabis cultivation could have a significant effect on groundwater and surface water resources when combined with residential use, evidence from a new study suggests.

Researchers in Canada and the US investigated potential reductions in streamflow, caused by groundwater pumping for cannabis irrigation, in the Navarro River in Mendocino County, California, and contextualized it by comparing it with residential groundwater use.

Reporting their findings in the journal Environmental Research Communications, they note that the combination of cannabis cultivation and residential use may cause significant streamflow depletion, with the largest impacts in late summer when streams and local fish species depend most on groundwater inflows.

Dr Sam Zipper, from the University of Kansas, USA, is the study's lead author. He said: "Cannabis is an emerging agricultural frontier, but thanks to its long illegal and quasi-legal history, we know very little about the impacts of cannabis cultivation on water resources.

"What we do know is that there has been a big increase in cannabis cultivation in in recent years. Researchers have found that the area under cultivation in Mendocino and Humboldt counties nearly doubled between 2012 and 2016.

"It has often been assumed most cannabis cultivators irrigate using surface water. But recent evidence from Northern California shows that groundwater is the primary irrigation water supply in this region. That means it is essential to understand how groundwater pumping for cannabis cultivation affects water resources, particularly given that regulations governing cannabis legalisation and management are currently being debated and designed in many US states."

The study team examined the impacts of ongoing groundwater pumping on streamflow and aquatic ecosystems, using an analytical depletion function - a newly developed tool for estimating streamflow depletion with low data and computational requirements.
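Analytical depletion functions build on classical well-hydraulics solutions for the fraction of pumping that is ultimately drawn from a nearby stream, combined with rules for apportioning that depletion among streams. One classical building block, the Glover and Balmer (1954) solution for an idealized aquifer, is shown below as an illustration; it is not the exact formulation used in the study.

```latex
% Glover & Balmer (1954) solution for an idealized, homogeneous aquifer
% (an illustrative building block, not the study's exact formulation):
%   Q_s : streamflow depletion rate, Q_w : pumping rate, d : well-to-stream
%   distance, S : storativity, T : transmissivity, t : time since pumping began.
\frac{Q_s}{Q_w} \;=\; \operatorname{erfc}\!\left(\sqrt{\frac{d^{2}\,S}{4\,T\,t}}\right)
```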

They found that both cannabis and residential groundwater use can cause significant streamflow depletion in streams with high salmon habitat potential in the Navarro River Watershed, with shifting drivers of impacts and implications through time.

Dr Zipper said: "Cannabis groundwater pumping has an important impact on streamflow during the dry season. But it is dwarfed by streamflow depletion caused by residential groundwater use, which is five times greater.

"However, cannabis pumping is a new and expanding source of groundwater depletion, which may further deplete a summer baseflow already stressed by residential water use and traditional agriculture."

The groundwater withdrawals at each residence or cannabis cultivation site were relatively small compared to irrigation for row crops like corn. However, with over 300 groundwater-irrigated cannabis cultivation sites and over 1300 residences in the Navarro River Watershed, these relatively small withdrawals added up to a significant impact on local streams.

"This study shows that it is not just big agriculture in the Central Valley that have potential to deplete streamflow by pumping groundwater," states co-author Dr. Jeanette Howard, from The Nature Conservancy California. "We showed that even in watersheds where there aren't big groundwater pumpers, that the cumulative impacts of many small groundwater pumpers has the potential to negatively impact stream flow."

Dr Zipper concluded: "Our results indicate that the emerging cannabis agricultural frontier is likely to increase stress on both surface water and groundwater resources, and groundwater-dependent ecosystems, particularly in areas already stressed by other groundwater users. Further residential development may have a similar effect. This study illustrates a valuable approach to assess potential for surface water depletion associated with dispersed groundwater pumping in other watersheds where this may be a concern.

"The ongoing legalisation of cannabis will require management approaches which consider the connection between groundwater and surface water to protect other water users and ecosystem needs."

Credit: 
IOP Publishing

NASA space data can cut disaster response times, costs

image: In 2011, heavy monsoon rains and La Niña conditions across Southeast Asia's Mekong River basin inundated and destroyed millions of acres of crops, displacing millions of people and killing hundreds. The floodwaters are visible as a solid blue triangle on the left side of this MODIS image from November 1, 2011.

Image: 
LANCE/EOSDIS MODIS Rapid Response Team, NASA's Goddard Space Flight Center

According to a new study, emergency responders could cut costs and save time by using near-real-time satellite data along with other decision-making tools after a flooding disaster.

In the first NASA study to calculate the value of using satellite data in disaster scenarios, researchers at NASA's Goddard Space Flight Center in Greenbelt, Maryland, calculated the time that could have been saved if ambulance drivers and other emergency responders had near-real-time information about flooded roads, using the 2011 Southeast Asian floods as a case study. Ready access to this information could have saved an average of nine minutes per emergency response and potentially millions of dollars, they said.

The study is a first step in developing a model to deploy in future disasters, according to the researchers.

With lives on the line, time is money

In 2011, heavy monsoon rains and La Niña conditions across Southeast Asia's Mekong River basin inundated and destroyed millions of acres of crops, displacing millions of people and killing hundreds. NASA Goddard's Perry Oddo and John Bolten investigated how access to near-real-time satellite data could have helped in the aftermath of the floods, focusing on the area surrounding Bangkok, Thailand.

The Mekong River crosses more than 2,000 miles in Southeast Asia, passing through parts of Vietnam, Laos, Thailand, Cambodia, China and other countries. The river is a vital source of food and income for the roughly 60 million people who live near it, but it is also one of the most flood-prone regions in the world.

In previous work, they helped develop an algorithm that estimated floodwater depth from space-based observations, then combined this data with information on local infrastructure, population and land cover. They used this algorithm to calculate the disaster risk for the region, considering the vulnerability and exposure of various land cover types, and mapped where the costliest damage occurred. Assessing the cost of damage can help emergency managers see which areas may be most in need of resources, and can also aid flood-mitigation planning and the development of disaster resilience. The team used this tool to support disaster recovery after the 2018 failure of the Xepian-Xe Nam Noy hydropower dam in Laos.
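The underlying risk calculation follows the standard framing used in disaster studies, in which risk combines the hazard itself with who and what is exposed to it and how vulnerable they are. The relation below states that framing in its simplest multiplicative form; the NASA algorithm may weight and combine these terms differently.

```latex
% Standard disaster-risk framing (simplest multiplicative form; the NASA
% algorithm may weight and combine these terms differently):
\text{Risk} \;=\; \text{Hazard} \times \text{Exposure} \times \text{Vulnerability}
```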

In the current study, the researchers investigated the value of near-real-time information on flooded roadways -- specifically, how much time could have been saved by providing satellite-based flood inundation maps to emergency responders in their drive from station to destination.

Flood depth information was calculated from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), and land cover from the NASA-USGS Landsat satellites. Infrastructure, road and population data came from NASA's Socioeconomic Data and Applications Center (SEDAC) and OpenStreetMap, an open-access geographic data source.

"We chose data that represented what we would know within a couple hours of the event," said Perry Oddo, an associate scientist at Goddard and the study's lead author. "We took estimates of flood depth and damage and asked how we could apply that to route emergency response and supplies. And ultimately, we asked, what is the value of having that information?"

First, the researchers used OpenRouteService's navigation service to chart the most direct routes between emergency dispatch sites and areas in need, without flooding information. Then they added near-real-time flooding information to the map, generating new routes that avoided the most highly flooded areas.

The direct routes contained about 10 miles' worth of flooded roadways in their recommendations. In contrast, the routes with flood information were longer, but avoided all highly flooded areas and contained just 5 miles of affected roadways. This made the flood-aware routes about 9 minutes faster than their baseline counterparts on average.
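Conceptually, the comparison is a shortest-path problem on the road network solved twice: once on the full network and once with flooded segments removed or heavily penalized. The sketch below is a hypothetical illustration of that idea using networkx; the study itself used OpenRouteService on real road and flood data, not this code.

```python
import networkx as nx

def flood_aware_route(G: nx.Graph, source, target, flooded_edges, penalty=None):
    """Shortest route by travel time that avoids flooded road segments.
    `flooded_edges` is a set of (u, v) pairs flagged as flooded; edge weights
    live in the hypothetical attribute "time" (minutes)."""
    H = G.copy()
    for u, v in flooded_edges:
        if H.has_edge(u, v):
            if penalty is None:
                H.remove_edge(u, v)          # hard avoidance
            else:
                H[u][v]["time"] *= penalty   # soft avoidance
    return nx.shortest_path(H, source, target, weight="time")

# Hypothetical usage: nodes are intersections, edge attribute "time" in minutes.
# G = nx.Graph()
# G.add_edge("station", "A", time=4); G.add_edge("A", "hospital", time=6)
# G.add_edge("station", "hospital", time=3)   # direct route, but flooded
# flood_aware_route(G, "station", "hospital", {("station", "hospital")})
# -> ["station", "A", "hospital"]
```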

"The response time for emergency responders is heavily dependent on the availability and fidelity of the mapped regions," said John Bolten, associate program manager of the NASA Earth Science Water Resources Program, and the study's second author. "Here we demonstrate the value of this map, especially for emergency responders, and assign a numeric value to it. It has a lot of value for planning future response scenarios, allowing us to move from data to decision-making."

A 9-minute reduction in response time may seem insignificant, but previous research has pegged that value in the millions of dollars, the team said. While Oddo and Bolten did not include explicit financial calculations in their model, one previous study in Southeast Asia showed that reducing emergency vehicles' response time by just one minute per trip over the course of a year could save up to $50 million.

Working together to save lives

The study represents a first step toward a model that can be used in future disasters, the team said.

NASA has participated in research and applications in Southeast Asia for over 15 years via several Earth Science efforts, including NASA's Disasters, Water Resources and Capacity Building Programs. Through these efforts, NASA works with regional partners -- including the Mekong River Commission (MRC), the Asian Disaster Preparedness Center (ADPC) and other agencies -- to provide Earth observation data and value-added tools for local decision makers in the Mekong River basin.

Oddo and Bolten have not only developed tools for partners, but also shared their results with Southeast Asian decision makers.

"The NASA Earth Sciences Applied Sciences Program works by collaborating with partners around the world," Bolten said. "This isn't just research; our partner groups desperately need this information. The work we've laid out here demonstrates the utility of satellite observations in providing information that informs decision making, and mitigates the impact of flooding disasters, both their monetary impact and perhaps loss of life."

Credit: 
NASA/Goddard Space Flight Center

Breast cancer recurrence after lumpectomy & RT is treatable with localized RT without mastectomy

Approximately 10% of breast cancer patients treated with lumpectomy (breast-conserving surgery [BCS]) and whole-breast irradiation (WBI) will have a subsequent in-breast local recurrence of cancer (IBTR) when followed long term. The surgical standard of care has been to perform mastectomy if breast cancer recurs following such breast-preserving treatment. However, a new multi-center study led by Douglas W. Arthur, MD, Chair of the Department of Radiation Oncology at Virginia Commonwealth University's Massey Cancer Center, provides the first evidence that partial breast re-irradiation is a reasonable alternative to mastectomy following tumor recurrence in the same breast. Unlike WBI, which exposes the entire breast to high-powered X-ray beams, partial-breast irradiation targets a high dose of radiation directly at the area where the breast tumor is located and thus avoids exposing the surrounding tissue to radiation.

"Effectiveness of Breast Conserving Surgery and 3-Dimensional Conformal Partial Breast Reirradiation for Recurrence of Breast Cancer in the Ipsilateral Breast The NRG-RTOG 1014 Phase 2 Clinical Trial" published in JAMA Oncology demonstrates that for patients experiencing an IBTR after a lumpectomy, breast conservation is achievable in 90% of patients using adjuvant partial breast re-irradiation, which results in an acceptable risk reduction of IBTR. The study investigators, from 15 institutions, analyzed late adverse events (those occurring one or more years after treatment), mastectomy incidence, distant metastasis-free survival, overall survival, and circulating tumor cell incidence. Median follow-up was 5.5 years.

Patients eligible for the study had experienced an IBTR of 3 centimeters or less, occurring one year or more after lumpectomy with WBI, and had undergone re-excision of the tumor with negative margins. Of 58 patients (median age 67) whose tumors were evaluable for analysis, 23 IBTRs were non-invasive and 35 were invasive; 91% were ≤2 cm in size and all were clinically node negative. Estrogen receptor was positive in 76%, progesterone receptor in 57%, and Her2Neu was over-expressed in 17%. IBTRs occurred in 4 patients, for a 5-year cumulative incidence of 5% (95% CI: 1%, 13%). Seven patients had ipsilateral mastectomies, for a 5-year cumulative incidence of 10% (95% CI: 4%, 20%). Distant metastasis-free survival and overall survival were both 95% (95% CI: 85%, 98%). Four patients (7%) had a grade 3 late treatment adverse event, and none had a grade ≥4 event.

"This is exciting data for women experiencing an IBTR after an initial lumpectomy and WBI who want to preserve their breast. Our study suggests that breast-conserving treatment may be a viable alternative to mastectomy," stated Douglas Arthur, MD, the Principle Investigator and Lead Author of the NRG-RTOG 1014 manuscript.

Credit: 
NRG Oncology

New electrodes could increase efficiency of electric vehicles and aircraft

image: Texas A&M doctoral student Paraskevi Flouda holds sample of new electrode.

Image: 
Texas A&M Engineering

The rise in popularity of electric vehicles and aircraft presents the possibility of moving away from fossil fuels toward a more sustainable future. While significant technological advancements have dramatically increased the efficiency of these vehicles, there are still several issues standing in the way of widespread adoption.

One of the most significant of these challenges has to do with mass, as even state-of-the-art electric vehicle batteries and supercapacitors are incredibly heavy. A research team from the Texas A&M University College of Engineering is approaching the mass problem from a unique angle.

Most of the research aimed at lowering the mass of electric vehicles has focused on increasing the energy density, thus reducing the weight of the battery or supercapacitor itself. However, a team led by Dr. Jodie Lutkenhaus, professor in the Artie McFerrin Department of Chemical Engineering, believes that lighter electric vehicles and aircraft can be achieved by storing energy within the structural body panels. This approach presents its own set of technical challenges, as it requires the development of batteries and supercapacitors with the same sort of mechanical properties as the structural body panels. Specifically, batteries and supercapacitor electrodes are often formed with brittle materials and are not mechanically strong.

In an article published in Matter, the research team described the process of creating new supercapacitor electrodes that have drastically improved mechanical properties. In this work, the research team was able to create very strong and stiff electrodes based on dopamine functionalized graphene and Kevlar nanofibers. Dopamine, which is also a neurotransmitter, is a highly adhesive molecule that mimics the proteins that allow mussels to stick to virtually any surface. The use of dopamine and calcium ions leads to a significant improvement in mechanical performance.

In fact, in the article the researchers report supercapacitor electrodes with the highest multifunctional efficiency reported to date for graphene-based electrodes (a metric that evaluates a multifunctional material on both its mechanical and electrochemical performance).
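A commonly used definition of multifunctional efficiency for structural energy storage sums the electrochemical and mechanical performance of the multifunctional material, each normalized to a monofunctional benchmark; the paper may use a variant of this form, so treat the expression below as an illustration rather than the authors' exact metric.

```latex
% A commonly used form of the multifunctional efficiency metric (illustrative;
% the paper may define it slightly differently):
%   Gamma, E     : specific energy (or capacitance) and stiffness/strength of the
%                  multifunctional electrode;
%   Gamma_0, E_0 : the same properties for monofunctional benchmark materials.
\eta_{\mathrm{mf}} \;=\; \frac{\Gamma}{\Gamma_{0}} \;+\; \frac{E}{E_{0}}
% eta_mf > 1 implies a net mass saving at the system level.
```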

This research leads to an entirely new family of structural electrodes, which opens the door to the development of lighter electric vehicles and aircraft.

While this work mostly focused on supercapacitors, Lutkenhaus hopes to translate the research into creating sturdy, stiff batteries.

Credit: 
Texas A&M University