
Deep-sea volcanoes: Windows into the subsurface

image: Image 2. Deep-sea hydrothermal vent chimneys on the Northwest Caldera Wall of Brothers volcano. Image courtesy of Anna-Louise Reysenbach, NSF, ROV Jason, and © 2018 Woods Hole Oceanographic Institution.

Image: 
Anna-Louise Reysenbach

Hydrothermally active submarine volcanoes account for much of Earth's volcanism and are mineral-rich biological hotspots, yet very little is known about the dynamics of microbial diversity in these systems. This week in PNAS, Reysenbach and colleagues show that at one such volcano, the Brothers submarine arc volcano northeast of New Zealand, the geological history and subsurface hydrothermal fluid paths underlie the complexity of microbial composition on the seafloor, and also provide insights into how past and present subsurface processes could be imprinted in the microbial diversity.

"Microbes in hot springs everywhere get their energy in part from the geochemistry of the hot water/fluids. It's the same for the Brothers volcano seafloor hot springs. Since both seawater- and magmatic gas-influenced hydrothermal systems coexist at Brothers, we predicted that the microbes in the active magmatic cone sites (Image 1) would be very different from those on the caldera wall (Image 2) that are affected largely by modified seawater," said Reysenbach, Professor of Microbiology at Portland State University. But what they did not expect was that there would also be two very different microbial communities in close proximity to each other on the caldera wall.

Recent International Ocean Discovery Program (IODP) drilling and geophysical measurements provide evidence that, after the caldera collapse of the original stratovolcano formed the present-day caldera, the earliest magmatic hydrothermal system became overprinted by a more seawater-dominated system. The authors show that one of the caldera-wall communities aligns with microbes from the magmatically influenced hydrothermal vents of the more recent cone that has grown up from the caldera floor. It is likely that a combination of different subsurface mineral assemblages intersected by the circulating hydrothermal fluids helps shape distinct microbial communities on the caldera wall.

"Having studied Brothers volcano for 20 years, this work really astounded me because for the first time I could join the dots from magmatic gases and hydrothermal fluids all the way to microbial communities," said coauthor Cornel de Ronde, Principal Scientist at GNS Science, New Zealand.

This study also describes more than 90 new bacterial and archaeal families and nearly 300 previously unknown genera, highlighting how little we know about the biodiversity of these systems and how the complexity of the subsurface geology may contribute to high microbial biodiversity. Furthermore, these sites harbor many potentially deep-branching and symbiotic microbes whose future study will add to our understanding of the evolution of life on Earth and the interactions shaping subsurface communities.

"I hope this work will encourage others to see that geology, geochemistry and even geophysics can actually go hand-in-glove with microbial studies. You just have to translate the various pieces of information into a language that is understood by all, and then you will discover new paradigms," said de Ronde.

Credit: 
Portland State University

Plant-based diet ramps up metabolism, according to new study

A plant-based diet boosts after-meal calorie burn, leads to weight loss, and improves cardiometabolic risk factors in overweight individuals, according to a new randomized controlled trial published in JAMA Network Open by researchers with the Physicians Committee for Responsible Medicine.

The study randomly assigned participants--who were overweight and had no history of diabetes--to an intervention or control group in a 1:1 ratio. For 16 weeks, participants in the intervention group followed a low-fat, plant-based diet based on fruits, vegetables, whole grains, and legumes with no calorie limit. The control group made no diet changes. Neither group changed exercise or medication routines, unless directed by their personal doctors.

Researchers used indirect calorimetry to measure how many calories participants burned after a standardized meal at both the beginning and end of the study. The plant-based group increased after-meal calorie burn by 18.7%, on average, after 16 weeks. The control group's after-meal burn did not change significantly.

"These findings are groundbreaking for the 160 million Americans struggling with overweight and obesity," says study author Hana Kahleova, MD, PhD, director of clinical research for the Physicians Committee. "Over the course of years and decades, burning more calories after every meal can make a significant difference in weight management."

Within just 16 weeks, participants in the plant-based group lowered their body weight by 6.4 kg (about 14 pounds), on average, compared to an insignificant change in the control group. The plant-based group also saw significant drops in fat mass and visceral fat volume--the dangerous fat found around the internal organs.

The researchers also teamed up with Yale University researchers Kitt Petersen, MD, and Gerald Shulman, MD, to track intramyocellular lipid and hepatocellular lipid--the accumulating fat in muscle and liver cells--in a subset of participants using magnetic resonance spectroscopy. Those in the plant-based group reduced the fat inside the liver and muscle cells by 34% and 10%, respectively, while the control group did not experience significant changes. Fat stored in these cells has been linked to insulin resistance and type 2 diabetes.

"When fat builds up in liver and muscle cells, it interferes with insulin's ability to move glucose out from the bloodstream and into the cells," adds Dr. Kahleova. "After just 16 weeks on a low-fat, plant-based diet, study participants reduced the fat in their cells and lowered their chances for developing type 2 diabetes."

The study also offered new insight into the link between fat within the cells and insulin resistance. The plant-based group decreased their fasting plasma insulin concentration by 21.6 pmol/L, decreased insulin resistance, and increased insulin sensitivity--all positive results--while the control group saw no significant changes.

The plant-based group also reduced total and LDL cholesterol by 19.3 mg/dL and 15.5 mg/dL, respectively, with no significant changes in the control group.

"Not only did the plant-based group lose weight, but they experienced cardiometabolic improvements that will reduce their risk for type 2 diabetes, heart disease, and other health problems," says Dr. Kahleova.

"I plan to stay on this diet for good. Not just for 16 weeks, but for life," reports study participant Sam T., who lost 34 pounds and improved his metabolism during the 16-week study. Since the study concluded, Sam has continued a plant-based diet, reached his goal weight, and begun running half-marathons and marathons.

Credit: 
Physicians Committee for Responsible Medicine

'Financial toxicity' of prostate cancer treatment: Radiation therapy has the greatest impact on patient finances

November 30, 2020 - For men with early-stage prostate cancer, choices about initial treatment carry varying risks of "financial toxicity," reports a study in The Journal of Urology®, Official Journal of the American Urological Association (AUA). The journal is published in the Lippincott portfolio by Wolters Kluwer.

The cost of cancer care can be high and the financial burden of prostate cancer treatment can be a significant source of stress for men and their families. "Cost of treatment and the associated financial burden could be an important factor in treatment decisions," says Daniel A. Barocas, MD, MPH, associate professor of urology and medicine at Vanderbilt University, Nashville, Tenn. and senior author of this new paper. Financial toxicity is a relatively new term in cancer care and can be defined as "the distress or hardship experienced by patients due to the cost of cancer treatment."

Differences in financial burden of initial treatments for localized prostate cancer

Prostate cancer is one of the most common cancers in men, with an estimated 190,000 new cases being diagnosed this year. Because their cancer has not spread beyond the prostate gland, men with localized disease have a choice of treatment options, including active surveillance, radiation, or surgery.

According to lead author Benjamin V. Stone, MD, "Modern treatments for localized prostate cancer provide comparable outcomes, with high rates of cancer control and patient survival." But do financial burdens differ according to the choice of initial prostate cancer treatment? To find out, Drs. Stone and Barocas and colleagues analyzed data on 2,121 patients from a follow-up study of treatment for localized prostate cancer.

The study included a questionnaire asking about the direct and indirect costs of prostate cancer and its treatment. Financial burdens were compared for patients choosing surgery (radical prostatectomy), external beam radiation therapy (EBRT), or active surveillance. (Other treatment groups were considered too small for analysis.)

In the first six months after prostate cancer treatment, 15 percent of patients said they experienced "a large or very large" burden of treatment costs. The financial burden was highest for patients who underwent EBRT: 11 percent of patients reported burdens consistent with financial toxicity.

Patients choosing surgery had higher initial financial burdens than those choosing active surveillance. However, these two groups were similar after one year. Financial burdens decreased over time: five years after treatment, only one to three percent of patients were still experiencing financial toxicity. After adjustment for other factors, the financial burdens associated with EBRT were up to twice as high as for surgery or active surveillance.

"Our research shows radiation therapy seems to have the highest financial burden for patients with clinically localized prostate cancer, compared to surgery or active surveillance," says Dr. Stone. "However, our study also shows there is a relatively small percentage of patients who experience a large or very large financial burden due to treatment, and the financial burden lessens over time."

Other factors contributing to higher financial burden included: higher-risk prostate cancer, younger age, non-white race, and lower education. "The association of financial burden with socioeconomic factors such as race and education is in line with the results of previous studies in the United States and worldwide," Drs. Stone and Barocas and coauthors write.

"Overall, our follow-up study suggests that radiation therapy has a longer-lasting burden of costs, compared to other initial treatment options for prostate cancer," the researchers conclude. They note some limitations of their study, including a lack of data on patients' income and other financial resources.

It's also unclear why the financial impact of EBRT is larger than for other treatment options. Dr. Stone adds, "Future studies should include data on out-of-pocket treatment costs as well as various types of indirect costs affecting the financial impact of prostate cancer treatment choices."

Credit: 
Wolters Kluwer Health

Mechanism of action of chloroquine/hydroxychloroquine for COVID-19 infection

image: Figure 1 - Cumulative confirmed deaths from Covid19 infection. Credit: ourworldindata.org.

Image: 
Dr. Alberto Boretti, Dr. Bimal Banik, Dr. Stefania Castelletto, Bentham Science Publishers

As of November 13, 2020, the recent serious outbreak of Covid19 had affected 53,796,098 people worldwide, with 37,555,669 recovered, 1,310,250 deaths (Figure 1), and a large number of open cases. It has required urgent medical treatment for numerous patients. No clinically approved vaccines or antiviral agents are available for Covid19. According to several studies, Chloroquine (CQ) and Hydroxychloroquine (HCQ) have shown promise as Covid19 antivirals, especially when administered with Azithromycin (AZM). However, there is significant controversy: many countries are limiting the use of CQ/HCQ, while others are accepting this therapeutic option (Figure 2). The work [1] addresses the open question of whether CQ and HCQ are helpful in Covid19 infection by analyzing the latest published literature on the subject while applying the scientific method.

Papers both for and against this therapeutic option are reviewed [1], and bias from conflicts of interest is taken into account. The rationale behind this use is clear: CQ/HCQ is effective against Covid19 in in-vitro and in-vivo laboratory studies, and therapy of Covid19-infected patients with CQ/HCQ is supported by evidence from trials and field experiences from multiple sources. The relevant works are reviewed, and the presence or absence of conflicts of interest is weighed against their conclusions. CQ/HCQ has been used with success in mild or medium-severity cases. No randomized controlled trial has, however, been conducted to support the safety and efficacy of CQ/HCQ and AZM for Covid19. Prophylaxis with CQ/HCQ is more controversial, but it generally has no side effects and is supported by pre-clinical studies. The mechanism of action against Covid19 is unclear; more research is needed to understand the mechanisms of action CQ/HCQ has against Covid19 infection, and this requires investigations with nanoscale imaging of viral infection of host cells. Most of the published works indicate CQ/HCQ is likely effective against Covid19 infection, almost 100% in prophylaxis and mild-to-medium-severity cases, and 60% in late-infection cases. The percentage of positive works is larger if works conducted under a probable conflict of interest are excluded from the list. The result is consistent with the updated analysis provided in [2], [3], which suggests high efficacy of CQ/HCQ in early treatment, and lower efficacy and controversial results only for late treatment. Statistically, 100% of early-treatment studies are positive, late-treatment studies are mixed with 70% reporting positive effects, 78% of pre-exposure prophylaxis studies are positive, and 100% of post-exposure prophylaxis studies also report positive effects [2], [3].

Credit: 
Bentham Science Publishers

Towards accessible healthcare for all in sub-Saharan Africa

Achieving universal access to healthcare is a key development priority and a major target of the UN's Sustainable Development Goal 3 (SDG 3). The COVID-19 pandemic has only reinforced this urgency. Rapid development and expansion of public, affordable healthcare infrastructure is particularly crucial in sub-Saharan Africa. In the region, communicable diseases are the leading cause of death, infant mortality rates are above five percent, and lengthy journeys to healthcare facilities undermine access to basic healthcare for millions. At least one sixth of the population lives more than two hours away from a public hospital, and one in eight people is at least one hour away from the nearest health centre.

A team of researchers from the RFF-CMCC European Institute on Economics and the Environment (EIEE), the Catholic University of Milan, Fondazione Eni Enrico Mattei and Decatab recently published a study in PNAS (Proceedings of the National Academy of Sciences) that provides a comprehensive, planning-oriented, inequality-focused analysis of different types of healthcare accessibility in sub-Saharan Africa, based on a state-of-the-art georeferenced database of public healthcare facilities.

The researchers, among them Soheil Shayegh, scientist at EIEE, develop a strategy to efficiently reduce the measured inequalities, based on a geospatial optimisation algorithm that identifies the optimal locations of future healthcare facilities of different tiers, given the projected distribution of each country's population by 2030, in order to satisfy the SDG 3 targets.
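The core idea of such a geospatial optimisation can be illustrated with a toy greedy heuristic for the facility-location problem. This is a sketch only: the study's actual algorithm works on georeferenced population rasters, travel times, facility tiers and bed capacities, and all coordinates and names below are hypothetical.

```python
import numpy as np

def greedy_facility_placement(pop_xy, pop_weights, candidate_xy, k):
    """Toy facility-location heuristic: repeatedly add the candidate site
    that most reduces total population-weighted distance to the nearest
    chosen facility. Illustrative only; not the study's actual method."""
    # distances from every candidate site to every population point
    d = np.linalg.norm(candidate_xy[:, None, :] - pop_xy[None, :, :], axis=2)
    nearest = np.full(len(pop_xy), np.inf)  # distance to nearest chosen site
    chosen = []
    for _ in range(k):
        # total weighted cost if each candidate were added next
        costs = (pop_weights * np.minimum(d, nearest)).sum(axis=1)
        best = int(costs.argmin())
        chosen.append(best)
        nearest = np.minimum(nearest, d[best])
    return chosen

# Hypothetical example: two clusters of population points, three candidate sites
pop = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
weights = np.ones(len(pop))
candidates = np.array([[0.0, 0.5], [10.0, 0.5], [5.0, 0.5]])
sites = greedy_facility_placement(pop, weights, candidates, k=2)
```

A real deployment would replace Euclidean distance with road-network travel times and add capacity constraints, but the greedy add-the-best-site loop conveys the flavour of the optimisation.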

"We were able to devise a spatial optimization framework to identify the optimal location and required bed capacity of public healthcare facilities in the region to ensure universal accessibility by 2030" explains Giacomo Falchetta, Research Fellow at FEEM and at the Catholic University of Milan and lead author of the study. "The work builds on different high-resolution data sources, the key one coming from the recent release from the World Health Organisation of a comprehensive, georeferenced database on the location of different typologies of public healthcare facilities in sub-Saharan Africa".

"Our methodology and the results of our analysis can inform local policymakers in their assessment and prioritization of healthcare infrastructure," explains Soheil Shayegh, scientist at the RFF-CMCC European Institute on Economics and the Environment (EIEE). "This is particularly relevant to tackling healthcare accessibility inequality, which is prominent not only within and between countries of sub-Saharan Africa, but also relative to the level of service provided by healthcare facilities."

"Optimized location, type, and capacity of each healthcare facility can be explored in an online interactive dashboard," adds Ahmed Hammad, Data Scientist at Decatab and at the Catholic University of Milan.

The results of the analysis suggest that to meet commonly accepted universal healthcare accessibility targets, sub-Saharan African countries will need to build ~6,200 new facilities by 2030. The researchers also estimate that about 2.5 million new hospital beds need to be allocated between new facilities and ~1,100 existing structures that require expansion or densification.

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change

Recycled concrete could be a sustainable way to keep rubble out of landfills

image: Shahria Alam, co-director of UBC's Green Construction Research and Training Centre and the lead investigator of the study.

Image: 
UBC Okanagan

Results of a new five-year study of recycled concrete show that it performs as well as, and in several cases even better than, conventional concrete.

Researchers at UBC Okanagan's School of Engineering conducted side-by-side comparisons of recycled and conventional concrete within two common applications--a building foundation and a municipal sidewalk. They found that the recycled concrete had comparable strength and durability after five years of being in service.

"We live in a world where we are constantly in search of sustainable solutions that remove waste from our landfills," says Shahria Alam, co-director of UBC's Green Construction Research and Training Centre and the lead investigator of the study. "A number of countries around the world have already standardized the use of recycled concrete in structural applications, and we hope our findings will help Canada follow suit."

Waste materials from construction and demolition contribute up to 40 per cent of the world's waste, according to Alam, and in Canada, that waste amounts to nine million tonnes per year.

The researchers tested the compressive strength and durability of recycled concrete compared with conventional concrete.

Concrete is typically composed of fine and coarse aggregate bonded together with an adhesive paste. In recycled concrete, crushed material from demolished concrete replaces the natural aggregate used to produce new concrete.

"The composition of the recycled concrete gives that product additional flexibility and adaptability," says Alam. "Typically, recycled concrete can be used in retaining walls, roads and sidewalks, but we are seeing a shift towards its increased use in structures."

Among the findings, the researchers discovered that the long-term performance of recycled concrete compared adequately with its conventional form, with no issues over the five years of the study. In fact, the recycled concrete had higher compressive strength after 28 days of curing, and maintained greater or equal strength throughout the period of the research.

The researchers suggest the recycled concrete can be a 100 per cent substitute for non-structural applications.

"As innovations continue in the composition of recycled concrete, we can envision a time in the future where recycled concrete can be a substitute within more structural applications as well."

Credit: 
University of British Columbia Okanagan campus

Hitting the quantum 'sweet spot': Researchers find best position for atom qubits in silicon

image: Atomic-scale image of two interacting donors in silicon

Image: 
CQC2T

Researchers from the Centre of Excellence for Quantum Computation and Communication Technology (CQC2T) working with Silicon Quantum Computing (SQC) have located the 'sweet spot' for positioning qubits in silicon to scale up atom-based quantum processors.

Creating quantum bits, or qubits, by precisely placing phosphorus atoms in silicon - the method pioneered by CQC2T Director Professor Michelle Simmons - is a world-leading approach in the development of a silicon quantum computer.

In the team's research, published today in Nature Communications, precision placement has proven to be essential for developing robust interactions - or coupling - between qubits.

"We've located the optimal position to create reproducible, strong and fast interactions between the qubits," says Professor Sven Rogge, who led the research.

"We need these robust interactions to engineer a multi-qubit processor and, ultimately, a useful quantum computer."

Two-qubit gates - the central building block of a quantum computer - use interactions between pairs of qubits to perform quantum operations. For atom qubits in silicon, previous research has suggested that for certain positions in the silicon crystal, interactions between the qubits contain an oscillatory component that could slow down the gate operations and make them difficult to control.

"For almost two decades, the potential oscillatory nature of the interactions has been predicted to be a challenge for scale-up," Prof. Rogge says.

"Now, through novel measurements of the qubit interactions, we have developed a deep understanding of the nature of these oscillations and propose a strategy of precision placement to make the interaction between the qubits robust. This is a result that many believed was not possible."

Finding the 'sweet spot' in crystal symmetries

The researchers say they've now uncovered that exactly where you place the qubits is essential to creating strong and consistent interactions. This crucial insight has significant implications for the design of large-scale processors.

"Silicon is an anisotropic crystal, which means that the direction the atoms are placed in can significantly influence the interactions between them," says Dr Benoit Voisin, lead author of the research.

"While we already knew about this anisotropy, no one had explored in detail how it could actually be used to mitigate the oscillating interaction strength."

"We found that there is a special angle, or sweet spot, within a particular plane of the silicon crystal where the interaction between the qubits is most resilient. Importantly, this sweet spot is achievable using existing scanning tunnelling microscope (STM) lithography techniques developed at UNSW."

"In the end, both the problem and its solution directly originate from crystal symmetries, so this is a nice twist."

Using an STM, the team are able to map out the atoms' wave function in 2D images and identify their exact spatial location in the silicon crystal - first demonstrated in 2014 with research published in Nature Materials and advanced in a 2016 Nature Nanotechnology paper.

In the latest research, the team used the same STM technique to observe atomic-scale details of the interactions between the coupled atom qubits.

"Using our quantum state imaging technique, we could observe for the first time both the anisotropy in the wavefunction and the interference effect directly in the plane - this was the starting point to understand how this problem plays out," says Dr Voisin.

"We understood that we had to first work out the impact of each of these two ingredients separately, before looking at the full picture to solve the problem - this is how we could find this sweet spot, which is readily compatible with the atomic placement precision offered by our STM lithography technique."

Building a silicon quantum computer atom by atom

UNSW scientists at CQC2T are leading the world in the race to build atom-based quantum computers in silicon. The researchers at CQC2T, and its related commercialisation company SQC, are the only team in the world with the ability to see the exact position of their qubits in the solid state.

In 2019, the Simmons group reached a major milestone in their precision placement approach - with the team first building the fastest two-qubit gate in silicon by placing two atom qubits close together, and then controllably observing and measuring their spin states in real-time. The research was published in Nature.

Now, with the Rogge team's latest advances, the researchers from CQC2T and SQC are positioned to use these interactions in larger scale systems for scalable processors.

"Being able to observe and precisely place atoms in our silicon chips continues to provide a competitive advantage for fabricating quantum computers in silicon," says Prof. Simmons.

The combined Simmons, Rogge and Rahman teams are working with SQC to build the first useful, commercial quantum computer in silicon. Co-located with CQC2T on the UNSW Sydney campus, SQC's goal is to build the highest quality, most stable quantum processor.

Credit: 
Centre for Quantum Computation & Communication Technology

The wily octopus: king of flexibility

video: Examples of arm deformation types in an octopus (O. bimaculoides). From: E.B. Lane Kennedy, Kendra C. Buresch, Preethi Boinapally, and Roger T. Hanlon (2020) Octopus Arms Exhibit Exceptional Flexibility. Scientific Reports, DOI: 10.1038/s41598-020-77873-7

Image: 
Roger Hanlon Lab/Marine Biological Laboratory

WOODS HOLE, Mass. -- Octopuses have the most flexible appendages known in nature, according to a new study in Scientific Reports. In addition to being soft and strong, each of the animal's eight arms can bend, twist, elongate and shorten in many combinations to produce diverse movements. But to what extent can they do so, and is each arm equally capable? Researchers at the Marine Biological Laboratory (MBL) filmed 10 octopuses over many months while presenting them with a variety of challenges, and recorded 16,563 examples of these arm movements.

Amazingly, all eight arms could perform all four types of deformation (bend, twist, elongate, shorten) throughout their length. Moreover, each type of movement could be deployed in multiple orientations (e.g. left, right, up, down, 360°, etc.). Especially noteworthy was the clockwise and counterclockwise twisting that could occur throughout each arm during bending, shortening or elongating. This twisty strong arm is exceptionally flexible by any standard.

"Even our research team, which is very familiar with octopuses, was surprised by the extreme versatility of each of the eight arms as we analyzed the videos frame-by-frame," said co-author and MBL Senior Scientist Roger Hanlon. "These detailed analyses can help guide the next step of determining neural control and coordination of octopus arms, and may uncover design principles that can inspire the creation of next-generation soft robots."

Engineers have long wished to design "soft robotic arms" with greater agility, strength and sensing capability. Currently, most robotic arms require hard materials and joints of many configurations, all of which have limitations. The octopus presents a novel model for future robotic designs. Octopus arms are similar in function to the human tongue and the elephant trunk; they are muscular hydrostats that use incompressible muscle in different arrangements to produce movement. The current study provides a basis for investigating motor control of the entire octopus arm. Soft, ultra-flexible robotic arms could enable many new applications, e.g., inspecting unstructured and cluttered environments such as collapsed buildings, or gentler medical inspection of alimentary or respiratory pathways.

Credit: 
Marine Biological Laboratory

Researchers explore population size, density in rise of centralized power in antiquity

Early populations shifted from quasi-egalitarian hunter-gatherer societies to communities governed by a centralized authority in the middle to late Holocene, but how the transition occurred still puzzles anthropologists. A University of Maine-led group of researchers contends that population size and density served as crucial drivers.

Anthropology professor Paul "Jim" Roscoe led the development of Power Theory, a model emphasizing the role of demography in political centralization, and applied it to the shift in power dynamics in prehistoric northern coastal societies in Peru.

To test the theory, he, Daniel Sandweiss, professor of anthropology and Quaternary and climate studies, and Erick Robinson, a postdoctoral anthropology researcher at Utah State University, created a summed probability distribution (SPD) from 755 radiocarbon dates spanning 10,000-1,000 B.P. (before present).
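An SPD is conceptually simple: each radiocarbon date contributes a probability density over calendar time, and the densities are summed, so peaks indicate periods with more dated material, a common (if debated) proxy for relative population size. A minimal sketch, using hypothetical dates and skipping the calibration-curve step (e.g. against IntCal20) that real analyses apply first:

```python
import numpy as np

def summed_probability(dates_bp, sigmas, grid):
    """Sum one normal density per radiocarbon date (mean, 1-sigma error)
    over a grid of years B.P., normalised by the number of dates.
    Real SPDs calibrate each date first; this sketch omits that step."""
    total = np.zeros_like(grid, dtype=float)
    for mu, sigma in zip(dates_bp, sigmas):
        total += np.exp(-0.5 * ((grid - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return total / len(dates_bp)

grid = np.arange(1000, 10001)  # years B.P., matching the study's window
spd = summed_probability([8200, 5400, 5350, 2100], [60, 40, 50, 30], grid)
# two overlapping dates near 5350-5400 B.P. reinforce each other,
# producing the tallest peak in the summed curve
```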

The team found a correlation between the tenets of their Power Theory, or that population density and size influence political centralization, and the change in power dynamics in early Peruvian societies.

The team shared their findings in a report published in Philosophical Transactions of the Royal Society B.

"I've always been interested in how, in the space of just five to 10,000 years, humans went from biddy little hunter-gatherer groups in which nobody could really push anyone else around to vast industrial states governed by a few people with enormous power. From my fieldwork and other research in New Guinea, it became clear that leaders mainly emerged in large, high-density populations, and Power Theory explained why," Roscoe says. "Unfortunately, it was difficult until recently for archaeologists to get a handle on the size and densities of populations in the past. SPD techniques are a major help in bringing these important variables into understanding how human social life underwent this dramatic transformation."

Scientists have previously posited that population in northern coastal Peru rose during the Late Preceramic, Initial, Early Horizon and Early Intermediate periods, or between about 6,000-1,200 B.P. The SPD from Roscoe and his colleagues validates the notion.

The people who settled in the coastal plain first lived as mobile hunter-gatherers or incipient horticulturalists in low density groups, according to researchers. Millennia afterward in the Late Preceramic period, however, several developments brought increased interaction and shareable resources. People began farming, developed irrigation systems and became more settled as time passed. Eventually, some of the world's first 'pristine' states formed in the plain.

The onset and growth of agriculture, irrigation and sedentism, propelled by upticks in population size and density, fostered the capacity of political agents to interact with and manipulate others. Political centralization and hierarchy formed as a result, according to researchers.

Roscoe and his colleagues demonstrated through their radiocarbon SPD that the rise of centralized authorities in early Peruvian communities that resulted from farming, irrigation and settlement coincided with an uptick in population size. The results of their work demonstrate "a broad, low-resolution congruence between the expectations of Power Theory and what is currently known about coastal Peruvian antiquity," they wrote in their study.

The project also highlights the capability of SPDs for examining the influence of demography in the growth of prehistoric political centralization. Determining the extent of that influence, however, requires additional study.

"We're hoping this work demonstrates the value of SPDs for understanding the role of demography in the emergence and development of power centers on Earth," Roscoe says. "What we need now is to increase the size of our SPD databases and filter out some of the weaknesses we know they contain."

Credit: 
University of Maine

Business closures, partial reopenings due to COVID-19 could cost the US $3-$5 trillion

The COVID-19 pandemic could result in net losses of $3.2 trillion to $4.8 trillion in U.S. Real Gross Domestic Product (GDP) over the course of two years, a new USC study finds.

The pandemic's economic impact depends on factors such as the duration and extent of the business closures, the gradual reopening process, infection rates and fatalities, avoiding public places, and pent-up consumer demand, according to the research by the USC Center for Risk and Economic Analysis of Terrorism Events (CREATE).

Real GDP is a measure, adjusted for inflation, that reflects the value and the quantity of final goods and services produced by a nation's economy in a given year.

"In a best-case scenario, we would see containment measures, such as masks and social distancing become more widespread, and possibly even a vaccine by next year, and then businesses and institutions would be able to reopen at an accelerated pace," said Adam Rose, study team leader who is the director of CREATE and a research professor at the USC Price School of Public Policy.

"But in a worst-case scenario, these countermeasures wouldn't materialize, and reopenings would happen slowly, particularly because we would continue to see waves of infection," he said. Then, more people would likely lose their jobs, and the impacts of this disaster would continue to mount."

The researchers found that the mandatory closures and partial reopenings alone could result in a 22% loss of U.S. GDP in just one year and an even greater loss of GDP over two years. Other key factors, though, will influence how disastrous the losses may be, they noted.

The research team noted that China has not sustained such losses due to aggressive containment measures resulting in a shorter lockdown period. They project that in a worst-case scenario, the U.S. GDP loss due to COVID will more than quadruple that of China.

The study was published on Nov. 30 in the journal Economics of Disasters and Climate Change.

In early March, several states responded to a rise in COVID-19 cases by ordering the closures of non-essential businesses such as restaurants, bars, salons and retail stores. Many also halted or reduced public services to limit the spread.

Researchers at CREATE who are experts on modeling economic consequences of disasters analyzed the potential economic impact in three scenarios ranging from moderate to disastrous.

Using a computerized economic model, the researchers accounted for these other factors in the three scenarios. They varied the decline in the workforce due to workers becoming sick with or dying of the virus, workers adopting new behaviors like staying home to avoid infection, increased demand for COVID healthcare, potential resilience through telework, increased demand for communication services, and increased pent-up consumer demand.

The researchers conducted a synthesis of the literature of projections on the severity and possible duration of the pandemic. For the scenarios, which span from March 2020 through February 2022, this compilation of findings indicated that the number of COVID-related deaths in the United States could range from more than 300,000 to, in a worst-case scenario, 1.75 million.

Anywhere from 365,000 to as many as 2.5 million COVID patients could end up in the ICU, while another 860,000 to nearly 6 million patients may be hospitalized but not treated in the ICU. The projected number of people who will be treated for COVID as outpatients may vary from about 2.6 million to 18 million.

Among other highlights of the study, the researchers projected:

54 million to 367 million work days would be lost due to people getting sick with or dying from COVID.

2 million to nearly 15 million work days would be lost due to employees staying home to care for sick loved ones.

Job losses could range from 14.7% to 23.8%, and in the worst case affect an estimated 36.5 million workers.

Demand for health care has risen with COVID infections. Medical expenses due to COVID-19 from March 2020 to February 2022 could range from nearly $32 billion to $216 billion.

A loss in demand for some services -- such as public transit, school attendance, restaurant dining and travel -- as people avoid public places and services to reduce their risk of exposure.

An uptick in demand for communication services, as many employees during this pandemic have had to work from home.

An increase in pent-up demand, since consumers are unable to spend money on big-ticket items such as cars, as well as on travel, restaurants, hotels, merchandise, fitness, sporting events and concerts during the closures and, to a lesser extent, during the phased reopenings.

While the researchers have found that the mandatory closures and re-openings are the most influential factor in the economy's decline, consumer avoidance behavior also has a significant effect.

For the study, the researchers assumed that various people avoided work, did not attend in-person classes at schools, and stopped going to restaurants, activities and social gatherings to reduce their risk of infection.

"Because people have had to avoid activities, this has had a significant impact on economic losses," said Dan Wei, a CREATE research fellow and research associate professor at the USC Price School for Public Policy. "Based on our model, we estimate that avoidance behavior can result in nearly $900 billion losses in U.S. GDP in the worst-case scenario. Because consumers in places like California can't engage in many activities like eating inside a restaurant, they are saving their money."

The economic losses from closures and avoidance behavior could be partly offset by increased consumer spending after reopening, the researchers said.

"Pent-up demand is one of the most influential factors for the economy in this pandemic. While the mandatory closures and partial reopenings drive most of the economic decline, the extent to which pent-up demand leads to an increase in consumption after reopening, can be crucial to the economic recovery," said Terrie Walmsley, a USC CREATE research fellow and an adjunct assistant professor of practice in economics at the USC Dornsife College of Letters, Arts and Sciences.

"The key question is: When will we see a complete reopening across this country? We simply cannot predict that, especially in light of the fact that we have not gained control of the spread of the disease," Rose said.

Credit: 
University of Southern California

Warbler coloration shaped by evolution via distinct paths

image: According to a new study, the evolution of two color related genes helps explain the striking diversity in colors and patterns among wood warblers, like the blackburnian warbler pictured here.

Image: 
Darrell Cochran

UNIVERSITY PARK, Pa. -- Two genes that are important for the diverse colors and patterns of warbler plumage have evolved through two very different processes, according to a new study led by Penn State researchers. These evolutionary processes could help explain the rapid evolution of these songbirds into so many unique species.

"Wood warblers are an incredibly colorful and diverse group of birds, with more than a hundred species in total," said Marcella Baiz, postdoctoral researcher at Penn State and first author of the paper. "These species arose very quickly in evolutionary time in what biologists call a species radiation. To better understand this radiation, we studied genetic regions related to plumage coloration within a particularly colorful subset of warblers."

The research team sequenced the genomes of all 34 species within the Setophaga genus of wood warblers and created a phylogenetic tree that clarifies how the species are related to one another. Then, they focused on nine closely related pairs of "sister species," each pair the result of one species diverging into two. Seeing whether similar evolutionary processes are at play in each of the pairs allowed the researchers to gain a better understanding of the overall radiation.

A paper describing their results appears today (Nov. 30) in the journal Current Biology.

"In most cases it is difficult to get at the genes underlying the diversification process because it can be hard to link specific genes to specific traits, like colors," said David Toews, assistant professor of biology at Penn State and leader of the research team. "But many species of warblers readily interbreed, producing hybrid offspring with a mix of the parent species' traits, so we were previously able to link certain color patterns with their underlying genetic regions. In this study, we focused on two coloration genes, but were able to study them across all the species in this large genus, to give us a window into the rest of the radiation."

The first gene, the Agouti-signaling protein (ASIP), is involved in producing the pigment melanin, which underlies brown and black plumage in these warblers. Within each pair of sister species that differed in the amount or location of black or brown in their plumage, the team found genetic differences near ASIP, as predicted.

"We created an evolutionary tree based solely on the ASIP gene region, which more clearly shows how the gene has changed across the genus," said Baiz. "The patterns in this gene tree mirror patterns in the phylogenetic tree based on what we see across the whole genome. This implies that the differences we see in ASIP resulted from mutations that arose independently in different species. However, the gene tree from the second gene, BCO2, showed a very different pattern that did not match up with the whole genome tree, which suggests different processes are at play."

The second gene, beta-carotene oxygenase 2 (BCO2), is involved in producing carotenoid pigments, which result in bright yellow, red, and orange plumage. The researchers suggest that a process called introgression -- the exchange of genes between species that have evolved separately -- may explain why the pattern of genetic changes in BCO2 didn't align with the overall radiation of the group.

"Introgression can occur when two separate species hybridize, and the hybrid offspring go on to mate with one of the original species," said Baiz. "After several generations, genetic material from one species can be incorporated into the other. The signal of this kind of ancient introgression can be maintained in the genomes of living individuals--like how ancestry tests can reveal how many Neanderthal genes you have. In this instance, we unexpectedly found evidence for ancient introgression at BCO2 in two otherwise distantly related warblers in this genus."

The researchers found evidence of introgression involving the yellow warbler and magnolia warbler and involving the prairie warbler and vitelline warbler, all species with colorful carotenoid coloration in their feathers. However, they note that with the current data it is difficult to tell the directionality of the gene transfer.

"One possibility is that the magnolia warbler version of BCO2 introgressed into the yellow warbler, and this 'new to them' version produced a broader deposition of carotenoids in the feathers of the yellow warbler," said Toews. "It is fun to think that ancient introgression is what made the yellow warbler so yellow!"

This is one of the first examples of carotenoid gene transfer among vertebrates. Collectively, the results of this study suggest that both introgression and a more standard mode of evolution, where mutations occur and are passed from parent to offspring, have played a role in generating the diversity of colors in this genus and could have helped enable the extreme diversification of warblers.

In the future, the researchers hope to link specific mutations in these genes to changes in plumage coloration and to map out the molecular pathways involved in pigment production. They would also like to expand their study to all 110 species of warblers.

"There's a possibility that there may be introgression from another genus entirely," said Toews. "Expanding to other warblers would allow us to explore this possibility, and to more broadly understand the radiation of these fascinating birds."

Credit: 
Penn State

Penn researchers unlock the door to tumor microenvironment for CAR T cells

PHILADELPHIA-- The labyrinth of jumbled blood vessels in the tumor microenvironment remains one of the toughest blockades for cellular therapies to penetrate and treat solid tumors. Now, in a new study published online today in Nature Cancer, Penn Medicine researchers found that combining chimeric antigen receptor (CAR) T cell therapy with a PAK4 inhibitor drug allowed the engineered cells to punch their way through and attack the tumor, leading to significantly enhanced survival in mice.

The researchers discovered in laboratory experiments that vascularization in solid tumors is driven by the genetic reprogramming of tumor endothelial cells--which line the walls of blood vessels--caused by an enzyme known as PAK4. Knocking out that enzyme reduced abnormal tumor vascularity and improved T cell infiltration and CAR T cell immunotherapies in glioblastoma (GBM) mouse models, the Penn team found. GBM, the most common and aggressive type of brain cancer diagnosed in more than 22,000 Americans every year, is known for its prominent and abnormal vascularity and being immunologically "cold."

"The response in GBM patient from CAR T cell therapies is universally poor because the CAR T cells have a problem getting into the tumor," said senior author Yi Fan, PhD, an associate professor of Radiation Oncology in the Perelman School of Medicine at the University of Pennsylvania. "Our study shows that turning off this endothelial cell genetic reprogramming with a PAK4 inhibitor may help open the door to let both T cells and engineered T cells reach the tumor to do their job."

First, the team performed a kinome-wide screening analysis of more than 500 kinases, or enzymes, that regulate blood vessel activation in human endothelial cells from GBM patients. They found that PAK4, which has previously been shown to drive growth in solid tumors, was the culprit. Knocking that enzyme out in GBM endothelial cells with a drug, they found, restored expression of adhesion proteins that are important for the recruitment of immune cells and stimulated T cell infiltration into the tumors. Notably, knocking down PAK4 shifted the endothelial cells' morphology from the spindle-shaped appearance characteristic of GBM to a cobblestone appearance, indicating a less chaotic formation of blood vessels. In other words, it "normalized" the microenvironment.

Next, in a GBM mouse model, they found that inhibiting PAK4 reduced vascular abnormalities, improved T cell infiltration, and inhibited tumor growth. Approximately 80 percent of PAK4 knockout mice were still alive at 60 days, when the experiment was terminated, whereas all wild-type mice died within 40 days of tumor implantation.

Another experiment, combining an EGFRvIII-directed CAR T cell therapy with a PAK4 inhibitor, showed a nearly 80 percent reduction in tumor growth five days after infusion compared with mice that received CAR T therapy alone. Notably, nearly 40 percent of the mice in the combination therapy group were still alive 33 days after tumor implantation, by which point all of the mice in the other groups had died.

Targeting PAK4 may provide a unique opportunity to recondition the tumor microenvironment, as well as a much-needed opportunity to improve T cell-based cancer immunotherapy for solid tumors, the authors said. The findings also support the idea that vessel normalization by PAK4 inhibition can improve drug delivery and reduce the oxygen deprivation known as hypoxia, leading to improved tumor responses to targeted therapy, radiation and chemotherapy.

"To our knowledge, we are the first to show how we can reprogram the whole vascular microenvironment with a PAK4 inhibitor and promote cellular therapy," Fan said. "Importantly, this may not only be limited to brain tumors; it could potentially be used for all types, including breast, pancreatic, and others, because vascular abnormality is a common feature for almost every solid tumor."

Credit: 
University of Pennsylvania School of Medicine

Family pigs prefer their owner's company as dogs do, but they might not like strangers

image: Researchers at ELTE Department of Ethology in Budapest compared how young companion dogs and companion pigs seek human proximity in a novel environment.

Image: 
Photo: Tünde Szalai

Researchers at the ELTE Department of Ethology in Budapest compared how young companion dogs and companion pigs seek human proximity in a novel environment. It turned out that both dogs and pigs stay close to their owner if no other person is present; but if a stranger is also there, only dogs stay near the humans, while pigs prefer to stay away. The study reveals that living in a human family is not enough for a companion animal to develop a general preference for humans early in life; species differences also weigh in.

Dogs are known for being especially social with humans from a very early age. Even dogs with limited contact with humans readily approach people and seek human proximity, and dogs can also distinguish familiar from unfamiliar humans. This special sociability of dogs has been assumed to be the result of both their domestication and socialization with humans during early development. "We were curious whether being kept as a family member from a very early age, like dogs, would induce similar proximity-seeking behaviours towards the owner in another social domestic species, the pig," explains PhD student Paula Pérez Fraga.

Nowadays, the domestic pig, especially the miniature variant, is a popular companion animal that occupies a similar social niche in the human family as the family dog. This new role creates the need to better understand the pig-human relationship in the household environment, and especially the relationship of pigs with their owners. "The miniature pigs that are part of our Family Pig Project are raised in human families from 6-8 weeks of age. This not only provides a unique opportunity to investigate the pig-human relationship, but also allows us to directly compare their human-oriented behaviour to that of dogs," says Attila Andics, principal investigator of the MTA-ELTE 'Lendület' Neuroethology of Communication Research Group.

"In this study, the animals were led into a novel room, where their owner was paired with a familiar object or with an unfamiliar person, on two separate occasions. The subjects could freely choose to be anywhere in the room, e.g. staying near or further away from any of the humans or the object"-says Linda Gerencsér, research fellow at the Research Group. "We found that both pigs and dogs preferred to stay near their owner over the familiar object. However, neither species preferred their owner over the stranger, but for seemingly different reasons. Dogs preferred to stay near both humans over being elsewhere, whereas pigs rather stayed away from the social partners, which might reflect slight fear towards the unfamiliar human." Interestingly, the research also revealed a difference in how the two species behaved near their owner. "Pigs needed more physical contact" - adds Pérez Fraga. "They touched the owner with the snout, in a similar manner as they do with conspecifics, and they climbed to the owner's lap."

Watch our study's video abstract: https://www.youtube.com/watch?v=-srppJ6UupY.

This is the first study to investigate the proximity-seeking behaviours of miniature pigs towards their owner. "The similar previous experience of both pigs and dogs might lead to a similar role of the owner for the two species," adds Gerencsér. "However, being kept as a family member might not be enough for developing a general preference for human company in pigs. Species predispositions might also play an important role: dogs have a longer socialization period, and humans are a more salient social stimulus for them."

Credit: 
Eötvös Loránd University

New cyberattack can trick scientists into making toxins or viruses -- Ben-Gurion University researchers

BEER-SHEVA, Israel...November 30, 2020 - An end-to-end cyber-biological attack, in which unwitting biologists may be tricked into generating dangerous toxins in their labs, has been discovered by Ben-Gurion University of the Negev cyber-researchers.

According to a new paper just published in Nature Biotechnology, it is currently believed that a criminal needs to have physical contact with a dangerous substance to produce and deliver it. However, malware could easily replace a short sub-string of the DNA on a bioengineer's computer so that they unintentionally create a toxin producing sequence.

"To regulate both intentional and unintentional generation of dangerous substances, most synthetic gene providers screen DNA orders which is currently the most effective line of defense against such attacks," says Rami Puzis, head of the BGU Complex Networks Analysis Lab, a member of the Department of Software and Information Systems Engineering and Cyber@BGU. California was the first state in 2020 to introduce gene purchase regulation legislation.

"However, outside the state, bioterrorists can buy dangerous DNA, from companies that do not screen the orders," Puzis says. "Unfortunately, the screening guidelines have not been adapted to reflect recent developments in synthetic biology and cyberwarfare."

A weakness in the U.S. Department of Health and Human Services (HHS) guidance for DNA providers allows screening protocols to be circumvented using a generic obfuscation procedure that makes it difficult for the screening software to detect the toxin-producing DNA. "Using this technique, our experiments revealed that 16 out of 50 obfuscated DNA samples were not detected when screened according to the 'best-match' HHS guidelines," Puzis says.
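A toy illustration of why substring-style "best-match" screening can be evaded by obfuscation; the sequences, window length, and splitting trick below are invented for illustration and are not the paper's actual procedure:

```python
# The screener flags an order if a long-enough exact substring of a
# watch-listed sequence appears in it.
TOXIN = "ATGGCGTTTACCGGA"   # stand-in for a regulated sequence
WINDOW = 12                  # minimum match length the screener uses

def flagged(order, toxin=TOXIN, window=WINDOW):
    return any(toxin[i:i + window] in order
               for i in range(len(toxin) - window + 1))

direct_order = "CCC" + TOXIN + "GGG"

# Obfuscation: split the toxin across two fragments separated by filler
# that could later be removed in the lab (e.g. by a gene-editing step),
# so no long match to the watch-listed sequence survives in the order.
half = len(TOXIN) // 2
obfuscated_order = TOXIN[:half] + "TTTTTTTTTT" + TOXIN[half:]

print(flagged(direct_order))      # True  -> screened out
print(flagged(obfuscated_order))  # False -> slips past the screener
```

Real screening guidance is more involved than an exact-substring check, but the same logic gap (no single long match, yet the pieces are all present) is what the obfuscation exploits.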

The researchers also found that accessibility and automation of the synthetic gene engineering workflow, combined with insufficient cybersecurity controls, allow malware to interfere with biological processes within the victim's lab, closing the loop with the possibility of an exploit written into a DNA molecule.

The DNA injection attack demonstrates a significant new threat of malicious code altering biological processes. Although simpler attacks that could harm biological experiments exist, the researchers chose to demonstrate a scenario that exploits weaknesses at three levels of the bioengineering workflow: software, biosecurity screening, and biological protocols. The scenario highlights the opportunities for applying cybersecurity know-how in new contexts such as biosecurity and gene coding.

"This attack scenario underscores the need to harden the synthetic DNA supply chain with protections against cyber-biological threats," Puzis says. "To address these threats, we propose an improved screening algorithm that takes into account in vivo gene editing. We hope this paper sets the stage for robust, adversary resilient DNA sequence screening and cybersecurity-hardened synthetic gene production services when biosecurity screening will be enforced by local regulations worldwide.

Credit: 
American Associates, Ben-Gurion University of the Negev

More than one-third of children with COVID-19 show no symptoms: study

More than one-third of kids who have COVID-19 are asymptomatic, according to a University of Alberta study that suggests youngsters diagnosed with the disease may represent just a fraction of those infected.

"The concern from a public health perspective is that there is probably a lot of COVID-19 circulating in the community that people don't even realize," said Finlay McAlister, a professor of medicine in the Faculty of Medicine & Dentistry.

"When we see reports of 1,200 new cases per day in the province of Alberta, that's likely just the tip of the iceberg--there are likely many people who don't know they have the disease and are potentially spreading it," he said.

For the study, McAlister's team analyzed results for 2,463 children who were tested during the first wave of the pandemic--March to September--for COVID-19 infection.

All told, 1,987 children had a positive test result for COVID-19 and 476 had a negative result. Of children who tested positive, 714--35.9 per cent--reported being asymptomatic.
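The reported share follows directly from the counts:

```python
# Quick check of the asymptomatic share reported in the study.
positive_tests = 1987
asymptomatic = 714
share = 100 * asymptomatic / positive_tests
print(round(share, 1))  # 35.9 -- matching the reported 35.9 per cent
```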

"It speaks to the school safety programs," he said. "We can do all the COVID-19 questionnaires we want, but if one-third of the kids are asymptomatic, the answer is going to be no to all the questions--yet they're still infected."

Because of the asymptomatic nature of the disease in up to one-third of children, McAlister said the province was right to close schools for a longer period over Christmas.

"As far as we know, kids are less likely to spread disease than adults, but the risk is not zero," he said. "Presumably asymptomatic spreaders are less contagious than the person sitting nearby who is sneezing all over you, but we don't know that for sure."

The researchers also found that although cough, runny nose and sore throat were three of the most common symptoms among children with COVID-19 infection--showing up in 25, 19 and 16 per cent of cases respectively--they were actually slightly more common among those with negative COVID-19 test results, and therefore not predictive of a positive test.

"Of course, kids are at risk of contracting many different viruses, so the COVID-specific symptoms are actually more things like loss of taste and smell, headache, fever, and nausea and vomiting, not runny nose, a cough and sore throat," he said.

McAlister noted that his group has a similar paper coming out that shows sore throats and runny noses aren't reliable signs of COVID-19 in adults either, although the vast majority of adults (84 per cent) do show symptoms.

"Sore throat and runny nose means you've got some kind of upper respiratory tract infection, but fever, headache, and loss of taste or smell are the big ones for indicating that one may have COVID-19 rather than another viral upper respiratory tract infection," he said, adding nausea and vomiting wasn't as prominent in adults.

McAlister added that if people have any symptoms at all, they should stay home and get tested, while even those who feel well should still be doing everything they can to stay safe--wearing a protective mask, frequent handwashing, keeping distance, and avoiding meeting indoors.

"Some people with COVID feel well and don't realize they have it so they socialize with friends and unintentionally spread the virus, and I think that's the big issue," he said.

Credit: 
University of Alberta Faculty of Medicine & Dentistry