
Infants can distinguish between leaders and bullies, study finds

image: In a new study, psychology professor Renee Baillargeon found that 21-month-old infants expect people to respond differently to leaders and bullies.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- A new study finds that 21-month-old infants can distinguish between respect-based power asserted by a leader and fear-based power wielded by a bully.

The study, reported in the Proceedings of the National Academy of Sciences, analyzed infants' eye-gazing behavior, a standard approach for measuring expectations in children too young to explain their thinking to adults. This "violation-of-expectation" method relies on the observation that infants stare longer at events that contradict their expectations.
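
As a rough illustration of that logic (not the study's data or analysis), the sketch below compares hypothetical looking times for an expected versus an unexpected outcome; the group sizes, values and choice of test are all assumptions.

```python
# Minimal sketch of the violation-of-expectation logic: infants who see an
# "unexpected" event tend to look longer than those who see an "expected" one.
# All numbers below are hypothetical, not data from the study.
from statistics import mean
from scipy import stats

expected_secs = [12.1, 15.4, 10.8, 14.2, 11.9, 13.5]    # looking times (s), made up
unexpected_secs = [21.3, 18.7, 24.1, 19.9, 22.6, 20.4]  # made up

t, p = stats.ttest_ind(unexpected_secs, expected_secs)
print(f"mean expected: {mean(expected_secs):.1f}s, "
      f"mean unexpected: {mean(unexpected_secs):.1f}s, t={t:.2f}, p={p:.3f}")
# Longer looking at the "unexpected" event is read as a detected violation.
```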

Previous studies had shown that infants can recognize power differences between two or more characters, said University of Illinois Psychology Alumni Distinguished Professor Renee Baillargeon, who conducted the new research.

"For example, infants will stare longer at scenarios where larger characters defer to smaller ones. They also take note when a character who normally wins a confrontation with another suddenly loses," she said. "But little was known about infants' ability to distinguish between different bases of power."

To get at this question, Baillargeon developed a series of animations depicting cartoon characters interacting with an individual portrayed as a leader, a bully or a likeable person with no evident power.

She first tested how adults - undergraduate students at the University of Illinois - responded to the scenarios and found that the adults identified the characters as intended. Next, she measured the eye-gazing behavior of infants as they watched the same animations.

"In one experiment, the infants watched a scenario in which a character portrayed either as a leader or a bully gave an order ("Time for bed!") to three protagonists, who initially obeyed," Baillargeon said. "The character then left the scene and the protagonists either continued to obey or disobeyed."

The infants detected a violation when the protagonists disobeyed the leader but not when they disobeyed the bully, Baillargeon found. This was true also in a second experiment that repeated the scenarios but eliminated previous differences in physical appearance between the leader and the bully (see graphic).

A third experiment tested whether the infants were responding to the likeability of the characters in the scenarios, rather than to their status as leaders or bullies.

"In general, when the leader left the scene, the infants expected the protagonists to continue to obey the leader," Baillargeon said. "However, when the bully left, the infants had no particular expectation: The protagonists might continue to obey out of fear, or they might disobey because the bully was gone. The infants expected obedience only when the bully remained in the scene and could harm them again if they disobeyed.

"Finally, when the likeable character left, the infants expected the protagonists to disobey, most likely because the character held no power over them," Baillargeon said.

The new findings confirm earlier studies showing that infants can detect differences in power between individuals and expect those differences to endure over time, Baillargeon said.

"Our results also provide evidence that infants in the second year of life can already distinguish between leaders and bullies," she said. "Infants understand that with leaders, you have to obey them even when they are not around; with bullies, though, you have to obey them only when they are around."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Russia: Increases in life expectancy; decreases in child deaths and in use of alcohol, tobacco

SEATTLE - Life expectancy in Russia between 1994 and 2016 increased by more than 7 years, while rates of death among children under age 5 decreased nearly 60%, according to the most extensive health study on the nation ever conducted.

In addition, age-adjusted rates of premature death from smoking, one of the world's most substantial health risks, dropped by nearly 34% over the same time period. Russia also saw progress in reducing premature death, as measured in years of life lost (YLLs), from stomach cancer, drowning, and chronic obstructive pulmonary disease.
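
For readers unfamiliar with the metric, years of life lost tally how far short of a reference life expectancy each death falls. The sketch below shows a crude version of that calculation with made-up ages and a made-up population; the study's published figures are age-standardized and use the GBD reference life table, which is not reproduced here.

```python
# Hypothetical sketch of a crude YLL rate. Simplified: uses (reference life
# expectancy at birth - age at death); GBD uses remaining life expectancy at
# the age of death from a standard life table, plus age standardization.
REFERENCE_LIFE_EXPECTANCY = 86.6  # approximate GBD-style reference value

deaths_ages = [52, 61, 58, 70, 45, 66]  # ages at death, hypothetical
population = 250_000                    # hypothetical population

ylls = sum(max(REFERENCE_LIFE_EXPECTANCY - age, 0) for age in deaths_ages)
rate_per_100k = ylls / population * 100_000
print(f"total YLLs: {ylls:.1f}, rate: {rate_per_100k:.0f} YLLs per 100,000")
```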

"These are significant accomplishments," said Dr. Mohsen Naghavi, a professor of health metrics sciences at the Institute for Health Metrics and Evaluation (IHME) at the University of Washington. "Russia's public health officials deserve recognition for their efforts lowering the country's burden of disease."

However, the study, published today in the international medical journal The Lancet, concludes the nation continues to face considerable health challenges.

Despite improvement from the mid-1990s, a nearly 11-year gap in life expectancy remained between Russian men and women in 2016. Moreover, Russia exceeds all other countries for age-adjusted premature death rates attributed to alcohol use disorders and has the second-highest premature death rate from drug use globally.

"Like many other nations, more than half of all deaths in Russia can be attributed to behavioral risk factors, most prominently alcohol and substance abuse," said Dr. Christopher Murray, an author on the study and director IHME.

The study, "Burden of disease in Russia, 1980-2016: a systematic analysis for the Global Burden of Disease Study 2016," was published today in the international medical journal The Lancet. It is part of the Global Burden of Disease (GBD) study, a comprehensive effort to quantify health internationally, covering 333 diseases and injuries and 84 risk factors.

There are 3,676 collaborators on the GBD, including 20 in Russia. It is the world's largest scientific collaboration and is widely viewed as the most authoritative health study. Earlier this year, the leadership of IHME and the World Health Organization signed a memorandum of understanding to work together on the GBD, "thereby strengthening the study for national and local decision-making by health officials and practitioners."

The Russia study was led by Dr. Vladimir Starodubov, director of the Federal Research Institute for Health Organization and Informatics (FRIHOI) of the Ministry of Health, and was written in partnership with Russian experts, as well as representatives from the WHO European Office and IHME. A majority of the study authors are Russian.

It estimates life expectancy, prevalence, incidence, deaths, and several other summary health metrics.

Among the findings:

In 2016, life expectancy at birth for males in Russia was 65.4 years, whereas for females it was 76.2 years; for males, this marks an improvement from 57.4 in 1994, and for females, an improvement from 70.8 the same year.

At an age-adjusted rate of 680 YLLs per 100,000 people, premature mortality from alcohol use disorders in Russia in 2016 was the highest in the world, followed by Mongolia at 647 YLLs per 100,000 people.

Russia's age-adjusted premature death rate for drug use disorders (447 YLLs per 100,000 people) is the second highest in the world.

Child mortality rates decreased between 2000 and 2016 by 58%.

Russian men have a disproportionately high burden of disease relative to women.

Age-adjusted rates of premature mortality in 2016 were higher for men than for women across all 20 leading causes of YLLs.

Age-adjusted premature mortality rates for cirrhosis and other chronic liver diseases due to alcohol use among both men and women increased by 43% between 2000 and 2016.

Age-adjusted rates of premature death from lung cancer were more than seven times higher for men than for women, and suicide rates were almost six times higher.

For men aged 15-49 years, alcohol use contributed to one in three deaths (34%) in 2016.

According to Starodubov, "Significant improvements in the health indicators of the population of the Russian Federation have been achieved and it is a credit to state alcohol policy, improving medical care, and increasing attention to a healthy lifestyle."

Researchers found more than half of all deaths in Russia are attributable to behavioral risk factors, such as smoking, alcohol use, dietary risks, low physical activity, drug use, and unsafe sex. However, high blood pressure, a metabolic risk factor, was the leading risk for death in Russia, accounting for nearly one in three deaths in 2016.

"Despite the challenges Russia is facing, the nation's population health leaders have made progress in addressing some key problems," Murray said. "The insights from this comprehensive study should be enlightening to policymakers and others responsible for planning and delivering health services for the nation's 144 million people."

Top 10 causes of premature mortality in 2016:

Ischemic heart disease

Stroke

Suicide

Cardiomyopathy

Road injuries

Lower respiratory infections

Lung cancer

Alcohol use disorders

Interpersonal violence

HIV/AIDS

Top 10 risk factors contributing to premature mortality in 2016 (ranking based on all ages rates of YLLs per 100,000 people):

High systolic blood pressure

Alcohol use

Smoking

High total cholesterol

High body-mass index

Diet low in whole grains

High fasting plasma glucose

Diet low in fruits

Diet low in nuts and seeds

Ambient particulate matter pollution

Credit: 
Institute for Health Metrics and Evaluation

African armed conflict kills more children indirectly than in actual fighting, study finds

More children die from the indirect impact of armed conflicts in Africa than by weapons used in those conflicts, according to a new study led by Stanford University researchers.

The research is the first comprehensive analysis of the large and lingering effects of armed conflicts -- civil wars, rebellions and interstate conflicts -- on the health of noncombatants.

The numbers are sobering: between 3.1 million and 3.5 million infants born within 30 miles of armed conflict died from indirect consequences of battles from 1995 to 2015. That number jumps to 5 million deaths of children ages 5 and younger in those same conflict zones.

"The indirect effects of conflict on children are so much greater than the direct deaths from warfare," said Eran Bendavid, MD, senior author of the study, which will be published Aug. 30 in The Lancet.

The authors also found evidence of increased mortality risk as far as 60 miles away from armed conflicts and for eight years after they end. Being born in the same year as a nearby armed conflict is riskiest for children younger than 1, the authors found, but the danger lingers: even after a conflict has ended, an infant's risk of death remains elevated by more than 30 percent.

In the entire continent, the authors wrote, the number of infant deaths related to armed conflicts from 1995 to 2015 was more than three times the number of direct deaths from these conflicts. Further, they found that among babies born within a 30-mile range of armed conflict, the risk of dying before age 1 was on average 7.7 percent higher than it was for babies born outside that range.

The authors recognize it is not surprising that African children are vulnerable to nearby armed conflict. But they show that this burden is substantially higher than previously indicated.

'Surprisingly poorly understood'

"We wanted to understand the effects of war and conflict, and discovered that this was surprisingly poorly understood," said Bendavid, associate professor of medicine and core faculty member at Stanford Health Policy. "The most authoritative source, the Global Burden of Disease, only counts the direct deaths from conflict, and those estimates suggest that conflicts are a minuscule cause of death."

Paul Wise, MD, MPH, professor of pediatrics and a senior fellow at Stanford's Freeman Spogli Institute for International Studies, has long argued that lack of health care, vaccines, food, water and shelter kills more civilians than bombs and bullets do.

This study has now put data behind the theory when it comes to children.

"We hope to redefine what conflict means for civilian populations by showing how enduring and how far-reaching the destructive effects of conflict can be on child health," said Bendavid, an infectious disease physician.

"Lack of access to key health services or to adequate nutrition are the standard explanations for stubbornly high infant mortality rates in parts of Africa," said Marshall Burke, PhD, an assistant professor of earth systems science and fellow at the Center on Food Security and the Environment. "But our data suggest that conflict can itself be a key driver of these outcomes, affecting health services and nutritional outcomes hundreds of kilometers away and for nearly a decade after the conflict event."

The results suggest efforts to reduce conflict could lead to large health benefits for children, the authors said.

Gathering the data

The researchers matched data on 15,441 armed-conflict events with data on 1.99 million births and subsequent child survival across 35 African countries. The primary conflict data came from the Uppsala Conflict Data Program's Georeferenced Event Dataset, which includes detailed data about the time, location, type and intensity of conflicts from 1946 to 2016.
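
As a rough sketch of that kind of georeferenced linkage, the code below flags hypothetical birth records that fall within about 30 miles and one year of a conflict event; the coordinates, radius, time window and records are all assumptions, and the study's actual linkage and survival models are considerably more involved.

```python
# Hypothetical sketch: flag births that occurred within ~30 miles (about 50 km)
# and within 1 year of a georeferenced conflict event. Not the study's code.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

conflicts = [  # (year, lat, lon) -- hypothetical events
    (2004, 9.05, 7.49),
    (2010, 0.32, 32.58),
]
births = [     # (id, year, lat, lon) -- hypothetical birth records
    ("b1", 2004, 9.20, 7.60),
    ("b2", 2012, -1.29, 36.82),
]

RADIUS_KM = 50  # roughly 30 miles

for bid, byear, blat, blon in births:
    exposed = any(
        abs(byear - cyear) <= 1 and haversine_km(blat, blon, clat, clon) <= RADIUS_KM
        for cyear, clat, clon in conflicts
    )
    print(bid, "exposed" if exposed else "not exposed")
```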

The authors also used all available data from the Demographic and Health Surveys, funded by the U.S. Agency for International Development, conducted in 35 African countries from 1995 to 2015 as the primary sources on child mortality in their analysis.

The data, they said, show that the indirect toll of armed conflict among children is three to five times greater than the estimated number of direct casualties in conflict. The total burden is likely even higher, since the authors focused on children and not the effects on women and other vulnerable populations.

Zachary Wagner, PhD, a former postdoctoral scholar at Stanford and lead author of the study, said he knows few are surprised that conflict is bad for child health.

"However, this work shows that the relationship between conflict and child mortality is stronger than previously thought, and children in conflict zones remain at risk for many years after the conflict ends," Wagner said.

"We hope our findings lead to enhanced efforts to reach children in conflict zones with humanitarian interventions," he added. "But we need more research that studies the reasons for why children in conflict zones have worse outcomes in order to effectively intervene."

Credit: 
Stanford Medicine

Higgs particle's favorite 'daughter' comes home

image: In experiments conducted at the Large Hadron Collider, a team including Princeton University researchers has identified a long-sought pathway by which the Higgs boson decays into two other particles, known as bottom quarks. The Higgs boson is produced with another particle called a Z boson. The Higgs boson decays to bottom quark jets (shown in blue), and the Z boson then decays into an electron and a positron (shown in red).

Image: 
Image courtesy of the CMS Collaboration

In a finding that caps years of exploration into the tiny particle known as the Higgs boson, researchers have traced the fifth and most prominent way that the particle decays into other particles. The discovery gives researchers a new pathway by which to study the physical laws that govern the universe.

Physicists at Princeton University led one of the two main teams that today announced the detection of the Higgs particle via its decay into two particles called bottom quarks. This pathway is the last to be detected of the five main signature pathways that can identify the Higgs particle.

"We found it exactly where we expected to find it and now we can use this new pathway to study the Higgs' properties," said James Olsen, professor of physics and leader of the team at Princeton. "This has been a truly collaborative effort from the beginning and it is exciting to see the amplification of effort that comes from people working together."

Long-sought because it confirms theories about the nature of matter, the Higgs particle exists only fleetingly before transforming into other, so-called "daughter" particles. Because the boson lasts only for about one septillionth of a second, researchers use the particle's offspring as evidence of its existence.

These daughter particles are scattered among the shower of particles created from the collision of two protons at the Large Hadron Collider at the European Organization for Nuclear Research (CERN). The Higgs particle was observed for the first time in 2012 through three of the other modes of decay.

Of these lineages or decay patterns, the decay into two bottom quarks occurs most often, making up about 60 percent of the decay events from the Higgs, according to Olsen.

But at the LHC, the bottom-quark pattern is the hardest to trace back definitively to the Higgs because many other particles can also give off bottom quarks.

Quarks are tiny constituents of protons, which themselves are some of the building blocks of atoms. The bottom quark is one of the six types of quarks in the menagerie of particles described by the "standard model," which explains matter and its interactions.

Discerning which bottom quarks came from the Higgs versus other particles has been the main challenge facing the LHC's two Higgs detectors, the Compact Muon Solenoid (CMS) with which Olsen works, and its companion, ATLAS. The detectors operate independently and are run by separate teams of scientists.

Once produced, these bottom quarks split into jets of particles, making them hard to trace back to the original parent particles. Because of this background noise, the researchers required more data than was needed for the other pathways to be sure of their finding.

Both detectors are adept at spotting particles such as electrons, photons and muons, but quarks pose more of a challenge. By their nature, quarks are not observed as free particles; instead, they are bound into composite particles such as mesons and baryons, or decay quickly.

"It is a messy business because you have to collect all of those jets and measure their properties to calculate the mass of the object that decayed into the jets," Olsen said.

The two detectors are massive, complex structures that sit at the end of the LHC tunnel where protons are accelerated and smashed together at high energy levels. The structures contain layers of smaller detectors arranged like layers of an onion.

The devices detect particles at each layer of the onion to reconstruct their paths. This allows researchers to trace the path of a particle back to its source in a manner analogous to following the trail of a firework's light back to the place where the first burst occurred. By following many of these paths, the researchers can identify where and when the Higgs first formed in the proton-proton collision.

"The Higgs-to-bottom-quark decay is important because it is the most frequent decay, so a precise measurement of its rate tells us a lot about the nature of this particle," said Christopher Palmer, an associate research scholar at Princeton, who describes the work in this video.

Additional Princeton researchers on the team included physics graduate student Stephane Cooperstein and undergraduate Jan Offerman, Class of 2018. The team also included collaborators at the University of Florida and Fermilab, as well as groups from Italy, Switzerland and Germany.

The main challenge in detecting the bottom-quark decay mode was the amount of background bottom quarks produced by non-Higgs events. In addition to gathering more data from collisions, the researchers searched for a distinct way that the Higgs gave rise to the bottom quarks.

Olsen began working on this challenge 10 years ago, before the LHC had switched on and when physicists were running simulations on computers. Around that time, Olsen learned about a theoretical study showing that it is possible to find bottom quarks created from the Higgs recoiling off another particle, such as a Z boson or a W boson.

"It was an idea that nobody had before, to search for it in that channel, and the only question was whether it was possible experimentally and whether it really would pay off," Olsen said.

Olsen said it is exciting to see the work come to fruition. "This is a very satisfying moment."

Credit: 
Princeton University

Study reveals when and why people die after noncardiac surgery

Munich, Germany - 27 Aug 2018: The main reasons why people die after noncardiac surgery are revealed today in a study of more than 40,000 patients from six continents presented in a late-breaking science session at ESC Congress 2018. Myocardial injury, major bleeding, and sepsis contributed to nearly three-quarters of all deaths.

"There's a false assumption among patients that once you've undergone surgery, you've 'made it'," said study author Dr Jessica Spence, of the Population Health Research Institute (PHRI), a joint institute of Hamilton Health Sciences (HHS) and McMaster University, Hamilton, Canada. "Unfortunately, that's not always the case, and now we have a much better sense of when and why people die after noncardiac surgery. Most deaths are linked to cardiovascular causes."

The VISION study included 40,004 patients aged 45 years or older undergoing noncardiac surgery and remaining in hospital for at least one night. Patients were recruited from 27 centres in 14 countries in North and South America, Asia, Europe, Africa, and Australia, and monitored for complications until 30 days after their surgery.

The researchers found that 715 (1.8%) patients died within 30 days after noncardiac surgery. Of those, 505 (71%) died in hospital (including four [0.6%] in the operating room), and 210 (29%) died after discharge from hospital. Dr Spence said: "One in 56 patients died within 30 days of noncardiac surgery and nearly all deaths occurred after leaving the operating room, with more than a quarter occurring after hospital discharge."
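
Those headline proportions follow directly from the reported counts, as this small arithmetic check shows.

```python
# Reproducing the quoted proportions from the counts reported above.
total_patients = 40_004
deaths_30d = 715
died_in_hospital = 505
died_after_discharge = 210

print(f"30-day mortality: {deaths_30d / total_patients:.1%} "
      f"(about 1 in {round(total_patients / deaths_30d)})")                         # ~1.8%, ~1 in 56
print(f"share of deaths in hospital: {died_in_hospital / deaths_30d:.0%}")          # ~71%
print(f"share of deaths after discharge: {died_after_discharge / deaths_30d:.0%}")  # ~29%
```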

Eight perioperative complications - including five cardiovascular - were associated with death within 30 days postoperatively. The top three complications, which contributed to nearly three-quarters of all deaths, were myocardial injury after noncardiac surgery (MINS; 29%), major bleeding (25%), and sepsis (20%).

"We're letting patients down in postoperative management," said principal investigator Professor Philip J. Devereaux, director of cardiology at McMaster University. "The study suggests that most deaths after noncardiac surgery are due to cardiovascular causes, so cardiologists have a major role to play to improve patient safety. This includes conducting blood and imaging tests to identify patients at risk then giving preventive treatment, including medications that prevent abnormal heart rhythms, lower blood pressure and cholesterol, and prevent blood clots."

Earlier findings from the VISION study showed that a simple blood test can identify MINS, enabling clinicians to intervene early and prevent further complications. The blood test measures a protein called high-sensitivity troponin T which is released into the bloodstream when injury to the heart occurs.

Regarding cardiovascular complications, MINS occurred in 5,191 (13%) patients and independently increased the risk of 30-day mortality by 2.6-fold; major bleeding occurred in 6,238 (16%) patients and increased risk by 2.4-fold; 372 (0.9%) patients had congestive heart failure, which raised risk by 1.6-fold; 152 (0.4%) patients had deep venous thrombosis which raised risk by 2.1-fold; and 132 (0.3%) patients had a stroke, which increased risk by a factor of 1.6.

Regarding noncardiovascular complications associated with 30-day mortality, sepsis occurred in 1,783 (4.5%) patients and independently increased risk by 5.7-fold; infection occurred in 2,171 (5.4%) patients and raised risk by 1.9-fold; and 118 patients (0.3%) had acute kidney injury resulting in new dialysis, which increased risk by 4.7-fold.

"Combined, these discoveries tell us that we need to become more involved in care and monitoring after surgery to ensure that patients at risk have the best chance for a good recovery," said Dr Spence, who is also an anaesthesiologist at HHS and a PhD candidate at McMaster University.

Credit: 
European Society of Cardiology

Fires overwhelming British Columbia; smoke choking the skies

image: British Columbia is on fire. In this Canadian province 56 wildfires "of note" are active and continuing to blow smoke into the skies overhead.

Image: 
Image Courtesy: NASA Worldview, Earth Observing System Data and Information System (EOSDIS).

British Columbia is on fire. In this Canadian province 56 wildfires "of note" are active and continuing to blow smoke into the skies overhead.

Current statistics (from the BC Wildfire Service) show that 629,074 total hectares (1,554,475.71 acres) have burned this year in British Columbia. Broken down by fire centre, the Coastal region has had 86,116 hectares burn, Northwest 310,731 ha., Prince George 118,233 ha., Kamloops 38,019 ha., the Southeast 35,639 ha. and Cariboo 40,336 ha.
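
As a quick consistency check, the regional figures sum to the provincial total, and the acreage follows from the standard hectare-to-acre conversion; the sketch below simply redoes that arithmetic.

```python
# Re-deriving the provincial total and the acre conversion from the figures above.
HA_TO_ACRES = 2.471054  # 1 hectare is about 2.471 acres

burned_ha = {
    "Coastal": 86_116,
    "Northwest": 310_731,
    "Prince George": 118_233,
    "Kamloops": 38_019,
    "Southeast": 35_639,
    "Cariboo": 40_336,
}

total_ha = sum(burned_ha.values())
print(f"total: {total_ha:,} ha (~{total_ha * HA_TO_ACRES:,.0f} acres)")
# total: 629,074 ha (~1,554,476 acres), matching the BC Wildfire Service figures quoted.
```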

The weather, as in the western U.S., has played a significant role in the 2018 wildfire situation. Hot, dry, and windy conditions create a breeding ground for wildfires to start and spread. With just a lightning strike or a poorly tended campfire, these conditions allow quick-starting fires to spread rapidly and burn out of control before they are even discovered.

Besides the very obvious hazards of fire, there is the secondary hazard of smoke across the region. Smoke now blankets the sky above British Columbia and will be blown eastward by the jet stream, causing hazardous air quality wherever it travels. An accompanying map shows the air quality index for the British Columbia region for August 22, 2018. Residents who either smell the smoke or notice haze in the air should take precautions.

Credit: 
NASA/Goddard Space Flight Center

Kelp forests function differently in warming ocean

Kelp forests in the UK and the wider North-East Atlantic will experience a marked change in ecosystem functioning in response to continued ocean warming and the increase of warm-water kelp species, according to a new study led by a team from the Marine Biological Association and the University of Plymouth.

Lead author Albert Pessarrodona, now with the University of Western Australia, said the team studied the ecosystem consequences of an expanding warm-water kelp species, Laminaria ochroleuca, which is proliferating under climate change. The findings are published today in the Journal of Ecology.

"As the ocean warms, species are moving up slopes and towards the poles in order to remain within their preferred environmental conditions. Species with warm affinities are migrating to many habitats previously dominated by cold-water ones, transforming ecosystems as we know them. These so-called novel ecosystems feature a mix of warm- and cold-affinity species, but we don't know whether they can retain desirable ecological processes and functions which human wellbeing relies on", Pessarrodona said.

The scientists studied kelp forests in the southwest of the UK, where the warm water kelp species has increased in abundance in recent years - probably at the expense of a cold-water species, which is less tolerant to warming seas.

"The warm-water kelp Laminaria ochroleuca was actually first detected in the UK in the late 1940s, but is now a common sight along the southwest coast and is predicted to continue expanding northwards in response to climate change, occupying most of the UK and large sections of the wider North-East Atlantic coastline by the end of the century", co-author of the study Dr. Dan Smale, from the Marine Biological Association, said.

Most studies so far have looked at how non-native invasive species introduced by humans alter ecosystems. Far less attention has been paid to the impacts on ecosystem functioning of species expanding into new habitats as a result of climate change.

Pessarrodona added: "We found that the warm-water kelps essentially acted as a conveyor belt of food production, growing and shedding its leaf-like lamina throughout the year and providing a continuous supply of food. In contrast, the cold-water species only grew during short, discrete periods of the year".

Overall, the warm water species was functionally "faster", with its organic material being rapidly processed by herbivores such as sea snails and limpets and with faster rates of decomposition.

"Our findings suggest that the proliferation of the warm-water kelp will alter the dynamics of North-East Atlantic marine forests by modifying the quantity, quality and availability of food. In other research we have also seen that the warm-water kelp harbors less biodiversity than the cold species. Such changes in the provision of habitat and food could eventually affect commercially important species such as crabs, lobsters and coastal fish", Smale said.

However, it is not all bad news. Some of the functions the study examined, such as carbon absorption or food provisioning, were maintained or even enhanced. Moreover, the replacement of cold-water kelps by warm-water ones in the North-East Atlantic means this important habitat will likely survive in the future, in contrast to several other areas of the world, including Japan, Canada and Australia, where kelp forests are disappearing completely.

Credit: 
British Ecological Society

More minorities labeled 'learning disabled' because of social inequities, study finds

A new Portland State University study suggests that the disproportionate placement of racial minorities into special education for learning disabilities is largely because of social inequities outside of schools rather than racially biased educators.

Some attribute the overplacement to educators being racist, but when researchers use statistical techniques to compare youth with similar academic achievement levels and socioeconomic status, racial minorities are actually less likely to be labeled as having a learning disability than white children.
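
The comparison described here is essentially a regression adjustment. The sketch below illustrates the idea on simulated data: an unadjusted gap in labeling can shrink or reverse once achievement and socioeconomic status are held constant. The data-generating assumptions and the model are illustrative only, not the paper's specification.

```python
# Simulated illustration of covariate adjustment (not the study's data or model).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
minority = rng.binomial(1, 0.3, n)
ses = rng.normal(-0.5 * minority, 1.0, n)       # lower SES on average (assumed)
achievement = rng.normal(0.8 * ses, 1.0, n)     # achievement tracks SES (assumed)
# Labeling depends mainly on low achievement; conditional on achievement,
# minority students are assumed slightly *less* likely to be labeled.
logit_p = -1.5 - 1.2 * achievement - 0.3 * minority
labeled = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

unadj = sm.Logit(labeled, sm.add_constant(minority)).fit(disp=0)
X = sm.add_constant(np.column_stack([minority, achievement, ses]))
adj = sm.Logit(labeled, X).fit(disp=0)
print("unadjusted odds ratio:", np.exp(unadj.params[1]).round(2))  # above 1
print("adjusted odds ratio:  ", np.exp(adj.params[1]).round(2))    # below 1
```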

Some researchers suggest this means that more racial minorities need to be placed in special-education classes, but Dara Shifrer, a sociology professor in PSU's College of Liberal Arts and Sciences and author of a study published July 27 in The Sociological Quarterly, interprets the findings differently.

Shifrer says the problem lies in the fact that a student's socioeconomic status is a strong predictor of academic achievement, which is often used to diagnose learning disabilities. Blacks and Hispanics are more likely to be poor and begin school at a disadvantage compared to their white peers, but classifying the low achievement of minority and poor students as a disability fails to address the social causes behind the achievement gap, according to the study.

"It's not that we need to put more racial minorities in special education," Shifrer said. "It's that we need to pay attention to all the inequality in our society that makes kids have different levels of preparation when they start school and that makes it difficult for teachers to decide why one kid is struggling to learn while another is not."

The study says racial disproportionality is problematic both because it is unclear whether special education improves students' outcomes and because disability labels can stigmatize students for life and limit their learning opportunities.

The study criticizes the way learning disabilities are diagnosed. Most youth who are diagnosed with learning disabilities have normal or above-average intelligence but for unexplained reasons are underachievers. But because there's a lack of physical or biological indicators for LDs, classifications are often subjective and inconsistent across districts and states.

Some youth who are labeled as "learning disabled" may in fact have real neurological or biological distinctions, but for others, their learning and behavior problems may stem from differences in the resources available to their families.

"The way learning disabilities are diagnosed is basically based on academic achievement," Shifrer said. "But education performance is a measure of a lot of things -- partly your brain, but it's also the things you're experiencing in your home, in your neighborhood, in your schools."

Shifrer suggests that rather than addressing the social issue of inequality with disability labels and special education, more needs to be invested in early childhood programming so schools can provide additional health, emotional and academic supports to the children who need them most.

She also says that educators and policymakers need to be clearer about what they know and don't know about learning disabilities. Doing so would allow teachers, parents and students to incorporate useful insights from the classification without feeling like it seals a child's destiny or captures their complexity.

"If we keep taxing teachers with figuring out who's learning disabled and who's not when there's not even a clear definition of what learning disabled means, the problem's not going to go away," Shifrer said.

Credit: 
Portland State University

Your office may be affecting your health

Workers in open office seating had less daytime stress and greater daytime activity levels compared to workers in private offices and cubicles, according to new research led by the University of Arizona.

That greater physical activity at the office was related to lower physiological stress during after-work hours outside the office, researchers said. This is the first known study to investigate the effects of office workstation type on these objective measures.

The study was led by the UA Institute on Place, Wellbeing and Performance, directed by Dr. Esther Sternberg, and the UA Center for Integrative Medicine, in collaboration with Aclima Inc. and the Baylor College of Medicine. The research is under the Wellbuilt for Wellbeing program, funded by the U.S. General Services Administration, which owns and leases more than 370 million square feet of space that houses more than 1 million federal employees.

The study evaluated 231 people who work in federal office buildings and wore stress and activity sensors around the clock for three workdays and two nights. The intent was to evaluate the workers' activity and stress levels both inside and outside of the office environments.

The study found that workers in open bench seating arrangements were 32 percent more physically active at the office than those in private offices and 20 percent more active than those in cubicles. Importantly, workers who were more physically active at the office had 14 percent less physiological stress outside of the office compared to those with less physical activity at the office.

"This research highlights how office design, driven by office workstation type, could be an important health promoting factor," said Sternberg, research director of the UA Center for Integrative Medicine and senior author on the study.

Office workers are at a particularly high risk for low levels of physical activity and the associated poor health outcomes. According to a 2015 report published by the U.S. Centers for Disease Control and Prevention, workplace-related illnesses cost the U.S. economy more than $225 billion a year.

"Objective measurements using wearable sensors can inform policies and practices that affect the health and well-being of hundreds of millions of office workers worldwide," said Casey Lindberg, UA Institute on Place, Wellbeing and Performance research associate and lead author on the study.

Bijan Najafi, director of Interdisciplinary Consortium on Advanced Motion Performance at the Baylor College of Medicine in Houston, Texas, said it is satisfying to know that his work on wearable devices to measure stress and activity can help improve health and well-being for millions of office workers.

The study adds an objective voice to an ongoing debate about how best to weigh the advantages and disadvantages of open seating designs to optimize worker health. The paper, "Effects of office workstation type on physical activity and stress," was recently published in Occupational and Environmental Medicine, a journal published by BMJ.

Credit: 
University of Arizona

'Liquid biopsy' predicts lymphoma therapy success within days

A blood test can predict which patients with a type of cancer called diffuse large B cell lymphoma are likely to respond positively to initial therapy and which are likely to need more aggressive treatment, according to a multicenter study led by researchers at the Stanford University School of Medicine.

The study validates the clinical usefulness of tracking the rise and fall of circulating tumor DNA, or ctDNA, in the blood of patients before and after therapy. It suggests that clinicians may soon be able to determine how a patient is responding to treatment within days or weeks of starting therapy rather than waiting until therapy is completed five to six months later.

"Although conventional therapy can cure the majority of patients with even advanced B cell lymphomas, some don't respond to initial treatment," said associate professor of medicine Ash Alizadeh, MD, PhD. "But we don't know which ones until several months have passed. Now we can predict nonresponders within 21 days after the initiation of treatment by tracking the levels of ctDNA in a patient's blood. We can look earlier and make a reliable prediction about outcome."

The study will be published online Aug. 20 in the Journal of Clinical Oncology. Alizadeh shares senior authorship with associate professor of radiation oncology Maximilian Diehn, MD, PhD. Instructor of medicine David Kurtz, MD, PhD, and postdoctoral scholar Florian Scherer, MD, are the lead authors.

Varying responses to treatment

Diffuse large B cell lymphoma, a blood cancer, is the most common type of non-Hodgkin lymphoma. Because it is highly biologically variable, patients vary widely in their response to treatment. Although most people are cured by conventional therapy, about one-third are not. Being able to predict early in the course of treatment those who will need additional or more aggressive therapies would be a significant boon to both clinicians and patients.

Circulating tumor DNA is released into the blood by dying cancer cells. Learning to pick out and read these DNA sequences among the thousands or even millions of other noncancerous sequences in the blood can provide valuable insight into the course of the disease and the effectiveness of therapy. Recently, Diehn and Alizadeh showed that ctDNA tracking can also predict lung cancer recurrence weeks or months before any clinical symptoms arise.

"Combined with our recent study on lung cancer, our new findings speak to the power and likely utility of using ctDNA to assess how well cancer treatments are working in an individual patient. We are very hopeful that the approach will ultimately be extensible to most if not all cancer types," Diehn said.

In this study, the researchers tracked ctDNA levels in 217 people with diffuse large B cell lymphoma who were treated at six medical centers -- three in the United States and three in Europe. For each patient, they compared levels of ctDNA before treatment began with the levels after the first and second rounds of conventional chemotherapy. They then correlated those changes with each patient's outcome.

They found that ctDNA was detectable prior to the initiation of therapy in 98 percent of the people studied. And, as would be expected, the amount of ctDNA in the blood dropped in all patients once treatment began. But the precipitousness of the decline varied. Those people whose ctDNA levels dropped a hundredfold after the first round or three-hundredfold by the second round were much more likely to live 24 months or more without experiencing a recurrence of their disease than those whose ctDNA levels declined more slowly.
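
Applied as a rule, that threshold logic is straightforward, as the sketch below shows with made-up patient values; the published response definitions involve timing and measurement details not reproduced here.

```python
# Hypothetical sketch: flag patients whose ctDNA falls >=100-fold after cycle 1
# or >=300-fold after cycle 2 of therapy. Values are made up for illustration.
patients = {
    # id: (pre-treatment, after cycle 1, after cycle 2) ctDNA, arbitrary units
    "p1": (5_000.0, 30.0, 4.0),     # ~167-fold then 1,250-fold drop
    "p2": (2_000.0, 400.0, 50.0),   # 5-fold then 40-fold drop
}

for pid, (pre, c1, c2) in patients.items():
    good_response = (pre / c1 >= 100) or (pre / c2 >= 300)
    print(pid, "favorable early response" if good_response else "slower response, higher risk")
```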

"We found that ctDNA levels serve as a very sensitive and specific biomarker of response to therapy within as few as 21 days," Kurtz said. "Every year, about 30,000 people in the United States are diagnosed with diffuse large B cell lymphoma and, for the most part, they're treated with six cycles of combination therapy. But we know that not all patients need six cycles. A large fraction could be cured with fewer cycles -- maybe even just two. If we can identify those people who are responding extremely well, we could spare them additional treatments. Conversely, we could intensify the therapy or seek other options for those who are not responding as well as we would have hoped."

Hopes for expansion

The researchers are encouraged that they saw a similar correlation between changes in ctDNA levels and outcomes in patients from each of the six participating medical centers, confirming the global usefulness of the analysis. They're currently planning a clinical trial based on the results, and they're eager to learn whether they can make similar predictions about the prognoses of patients other than those with diffuse large B cell lymphomas.

"These findings confirm the value of tracking cancer genetics in the blood in real time," Alizadeh said. "We are thinking about how to use the tools to best benefit patients, and are very excited to test this approach in other types of cancers."

Credit: 
Stanford Medicine

A valley so low: Electrons congregate in ways that could be useful to 'valleytronics'

image: Elliptical orbits of bismuth surface electrons in a large magnetic field. The orientation and interference patterns of the electronic states reveal that the electrons prefer to occupy a single valley. Image created using a theoretical model of the data.

Image: 
Ali Yazdani Laboratory, Princeton University

A Princeton-led study has revealed an emergent electronic behavior on the surface of bismuth crystals that could lead to insights on the growing area of technology known as "valleytronics."

The term refers to energy valleys that form in crystals and that can trap single electrons. These valleys potentially could be used to store information, greatly expanding what is possible with modern electronic devices.

In the new study, researchers observed that electrons in bismuth prefer to crowd into one valley rather than distributing equally into the six available valleys. This behavior creates a type of electricity called ferroelectricity, which involves the separation of positive and negative charges onto opposite sides of a material. This study was made available online in May 2018 and published this month in Nature Physics.

The finding confirms a recent prediction that ferroelectricity arises naturally on the surface of bismuth when electrons collect in a single valley. These valleys are not literal pits in the crystal but rather are like pockets of low energy where electrons prefer to rest.

The researchers detected the electrons congregating in the valley using a technique called scanning tunneling microscopy, which involves moving an extremely fine needle back and forth across the surface of the crystal. They did this at temperatures hovering close to absolute zero and under a very strong magnetic field, up to 300,000 times greater than Earth's magnetic field.
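
For scale, taking Earth's surface field to be roughly 50 microtesla (an assumption; the field varies by location), 300,000 times that is on the order of 15 tesla:

```python
# Order-of-magnitude check, assuming Earth's surface field is ~50 microtesla.
earth_field_tesla = 50e-6
print(f"~{300_000 * earth_field_tesla:.0f} T applied field")  # ~15 T
```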

The behavior of these electrons is one that could be exploited in future technologies. Crystals consist of highly ordered, repeating units of atoms, and with this order comes precise electronic behaviors. Silicon's electronic behaviors have driven modern advances in technology, but to extend our capabilities, researchers are exploring new materials. Valleytronics attempts to manipulate electrons to occupy certain energy pockets over others.

The existence of six valleys in bismuth raises the possibility of distributing information in six different states, where the presence or absence of an electron can be used to represent information. The finding that electrons prefer to cluster in a single valley is an example of "emergent behavior" in that the electrons act together to allow new behaviors to emerge that wouldn't otherwise occur, according to Mallika Randeria, the first author on the study and a graduate student at Princeton working in the laboratory of Ali Yazdani, the Class of 1909 Professor of Physics.

"The idea that you can have behavior that emerges because of interactions between electrons is something that is very fundamental in physics," Randeria said. Other examples of interaction-driven emergent behavior include superconductivity and magnetism.

Credit: 
Princeton University

Statisticians correlate secondhand smoke in childhood to arthritis later in life

A new study in the journal Rheumatology indicates that being exposed to secondhand smoke in childhood could increase the risk of someone developing arthritis as an adult.

Rheumatoid arthritis is a complex disease that may develop when environmental agents interact with genetic factors. The role of genetics in arthritis susceptibility is well recognized. There are over 100 types of arthritis, but rheumatoid arthritis is one of the most common, as well as one of the most frequent autoimmune diseases. The suspected link rests on the hypothesis that an environmental factor may induce changes in some tissues (for example, the lung), and this triggering of changes by interaction between genes and environmental factors might occur decades before the disease emerges.

The study investigated the link between smoking status, including childhood and adult passive exposures, and the risk of rheumatoid arthritis.

The study included 98,995 French female volunteers prospectively followed since 1990. Self-administered questionnaires sent every two to three years collected medical events as well as general, lifestyle, and environmental characteristics. Arthritis diagnoses were collected in three successive questionnaires and confirmed if the women received an arthritis-specific medication.

The results of the study confirmed that adulthood smoking was associated with an increased risk of arthritis. In addition, ever (current and past) smokers who also had childhood passive smoking exposure had a higher risk of arthritis than those not exposed as children. Also, arthritis began earlier in smokers exposed to childhood passive smoking. The data also suggested that even in nonsmokers, passive exposure to tobacco during childhood tended to increase the risk of arthritis, the magnitude of the increase being similar to that associated with regular adulthood smoking, i.e. about 40%.

In summary, childhood passive exposure to tobacco is associated with an increased risk of rheumatoid arthritis and an earlier onset of the disease, particularly in adult smokers. This study also suggests for the first time that passive exposure to tobacco during childhood might increase the risk of arthritis even in adults who never smoked.

"Further study is needed to explore if this increased risk is also mainly observed in people carrying the gene at risk for rheumatoid arthritis, which is quite likely with regard to tobacco," said the paper's lead author, Dr Marie-Christine Boutron-Ruault. "These results also highlight the importance of children--especially those with a family history of this form of arthritis--avoiding secondhand smoke."

Credit: 
Oxford University Press USA

High oxidative stress hampers males' production of powerful blood vessel dilator

image: This is Dr. Jennifer C. Sullivan, pharmacologist and physiologist in the Department of Physiology at the Medical College of Georgia at Augusta University.

Image: 
Phil Jones, Senior Photographer, Augusta University

AUGUSTA, Ga. (Aug. 13, 2018) - Higher levels of oxidative stress in males result in lower levels of a cofactor needed to make the powerful blood vessel dilator nitric oxide, researchers report.

An antioxidant appears to help level the playing field between males and females in levels of the cofactor BH4 deep inside the kidneys - where the fine-tuning of our blood pressure happens - and restore similar production of protective nitric oxide.

Higher nitric oxide levels help reduce blood pressure both by enabling dilation of blood vessels and increasing the kidneys' excretion of sodium, which decreases the volume in those blood vessels.

"BH? has to be there," says Dr. Jennifer C. Sullivan, pharmacologist and physiologist in the Department of Physiology at the Medical College of Georgia at Augusta University, who is exploring gender differences in hypertension. "We found that oxidative stress makes a big difference in BH? levels."

The study, in the journal Bioscience Reports, is the first to look at sex differences in BH4 in a rodent model of hypertension.

Male humans generally have higher blood pressures and oxidative stress levels than females, at least until menopause. The findings provide more evidence that the cofactor might be a novel treatment target for both sexes, says Sullivan, the study's corresponding author.

BH4, or tetrahydrobiopterin, is required for the enzyme nitric oxide synthase to make nitric oxide. Oxidative stress, which results from high levels of natural byproducts of oxygen use, is known to reduce BH4 levels and is implicated in high blood pressure; at least before menopause, females tend to be less sensitive to it, possibly because of the protective effects of estrogen.

In an attempt to figure out why females, even in the face of hypertension, have more nitric oxide, the scientists measured BH4 levels in the innermost part of the kidney in male and female spontaneously hypertensive rats.

"We found BH? levels were higher in the hypertensive females than the hypertensive males," Sullivan says. Females also had more nitric oxide and lower - but still high - blood pressures, and the males had more oxidative stress.

They had previously shown that young spontaneously hypertensive female rats have significantly more nitric oxide and nitric oxide synthase activity in the inner portion of their kidney than their male hypertensive counterparts, and that difference holds as the rats mature. The new work helps explain why.

"If we don't understand why females have more nitric oxide, we can't do things to potentiate our ability to make it," Sullivan says.

The scientists theorized - and found - that the elevated levels of oxidative stress in the males meant less BH4, and ultimately less nitric oxide compared to females.

They found that reducing oxidative stress improved BH4 levels and nitric oxide production and "normalized the playing fields between the two sexes," Sullivan says.

Pouring more BH4 on the situation, on the other hand, didn't work without reducing oxidative stress.

"If you have a ton of oxidative stress, you can give as much BH? as you want, and all you are going to get is more BH?," Sullivan says of BH?'s destructive counterpart and the unhealthy, vicious cycle it helps create.

Without BH4, nitric oxide synthase becomes "uncoupled" and instead produces superoxide, which decreases nitric oxide production but also interacts with the nitric oxide that is available to form the oxidant peroxynitrite. Destructive peroxynitrite, in turn, targets the BH4 that is present and converts it to BH2, which further interferes with BH4's normal job of helping nitric oxide synthase make nitric oxide.

"You don't make the product you want nitric oxide synthase to make, which is nitric oxide," Sullivan says.

While it's not clear that females are any better at making BH4, it is clear that the cofactor is easily altered by oxidative stress to become its unhealthy counterpart, BH2, Sullivan says.

Giving both males and females the synthetic antioxidant Tempol for two weeks is what leveled the playing field between the sexes.

Bottom line: the antioxidant treatment essentially eliminated the sex differences in BH4 and nitric oxide synthase activity in that key region of the kidneys.

Males had a higher blood pressure at baseline and the antioxidant treatment had no effect on the blood pressure of either sex.

More work is needed to explore BH4's treatment potential in both sexes, Sullivan says.

BH4 is widely available without a prescription, and its impact has been evaluated in a number of clinical trials including a current study at the University of Nebraska, Omaha, looking at its effect on blood flow and exercise capacity in patients with peripheral artery disease.

Credit: 
Medical College of Georgia at Augusta University

Black male youth more fearful when visiting whiter neighborhoods


COLUMBUS, Ohio - Young black males feel less safe when they go to neighborhoods with a larger white population than occurs in areas they normally visit, a new study suggests.

Researchers gave 506 black youths in Columbus smartphones that tracked their locations for a week and asked the participants to rate how safe they felt (among other questions) five times per day.

Results showed that African American boys felt less safe even in areas that were only modestly more white than where they usually spent time, said Christopher Browning, lead author of the study and professor of sociology at The Ohio State University.

"It doesn't have to be a majority white neighborhood for African American boys to feel more threatened," Browning said. "It just has to be more white than what they typically encounter."

When outside their own neighborhoods, black teens in the study visited areas that were, on average, 13 percent more white.

Unlike boys, black girls did not report feeling significantly less safe in whiter areas.

Browning presented the research Aug. 13 in Philadelphia at the annual meeting of the American Sociological Association.

The results are consistent with the hypothesis that young black males expect increased scrutiny, surveillance and even direct targeting when they are in white areas, according to Browning.

"We've seen a lot of stories in the media lately about the police being called on black people going about their business in white areas," he said.

"This may help explain why black youth felt more threatened in parts of town where they were exposed to more white people."

Data for the study came from the Adolescent Health and Development in Context study, which Browning leads. The AHDC is examining the lives of 1,405 representative youths living in 184 neighborhoods in Franklin County, Ohio. This includes Columbus and its suburbs.

This particular study involved 506 black youths aged 11 to 17 when the study was done from 2014 to 2016. They carried a smartphone with a GPS function that reported their location every 30 seconds. Five times a day they were sent a mini-survey to answer on their phones. By the end of the study, the researchers had collected 7,398 of these surveys.

In one question, participants were asked to rate on a scale of 1 (strongly disagree) to 5 (strongly agree) if the place they were currently at was a safe place to be.

Participants reported either strongly agreeing or agreeing that they were in a safe place about 91 percent of the time. That makes sense, Browning said, because the teens reported being at home about 71 percent of the time when they received the survey.

Much of the research that examines safety among black youth focuses on the neighborhood they live in. But this research suggests that the immediate neighborhood may not be the most important area to focus on, he said.

What seemed to matter was how the places they visited differed from the areas where they spent most of their time.
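
One way to quantify that difference (not necessarily the measure used in this study) is to compare the demographic makeup of the census tracts a participant's GPS pings fall in with that of the home tract. The sketch below illustrates the idea with hypothetical tract percentages and pings.

```python
# Hypothetical sketch: average percent-white of tracts visited outside the home
# tract, minus the home tract's percent-white. Values and lookups are made up.
tract_pct_white = {"home": 20.0, "t2": 35.0, "t3": 55.0}  # hypothetical census values

pings = ["home", "home", "t2", "home", "t3", "t2"]  # tract hit by each GPS ping (hypothetical)

away = [tract_pct_white[t] for t in pings if t != "home"]
exposure_gap = sum(away) / len(away) - tract_pct_white["home"]
print(f"visited areas are {exposure_gap:.0f} percentage points whiter than home")
```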

Black boys felt less safe when they were in neighborhoods that were significantly poorer than ones they generally frequented, as well as in neighborhoods that were whiter.

"Part of the experience for black kids is having to leave their home neighborhoods to go to places that might not be as welcoming," Browning said.

"They are typically going to places with amenities like restaurants and movie theaters that may not be available in their neighborhoods. And these places are probably going to be whiter than the places they live."

The findings were similar for African American boys no matter where they lived in the Columbus area.

"Even black boys who were regularly exposed to integrated neighborhoods felt less safe when they went to white-dominated areas," he said.

Although African American girls in this study didn't report feeling less safe in whiter areas, Browning said that doesn't necessarily mean they aren't experiencing negative effects.

It may be that there are different features of neighborhoods not picked up in this study, such as the number of men in the immediate vicinity, that may affect whether black girls felt safe in a particular location, he said.

Credit: 
Ohio State University

Deep in the weeds: Using eDNA sequencing to survey pondweed diversity

Ecological surveys of biodiversity provide fundamental baseline information on species occurrence and the health of an ecosystem, but can require significant labor and taxonomic expertise to conduct. However, as the cost of high-throughput DNA sequencing has plummeted in recent years, DNA from environmental samples (eDNA) has emerged as a cost-effective source of biodiversity data. In research reported in a recent issue of Applications in Plant Sciences, Dr. Maria Kuzmina and colleagues at the University of Guelph show the feasibility of eDNA sequencing for identifying aquatic plant diversity.

Traditionally, ecological surveys of biodiversity would require many hours of painstaking work by taxonomic experts and trained non-experts, identifying large numbers of specimens to the species level based on morphology. Once an eDNA sequencing study is designed, adding more eDNA samples is relatively trivial and inexpensive, so this method offers the ability to identify species at a scale that would be prohibitively expensive by traditional means. This makes it possible to produce much more data about which species live where for a much lower cost. Human taxonomic expertise is still a critical part of the equation though. According to Dr. Kuzmina, while "eDNA definitely becomes more and more cost-effective for ecological monitoring, it will never replace an expert for accurate identification of an individual specimen. These two approaches exist in two different dimensions, ideally complementing each other."

Sequencing eDNA presents its own challenges. As Dr. Kuzmina explains, "The experiment needs to be planned with all precautions to prevent contamination from outside, with negative controls, and thoughtful interpretation of the results." Judicious selection of DNA markers to test is also necessary to produce meaningful results. Most eDNA sequencing studies have focused on animal or microbial diversity, as plants can be a trickier target. That's because there are no universal plant DNA markers that can be applied across a wide number of plant species and still effectively identify samples down to the species level.

However, Dr. Kuzmina and colleagues designed their study to survey a narrower taxonomic scope, focusing only on the pondweed family (Potamogetonaceae). This design effectively rendered lack of universal plant DNA markers a moot point while at the same time delivering ecologically relevant information about pondweed diversity. "The goal was to design a method that can be used to detect rare or endangered species of pondweeds," says Dr. Kuzmina. "Narrowing the search allowed the method to be more sensitive and interpretation of the results more reliable."
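
As a simplified illustration of how reads from such a narrowly targeted marker might be assigned to species, the sketch below matches made-up reads against a tiny reference set by percent identity; real eDNA workflows rely on curated reference libraries, quality filtering and dedicated classifiers.

```python
# Hypothetical sketch: assign short marker reads to the closest reference
# sequence by percent identity. Sequences below are made up, not real barcodes.
references = {
    "Potamogeton natans":  "ATGGCTTACCCATTAGGAACC",
    "Potamogeton crispus": "ATGGCTTTCCCGTTAGGAACC",
    "Stuckenia pectinata": "ATGCCTTACCCATTGGGTACC",
}

def identity(a, b):
    """Fraction of matching positions (sequences assumed pre-aligned, equal length)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

reads = ["ATGGCTTACCCATTAGGAACC", "ATGCCTTACCCATTGGGTACC"]  # made-up eDNA reads

MIN_IDENTITY = 0.95  # assumed threshold for a species-level call

for read in reads:
    species, score = max(((sp, identity(read, ref)) for sp, ref in references.items()),
                         key=lambda t: t[1])
    call = species if score >= MIN_IDENTITY else "no confident assignment"
    print(f"{read} -> {call} ({score:.0%})")
```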

Pondweeds are an excellent candidate for eDNA sequencing beyond their tractable DNA markers. They are quite difficult to identify to species level based on morphology, relying on microscopic traits and requiring substantial expertise, and often live in aquatic habitats that are difficult to access. Pondweeds are also useful in habitat classification and are important bioindicators of aquatic ecosystem health, as different species are adapted to different water temperatures and chemical compositions. Additionally, seven species of North American pondweeds are endangered species, including two in Ontario, where this study was conducted.

"In broader application, the combination of pondweed species may be used as a 'fingerprint' of freshwater ecosystems, indicating quality of water and showing how this system is suitable for other freshwater organisms such as fishes and invertebrates," explains Dr. Kuzmina. This study found that pondweed diversity had been underestimated at the rare Charitable Research Reserve in Ontario, and detected the presence of three species of pondweed previously unknown in the reserve.

Sequencing of eDNA is a promising avenue for delivering large volumes of high-quality data on where species occur. These methods are currently being refined to answer specific questions such as detection of endangered or invasive species, or as bioindicators of water or soil quality. In this study, Dr. Kuzmina and colleagues showed that targeted eDNA sequencing of specific plant groups like pondweeds can yield important ecological information.

Credit: 
Botanical Society of America