Culture

How many gender subtypes exist in the brain?

The terminology humans have conceived to explain and study our own brain may be misaligned with how these constructs are actually represented in nature. For example, in many human societies, when a baby is born either a "male" or a "female" box is checked on the birth certificate. Reality, however, may be less black and white. In fact, the assumption of dichotomous differences between only two sex/gender categories may be at odds with our endeavour to carve nature at its joints. Such is the case with a new paper, published recently in the journal Cerebral Cortex, in which researchers argue that there are at least nine directions of brain-gender variation.

Many classical statistical approaches pre-assume which groups they expect to see in the data, such as old vs. young participants or introverted vs. extroverted participants. Everything that follows critically depends on that initial decision to assign individuals to strict groups. In this new study, the researchers did not pre-assume what the brain-gender groups, transcending male, female, and individuals in between, should be. Instead, they derived the brain-gender groups directly from brain-imaging and psychological assessment items in an agnostic, data-driven fashion.

"Our goal was to demonstrate that widely available brain-imaging methods are capable of providing evidence against a strict binary view of how sex/gender is manifested in the brain," explains Dr. Danilo Bzdok, Associate Professor in the Department of Biomedical Engineering at McGill University's Faculty of Medicine and a senior author on the paper. "These findings have important consequences for the movement towards improved equity, diversity, and inclusion in Canada and other countries. By raising awareness from the biological perspective we may contribute to building a society where individuals identifying themselves in between the labels of male and female do feel included rather than discriminated against."

Pulling the data together

In order to conduct their study, the researchers acquired a unique dataset comprising individuals of wide sex/gender diversity. Rather than only studying gender behaviour in a male and a female group, as is commonly done, they acquired a rich sample that also included individuals who had undergone sex transformation from male to female as well as individuals who had undergone sex transformation from female to male. The measured brain connectivity fingerprints of these four groups were then related to a comprehensive profile of gender-stereotypical behavioural traits, in close collaboration with Professor Ute Habel and Dr. Benjamin Clemens at the Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University.

The researchers used machine learning algorithms to provide evidence that sex/gender may not be a dichotomous entity in the human brain. In an unbiased pattern-learning approach, they showed that at least nine dimensions of brain-gender variation can be robustly identified. That is, each individual can be assigned a position along nine "expressions", or coordinate axes, describing where they fall along a particular dimension of brain-gender variation.
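The release does not spell out the exact analysis pipeline, but the core idea of letting the dimensions emerge from the data rather than from pre-assigned groups can be sketched with off-the-shelf tools. The snippet below is a hypothetical illustration only: the use of PCA, the feature counts, and all variable names are assumptions for the sake of the example, not the authors' published method.

```python
# Hypothetical sketch: deriving "dimensions of variation" from data without
# pre-assigning participants to groups. Not the study's actual pipeline.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated placeholders for real measurements: brain connectivity
# fingerprints and gender-stereotypical behaviour scores.
n_participants = 200
connectivity = rng.normal(size=(n_participants, 300))   # e.g., region-pair correlations
behaviour = rng.normal(size=(n_participants, 40))        # e.g., questionnaire items

# Combine and standardise the two data sources, then let an unsupervised
# method find axes of variation instead of imposing male/female labels.
features = StandardScaler().fit_transform(np.hstack([connectivity, behaviour]))
pca = PCA(n_components=9).fit(features)

# Each participant now gets a coordinate along each derived axis,
# rather than a single binary label.
scores = pca.transform(features)          # shape: (n_participants, 9)
print(pca.explained_variance_ratio_)
print(scores[0])                          # one person's position on the 9 axes
```

In the actual study the axes were derived from real imaging and assessment data and checked for robustness; the point of the sketch is only that the number and nature of the axes come out of the data rather than being assumed beforehand.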

"My lab works at the interface between systems neuroscience and tailoring machine-learning algorithms to answer questions in large neuroscience datasets," says Dr. Bzdok, who recently moved to Montreal to join the McGill community. "Montreal has the advantage of combining world-class neuroscience institutions, such as The Neuro, with top-notch Artificial Intelligence institutions, such as the Mila Quebec AI Institute, in the same city. In both of these research areas, there is a lot of legacy and, now, momentum to build a critical mass to promote forward progress. As such, Montreal is a particularly promising place that is likely to make important contributions to bridging neuroscience and AI."

Moving the research forward

Dr. Bzdok is optimistic that budding clinical consortium initiatives will allow researchers to pool even richer, multi-modal datasets that capture even more facets of sex/gender variation existing in the wider population. From a data analytics standpoint, he explains that the more data we can gather, the more likely it is that we will discover a greater number of sex/gender dimensions.

"I am currently reaching out to various investigators across the McGill community to try to take these and other projects to the next level," shares Dr. Bzdok. "Such questions of mapping societally-relevant behavioural variation to brain variation can now be addressed from cross-cutting perspectives including genetics, genomics, interventional responses such as from temporary brain lesions, immunological markers, and so many more. McGill provides fertile ground to work towards such ambitious questions."

Credit: 
McGill University

Study: inequality between men and women dramatic in Houston area

image: Gender and sexuality are major factors in shaping the experience of Harris County residents, often inequitably, a new report finds.

Image: 
University of Houston

A new study by the University of Houston Institute for Research on Women, Gender & Sexuality (IRWGS) reports that women lag far behind men on multiple fronts in Harris County. Women are almost 50% more likely to live in poverty than men, and the wage gap between men and women by race and ethnicity is considerably greater here than nationally. The report presents both new and summary analyses of select data on gender and sexuality, derived from the 2017 American Community Survey and other sources.

"These and other data in the report demonstrate that gender and sexuality are major factors in shaping the experience of Harris County residents, often inequitably. We've made some progress in moving toward equitable inclusion of all talented workers in the workforce including women, but there's far to go," said IRWGS director Elizabeth Gregory. "An important element seems to be that we haven't found a way to equitably provide child-care support for working families. As a result, mothers get stuck slowing their careers where it might not be to their, their family's or the community's advantage to do that."

Key findings of the report include:

In Harris County during 2017, women's poverty rate (15.3%) was nearly 50% higher than the male poverty rate (10.4%). The gender gap is higher than observed nationally, statewide (Texas) and in comparable counties in the United States.

The wage gap between men and women by race/ethnicity in Harris County is considerably greater than the national wage gap, with the median non-Hispanic (NH) white woman here making 69.4 cents on the dollar made by the median NH white man; the median NH Asian woman making 63.6 cents; the median NH black woman making 47.1 cents; and the median Hispanic woman making 33.5 cents.

Thirty percent of Harris County women with minor children in the home were unpartnered, with a median household income of $31,600 and 36.0% living at or below the poverty line. By contrast, 8.2% of men with minor children in the home were unpartnered, with a median household income of $54,000 and 17.2% at or below the poverty line. Harris County children with partnered parents had a median family income of $78,000, with 11.2% living below the poverty line.

Between birth and high school graduation, most U.S. children lack access to public school or care during work hours for 63% of that period. The new availability of pre-K for some four-year-olds in Harris County lowers the share of time without public school or care to 60%.

Texas and Harris County follow the national trend with rapidly declining teen fertility rates: The Texas teen fertility rate declined 59.1% between 2007 (the start of the recent recession) and 2018 and the Harris County rate fell 60.9% in that interval. Nonetheless, these rates remain high compared to national rates.

Though demographic data on sexuality is imprecise, recent U.S. Census data indicates that among same-sex cohabiting partners in Harris County, 59% were men, and 41% were women.

Gregory notes that the LGBT community is incompletely documented due to persistent social risk.

"Data on the LGBT community is limited, like much gender and sexuality data that operates in a context of risk --in this case due to lack of employment protection and other social stigma," said Gregory. The center works to responsibly document such data in ways that minimize risk.

Founded in 2019, the University of Houston Institute for Research on Women, Gender & Sexuality is the region's first gender- and sexuality-focused think tank. The institute's goal is to provide evidence-based data and analyses to amplify discussion around the long-unexamined social and economic forces linked to gender and sexuality, and to engender positive change.

"Instead of taking gender and sexuality disparities for granted as inevitable, people should start talking about them over dinner with friends and asking, 'How can we alter this?' 'What are the real economic dynamics of gender, and how can they be improved upon?' While gender has long served as a work-assignment system and has given men and women different jobs in the home and workforce and different pay scales, the old patterns don't make sense anymore, for employers or for families," said Gregory.

Credit: 
University of Houston

Memory games: Eating well to remember

image: The link between food group and memory status may vary among different age groups.

Image: 
Tijana Drndarski/Unsplash.com

A healthy diet is essential to living well, but as we age, should we change what we eat?

UTS research fellow Dr Luna Xu has studied data from 139,000 older Australians and found strong links between certain food groups, memory loss and comorbid heart disease or diabetes.

Dr Xu found high consumption of fruit and vegetables was linked to lowered odds of memory loss and its comorbid heart disease. High consumption of protein-rich foods was associated with a better memory.

Dr Xu also found the link between food group and memory status may vary among different older age groups. People aged 80 years and over with a low consumption of cereals are at the highest risk of memory loss and its comorbid heart disease, her research showed.
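As a rough illustration of the kind of association analysis described here (not Dr Xu's actual method), one could estimate the odds of memory loss from food-group intake separately within age bands; a link that "varies among different older age groups" would then show up as different coefficients across bands. The dataset, variable names, and model below are hypothetical placeholders.

```python
# Hypothetical sketch: age-stratified logistic regression linking food-group
# intake to the odds of memory loss. Illustrative only; not the study's analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(65, 95, n),
    "cereal_servings": rng.poisson(2, n).astype(float),
    "fruit_veg_servings": rng.poisson(4, n).astype(float),
    "memory_loss": rng.integers(0, 2, n),   # placeholder binary outcome
})
df["age_band"] = pd.cut(df["age"], bins=[64, 79, 120], labels=["65-79", "80+"])

# Fit the same model within each age band; differing odds ratios would
# indicate that the food-memory link varies across older age groups.
for band, sub in df.groupby("age_band", observed=True):
    model = smf.logit("memory_loss ~ cereal_servings + fruit_veg_servings", data=sub).fit(disp=0)
    odds_ratios = np.exp(model.params[["cereal_servings", "fruit_veg_servings"]])
    print(band, odds_ratios.round(2).to_dict())
```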

"Our present study implies that the healthy eating suggestions of cereals consumption in the prevention of memory loss and comorbid heart disease for older people may differ compared to other age groups," said Dr Xu, who holds a Heart Foundation postdoctoral research fellowship.

She said the study pointed to a need for age-specific healthy dietary guidelines.

Memory loss is one of the main early symptoms for people with dementia, which is the second leading cause of death of Australians. People living with dementia have on average between two and eight comorbid conditions, which may accelerate cognitive and functional impairment. The most common comorbidities in dementia include cardiovascular diseases, diabetes and hypertension.

"The dietary intervention in chronic disease prevention and management, by taking into consideration the fact that older populations often simultaneously deal with multiple chronic conditions, is a real challenge," Dr Xu said.

"To achieve the best outcome for our ageing population, strong scientific evidence that supports effective dietary intervention in preventing and managing co-occurring chronic conditions, is essential."

Credit: 
University of Technology Sydney

Complications of measles can include hepatitis, appendicitis, and viral meningitis, doctors warn

The complications of measles can be many and varied, and more serious than people might realise, doctors have warned in the journal BMJ Case Reports after treating a series of adults with the infection.

Measles is a highly contagious respiratory viral infection, the symptoms of which include fever, cough, conjunctivitis, and an extensive rash all over the body.

Measles is entirely preventable as the vaccine used to immunise against it is safe and very effective.

But in the past few decades, unfounded fears about the vaccine have allowed measles to re-emerge as a health scourge around the world, with rising numbers of cases in teens and adults, say the authors.

In 2017 the global death toll from measles reached 110,000. Most of these deaths were in young children.

The authors treated three people with measles who had additional complications as a direct result of their infection.

The first case concerned a young man who had received only the first of two doses of measles vaccine as a child. He was additionally diagnosed with hepatitis.

The second case involved a young woman who developed appendicitis associated with measles. In the third case, a middle-aged man complained of blurred vision and severe headache. He was diagnosed with viral meningitis, caused by his measles infection.

All three people recovered fully after appropriate treatment and care, and none had any long-lasting health problems as a result of their illness.

But because measles suppresses the immune system, it has been associated with complications in every organ of the body, note the authors. Almost a third of all reported cases are associated with one or more complications, they point out.

These can include pneumonia, febrile seizures, and encephalomyelitis (inflammation of the brain and spinal cord), which causes neurological problems.

Another possible complication of measles is SSPE (subacute sclerosing panencephalitis), a progressive neurological disorder that causes permanent nervous system damage and leads to a vegetative state.

"Large outbreaks with fatalities are currently ongoing in European countries which had previously eliminated or interrupted endemic transmission," write the authors, adding that in the first six months of 2019 alone, 10,000 measles cases were reported in Europe.

They attribute the rise in new cases to negative publicity in the early 2000s linking the measles, mumps and rubella (MMR) vaccine to autism, despite major studies proving otherwise. This prompted a fall in vaccine uptake and, with it, in collective ('herd') immunity.

"Urgent efforts are needed to ensure global coverage with two-dose measles vaccines through education and strengthening of national immunisation systems," they conclude.

Credit: 
BMJ Group

An intelligent and compact particle analyzer

IMAGE: (a) A schematic of the experimental setup for measuring particle suspensions showing the optical hardware consisting of a fiber coupled LED, a CMOS image sensor camera and the polymer angular...

Image: 
by Rubaiya Hussain, Mehmet Alican Noyan, Getinet Woyessa, Rodrigo R. Retamal Marín, Pedro Antonio Martinez, Faiz M. Mahdi, Vittoria Finazzi, Thomas A. Hazlehurst, Timothy N. Hunter, Tomeu Coll, Michael Stintz,...

In many industrial and environmental applications, determining the size and distribution of microscopic particles is essential. For example, in the pharmaceutical industry, inline measurement and control of particles containing various chemical ingredients (before consolidation into tablets) may critically enhance the yield and quality of the final medical product. The air we breathe, the water we drink and the food we eat can also contain many types of unhealthy particulates, which are crucial to detect for our health and wellbeing.

In a new paper published in Light: Science & Applications, a team of European scientists and engineers from ICFO and IRIS in Spain, Ipsumio B.V. in the Netherlands, the Technical University of Denmark, the Technische Universität Dresden in Germany and the University of Leeds in the UK has developed a new micro-particle size analyser by combining consumer electronics products and artificial intelligence. The device, an order of magnitude smaller in size, weight and cost, measures particle size with a precision at least comparable to that of commercial light-based particle size analysers.

"The EU funded project ProPAT aimed to deliver new sensors for industrial applications. The innovation developed by ICFO is a great example of such sensor. The feedback from pilot scale testing in real world conditions and end-industries moved the sensor from a lab device to potential applicability in industrial environments," says Frans Muller, professor in chemical process engineering at the University of Leeds and the technical manager of ProPAT.

Conventionally, laser diffraction (LD)-based particle size analysers (PSAs) are widely used for measuring particle sizes from hundreds of nanometres to several millimetres. In such devices, laser light focused onto a dilute particle sample produces a diffraction (scattering) pattern, which is measured by an array of light detectors and converted into a particle size distribution using well-established scattering theory. These devices are precise and reliable but large (each dimension on the order of half a metre), heavy (tens of kg) and expensive (often costing on the order of a hundred thousand dollars or more). In addition, their complexity, together with the fact that they often require maintenance and highly trained personnel, makes them impractical, for example, in the majority of online industrial applications, which require installation of probes in processing environments, often at multiple locations.

The newly developed PSA works in a collimated-beam configuration using a simple light-emitting diode (LED) and a single complementary metal-oxide-semiconductor (CMOS) image sensor, similar to those used in smartphones. The key innovation is a small angular spatial filter (ASF), extruded from a polymer rod, containing an array of holes with different diameters. On illuminating the target sample, light scatters and passes through the ASF onto the sensor. Light collected through holes of different sizes is representative of different sets of scattering angles. An ad hoc machine learning (ML) model converts the sensor image into particle sizes. The same device can easily be converted into a hazemeter, an essential instrument for characterizing many optical materials.

"It is very exciting to see how a simple combination of consumer photonic components, such as an LED and a phone camera, an innovative angular filter fabricated using mass-scalable photonic crystal fiber extrusion and machine learning data processing has allowed us to make such a compact, cheap and precise device", says Rubaiya Hussain, first author of the paper and PhD candidate in the Optoelectronics group at ICFO.

In order to validate the new PSA, mixtures of water and glass beads with sizes in the range from 13 to 125 micrometres were tested at several process concentrations in liquid dispersions. Laser diffraction systems cannot measure such high concentrations because light is scattered multiple times, resulting in scattering patterns that cannot be converted into particle sizes. Using the random forest machine learning algorithm, the data from the new device could be analysed successfully, increasing the working range of particle sizes and concentrations that can be measured.
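The release names the random forest algorithm but not the exact features or targets. The sketch below shows, under assumed inputs, how scattered-light intensities collected behind the different ASF holes might be mapped to a particle-size value with a random forest regressor; the feature construction, names, and targets are placeholders, not the published model.

```python
# Hypothetical sketch: mapping per-hole scattered-light intensities from the
# angular spatial filter (ASF) to a particle size with a random forest.
# Illustrative only; the published feature extraction and targets may differ.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

n_samples, n_holes = 500, 32
# Each row: mean intensity behind each ASF hole (one hole ~ one set of angles).
X = rng.random((n_samples, n_holes))
# Target: median particle size D50 in micrometres (placeholder values 13-125).
y = 13 + 112 * rng.random(n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

pred = model.predict(X_test)
rel_err = np.abs(pred - y_test) / y_test
print(f"median relative error: {np.median(rel_err):.1%}")  # meaningless on random data
```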

"We used the PSA device built at ICFO in Barcelona to collect data from different particle size ranges and concentrations of standard glass beads. According to the obtained results and our experience, we were pleased to see that the precision of a few % of the median volume particle size (D50) is comparable with other measurement techniques (e.g. LD) in micrometre range", says Dipl.-Ing. Rodrigo R. Retamal Marín, researcher in the Mechanical Process Engineering group at theTechnische Universität Dresden.

Future improvements in the optical hardware are also being designed. In particular, further optimisation of the innovative ASF component and refined data capturing methods are being undertaken, to produce larger, higher fidelity datasets for the machine learning algorithm. Future work will also include analysis of non-spherical particles, collected with well-designed sample feeding systems for both dry and wet measurements, leading to high precision analysis for a range of industrially relevant systems.

"We intend to utilize the inherent flexibility of the simple design and low hardware cost of our proprietary PSA for specific applications, for example online or at-line monitoring, and we are looking for partners from various Industries and Research Institutions", says Valerio Pruneri, ICREA Professor at ICFO and leading author of the work.

Credit: 
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences

Examining how often care, ICU admissions were consistent with treatment-limiting orders near end of life

What The Study Did: Patients with chronic life-limiting illnesses often have medical orders with treatment limitations in place regarding medical interventions and intensive care unit admissions near the end of their lives. This observational study included about 1,800 patients with such orders who were hospitalized within six months of their death to examine how often care was consistent with those orders.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

Authors: Robert Y. Lee, M.D., M.S., of the University of Washington in Seattle, is the corresponding author.

(doi:10.1001/jama.2019.22523)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the articles for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Researchers create new tools to monitor water quality, measure water insecurity

EVANSTON, Ill. --- A wife-husband team will present both high-tech and low-tech solutions for improving water security at this year's American Association for the Advancement of Science (AAAS) annual meeting in Seattle on Sunday, Feb. 16. Northwestern University's Sera Young and Julius Lucks come from different ends of the science spectrum but meet in the middle to provide critical new information to approach this global issue.

Lucks, associate professor of chemical and biological engineering in Northwestern's McCormick School of Engineering and an internationally recognized leader in synthetic biology, is developing a new technology platform to allow individuals across the globe to monitor the quality of their water cheaply, quickly and easily. Lucks will discuss how advances made at Northwestern's Center for Synthetic Biology are making these discoveries possible in his presentation "Rapid and Low-cost Technologies for Monitoring Water Quality in the Field."

In "A Simple Indicator of Global Household Water Experiences," Young, associate professor of anthropology in Northwestern's Weinberg College of Arts and Sciences, will discuss the Household Water Insecurity Experiences Scale (HWISE.org), the first globally equivalent scale to measure experiences of household-level water access and use. Young led a large consortium of scholars in the development of the HWISE Scale, which permits comparisons across settings to quantify the social, political, health and economic consequences of household water insecurity. The HWISE Scale is already being used by scientists and governmental- and non-governmental organizations around the world, including the Gallup World Poll.

Both presentations will be delivered alongside representatives from the World Bank and UNESCO as part of the session "Managing Water: New Tools for Sustainable Development," to be held from 3:30 to 5 p.m. on Sunday, Feb. 16, at the Washington State Convention Center.

Prior to the 3:30 p.m. panel on Sunday, Young and Lucks will participate in an Expo Stage Debrief: "Managing Water: New Tools for Sustainable Development?" which will be held at 11:30 a.m. in the Expo Hall at the convention center. The discussion will be livestreamed.

Credit: 
Northwestern University

Journalism is an 'attack surface' for those who spread misinformation

image: "Believing things that aren't true when it comes to health can be not just bad for us, but dangerous," said Dan Gillmor, co-founder of the News Co/Lab at Arizona State University's Walter Cronkite School of Journalism and Mass Communication. "Journalists have a special duty to avoid being fooled, and they can help us learn to sort out truth from falsehood ourselves."

Image: 
Arizona State University

For all the benefits of an expanded media landscape, we're still struggling with the spread of misinformation--and the damage is especially worrisome when it comes to information about science and health.

"Believing things that aren't true when it comes to health can be not just bad for us, but dangerous," said Dan Gillmor, co-founder of the News Co/Lab at Arizona State University's Walter Cronkite School of Journalism and Mass Communication. "Journalists have a special duty to avoid being fooled, and they can help us learn to sort out truth from falsehood ourselves."

Gillmor will discuss his work, which focuses on improving media literacy, during a panel presentation on Feb. 15 as part of the annual meeting of the American Association for the Advancement of Science (AAAS) in Seattle, Washington.

His presentation is anchored by warnings from security experts, aimed at media consumers and journalists who may sometimes unwittingly amplify bad information. Those experts label journalism as an "attack surface" for those who are looking to intentionally spread misinformation.

His discussion is particularly timely as the country moves into the 2020 election season, when the stakes become higher on local and national levels.

"We need to get better ourselves at sorting out what we can trust, and understanding our roles as part of a digital ecosystem in which we're sharers and creators as well as consumers," Gillmor said.

The News Co/Lab, which has received support from the Facebook Journalism Project, Craig Newmark Philanthropies, Democracy Fund, Rita Allen Foundation and News Integrity Initiative, collaborates with a number of partners to find new ways to increase public understanding of the news. Research suggests expertise in the area is sorely needed.

A report released by the News Co/Lab and the Center for Media Engagement at The University of Texas at Austin revealed that nearly a third of media consumers with a college education could not identify a fake news headline. And, consumers with negative attitudes about the news media were less likely to be able to spot fake news or distinguish opinion from analysis or advertising.

Gillmor's presentation will pivot on the need for a better understanding of news and the media, how journalists can prepare for and respond to misinformation, and how consumers can learn to parse what they read and watch so that they don't unknowingly traffic in information that was intentionally designed to be misleading. For its part, the News Co/Lab recently received a grant for a new media literacy project that will include outreach events across the U.S., a massive open online course on digital media literacy, and digital and social media content.

Credit: 
Arizona State University

In court, far-reaching psychology tests are unquestioned

Psychological tests are important instruments used in courts to aid legal decisions that profoundly affect people's lives. They can help determine anything from parental fitness for child custody, to the sanity or insanity of a person at the time of a crime, to eligibility for capital punishment.

While the tests are increasingly used in courts, new research shows they are not all scientifically valid, and once introduced into a case they are rarely challenged, according to Tess Neal, an assistant professor of psychology at Arizona State University.

"Given the stakes involved one would think the validity of such tests would always be sound," Neal said. "But we found widespread variability in the underlying scientific validity of these tests."

The problem is made worse because the courts are not separating the good from the bad.

"Even though courts are required to screen out 'junk science,' nearly all psychological assessment evidence is admitted into court without even being screened," Neal said.

Neal was speaking today (February 15) at the annual meeting of the American Association for the Advancement of Science in Seattle. She presented her findings in the talk "Psychological assessments and the law: Are courts screening out 'junk science'?"

In a two-part investigation, Neal and her colleagues found varying degrees of scientific validity among 364 commonly used psychological assessment tools employed in legal cases. The researchers looked at 22 surveys of experienced forensic mental health practitioners to find which tools are used in court. With the help of 30 graduate students and postdocs, they examined the scientific foundations of the tools, focusing on legal standards and scientific and psychometric theory.

The second part of the study was a legal analysis of admissibility challenges with regard to psychological assessments, focusing on legal cases from across all state and federal courts in the U.S. for a three-year period (2016-2018).

"Most of these tools are empirically tested (90%), but we could only clearly identify two-thirds of them being generally accepted in the field and only about 40% as having generally favorable reviews of their psychometric and technical properties in authorities like the Mental Measurements Yearbook," Neal explained.

"Courts are required to screen out the 'junk science,' but rulings regarding psychological assessment evidence are rare. Their admissibility is only challenged in a fraction of cases (5.1%)," Neal said. "When challenges are raised, they succeed only about a third of the time."

"Challenges to the most scientifically suspect tools are almost nonexistent," Neal added. "Attorneys rarely challenge psychological expert assessment evidence, and when they do, judges often fail to exercise the scrutiny required by law."

What is needed is a different approach. In their open-access paper in Psychological Science in the Public Interest, Neal and her colleagues offer concrete advice for solving these problems to psychological scientists, mental health practitioners, lawyers, judges and members of the public interacting with psychologists in the legal system.

"We suggest that before using a psychological test in a legal setting, psychologists ensure its psychometric and context-relevant validation studies have survived scientific peer review through an academic journal, ideally before publication in a manual," Neal explained. "For lawyers and judges, the methods of psychologist expert witnesses can and should be scrutinized, and we give specific suggestions for how to do so."

Credit: 
Arizona State University

The verdict is in: Courtrooms seldom overrule bad science

In television crime dramas, savvy lawyers are able to overcome improbable odds to win their cases by presenting seemingly iron-clad scientific evidence. In real-world courtrooms, however, the quality of scientific testimony can vary wildly, making it difficult for judges and juries to distinguish between solid research and so-called junk science.

This is true for all scientific disciplines, including psychological science, which plays an important role in assessing such critical pieces of testimony as eyewitness accounts, witness recall, and the psychological features of defendants and litigants.

A new, multiyear study published in Psychological Science in the Public Interest (PSPI), a journal of the Association for Psychological Science (APS), finds that only 40% of the psychological assessment tools used in courts have been favorably rated by experts. Even so, lawyers rarely challenge their conclusions, and when they do, only one third of those challenges are successful.

"Although courts are required to screen out junk science, legal challenges related to psychological-assessment evidence are rare," said Tess M.S. Neal of Arizona State University, one of the authors of the report. The other authors are Michael J. Saks of Arizona State University, Christopher Slobogin of Vanderbilt University Law School, David Faigman of the University of California Hastings School of Law, and Kurt F. Geisinger of the University of Nebraska-Lincoln.

"Although some psychological assessments used in court have strong scientific validity, many do not. Unfortunately, the courts do not appear to be calibrated to the strength of the psychological-assessment evidence," said Neal.

The new APS report examines more than 360 psychological assessment tools that have been used in legal cases, along with 372 legal cases from across all state and federal courts in the United States during the calendar years 2016, 2017, and 2018.

These findings are also being presented at the 2020 American Association for the Advancement of Science (AAAS) meeting in Seattle.

Psychological scientists provide expert evidence in a variety of court proceedings, ranging from custody disputes to disability claims to criminal cases. In developing their expert evaluation of, for example, a defendant's competence to stand trial or a parent's fitness for child custody, they may use tools that measure personality, intelligence, mental health, social functioning, and other psychological features. A number of federal court decisions and rules give judges the latitude to gauge the admissibility of evidence, largely by evaluating its empirical validity and its acceptance within the scientific community.

For their review, Neal and her colleagues gathered results from 22 surveys of psychologists who serve as forensic experts in legal cases. They reviewed the 364 psychological assessment tools that the respondents reported having used in providing expert evidence. They found that nearly all of those tools have been subjected to scientific testing, but only about 67 percent are generally accepted by the psychological community at large. What's more, only 40% of the tools have generally favorable reviews in handbooks and other sources of information about psychological tests.

The scientists also found that legal challenges to the admission of assessment evidence are rare, occurring in only about 5% of cases they reviewed. And only a third of those challenges succeeded.

According to the report: "Attorneys rarely challenge psychological expert assessment evidence, and when they do, judges often fail to exercise the scrutiny required by law."

In an accompanying commentary, David DeMatteo, Sarah Fishel, and Aislinn Tansey, psychology and legal scholars at Drexel University, call for more research on whether trial court judges are functioning as effective gatekeepers for expert testimony. They point to studies indicating that many judges admit evidence from methodologically flawed studies and others that show attorneys and jurors lack the scientific literacy necessary to scrutinize scientific evidence. The Drexel scholars also called on forensic psychologists to ensure they use scientifically sound assessment tools when providing expert evaluations in legal settings.

Credit: 
Association for Psychological Science

The paradox of dormancy: Why sleep when you can eat?

image: Daphnia (zooplankton) carrying a resting egg.

Image: 
Image by Dieter Ebert, reproduced from Wikimedia Commons under Creative Commons Attribution-Share Alike 4.0 International license

Why do predators sometimes lay dormant eggs -- eggs which are hardy, but take a long time to hatch, and are expensive to produce?

That is the question that researchers from Singapore University of Technology and Design (SUTD) set out to answer in a recent paper published in Advanced Science, a cutting-edge research journal with an impact factor of 15.804 (2019 Journal Citation Reports).

The traditional answer is that these hardy eggs allow the population to survive harsh environmental conditions, like winter or drought. However, this does not explain why dormant eggs are laid even in non-seasonal habitats, such as tropical lakes.

The team of researchers led by Assistant Professor Kang Hao Cheong from SUTD, in collaboration with Dr Eugene V. Koonin, senior investigator at the National Institutes of Health, has discovered an alternative explanation: dormancy is a naturally occurring response to over-predation. In non-seasonal habitats, prey organisms, such as algae in a lake, grow to very large populations. This leads their predators, such as zooplankton, to consume them at a high rate and grow in population as well. Eventually, this leads to over-consumption. As the algae population collapses, little food is left for the large number of zooplankton, which then begin to starve and die.

It is during this period of food scarcity that dormancy makes a lot of sense. If a zooplankton had laid hardy, slow-hatching dormant eggs in advance, those eggs would likely hatch after the prey populations had recovered, allowing the offspring to survive and reproduce. On the other hand, if the zooplankton had laid only regular fast-hatching eggs, those eggs would likely hatch in the middle of the famine and would not aid much in the recovery of the zooplankton population. Eventually, zooplankton that lay dormant eggs would come to dominate the population.

In discovering this explanation, the researchers were inspired by a phenomenon called Parrondo's paradox. The paradox states that it is possible to alternate between a pair of losing strategies, such as losing bets in a gamble, and still end up winning. The researchers realized that dormancy is similarly paradoxical: when food is plentiful, laying dormant eggs looks like a losing strategy.

"Why spend extra energy laying dormant eggs, when your competitors are saving energy by laying regular eggs? And why invest in eggs that take longer to hatch, when your competitors are laying eggs that will hatch faster and grow quickly into adults? That was what we needed to explain," said Zhi-Xuan Tan, the lead author of the study. "Just like in Parrondo's paradox, we had a pair of losing strategies: the strategy of laying dormant eggs, and the strategy of remaining dormant as an egg instead of hatching."

As the researchers discovered, switching between these two losing strategies ensures survival against the food shortages created by over-predation.
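A toy simulation can make this logic concrete. The sketch below is a deliberately simplified caricature, not the model in the Advanced Science paper: the environment simply alternates between "plenty" and "famine" as over-predation causes booms and busts, regular eggs hatch immediately (so the hatchlings starve in a famine), and dormant eggs cost more but ride out the famine in an egg bank. All rates are invented for illustration.

```python
# Toy caricature of the dormancy argument (not the paper's model): boom-bust
# cycles caused by over-predation favour costly, slow-hatching dormant eggs.
# All rates below are invented for illustration.
import itertools

SEASONS = 40
seasons = itertools.cycle(["plenty", "famine"])  # over-predation -> alternating boom and bust

regular_pop = 100.0   # lineage laying cheap, fast-hatching eggs only
dormant_pop = 100.0   # lineage that also lays costly, slow-hatching dormant eggs
dormant_bank = 0.0    # dormant eggs waiting for conditions to recover

for season in itertools.islice(seasons, SEASONS):
    if season == "plenty":
        # Dormant eggs banked earlier hatch once the prey has recovered.
        dormant_pop += dormant_bank
        dormant_bank = 0.0
        # Cheap eggs let the regular lineage grow faster during the boom...
        regular_pop *= 3.0
        # ...while the dormant lineage grows more slowly but banks hardy eggs.
        dormant_bank = dormant_pop * 1.5
        dormant_pop *= 2.0
    else:  # famine once the over-consumed prey population collapses
        # Hatched animals starve; only the dormant egg bank rides out the famine.
        regular_pop *= 0.02
        dormant_pop *= 0.02

print(f"regular-egg lineage:  {regular_pop:.3g}")
print(f"dormant-egg lineage:  {dormant_pop + dormant_bank:.3g}")
```

In this caricature each strategy loses on its own terms (dormant eggs cost more during booms and delay hatching), yet combining reproduction in good seasons with a dormant egg bank across bad ones lets that lineage win out, which is the Parrondo-style switching the researchers describe.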

The implications of this study could go beyond explaining why predators lay dormant eggs. "One of the first applications of Parrondo's paradox was actually to explain a biological process: how molecular motors in our muscles could produce sustained directional movement", observed Assistant Professor Kang Hao Cheong from SUTD, the principal investigator for this study. "We believe that the relevance of Parrondo's paradox to biology might be wider still."

For example, the researchers suggest that Parrondo's paradox might also explain why bacteria-infecting viruses often alternate between a dormant lysogenic phase, where viruses incorporate their DNA into the bacterial genome, and an active and infectious lytic phase, which kills bacteria.

"Going further, we might even be able to explain the evolution of multicellular life," said Assistant Professor Cheong. "How did unicellular organisms start co-operating enough to form multicellular organisms, when cheating and taking advantage of other cells could often yield better results? As co-operation is a losing strategy in this context, we suspect that Parrondo's paradox might one day yield some answers."

Credit: 
Singapore University of Technology and Design

Computer-generated genomes

image: Beat Christen, Assistant Professor of Experimental Systems Biology and Dr. Matthias Christen in the Christen Lab at ETH Zurich

Image: 
ETH Zurich / Agnieszka Wormus

All organisms on our planet store the molecular blueprint of life in a DNA code within their genome. The digital revolution in biology, driven by DNA sequencing, enables us to read the genomes of the myriads of microbes and multi-cellular organisms that populate our world. Today, the DNA sequences of over 200,000 microbial genomes are deposited in digital genome databases and have exponentially increased our understanding of how DNA programs living systems. Using this incredible treasure trove of molecular building blocks, bio-engineers learn to read (sequence) and write (using chemical synthesis) long DNA molecules and to breed useful microbes with the help of computers.

In his research, Beat Christen, Professor of Experimental Systems Biology, and researchers in the Christen Lab at ETH Zurich in Switzerland use a digital genome design algorithm in conjunction with large-scale chemical DNA synthesis to physically produce artificial genomes and understand the code of life at the molecular level. The lab also uses systems and synthetic biology approaches to define essential genes across species that serve as the genetic parts to build microbial genomes for applications in sustainable chemistry, medicine and agriculture.

The research team has physically produced Caulobacter ethensis-2.0, the world's first fully computer-generated genome. Using a natural freshwater bacterium as a starting point, the researchers computed the ideal DNA sequence for chemical manufacturing and construction of a minimized genome composed solely of essential functions. In the design process, more than one-sixth of all 800,000 DNA letters in the artificial genome were replaced, and the entire genome was produced as a large ring-shaped DNA molecule. While a living cell does not yet exist, gene functions have been tested across the entire genome design. In these experiments, the researchers found that approximately 580 of the 680 artificial genes were functional, demonstrating the promise of the approach for producing designer genomes.
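One way to replace many DNA letters without altering the encoded proteins is synonymous codon substitution, which can be sketched in a few lines. The snippet below is a hypothetical illustration with a tiny excerpt of the genetic code, not the Christen Lab's design algorithm, which also has to satisfy many other constraints (regulatory elements, synthesis-friendliness, and so on).

```python
# Hypothetical sketch of synonymous recoding: change DNA letters while keeping
# the encoded protein identical. Not the actual genome design algorithm.

# A tiny excerpt of the standard genetic code (codon -> amino acid).
CODON_TO_AA = {
    "CTG": "L", "TTA": "L",   # leucine
    "GCC": "A", "GCA": "A",   # alanine
    "AAA": "K", "AAG": "K",   # lysine
}
# A preferred synonymous codon for each amino acid (arbitrary choice here).
PREFERRED = {"L": "TTA", "A": "GCA", "K": "AAG"}

def recode(gene: str) -> str:
    """Rewrite each codon to the preferred synonym for its amino acid."""
    codons = [gene[i:i + 3] for i in range(0, len(gene), 3)]
    return "".join(PREFERRED[CODON_TO_AA[c]] for c in codons)

original = "CTGGCCAAA"           # encodes L-A-K with one set of codon choices
rewritten = recode(original)      # same protein, different DNA letters
changed = sum(a != b for a, b in zip(original, rewritten))
print(rewritten, f"({changed} of {len(original)} letters changed)")
```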

During the AAAS 2020 session, "Synthetic Biology: Digital Design of Living Systems" (February 14th, 2020 at 3:30 PM PST), Christen will discuss possible future applications of synthetic genomes for industrial purposes and health benefits. He will also talk about the need for profound discussions in society about the challenges and purposes for which this technology can be used and, at the same time, about how potential for abuse can be prevented.

Professor Christen will be joined by Professor David Baker of the University of Washington, who will speak about Designer Proteins, and Joyce Tait of the University of Edinburgh, Scotland, who will speak on Risk Regulation, Uncertainty, and Ethics.

Credit: 
ETH Zurich

Pteropus giganteus: Evolutionary study finds rare bats in decline across Asia

image: A Pteropus giganteus flying fox comes home to roost in Peradeniya, Sri Lanka

Image: 
Photo: Skanda de Saram.

A study led by Susan Tsang, a former Fulbright Research Fellow from The City College of New York, reveals dwindling populations and widespread hunting throughout Indonesia and the Philippines of the world's largest bats, known as flying foxes.

Unfortunately, hunting not only depletes the flying foxes, which are already rare, but also potentially exposes humans to animal-borne pathogens (a process known as zoonosis). "For instance, the current case of Wuhan Coronavirus is thought to have been spread from wild bats to humans through an intermediate host at a wildlife market," said CCNY biologist and Tsang's mentor David J. Lohman, an entomologist and two-time Fulbright recipient.

The CCNY experts found that flying foxes originated in a group of islands in Indonesia called Wallacea. They diversified into different species by flying to other islands that presumably lacked competitors and established themselves. Thus, islands are critical to the evolution and conservation of this large group of around 65 mammal species.

"This study provides insight into biodiversity conservation and public health. Islands are frequently home to endemic species found nowhere else," noted Tsang, who earned a PhD in biology from the Graduate Center, CUNY.

Unfortunately, island-endemic species are more likely to be endangered or go extinct than continental species. Flying foxes are seed dispersers and pollinators of many ecologically and economically important plants, and forest trees on islands often depend on bats for regeneration.

Credit: 
City College of New York

Algorithms 'consistently' more accurate than people in predicting recidivism, study says

In a study with potentially far-reaching implications for criminal justice in the United States, a team of California researchers has found that algorithms are significantly more accurate than humans in predicting which defendants will later be arrested for a new crime.

When assessing just a handful of variables in a controlled environment, even untrained humans can match the predictive skill of sophisticated risk-assessment instruments, says the new study by scholars at Stanford University and the University of California, Berkeley.

But real-world criminal justice settings are often far more complex, and when a larger number of factors are useful for predicting recidivism, the algorithm-based tools performed far better than people. In some tests, the tools approached 90% accuracy in predicting which defendants might be arrested again, compared to about 60% for human prediction.

"Risk assessment has long been a part of decision-making in the criminal justice system," said Jennifer Skeem, a psychologist who specializes in criminal justice at UC Berkeley. "Although recent debate has raised important questions about algorithm-based tools, our research shows that in contexts resembling real criminal justice settings, risk assessments are often more accurate than human judgment in predicting recidivism. That's consistent with a long line of research comparing humans to statistical tools."

"Validated risk-assessment instruments can help justice professionals make more informed decisions," said Sharad Goel, a computational social scientist at Stanford University. "For example, these tools can help judges identify and potentially release people who pose little risk to public safety. But, like any tools, risk assessment instruments must be coupled with sound policy and human oversight to support fair and effective criminal justice reform."

The paper -- "The limits of human predictions of recidivism" -- was slated for publication Feb. 14, 2020, in Science Advances. Skeem presented the research on Feb. 13 in a news briefing at the annual meeting of the American Association for the Advancement of Science (AAAS) in Seattle, Wash. Joining her were two co-authors: Ph.D. graduate Jongbin Jung and Ph.D. candidate Zhiyuan "Jerry" Lin, who both studied computational social science at Stanford.

The research findings are important as the United States debates how to balance communities' need for security with efforts to reduce incarceration rates that are the highest of any nation in the world--and that disproportionately affect African Americans and communities of color.

If the use of advanced risk assessment tools continues and improves, that could refine critically important decisions that justice professionals make daily: Which individuals can be rehabilitated in the community, rather than in prison? Which could go to low-security prisons, and which to high-security sites? And which prisoners can safely be released to the community on parole?

Assessment tools driven by algorithms are widely used in the United States, in areas as diverse as medical care, banking and university admissions. They have long been used in criminal justice, helping judges and others to weigh data in making their decisions.

But in 2018, researchers at Dartmouth College raised questions about the accuracy of such tools in a criminal justice framework. In a study, they assembled 1,000 short vignettes of criminal defendants, with information drawn from a widely used risk assessment tool called the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS).

The vignettes each included five risk factors for recidivism: the individual's sex, age, current criminal charge, and the number of previous adult and juvenile offenses. The researchers then used Amazon's Mechanical Turk platform to recruit 400 volunteers to read the vignettes and assess whether each defendant would commit another crime within two years. After reviewing each vignette, the volunteers were told whether their evaluation accurately predicted the subject's recidivism.

Both the people and the algorithm were accurate slightly less than two-thirds of the time.

These results, the Dartmouth authors concluded, cast doubt on the value of risk-assessment instruments and algorithmic prediction.

The study generated high-profile news coverage--and sent a wave of doubt through the U.S. criminal justice reform community. If sophisticated tools were no better than people in predicting which defendants would re-offend, some said, then there was little point in using the algorithms, which might only reinforce racial bias in sentencing. Some argued such profound decisions should be made by people, not computers.

Grappling with "noise" in complex decisions

But when the authors of the new California study evaluated additional data sets and more factors, they concluded that risk assessment tools can be much more accurate than people in assessing potential for recidivism.

The study replicated the Dartmouth findings that had been based on a limited number of factors. However, the information available in justice settings is far more rich -- and often more ambiguous.

"Pre-sentence investigation reports, attorney and victim impact statements, and an individual's demeanor all add complex, inconsistent, risk-irrelevant, and potentially biasing information," the new study explains.

The authors' hypothesis: If research evaluations operate in a real-world framework, where risk-related information is complex and "noisy," then advanced risk assessment tools would be more effective than humans at predicting which criminals would re-offend.

To test the hypothesis, they expanded their study beyond COMPAS to include other data sets. In addition to the five risk factors used in the Dartmouth study, they added 10 more, including employment status, substance use and mental health. They also expanded the methodology: Unlike the Dartmouth study, in some cases the volunteers would not be told after each evaluation whether their predictions were accurate. Such feedback is not available to judges and others in the court system.

The outcome: Humans performed "consistently worse" than the risk assessment tool on complex cases when they didn't have immediate feedback to guide future decisions.

For example, the COMPAS correctly predicted recidivism 89% of the time, compared to 60% for humans who were not provided case-by-case feedback on their decisions. When multiple risk factors were provided and predictive, another risk assessment tool accurately predicted recidivism over 80% of the time, compared to less than 60% for humans.
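As a rough illustration of the kind of comparison being reported (not the study's actual data, models, or numbers), the sketch below trains a simple statistical classifier on a handful of risk factors and computes the same accuracy metric one would compute for human predictions; all data, factor names, and accuracy values are synthetic placeholders.

```python
# Hypothetical sketch: comparing the predictive accuracy of a simple statistical
# risk model against stand-in "human" judgments. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

# Placeholder risk factors (e.g., age, prior offenses, employment status, ...).
X = rng.normal(size=(n, 15))
# Synthetic re-arrest outcome loosely driven by a few of the factors plus noise.
logits = X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n)
rearrested = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, rearrested, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
algo_acc = accuracy_score(y_test, model.predict(X_test))

# Stand-in for human raters: a rough judgment based on only two factors.
human_pred = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
human_acc = accuracy_score(y_test, human_pred)

print(f"model accuracy: {algo_acc:.2f}  'human' accuracy: {human_acc:.2f}")
```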

The findings appear to support continued use and future improvement of risk assessment algorithms. But, as Skeem noted, these tools typically have a support role. Ultimate authority rests with judges, probation officers, clinicians, parole commissioners and others who shape decisions in the criminal justice system.

Credit: 
University of California - Berkeley

New findings from the Neotropics suggest contraction of the ITCZ

image: The University of New Mexico research team was led by Professor Yemane Asmerom (3rd from left) and included (l. to r.): Valorie Aquino, Keith Prufer and Victor Polyak. The team found contraction of the Intertropical Convergence Zone (ITCZ) during a warming Earth, leading in turn to drying of the Neotropics, including Central America.

Image: 
The University of New Mexico

Research by an international team of scientists led by University of New Mexico Professor Yemane Asmerom suggests contraction of the Intertropical Convergence Zone (ITCZ) during a warming Earth, leading in turn to drying of the Neotropics, including Central America, and aggravating current trends of social unrest and mass migration.

Positioned near the equator where the trade winds of the northern and southern hemisphere converge, the ITCZ is the world's most important rainfall belt affecting the livelihood of billions of people around the globe. Globally, seasonal shifts in the location of the ITCZ across the equator dictate the initiation and duration of the tropical rainy season. The behavior of the ITCZ in response to the warming of the Earth is of vital scientific and societal interest.

Previous work based on limited data suggested a southward migration of the ITCZ in response to global cooling, such as during the Little Ice Age a few hundred years ago. In contrast, modeling and limited observational data seemed to suggest the ITCZ expands and contracts in response to cooling and warming. Which of these scenarios is correct has huge implications for understanding rainfall variability and its economic and social impacts across the tropics. In order to resolve these seemingly contradictory alternatives, the authors undertook this paleoclimate reconstruction study from the margin of the ITCZ and combined it with existing data from across the full annual north-south excursion of the ITCZ.

The study titled, "Intertropical Convergence Zone Variability in the Neotropics During the Common Era," was published today in Science Advances. In addition to UNM, the research also includes scientists from the University of Durham (UK), Northumbria University (UK) and the University of California, Santa Barbara.

"Much of our understanding of ITCZ variability was based on records from South America, especially the Cariaco Basin (Venezuela), which was the gold standard," explained Asmerom. "But these studies were only able to present half of the picture. As a result, they suggested southward movement of the mean position of the ITCZ during cool periods of Earth, such as during the Little Ice Age, and by implication it shifts northward during warm periods.

"This would imply regions in the northern margin of the ITCZ, such as Central America would get wetter with warming climate. This contradicted modeling results suggesting drying as a consequence of warming."

With two testable hypotheses, Asmerom and his colleagues used 1,600 years of new bimonthly-scale speleothem rainfall reconstruction data from a cave site located at the northern margin of the ITCZ in Central America, coupled with published data from the full transect of the ITCZ excursion in Central America and South America. The combined data elucidate ITCZ variability throughout the Common Era, including the warmer Medieval Climate Anomaly and the cooler Little Ice Age. The results of this study are consistent with models suggesting ITCZ expansion and weakening during global periods of cold climate and contraction and intensification during periods of global warmth.

"Stable isotopic data obtained at Durham University, and trace element data and a precise uranium-series chronology, with an average 7 year uncertainty, obtained at the University of New Mexico, provided us with a nearly bi-monthly record of past climate variability between 400 CE to 2006. This level of resolution is unprecedented for continental climate proxies", said Polyak.

"What we found was that in fact during the Medieval Climate Anomaly Southern Belize was very dry, similar to modern central Mexico. In contrast, during the Little Ice Age cool period, when it should have been dry by the standard old model, it was the wettest interval over the last 2000 years," said Asmerom. "The pattern that emerges when all the data across the full transect of ITCZ excursion is supportive of the expansion-contraction model." The implication of this that regions currently in the margins of the ITCZ are likely to experience aridity with increased warming, consistent with modeling data from Central America. These data have important implications for rainfall-dependent agriculture system on which millions of people depend for food security.

Co-author and UNM Professor of Anthropology Keith Prufer is an environmental archaeologist, who has been conducting research in Belize for 25 years. "In the last five years there have been mass migrations of people in Guatemala and Honduras - partially driven by political instability, but also driven by drought-related conditions and changes in seasonality. This is creating enormous problems for agricultural production and feeding a growing population. There is growing evidence that these changes are a direct consequence of climate change."

"This work highlights the convergence of good science with policy relevancy. It also illustrates the strength of cross-disciplinary collaborative work, in this case international," said Asmerom.

Credit: 
University of New Mexico