Tech

A new study shows the relationship between surgery and Alzheimer's disease

image: Unidad de Deterioro Cognitivo

Image: 
Valdecilla

Amsterdam, January 21, 2021 - A new study published in the Journal of Alzheimer's Disease carried out by researchers at the Marqués de Valdecilla-IDIVAL University Hospital, in collaboration with researchers at the University of Bonn Medical Center, proposes that major surgery is a promoter or accelerator of Alzheimer's disease (AD). The first author of the publication was Carmen Lage and the principal investigator Pascual Sánchez-Juan.

AD is one of the greatest public health challenges. From the moment the first lesions appear in the brain to the clinical manifestations, up to 20 years can pass. Today we can detect the presence of these initial lesions through biochemical markers such as amyloid-β, one of the main proteins that accumulate in the brains of Alzheimer's patients. The frequency of amyloid-β deposits in healthy people increases with age, and after 65 years of age they are present in up to one-third of the population. However, it is not well understood what determines whether, in amyloid-β carriers, the disease progresses more or less rapidly towards dementia or remains inactive.

Carmen Lage said: "Although the phenomenon of cognitive deterioration after surgery has been known for a long time, there are few studies that relate it to AD. In the clinic, the patient's relatives frequently tell us that memory problems began after a surgical procedure or a hospital admission. This posed the following question: Is this just a recall bias or has surgery triggered the appearance of the symptoms in a previously affected brain?"

This is the question that motivated the work developed by researchers from the Marqués de Valdecilla University Hospital-IDIVAL exploring the relationship between cerebrospinal fluid (CSF) amyloid-β levels and surgery. The researchers administered cognitive tests to healthy individuals over the age of 65 before they underwent orthopedic surgery; obtained CSF samples to determine amyloid-β levels during anesthesia; and then administered the same tests again nine months later. The main result was that half of the patients showed worse cognition than before surgery, and those with altered amyloid-β levels exhibited a pattern compatible with the onset of AD, in which memory problems predominated.

Carmen Lage said: "Before the surgery, the memory test scores of the subjects with abnormal amyloid-β levels were indistinguishable from those of subjects with normal levels, and yet after surgery, they were significantly worse. These results lead us to the conclusion that major surgery can trigger different patterns of cognitive alterations, depending on the previous presence or absence of Alzheimer's pathological changes. While subjects without amyloid-β pathology showed a deterioration that does not affect memory, probably associated with factors intrinsic to the surgery itself, those with amyloid-β pathology suffered a cognitive deterioration that predominantly affected memory, and which was consistent with the first clinical manifestations of AD and therefore associated with greater probabilities of progression to dementia."

Dr. Pascual Sánchez-Juan added: "The progressive aging of our societies and the improvement in surgical technique mean that more and more elderly and fragile individuals are undergoing surgery. Pre-surgical evaluation always assesses whether cardiac or respiratory function will withstand the surgery; however, the potential consequences of the operation for the patient's brain are not usually determined. Our results would advocate that pre-surgical evaluation studies include cognitive tests, and even the analysis of Alzheimer's biomarkers, especially once these become widely available in plasma."

Credit: 
IOS Press

Designing customized "brains" for robots

Contemporary robots can move quickly. "The motors are fast, and they're powerful," says Sabrina Neuman.

Yet in complex situations, like interactions with people, robots often don't move quickly. "The hang up is what's going on in the robot's head," she adds.

Perceiving stimuli and calculating a response takes a "boatload of computation," which limits reaction time, says Neuman, who recently graduated with a PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Neuman has found a way to fight this mismatch between a robot's "mind" and body. The method, called robomorphic computing, uses a robot's physical layout and intended applications to generate a customized computer chip that minimizes the robot's response time.

The advance could fuel a variety of robotics applications, including, potentially, frontline medical care of contagious patients. "It would be fantastic if we could have robots that could help reduce risk for patients and hospital workers," says Neuman.

Neuman will present the research at this April's International Conference on Architectural Support for Programming Languages and Operating Systems. MIT co-authors include graduate student Thomas Bourgeat and Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Neuman's PhD advisor. Other co-authors include Brian Plancher, Thierry Tambe, and Vijay Janapa Reddi, all of Harvard University. Neuman is now a postdoctoral NSF Computing Innovation Fellow at Harvard's School of Engineering and Applied Sciences.

There are three main steps in a robot's operation, according to Neuman. The first is perception, which includes gathering data using sensors or cameras. The second is mapping and localization: "Based on what they've seen, they have to construct a map of the world around them and then localize themselves within that map," says Neuman. The third step is motion planning and control -- in other words, plotting a course of action.

These steps can take time and an awful lot of computing power. "For robots to be deployed into the field and safely operate in dynamic environments around humans, they need to be able to think and react very quickly," says Plancher. "Current algorithms cannot be run on current CPU hardware fast enough."

Neuman adds that researchers have been investigating better algorithms, but she thinks software improvements alone aren't the answer. "What's relatively new is the idea that you might also explore better hardware." That means moving beyond a standard-issue CPU processing chip that comprises a robot's brain -- with the help of hardware acceleration.

Hardware acceleration refers to the use of a specialized hardware unit to perform certain computing tasks more efficiently. A commonly used hardware accelerator is the graphics processing unit (GPU), a chip specialized for parallel processing. These devices are handy for graphics because their parallel structure allows them to simultaneously process thousands of pixels. "A GPU is not the best at everything, but it's the best at what it's built for," says Neuman. "You get higher performance for a particular application." Most robots are designed with an intended set of applications and could therefore benefit from hardware acceleration. That's why Neuman's team developed robomorphic computing.

The system creates a customized hardware design to best serve a particular robot's computing needs. The user inputs the parameters of a robot, like its limb layout and how its various joints can move. Neuman's system translates these physical properties into mathematical matrices. These matrices are "sparse," meaning they contain many zero values that roughly correspond to movements that are impossible given a robot's particular anatomy. (Similarly, your arm's movements are limited because it can only bend at certain joints -- it's not an infinitely pliable spaghetti noodle.)

The system then designs a hardware architecture specialized to run calculations only on the non-zero values in the matrices. The resulting chip design is therefore tailored to maximize efficiency for the robot's computing needs. And that customization paid off in testing.
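As an illustration of the underlying idea only -- not the authors' toolchain -- the following Python sketch shows the software analogue of that specialization: a sparsity pattern is extracted once from a hypothetical robot-specific matrix, and subsequent multiplications touch only the non-zero entries.

```python
# Minimal sketch (not the authors' toolchain): exploit a fixed sparsity pattern,
# derived from a robot's morphology, so repeated computations skip entries
# that are structurally zero.
import numpy as np

def sparsity_pattern(matrix, tol=0.0):
    """Record the (row, col) positions that are structurally non-zero."""
    rows, cols = matrix.shape
    return [(i, j) for i in range(rows) for j in range(cols)
            if abs(matrix[i, j]) > tol]

def specialized_matvec(pattern, matrix, vector):
    """Multiply using only the recorded non-zero positions -- the software
    analogue of wiring a chip to compute just these terms."""
    result = np.zeros(matrix.shape[0])
    for i, j in pattern:
        result[i] += matrix[i, j] * vector[j]
    return result

# Hypothetical inertia-like matrix for a 3-joint limb: several entries are zero
# because some joint pairs cannot influence each other.
M = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.0],
              [0.0, 0.0, 0.4]])

pattern = sparsity_pattern(M)                          # derived once, offline
torques = specialized_matvec(pattern, M, np.array([0.1, -0.2, 0.05]))
print(torques)
```

A chip generated from the same pattern would hard-wire exactly these multiply-accumulate operations, which is why the customization pays off at runtime.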

Hardware architecture designed using this method for a particular application outperformed off-the-shelf CPU and GPU units. While Neuman's team didn't fabricate a specialized chip from scratch, they programmed a customizable field-programmable gate array (FPGA) chip according to their system's suggestions. Despite operating at a slower clock rate, that chip performed eight times faster than the CPU and 86 times faster than the GPU.

"I was thrilled with those results," says Neuman. "Even though we were hamstrung by the lower clock speed, we made up for it by just being more efficient."

Plancher sees widespread potential for robomorphic computing. "Ideally we can eventually fabricate a custom motion-planning chip for every robot, allowing them to quickly compute safe and efficient motions," he says. "I wouldn't be surprised if 20 years from now every robot had a handful of custom computer chips powering it, and this could be one of them." Neuman adds that robomorphic computing might allow robots to relieve humans of risk in a range of settings, such as caring for COVID-19 patients or manipulating heavy objects.

Neuman next plans to automate the entire system of robomorphic computing. Users will simply drag and drop their robot's parameters, and "out the other end comes the hardware description. I think that's the thing that'll push it over the edge and make it really useful."

Credit: 
Massachusetts Institute of Technology

Advances in modeling and sensors can help farmers and insurers manage risk

image: A trial of drought-tolerant beans in 2016 in Malawi, during one of the worst droughts to hit the nation in three decades.

Image: 
Neil Palmer/International Center for Tropical Agriculture

When drought caused devastating crop losses in Malawi in 2015-2016, farmers in the southeastern African nation did not initially fear the worst: the government had purchased insurance for such a calamity. But millions of farmers remained unpaid for months because the insurer's model failed to detect the extent of the losses, and a subsequent model audit moved slowly. Quicker payments would have greatly reduced the shockwaves that rippled across the landlocked country.

While the insurers fixed the issues behind that error, the incident remains a cautionary tale about the potential failures of agricultural index insurance, which seeks to help protect the livelihoods of millions of smallholder farmers across the globe. Recent advances in crop modeling and remote sensing - especially in the availability and use of high-resolution satellite imagery that can pinpoint individual fields - are one tool that can help insurers improve the quality of index insurance for farmers, report a team of economists and earth system scientists this week in Nature Reviews Earth & Environment.

"The enthusiasm for agricultural insurance needs to be matched with an equally well-founded concern for making sure that novel insurance products perform and help, not hurt, farmers exposed to severe risk," said Elinor Benami, the lead social scientist of the review.

The review was co-led by Benami, an assistant professor in Agricultural and Applied Economics from Virginia Tech, and Zhenong Jin, an assistant professor of Digital Agriculture at the University of Minnesota, and included Aniruddha Ghosh from the Alliance of Bioversity International and CIAT. The authors outline opportunities for enhancing the quality of index insurance programs to increase the value that index insurance programs offer to agricultural households and communities.

"Improvements in earth observation are enabling new approaches to assess agricultural losses, such as those resulting from adverse weather," said Zhenong.

Index insurance in agriculture triggers payments when certain environmental conditions - seasonal rainfall, for example - stray from thresholds for a typical harvest. Unlike policies that require costly and time-consuming field visits to assess claims, index insurance uses an indicator of losses to cover a group of farmers within a given geographical area. This approach offers the promise of inexpensive, quick coverage to many people who would otherwise be uninsured.
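To make the trigger mechanism concrete, here is a minimal Python sketch of a hypothetical rainfall-index contract; the trigger, exit, and sum-insured figures are invented for illustration and are not drawn from any program described in the review.

```python
# Minimal sketch, not any insurer's actual contract: a rainfall-index payout
# that scales up as seasonal rainfall falls below a trigger level, reaching
# full payout at an exit level.
def index_payout(rainfall_mm, trigger_mm=300.0, exit_mm=100.0, sum_insured=1000.0):
    """Return the payout for one insured unit in a zone."""
    if rainfall_mm >= trigger_mm:          # normal season: no payment
        return 0.0
    if rainfall_mm <= exit_mm:             # catastrophic season: full payment
        return sum_insured
    # linear scale between the exit and trigger levels
    shortfall = (trigger_mm - rainfall_mm) / (trigger_mm - exit_mm)
    return round(shortfall * sum_insured, 2)

# Example: a zone records 220 mm of seasonal rain against a 300 mm trigger.
print(index_payout(220.0))   # 400.0 -> every insured farmer in the zone receives this
```

Because the payout depends only on the measured index for the zone, no field-by-field claims assessment is needed; the trade-off, as the Malawi case illustrates, is that everything hinges on how well the index tracks real losses.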

Lack of other types of coverage is due, in part, to the cost involved in verifying small claims on the ground. As the Malawi case shows, verification is also an issue for index insurance, but its potential for scale, speed, and low cost makes it both viable for insurers and desirable for farmers. When well matched to local experiences, index insurance can have meaningful impacts on agricultural livelihoods. One study cited by the authors found that people insured under a Kenyan index insurance program reduced their "painful coping strategies" by 40-80% compared to uninsured households.

In non-technical terms, "painful coping strategies" for smallholder households include skipping meals, removing children from school, and selling off what few productive assets they have.

"Shocks that destroy incomes and assets have been shown to have irreversible consequences," said Michael Carter, a co-author of the review and an agricultural economist at the University of California, Davis. "Families never recover from the losses and become trapped in poverty. By restoring assets and income destroyed by shocks, insurance can halt this downward spiral before it starts. This can fundamentally alter the dynamics of poverty."

Insurance has been shown to push the poverty needle in the other direction. By protecting assets after bad seasons, insurance payments also build farmers' confidence to invest in their farms and progress toward better wellbeing, secure in the knowledge that they will not need to resort to painful coping strategies when bad times hit. Despite a few decades of experimentation with the idea of index insurance, however, serious quality issues have plagued implementation on the ground and put the otherwise promising concept of index insurance itself at risk.

"With the technology of remote sensing changing rapidly, we wrote this review to call attention to the quality problem and to highlight ways to harness those technological advances to solve that problem. Our immodest hope is that this article will make more, high-quality insurance products available to small-scale farmers across the globe," said Carter.

Better models, coverage

Governments and insurers in sub-Saharan Africa have enrolled millions of farmers in index insurance programs. Programs have met with varying degrees of success and generally focus on livestock, in part because weather-related losses on rangelands are relatively easier to quantify. The authors say enhanced satellite imaging can potentially increase coverage and include more cropland. But improving the effectiveness of insurance is a bigger goal.

"We're trying to encourage the insurance community to move towards not just how many people you have enrolled but how many people you protected well when they suffered," said Benami.

To that end, the researchers discuss a minimum quality standard in their review, which is akin to a medical doctor's oath to patients: A minimum quality standard is based on the premise of doing no harm to farmers. Poor insurance coverage can make farmers worse off than they otherwise would have been without insurance.

"The criteria that insurance regulators have told us that they want is good value for money - meaning that farmers get effective risk reduction and asset protection for the premia that are paid," said Benami. "As we understand it, insurers are looking for ways to reduce cost and encourage uptake while meeting regulatory requirements for their roll-out."

To improve the quality, reliability, affordability and accessibility of index insurance, the authors make five concrete recommendations in their study.

First, the full potential of higher-resolution spatial data and new data products on environmental conditions should be explored. Many possibilities exist to wring more value from satellite data for index insurance - such as pairing data from multiple sensors with each other or with crop models - and examining those possibilities is a promising opportunity to improve the match between observation and experience on the ground.

Second, several opportunities exist to help improve loss detection. For example, this can be done with better crop modeling and new data products enabled by remote sensing, such as soil-moisture indicators at 100-meter resolution. Additional data sources, such as drones and smartphones, can also be incorporated. Insurers should focus on metrics of farmer welfare as the key objective in insurance design.

Third, better on-the-ground data will help bolster the usefulness and quality of insurance programs. Ground-referenced data is essential to evaluate how well a given index relates to a farmer's reality, and strategically collected data on environmental conditions, crop types, and yields for the areas considered by insurance would help diagnose and improve insurance quality.

Fourth, insurance zones can be optimized to better reflect the geographic, microclimatic, and crop-management conditions that influence the productivity of specific landscapes. Within large administrative boundaries, considerable variation can occur due to mountains, rivers and different social customs.

Finally, contracts can be designed to accommodate a variety of needs and the inevitability of index failure. Farmers have different needs, and a rigid insurance contract window may not always reflect the times of year a farmer is most concerned about risk, given their production strategies and location. In addition, secondary mechanisms -- like audits -- can be put into place to minimize uncompensated losses that index errors would otherwise miss.

In implementing these recommendations, the Alliance's Ani Ghosh notes the importance of interdisciplinary, researcher-practitioner collaborations. For example, "the advances in economic, remote sensing, and crop modeling led by academic institutions complement CGIAR's experience in targeting, prioritizing, and scaling out interventions for smallholder farmers that can maximize the impact of index insurance programs," Ghosh said.

"Overall, evaluating and designing programs to successfully manage risk is a problem with both technical and social dimensions," the authors conclude. "Although index insurance instruments will not solve all agricultural risk-related problems, they offer a useful form of protection against severe, community-wide shocks when done well."

Credit: 
The Alliance of Bioversity International and the International Center for Tropical Agriculture

Bringing atoms to a standstill: NIST miniaturizes laser cooling

image: Illustration of a new optical system to miniaturize the laser cooling of atoms, a key step towards cooling atoms on a microchip. A beam of laser light is launched from a photonic integrated circuit (PIC), aided by an element called an extreme mode converter (EMC) that greatly expands the beam. The beam then strikes a carefully engineered, ultrathin film known as a metasurface (MS), which is studded with tiny pillars that further expand and shape the beam. The beam is diffracted from a grating chip to form multiple overlapping laser beams inside a vacuum chamber. The combination of laser beams and a magnetic field efficiently cools and traps a large collection of gaseous atoms in a magneto-optical trap (MOT).

Image: 
NIST

It's cool to be small. Scientists at the National Institute of Standards and Technology (NIST) have miniaturized the optical components required to cool atoms down to a few thousandths of a degree above absolute zero, the first step in employing them on microchips to drive a new generation of super-accurate atomic clocks, enable navigation without GPS, and simulate quantum systems.

Cooling atoms is equivalent to slowing them down, which makes them a lot easier to study. At room temperature, atoms whiz through the air at nearly the speed of sound, some 343 meters per second. The rapid, randomly moving atoms have only fleeting interactions with other particles, and their motion can make it difficult to measure transitions between atomic energy levels. When atoms slow to a crawl -- about 0.1 meters per second -- researchers can measure the particles' energy transitions and other quantum properties accurately enough to use as reference standards in a myriad of navigation and other devices.

For more than two decades, scientists have cooled atoms by bombarding them with laser light, a feat for which NIST physicist Bill Phillips shared the 1997 Nobel Prize in physics. Although laser light would ordinarily energize atoms, causing them to move faster, if the frequency and other properties of the light are chosen carefully, the opposite happens. Upon striking the atoms, the laser photons reduce the atoms' momentum until they are moving slowly enough to be trapped by a magnetic field.

But to prepare the laser light so that it has the properties to cool atoms typically requires an optical assembly as big as a dining-room table. That's a problem because it limits the use of these ultracold atoms outside the laboratory, where they could become a key element of highly accurate navigation sensors, magnetometers and quantum simulations.

Now NIST researcher William McGehee and his colleagues have devised a compact optical platform, only about 15 centimeters (5.9 inches) long, that cools and traps gaseous atoms in a 1-centimeter-wide region. Although other miniature cooling systems have been built, this is the first one that relies solely on flat, or planar, optics, which are easy to mass produce.

"This is important as it demonstrates a pathway for making real devices and not just small versions of laboratory experiments," said McGehee. The new optical system, while still about 10 times too big to fit on a microchip, is a key step toward employing ultracold atoms in a host of compact, chip-based navigation and quantum devices outside a laboratory setting. Researchers from the Joint Quantum Institute, a collaboration between NIST and the University of Maryland in College Park, along with scientists from the University of Maryland's Institute for Research in Electronics and Applied Physics, also contributed to the study.

The apparatus, described online in the New Journal of Physics, consists of three optical elements. First, light is launched from an optical integrated circuit using a device called an extreme mode converter. The converter enlarges the narrow laser beam, initially about 500 nanometers (nm) in diameter (about five thousandths the thickness of a human hair), to 280 times that width. The enlarged beam then strikes a carefully engineered, ultrathin film known as a "metasurface" that's studded with tiny pillars, about 600 nm in length and 100 nm wide.

The nanopillars act to further widen the laser beam by another factor of 100. The dramatic widening is necessary for the beam to efficiently interact with and cool a large collection of atoms. Moreover, by accomplishing that feat within a small region of space, the metasurface miniaturizes the cooling process.
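A quick back-of-envelope multiplication of the expansion factors quoted above (our arithmetic, not a figure from the paper) gives the final beam size:

\[
d \approx 500\ \mathrm{nm} \times 280 \times 100 = 1.4 \times 10^{7}\ \mathrm{nm} = 14\ \mathrm{mm},
\]

comparable to the roughly 1-centimeter-wide region in which the atoms are trapped.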

The metasurface reshapes the light in two other important ways, simultaneously altering the intensity and polarization (direction of vibration) of the light waves. Ordinarily, the intensity follows a bell-shaped curve, in which the light is brightest at the center of the beam, with a gradual falloff on either side. The NIST researchers designed the nanopillars so that the tiny structures modify the intensity, creating a beam that has a uniform brightness across its entire width. The uniform brightness allows more efficient use of the available light. Polarization of the light is also critical for laser cooling.

The expanding, reshaped beam then strikes a diffraction grating that splits the single beam into three pairs of equal and oppositely directed beams. Combined with an applied magnetic field, the four beams, pushing on the atoms in opposing directions, serve to trap the cooled atoms.

Each component of the optical system -- the converter, the metasurface and the grating -- had been developed at NIST but was in operation at separate laboratories on the two NIST campuses, in Gaithersburg, Maryland and Boulder, Colorado. McGehee and his team brought the disparate components together to build the new system.

"That's the fun part of this story," he said. "I knew all the NIST scientists who had independently worked on these different components, and I realized the elements could be put together to create a miniaturized laser cooling system."

Although the optical system will have to be 10 times smaller to laser-cool atoms on a chip, the experiment "is proof of principle that it can be done," McGehee added.

"Ultimately, making the light preparation smaller and less complicated will enable laser-cooling based technologies to exist outside of laboratories," he said.

Credit: 
National Institute of Standards and Technology (NIST)

New biochemical clues in cell receptors help explain how SARS-CoV-2 may hijack human cells

The SARS-CoV-2 virus may enter and replicate in human cells by exploiting newly identified sequences within cell receptors, according to work from two teams of scientists. The findings from both groups paint a more complete portrait of the various cellular processes that SARS-CoV-2 targets to not only enter cells, but to then multiply and spread. The results also hint that the sequences could potentially serve as targets for new therapies for patients with COVID-19, although validation in cells and animal models is needed.

Scientists know that SARS-CoV-2 binds the ACE2 receptor on the surface of human cells, after which it enters the cell through a process known as endocytosis. Research has suggested that the virus may hijack or interfere with other processes, such as cellular housekeeping (autophagy), by targeting other receptors called integrins. However, not much is known about exactly how the virus takes advantage of integrins at the biochemical level.

Analyzing the Eukaryotic Linear Motif database, Bálint Mészáros and colleagues discovered that ACE2 and various integrins contain several short linear motifs (SLiMs) - small amino acid sequences - that they predicted play a role in endocytosis and autophagy. The scientists then compiled a list of currently used experimental treatments and approved drugs that can target the interactions between SARS-CoV-2 and its target SLiMs. Separately, Johanna Kliche and colleagues performed molecular tests to see whether these SLiMs interacted with proteins that contribute to autophagy and endocytosis. The team found that two SLiMs in ACE2 bound to the endocytosis-related proteins SNX27 and SHANK, and one SLiM in the integrin β3 bound to two proteins involved in autophagy. In addition to providing a resource for repurposing drugs for SARS-CoV-2, Mészáros et al. say their prediction methods could help identify similar under-the-radar SLiMs that assist with the replication of other disease-causing viruses.
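For readers unfamiliar with SLiMs, the toy Python sketch below shows the general kind of motif scan involved: it searches a made-up peptide for the NPxY pattern, a classic endocytosis-related motif class. The actual ELM database patterns and the receptor sequences analyzed in these studies are not reproduced here.

```python
# Illustrative only -- a toy scan for one well-known endocytosis-related SLiM
# class, the NPxY motif, in a made-up cytoplasmic tail fragment.
import re

NPXY = re.compile(r"NP.Y")          # 'x' stands for any amino acid

def find_slims(sequence, pattern=NPXY):
    """Return (start_position, matched_motif) pairs for every occurrence."""
    return [(m.start() + 1, m.group()) for m in pattern.finditer(sequence)]

tail = "MKRSLNPAYQQTGD"             # hypothetical peptide, not a real receptor
print(find_slims(tail))             # [(6, 'NPAY')]
```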

Credit: 
American Association for the Advancement of Science (AAAS)

Creating a safe CAR T-Cell therapy to fight solid tumors in children

Chimeric Antigen Receptor T-cell therapy--CAR T--has revolutionized leukemia treatment. Unfortunately, the therapy has not been effective for treating solid tumors, including childhood cancers such as neuroblastoma. Preclinical studies using certain CAR T-cells against neuroblastoma revealed toxic effects. Now, a group of scientists at Children's Hospital Los Angeles has developed a modified version of CAR T that shows promise in targeting neuroblastoma, spares healthy brain tissue, and more effectively kills cancer cells. Their study was published today in Nature Communications. While this work is in the preclinical phase, it reveals potential for lifesaving treatment in children and adults with solid tumors.

Shahab Asgharzadeh, MD, a physician scientist at the Cancer and Blood Disease Institute of CHLA, is working to improve the lifesaving CAR T-cell therapy, in which scientists take a patient's own immune system T-cells and engineer them to recognize and destroy cancer cells.

"The CAR T therapy works in leukemia," he says, "by targeting a unique protein (or antigen) on the surface of leukemia cells. When the treatment is given, leukemia cells are killed. CAR T turns the patient's immune system into a powerful and targeted cancer-killer in patients with leukemia. This antigen is also on normal B cells in the blood, but this side effect can be treated medically."

On the other hand, solid tumors like breast cancer or neuroblastoma present a dilemma: many of the antigens on their surface are also found in healthy tissues, where the resulting toxicity cannot be managed as safely as it can in leukemia. In patients with solid tumors, treatment with CAR T cells kills both cancer cells and healthy tissues indiscriminately. Because of this, and because of the suppressive immune environment within solid tumors, preclinical studies that targeted these cancers resulted in little efficacy or unacceptable levels of toxicity.

"CAR T therapy is incredibly powerful, but for solid tumors it has significant barriers," says Babak Moghimi, MD, the first author of the publication. "We needed a way to boost the CAR T-cells to make them fight harder and smarter against the cancer. But we also want to save brain cells and other healthy tissue." And this is exactly what the researchers did.

The team used a new CAR T technology called synthetic Notch (or synNotch). SynNotch CAR T-cells have a unique property--called gating--that allows them to target specific cancers very precisely. The gating function works similarly to logic gates, a tool often used by computer programmers: If condition A is met, then do action B.

"The way it works is really unique," says Dr. Moghimi. He explains that the special synNotch protein on the surface of the T-cell is designed to recognize the antigen GD2. When it does, the synNotch protein instructs the T-cell to activate its CAR T properties, enabling its ability to recognize a second antigen, B7H3. The T-cell has to follow these specific instructions, which means it can only kill cells with both antigens.

This gating property is key to minimizing toxicity; healthy cells will sometimes have low levels of one of the antigens, but not both. Solid tumors like neuroblastoma have both GD2 and B7H3 antigens, which Dr. Asgharzadeh's team has engineered the synNotch cells to recognize.
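The logic-gate analogy can be made literal in a few lines of code. The toy Python sketch below is a conceptual illustration of the AND-gating described above, not a biological model:

```python
# Toy AND-gate analogy for synNotch gating (conceptual illustration only):
# the CAR is expressed only after the first antigen (GD2) is sensed, and
# killing additionally requires the second antigen (B7H3).
from dataclasses import dataclass

@dataclass
class Cell:
    has_GD2: bool
    has_B7H3: bool

def synnotch_t_cell_response(target: Cell) -> str:
    car_expressed = target.has_GD2          # synNotch senses GD2 -> switch on the CAR
    if car_expressed and target.has_B7H3:   # the CAR then engages B7H3
        return "kill"
    return "ignore"

tumor   = Cell(has_GD2=True, has_B7H3=True)    # neuroblastoma-like: both antigens
healthy = Cell(has_GD2=True, has_B7H3=False)   # healthy tissue with only one antigen
print(synnotch_t_cell_response(tumor), synnotch_t_cell_response(healthy))  # kill ignore
```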

The team was also able to surmount another challenge. "With normal CAR T therapy," says Dr. Asgharzadeh, "the CAR T-cells burn out and are no longer active after some time. But we discovered the synNotch CAR T-cells are more metabolically stable because they are not activated constantly." This means they use less energy, which allows them to continue to fight the cancer for a longer period of time.

Credit: 
Children's Hospital Los Angeles

Patients in cancer remission at high risk for severe COVID-19 illness

PHILADELPHIA--Patients with inactive cancer who are not currently undergoing treatment also face a significantly higher risk of severe illness from COVID-19, a new study from Penn Medicine published online today in JNCI Cancer Spectrum shows. Past reports have established an increased risk of severe disease and death for sick or hospitalized cancer patients with COVID-19 compared to patients without cancer, but less is known about cancer patients in the general population.

The findings underscore the importance of COVID-19 mitigation, like social distancing and mask wearing, and vaccinations for all patients, not just those recently diagnosed or with active disease.

"Patients who have cancer need to be careful not to become exposed during this time," said senior author Kara N. Maxwell, MD, PhD, an assistant professor of Hematology-Oncology and Genetics in the Perelman School of Medicine at the University of Pennsylvania and a member of the Abramson Cancer Center and the Basser Center for BRCA. "That message has been out there, but these latest findings show us it's not only for patients hospitalized or on treatment for their cancer. All oncology patients need to take significant precautions during the pandemic to protect themselves."

The researchers analyzed the records of more than 4,800 patients who had been tested for COVID-19 from the Penn Medicine BioBank, a centralized bank of samples and linked data from the health system's electronic health records, to investigate the association between cancer status and COVID-19 outcomes. Of the 328 positive cases through June 2020, 67 (20.7 percent) had a cancer diagnosis in their medical history (80.6 percent with solid tumor malignancy and 73.1 percent with inactive cancer).

Patients with COVID-19 -- including both those with active cancer (18) and inactive cancer (49) -- had higher rates of hospitalization (55.2 percent vs. 29 percent), intensive care unit admission (25.7 percent vs. 11.7 percent), and 30-day mortality (13.4 percent vs. 1.6 percent) compared to non-cancer patients. While worse outcomes were more strongly associated with active cancer, patients in remission also faced an overall increased risk of more severe disease compared to COVID-19 patients without cancer.

Notably, the proportion of Black patients -- who make up 20 percent of the patients in the biobank -- was significantly higher in both cancer and non-cancer COVID-19-positive patients (65.7 percent and 64.1 percent, respectively) compared to all patients tested for SARS-CoV-2.

The findings parallel prior reports showing the disproportionate impact of COVID-19 on minority communities.

"We really need to be thinking about race as a significant factor in trying to get people vaccinated as soon as we can," Maxwell said.

Studies show that cancer patients have a higher risk of COVID-19 complications, due in part to factors such as older age, higher smoking rates, comorbidities, frequent health care exposures, and the effects of cancer therapies. These latest results also suggest the cancer itself and its impact on the body may play a role in exacerbating COVID-19 infections.

"Our finding that cancer patients with COVID-19 were more likely than non-cancer patients to experience hospitalization and death even after adjusting for patient-level factors supports the hypothesis that cancer is an independent risk factor for poor COVID-19 outcomes," they wrote.

In a separate, related study published in the preprint database bioRxiv and not yet peer-reviewed, Penn Medicine researchers report that cancer patients receiving in-person care at a facility with aggressive mitigation efforts have an extremely low likelihood of COVID-19 infection. Of 124 patients in the study receiving treatment at Penn Medicine, none tested positive for the virus after their clinical visits (an average of 13 per patient).

The results suggest those efforts, when combined with social distancing outside the healthcare setting, may help protect vulnerable cancer patients from COVID-19 exposure and infection, even when ongoing immunomodulatory cancer treatments and frequent healthcare exposure are necessary, the authors said.

Credit: 
University of Pennsylvania School of Medicine

Social influence matters when it comes to following pandemic guidelines

New research published in the British Journal of Psychology indicates that social influence has a large impact on people's adherence to COVID-19 guidelines.

In the analysis of information from 6,674 people in 114 countries, investigators found that people distanced most when they thought their close social circle did. Such social influence mattered more than whether people thought that distancing was the right thing to do.

The findings suggest that to achieve behavioral change during crises, policymakers must emphasize shared values and harness the social influence of close friends and family.

"We saw that people didn't simply follow the rules if they felt vulnerable or were personally convinced. Instead, this uncertain and threatening environment highlighted the crucial role of social influence," said lead author Bahar Tunçgenç, PhD, of the University of Nottingham, in the UK. "Most diligent followers of the guidelines were those whose friends and family also followed the rules. We also saw that people who were particularly bonded to their country were more likely to stick to lockdown rules--the country was like family in this way, someone you were willing to stick your neck out for."

Tunçgenç noted that efforts to improve adherence to COVID-19 guidelines might include the use of social apps, similar to social-based exercise apps, that tell people whether their close friends are enrolled for vaccination. Using social media to demonstrate to friends that you are following the rules, rather than expressing disapproval of people who aren't following them, could also be an impactful approach. In addition, public messages by trusted figures could emphasize collectivistic values, such as working for the benefit of loved ones and the community.

Credit: 
Wiley

Cancer can be precisely diagnosed using a urine test with artificial intelligence

image: The set of sensing signals collected for each patient was then analyzed using machine learning (ML) to screen the patient for prostate cancer (PCa). Seventy-six urine samples were measured three times, generating 912 biomarker signals, or 228 sets of sensing signals. We used random forest (RF) and neural network (NN) algorithms to analyze the multimarker signals. Both algorithms provided increased accuracy, and the area under the receiver operating characteristic curve (AUROC) increased as the number of biomarkers was increased.

Image: 
Korea Institute of Science and Technology(KIST)

Prostate cancer is one of the most common cancers among men. Patients are diagnosed with prostate cancer primarily based on *PSA, a cancer factor in blood. However, because the diagnostic accuracy is as low as 30%, a considerable number of patients undergo additional invasive biopsies and thus suffer from side effects such as bleeding and pain.

*Prostate-Specific Antigen (PSA): a prostate-specific cancer factor used as an index for prostate cancer screening.

The Korea Institute of Science and Technology (KIST) announced that the collaborative research team led by Dr. Kwan Hyi Lee from the Biomaterials Research Center and Professor In Gab Jeong from Asan Medical Center developed a technique for diagnosing prostate cancer from urine within only twenty minutes with almost 100% accuracy. The research team developed this technique by introducing a smart AI analysis method to an electrical-signal-based ultrasensitive biosensor.

As a noninvasive method, a urine-based diagnostic test is convenient for patients and avoids biopsy, thereby diagnosing cancer without side effects. However, because the concentration of cancer **factors in urine is low, urine-based biosensors have so far been used for classifying risk groups rather than for precise diagnosis.

**Cancer Factor: a cancer-related biological indicator that allows normal biological processes, disease progression, and responses to treatment to be measured and evaluated objectively.

Dr. Lee's team at KIST has been working to develop a technique for diagnosing disease from urine using the electrical-signal-based ultrasensitive biosensor. Approaches that rely on a single cancer factor have been unable to push diagnostic accuracy above 90%. To overcome this limitation, the team simultaneously measured several kinds of cancer factors instead of only one, greatly enhancing the diagnostic accuracy.

The team developed an ultrasensitive semiconductor sensor system capable of simultaneously measuring trace amounts of four selected cancer factors in urine for diagnosing prostate cancer. They trained an AI algorithm using the correlations among the four cancer factors obtained from the developed sensor. The trained algorithm then identified patients with prostate cancer by analyzing the complex patterns of the detected signals. Applied to 76 urine samples, the AI analysis diagnosed prostate cancer with almost 100 percent accuracy.
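The classification step they describe resembles standard supervised learning on a small multimarker dataset. The sketch below uses synthetic numbers, not the study's sensor data, to show how a random forest might be trained and evaluated on four biomarker signals per sample:

```python
# Minimal sketch with synthetic data: random-forest classification of samples
# described by four biomarker signals, evaluated by cross-validated AUROC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 76                                                    # same sample count as the study
X_healthy = rng.normal(loc=0.0, scale=1.0, size=(n // 2, 4))   # 4 signals per sample
X_cancer  = rng.normal(loc=1.5, scale=1.0, size=(n - n // 2, 4))
X = np.vstack([X_healthy, X_cancer])
y = np.array([0] * (n // 2) + [1] * (n - n // 2))         # 0 = benign, 1 = cancer

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUROC: %.2f" % scores.mean())
```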

"For patients who need surgery and/or treatments, cancer will be diagnosed with high accuracy by utilizing urine to minimize unnecessary biopsy and treatments, which can dramatically reduce medical costs and medical staff's fatigue," Professor Jeong at Asan Medical Center said. "This research developed a smart biosensor that can rapidly diagnose prostate cancer with almost 100 percent accuracy only through a urine test, and it can be further utilized in the precise diagnoses of other cancers using a urine test," Dr. Lee at the KIST said.

Credit: 
National Research Council of Science & Technology

Internet and freedom of speech, when metaphors give too much power

image: Since 1997, when the US Supreme Court metaphorically called the Internet the free market of ideas, attempts at regulation have been blocked by the First Amendment. But with power concentrated in a few platforms, that metaphor is now misleading, says a study by Bocconi's Oreste Pollicino

Image: 
Paolo Tonato

Since 1997 (Reno vs. American Civil Liberties Union), the Supreme Court has used the metaphor of the free market of ideas to define the Internet, thus addressing the regulation of the net as a matter of freedom of speech. In law, metaphors have a constitutive value and, once established, affect the debate and the decisions of the Courts for a long time. In the paper 'Judicial Frames and Fundamental Right in Cyberspace', published in the American Journal of Comparative Law, Oreste Pollicino (Bocconi University) and Alessandro Morelli (Università Magna Graecia, Catanzaro) apply to judicial reasoning reflections on metaphors and go so far as to criticize, on the one hand, the US Supreme Court's orientations on (non-)regulation of the Internet and, on the other, to invoke changes in Directive 2000/31/EC on e-commerce. Internet regulation should be framed not as a matter of freedom of speech, but as a matter of freedom to conduct a business, they argue.

The metaphor was first used in 1919, in a dissenting opinion by Justice Oliver Wendell Holmes on a Supreme Court decision concerning the expression of anti-war ideas. It implies that, when there is competition in a free market of ideas, even the worst ideas (and therefore also false ones) should be admitted, in the certainty that the best ones (and ultimately the truth) will still prevail.

«Since 1997», explains Professor Pollicino, «every decision on the possible regulation of the Internet in the US has referred to the First Amendment, which guarantees freedom of speech and is in fact superordinate to any other freedom». The jurisprudential line followed by the Supreme Court for more than 30 years, then, derives from the use of a metaphor.

«Since then, however», continues Prof. Pollicino, «the context has completely changed and the metaphor is now misleading». In fact, the large Internet platforms have assumed such power that they counterbalance, in many fields, the power of governments, without being constrained by any geographical border.

The large platforms can no longer be considered actors like any other, competing on equal terms in a free market of ideas. The American legal tradition, which enforces freedom of speech only vertically (when the freedom of a private individual is limited by a public power), should instead enforce it also horizontally, when one of the private actors holds overwhelming power. Still, in 2017, in the Packingham v. North Carolina decision, social networks were defined as «the new free market of ideas», leaving little room for hope that such a market can be regulated in some way (even if only by imposing serious obligations to remove fraudulent or dangerous material, as happens, for example, in Germany).

In Europe, the situation is more fluid, because the judicial tradition foresees a certain balance between different rights, but the pressure to adopt a vision close to the American one is still strong. «Directive 31 of 2000, at the dawn of social networks, equated them to the service or hosting providers of those years, thus substantially denying any responsibility for posted content. Some recent EU legislation (or proposals) on anti-terrorism protection, audiovisual discipline and copyright are eroding the directive, but do not directly call it into question. Perhaps the time has come to do so», argues Pollicino.

The right way forward, according to the authors, could be to frame Internet regulation not as a matter of freedom of speech, but as a matter of freedom to conduct a business. «What platforms really want to avoid is changing their business model: content monitoring is expensive and could discourage some from using the platforms. But the freedom to conduct a business, however protected, is not superordinate to other rights in any system and should therefore be counterbalanced by the rights to privacy, security, reputation and protection of minors». The contest for ideas is therefore open, and the rhetoric of the absolutization of fundamental rights is often counterproductive. What metaphor would make it possible to address the issue in this way? «Certainly, importing into Europe the US metaphor of the free marketplace of ideas, decontextualizing it from the constitutional paradigms that frame it (Freedom in the USA and Dignity in Europe), can be very risky», concludes Prof. Pollicino.

Credit: 
Bocconi University

Stealing the spotlight in the field and kitchen

image: Beans in the UC Davis breeding program, whose varieties have been selected to combine excellent culinary quality with improved yields and resistance to bean common mosaic virus.

Image: 
Travis Parker

January 20, 2021 - Plant breeders are constantly working to develop new bean varieties to meet the needs and desires of the food industry. But not everyone wants the same thing.

Many consumers desire heirloom-type beans, which have great culinary quality and are visually appealing. On the other hand, farmers desire beans with better disease resistance and higher yield potential.

The bean varieties that farmers want to grow are usually different than the varieties consumers want to purchase. Until now.

Travis Parker, a plant scientist at University of California, Davis, has worked with a team of researchers to release five new varieties of dry beans that combine the most desirable traits.

The new varieties, UC Sunrise, UC Southwest Red, UC Tiger's Eye, UC Rio Zape, and UC Southwest Gold, were recently highlighted in the Journal of Plant Registrations, a publication of the Crop Science Society of America.

"Our new beans combine the best of both worlds for farmers and consumers," says Parker. "They combine the better qualities of heirloom-type beans with the better qualities of commercial types."

Heirloom-type beans often represent older bean types that are known for culinary qualities and seed patterns. These are highly desired by consumers. Heirloom types often fetch a higher market value than other beans.

Commercial dry beans often have higher yields, shorter maturity times, and improved disease resistance. While they possess qualities desirable to producers, they don't fetch as high of a market price compared to their heirloom counterparts.

"Our goal was to improve field characteristics of the heirloom beans without losing culinary characteristics," said Parker. "We have an interest in higher-value varieties and want them to grow well."

Farmers growing the heirloom dry beans often sell the beans to health-conscious consumers or high-end restaurants. This sale often leads to a higher price point. However, these beans are prone to disease and don't perform well in the field.

"We know that existing heirloom beans don't usually do well in terms of yield," said Parker. "Breeding beans for high yields is a major improvement for farmers. The new varieties are high-yielding, heat-tolerant, and are also resistant to bean common mosaic virus."

Incorporating disease resistance was essential when developing the new bean varieties. Bean common mosaic virus is a well-known problem that is hard to control in the field.

"The only real effective means to handle the virus is through genetic resistance," explains Parker.

The new varieties, such as UC Sunrise, satisfy the need for farmers to have a bean that is disease resistant while also yielding 50% more than heirloom types. In addition, the beans do not take as long to grow between planting and harvest.

Commercial and heirloom beans come from the same species, but they are in different market classes. The heirloom varieties are bred with intimate knowledge of what tastes good and what works well in the kitchen.

"In recent decades, there has been less attention paid to consumer desires during the bean breeding process," says Parker. "There are more layers between the breeder and the consumer. We are trying to make sure to keep consumers in mind while incorporating qualities that are beneficial to the farmer."

With consumer desires in mind, the research team used cross-pollination to breed plants with key characteristics they selected. As Parker and the team continued the breeding process, they performed taste tests to ensure the beans met the level of culinary quality expected of an heirloom-type bean, in terms of flavor and visual appeal.

Credit: 
American Society of Agronomy

Describing the worldviews of the new 'tech elite'

image: Fig 10. The 50 most frequently used words in foundations' mission statements by tech cohort.

Image: 
Brockmann et al, PLOS ONE 2021 (CC BY 4.0 https://creativecommons.org/licenses/by/4.0/)

The new tech elite share distinct views setting them apart from other segments of the world's elite, according to a study published January 20, 2021 in the open-access journal PLOS ONE by Hilke Brockmann from Jacobs University Bremen, Germany, and colleagues.

The global economic landscape of the last half-century has been marked by a shift to a high-tech economy dominated by the "Big Nine" (Alibaba, Amazon, Apple, Baidu, Facebook, Google, IBM, Microsoft, and Tencent), computer hardware and software manufacturers, and most recently, app companies. In this study, Brockmann and colleagues investigate the worldviews of the 100 richest people in the tech world (as defined by Forbes).

Though the authors initially approached all 100 of their subjects for a face-to-face interview, only one person agreed. So Brockmann and colleagues turned to the internet to learn more about their subjects in their own words, scraping and analyzing 49,790 tweets from 30 verified Twitter account holders within this tech elite subject group (and the same number of Tweets from a random sample of the general US Twitter-using population for comparison purposes). They also analyzed 60 mission statements from tech elite-run philanthropic websites, plus statements from 17 tech elites and other super-rich elites not associated with the tech world (for comparison purposes) who signed the Giving Pledge, a philanthropic initiative of Warren Buffett and Bill and Melinda Gates.

The Twitter text analyses revealed that the Twitter-using tech elite subjects tweeted with a greater emphasis on disruption, positivity, and temporality compared with the average user. Their most frequently used words were "new" and "great", compared to the chattier "just" and "like" that were most common among the general users sampled, and they tended to refer much more frequently to their peers and other tech firms. While the authors found no statistically significant differences in whether or not the tech elite Twitter users saw a positive relationship between power and money or power and democracy as compared to the general Twitter sample, they did note that tech elites denied a connection between democracy and money, a belief not shared by the ordinary Twitter users sampled.

The philanthropic statements from tech elites who signed the Giving Pledge tended to be briefer on average than those from other wealthy signatories (1,796 words vs. 2,422 words). The tech elite philanthropists also tended to use more similar, meritocratic language as a group, with "education", "work", and "social" appearing frequently in their statements, along with an emphasis on personal agency, progress and impact. This analysis indicates the tech elite hold a strong positive interest in "making the world a better place", but the authors note this belief is frequently espoused by other very rich people as well.
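For readers curious about the mechanics, a word-frequency comparison of this kind can be sketched in a few lines of Python; the tweets below are invented stand-ins, not data from the study:

```python
# Minimal sketch of a word-frequency comparison between two tweet samples.
from collections import Counter
import re

STOPWORDS = {"the", "a", "to", "of", "and", "is", "in", "for"}

def top_words(tweets, k=5):
    """Count the most frequent non-stopword tokens across a list of tweets."""
    words = re.findall(r"[a-z']+", " ".join(tweets).lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common(k)

tech_elite_sample = ["Great new product launch today", "New milestone, great team"]
general_sample    = ["just had coffee, like it a lot", "just like that, weekend!"]
print(top_words(tech_elite_sample))   # 'new' and 'great' dominate
print(top_words(general_sample))      # 'just' and 'like' dominate
```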

There are several limitations to this research. The authors were not able to trace everyone in their initial 100-person sample for multiple reasons (for instance, Twitter is blocked in China, and many older tech elite do not use Twitter); it's not possible to rule out that Twitter accounts are managed by professional PR experts (which might presumably affect the language used); and it's also not clear whether the tech elite denial of a relationship between democracy and money is strategic or an actually-held belief. However, Brockmann and colleagues note that this study may serve as a starting point for future inquiries into this new class of elite, distinct from previous elite groups and continuing to rise in wealth and power as our world's reliance on technology grows.

The authors add: "The tech elite may be thought of as a 'class for itself' in Marx's sense--a social group that shares particular views of the world, which in this case means meritocratic, missionary, and inconsistent democratic ideology."

Credit: 
PLOS

Over 34,000 street cattle roam the Indian city of Raipur (1 for every 54 human residents)

There may be over 34,000 street cattle in the Indian city of Raipur (one for every 54 human residents), with implications for road accidents and human-cattle conflict.

Credit: 
PLOS

Female Bengalese finches have a lifelong preference for their father's song over other birds'

image: Fig 3. Results of the preference test (vocal behavior). Population means of the proportion of vocal responses (calling for both sexes, singing for males only) directed to the father's song versus an unfamiliar song, plotted against age of testing, together with the number of trials in which birds vocally responded to either song at 60 and 120 days post-hatch. Error bars show 95% confidence intervals.

Image: 
Fujii et al, PLOS ONE 2021 (CC-BY 4.0, https://creativecommons.org/licenses/by/4.0/)

Daddies' girls? Female Bengalese finches prefer their father's song to that of other birds throughout their lives - while sons lose this preference as they grow up.

Credit: 
PLOS

An anode-free zinc battery that could someday store renewable energy

Renewable energy sources, such as wind and solar power, could help decrease the world's reliance on fossil fuels. But first, power companies need a safe, cost-effective way to store the energy for later use. Massive lithium-ion batteries can do the job, but they suffer from safety issues and limited lithium availability. Now, researchers reporting in ACS' Nano Letters have made a prototype of an anode-free, zinc-based battery that uses low-cost, naturally abundant materials.

Aqueous zinc-based batteries have been previously explored for grid-scale energy storage because of their safety and high energy density. In addition, the materials used to make them are naturally abundant. However, the rechargeable zinc batteries developed so far have required thick zinc metal anodes, which contain a large excess of zinc that increases cost. Also, the anodes are prone to forming dendrites -- crystalline projections of zinc metal that deposit on the anode during charging -- that can short-circuit the battery. Yunpei Zhu, Yi Cui and Husam Alshareef wondered whether a zinc anode was truly needed. Drawing inspiration from previous explorations of "anode-free" lithium and sodium-metal batteries, the researchers decided to make a battery in which a zinc-rich cathode is the sole source for zinc plating onto a copper current collector.

In their battery, the researchers used a manganese dioxide cathode that they pre-intercalated with zinc ions, an aqueous zinc trifluoromethanesulfonate electrolyte solution and a copper foil current collector. During charging, zinc metal gets plated onto the copper foil, and during discharging the metal is stripped off, releasing electrons that power the battery. To prevent dendrites from forming, the researchers coated the copper current collector with a layer of carbon nanodiscs. This layer promoted uniform zinc plating, thereby preventing dendrites, and increased the efficiency of zinc plating and stripping. The battery showed high efficiency, energy density and stability, retaining 62.8% of its storage capacity after 80 charging and discharging cycles. The anode-free battery design opens new directions for using aqueous zinc-based batteries in energy storage systems, the researchers say.
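For context, a rough back-of-envelope estimate (ours, assuming the capacity fade is spread evenly across the cycles) converts that figure into an average per-cycle retention:

\[
r = 0.628^{1/80} \approx 0.994,
\]

i.e. about 99.4% of the capacity is carried over from each cycle to the next.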

Credit: 
American Chemical Society