
Leap in lidar could improve safety, security of new technology

image: A silicon chip with a tiled array of serpentine optical phased array (SOPA) tiles. The 32 tiles in the 8-by-4 array have slightly differing grating designs, showing here two matching pairs of tiles "lighting up" at this viewing angle. Drawn superimposed are beams from two matching tiles and the far field beam interference pattern demonstrating tiled beam forming.

Image: 
Bohan Zhang and Nathan Dostart

Whether it's on top of a self-driving car or embedded inside the latest gadget, Light Detection and Ranging (lidar) systems will likely play an important role in our technological future, enabling vehicles to 'see' in real time, phones to map three-dimensional images and video games to deliver richer augmented reality.

The challenge: these 3-D imaging systems can be bulky, expensive and hard to shrink down to the size needed for these up-and-coming applications. But University of Colorado Boulder researchers are one big step closer to a solution.

In a new paper, published in Optica, they describe a new silicon chip--with no moving parts or electronics--that improves the resolution and scanning speed needed for a lidar system.

"We're looking to ideally replace big, bulky, heavy lidar systems with just this flat, little chip," said Nathan Dostart, lead author on the study, who recently completed his doctorate in the Department of Electrical and Computer Engineering.

Current commercial lidar systems use large, rotating mirrors to steer the laser beam and thereby create a 3-D image. For the past three years, Dostart and his colleagues have been working on a new way of steering laser beams called wavelength steering--where each wavelength, or "color," of the laser is pointed to a unique angle.

They've not only developed a way to do a version of this along two dimensions simultaneously, instead of only one, they've done it with color, using a "rainbow" pattern to take 3-D images. Since the beams are easily controlled by simply changing colors, multiple phased arrays can be controlled simultaneously to create a bigger aperture and a higher resolution image.
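The principle behind wavelength steering can be sketched with the first-order diffraction grating equation, where each color exits at its own angle. The grating pitch and function names below are illustrative assumptions, not the paper's actual SOPA design:

```python
import math

# Illustrative sketch of wavelength steering: for first-order diffraction
# at normal incidence, sin(theta) = wavelength / pitch, so sweeping the
# laser's color sweeps the output beam angle with no moving parts.
def steering_angle_deg(wavelength_nm: float, pitch_nm: float = 2000.0) -> float:
    """Output angle (degrees) for a given wavelength and grating pitch."""
    return math.degrees(math.asin(wavelength_nm / pitch_nm))

# Telecom-band wavelengths map to distinct angles.
for wl in (1500, 1550, 1600):
    print(f"{wl} nm -> {steering_angle_deg(wl):.1f} deg")
```

Tuning across a 100 nm band in this toy model sweeps the beam through several degrees, which is the knob the chip exploits instead of a rotating mirror.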

"We've figured out how to put this two-dimensional rainbow into a little teeny chip," said Kelvin Wagner, co-author of the new study and professor of electrical and computer engineering.

The end of electrical communication

Autonomous vehicles are currently a $50 billion industry, projected to be worth more than $500 billion by 2026. While many cars on the road today already have some elements of autonomous assistance, such as enhanced cruise control and automatic lane-centering, the real race is to create a car that drives itself with no input or responsibility from a human driver. In the past 15 years or so, innovators have realized that in order to do this, cars will need more than just cameras and radar--they will need lidar.

Lidar is a remote sensing method that uses laser beams, pulses of invisible light, to measure distances. These beams of light bounce off everything in their path, and a sensor collects these reflections to create a precise, three-dimensional picture of the surrounding environment in real time.

Lidar is like echolocation with light: it can tell you how far away each pixel in an image is. It's been used for at least 50 years in satellites and airplanes, to conduct atmospheric sensing and measure the depth of bodies of water and heights of terrain.
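The echolocation analogy reduces to a one-line time-of-flight calculation: a pulse's round-trip time, multiplied by the speed of light and halved, gives the range to each point. A minimal sketch (the function name is ours, for illustration):

```python
# Toy time-of-flight lidar ranging: distance = c * round_trip_time / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance in meters to the surface that reflected the pulse."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~667 nanoseconds came from a target ~100 m away.
print(round(range_from_echo(667e-9)))  # 100
```

Repeating this measurement for every steered beam direction is what builds up the full 3-D point cloud.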

While great strides have been made in shrinking lidar systems, they remain by far the most expensive part of self-driving cars--as much as $70,000 each.

In order to work broadly in the consumer market one day, lidar must become even cheaper, smaller and less complex. Some companies are trying to accomplish this feat using silicon photonics: an emerging area of electrical engineering that uses silicon chips to process light.

The research team's new finding is an important advancement in silicon chip technology for use in lidar systems.

"Electrical communication is at its absolute limit. Optics has to come into play and that's why all these big players are committed to making silicon photonics technology industrially viable," said Miloš Popović, co-author and associate professor of engineering at Boston University.

The simpler and smaller that these silicon chips can be made--while retaining high resolution and accuracy in their imaging--the more technologies they can be applied to, including self-driving cars and smartphones.

Rumor has it that the upcoming iPhone 12 will incorporate a lidar camera, like that currently in the iPad Pro. This technology could not only improve its facial recognition security, but one day assist in creating climbing route maps, measuring distances and even identifying animal tracks or plants.

"We're proposing a scalable approach to lidar using chip technology. And this is the first step, the first building block of that approach," said Dostart, who will continue his work at NASA Langley Research Center in Virginia. "There's still a long way to go."

Credit: 
University of Colorado at Boulder

Encouraging results from functional MRI in an unresponsive patient with COVID-19

BOSTON - Many patients with severe coronavirus disease 2019 (COVID-19) remain unresponsive after surviving critical illness. Investigators led by a team at Massachusetts General Hospital (MGH) now describe a patient with severe COVID-19 who, despite prolonged unresponsiveness and structural brain abnormalities, demonstrated functionally intact brain connections and, weeks later, recovered the ability to follow commands. The case, which is published in the Annals of Neurology, suggests that unresponsive patients with COVID-19 may have a better chance of recovery than expected.

In addition to performing standard brain imaging tests, the team took images of the patient's brain with a technique called resting-state functional magnetic resonance imaging (rs-fMRI), which evaluates the connectivity of brain networks by measuring spontaneous oscillations of brain activity. The patient was a 47-year-old man who developed progressive respiratory failure, and despite intensive treatment, he fluctuated between coma and a minimally conscious state for several weeks.
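As a rough illustration of what rs-fMRI quantifies (this toy code is ours, not the MGH analysis), functional connectivity between two brain regions is commonly measured as the correlation of their spontaneous activity time series--synchronized slow oscillations yield a value near 1:

```python
import math

# Toy functional-connectivity measure: Pearson correlation between two
# regions' spontaneous activity time series.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two regions oscillating nearly in phase -> high "connectivity".
t = [i * 0.1 for i in range(200)]
region_a = [math.sin(v) for v in t]
region_b = [math.sin(v + 0.2) for v in t]  # slight lag
print(f"connectivity: {pearson(region_a, region_b):.2f}")  # near 1
```

Real analyses correct for noise, head motion and physiological confounds, but the underlying quantity is this kind of correlation between regions within a network such as the DMN.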

Standard brain imaging tests revealed considerable damage, but unexpectedly, rs-fMRI revealed robust functional connectivity within the default mode network (DMN), which is a brain network thought to be involved in human consciousness. Studies have shown that stronger DMN connectivity in patients with disorders of consciousness predicts better neurologic recovery. The patient's DMN connectivity was comparable to that seen in healthy individuals, suggesting that the neurologic prognosis may not be as grim as conventional tests implied.

Twenty days later, on hospital day 61, the patient began following verbal commands. He blinked his eyes to command, opened his mouth to command, and on day 66 followed four out of four vocalization commands. By this time, he also consistently demonstrated gaze tracking with his eyes in response to visual and auditory stimuli.

"Because there are so many unanswered questions about the potential for recovery in unresponsive patients who have survived severe COVID-19, any available data that could inform prognosis are critical," said senior author Brian Edlow, MD, director of the Laboratory for NeuroImaging of Coma and Consciousness and associate director of the Center for Neurotechnology and Neurorecovery at MGH. "Our unexpected observations do not prove that functional MRI predicts outcomes in these patients, but they suggest that clinicians should consider the possibility that unresponsive survivors of severe COVID-19 may have intact brain networks. We should thus exercise caution before presuming a poor neurologic outcome based on our conventional tests."

Providing families with an accurate prognosis about neurological recovery is particularly challenging for patients with COVID-19, because so little is known about how the brain is affected by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), or associated inflammation and clotting disorders. "Initially, our goal in the intensive care unit was to support patients through the critical illness of COVID-19," said lead author David Fischer, MD, Neurocritical Care fellow at MGH. "However, we found that a subset of patients, after surviving the critical illness, were not waking up as expected. As neurologists, we were asked by many families whether their loved ones would regain consciousness - a critical question given that decisions about life support often hinged on the answer - but we were uncertain. We used functional MRI to try to provide a more comprehensive assessment of brain function."

The application of functional MRI to critically ill patients with disorders of consciousness is the culmination of decades of work to develop this technology and ultimately translate it to clinical care. Co-author Bruce Rosen, MD, PhD, director of the Athinoula A. Martinos Center for Biomedical Imaging at MGH, and one of the developers of functional MRI in the early 1990s, explained that "we have to be cautious when interpreting results from a single patient, but this study provides proof of principle that clinicians may be able to use advanced imaging techniques like functional MRI to get a clearer picture of a patient's brain function, and hence the potential for recovery."

Credit: 
Massachusetts General Hospital

Researchers foresee linguistic issues during space travel

LAWRENCE -- It lacks the drama of a shape-shifting alien creature, but another threat looms over the prospect of generations-long, interstellar space travel: Explorers arriving on Xanadu could face problems communicating with previous and subsequent arrivals, their spoken language having changed in isolation along the way.

Therefore, a new paper co-authored by a University of Kansas professor of linguistics and published in a journal affiliated with the European Space Agency recommends that such crews include, if not a linguist, members with knowledge of what is likely to occur and how to adapt.

Associate Professor Andrew McKenzie of KU and Jeffrey Punske, assistant professor of linguistics at Southern Illinois University, co-authored the article "Language Development During Interstellar Travel" in the April edition of Acta Futura, the journal of the European Space Agency's Advanced Concepts Team.

In it, they discuss the concept of language change over time, citing such earthbound examples of long-distance voyages as the Polynesian island explorers and extrapolating from there.

It might seem far-fetched, but the authors cite language change even during their own lifetimes with the rise - no pun intended - of uptalk.

They write that "it is increasingly common for speakers to end statements with a rising intonation. This phenomenon, called uptalk (or sometimes High Rising Terminal), is often mistaken for a question tone by those without it in their grammars, but it actually sounds quite distinct and indicates politeness or inclusion. Uptalk has only been observed occurring within the last 40 years, but has spread from small groups of young Americans and Australians to most of the English-speaking world, even to many Baby Boomers who had not used it themselves as youth."

"Given more time, new grammatical forms can completely replace current ones."

Imagine trying to chat with Chaucer today. Even improvements in translation technology might not be enough.

In a recent interview, McKenzie gamed it out.

"If you're on this vessel for 10 generations, new concepts will emerge, new social issues will come up, and people will create ways of talking about them," McKenzie said, "and these will become the vocabulary particular to the ship. People on Earth might never know about these words, unless there's a reason to tell them. And the further away you get, the less you're going to talk to people back home. Generations pass, and there's no one really back home to talk to. And there's not much you want to tell them, because they'll only find out years later, and then you'll hear back from them years after that.

"The connection to Earth dwindles over time. And eventually, perhaps, we'll get to the point where there's no real contact with Earth, except to send the occasional update.

"And as long as the language changes on the vessel, and then at an eventual colony, the question becomes 'Do we still bother learning how to communicate with people on Earth?' Yes. So if we have Earth English and vessel English, and they diverge over the years, you have to learn a little Earth English to send messages back, or to read the instruction manuals and information that came with the ship.

"Also, keep in mind that the language back on Earth is going to change, too, during that time. So they may well be communicating like we'd be using Latin -- communicating with this version of the language nobody uses."

The authors also point out that an adaptation in the form of sign language will be needed for use with and among crew members who, genetics tells us, are sure to be born deaf.

In any case, they write, "every new vessel will essentially offload linguistic immigrants to a foreign land. Will they be discriminated against until their children and grandchildren learn the local language? Can they establish communication with the colony ahead of time to learn the local language before arrival?

"Given the certainty that these issues will arise in scenarios such as these, and the uncertainty of exactly how they will progress, we strongly suggest that any crew exhibit strong levels of metalinguistic training in addition to simply knowing the required languages. There will be need for an informed linguistic policy on board that can be maintained without referring back to Earth-based regulations."

If a study of the linguistic changes aboard ship could be performed, it would only "add to its scientific value," McKenzie and Punske conclude.

Credit: 
University of Kansas

Yellow pond-lily prefers cyclic flowers to spiral ones

Biologists from Lomonosov Moscow State University and HSE University have studied the patterns of flower development in yellow pond-lily (Nuphar lutea). They found that all the floral organs are arranged in cycles (whorls) rather than inserted sequentially in a spiral, as is the case in some other basal angiosperms. The ancestors of yellow pond-lily were among the first to diverge from the root of the angiosperm evolutionary tree, which is why it can be used to hypothesize about the structure of the first flowers. The study has been published in the journal Frontiers in Cell and Developmental Biology.

The flower is one of the key evolutionary innovations of angiosperms. It helps attract various pollinators, protects the seeds inside the fruit, and provides new means of seed dispersal that do not exist in gymnosperms. Thanks to these advantages, flowering plants have settled across the planet and have become the most numerous group of land plants.

How flowers evolved and how they looked initially remains a mystery. The appearance of the ancestral flowers can be inferred with the help of plants that have preserved the greatest degree of similarity to the first angiosperms. It makes sense to look for them among the basal groups, whose ancestors diverged from the phylogenetic root of flowering plants earlier than the others. It is highly probable that the flower structure in such organisms will be similar to the initial one.

Among extant flowering plants, Nymphaeales are rather close to the root of the angiosperms. Yellow pond-lily (Nuphar lutea) is widespread in Eurasia and also occurs in North America, making it a convenient model organism. Yet detailed studies of its flower structure using modern research methods have been lacking.

Researchers from Lomonosov Moscow State University and HSE University have collected several dozen rhizomes of Nuphar lutea with leaves and flowers. Some of them were dissected to prepare specimens for light and scanning electron microscopy.

The researchers focused on shoot tips, where new leaves and flowers form. Young flowers at different stages of development were selected for the study. To determine their architecture, the researchers measured the angles between similar organs of the flower.

Elements of plant shoots--leaves, flowers, lateral buds and the lateral branches developing from them--are frequently arranged in a spiral. It had previously been assumed that basal angiosperms, including Nuphar, have a similar arrangement of organs. But the researchers discovered that in Nuphar lutea the angles between the sepals differed from those of a spiral insertion (85° and 55°, rather than 137.5°). It appears that the sepals and petals form cycles--two whorls for the sepals and a single whorl for the petals--although they are not always initiated simultaneously within a whorl.

Nuphar lutea develops five sepals. If they were all in one whorl, the angle between adjacent sepals would be 72°. In fact, they were placed at angles that in effect formed two circles: three elements in the external circle and two in the internal one. The number of petals usually varied from 14 to 15, but they too formed a cycle rather than a spiral. Even the numerous stamens tended to be arranged in alternating whorls.
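The geometric distinction the researchers relied on can be sketched numerically (our illustration, not the paper's code): spiral phyllotaxis places successive organs at the golden angle of about 137.5°, while a single whorl of n organs spaces them 360/n apart.

```python
# Expected divergence angles for spiral vs whorled organ arrangement.
# Golden angle: 360 * (1 - 2 / (1 + sqrt(5))) ~ 137.5 degrees.
GOLDEN_ANGLE = 360 * (1 - 2 / (1 + 5 ** 0.5))

def single_whorl_angle(n_organs: int) -> float:
    """Angle between adjacent organs if all n sit in one whorl."""
    return 360.0 / n_organs

print(f"spiral divergence: {GOLDEN_ANGLE:.1f} deg")   # 137.5
print(f"five in one whorl: {single_whorl_angle(5)} deg")  # 72.0
# The measured sepal angles of 85 and 55 deg fit neither value,
# consistent with a 3 + 2 arrangement across two whorls.
```

Comparing measured angles against these two predictions is what lets the whorled interpretation be distinguished from a spiral one.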

Credit: 
National Research University Higher School of Economics

What ethical models for autonomous vehicles don't address - and how they could be better

There's a fairly large flaw in the way that programmers are currently addressing ethical concerns related to artificial intelligence (AI) and autonomous vehicles (AVs). Namely, existing approaches don't account for the fact that people might try to use the AVs to do something bad.

For example, let's say that there is an autonomous vehicle with no passengers and it is about to crash into a car containing five people. It can avoid the collision by swerving off the road, but it would then hit a pedestrian.

Most discussions of ethics in this scenario focus on whether the autonomous vehicle's AI should be selfish (protecting the vehicle and its cargo) or utilitarian (choosing the action that harms the fewest people). But that either/or approach to ethics can raise problems of its own.

"Current approaches to ethics and autonomous vehicles are a dangerous oversimplification - moral judgment is more complex than that," says Veljko Dubljević, an assistant professor in the Science, Technology & Society (STS) program at North Carolina State University and author of a paper outlining this problem and a possible path forward. "For example, what if the five people in the car are terrorists? And what if they are deliberately taking advantage of the AI's programming to kill the nearby pedestrian or hurt other people? Then you might want the autonomous vehicle to hit the car with five passengers.

"In other words, the simplistic approach currently being used to address ethical considerations in AI and autonomous vehicles doesn't account for malicious intent. And it should."

As an alternative, Dubljević proposes using the so-called Agent-Deed-Consequence (ADC) model as a framework that AIs could use to make moral judgments. The ADC model judges the morality of a decision based on three variables.

First, is the agent's intent good or bad? Second, is the deed or action itself good or bad? Lastly, is the outcome or consequence good or bad? This approach allows for considerable nuance.

For example, most people would agree that running a red light is bad. But what if you run a red light in order to get out of the way of a speeding ambulance? And what if running the red light means that you avoided a collision with that ambulance?
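A minimal sketch of how an ADC-style judgment could weigh all three variables together (our illustration with made-up scores, not Dubljević's formal model):

```python
from dataclasses import dataclass

# Toy ADC-style scoring: each component gets -1 (bad) or +1 (good),
# and the overall judgment weighs all three rather than only outcomes.
@dataclass
class Scenario:
    agent_intent: int   # -1 bad, +1 good
    deed: int           # -1 bad (e.g. running a red light), +1 good
    consequence: int    # -1 bad, +1 good

def adc_judgment(s: Scenario) -> str:
    score = s.agent_intent + s.deed + s.consequence
    return "acceptable" if score > 0 else "unacceptable"

# Running a red light for no reason: bad intent, bad deed, bad outcome.
print(adc_judgment(Scenario(-1, -1, -1)))  # unacceptable
# Running it to clear a path for an ambulance: good intent, bad deed,
# good outcome -> judged acceptable on balance.
print(adc_judgment(Scenario(+1, -1, +1)))  # acceptable
```

The point of the sketch is structural: a purely consequence-based rule would score both red-light cases the same, while the three-variable version captures the nuance the example describes.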

"The ADC model would allow us to get closer to the flexibility and stability that we see in human moral judgment, but that does not yet exist in AI," says Dubljević. "Here's what I mean by stable and flexible. Human moral judgment is stable because most people would agree that lying is morally bad. But it's flexible because most people would also agree that people who lied to Nazis in order to protect Jews were doing something morally good.

"But while the ADC model gives us a path forward, more research is needed," Dubljević says. "I have led experimental work on how both philosophers and lay people approach moral judgment, and the results were valuable. However, that work gave people information in writing. More studies of human moral judgment are needed that rely on more immediate means of communication, such as virtual reality, if we want to confirm our earlier findings and implement them in AVs. Also, vigorous testing with driving simulation studies should be done before any putatively 'ethical' AVs start sharing the road with humans on a regular basis. Vehicle terror attacks have, unfortunately, become more common, and we need to be sure that AV technology will not be misused for nefarious purposes."

Credit: 
North Carolina State University

Global success for Canadian companies depends on prior R&D investment, receptiveness to new learning

image: Walid Hejazi is an Associate Professor of Economic Analysis and Policy at the University of Toronto's Rotman School of Management. He has researched, advised, and testified extensively on global competitiveness, and is currently working on a series of studies which shed light on the competitiveness and productivity of Canadian firms. He teaches Macro and Global Economics in Rotman's MBA and EMBA programs, and has also delivered lectures in over 30 countries. He has also developed and teaches a successful MBA course in Islamic Finance, the first such course in Canada.

Image: 
Rotman School

Global success for Canadian companies depends on prior R&D investment, receptiveness to new learning, shows new study.

Toronto - Canadian companies that go international are known to be more productive and successful than those that don't.

New research has quantified the reasons why. It shows that about 80 percent of global companies' productivity is due to what they did before going abroad - namely, making themselves more competitive, including by investing in research and development. The other 20 percent is due to what companies learned from their exposure to international markets.

The findings show that going global on its own is no guarantee of higher productivity, says lead researcher Walid Hejazi.

"A company has to be prepared, it has to be much, much better in Canada before it can be successful abroad," says Prof. Hejazi, an associate professor of economic analysis and policy at the University of Toronto's Rotman School of Management and an expert on Canadian companies' global competitiveness.

That preparation is required because a company needs processes and technologies in place that enable it to absorb and integrate what it learns once it has entered a foreign market.

Companies that have not yet reached a certain productivity threshold can still benefit from going global, however. The researchers found that the minimum required level of productivity represented a range, rather than being a fixed number.

"If a company is prepared well enough, it can still go abroad and then through learning, rise above the threshold that it needs," says Prof. Hejazi. Overall, global Canadian companies were found to be 60 to 76 percent more productive than those that stayed home. About 20 percent of their investment moved through offshore financial centres.

Companies have the best chance of learning when they locate to countries with a similar language--aiding in communication--and with strong legal and government institutions, the researchers found.

If that's not the case, companies can still position themselves to take advantage of foreign market learning opportunities by assembling a culturally literate management team, with experience in the new market, says Prof. Hejazi.

The findings are drawn from advanced statistical analyses that Prof. Hejazi and his co-researchers conducted on confidential data from Statistics Canada about every Canadian company between 2000 and 2014. The research responds to the Canadian government's interest in better understanding the ingredients for international success among Canadian companies.

Given that Canadian companies still lag many other countries for R&D spending, the study's results underscore the importance of government and Canadian business working together to promote innovation and advancement to the global stage, says Prof. Hejazi.

Credit: 
University of Toronto, Rotman School of Management

To let neurons talk, immune cells clear paths through brain's 'scaffolding'

To make new memories, our brain cells first must find one another. Small protrusions that bud out from the ends of neurons' long, branching tentacles dock neurons together so they can talk. These ports of cellular chatter -- called synapses, and found in the trillions throughout the brain -- allow us to represent new knowledge. But scientists are still learning just how these connections form in response to new experiences and information. Now, a study by scientists in UC San Francisco's Weill Institute for Neurosciences has identified a surprising new way that the brain's immune cells help out.

In recent years, scientists have discovered that the brain's dedicated immune cells, called microglia, can help get rid of unnecessary connections between neurons, perhaps by engulfing synapses and breaking them down. But the new study, published July 1, 2020 in Cell, finds microglia can also do the opposite -- making way for new synapses to form by chomping away at the dense web of proteins between cells, clearing a space so neurons can find one another. Continuing to study this new role for microglia might eventually lead to new therapeutic targets in certain memory disorders, the researchers say.

Neurons live within a gelatinous mesh of proteins and other molecules that help to maintain the three-dimensional structure of the brain. This scaffolding, collectively called the extracellular matrix (ECM), has long been an afterthought in neuroscience. For decades, researchers focused on neurons and, more recently, on the cells that support them, while largely considering the ECM unimportant.

But neurobiologists are starting to realize that the ECM, which makes up about 20 percent of the brain, actually plays a role in important processes like learning and memory. At a certain point in brain development, for example, the solidifying ECM seems to put the brakes on the rapid pace at which new neuronal connections turn over in babies, seemingly shifting the brain's priority from the breakneck adaptation to the new world around it, to a more stable maintenance of knowledge over time. Scientists also wonder if a stiffening of the extracellular matrix later in life might somehow correspond to the memory challenges that come with aging.

"The extracellular matrix has been here the whole time," said the study's first author Phi Nguyen, a biomedical sciences graduate student at UCSF. "But it's definitely been understudied."

Nguyen and his advisor, Anna Molofsky, MD, PhD, an associate professor in the UCSF Department of Psychiatry and Behavioral Sciences, first realized the ECM was important to their research on the hippocampus, a brain structure critical for learning and memory, when an experiment yielded unexpected results. Knowing that microglia chew away at obsolete synapses, they expected that disrupting microglia function would cause the number of synapses in the hippocampus to shoot up. Instead, synapse numbers dropped. And where they thought they'd find pieces of synapses being broken down in the "bellies" of microglia, instead they found pieces of the ECM.

"In this case microglia were eating something different than we expected," Molofsky said. "They're eating the space around synapses -- removing obstructions to help new synapses to form."

Before springing into action, the microglia wait for a signal from neurons, an immune molecule called IL-33, indicating that it's time for a new synapse to form, the study found. When researchers used genetic tools to block this signal, microglia failed to fulfill their ECM-chomping duties, leading to fewer new connections between neurons in the brains of mice and leaving mice struggling to remember certain details over time. When researchers instead drove the level of IL-33 signaling up, new synapses increased in number. In older mice, in which brain aging already slows the formation of new connections, ramping up IL-33 helped push the number of new synapses towards a more youthful level.

The study could be important for understanding -- and perhaps one day treating -- the kinds of memory problems we see in age-related diseases like Alzheimer's, according to study co-author Mazen Kheirbek, PhD, an associate professor of psychiatry whose lab studies brain circuits involved in mood and emotion. But the findings might also be important for specific types of emotional memory problems sometimes seen in anxiety-related disorders.

To determine how changes in IL-33 affect memory, the researchers taught mice to distinguish between an anxiety-inducing box (inside which the mice received a mild foot shock) and a neutral box. After a month, normal mice expressed far more fear in the shock-associated box by freezing in place (a rodent reflex to throw off predators) than they did in the neutral box, where they moved around more casually. But mice with disrupted IL-33 expressed high levels of fear in either box, suggesting they'd lost the kind of precise memory needed to determine when they should be scared and when they were safe.

Kheirbek likens this overgeneralized response to the kind of trauma-induced fear that might result from being mugged in a parking lot at night. Instead of being able to separate that fearful memory from new, perhaps less-threatening experiences, some people might develop a generalized fear that makes it hard for them to enter any parking lot at any time. "Deficits in this ability to have very precise, emotional memories are seen in a lot of anxiety disorders and particularly in PTSD," he said. "It's an overgeneralization of fear that can really interfere with your life."

For Molofsky's part, stumbling upon this unexpected finding has left her eager to learn more about the ECM and how it shapes the way we learn. Her lab is now working to identify new, poorly characterized pieces of the matrix to look for as yet undocumented ways it interacts with neurons and microglia in the brain.

"I'm in love with the extracellular matrix," Molofsky said. "A lot of people don't realize that the brain is made up not just of nerve cells, but also cells that keep the brain healthy, and even the space in between cells is packed with fascinating interactions. I think a lot of new treatments for brain disorders can come from remembering that."

Credit: 
University of California - San Francisco

Do we know what we want in a romantic partner? No more than a random stranger would

We all can describe our ideal partner. Perhaps they are funny, attractive and inquisitive. Or maybe they are down-to-earth, intelligent and thoughtful. But do we actually have special insight into ourselves, or are we just describing positive qualities that everyone likes?

New research coming out of the University of California, Davis, suggests that people's ideal partner preferences do not reflect any unique personal insight. The paper, "Negligible Evidence That People Desire Partners Who Uniquely Fit Their Ideals," was published last week in the Journal of Experimental Social Psychology.

"The people in our study could very easily list their top three attributes in an ideal partner," noted Jehan Sparks, former UC Davis doctoral student and lead author of the study. "We wanted to see whether those top three attributes really mattered for the person who listed them. As it turns out, they didn't."

In the research, more than 700 participants nominated their top three ideals in a romantic partner -- attributes like funny, attractive or inquisitive. Then they reported their romantic desire for a series of people they knew personally: Some were blind date partners, others were romantic partners, and others were friends.

Participants experienced more romantic desire to the extent that these personal acquaintances possessed the top three attributes. If Vanessa listed funny, attractive and inquisitive, she experienced more desire for partners who were funny, attractive and inquisitive.

"On the surface, this looks promising," notes Paul Eastwick, a professor in the UC Davis Department of Psychology and co-author.

"You say you want these three attributes, and you like the people who possess those attributes. But the story doesn't end there." -- Professor Paul Eastwick, UC Davis

What would a stranger say?

The researchers included a twist: Each participant also considered the extent to which the same personal acquaintances possessed three attributes nominated by some other random person in the study. For example, if Kris listed down-to-earth, intelligent and thoughtful as her own top three attributes, Vanessa also experienced more desire for acquaintances who were down-to-earth, intelligent and thoughtful.

"So in the end, we want partners who have positive qualities," said Sparks, "but the qualities you specifically list do not actually have special predictive power for you." The authors take these findings to mean that people don't have special insight into what they personally want in a partner.

Eastwick compared it to ordering food at a restaurant. "Why do we order off the menu for ourselves? Because it seems obvious that I will like what I get to pick. Our findings suggest that, in the romantic domain, you might as well let a random stranger order for you -- you're just as likely to end up liking what you get."

The findings have implications for the way people approach online dating. People commonly spend many hours perusing online dating profiles in search of someone who specifically matches their ideals. Sparks and colleagues' research suggests that this effort may be misplaced.

"It's really easy to spend time hunting around online for someone who seems to match your ideals," notes Sparks. "But our research suggests an alternative approach: Don't be too picky ahead of time about whether a partner matches your ideals on paper. Or, even better, let your friends pick your dates for you."

Credit: 
University of California - Davis

Multisample technique to analyze cell adhesion

image: The new assay developed at KAUST is a fluorescent multiplex cell rolling assay that uses unique fluorescent tags to label up to seven cell population types.

Image: 
© 2020 Elham Roshdy

An efficient, robust method of examining the interactions between cells uses fluorescent tagging to simultaneously analyze multiple cell populations, speeding up a once tedious and limiting process. The new assay, developed at KAUST, also has applications in studying cellular processes in inflammation or cancer cell metastasis and in assessing potential treatments.

Cells move through blood vessels via cell adhesion--the interaction and attachment of cells to one another via specialized molecules on cell surfaces. During blood flow, shear forces act on the cells to help orchestrate cell adhesion. Dysregulation of cell adhesion can lead to inflammation and diseases such as cancer, while pathogens, such as viruses, exploit cell adhesion to infect the body.

"The conventional assay used to study cell-cell interactions is the parallel plate flow chamber (PPFC) assay, which records videos of cells rolling in flow and adhering to molecules on endothelial cells (blood vessel lining cells)," says group leader Jasmeen Merzaban. "This assay has been used for decades, but it is prone to error and possible bias, and it can only analyze one cell type at a time, making it hugely time consuming."

Ayman AbuElela, together with other graduate students in Merzaban's lab, set out to improve on the PPFC assay and speed up analyses.

Their new fluorescent multiplex cell rolling assay (FMCR) uses unique fluorescent tags to label up to seven cell population types. The cell samples are mixed just prior to entering the simulated flow over a layer of endothelial cells. A spectral confocal microscope captures images of the mixed cell populations in real time, at high temporal resolution, in a single pass. This allows researchers to collect data on cell kinetics, including the rolling frequency, velocity and tethering capability of individual cell types.
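As a rough illustration of the kinetic quantities mentioned, the sketch below computes a rolling frequency and mean rolling velocity from tracked cell positions. The data layout, the 10 um/s threshold, and the function names are hypothetical choices for illustration, not the KAUST pipeline:

```python
# Illustrative sketch only: estimating rolling kinetics for one tagged
# cell population from tracked positions. Layout and threshold are
# hypothetical, not the actual FMCR analysis pipeline.

def rolling_velocity(positions_um, frame_interval_s):
    """Mean frame-to-frame speed (um/s) of one tracked cell."""
    steps = [abs(b - a) for a, b in zip(positions_um, positions_um[1:])]
    return sum(steps) / (len(steps) * frame_interval_s)

def summarize_population(tracks, frame_interval_s, rolling_max_um_s=10.0):
    """Fraction of cells classified as rolling, and their mean velocity."""
    velocities = [rolling_velocity(t, frame_interval_s) for t in tracks]
    rolling = [v for v in velocities if 0 < v <= rolling_max_um_s]
    frequency = len(rolling) / len(velocities)
    mean_v = sum(rolling) / len(rolling) if rolling else 0.0
    return frequency, mean_v

# Toy data: two slowly rolling cells and one fast free-flowing cell,
# positions in micrometres sampled every 0.5 s.
tracks = [[0, 2, 4, 6], [0, 1, 2, 3], [0, 40, 80, 120]]
freq, mean_v = summarize_population(tracks, frame_interval_s=0.5)
```

In the real assay, one such summary would be produced per fluorescent channel, giving per-population kinetics from a single mixed-sample run.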

"We developed a comprehensive data analysis pipeline, which enables us to analyze the multiple cell types we obtain by this approach and achieves high statistical power and sensitivity," says AbuElela. "FMCR is now used in our lab to study the migration of various human cells including stem cells, activated immune cells and breast cancer cells."

Another advantage of the new procedure is that before or during the assay, a test compound, such as a new drug, can be added to the cells to investigate the effect of the compound on cell adhesion.

"Such studies provide major insights into the effect of treatments on the migration and metastasis of cells and on how the drugs might work inside the body," notes Merzaban.

Credit: 
King Abdullah University of Science & Technology (KAUST)

U of SC: How non-alcoholic fatty liver disease causes Alzheimer's-like neuroinflammation

COLUMBIA, SC -- Research from the laboratory of Saurabh Chatterjee, associate professor of Environmental Health Sciences at the University of South Carolina's Arnold School of Public Health, led by postdoctoral researcher Ayan Mondal, has revealed the cause behind the previously established link between non-alcoholic fatty liver disease (NAFLD, recently reclassified as metabolic associated fatty liver disease, or MAFLD) and neurological problems. The link they discovered -- the unique role of an adipokine, lipocalin-2, in causing neuroinflammation -- may explain the prevalence of Alzheimer's disease-like and Parkinson's disease-like neurological phenotypes among individuals with MAFLD.

The investigators, who include members of Chatterjee's Environment Health & Disease Laboratory and researchers from across UofSC*, published their results in the Journal of Neuroinflammation, a pioneering journal in the field. These findings build on years of research by the interdisciplinary team, whose focus on how environmental toxins contribute to liver disease, metabolic syndrome and obesity has unearthed previously unknown pathways connecting the liver and the gut microbiome with other parts of the body.

MAFLD affects up to 25 percent of Americans and much of the global population - many of whom are unaware of their condition. Yet the effects of this silent disease are far-reaching, possibly leading to cirrhosis, liver cancer, liver failure and other liver diseases. The findings from the current study not only confirm the strong correlation between MAFLD and neuroinflammation/neurodegeneration established by other recent research, but also explain how this happens.

"Lipocalin 2 is one of the important mediators exclusively produced in the liver and circulated throughout the body among those who have nonalcoholic steatohepatitis - or NASH - which is a more advanced form of MAFLD," Chatterjee says. "The research is immensely significant because MAFLD patients have been shown to develop Alzheimer's and Parkinson's-like symptoms as older adults. Scientists can use these results to advance our knowledge in neuroinflammatory complications in MAFLD and develop appropriate treatments."

Ninety percent of the obese population and 40 to 70 percent of those with type 2 diabetes appear to have MAFLD, according to the Centers for Disease Control and Prevention. In addition to overweight/obese status and diabetes, other risk factors include high cholesterol and/or triglycerides, high blood pressure and metabolic syndrome.

These individuals have a higher risk for having diseased livers, which are associated with increased lipocalin 2 - as found in the present study. The lipocalin 2 circulates throughout the body at higher levels, possibly inducing inflammation in the brain.

"Chronic neuroinflammation is a critical element in the onset and progression of neurodegenerative diseases, including Alzheimer's disease," says Prakash Nagarkatti, UofSC Vice President for Research and a member of the research team.

"Our study may help design new therapeutic approaches to counter the neuroinflammatory pathology in NASH but also in other related brain pathology associated with chronic inflammatory diseases," adds Chatterjee.

Credit: 
University of South Carolina

Asthma and allergies more common in 'night owl' teens: study

Teenagers who prefer to stay up late at night and sleep in late the next day are more likely to develop asthma and allergies than their "early bird" counterparts, according to new research published today.

"Compared to the morning type, those who go to bed late have approximately three times higher risk of developing asthma," said principal investigator Subhabrata Moitra, a post-doctoral fellow with the Alberta Respiratory Centre in the division of pulmonary medicine at the University of Alberta.

"We also found allergic rhinitis symptoms were twice as likely in late sleepers as in those who sleep early at night."

Moitra said more than 300 million people worldwide suffer from asthma -- the most common non-communicable disease among children -- and the number is increasing every year. This is the first study to examine "chronotype," or sleep-time preference, and its associations with asthma and allergies in teenagers.

The researchers questioned 1,684 adolescents in the Indian state of West Bengal about their sleep preferences and respiratory health, as part of the Prevalence and Risk Factors of Asthma and Allergy-Related Diseases Among Adolescents (PERFORMANCE) study. The questions included whether they had been diagnosed with asthma or experienced symptoms of rhinitis such as wheezing, runny nose or coughing.

Of the late risers, 23.6 per cent reported having asthma, compared with 6.2 per cent of the early risers.
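For readers wanting the arithmetic, the crude (unadjusted) prevalence ratio implied by these two figures can be computed directly; note that the study's "approximately three times" estimate is adjusted for confounders and therefore differs from this back-of-envelope number:

```python
# Crude (unadjusted) prevalence ratio implied by the reported figures.
# The published risk estimate is confounder-adjusted, so it differs.
late_risers_asthma = 23.6 / 100    # asthma prevalence among late risers
early_risers_asthma = 6.2 / 100    # asthma prevalence among early risers
crude_ratio = late_risers_asthma / early_risers_asthma  # about 3.8
```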

The researchers found the association between asthma and sleep pattern preference held whether the teens were male or female, had a pet, lived in a rural or urban area, had a parent with asthma or allergies, or were exposed to second-hand smoke.

Moitra pointed out that humans are naturally early risers.

"Our ancestors evolved to wake as the sun rose and go to bed as the sun set," he said.

"However, a nighttime preference seems unavoidable for this young generation because digital screens are accessible at any hour."

Moitra said the researchers suspect that adolescents who go to bed late at night experience some level of sleep deprivation or sleep interruption. He said the blue or white-tinged light from computer, television and smartphone screens disrupts the production and function of melatonin, a sleep hormone.

"A perfect sleep is the result of good melatonin cycles," he said, adding that melatonin can also affect the immune system, and that the development of asthma and allergies is known to be the result of immune system alterations.

Moitra said his team intends to do further research to explore this association, including more objective tests of sleep quality and lung function.

In the meantime, he urges clinicians to ask patients more behavioural questions when diagnosing allergies and asthma.

"We need to be more vigilant to ask about eating habits, sleeping habits, whether they play outside, because these behaviours can be modified to help get rid of symptoms," he suggested.

Moitra said melatonin supplements can sometimes help with sleeplessness but should not be taken regularly because they can disrupt the body's natural production of the hormone.

He also suggested that we should minimize nighttime exposure to artificial light and, when it is unavoidable, use amber house lighting and reduce the brightness of LED screens.

Credit: 
University of Alberta Faculty of Medicine & Dentistry

COVID-19 news from Annals of Internal Medicine

Below please find a summary and link(s) of new coronavirus-related content published today in Annals of Internal Medicine. The summary below is not intended to substitute for the full article as a source of information. A collection of coronavirus-related content is free to the public at http://go.annals.org/coronavirus.

1. Clinical Validity of COVID-19 Serum Antibodies

Researchers studied 11,066 patients tested at Johns Hopkins Hospital to examine the characteristics of SARS-CoV-2 antibodies and assess their clinical utility. Of those, 115 patients were hospitalized and investigated for COVID-19. Clinical record review was performed to classify each patient into a COVID-19 case group (n=60) or a non-COVID-19 control group (n=55). These groups were compared with a laboratory control group. The researchers surmised that antibodies to SARS-CoV-2 demonstrate infection when measured at least 14 days after symptom onset, associate with clinical severity, and provide valuable diagnostic support in patients who test negative by nucleic acid amplification test (NAAT) on nasopharyngeal swabs but remain clinically suspicious for COVID-19. Beyond epidemiologic and therapeutic applications, the study shows the potential contribution of serology to COVID-19 diagnosis, which currently relies on integrating symptom surveillance, radiographic findings, and NAAT results. Read the full text: https://www.acpjournals.org/doi/10.7326/M20-2889.
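For readers unfamiliar with how such case/control groupings translate into clinical validity, the sketch below computes sensitivity and specificity from hypothetical counts (the paper's actual figures are in the full text linked above):

```python
# Hedged illustration with hypothetical counts, not the study's data:
# how a case/control design yields an antibody test's accuracy metrics.

def sensitivity(true_pos, false_neg):
    """Fraction of true cases the test detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of non-cases the test correctly rules out."""
    return true_neg / (true_neg + false_pos)

# Suppose that, at least 14 days after symptom onset, 54 of 60 cases
# tested antibody-positive and 54 of 55 controls tested negative.
sens = sensitivity(true_pos=54, false_neg=6)   # 0.90
spec = specificity(true_neg=54, false_pos=1)   # about 0.98
```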

Media contacts: PDFs for these articles are not yet available. Please click the link to read the full text. The lead author, Patrizio Caturegli, MD, MPH, can be contacted directly at pcat@jhmi.edu.

2. Qualitative Assessment of Rapid System Transformation to Primary Care Video Visits at an Academic Medical Center

In response to the COVID-19 pandemic, primary care practices across the United States have transitioned from in-person visits to virtual visits. However, there is limited information regarding the facilitators and barriers to the implementation of such a transition. Researchers from Stanford University School of Medicine evaluated the short-term implications of rapid transition to video visits at Stanford Primary Care through qualitative interviews with key stakeholders, and found critical issues to sustain video visits long-term. Read the full text: https://www.acpjournals.org/doi/10.7326/M20-1814.

Media contacts: PDFs for these articles are not yet available. Please click the link to read the full text. The lead author, Malathi Srinivasan, MD, can be contacted directly at malathis@stanford.edu.

3. Obesity and COVID-19 in New York City: A Retrospective Cohort Study

Authors from Weill Cornell Medicine set out to study the association between obesity and outcomes among a diverse cohort of 1,687 persons hospitalized with confirmed COVID-19 at 2 New York City hospitals. The authors' findings support the need to consider the community-specific prevalence of obesity when planning a community's COVID-19 response and also suggest that risk conferred by obesity is similar across age, sex, and race. Read the full text: https://www.acpjournals.org/doi/10.7326/M20-2730.

Media contacts: PDFs for these articles are not yet available. Please click the link to read the full text. The lead author, Parag Goyal, MD, MSc, can be contacted directly at pag9051@med.cornell.edu.

4. Regulatory T Cells for Treating Patients With COVID-19 and Acute Respiratory Distress Syndrome: Two Case Reports

Normally, regulatory T cells (also known as T regulatory cells or Tregs) migrate into inflamed tissues, dampening inflammatory responses. Patients with COVID-19 and acute respiratory distress syndrome (ARDS) have protracted hospitalizations characterized by excessive systemic inflammation (cytokine storm) and delayed lung repair, which is partly due to reduced or defective Tregs. Authors from Johns Hopkins University School of Medicine describe outcomes in 2 patients with COVID-19 and ARDS who were treated with Tregs, and are planning a multicenter, randomized, double-blind, placebo-controlled trial of CB Tregs for ARDS associated with COVID-19. Read the full text: https://www.acpjournals.org/doi/10.7326/L20-0681.

Media contacts: PDFs for these articles are not yet available. Please click the link to read the full text. The lead author, Douglas E. Gladstone, MD, can be contacted through Amy Mone at amone@jhmi.edu.

Credit: 
American College of Physicians

Group genomics drive aggression in honey bees

image: Researchers studied a unique population of gentle Africanized honey bees in Puerto Rico.

Image: 
Photo by Manuel A. Giannoni-Guzman

CHAMPAIGN, Ill. -- Researchers often study the genomes of individual organisms to try to tease out the relationship between genes and behavior. A new study of Africanized honey bees reveals, however, that the genetic inheritance of individual bees has little influence on their propensity for aggression. Instead, the genomic traits of the hive as a whole are strongly associated with how fiercely its soldiers attack.

The findings are reported in the Proceedings of the National Academy of Sciences.

“We’ve always thought that the most significant aspects of an organism’s behavior are driven, at least in part, by its own genetic endowment and not the genomics of its society,” said Matthew Hudson, a University of Illinois at Urbana-Champaign professor of bioinformatics in the department of crop sciences who led the research with Gene Robinson, an entomology professor and the director of the Carl R. Woese Institute for Genomic Biology at the U. of I. “This is a signal that there may be more to the genetics of behavior as a whole than we’ve been thinking about.”

The researchers focused on a unique population of gentle Africanized honey bees in Puerto Rico, which have evolved to become more docile than Africanized bees anywhere else in the world.

“We wanted to know which parts of the genome are responsible for gentle behavior versus aggressive behavior,” Hudson said. “And because there’s quite a bit of variation in aggression among these bees, they are an ideal population to study.”

Africanized bees are hardier and more resistant to disease than their European predecessors on the island, so scientists are eager to learn more about the genetic underpinnings of the Puerto Rican bees’ gentle nature.

When a honey bee hive is disturbed, guard bees emit a chemical signal that spurs soldier bees into action. The response depends on the nature of the threat and the aggressiveness of the hive. Whether the soldiers sting their target is another measure of aggression, as soldiers that sting will die as a result.

In general, foragers do little to defend the hive.

The researchers compared the genomes of soldier and forager bees from each of nine honey bee colonies in Puerto Rico. They also tested how aggressively the soldier bees responded to an assault on the hive.

To their surprise, the scientists found no genome-sequence differences between the soldiers and foragers that consistently explained the different responses.

But when the researchers conducted a genomewide association study comparing the most-aggressive and least-aggressive hives, they saw a strong correlation between hive genomics and aggression. The analyses revealed that one region of the genome appeared to play a central role in the hives' relative gentleness or aggression.

“There was one chunk of DNA where the frequency of that chunk in the hive seems to dictate how gentle that hive is going to be to a large extent,” Hudson said. “What that tells us is that the individual genetic makeup of the bee doesn’t have a strong influence on how aggressive it is. But the genetic makeup of the society that the bees live in – the colony – has a very strong impact on how aggressive the bees in that colony are.”
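Hudson's description amounts to a colony-level association: the frequency of a variant within a hive, paired with that hive's aggression. A toy sketch with invented numbers (nine hives, echoing the study's sample, but none of its actual data):

```python
# Toy illustration of a hive-level association, with invented numbers:
# frequency of a candidate DNA variant in each of nine colonies, paired
# with that colony's measured aggression score.
variant_freq = [0.1, 0.2, 0.25, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
aggression = [2, 3, 3, 5, 6, 8, 9, 11, 12]

# Split hives at the median variant frequency and compare mean aggression.
pairs = sorted(zip(variant_freq, aggression))
low = [a for _, a in pairs[:4]]     # hives where the variant is rare
high = [a for _, a in pairs[5:]]    # hives where the variant is common
mean_low = sum(low) / len(low)
mean_high = sum(high) / len(high)   # higher: a colony-level effect
```

The point of the contrast is that the predictor lives at the level of the colony, not the individual bee, mirroring the soldier-versus-forager null result described above.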

"Many behavioral traits in animals and humans are known to be strongly affected by inherited differences in genome sequence, but for many behaviors, how an individual acts also is influenced by how others around it are acting - nature and nurture, respectively," Robinson said. "We now see that in the beehive, nurture can also have a strong genomic signature."

Such behavioral genomic influences may be particularly pronounced in honey bees, which live in an extraordinarily cooperative society where each individual has a defined social and functional role, he said.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Incoming CEOs with premium pay packages perform accordingly, study shows

The average pay package for CEOs at top U.S. companies surpassed $12 million last year, according to the latest Associated Press survey, as boards attempted to keep pace in the labor market for CEO talent.

Many previous studies focused on how CEO pay relates to past performance. But new research from the University of Notre Dame takes a different approach: The paper examines how compensation for incoming chief executives -- which serves as a sign of the board's upfront confidence in the CEO's ability -- is related to subsequent performance in the years that follow.

CEOs who are paid more than the going rate during their first two years on the job tend to perform more effectively over the rest of their tenure, according to "Board predictive accuracy in executive selection decisions: How do initial board perceptions of CEO quality correspond with subsequent CEO career performance?" forthcoming in Organization Science from Adam Wowak and Craig Crossland, management professors at Notre Dame's Mendoza College of Business, and Timothy Quigley from the University of Georgia.

"Newly hired CEOs who are deemed by their boards to warrant above-average pay tend to deliver above-average performance in subsequent years," Wowak said. "This is another way of saying that boards are, in general, reasonably accurate in their initial assessments of CEO quality, as their decision about how much to pay the newly minted CEO is predictive of how the CEO performs in the future. Conversely, incoming CEOs who receive below-market pay perform less effectively, on average."

"The relationship is far from complete, however," he continued, "Although there is a statistically significant positive effect, there is still a lot of unexplained variance. Boards are getting it right more than they are getting it wrong, but not by a large margin."

The study examined CEOs who began their tenures between 2004 and 2012 at S&P 1500 firms. The team measured the extent to which they were "overpaid" versus "underpaid" when they were hired. They calculated the degree to which pay exceeded or fell short of suggested market norms based on, among other things, the size of the company and its industry. They looked at how these CEOs performed through the end of their tenures or the end of 2017, whichever came first. To measure performance, they used a new technique that isolated the CEO's individual effect on firm performance after accounting for contextual factors, including conditions inherited by the CEO upon joining, performance of the rest of the industry and macro-level effects. 
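The design can be caricatured as a two-step calculation: a pay premium relative to a benchmark, then its correlation with later performance. A hedged toy sketch with invented numbers and none of the study's controls:

```python
import statistics

# Illustrative sketch only (invented numbers, not the study's data or
# model): treat each incoming CEO's "pay premium" as pay minus a market
# benchmark, then ask whether the premium correlates with later
# performance -- the study does this with far richer controls.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical pay premium at hire (pay minus benchmark, in $M) and a
# later performance score for six CEOs.
premium = [-3.0, -1.5, -0.5, 0.5, 1.5, 3.0]
performance = [-0.8, 0.4, -0.3, 0.5, -0.2, 0.9]

r = pearson_r(premium, performance)
# Positive but well below 1: predictive, yet with plenty of
# unexplained variance, as the quote below emphasizes.
```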

"When looking at CEO pay ?-- or anyone's pay, really ?-- it's important to remember that it reflects both backward and forward-aiming rationales," Wowak said. "Most people tend to look to the past. Our study serves as a reminder that practitioners, journalists and others should consider both aspects when forming assessments or critiques of CEO pay. Only looking backward ignores an important part of the story."

Credit: 
University of Notre Dame

Simulations show magnetic field can change 10 times faster than previously thought

A new study by the University of Leeds and University of California at San Diego reveals that changes in the direction of the Earth's magnetic field may take place 10 times faster than previously thought.

Their study gives new insight into the swirling flow of iron 2800 kilometres below the planet's surface and how it has influenced the movement of the magnetic field during the past hundred thousand years.

Our magnetic field is generated and maintained by a convective flow of molten metal that forms the Earth's outer core. Motion of the liquid iron creates the electric currents that power the field, which not only helps guide navigational systems but also shields us from harmful extraterrestrial radiation and holds our atmosphere in place.

The magnetic field is constantly changing. Satellites now provide new means to measure and track its current shifts but the field existed long before the invention of human-made recording devices. To capture the evolution of the field back through geological time scientists analyse the magnetic fields recorded by sediments, lava flows and human-made artefacts. Accurately tracking the signal from Earth's core field is extremely challenging and so the rates of field change estimated by these types of analysis are still debated.

Now, Dr Chris Davies, associate professor at Leeds, and Professor Catherine Constable from the Scripps Institution of Oceanography, UC San Diego, have taken a different approach. They combined computer simulations of the field generation process with a recently published reconstruction of time variations in Earth's magnetic field spanning the last 100,000 years.

Their study, published in Nature Communications, shows that changes in the direction of Earth's magnetic field reached rates that are up to 10 times larger than the fastest currently reported variations of up to one degree per year.

They demonstrate that these rapid changes are associated with local weakening of the magnetic field. This means these changes have generally occurred around times when the field has reversed polarity or during geomagnetic excursions when the dipole axis -- corresponding to field lines that emerge from one magnetic pole and converge at the other -- moves far from the locations of the North and South geographic poles.

The clearest example of this in their study is a sharp change in the geomagnetic field direction of roughly 2.5 degrees per year 39,000 years ago. This shift was associated with a locally weak field strength, in a confined spatial region just off the west coast of Central America, and followed the global Laschamp excursion - a short reversal of the Earth's magnetic field roughly 41,000 years ago.
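A "degrees per year" figure like this is an angular rate between field directions. The sketch below shows the underlying geometry (not the authors' method), using declination/inclination pairs for illustration:

```python
import math

# Minimal sketch of a directional rate of change: the great-circle
# angle between two field directions, each given as declination and
# inclination in degrees, divided by the elapsed time.

def angular_separation_deg(dec1, inc1, dec2, inc2):
    """Angle in degrees between two unit field-direction vectors."""
    def unit(dec, inc):
        d, i = math.radians(dec), math.radians(inc)
        return (math.cos(i) * math.cos(d),
                math.cos(i) * math.sin(d),
                math.sin(i))
    a, b = unit(dec1, inc1), unit(dec2, inc2)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

# A direction swinging 25 degrees in declination over a decade (at zero
# inclination) matches the study's fastest rate of 2.5 degrees per year.
rate_deg_per_year = angular_separation_deg(0, 0, 25, 0) / 10.0
```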

Similar events are identified in computer simulations of the field which can reveal many more details of their physical origin than the limited paleomagnetic reconstruction.

Their detailed analysis indicates that the fastest directional changes are associated with movement of reversed flux patches across the surface of the liquid core. These patches are more prevalent at lower latitudes, suggesting that future searches for rapid changes in direction should focus on these areas.

Dr Davies, from the School of Earth and Environment, said: "We have very incomplete knowledge of our magnetic field prior to 400 years ago. Since these rapid changes represent some of the more extreme behaviour of the liquid core they could give important information about the behaviour of Earth's deep interior."

Professor Constable said: "Understanding whether computer simulations of the magnetic field accurately reflect the physical behaviour of the geomagnetic field as inferred from geological records can be very challenging.

"But in this case we have been able to show excellent agreement in both the rates of change and general location of the most extreme events across a range of computer simulations. Further study of the evolving dynamics in these simulations offers a useful strategy for documenting how such rapid changes occur and whether they are also found during times of stable magnetic polarity like what we are experiencing today."

Credit: 
University of Leeds