Culture

Expressing variety of emotions earns entrepreneurs funding

VANCOUVER, Wash. - Putting on a happy face might not be enough for entrepreneurs to win over potential investors.

Despite perceptions that entrepreneurs should always be positive about their ventures, a study led by a Washington State University researcher found that entrepreneurs whose facial expressions moved through a mix of happiness, anger and fear during funding pitches were more successful.

"Our findings show that there's a role for different emotions in pitches," said Ben Warnick, WSU assistant professor in WSU's Carson College of Business and lead author on the study published in the Journal of Business Venturing. "For example, an angry facial expression can convey how much you care about something, instead of just smiling, which on the extreme end can come off as insincere or overoptimistic. It's good to balance that out. There are different reasons for using different expressions."

While previous research--and advice for entrepreneurs--has focused on using happy or positive attitudes in pitches, Warnick and his co-authors looked at several emotions: happiness, anger, fear and sadness.

For the study, the researchers analyzed nearly 500 pitch videos from the online crowdfunding site Kickstarter. They used facial-analysis software to code the presenters' facial expressions for the four emotions, as well as neutral expressions, in every frame of each video, then measured the percentage of each pitch during which the entrepreneurs displayed each emotion. Finally, they compared these emotional displays with the ultimate success of the pitch on three measures: whether the entrepreneurs met their stated fundraising goal, the total amount raised, and the number of contributors.
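The per-frame coding described above reduces to a simple frequency computation. Here is a minimal sketch in Python with made-up frame labels; the data, label names, and function are illustrative only, not the study's actual software or coding scheme:

```python
from collections import Counter

# Hypothetical per-frame emotion labels produced by facial-analysis
# software for one pitch video (illustrative numbers only).
frames = (["happiness"] * 420 + ["anger"] * 90 +
          ["fear"] * 60 + ["sadness"] * 15 + ["neutral"] * 315)

def emotion_shares(frame_labels):
    """Fraction of the pitch during which each expression was displayed."""
    counts = Counter(frame_labels)
    total = len(frame_labels)
    return {emotion: counts[emotion] / total
            for emotion in ("happiness", "anger", "fear", "sadness", "neutral")}

shares = emotion_shares(frames)
print(round(shares["anger"], 2))  # anger shown in 10% of this video's frames
```

Percentages like these, computed per video, are the kind of predictor that can then be related to funding outcomes across the sample.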

The study showed that those who used a variety of three emotional expressions--happiness, anger and fear--had the most fundraising success. The only emotion that had a negative effect on funding was sadness.

In a qualitative analysis, the authors found that many successful entrepreneurs used different emotional expressions at different points in the pitch. For instance, many entrepreneurs would start their pitch in a happy way, introducing themselves and talking about how proud they are of their team. They would then use anger to talk about their determination or the problem they were trying to address. When the entrepreneurs talked about obstacles, the risk they were taking or need for resources, they would often use facial expressions conveying fear.

In contrast, people who expressed very little emotion on their faces did not do well in garnering funds even if the words they were saying were compelling. Entrepreneurs who stuck to just one emotion also did not do as well.

Yet there were limits to the use of emotion in pitches, even if it was varied.

"There's a Goldilocks point where you can have too little or too much," said Warnick. "Expressing happiness, anger and fear all promote funding up to a point. But if you express any one of these emotions too frequently, you're hurting your funding prospects."

This study only looked at the use of facial expressions. Warnick suggested that further research might look into other channels of expression or at the connection between what people actually feel and what they express, as the two don't always align.

"Some people might be very expressive, where what they're feeling on the inside shows quite readily to other people," said Warnick. "Others might be engaging in impression management, in other words, faking it."

Credit: 
Washington State University

Meteorite amino acids derived from substrates more widely available in the early solar system

image: Asteroid Ryugu

Image: 
JAXA

Scientists have recreated the reaction by which carbon isotopes made their way into different organic compounds, challenging the notion that organic compounds, such as amino acids, were formed by isotopically enriched substrates. Their discovery suggests that the building blocks of life in meteorites were derived from widely available substrates in the early solar system.

Their findings were published online in Science Advances on April 28, 2021.

Carbonaceous meteorites contain the building blocks of life, including amino acids, sugars, and nucleobases. These meteorites are potential providers of these molecules to the prebiotic Earth.

The small organic molecules found in meteorites are generally enriched in a heavy carbon isotope (13C). However, the most abundant organic matter in meteorites is depleted in 13C. This difference has long puzzled scientists. It has been thought that the small molecules came from 13C-enriched substances found in the extremely cold outer solar system and/or the solar nebula.

However, a team of researchers from Tohoku University and Hokkaido University has presented a new hypothesis. They argue that formose-type reactions--the formation of sugars from formaldehyde--create sizeable differences in 13C concentration between small and large organic molecules.

Recreating the formose-type reaction in the lab, the researchers found that the carbon isotope signatures of meteorite organics can be produced by the formose-type reaction even in a hot aqueous solution.

Their findings suggest that the organic compounds were formed without the use of isotopically enriched substrates from the outer solar system; rather, their formation may have taken place using substrates commonly present in the early solar system.

"The discrepancy in carbon isotopic composition between the small organic compounds and large insoluble organic matter is one of the most mysterious characteristics of meteorite organic compounds," said Tohoku University's Yoshihiro Furukawa, lead-author of the study. "However, the behavior of 13C in this reaction solves the puzzle completely."

"Even though the compounds were synthesized 4.6 billion years ago, the isotope compositions tell us the process of synthetic reaction," added co-author Yoshito Chikaraishi, from Hokkaido University.

Looking ahead, the research group is planning to investigate the impact of the formose-type reaction on nitrogen and carbon isotope characteristics in a number of meteorite organics and carbonates.

Credit: 
Tohoku University

Mammals evolved big brains after big disasters

image: Major extinction events have given rise to present-day differences in relative brain size.

Image: 
Javier Lazaro (http://www.lazaroillustration.com/)

Scientists from Stony Brook University and the Max Planck Institute of Animal Behavior have pieced together a timeline of how brain and body size evolved in mammals over the last 150 million years. The international team of 22 scientists, including biologists, evolutionary statisticians, and anthropologists, compared the brain mass of 1400 living and extinct mammals. For the 107 fossils examined--among them ancient whales and the oldest Old World monkey skull ever found--they used endocranial volume data from skulls instead of brain mass data. The brain measurements were then analyzed along with body size to compare the scale of brain size to body size over deep evolutionary time.
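Brain-to-body comparisons of this kind are conventionally made on logarithmic scales, with a species' relative brain size taken as its deviation from the fitted brain-body line. A toy sketch with invented numbers follows; the species values and helper functions are illustrative only, and the study's phylogenetic methods are far more sophisticated:

```python
import math

# Hypothetical (body mass kg, brain mass g) values for a few mammals --
# illustrative numbers, not the study's dataset of 1,400 species.
species = {
    "mouse":    (0.02, 0.4),
    "cat":      (4.0, 30.0),
    "human":    (65.0, 1350.0),
    "horse":    (400.0, 600.0),
    "elephant": (5000.0, 4800.0),
}

def allometric_fit(data):
    """Ordinary least-squares fit of log10(brain) = a + b * log10(body)."""
    xs = [math.log10(body) for body, _ in data.values()]
    ys = [math.log10(brain) for _, brain in data.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def relative_brain_size(data, name):
    """Residual from the allometric line: positive means a bigger
    brain than expected for that body size."""
    a, b = allometric_fit(data)
    body, brain = data[name]
    return math.log10(brain) - (a + b * math.log10(body))

print(relative_brain_size(species, "human") > 0)   # humans sit above the line
print(relative_brain_size(species, "horse") < 0)   # horses sit below it
```

The study's central point is that two species can reach the same residual via very different trajectories, e.g. by shrinking the body rather than growing the brain.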

The findings, published in Science Advances, showed that brain size relative to body size--long considered an indicator of animal intelligence--has not followed a stable scale over evolutionary time. Famous "big-brained" humans, dolphins, and elephants, for example, attained their proportions in different ways. Elephants increased in body size, but surprisingly, even more in brain size. Dolphins, on the other hand, generally decreased their body size while increasing brain size. Great apes showed a wide variety of body sizes, with a general trend towards increases in brain and body size. In comparison, ancestral hominins, which represent the human line, showed a relative decrease in body size and increase in brain size compared to great apes.

The authors say that these complex patterns urge a re-evaluation of the deeply rooted paradigm that comparing brain size to body size for any species provides a measure of the species' intelligence. "At first sight, the importance of taking the evolutionary trajectory of body size into account may seem unimportant," says Jeroen Smaers, an evolutionary biologist at Stony Brook University and first author on the study. "After all, many of the big-brained mammals such as elephants, dolphins, and great apes also have a high brain-to-body size ratio. But this is not always the case. The California sea lion, for example, has a low relative brain size, which stands in contrast to its remarkable intelligence."

By taking into account evolutionary history, the current study reveals that the California sea lion attained a low brain-to-body size because of the strong selective pressures on body size, most likely because aquatic carnivorans diversified into a semi-aquatic niche. In other words, they have a low relative brain size because of selection on increased body size, not because of selection on decreased brain size.

"We've overturned a long-standing dogma that relative brain size can be equivocated with intelligence," says Kamran Safi, a research scientist at the Max Planck Institute of Animal Behavior and senior author on the study. "Sometimes, relatively big brains can be the end result of a gradual decrease in body size to suit a new habitat or way of moving--in other words, nothing to do with intelligence at all. Using relative brain size as a proxy for cognitive capacity must be set against an animal's evolutionary history and the nuances in the way brain and body have changed over the tree of life."

The study further showed that most changes in brain size occurred after two cataclysmic events in Earth's history: the mass extinction 66 million years ago and a climatic transition 23-33 million years ago.

After the mass extinction event at the end of the Cretaceous period, the researchers noticed a dramatic shift in brain-body scaling in lineages such as rodents, bats and carnivorans as animals radiated into the empty niches left by extinct dinosaurs. Roughly 30 million years later, a cooling climate in the Late Paleogene led to more profound changes, with seals, bears, whales, and primates all undergoing evolutionary shifts in their brain and body size.

"A big surprise was that much of the variation in relative brain size of mammals that live today can be explained by changes that their ancestral lineages underwent following these cataclysmic events," says Smaers. This includes evolution of the biggest mammalian brains, such as the dolphins, elephants, and great apes, which all evolved their extreme proportions after the climate change event 23-33 million years ago.

The authors conclude that efforts to truly capture the evolution of intelligence will require increased effort examining neuroanatomical features, such as brain regions known for higher cognitive processes. "Brain-to-body size is of course not independent of the evolution of intelligence," says Smaers. "But it may actually be more indicative of more general adaptions to large scale environmental pressures that go beyond intelligence."

Credit: 
Max-Planck-Gesellschaft

When does the green monster of jealousy wake up in people?

Adult heterosexual women and men are often jealous about completely different threats to their relationship. These differences seem to establish themselves far sooner than people need them. The finding surprised researchers at the Norwegian University of Science and Technology (NTNU) who studied the topic.

"You don't really need this jealousy until you need to protect yourself from being deceived," says Professor Leif Edward Ottesen Kennair at NTNU's Department of Psychology.

At its worst, romantic jealousy can be a horrible experience. But jealousy associated with a partner's infidelity has clearly been an evolutionary advantage.

"Jealousy is activated when a relationship we care about is threatened. The function is probably to minimize threats to this relationship. These threats have historically been somewhat different for men and women," says Per Helge H. Larsen, a master's student in the Department of Psychology at NTNU.

Evolutionary psychology can help explain the gender differences in this kind of jealousy.

The differences in sexual jealousy between the sexes, simply put, revolve around securing one's own offspring. Previous research has already established that:

Men tend to react more negatively when their partner has had sex with someone else than when she falls in love with or spends time with someone without having sex.

It's easy to explain: if the woman is sexually unfaithful, it ultimately means that her partner might need to use his own resources to raise another man's children.

Women, on the other hand, are always sure that the child is theirs. They tend to react more negatively to their partner having feelings for another woman than to his having had sex with her.

This response can also be explained. Historically, she could suffer a loss of resources and status for herself and their child if he left her for someone else.

We should note that these differences have been with us since long before birth control pills and the possibility for women to feed and raise their children alone. A few generations aren't enough to change either biology or culture very much.

The gender differences that lead to jealousy are easy to explain. They are evolutionary adaptations that get passed on to the next generation - but why does this gender difference arise so early?

Precisely this question presents theoretical challenges for the researchers, because jealousy has historically not been risk-free, either.

"Jealousy is potentially a costly reaction, perhaps especially for the man before he is physically strong enough to defend himself and his partner against rivals, and before he would normally have had the opportunity to have a steady partner through marriage," says Kennair.

Throughout history, jealous boys and men have run great risks by expressing their jealousy. Being ostracized, injured or killed while competing for women is an all-too-familiar fate.

"Throughout evolutionary history, the usefulness of man's form of jealousy would probably have been reserved for men of high status who had a great ability to defend themselves," says Kennair.

So why be jealous before you're able to take care of your partner?

"We knew that this difference becomes established in the early 20s, but through our study we've shown that it appears even earlier," says Larsen.

The research group at NTNU wanted to find out when these gender differences around jealousy, sex and emotions begin.

To this end, they studied 1,266 pupils aged 16 to 19 in upper secondary school. However, the participants turned out not to be young enough for the researchers to answer the question of when the gender differences first develop.

"The gender difference was stable and clear throughout the age range of the study. This is pretty startling," says Professor Mons Bendixen in the Department of Psychology.

"The gender difference wasn't affected by whether the teens currently had a boyfriend or girlfriend, or whether they had made their sexual debut. The difference thus doesn't seem to have anything to do with experience," Bendixen adds.

We can imagine, and perhaps assume, that the gender differences in jealousy responses arise even earlier than age 16. But we don't know that for sure yet. To confirm it, we need to study even younger boys and girls.

"It's also unclear how young study participants can be to research this in a meaningful way," says Kennair.

Distinguishing between sexual jealousy and other types of jealousy can quickly become meaningless for the very youngest among us.

In one way or another, the benefits of this early, gender-specific sexual jealousy must have outweighed its dangers.

"It could be that the early development of sexual jealousy is simply preparing us for adulthood, and that it has no other function at a younger age."

But Kennair emphasized that jealousy is a dangerous feeling. Young men could put themselves in danger by experiencing this feeling before it was appropriate and they were physically strong enough to defend the relationship.

But the researchers are clear that this idea is still speculation.

"We need further research and theory development on the basis of these findings," Kennair said.

Credit: 
Norwegian University of Science and Technology

More than 25% of infants not getting common childhood vaccinations, study finds

image: Rajesh Balkrishnan, PhD, of the University of Virginia School of Medicine, and his colleagues warn that failure to complete the course of common childhood vaccinations leaves children at risk. "These findings highlight that significant disparities still exist in protecting infants from preventable diseases in the United States," he said.

Image: 
Dan Addison | UVA Communications

More than a quarter of American infants in 2018 had not received common childhood vaccines that protect them from illnesses such as polio, tetanus, measles, mumps and chicken pox, new research from the University of Virginia School of Medicine reveals.

Only 72.8% of infants aged 19-35 months had received the full series of the seven recommended vaccines, falling far short of the federal government's goal of 90%. Those less likely to complete the vaccine series include African-American infants, infants born to mothers with less than a high-school education and infants in families with incomes below the federal poverty line.

The researchers warn that failure to complete the vaccine series leaves children at increased risk of infection, illness and death. It also reduces the herd immunity of the entire population, allowing diseases to spread more easily.

"These findings highlight that significant disparities still exist in protecting infants from preventable diseases in the United States," said researcher Rajesh Balkrishnan, PhD, of UVA's Department of Public Health Sciences. "The low seven-vaccine series rates in low-income families are disheartening, especially with federal programs such as Vaccine for Children, which provides coverage for their service."

Trends in Childhood Vaccination

Some good news: There was a 30% increase in the overall number of infants getting the full vaccine series during 2009-2018, the 10-year period the researchers examined.

However, disparities in vaccine uptake grew between low-income families and higher-income families in that time. In 2009, families below the federal poverty line were 9% less likely to get the full vaccine series than families with annual income above $75,000. In 2018, low-income families were 37% less likely to complete the vaccine series.

The researchers say the lower rate among low-income families is especially disheartening considering the availability of federal programs such as Vaccines for Children, which provides free vaccines for uninsured, underinsured and Medicaid-eligible children.

"Free vaccination coupled with no physician administration fees, linked with potential programs that are frequently accessed by low-income families, could be a potential solution to increase immunization rates," Balkrishnan said "The role of healthcare professionals such as pharmacists could also be expanded to provide these services cost effectively."

The study found that mothers who had not completed high school were almost 27% less likely to have their infants fully vaccinated than mothers with a college education. That disparity had increased sharply from a previous study evaluating 1995-2003, which found that mothers with less than a high-school education were 7.8% less likely to complete the vaccine series.

Among African-Americans, completion of the vaccine series was significantly lower than in both whites and Hispanics. The researchers call this disparity "unacceptable" and say cost-effective interventions are needed to increase immunization rates and address vaccine hesitancy.

"These findings are particularly important in the context of the current COVID pandemic," Balkrishnan said. "Particular attention needs to be paid to vulnerable populations in ensuring the availability and access to important life-saving vaccines."

Credit: 
University of Virginia Health System

How does the brain flexibly process complex information?

Human decision-making depends on the flexible processing of complex information, but how the brain may adapt processing to momentary task demands has remained unclear. In a new article published in the journal Nature Communications, researchers from the Max Planck Institute for Human Development have now outlined several crucial neural processes revealing that our brain networks may rapidly and flexibly shift from a rhythmic to a "noisy" state when the need to process information increases.

Driving a car, deliberating over different financial options, or even pondering different life paths requires us to process an overwhelming amount of information. But not all decisions pose equal demands. In some situations, decisions are easier because we already know which pieces of information are relevant. In other situations, uncertainty about which information is relevant for our decision requires us to get a broader picture of all available information sources. The mechanisms by which the brain flexibly adapts information processing in such situations were previously unknown.

To reveal these mechanisms, researchers from the Lifespan Neural Dynamics Group (LNDG) at the Max Planck Institute for Human Development and the Max Planck UCL Centre for Computational Psychiatry and Ageing Research designed a visual task. Participants were asked to view a moving cloud of small squares that differed from each other along four visual dimensions: color, size, brightness, and movement direction. Participants were then asked a question about one of the four dimensions, for example: "Were more squares moving to the left or the right?" Before the squares appeared, the study authors manipulated "uncertainty" by informing participants which feature(s) they could be asked about; the more features that were relevant, the more uncertain participants were expected to become about which features to focus on. Throughout the task, brain activity was measured using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI).

First, the authors found that when participants were more uncertain about the relevant feature in the upcoming choice, their EEG signals shifted from a rhythmic mode (present when participants could focus on a single feature) to a more arrhythmic, "noisy" mode. "Brain rhythms may be particularly useful when we need to select relevant over irrelevant inputs, while increased neural 'noise' could make our brains more receptive to multiple sources of information. Our results suggest that the ability to shift back and forth between these rhythmic and 'noisy' states may enable flexible information processing in the human brain," says Julian Q. Kosciessa, LNDG post-doc and the article's first author.
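The rhythmic-versus-"noisy" distinction can be illustrated with a toy signal analysis. The sketch below is only a crude stand-in for the study's actual EEG measures: it flags a rhythmic state by strong autocorrelation at the oscillation's period, which an arrhythmic noise signal lacks. The signals, rates, and thresholds are all invented for illustration:

```python
import math
import random

random.seed(0)
FS = 250        # sampling rate in Hz (illustrative)
N = FS * 4      # 4 seconds of signal

def rhythmic_signal():
    """A 10 Hz oscillation plus mild noise -- a 'rhythmic' state."""
    return [math.sin(2 * math.pi * 10 * t / FS) + 0.3 * random.gauss(0, 1)
            for t in range(N)]

def noisy_signal():
    """Pure arrhythmic noise -- a 'noisy' state."""
    return [random.gauss(0, 1) for _ in range(N)]

def autocorr(x, lag):
    """Autocorrelation at a given lag; near 1 when the lag matches a rhythm."""
    n = len(x) - lag
    mean = sum(x) / len(x)
    num = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

lag = FS // 10  # one 10 Hz cycle = 25 samples
print(autocorr(rhythmic_signal(), lag) > 0.5)   # strong rhythm detected
print(abs(autocorr(noisy_signal(), lag)) < 0.2) # no rhythm at that period
```

Real analyses separate oscillatory from aperiodic ("1/f-like") EEG components with far more care, but the underlying contrast is the same.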

Additionally, the authors found that the extent to which participants shifted from a rhythmic to a noisy mode in their EEG signals was predominantly coupled with increased fMRI activity in the thalamus, a deep brain structure largely inaccessible by EEG. The thalamus is often thought of primarily as an interface for sensory and motor signals, while its potential role in flexibility has remained elusive. The findings of the study may thus have broad implications for our current understanding of the brain structures required for us to adapt to an ever-changing world. "When neuroscientists think about how the brain enables behavioral flexibility, we often focus exclusively on networks in the cortex, while the thalamus is traditionally considered a simple relay for sensorimotor information. Instead, our results argue that the thalamus may support neural dynamics in general and could optimize brain states according to environmental demands, allowing us to make better decisions," says Douglas Garrett, senior author of the study and LNDG group leader.

In the next phases of their research, the authors plan to investigate the underlying neurochemical bases of how the thalamus permits shifts in neural dynamics, and whether such shifts can be "tuned" by stimulating the thalamus using weak electrical currents.

Credit: 
Max Planck Institute for Human Development

Risk factors for a severe course of COVID-19 in people with diabetes

People with diabetes are at increased risk of developing a severe course of COVID-19 compared to people without diabetes. The question to be answered is whether all people with diabetes have an increased risk of severe COVID-19, or whether specific risk factors can also be identified within this group. A new study by DZD researchers has now focused precisely on this question and gained relevant insights.

The COVID-19 pandemic poses unprecedented challenges to science and the health sector. While some people infected with SARS-CoV-2 hardly notice the disease, in others it is much more severe and sometimes fatal. So far, knowledge about what determines the course of COVID-19 remains limited. However, diabetes has increasingly emerged as one of the risk factors determining the severity of the disease. Several studies on diabetes and SARS-CoV-2 have already observed an approximately two- to threefold increase in mortality due to COVID-19 in people with diabetes compared to people without diabetes. This makes it all the more important to conduct studies that examine in more detail the risk factors for severe COVID-19 among people with diabetes.

A new study from the German Diabetes Center, a partner of the DZD, led by Dr. Sabrina Schlesinger, head of the junior research group Systematic Reviews at the Institute for Biometrics and Epidemiology, therefore examined the risk phenotypes of diabetes and their possible association with the severity of COVID-19. In their meta-analysis, the researchers combined the results from 22 published studies, covering a total of more than 17,500 people with diabetes and confirmed SARS-CoV-2 infection. For individuals with diabetes and SARS-CoV-2 infection, male sex, older age (>65 years), high blood glucose levels (at the time of hospital admission), chronic insulin treatment, and existing concomitant diseases (such as cardiovascular disease or kidney disease) were identified as risk factors for a severe COVID-19 course. On the other hand, the results showed that chronic metformin treatment was associated with a reduced risk of a severe course of COVID-19.
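The pooling step of a meta-analysis like this one is typically done by inverse-variance weighting of the individual studies' effect estimates on the log scale. A minimal fixed-effect sketch with invented odds ratios follows; the numbers are illustrative only, not the actual study data, and published meta-analyses usually also use random-effects models and heterogeneity statistics:

```python
import math

# Hypothetical per-study odds ratios with 95% confidence intervals
# (OR, lower, upper) for one risk factor -- invented for illustration.
studies = [
    (2.1, 1.4, 3.2),
    (1.8, 1.1, 2.9),
    (2.6, 1.5, 4.5),
]

def pooled_or(studies):
    """Fixed-effect inverse-variance pooling on the log-OR scale."""
    weights, weighted = 0.0, 0.0
    for or_, lo, hi in studies:
        log_or = math.log(or_)
        # back out the standard error from the 95% CI width
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1 / se ** 2          # precision weight: tighter CI, more weight
        weights += w
        weighted += w * log_or
    return math.exp(weighted / weights)

print(round(pooled_or(studies), 2))
```

The pooled estimate lands between the individual studies' ORs, pulled toward the most precise ones.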

"This current systematic review and meta-analysis describes within the high-risk group, namely diabetes mellitus, those individuals with the highest risk of a severe COVID-19 course," said Professor Michael Roden, scientific director and board member of the German Diabetes Center. "These results will help to classify individuals with diabetes even better in order to improve their therapy and mitigate the course."

The risk factors identified in the study - i.e. older, usually male individuals with comorbidities of diabetes and chronic insulin treatment - can thus be seen as indicators of diabetes severity or overall poor health. "However, some results, especially on diabetes-specific factors such as the type or duration of diabetes and other treatments, are still imprecisely assessed, and the evidence is weak. To strengthen it, further primary studies are needed that examine these specific risk factors and account for other relevant influencing factors in their analysis," said Dr. Schlesinger. Her research team is therefore already working on the next version of this review: "This review presents the current state of the research and will be updated regularly as new findings on this topic become available," said Dr. Schlesinger.

Credit: 
Deutsches Zentrum fuer Diabetesforschung DZD

GeneSight Mental Health Monitor shows misunderstanding of depression and treatment

video: A new GeneSight Mental Health Monitor national survey finds 83 percent of those diagnosed with depression say life would be easier if others could understand what they're going through. Yet, most reported they were more likely to hear statements that demonstrate a lack of understanding and support for what they are experiencing.

Image: 
GeneSight Mental Health Monitor

In a new nationwide poll, the GeneSight® Mental Health Monitor found that 83% of people with depression agree that life would be easier if others could understand their depression. Yet most people who have not experienced depression may not understand its challenges, including those of treatment.

"Depression is one of the most misunderstood disorders. When people misinterpret patients with depression as 'lazy' or 'dramatic,' they are vastly underestimating and misunderstanding the debilitating symptoms of major depressive disorder," said Mark Pollack, M.D., chief medical officer for the GeneSight test at Myriad Genetics. "That is why we are working with the Depression and Bipolar Support Alliance, so that loved ones can offer more empathetic support and people with depression won't feel so alone."

For Mental Health Awareness Month (May), GeneSight and the Depression and Bipolar Support Alliance (DBSA) have partnered to raise awareness and understanding for how a person who has major depressive disorder feels, and why it can be so hard to seek treatment.

Lack of Understanding and Empathy about Depression

Three out of four people living with depression said they want support from their loved ones, including just listening or saying supportive things like "How can I help?" or "Do you want to talk about it?" Instead, nearly half of those with depression said they were more likely to hear statements like "You need to get over it/snap out of it" or "We all get sad sometimes."

"Depression is a serious but treatable medical condition that affects how a person feels, thinks, and acts. Though, typically characterized by feelings of sadness, depression symptoms may appear as irritability or apathy," said Michael Thase, M.D., professor of psychiatry, Perelman School of Medicine and the Corporal Michael J. Crescenz VA Medical Center, and DBSA scientific advisory board member. "We must work together - providers, patients, family and friends - to continue to reduce the impact of stigma. Misunderstanding the disorder may lead to people feeling embarrassed and/or unwilling to seek the treatment they need."

Nearly half of those either diagnosed with depression or concerned they may have depression say they feel ashamed or embarrassed when others find out they are suffering from depression, according to the GeneSight Mental Health Monitor.

Pandemic Prompts Search for New Treatment

More than half of those diagnosed with depression indicated in the poll that they started a new treatment since the start of the pandemic. Nevertheless, for some, starting a new depression medication doesn't guarantee success.

More than half of people diagnosed with depression said they have tried four or more depression medications in their lifetime, with nearly 1 in 4 respondents reporting they have tried six or more medications to try to find relief.

"I couldn't get out of bed to take care of my children, much less go to the doctor multiple times to try new medicines that 'might' help," said Amanda, a 25-year-old woman who was diagnosed with major depressive disorder. "The years of trial and error were so frustrating and discouraging. You feel like you are stuck living that way."

Those who indicated in the GeneSight Mental Health Monitor that they had experienced the trial-and-error process described the experience as:

"On a rollercoaster" (51%)

"I'm just waiting for the next side effect" (45%)

"Walking through a maze blindfolded" (44%)

"Playing a game of darts, only I'm the dartboard" (42%)

While 4 in 10 of those diagnosed with depression say they are not confident that their depression medications will work for them, 7 in 10 would feel "hopeful" if their doctor recommended genetic testing as part of their treatment plan.

Genetic testing, like the GeneSight Psychotropic test, analyzes how a patient's genes may affect their outcomes with medications commonly prescribed to treat depression, anxiety and other psychiatric conditions.

"With just a simple cheek swab, the GeneSight test provides your clinician with information about which medications may require dose adjustments, be less likely to work, or have an increased risk of side effects based on a patient's genetic makeup," said Dr. Pollack. "It's one of many tools in a physician's toolbox that may help get patients on the road to feeling more like themselves again."

Conquering the Depression Disconnect

While 7 in 10 adults said that they are more conscious about their own or others' mental health challenges than they were before the pandemic began, less than half of adults are very confident they can recognize if a loved one is suffering from depression, according to the GeneSight Mental Health Monitor.

For a better understanding of depression and treatment, visit KnowMentalHealth.com. For more information on how genetic testing can help inform clinicians in depression treatment, please visit genesight.com.

Credit: 
MediaSource

Espresso, latte or decaf? Genetic code drives your desire for coffee

Whether you hanker for a hard hit of caffeine or favour the frothiness of a milky cappuccino, your regular coffee order could be telling you more about your cardio health than you think.

In a world-first study of 390,435 people, University of South Australia researchers found causal genetic evidence that cardio health - as reflected in blood pressure and heart rate - influences coffee consumption.

Conducted in partnership with the SAHMRI, the team found that people with high blood pressure, angina, and arrhythmia were more likely to drink less coffee, drink decaffeinated coffee, or avoid coffee altogether compared to those without such symptoms, and that this was based on genetics.

Lead researcher and Director of UniSA's Australian Centre for Precision Health, Professor Elina Hyppönen says it's a positive finding that shows our genetics actively regulate the amount of coffee we drink and protect us from consuming too much.

"People drink coffee for all sorts of reasons - as a pick me up when they're feeling tired, because it tastes good, or simply because it's part of their daily routine," Prof Hyppönen says.

"But what we don't recognise is that people subconsciously self-regulate safe levels of caffeine based on how high their blood pressure is, and this is likely a result of a protective genetic a mechanism.

"What this means is that someone who drinks a lot of coffee is likely more genetically tolerant of caffeine, as compared to someone who drinks very little.

"Conversely, a non-coffee drinker, or someone who drinks decaffeinated coffee, is more likely prone to the adverse effects of caffeine, and more susceptible to high blood pressure."

In Australia, one in four men and one in five women suffer from high blood pressure, a condition that is a risk factor for many chronic health conditions including stroke, heart failure and chronic kidney disease.

Using data from the UK Biobank, researchers examined the habitual coffee consumption of 390,435 people, comparing this with baseline levels of systolic and diastolic blood pressure, and baseline heart rate. Causal relationships were determined via Mendelian randomization.
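The core idea of Mendelian randomization is to use a genetic variant as an instrument: because genotypes are assigned at conception, the variant's effect on the outcome, scaled by its effect on the exposure, estimates a causal effect. The sketch below is a deliberately minimal illustration with synthetic data and a simple Wald-ratio estimator; the variant, effect sizes, and variable names are all invented for illustration and do not reproduce the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: a variant g raises blood pressure (exposure),
# and blood pressure in turn lowers coffee intake (outcome).
g = rng.binomial(2, 0.3, n)                    # genotype: 0, 1, or 2 alleles
bp = 120 + 2.0 * g + rng.normal(0, 10, n)      # systolic blood pressure
coffee = 3 - 0.05 * bp + rng.normal(0, 1, n)   # cups of coffee per day

# Wald ratio: (effect of g on outcome) / (effect of g on exposure)
beta_g_exposure = np.polyfit(g, bp, 1)[0]
beta_g_outcome = np.polyfit(g, coffee, 1)[0]
causal_effect = beta_g_outcome / beta_g_exposure
print(round(causal_effect, 3))  # close to the true effect of -0.05
```

Real MR analyses combine many variants and test instrument validity; this sketch only shows why the ratio of the two regression slopes recovers the simulated causal effect.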

Prof Hyppönen says how much coffee we drink is likely to be an indicator of our cardio health.

"Whether we drink a lot of coffee, a little, or avoid caffeine altogether, this study shows that genetics are guiding our decisions to protect our cardio health," Prof Hyppönen says.

"If your body is telling you not to drink that extra cup of coffee, there's likely a reason why. Listen to your body, it's more in tune with what your health than you may think."

Credit: 
University of South Australia

Novel imaging method to visualize respiratory activity of 3D tissue models

image: Spheroids were settled on the gold electrode with a luminescent solution. When the potential steps were applied, the spheroids were brightly visualized. The dark area around the spheroids indicates a decrease in luminescence attributed to low O2 concentration, caused by the respiratory activity of the spheroids.

Image: 
Kaoru Hiramoto, et al.

Cells breathe, to an extent: they exchange gases, take in energy sources from the environment, and process them. Now, researchers from Tohoku University in Japan have shone a light on the process in a new way.

Their demonstrated visualization method in model systems was made available online on March 12th in Biosensors and Bioelectronics, ahead of the June print edition.

The researchers used spheroids - cultured cells within a close-to-natural environment - to mimic a biological tissue using mesenchymal stem cells (MSCs). Due to MSCs' ability to self-renew and differentiate into various tissues, they are of significant interest for use in regenerative medicine and for testing therapeutics in different tissue models.

They settled the spheroids on a gold electrode dosed in luminescent solution. The researchers applied a stepped electric potential - the energy needed to drive the reaction - to the setup, oxidizing the solution to produce luminescence whose intensity depends on the local oxygen concentration. The spheroids glowed brightly, surrounded by a ring of darkness where oxygen concentration was low due to gas exchange. They call the process potential step-based electrochemiluminescence (ECL) imaging.

"As living spheroids consume oxygen to produce energy, the respiratory activity was elegantly visualized by the distribution of luminescence around the spheroids," said Kaoru Hiramoto of the JSPS Research Fellowship for Young Scientists, who developed the ECL imaging system.

The researchers used a digital camera to visualize multiple spheroids in a single shot, demonstrating both the high spatial resolution of their system and its low background noise.

"The system offers high throughput analysis of spheroids, in addition to highly improving resolution of the images compared to conventional electrode array devices," said co-corresponding authors Kosuke Ino and Hitoshi Shiku, assistant professor and professor in the Graduate School of Engineering, Tohoku University. "The system offers novel insights into electrochemical devices and imaging systems for cell spheroids."

ECL imaging is still in the development phase, the researchers said. They plan to improve the sensitivity and selectivity, as well as analyze other data that might be collected in the system, such as cellular metabolites attached to the electrode. They also plan to use the system to investigate more complicated biological models, as well as patient-derived cell spheroids.

"To the best of our knowledge, this is the first attempt to visualize the respiratory activity of spheroids by direct conversion of oxygen concentration into ECL imaging," Ino said. "The features of the proposed system - high spatial resolution with the ability for simultaneous imaging of multiple spheroids - are promising for transplantation research and drug screening utilizing cell spheroids."

Credit: 
Tohoku University

How behavioral rhythms are fine-tuned in the brain

image: A schema summarizing the effects caused by the deficiency of GABAergic transmission from vasopressin neurons on circadian rhythms at multiple levels. Without GABA release from vasopressin neurons, the spatiotemporal pattern of GABAergic transmission alters within the SCN. Such an alteration does not significantly disturb the spatiotemporal organization of molecular clocks measured with clock gene expression and intracellular calcium, but it does cause an aberrant bimodal pattern of the SCN firing (electrical activity) rhythm that may lead to the increased interval between the morning and evening locomotor activities. Thus, GABAergic transmission of vasopressin neurons regulates the SCN neuronal activity rhythm to modulate the time at which SCN molecular clocks enable circadian behavior.

Image: 
Kanazawa University

Our bodies and behaviors often seem to have rhythms of their own. Why do we go to the bathroom at the same time every day? Why do we feel off if we can't go to sleep at the right time? Circadian rhythms are a behind-the-scenes force that shape many of our behaviors and our health. Michihiro Mieda and his team at Kanazawa University in Japan are researching how the brain's circadian rhythm control center regulates behavior.

Termed the suprachiasmatic nucleus, or SCN, the control center contains many types of neurons that transmit signals using the molecule GABA, but little is known about how each type contributes to our bodily rhythms. In their newest study, the researchers focused on GABA neurons that produce arginine vasopressin, a hormone that regulates kidney function and blood pressure in the body, and which the team recently showed is also involved in regulating the period of rhythms produced by the SCN in the brain.

To examine the function of these neurons, and only these neurons, the researchers first created mice in which a gene needed for GABA signaling between neurons was deleted only in vasopressin-producing SCN neurons. "We removed a gene that codes for a protein that allows GABA to be packaged before it is sent to other neurons," explains Mieda. "Without packaging, none of the vasopressin neurons could send out any GABA signals."

This means that these neurons could no longer communicate with the rest of the rhythm control center using GABA. On the surface, the results were simple. The mice showed longer periods of activity, beginning activity earlier and ending activity later than control mice. So, lack of the packaging gene in the neurons disrupted the molecular clock signal, right? In fact, the reality was not so simple. Closer examination showed that the molecular clock progresses correctly. So, what was happening?

The researchers used calcium imaging to examine the clock rhythms within the vasopressin neurons. They found that while the rhythm of activity matched the timing of behavior in control mice, this relationship was disturbed in the mice whose GABA transmission from the vasopressin neurons was missing. In contrast, the rhythm of SCN output, i.e. SCN neuronal electrical activity, in the modified mice had the same irregular rhythm as their behavior. "Our study shows that GABA signaling from vasopressin neurons in the suprachiasmatic nucleus helps fix behavioral timing within the constraints of the molecular clock," says Mieda.

Credit: 
Kanazawa University

New device reduces hemostasis time following catheterization and improves efficiency

WASHINGTON, D.C. (April 28, 2021) - A new study reveals the use of a potassium ferrate hemostatic patch (PFHP) reduces the time to hemostasis for patients receiving cardiac catheterization. The findings indicate a faster approach to removing the compression band used during the procedure, without compromising safety. Positive results of the STAT2 trial follow an initial pilot study and are being presented as late-breaking clinical science at the Society for Cardiovascular Angiography and Interventions (SCAI) 2021 Virtual Scientific Sessions.

Cardiac catheterization is a procedure performed to evaluate the heart or arteries for diagnosis and intervention. It is increasingly performed using a transradial approach, in which a catheter is inserted through the radial artery in the arm and guided to the heart. A compression device called a TR band (TRB) is used to close the hole in the wrist made during catheterization. Standard protocols require the band to be left on for at least two hours following the procedure.

The study evaluated the use of the StatSeal device patch compared to a TRB alone in order to reduce time to hemostasis after transradial access. The study enrolled 443 patients across three centers, including 27.5% receiving percutaneous coronary intervention (PCI). Patients were randomized 1:1 to either the TRB alone or the TRB in addition to the potassium ferrate hemostatic patch. Both groups had complete TRB deflation attempted at 60 minutes post-procedure. Findings demonstrate that adjunctive use of the patch made TRB deflation safer and faster and reduced rebleeding. In a prior pilot study, discharge times were reduced with the use of StatSeal.

"By bringing observation times down from two hours to one, the use of the hemostatic patch has the potential to change practice because we can move toward same-day discharge protocols for cardiac catherization patients," said Arnold H. Seto, MD, MPA, FSCAI, Long Beach VA Health Care System. "We would be able to shift from long observation times and more frequently tell a patient, 'you are going home today.' This is really important for both the clinician and the patient."

Results showed the time to complete TRB deflation was shorter with the PFHP compared to TRB alone (65.9 ± 14.1 min vs. 112.8 ± 56.3 min, P
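The reported means and standard deviations are enough to reproduce a comparison of the two arms with Welch's t-test. The sketch below assumes the 443 patients split roughly 1:1 (about 221 and 222 per arm), since exact group sizes are not given here, so it is an approximation rather than the trial's actual analysis.

```python
from scipy.stats import ttest_ind_from_stats

# Reported time to complete TRB deflation (minutes); group sizes
# are assumed from the 1:1 randomization of 443 patients.
t_stat, p_value = ttest_ind_from_stats(
    mean1=65.9, std1=14.1, nobs1=221,    # TRB + potassium ferrate patch
    mean2=112.8, std2=56.3, nobs2=222,   # TRB alone
    equal_var=False,                      # Welch's t-test
)
print(f"t = {t_stat:.1f}, p = {p_value:.2g}")
```

With a difference of nearly 47 minutes against standard deviations of this size, the test is highly significant regardless of the exact per-arm counts.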

"From an operator stand point, these findings are key to improving cath lab throughputs," said Jordan G. Safirstein, MD, FSCAI, Morristown Medical Center. "Now we know that we can safely discharge patients quicker than we have before, which not only improves efficiencies but is also beneficial for the quality of life for the patient. We hope our results generate positive health care benefits so more patients can be treated with life-saving cardiac solutions."

The authors highlighted the benefits for PCI patients receiving stents, who typically spend four to six hours under observation. Their findings challenge this standard of care and indicate these patients may be able to safely reach discharge sooner, a trend consistent with the recent SCAI document "Length of stay following percutaneous coronary intervention." Further, authors from Morristown Medical Center note they are taking initiatives in their radial lounge to base discharge on whether safety concerns are addressed, adequate hemostasis is achieved, and the patient feels well, rather than on a fixed timeframe.

Credit: 
Society for Cardiovascular Angiography and Interventions


African Americans with coronary artery disease impacted by non-traditional risk factors

WASHINGTON, D.C. (April 28, 2021) - A retrospective analysis of risk factors for coronary artery disease (CAD) in young African American patients is being presented today at the Society for Cardiovascular Angiography and Interventions (SCAI) 2021 Virtual Scientific Sessions. The findings reveal this specific patient segment, African Americans under age 45, experiences greater CAD risk factors related to smoking, drug and alcohol abuse, and HIV, as well as mental health conditions including anxiety and depression.

CAD is the most common type of heart disease, with high blood pressure, obstructive sleep apnea and diabetes among traditional risk factors. African Americans are disproportionally impacted by heart disease, and are more likely to develop the chronic, progressive condition earlier in life. Despite this, the prevalence of and risk factors for CAD in a younger, African American patient population is understudied.

A retrospective analysis of the National Inpatient Sample was performed to identify all patients with CAD in 2017. A total of 139,657 African American patients with CAD were identified using International Classification of Diseases, Tenth Revision (ICD-10) codes and then classified into two groups based on age. Group 1 consisted of 7,093 African American patients aged 18-45 years and Group 2 consisted of 131,520 African American patients older than 45 years. Patient baseline characteristics and co-morbid conditions were recorded and analyzed.

Results showed African Americans aged 18-45 years who present with CAD have a lower incidence of traditional risk factors and a higher incidence of non-traditional risk factors. In the younger patient group (Group 1) there was a significantly higher prevalence of obesity [31.2% vs. 19.4%], drug abuse [17.8% vs. 6.7%], alcohol abuse [5.2% vs. 4.3%], smoking [49.8% vs. 46.6%], HIV [1.88% vs. 0.88%], end-stage renal disease [20.7% vs. 14.6%] and depression [13.8% vs. 10.4%] compared to patients over 45. There was no statistically significant difference between groups for hypertension, diabetes mellitus, congestive heart failure, obstructive sleep apnea or gender.
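Significance claims like these can be sanity-checked from the published group sizes and percentages alone. The sketch below back-calculates approximate counts for the obesity comparison and runs a chi-square test of independence; the counts are rounded from the reported percentages, so this only approximates the study's own statistics.

```python
from scipy.stats import chi2_contingency

# Group sizes and obesity prevalence reported in the analysis
n1, n2 = 7_093, 131_520   # ages 18-45 vs. over 45
p1, p2 = 0.312, 0.194     # obesity prevalence in each group

obese1, obese2 = round(n1 * p1), round(n2 * p2)
table = [[obese1, n1 - obese1],   # younger group: obese vs. not
         [obese2, n2 - obese2]]   # older group:   obese vs. not

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}")
```

With samples this large, even modest prevalence gaps yield vanishingly small p-values, which is consistent with the significant differences the authors report.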

"In our practice, we are seeing more African American patients come in with heart attacks caused by coronary artery disease at a younger age, causing major health and lifestyle implications," said Ahmad Awan, Cardiology Fellow, Howard University Hospital in Washington, D.C. "As we look at how to tailor prevention for a population already at high-risk for cardiovascular diseases, our data points to a need to look beyond the standard risk factors to help address the complex burden of disease and interventions needed for effective early prevention. Understanding the unique risk profile is a first step for more individualized patient interventions."

The authors state that the analysis is part of a larger study and that further well-powered randomized controlled trials are needed to validate these findings.

Credit: 
Society for Cardiovascular Angiography and Interventions

The state of China's climate in 2020: Warmer and wetter again

image: Automatic weather station near the Yangtze River in Nanjing, which flooded on 23 July 2020

Image: 
Bing Zhou

The National Climate Center (NCC) of China has just completed a report that gives an authoritative assessment of China's climate in 2020. It provides a summary of China's climate as well as the major weather and climate events that took place throughout the year. This is the third consecutive year that the NCC has published an annual national climate statement in Atmospheric and Oceanic Science Letters (AOSL).

"Against the background of global warming, extreme weather and climate events occur more frequently and have wide influence on society and economies. Last year, floods, droughts, typhoons, low-temperature freezing and snow disasters, and dust storms attacked China and caused severe losses," says Wei Li, Director of the Climate Services Division of the NCC.

According to the report, in 2020, China's climate was warm and wet on the whole, and disasters caused by rainstorms and flooding were more serious than those caused by drought. The mean air temperature in China was 0.7°C above normal, and the annual rainfall was 694.8 mm, which was 10.3% above normal.
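The rainfall anomaly implies a climatological baseline that can be back-calculated from the two reported figures, as a quick consistency check on the numbers:

```python
annual_rainfall_mm = 694.8   # observed 2020 rainfall
pct_above_normal = 10.3      # reported anomaly

# 694.8 mm is 10.3% above normal, so normal = observed / 1.103
normal_mm = annual_rainfall_mm / (1 + pct_above_normal / 100)
print(round(normal_mm, 1))  # ≈ 629.9 mm
```

A baseline of roughly 630 mm is consistent with the commonly cited long-term mean annual precipitation for China.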

In summer, southern China experienced its most severe flooding, driven by extreme heavy rainstorms, since 1998. Drought brought only slight impacts and losses in China. High temperatures arrived earlier than normal, reached extreme values, and lasted longer than normal over southern China in summer. The number of landfalling typhoons was lower than normal, yet three typhoons successively affected Northeast China from late August to early September, the first such occurrence since 1949. Cold-air processes had a wide influence and brought substantial drops in air temperature in local areas.

Compared with the average values of the past 10 years, the affected crop area and the numbers of deaths and missing persons in 2020 were significantly smaller, while direct economic losses were slightly larger.

Nonetheless, Li warns that the hazards of climate disasters are increasing: "The WMO announced in January 2021 that the global average air temperature in 2020 was 1.2°C ± 0.1°C above the pre-industrial level and one of the three warmest on record. China also experienced a serious heatwave in summer and the mean air temperature in China was warmer than normal. Disaster prevention and reduction remains the focus of society."

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences