Culture

The end of domestic wine in 17th century Japan

image: Japanese domestic winemaking, which began in 1627, is thought to have ended in the wake of the Hosokawa clan's transfer to the Higo Domain (modern-day Kumamoto Prefecture). The documents were studied by the Eisei Bunko Research Center

Image: 
Professor Tsuguharu Inaba

Researchers from Kumamoto University (Japan) have found an Edo period document that clearly indicates the Hosokawa clan, rulers of the Kokura Domain (modern-day Fukuoka Prefecture), completely stopped producing wine in 1632, the year before the shogunate ordered them to move to the Higo Domain (now Kumamoto Prefecture). The researchers believe that the discontinuation of wine production was directly related to this move and to wine's association with Christianity, a religion harshly suppressed in Japan at the time.

Previous analysis of historical documents revealed that the lord of the Hosokawa clan, Tadatoshi Hosokawa, ordered wine production from 1627 to 1630 for medicinal use. His vassals, who were experienced in various Western customs and technologies, from foods to watches, used black soybeans and wild grapes in their brewing process. Those documents are the earliest known proof of Japanese wine production.

Until now, no historical records regarding wine production after 1631 had been found. Previously, researchers understood there to be a four-year period of Japanese winemaking. Production was thought to be halted because it was a stereotypically Christian drink and making it could have been a dangerous prospect due to the shogunate's strict prohibition of Christianity during the Edo period.

The new document, from September 1632, was found in the Eisei Bunko Library's Hosokawa clan repository and is a clear order for one more batch of wine. A note written on the document by the magistrate dated October 3rd, 1632 (No. 10.7.13) is as follows.

[Original Japanese]

一、ぶだう酒御作せ可被成候間、がらミをとらセ上田太郎右衛門所へ遣可申旨、則太郎右衛門を以被仰出候事、

[Rough translation]

Taroemon Ueda has personally informed the magistrate's office that he received an order from the lord to have wild grapes collected and brought to him for wine production.

Taroemon Ueda was a Hosokawa clan vassal who had training in Western techniques and had been making wine since 1627. Later in the document, the magistrate wrote another note.

[Original Japanese]

がらミ、太郎右衛門へ渡候、

[Rough translation]

Wild grapes were provided to Taroemon.

The document does not say when wine production was completed. However, earlier documents revealed that Taroemon usually took about 10 days to finish making wine, so researchers believe that this batch was probably finished by mid-October 1632 at the latest. On January 18th of the following year, the shogunate ordered the Hosokawa clan to move from the Kokura Domain, where all of the wine was made, to the Higo Domain.

Historical documents related to wine production in the Higo domain have not been found. The researchers believe that the Hosokawa clan stopped making wine as a direct result of their move to a new domain and because wine was heavily associated with Christianity.

Soon after the move to the Higo Domain, the Hosokawa clan faced off with Western-influenced rebels, placing them on the front lines of the campaign against Christianity that culminated in the suppression of the "Shimabara-Amakusa Revolt" in 1637. The tightening prohibition of Christianity, the outbreak of Christian revolts, and their suppression brought the history of Japanese domestic wine in the 17th century to a close.

Credit: 
Kumamoto University

Acting quickly after heart attack symptoms start can be a heart saver

DALLAS, Jan. 14, 2021 -- The longer the time from the onset of heart attack symptoms to an artery-clearing percutaneous coronary intervention (PCI), the more damage to the heart muscle, according to new research published today in Circulation: Cardiovascular Interventions, an American Heart Association journal.

A heart attack happens about every 40 seconds in the U.S., and the most common heart attack is caused by a complete blockage in a coronary artery, called ST-elevation myocardial infarction (STEMI). STEMI patients are most often treated with PCI, also known as angioplasty with stent, in which a catheter with a deflated balloon is inserted into the narrowed heart artery. Subsequently, the balloon is inflated, which clears the obstruction and restores blood flow. A stent is then inserted to keep the artery open.

"We know the time to opening the blocked coronary artery with PCI in heart attack patients is an important indicator for how a patient does after their heart attack. There are two measures for this time. One is symptom-to-balloon time, which is before the patient arrives to the hospital after symptoms start, to when that patient has a PCI; second is door-to-balloon time, the time from hospital arrival to PCI," said study author Gregg W. Stone, M.D., director of academic affairs at Mount Sinai Heart Health System in New York City. "We focused on heart attack size, or damage, with both time measures and found symptom-to-balloon time was by far the more important."

Stone and colleagues analyzed data from 10 randomized controlled trials that enrolled more than 3,100 STEMI patients treated with PCI between 2002 and 2011. Patients' hearts were assessed within 3 to 12 days after PCI to measure the size of the heart attack, and some studies also included measures of ejection fraction (the percentage of blood the heart is able to pump with each contraction) and TIMI flow (a measure of blood flow in the coronary artery). All patients had clinical follow-up data for at least six months, with a median follow-up of 341 days after PCI.

The study found:

Symptom-to-balloon time was more strongly associated with heart attack size and patients' clinical health after heart attack than door-to-balloon time.

The median symptom-to-balloon time was 185 minutes. The median door-to-balloon time was 46 minutes.

Symptom-to-balloon time represented approximately 80% of the total time from symptom onset to treatment of the artery.

The size of the heart attack increased with longer symptom-to-balloon times, whereas longer door-to-balloon times were not notably related to heart attack size.

Older age, female sex, arterial hypertension, diabetes and left circumflex artery as the culprit vessel were associated with longer symptom-to-balloon time.

For every 60-minute delay in symptom-to-balloon time, the one-year rate of death or hospitalization for heart failure was increased by 11%. In contrast, there was no relationship between delays in door-to-balloon time and these clinical results.
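To make these figures concrete, here is a minimal Python sketch of the arithmetic behind the 80% share and the 11%-per-delay-hour finding. The 185- and 46-minute medians and the 11% increment come from the study; compounding the increment over multiple hours is our illustrative assumption, not the paper's statistical model.

```python
# Arithmetic sketch of the two headline numbers above.
# Medians and the 11%-per-60-min figure are from the study;
# compounding across hours is an illustrative assumption.

median_symptom_to_balloon_min = 185
median_door_to_balloon_min = 46

total_delay = median_symptom_to_balloon_min + median_door_to_balloon_min
share = median_symptom_to_balloon_min / total_delay
print(f"symptom-to-balloon share of total delay: {share:.0%}")  # ~80%

# Relative one-year rate of death or heart-failure hospitalization,
# assuming the 11% increase compounds per extra hour of delay.
for extra_hours in (1, 2, 3):
    print(f"+{extra_hours} h delay -> rate x{1.11 ** extra_hours:.2f}")
```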

"Health care teams have worked to reduce door-to-balloon times and are achieving excellent results with a median time of 46 minutes. While we shouldn't become complacent and relax our current standards of rapidly performing PCI as soon as possible after the patient reaches the hospital, this study suggests that major efforts to further shorten door-to-balloon times by 10 or 20 minutes might not translate to better PCI outcomes," Stone said. "Our analysis indicates the more important and meaningful focus should be to shorten the delays from symptom onset to arrival at hospitals that can perform PCI. We must emphasize efforts to increase public awareness of heart attack symptoms and shorten the time it takes for patients to access emergency care."

These findings are extremely important and particularly relevant right now, said American Heart Association president Mitchell S.V. Elkind, M.D., M.S., FAHA, FAAN, professor of neurology and epidemiology at Vagelos College of Physicians and Surgeons and attending neurologist at New York-Presbyterian/Columbia University Irving Medical Center.

"During the peaks of the COVID-19 pandemic, hospitals are reporting fewer people coming into the emergency room for heart attack and stroke symptoms - indicating people aren't calling 911, or they are delaying or avoiding critical care," Elkind said. "This concerns us because we know it's very unlikely that there are fewer heart attacks or strokes occurring. These new findings emphasize just how crucial it is to call 911 at the first sign of a heart attack or stroke - because getting quick treatment can be the difference between life and death. As we have been urging even during the COVID-19 pandemic, don't die of doubt. Call 911 as soon as possible."

Among the limitations of this analysis, detailed information about the intensity of chest pain or other heart attack signs and symptoms, or about the time from symptom onset to PCI, was not available from the clinical trials' data.

Credit: 
American Heart Association

Sexual harassment claims by less feminine women perceived as less credible

WASHINGTON -- Women who do not fit female stereotypes are less likely to be seen as victims of sexual harassment, and if they claim they were harassed, they are less likely to be believed, according to research published by the American Psychological Association.

"Sexual harassment is pervasive and causes significant harm, yet far too many women cannot access fairness, justice and legal protection, leaving them susceptible to further victimization and harm within the legal system," said Cheryl Kaiser, PhD, of the University of Washington and a co-author of the study published in the Journal of Personality and Social Psychology. "Our research found that a claim was deemed less credible and sexual harassment was perceived to be less psychologically harmful when it targeted a victim who was less attractive or did not act according to the stereotype of a typical woman."

Sexual harassment is a widespread social problem with a broad range of harmful consequences, including decreased engagement with and performance in work and school, worse mental and physical health, and increased economic instability, according to Kaiser. 

"Perceiving sexual harassment involves noticing a behavior that might qualify as harassment and linking that behavior to gender-based group membership," said co-author Bryn Bandt-Law, a doctoral student at the University of Washington. "We wanted to understand what happens when the victim does not look or act like a stereotypical member of that gender-based group."

In Western societies, stereotypical women tend to be perceived as attractive, thin, relatively young and dressing in a feminine way. Stereotypically feminine hobbies include shopping, yoga or watching romantic movies, rather than stereotypically masculine hobbies such as fishing, contact sports or watching violent action movies. 

The researchers conducted a series of 11 multi-method experiments, involving more than 4,000 total participants, designed to investigate how a victim's fit with the concept of a typical woman affected participants' views of sexual harassment, and the consequences of that mental association.

In five of the experiments, participants read scenarios in which women either did or did not experience sexual harassment. Participants then assessed the extent to which these women fit with the idealized image of women, either by drawing what they thought the woman might look like or selecting from a series of photos. Across all the experiments, participants perceived the targets of sexual harassment as more stereotypical than those who did not experience harassment.

In the next four experiments, participants were shown ambiguous sexual harassment scenarios, such as a boss inquiring about a woman's dating life. These scenarios were paired with descriptions or photos of women who were either stereotypical or not. The participants then rated the likelihood that the incident constituted sexual harassment. 

"We found that participants were less likely to label these ambiguous scenarios as sexual harassment when the targets were non-stereotypical women compared with stereotypical women, despite the fact that both stereotypical and non-stereotypical targets experienced the same incident," said Jin Goh, PhD, of Colby College and another author of the study.

The final two experiments found that sexual harassment claims were viewed as less credible and the harassment less likely to be recognized as psychologically harmful when the accuser adhered less to the female stereotype, even though the claims were identical.

"Our findings demonstrate that non-stereotypical women who are sexually harassed may be vulnerable to unjust and discriminatory treatment when they seek legal recourse," said Bandt-Law. "If women's nonconformity to feminine stereotypes biases perceptions of their credibility and harm caused by harassment, as our results suggest, it could prevent non-stereotypical women who are sexually harassed from receiving the civil rights protections afforded to them by law."  

Credit: 
American Psychological Association

Water and gender equality

image: Zambian woman watering the family garden from a newly installed household tap.

Image: 
James Winter

Water isn't just crucial for life, it's fundamental to increasing opportunities for women and girls in rural areas across the globe. A new Stanford study reveals how bringing piped water closer to remote households in Zambia dramatically improves the lives of women and girls, while also improving economic opportunities, food security and well-being for entire households. The research, recently published in Social Science & Medicine, could spur governments and NGOs to more carefully evaluate the costs and benefits of piped water as an alternative to less accessible communal water sources.

"Switching from the village borehole to piped supply saved almost 200 hours of fetching time per year for a typical household," said study senior author Jenna Davis, a professor of civil and environmental engineering at Stanford and director of Stanford's Program on Water, Health and Development. "This is a substantial benefit, most of which accrued to women and girls."

Globally, about 844 million people live without safe, accessible water for drinking, cooking, cleaning, hygiene and food production - the linchpin of healthy, prosperous communities. Just 12 percent of the rural population in sub-Saharan Africa has water piped to their home. Instead, families collect water from distant, shared sources, with women and girls overwhelmingly responsible for performing the time-consuming and arduous chore of carrying containers that average about 40 pounds each. Dedicating a large chunk of their day to water fetching takes time away from activities such as childcare, housework, hygiene, outside employment, education and leisure.

"Addressing this problem provides the time and water for women and girls to invest in their household's health and economic development, in whatever way they see fit," said lead author James Winter, who recently defended his PhD in civil and environmental engineering at Stanford.

Over the past several decades, national governments and international aid groups have spent hundreds of millions of dollars installing basic water sources, such as wells and handpumps. However, many of these sources are still far from users' homes, resulting in long journeys to fetch water. Previous studies have shown water fetching can harm both mental and physical well-being, while piped water at home can increase water for hygiene and livelihoods, improve food production and decrease infectious disease prevalence.

Yet despite this finding, piped water installations in sub-Saharan Africa have increased by a mere 2 percentage points since 2007. Investing resources into high-quality piped water sources that are dramatically closer to rural households could thus be a more effective route to providing safe, accessible and affordable drinking water for all.

For their study, the researchers examined less frequently measured aspects of well-being - including time savings, economic opportunity and nutritional security - that can be gained through increased access to reliable, easily accessible water. To do this, the team followed four rural villages within Zambia's southern province that had similar populations and access to school, markets and health care facilities. Halfway through the study, two of the villages received piped water to their yard, reducing the distance of their water source to just 15 meters.

Each village was surveyed at the beginning, middle and end of the study, with a team of Zambian interviewers conducting a total of 434 household surveys. They collected information on the time spent fetching water, the amount of water used for domestic tasks (cooking and cleaning) and productive uses (watering gardens, brick making or animal husbandry), and the frequency of these activities. A subset of female respondents wore GPS tracking devices to measure walking speeds and distance to water sources. Water meters were used to validate water consumption information.

The researchers found households with piped water spent 80 percent less time fetching water, representing a savings of close to four hours per week. The vast majority of these time savings accrued to women and girls, confirming that females disproportionately benefit from piped water interventions. These time savings were spent gardening, performing other household chores, caring for children or working outside of the home selling products such as fried buns or charcoal. These families also reported being happier, healthier and less worried.

Water consumption, especially for productive purposes, also increased. Households with piped water were over four times more likely to grow a garden, and garden sizes more than doubled over the course of the study. Furthermore, a larger variety of crops were harvested and households reported both selling and consuming this produce, with plans to expand their crop sales in the coming years.

While the accumulated benefits are impressive, they may actually understate the potential time savings of piped water interventions. At the start of the study, households in all four villages lived just a five-minute walk from their primary water source. On average, rural Zambian households spend about double that time walking to their water source, along with additional time waiting in line and filling water containers. The researchers point out that introducing piped water near homes elsewhere in Zambia could save the average rural household 32 hours per month, which is almost twice the amount of time recouped by households in this instance.
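The quoted weekly, annual, and projected monthly figures hang together arithmetically; the short sketch below, using the article's numbers as inputs, makes the comparison explicit.

```python
# Consistency check of the time-savings figures quoted above. The weekly,
# annual, and projected monthly numbers are from the article; the "almost
# twice" comparison is simple arithmetic on those inputs.

hours_saved_per_week = 4                   # piped-water households in the study
hours_saved_per_year = hours_saved_per_week * 52
print(f"annual savings: ~{hours_saved_per_year} hours")   # ~208 h ("almost 200 hours")

study_monthly = hours_saved_per_year / 12                 # ~17 h/month in the study villages
projected_monthly = 32                                    # projection for average rural Zambia
print(f"projection vs. study: x{projected_monthly / study_monthly:.1f}")  # ~1.8 ("almost twice")
```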

Of course, piped water infrastructure does have higher upfront costs, which could discourage government and NGO investment. Poverty poses a major barrier to water access, and with most of the world's poorest countries in sub-Saharan Africa, more research is needed to understand what it takes for communities to sustain piped water networks.

"The benefits we see here make it crucial for future work to understand how these systems can be operated and maintained in a financially sustainable way, even in geographically isolated, rural communities," said Winter.

Credit: 
Stanford University

Honeybees reveal how our floral landscape has changed over the last 65 years

image: Honeybee on white clover at the National Botanic Garden of Wales

Image: 
Steffan John

Honeybee historians might seem like a flight of fancy but these tiny pollinators have been helping researchers from the National Botanic Garden of Wales track how the UK's fields, hedgerows, wild spaces and gardens have changed since the 1950s. Using cutting-edge DNA barcoding techniques, scientists at the Botanic Garden identified which plants modern-day honeybees visited most often by looking at the pollen grains trapped within honey.

They compared this to a 1952 survey of honey plants where a microscope had been used to painstakingly identify pollen grains in honey sent from hives across the country. The differences were clear. White clover had been the most important plant for honeybees but, with fewer pastures today and increased use of herbicides and inorganic fertiliser in farming, this has dropped to second place. Now the insects are visiting much more of:

their modern-day favourite, bramble

oilseed rape, a plant which has a sting in its tail

the highly invasive Himalayan balsam

Also important were spring-flowering shrubs and trees, including hawthorn (Crataegus monogyna), apple (Malus species), Cotoneaster species, sycamore and maples (Acer species), cherries and plums (Prunus species), and, towards the end of the season, heather (Calluna vulgaris).

Dr Natasha de Vere, Head of Conservation and Research at the National Botanic Garden of Wales, said: "The last 65 years have been a period of profound change within the UK landscape. Agricultural intensification after the Second World War led to a decline in species-rich grasslands and permanent pastures, while hedgerows and woodland were destroyed so that field sizes could increase, and new crops were grown. The distribution and abundance of the UK's wildflowers has changed, with some species declining whilst new plants have been introduced.

"Natural historians, scientists and government agencies have made detailed records over this time, but they are not the only witnesses to this changing world. Honeybees also travel through these landscapes, flying through fields and woodlands, over hedgerows and croplands, searching for nectar and pollen to return to their hives."

The new research:

In 2017, Dr Laura Jones repeated a 1952 survey, undertaken by A.S.C. Deans, as part of her PhD research with the National Botanic Garden of Wales and Bangor University. She analysed 441 samples of honey sent in from across the UK. This time, instead of using a microscope, the pollen was identified using plant DNA barcoding, work for which the Botanic Garden has an international reputation.

White clover - in 1952 the most important plant for honeybees was white clover (Trifolium repens). In the new research, white clover was still the second most important plant but there was a significant reduction in its presence within the honey. In 1952 it was found in 93% of honey samples and was a major honey plant in 74% of these, but by 2017 it was found in 62% of samples and was a major source in only 31% - a sign that our modern-day landscape has far less white clover.

The Countryside Survey for the UK shows that between 1978 and 2007 white clover decreased within the landscape by 13%. It used to be a dominant plant within permanent pastures and was also often included in grass leys as a source of protein for livestock. Agricultural intensification led to a reduction in the amount of permanent pastures and increased use of inorganic nitrogen fertiliser meant that clover was less likely to be sown in grass leys. Reseeding without clover made it easier to control docks and thistles with a broad-spectrum herbicide that killed off all of the plants that were not grasses. Where white clover was still included in leys, much more regular cutting for silage meant that it was unlikely to be allowed to flower.

Bramble - with the reduction of such an important nectar source, honeybees needed to find alternative supplies of nectar and pollen. The research suggests that honeybees increased the amount of bramble, Rubus fruticosus, in their diets. Bramble and white clover flower at similar times and bramble increased significantly as a major honeybee plant between 1952 and 2017. In 1952, Rubus was found in 58% of honey samples but was a major source in only 5% of these. In 2017, bramble was found in 73% of honey samples and was a major honey plant in 36% of these.

Oilseed rape - first grown in the late 1960s. By 1988, 279,030 ha of oilseed rape (Brassica napus) were in production, and this increased to 332,000 ha in 2000. The bright yellow fields are now a common sight in spring. In 1952 the genus Brassica, to which oilseed rape belongs, was a major plant in only 2% of honey samples; by 2017 this had increased to 21%. The presence of Brassica pollen within the honey was significantly greater in hives within 2 km of oilseed rape crops. Honeybees make full use of the nectar and pollen from oilseed rape, and single-origin oilseed rape honey is now widely available. It has a high glucose content that makes it granulate very rapidly into a set, white honey with a mild flavour. The rapid granulation can cause problems for the beekeeper: if the honey is not extracted quickly from the combs, it can set hard and be impossible to remove.

But there is a sting in the tail of this new resource, as oilseed rape seeds are often treated with neonicotinoid insecticides, which harm honeybees. These neonic insecticides are currently banned within the UK and hopefully this ban will remain in place.

Himalayan balsam - honeybees in their search for nectar and pollen have also tracked the emergence of an invasive species. Himalayan balsam (Impatiens glandulifera) was first introduced into the UK in 1839. Its fast-growing stature (up to 3 m tall), orchid-like flowers in shades of white, pink and purple, and exploding seed pods, appealed to Victorian gardeners. Himalayan balsam soon made its escape from the garden walls, first spreading slowly, then increasing more rapidly from the 1940s to the 1960s, gradually establishing along waterways and field margins. In 1952 it was present within UK honey at low levels, occurring in 3% of samples with only 1% as a major plant.

Himalayan balsam now rampages along riversides and road verges. In 2017 it was found within 15% of samples and was a major source in 6%. However, this is an underestimate of its importance, since most of the honey samples were provided in July and August, whilst Himalayan balsam tends to be used by honeybees later in the year. Himalayan balsam is now an important late-season plant for honeybees, providing an abundant nectar source at a time of year when there is little else available. It helps the bees build up their winter stores and is sometimes sold by beekeepers as a single-origin straw-coloured honey with a sweet, fragrant, floral taste. It is easy to tell when honeybees are foraging on Himalayan balsam as they return to the hive with a characteristic whitewash of pollen covering their bodies, leading them to be called 'ghost bees'.

Himalayan balsam is undoubtedly a good plant for honeybees but this is a controversial issue as it is a highly invasive species listed under Schedule 9 of the Wildlife and Countryside Act 1981, making it an offence to plant or cause this species to grow in the wild. Its vigorous growth means it competes with native plants for light, nutrients and space. It dies back in winter leaving riverbanks bare and open to erosion whilst its dead stems and leaves can block waterways. Even its popularity with pollinators can cause problems, as it can outcompete native wildflowers for their services, leading to reduced seed production in native plants.

Dr de Vere added: "Honeybees and wild pollinators need abundant and diverse sources of nectar and pollen within the landscape, to provide sufficient, high-quality food. By understanding which plants are the most important sources we can provide recommendations on which plants to grow so that honeybees and wild pollinators can thrive."

Recommendations from the research include:

Landscape level changes to provide more floral resources. The UK needs more flower-filled hedgerows with bramble margins and grasslands rich in wildflowers. The conservation of remaining species-rich meadows is a priority, but the area that these habitats cover is vanishingly small.

To make the biggest gains in nectar and pollen, changes are needed in the most prevalent habitat in the UK today - improved grassland. Wildflowers are squeezed out to create grasslands dominated by a small number of grass species, where there are very few flowers to sustain pollinators. But because of the scale of this habitat, small changes here could vastly increase nectar resources. For honeybees, providing more white clover in improved grasslands would be best; for other pollinators, different flowers are more important.

Credit: 
National Botanic Garden of Wales

Bees respond to wildfire aftermath by producing more female offspring

image: Blue orchard bee

Image: 
Jim Rivers, OSU College of Forestry

CORVALLIS, Ore. - Researchers at Oregon State University have found that the blue orchard bee, an important native pollinator, produces female offspring at higher rates in the aftermath of wildfire in forests.

The more severe the fire had been, the greater the percentage of females - more than 10% greater in the most badly burned areas relative to the areas that burned least severely.

"This is one of the first studies that has looked at how forest fire severity influences bee demography," said Jim Rivers, an animal ecologist with the OSU College of Forestry. "Sex ratio varied under different fire conditions but the number of young produced did not, which indicates bees altered the sex of their offspring depending on the degree of fire severity."

Female bees control the sex of their offspring, laying eggs fertilized with sperm that become females, or non-fertilized eggs that become males.

Bees pollinate many of the flowering plants that make up native ecosystems and food chains. Understanding how fire - expected to increase in frequency and severity - influences their reproductive output is an important part of knowing how post-fire management actions could help or harm bees.

"We placed bees on different sites within recently burned mixed-conifer forest in southwestern Oregon and used them as a measuring stick to tell us how good the bee habitat was," said Sara Galbraith, a postdoctoral researcher in the College of Forestry. "Adjusting offspring production toward the more expensive offspring sex shows a functional response to changes in habitat quality via an increased density of flowering plants."

In general, pollinators benefit from canopy-reducing fires in dense conifer forest ecosystems; flowering plant abundance usually increases for several years following a fire, resulting in food resources that enhance wild bee diversity and abundance.

Bees are the most important among the Earth's pollinators, which combine for an estimated $100 billion in global economic impact each year. Oregon is home to more than 600 species of native bees.

Animal pollinators enhance the reproduction of nearly 90% of the Earth's flowering plants, including many food crops.

Pollinators are an essential component of insect and plant biodiversity. Bees are the standard bearer because they're usually present in the greatest numbers and because they're the only pollinator group that feeds exclusively on nectar and pollen their entire life.

For this study involving the blue orchard bee, known scientifically as Osmia lignaria, Galbraith, Rivers and James Cane of the U.S. Department of Agriculture set up nest blocks containing a standardized number and sex ratio of pre-emergent adult bees.

They then looked at the relationship between fire severity and reproductive output, sex ratio and offspring mass at the local (within 100 meters of the blocks) and landscape (750 meters) scales. Female bees forage across both scales when caring for offspring.

"In fire-prone landscapes, there is variation in species-level response to wildfire that serves to maintain ecosystem structure and function," Rivers said. "With the blue orchard bee and similar species, foraging females invest in larger progeny and more females when more resources are available."

The findings showed that burned mixed-conifer forest provides forage for the blue orchard bee along a gradient of severity, and that the rise in floral resources that comes after high-severity fire causes females to reallocate resources to the larger and more costly sex - females - when nesting.

"Our study revealed more female progeny than is typically observed with blue orchard bees," Galbraith said. "The greater proportion of females in areas surrounded by a more severely burned landscape indicates an investment in more female offspring because of greater resource availability."

Credit: 
Oregon State University

Chemotherapy with light; only one injection required

image: A schematic diagram of the cancer-targeted supramolecular peptide phototherapeutic agent developed by KIST researchers, as applied in animal experiments. A single injection of the agent followed by repeated phototherapy completely eliminated the cancer.

Image: 
Korea Institute of Science and Technology (KIST)

Researchers in South Korea have developed a phototherapy technology that can significantly increase efficiency while reducing the pain of chemotherapy and minimizing side effects after treatment. Seok-Jin Yoon, President of the Korea Institute of Science and Technology (KIST), announced that a research team led by Dr. Sehoon Kim at the Theragnosis Research Center (KU-KIST Graduate School of Converging Science and Technology) has developed a cancer-targeted phototherapeutic agent that promises complete elimination of cancer cells without side effects, using only one injection and repeated phototherapy. This development was made through joint research with Professor Dong June Ahn of Korea University and Professor Yoon-Sik Lee of Seoul National University.

Phototherapy is a cancer treatment modality that uses light: a photosensitizer that destroys cancer cells in response to laser light is injected and accumulates only in cancerous tissue, and light is then applied to selectively destroy the cancer cells. It has far fewer side effects than radiation therapy or general chemotherapy, which inevitably damage the tissues surrounding the cancer cells, and it allows repeated treatment.

Since the effect of conventional photosensitizers lasts for only one session, a photosensitizer has to be administered each time the treatment is repeated. Moreover, the photosensitizer remaining after treatment accumulates in the skin or eyes and causes light-induced side effects, so patients are advised to avoid sunlight and indoor lighting for some time after treatment. Overall, patients receiving phototherapy have had to endure the pain of repeated injections and the inconvenience of isolation in the dark after each session. Recently, photosensitizers whose phototherapeutic effects are activated only in cancer tissues have been developed; however, they are still toxic and must be injected for every repeated session of treatment.

Dr. Sehoon Kim and his team at KIST used peptides that selectively target cancer tissues and assemble themselves in a specific order to resolve these problems. The research team developed a peptide-based photosensitizer that activates its phototherapeutic effect only in cancer tissues, by using the internalizing RGD peptide (iRGD), which can selectively penetrate and target cancer tissues, and by designing the molecule to modulate its response to light.

When this newly developed photosensitizer is injected into a living body, it is activated by the body temperature and aggregates into a supramolecular arrangement designed by the research team, to be stored around the tumor. The subsequent phototherapy can destroy only cancer cells without affecting surrounding normal cells.

The phototherapeutic agent developed by the researchers was injected into a mouse model implanted with a tumor; the photosensitizer was stored around the tumor and was continuously released over a long period (2 to 4 weeks), demonstrating the ability to selectively target the tumor with just a single injection around the cancerous tissue. Moreover, no toxicity to the surrounding tissues or major organs was found, even with repeated exposure to light. The cancerous tissues were completely removed through repeated procedures.

"We developed a cancer-targeting peptide phototherapeutic agent that forms a depot through supramolecular self-assembly without additional excipients when injected in vivo," said KIST Center Director Sehoon Kim. "The developed phototherapeutic agent is expected to be useful in future phototherapy as it allows long-term repeated phototherapy without toxicity after only one injection around the cancer until the complete removal of the cancer, and has a simple formulation with a single component," he added.

Credit: 
National Research Council of Science & Technology

Effects of head trauma from intimate partner violence largely unrecognized

While there is an abundant amount of research about traumatic brain injuries in athletes and those serving in the military, the same data is scarce when it comes to concussions and head and neck injuries sustained due to intimate partner violence.

Carrie Esopenko, assistant professor in the Department of Rehabilitation and Movement Sciences in the Rutgers School of Health Professions, says that the World Health Organization estimates that one in three women will experience intimate partner violence (IPV) in her lifetime, and studies suggest that anywhere between 30% and 90% of women who experience physical abuse at the hands of an intimate partner experience head trauma. Yet not enough data is being collected to understand how this head trauma affects cognitive and psychological functioning, as well as its underlying neural effects.

Esopenko is part of a new Intimate Partner Violence Working Group studying intimate partner violence-related head trauma as part of the Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium, an international, multidisciplinary group that seeks to provide a collaborative framework for large-scale analysis of neuroimaging and genetic studies in patient groups. She discusses the effects that head trauma due to intimate partner violence can have on individuals, and the challenges the working group faces in gathering data, in a paper recently published in the journal Brain Imaging and Behavior.

What is the risk for traumatic brain injury in those who suffer abuse?

Although intimate partner violence occurs at any age, it is most prevalent in the 18- to 24-year-old age group, and older adults are also vulnerable. Males and females experience IPV, but violence against women tends to result in more severe and chronic injuries. Due to the high degree of physical aggression associated with this type of abuse, there is a significant risk for traumatic brain injury caused by blunt force trauma, being violently shaken or pushed.

Another significant concern is anoxic brain injury, which can occur due to strangulation or attempts to impede normal breathing. The prevalence of head injuries in women who have sustained IPV is estimated to be between 30 percent and 92 percent, with a high proportion of these women reporting injuries as a result of strangulation. It is estimated that more than 50 percent of women exposed to intimate partner violence suffer multiple brain injuries due to abuse-related head trauma.

What are the consequences of such injuries?

Past research suggests that IPV can impact cognitive and psychological functioning as well as have neurological effects. These seem to be compounded in those who suffer a brain injury as a result of trauma to the head, face, neck or body due to physical and/or sexual violence. However, our understanding of the neurobehavioral and neurobiological effects of head trauma is limited.

Studies suggest that women who experience IPV report cognitive dysfunction, including impaired reaction time, response inhibition, working memory, attention and a range of other cognitive, behavioral and emotional difficulties. They often report a high degree of mental health disorders, such as depression, anxiety, substance use disorders, suicidal ideation and PTSD. There is evidence that intimate partner violence-related brain injury also alters brain function and structure.

What is unknown about traumatic brain injury in victims of domestic violence?

While research on traumatic brain injury in other populations, like athletes and the military, has dramatically increased over the past two decades, research on intimate partner-related brain injury is vastly understudied. We need to know more about the effect of sex, socioeconomic status, race and/or ethnicity, age at first exposure - including childhood trauma, duration and severity of IPV exposure, and psychiatric disorders on the neural, cognitive and psychological outcomes associated with IPV-related brain injuries. Knowing this can help us to predict outcomes and help personalize treatment and intervention strategies.

What are the working group's goals?

There remain important challenges to understanding the interaction between intimate partner-related brain injury and cognitive and psychosocial functioning, mental health and neural outcomes. Of importance is the identification and characterization of brain injury in this population, which is often difficult because brain trauma is often overlooked or not diagnosed in this population. By forming a global collaboration across disciplines -- researchers, clinicians, first responders, community organizations and policymakers -- we hope to help tailor measures that can be used across groups for consistent data collection that will enable us to combine large-scale datasets to answer these difficult questions and facilitate further translation of research outcomes to clinical care and community-based supports.

Credit: 
Rutgers University

SolarEV City concept: Building the next urban power and mobility systems

Cities have become the focus of global climate mitigation efforts because they are responsible for 60-70% of energy-related CO2 emissions. As the world becomes increasingly urbanized, it is crucial to identify cost-effective pathways to decarbonize and enhance the resilience of cities while ensuring the well-being of their dwellers. In this study, we propose a "SolarEV City" concept, in which integrated systems of cities' roof-top photovoltaics (PV) and electric vehicles (EVs) supply affordable and dispatchable CO2-free electricity to urban dwellers.

The SolarEV City concept assumes that at most 70% of a city's roof-top area is used for PV and that all passenger vehicles in the city are converted to EVs, whose batteries store PV electricity. We conducted technoeconomic analyses to evaluate the concept in terms of CO2 emission reduction, cost savings, energy sufficiency, self-sufficiency, and self-consumption for nine Japanese urban areas (Kyoto City, Hiroshima City, Koriyama City, Okayama City, Sapporo City, Sendai City, Niigata City, Kawasaki City, and the special districts of Tokyo).

Our analyses indicate that implementing the concept could meet 53-95% of electricity demand in the nine major Japanese urban areas by 2030, using 70% of the cities' roof-top area. CO2 emissions from vehicle use and electricity generation in these areas could be reduced by 54-95%, with potential cost savings of 26-41%. The high cost-effectiveness and the seasonally stable insolation at low latitudes suggest that the concept may be even more effective for decarbonizing urban environments in emerging economies at low latitudes.
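For readers unfamiliar with the metrics named above, the sketch below illustrates the standard definitions of self-sufficiency and self-consumption used in PV-EV analyses; all input values are placeholder assumptions, not results from this study.

```python
# Illustrative computation of the self-sufficiency and self-consumption
# metrics. All input values are made-up placeholders, not study results.

annual_demand_gwh = 10_000        # city electricity demand incl. EV charging
pv_generation_gwh = 9_000         # rooftop PV output (70% of roof area)
pv_used_locally_gwh = 7_000       # PV consumed directly or buffered in EV batteries

self_sufficiency = pv_used_locally_gwh / annual_demand_gwh    # share of demand met by PV
self_consumption = pv_used_locally_gwh / pv_generation_gwh    # share of PV output used locally

print(f"self-sufficiency: {self_sufficiency:.0%}")   # 70%
print(f"self-consumption: {self_consumption:.0%}")   # 78%
```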

Among several factors, governmental interventions will play a crucial role in realizing such systems, particularly legislation that enhances penetration of the integrated PV-EV system and enables the formation of decentralized power systems. As bottom-up processes are critical, policy makers, communities, industries, and researchers should work together to build such systems, overcoming social and regulatory barriers.

Credit: 
National Institute for Environmental Studies

Concept for a hybrid-electric plane may reduce aviation's air pollution problem

At cruising altitude, airplanes emit a steady stream of nitrogen oxides into the atmosphere, where the chemicals can linger to produce ozone and fine particulates. Nitrogen oxides, or NOx, are a major source of air pollution and have been associated with asthma, respiratory disease, and cardiovascular disorders. Previous research has shown that the generation of these chemicals due to global aviation results in 16,000 premature deaths each year.

Now MIT engineers have come up with a concept for airplane propulsion that they estimate would eliminate 95 percent of aviation's NOx emissions, and thereby reduce the number of associated early deaths by 92 percent.

The concept is inspired by emissions-control systems used in ground transportation vehicles. Many heavy-duty diesel trucks today house postcombustion emissions-control systems to reduce the NOx generated by engines. The researchers now propose a similar design for aviation, with an electric twist.

Today's planes are propelled by jet engines anchored beneath each wing. Each engine houses a gas turbine that powers a propeller to move the plane through the air as exhaust from the turbine flows out the back. Due to this configuration, it has not been possible to use emissions-control devices, as they would interfere with the thrust produced by the engines.

In the new hybrid-electric, or "turbo-electric," design, a plane's source of power would still be a conventional gas turbine, but it would be integrated within the plane's cargo hold. Rather than directly powering propellers or fans, the gas turbine would drive a generator, also in the hold, to produce electricity, which would then power the plane's wing-mounted, electrically driven propellers or fans. The emissions produced by the gas turbine would be fed into an emissions-control system, broadly similar to those in diesel vehicles, which would clean the exhaust before ejecting it into the atmosphere.

"This would still be a tremendous engineering challenge, but there aren't fundamental physics limitations," says Steven Barrett, professor of aeronautics and astronautics at MIT. "If you want to get to a net-zero aviation sector, this is a potential way of solving the air pollution part of it, which is significant, and in a way that's technologically quite viable."

The details of the design, including analyses of its potential fuel cost and health impacts, are published today in the journal Energy & Environmental Science. The paper's co-authors are Prakash Prashanth, Raymond Speth, Sebastian Eastham, and Jayant Sabnis, all members of MIT's Laboratory for Aviation and the Environment.

A semi-electrified plan

The seeds for the team's hybrid-electric plane grew out of Barrett and his team's work in investigating the Volkswagen diesel emissions scandal. In 2015, environmental regulators discovered that the car manufacturer had been intentionally manipulating diesel engines to activate onboard emissions-control systems only during lab testing, such that they appeared to meet NOx emissions standards but in fact emitted up to 40 times more NOx in real-world driving conditions.

As he looked into the health impacts of the emissions cheat, Barrett also became familiar with diesel vehicles' emissions-control systems in general. Around the same time, he was also looking into the possibility of engineering large, all-electric aircraft.

"The research that's been done in the last few years shows you could probably electrify smaller aircraft, but for big aircraft, it won't happen anytime soon without pretty major breakthroughs in battery technology," Barrett says. "So I thought, maybe we can take the electric propulsion part from electric aircraft, and the gas turbines that have been around for a long time and are super reliable and very efficient, and combine that with the emissions-control technology that's used in automotive and ground power, to at least enable semielectrified planes."

Flying with zero impact

Before airplane electrification had been seriously considered, it might have been possible to implement a concept such as this, for example as an add-on to the back of jet engines. But this design, Barrett notes, would "kill any stream of thrust" that a jet engine would produce, effectively grounding the design.

Barrett's concept gets around this limitation by separating the thrust-producing propellers or fans from the power-generating gas turbine. The propellers or fans would instead be directly powered by an electric generator, which in turn would be powered by the gas turbine. The exhaust from the gas turbine would be fed into an emissions-control system, which could be folded up, accordion-style, in the plane's cargo hold -- completely isolated from the thrust-producing propellers.

He envisions that the bulk of the hybrid-electric system -- gas turbine, electric generator, and emissions-control system -- would fit within the belly of a plane, where there is ample space in many commercial aircraft.

In their new paper, the researchers calculate that if such a hybrid-electric system were implemented on a Boeing 737 or Airbus A320-like aircraft, the extra weight would require about 0.6 percent more fuel to fly the plane.

"This would be many, many times more feasible than what has been proposed for all-electric aircraft," Barrett says. "This design would add some hundreds of kilograms to a plane, as opposed to adding many tons of batteries, which would be over a magnitude of extra weight."

The researchers also calculated the emissions that would be produced by a large aircraft, with and without an emissions-control system, and found that the hybrid-electric design would eliminate 95 percent of NOx emissions.

If this system were rolled out across all aircraft around the world, they further estimate that 92 percent of pollution-related deaths due to aviation would be avoided. They arrived at this estimate by using a global model to map the flow of aviation emissions through the atmosphere, and calculated how much various populations around the world would be exposed to these emissions. They then converted these exposures to mortalities, or estimates of the number of people who would die as a result of exposure to aviation emissions.

The team is now working on designs for a "zero-impact" airplane that flies without emitting NOx and other chemicals like climate-altering carbon dioxide.

"We need to get to essentially zero net-climate impacts and zero deaths from air pollution," Barrett says. "This current design would effectively eliminate aviation's air pollution problem. We're now working on the climate impact part of it."

Credit: 
Massachusetts Institute of Technology

Esophageal cancer patients show abundance of oral pathogens

image: In the dental plaque samples, the prevalence of all bacteria, with the exception of F. nucleatum, was significantly higher in the test group versus the control group. The pie charts in orange are for cancer patients, and those in blue are for healthy controls.

Image: 
Department of Periodontology, TMDU

Researchers led by Tokyo Medical and Dental University (TMDU) find that certain oral pathogens are more prevalent in esophageal cancer patients, and could be used as a novel diagnostic tool

Tokyo, Japan - It is increasingly clear that the trillions of bacteria that make themselves at home in and on the human body are more than just casual observers along for the ride. Gut bacteria in particular have been shown to have an enormous influence on human health, with studies suggesting they play a role in illnesses ranging from autoimmune disorders to anxiety and depression.

The oral cavity is another rich source of microbial diversity, with more than 700 bacterial species making our mouths their home. The vast majority of these species are harmless, but a select few cause conditions such as gingivitis, periodontitis, and abscesses. While the role of these pathogens in periodontal disease is well-characterized, more recent studies have hinted at involvement in gastric and esophageal cancers.

In a recent issue of Cancer, researchers led by Tokyo Medical and Dental University (TMDU) characterized the oral bacterial communities of esophageal cancer patients to look for patterns associated with cancer risk and lay foundations for further exploration of the role of oral pathogens in disease.

"Esophageal cancer is the sixth most deadly cancer worldwide and is often not detected until an advanced stage, meaning that the prognosis is generally poor," says lead author of the study Machiko Kawasaki. "Complicating matters, the two main subtypes of esophageal cancer have different risk factors, presentations, and incidence rates in different populations. A better understanding of the causes of esophageal cancer could therefore help with early detection."

To explore the characteristics of the oral bacterial community in esophageal cancer patients, the researchers collected dental plaque and saliva samples from 61 esophageal cancer patients and 62 healthy controls. Using a technique called real-time polymerase chain reaction, the researchers screened DNA extracted from the plaque and saliva samples to determine the abundance of seven common periodontal pathogens in the bacterial population as a whole.

Cancer patients had significantly higher rates of smoking and drinking, and poorer gum health. "Interestingly, five of the seven pathogens were more abundant in dental plaque from cancer patients than in plaque from the healthy controls, and the detection rate of six of the seven pathogens was significantly higher in the cancer patients," explains senior author Satoshi Miyake. "On the other hand, only two of the seven pathogens, Aggregatibacter actinomycetemcomitans and Streptococcus anginosus, were more abundant in saliva from cancer patients."

Overall, the researchers determined that an increased prevalence of Streptococcus anginosus and Tannerella forsythia in dental plaque and Aggregatibacter actinomycetemcomitans in saliva, and also alcohol consumption, were associated with a high risk of esophageal cancer.

The study findings are an exciting indication of the diagnostic potential of oral bacteria in esophageal cancer and could form the basis of future screening methods.

Credit: 
Tokyo Medical and Dental University

Catalytic activity of molybdenum-dinitrogen complexes in organic reactions

image: (a) Typical Reactivity Modes of -N2 Units in Transition Metal-Dinitrogen Complexes; (b) This Work: Catalytic Reactivity of Mo-N2 Units in the Disproportionation of Cyclohexadienes and Isomerization of Terminal Alkenes

Image: 
©Science China Press

Dinitrogen (N2) fixation is considered one of the most essential tasks in basic science, providing straightforward methods to produce ammonia and nitrogen-containing molecules. Exploring the reactivity of the N2 units of transition metal-dinitrogen complexes is of great significance and remains challenging in chemistry. Since the first Ru-N2 complex was prepared in 1965, important progress has been made in the synthesis and reactivity of transition metal-dinitrogen complexes. In many cases, terminal end-on M-N2 complexes, the most prevalent bonding mode, have proved effective in catalyzing reductive reactions of N2 to afford ammonia or silylamines; moreover, owing to the nucleophilicity of the dinitrogen moiety, the activated dinitrogen ligands in M-N2 (M = Mo, W, Fe, Co) complexes have been transformed into N-containing organic compounds with carbon-based electrophiles. On the other hand, late-transition-metal N2 complexes (M = Co, Ru, Ir, Fe) have also been reported as precatalysts for organometallic transformations, including cycloaddition and hydrofunctionalization of olefins, semihydrogenation of alkynes, transfer hydrogenation of ketones, and acceptorless dehydrogenation of alcohols. In these systems, dinitrogen (N2), acting as a weakly π-accepting ligand that stabilizes highly reactive, low-valence-electron species, was shown not to be involved in the catalytic processes. This status quo urges exploration of the potential reactivity of coordinated N2 units in organometallic catalysis, which will inspire new strategies for catalytic conversions of N2.

More recently, the research group of Professor Zhang-Jie Shi from Fudan University reported the synthesis, structural characterization, and catalytic reactivity of molybdenum-dinitrogen complexes supported by a monodentate arylsilylamino ligand in the National Science Review (NSR). The results showed that the complexes [Ar(TMS)N]3MoN2Mg(THF)2[N(TMS)Ar] (A) and [Ar(TMS)N]3MoN2TMS (B) could be used as effective catalysts for the disproportionation of cyclohexadienes to generate cyclohexene and benzene. Notably, these catalysts remained intact after complete conversion of the substrate, suggesting that the activated -N2 units were retained during catalysis. Other complexes, such as Mo[N(TMS)Ar]3, ClMo[N(TMS)Ar]3, Mg[N(TMS)Ar]2, and Li[N(TMS)Ar], failed to promote the disproportionation efficiently. Kinetic studies showed that the initial rate of disproportionation was first order in the concentration of catalyst A, providing evidence that A does not act merely as a precatalyst in the transformation; the authors proposed that the Mo-N=N unit in the molybdenum-dinitrogen complex plays a key role in the reaction through a ligand-to-ligand hydrogen transfer (LLHT) process. In addition, when the LLHT/reverse-LLHT process was applied to terminal olefins such as allylbenzene or 1-hexene, isomerization indeed took place with high efficiency. Kinetic studies indicated that C-H cleavage is not involved in the rate-determining step.
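As background on the kinetic claim: the order in catalyst is usually read off initial-rate measurements at several catalyst loadings, with a log-log slope near 1 indicating first-order dependence. The sketch below illustrates that procedure on hypothetical numbers; it is not the authors' data.

```python
# Illustrative extraction of reaction order in catalyst from initial rates:
# slope of log(rate) vs. log([catalyst]) ~ 1 indicates first order.
# The (loading, rate) pairs are hypothetical, not the authors' data.
import math

data = [(1.0, 0.10), (2.0, 0.21), (4.0, 0.39)]  # ([A] in mM, rate in mM/min)

xs = [math.log(c) for c, r in data]
ys = [math.log(r) for c, r in data]
n = len(data)

# least-squares slope of log(rate) vs. log([A])
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
print(f"apparent order in catalyst A: {slope:.2f}")  # ~1 -> first order
```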

The results show that molybdenum-dinitrogen complexes can possess high catalytic activity in organometallic transformations, which may suggest new ways for dinitrogen ligands to participate in catalytic systems and offers a new perspective on the coordination modes between metal centers and nitrogen ligands.

Credit: 
Science China Press

Spectacular fossil discovery: 150 million-year-old shark was one of the largest of its time

image: Tentative life reconstruction of the hybodontiform shark Asteracanthus; for scale, see the silhouettes in the lower right corner

Image: 
© Sebastian Stumpf/Fabrizio De Rossi

In a new study, an international research team led by Sebastian Stumpf from the University of Vienna describes an exceptionally well-preserved skeleton of the ancient shark Asteracanthus. This extremely rare fossil find comes from the famous Solnhofen limestones in Bavaria, which were formed in a tropical-subtropical lagoon landscape during the Late Jurassic, about 150 million years ago. The almost complete skeleton shows that Asteracanthus was two-and-a-half meters long in life, making this ancient shark one of the largest of its time. The study is published in the journal Papers in Palaeontology.

Cartilaginous fishes, which include sharks and rays, are one of the most successful vertebrate groups alive today. Because of their life-long tooth replacement, the teeth of cartilaginous fishes are among the most common fossil vertebrate finds. However, the low preservation potential of their cartilaginous skeletons prevents fossilization of complete specimens in most cases. The extremely rare preservation of fossil cartilaginous fish skeletons is therefore linked to special conditions during fossilization and restricted to only a few fossil-bearing localities.

The Solnhofen limestones of Bavaria, Germany, which were formed during the Late Jurassic, about 150 million years ago, are one such locality. They are world-renowned for having produced skeletons of the small feathered dinosaur Archaeopteryx and have yielded numerous shark and ray skeletons during excavations over the past 150 years. The new study presents the largest fossil shark skeleton ever discovered in the Solnhofen limestones: an almost completely preserved skeleton of the extinct hybodontiform shark Asteracanthus that measured two-and-a-half meters in life, a giant among Jurassic sharks.

Hybodontiform sharks, the closest relatives of modern sharks and rays, first appeared during the latest Devonian, about 361 million years ago, and went extinct together with the dinosaurs at the end of the Cretaceous, about 66 million years ago. They had two dorsal fins, each supported by a prominent fin spine. Hybodontiform sharks ranged from a few centimeters to approximately three meters in maximum length, which makes Asteracanthus one of the largest representatives of both its group and its time. In contrast, modern sharks and rays, which were already diverse during the Jurassic, only very rarely reached body lengths of up to two meters during this period.

Asteracanthus was scientifically described more than 180 years ago by the Swiss-American naturalist Louis Agassiz on the basis of isolated fossil dorsal fin spines. However, articulated skeletal remains had never been found, until now. The dentition of the new skeleton is exceptionally well preserved and contains more than 150 teeth, each with a well-developed central cusp accompanied on both sides by several smaller cusplets. "This specialized type of dentition suggests that Asteracanthus was an active predator feeding on a wide range of prey animals. Asteracanthus was certainly not only one of the largest cartilaginous fishes of its time, but also one of the most impressive," says Sebastian Stumpf.

Credit: 
University of Vienna

Wits University scientists artificially infect mosquitoes with human malaria to advance treatment

image: Scientists at the Wits Research Institute for Malaria, with local and global partners, have artificially infected mosquitoes with human malaria and identified a new chemical compound to treat malaria.

Image: 
Wits University

A vector refers to an organism that carries and transmits an infectious disease, as mosquitoes do malaria.

Lead compounds are chemical compounds that show promise as treatment for a disease and may lead to the development of a new drug.

Antiplasmodial lead compounds are those that counter parasites of the genus Plasmodium, the parasite that infects mosquitoes and causes malaria in people.

The study findings were published in Nature Communications on 11 January 2021, at a time when malaria incidence generally peaks after the holiday season.

MOSQUITO INFECTION EXPERIMENT CENTRE

Professor Lizette Koekemoer, co-director of the WRIM and the National Research Foundation SARChI Chair in Medical Entomology and Control, and an honorary member of the Centre for Emerging Zoonotic and Parasitic Diseases, National Institute for Communicable Diseases, co-authored the paper.

Koekemoer and her team at WRIM established a unique mosquito malaria infection centre in the Faculty of Health Sciences at Wits University, where the mosquito transmission blocking experiments took place.

"The WRIM infection centre is the only facility in South Africa and in the southern African region that can artificially infect mosquitoes with the human malaria parasite," says Koekemoer. "The infection centre provided the high level of expertise required to infect mosquitoes with the human malaria parasite, Plasmodium falciparum, and allowed for this unique study to be done."

CURATING CHEMICALS TO COMBAT TRANSMISSION

Drugs are used to control human malaria but resistance to these drugs develops rapidly. Furthermore, the drugs mainly target just one stage of the parasite's life cycle and are not good candidates for blocking transmission.

To expand drug suitability for malaria elimination strategies, drugs need to be able to act as a chemotype [a chemically distinct entity in a plant or microorganism] that blocks both human-to-mosquito transmission and mosquito-to-human transmission.

Koekemoer, with WRIM colleagues and co-authors Erica Erlank, Luisa Nardini, Nelius Venter, Sonja Lauterbach, Belinda Bezuidenhout, and Theresa Coetzer, conducted specialised experiments to measure the reduction of infection in the mosquitoes as well as the "killing effect" [endectocide effect] in the vectors.

The scientists screened 400 chemical compounds available in the "Pandemic Response Box", which is supplied by Medicines for Malaria Venture (MMV), to identify the compounds that are most effective across the life stages of the parasite that generally take place in the human host.

The Pandemic Response Box contains 400 diverse drug-like molecules active against bacteria, viruses or fungi. It is available free of charge provided that researchers share data resulting from research on the molecules in the box with the public within 2 years of its generation.

The compounds that showed a good effect on late-stage gametocytes (those circulating in the blood and being transferred to a mosquito when it feeds on an infected human) were evaluated for their transmission-blocking potential.

Mosquitoes were fed infected blood treated either with or without a compound. After eight to 10 days, the mosquito guts [stomachs] were removed and the number of parasites (called oocysts) counted and compared against counts from mosquitoes that had received an infected blood meal without treatment.
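The comparison at the heart of this assay reduces to a simple calculation: the percentage reduction in oocyst counts relative to the untreated controls. Below is a minimal sketch of that calculation in Python; the function name and all counts are illustrative assumptions, not data from the study.

    # Transmission-reducing activity: percent reduction in the mean
    # oocyst count of compound-treated mosquitoes versus untreated controls.
    # Sketch only; the counts below are invented for illustration.
    from statistics import mean

    def transmission_reducing_activity(control_counts, treated_counts):
        """Percent reduction in mean oocysts per dissected midgut."""
        return 100.0 * (1.0 - mean(treated_counts) / mean(control_counts))

    # Hypothetical oocyst counts per midgut, dissected 8-10 days after feeding.
    control = [12, 9, 15, 11, 8, 14]   # infected blood meal only
    treated = [2, 0, 1, 3, 0, 1]       # infected blood meal + test compound

    print(f"TRA: {transmission_reducing_activity(control, treated):.1f}%")
    # -> TRA: 89.9% for these made-up numbers

A compound with strong transmission-blocking potential drives this figure toward 100%, meaning almost no oocysts develop in treated mosquitoes.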

SEEKING THE SPARK

Mass drug administration (MDA) is the administration of antimalarial drugs to target the parasite reservoir in humans, without necessarily testing whether those people are carrying the parasite that causes malaria.

The World Health Organization (WHO) recommends MDA for the elimination of the Plasmodium falciparum malaria parasite. However, the effort and cost required to implement MDA on a large scale is prohibitive.

Identifying potential compounds for MDA is among the greatest hurdles to overcome and is therefore the focus of an intensive research and development pipeline.

By leveraging the scarce skills and expertise available in different research institutes such as the WRIM, this study, published in Nature Communications, established the first such screening pipeline in South Africa.

This opens the door for screening additional chemistry effectively and rapidly to contribute towards the elimination of malaria.

TOWARDS ZERO TRANSMISSION TARGETS

Although there were still an estimated 229 million malaria cases and 409,000 deaths in 2019 (compared with some 88 million Covid-19 cases reported), the success in reducing malaria cases over the past two decades has inspired many countries to commit to eliminating transmission altogether.

To date, the WHO has certified 38 countries and territories malaria-free. In southern Africa, eight countries - including South Africa - have made the elimination of malaria a policy goal.

However, progress towards eliminating transmission has numerous challenges and new drugs and insecticides are urgently needed to combat the development of drug resistance in the parasite and the vector.

The WRIM and studies like these are pioneering novel approaches, but the road from the laboratory to field operations remains a long one. The first steps, however, have been taken.

Credit: 
University of the Witwatersrand

Regulatory network of sugar and organic acid in watermelon fruit revealed

image: Figure 1 Gene networks and key candidate genes involved in sugar and organic acid regulation during watermelon fruit development

Image: 
Zhengzhou Fruit Research Institute

Recently, the watermelon and melon cultivation and physiology innovation team of the Zhengzhou Fruit Research Institute has made new progress on the regulation of sugar and organic acid metabolism in watermelon fruit. The team analyzed the changes in sugar and organic acid during fruit development and identified the key gene networks controlling their metabolism. These results provide a theoretical basis for watermelon quality breeding and are of scientific significance for the development of the watermelon industry and the improvement of watermelon breeding in China. The related results were published in the journals Horticulture Research and Scientia Horticulturae.

The sensory quality of watermelon is determined by the sugar and organic acid content of the fruit, which develops during fruit development and maturation and determines its taste. Using the sweet watermelon '203Z' and its near-isogenic sour line 'SrW' as materials, the researchers searched for genes and gene networks co-expressed with sugar and organic acid metabolism through WGCNA analysis of transcriptomic and metabolite data. Three gene-expression networks were identified, comprising 2443 genes highly correlated with sugar and organic acid metabolism in watermelon fruit. Seven key genes involved in this metabolism were then singled out by significance screening and qRT-PCR analysis. Among them, Cla97C01G000640, Cla97C05G087120 and Cla97C01G018840 encode sugar transporters, and Cla97C03G064990 encodes a sucrose synthase. Cla97C07G128420, Cla97C03G068240 and Cla97C01G008870 were highly correlated with malic and citric acid and act as transporters and regulators of these acids. The seven genes were verified in a natural population, where their expression proved significantly and positively correlated with the sugar and organic acid content of watermelon fruit.
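For readers unfamiliar with the method, the step that flags candidate genes in a WGCNA-style screen is, at its core, a correlation between each gene's expression profile and a measured trait. Below is a minimal sketch of that step in Python; the expression values, sugar measurements, and 0.8 threshold are invented for illustration (only the gene ID Cla97C01G000640 comes from the study), and real WGCNA additionally builds co-expression modules before this trait correlation.

    # Illustrative gene-trait correlation screen in the spirit of WGCNA.
    # Sketch only: sample values and the 0.8 threshold are assumptions.
    import numpy as np

    def correlate_genes_with_trait(expression, trait, threshold=0.8):
        """Return genes whose expression correlates with the trait above threshold."""
        hits = {}
        for gene, values in expression.items():
            r = np.corrcoef(values, trait)[0, 1]  # Pearson correlation
            if abs(r) >= threshold:
                hits[gene] = round(r, 3)
        return hits

    # Hypothetical expression across six fruit-development time points.
    expression = {
        "Cla97C01G000640": np.array([1.0, 2.1, 3.9, 6.2, 8.8, 11.5]),
        "unrelated_gene":  np.array([5.0, 4.8, 5.1, 5.2, 4.9, 5.0]),
    }
    sugar_content = np.array([2.0, 3.0, 5.0, 7.5, 9.0, 11.0])  # e.g. soluble sugar (%)

    print(correlate_genes_with_trait(expression, sugar_content))
    # -> {'Cla97C01G000640': 0.993} for these made-up numbers

Genes passing such a threshold would then, as in the study, be taken forward for qRT-PCR validation and verification in a natural population.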

Credit: 
Nanjing Agricultural University The Academy of Science