Tech

Self-assembling nanomaterial offers pathway to more efficient, affordable harnessing of solar power

image: In this illustration, DPP and rylene dye molecules come together to create a self-assembled superstructure. Electrons within the structure become excited by absorbing light photons, then couple with neighboring electrons to share energy and create additional excited electrons that can be harvested in solar cells.

Image: 
Andrew Levine

NEW YORK, January 24, 2019 - Solar rays are a plentiful, clean source of energy that is becoming increasingly important as the world works to shift away from power sources that contribute to global warming. But current methods of harvesting solar charges are expensive and inefficient--with a theoretical efficiency limit of 33 percent. New nanomaterials developed by researchers at the Advanced Science Research Center (ASRC) at The Graduate Center of The City University of New York (CUNY) could provide a pathway to more efficient and potentially affordable harvesting of solar energy.

The materials, created by scientists with the ASRC's Nanoscience Initiative, use a process called singlet fission to produce and extend the life of harvestable light-generated electrons. The discovery is described in a newly published paper in the Journal of Physical Chemistry. Early research suggests these materials could create more usable charges and increase the theoretical efficiency of solar cells up to 44 percent.

"We modified some of the molecules in commonly used industrial dyes to create self-assembling materials that facilitate a greater yield of harvestable electrons and extend the electrons' xcited-state lifetimes, giving us more time to collect them in a solar cell," said Andrew Levine, lead author of the paper and a Ph.D. student at The Graduate Center.

The self-assembly process, Levine explained, causes the dye molecules to stack in a particular way. This stacking allows dyes that have absorbed solar photons to couple with--or "excite"--neighboring dyes and share energy with them. The electrons in these dyes then decouple so that they can be collected as harvestable solar energy.

Methodology and Findings

To develop the materials, researchers combined various versions of two frequently used industrial dyes--diketopyrrolopyrrole (DPP) and rylene. This resulted in the formation of six self-assembling superstructures, which scientists investigated using electron microscopy and advanced spectroscopy. They found that each combination had subtle differences in geometry that affected the dyes' excited states, the occurrence of singlet fission, and the yield and lifetime of harvestable electrons.

Significance

"This work provides us with a library of nanomaterials that we can study for harvesting solar energy," said Professor Adam Braunschweig, lead researcher on the study and an associate professor with the ASRC Nanoscience Initiative and the Chemistry Departments at Hunter College and The Graduate Center. "Our method for combining the dyes into functional materials using self-assembly means we can carefully tune their properties and increase the efficiency of the critical light-harvesting process."

The materials' ability to self-assemble could also shorten the time for creating commercially viable solar cells, said the researchers, and prove more affordable than current fabrication methods, which rely on the time-consuming process of molecular synthesis.

The research team's next challenge is to develop a method of harvesting the solar charges created by their new nanomaterials. Currently, they are working to design a rylene molecule that can accept the electron from the DPP molecule after the singlet fission process. If successful, these materials would both initiate the singlet fission process and facilitate charge-transfer into a solar cell.

Credit: 
Advanced Science Research Center, GC/CUNY

Computer analysis shows that popular music lyrics become angrier and sadder over time

Popular music has changed over the years, and the music of 2019 is noticeably different from the music of the 1960s or 1970s. But it is not just the sound that has changed; the lyrics have changed too. Data scientists at Lawrence Technological University in Michigan used quantitative analytics to study changes in the lyrics of popular music over seven decades, from the 1950s to 2016. The results showed that the expression of anger and sadness in popular music has increased gradually over time, while the expression of joy has declined.

In a research paper published in the most recent issue of the Journal of Popular Music Studies, Kathleen Napier and Lior Shamir analyzed the lyrics of more than 6,000 Billboard Hot 100 songs. The Billboard Hot 100 lists the most popular songs of each year and reflects the preferences of music fans. In the past, songs were ranked mainly by record sales, radio broadcasting, and jukebox plays; in recent years the ranking has also incorporated other popularity indicators, such as streaming and social media, to reflect changes in music consumption.

The tones expressed in each song were analyzed by applying automatic quantitative sentiment analysis. Automatic sentiment analysis associates each word or phrase in the song with the set of tones that it expresses. The combination of the tones expressed by all words and phrases of the lyrics determines the sentiment of that song. The sentiments of all Billboard Hot 100 songs in each year are then averaged, and the yearly averages make it possible to measure whether the expression of each sentiment increased, decreased, or remained constant.
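
The aggregation step described above is easy to sketch. The following toy example, assuming a purely hypothetical word-to-tone lexicon with invented scores (it is not the study's actual sentiment tool), shows how word-level tones can be combined into song-level and then year-level averages:

```python
# Minimal sketch of lexicon-based sentiment averaging. The lexicon, tone
# names, and scores below are invented for illustration only.
TONE_LEXICON = {
    "love": {"joy": 0.9},
    "baby": {"joy": 0.4},
    "cry":  {"sadness": 0.8},
    "hate": {"anger": 0.9},
    "lose": {"sadness": 0.5, "fear": 0.3},
}

def song_sentiment(lyrics: str) -> dict:
    """Average each tone's score over all words in the lyrics."""
    totals: dict = {}
    words = lyrics.lower().split()
    for word in words:
        for tone, score in TONE_LEXICON.get(word, {}).items():
            totals[tone] = totals.get(tone, 0.0) + score
    return {tone: total / len(words) for tone, total in totals.items()}

def yearly_averages(songs_by_year: dict) -> dict:
    """Average song-level tone scores across all songs charted in a year."""
    out = {}
    for year, lyrics_list in songs_by_year.items():
        sentiments = [song_sentiment(lyrics) for lyrics in lyrics_list]
        tones = {t for s in sentiments for t in s}
        out[year] = {t: sum(s.get(t, 0.0) for s in sentiments) / len(sentiments)
                     for t in tones}
    return out

# Comparing yearly averages across decades reveals trends in each tone:
print(yearly_averages({1955: ["love love baby", "cry baby"],
                       2015: ["hate lose", "cry hate hate"]}))
```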

The analysis showed that the expression of anger in popular music lyrics has increased gradually over time. Songs released during the mid 1950s were the least angry, and the anger expressed in lyrics increased gradually until peaking in 2015. The analysis also revealed some variations. Songs released between 1982 and 1984 were less angry than those of any other period except the 1950s. In the mid 1990s songs became angrier, and the increase in anger was sharper during that time than in previous years.

The expression of sadness, disgust and fear also increased over time, although the increase was milder compared to the increase in the expression of anger. Disgust increased gradually, but was lower in the early 1980s and higher in the mid and late 1990s. Popular music lyrics expressed more fear during the mid 1980s, and the fear decreased sharply in 1988. Another sharp increase in fear was observed in 1998 and 1999, with a sharp decrease in 2000. The study also showed that joy was a dominant tone in popular music lyrics during the late 1950s, but it decreased over time and became much milder in recent years. An exception was observed in the mid 1970s, when the joy expressed in lyrics increased sharply.

The study shows that the tones expressed in popular music change over time, and that the change is gradual and consistent, with a few exceptions. Since the researchers analyzed the most popular songs of each year, the study does not show merely that music has changed, but that the preferences of music consumers have changed over time. While music fans preferred joyful songs during the 1950s, modern music consumers are more interested in songs that express sadness or anger. "The change in lyrics sentiments does not necessarily reflect what the musicians and songwriters wanted to express, but is more related to what music consumers wanted to listen to in each year," says Lior Shamir, who participated in the research.

Credit: 
Lawrence Technological University

Worms can process rice straw, scientists discover

image: An experimental set of mesocosms that helped the scientists understand the role of earthworms in the process of rice straw decomposition.

Image: 
Courtesy of Andrey Zaitsev

A team of scientists from I.M. Sechenov First Moscow State Medical University (MSMU) discovered that earthworms efficiently process rice straw, enriching the soil with organic matter, increasing its fertility, and removing the need to burn straw, which takes a long time to decompose naturally. The results of the study were published in the European Journal of Soil Biology. The work was supported by a grant from the Russian Science Foundation.

Rice is a staple food for the majority of the Earth's population. Demand for it is constantly growing, and production increases annually. Harvesting and grain hulling leave a considerable amount of crop residue that is not naturally consumed by herbivorous animals and is therefore burned. Burning causes the emission of greenhouse gases (carbon dioxide and methane) and black carbon, which negatively affect the climate. It is therefore important to develop a more eco-friendly method for recycling rice straw.

The authors of the study collected soil samples from three rice-growing regions of Russia: Krasnodarsky Krai, Kalmykia, and Primorsky Krai. It turned out that the rice fields in all three regions lacked earthworms, so the scientists selected the cheapest species to test: Eisenia fetida, which is cultivated in Russia on an industrial scale for fishing bait and humus production. They wanted to find out whether these worms could process rice straw.

The team created several mesocosms (closed enclosures imitating natural conditions) to study how the emission of carbon dioxide and methane and the input of organic carbon depend on the type of soil, the presence of straw, and the number of earthworms. Each system contained 1 kg of soil, with or without rice straw. Finally, different numbers of earthworms were placed into each mesocosm to find out how their activity would influence the concentration of greenhouse gases and the input of carbon into the soil.

It turned out that adding rice straw to any type of soil increases carbon dioxide emission by at least a factor of three. The relationship between the emitted CO2 and the number of earthworms varied depending on the soil type. For example, the emissions barely changed when worms were added to the mesocosms with soil from Primorsky Krai. In the other soil types with rice straw, the emission of carbon dioxide increased considerably after the worms were added. The effect was strongest in the soils from Kalmykia at a density of six worms per mesocosm, and in those from Krasnodarsky Krai at four worms per mesocosm. At the same time, the increase in the concentration of organic carbon in the Krasnodar soils turned out to be 10,000 times greater than the carbon lost through emissions. When bound in the soil, organic carbon improves fertility and erosion resistance; when burned, it produces carbon dioxide or black carbon. The worms did not influence methane emissions.

"This work is of practical importance. We've found a way to efficiency process rice straw instead of burning it which is currently the most common practice worldwide. Along with increasing sustainability and climate safety of rice growing, it reduces the risks for the environment and mankind associated with agricultural burns as causes of fires and atmospheric pollution, including that with carcinogenic substances. Moreover, adding earthworms to soils will increase the quality, fertility, and soil health of the fields and reduce the risk of erosion due to binding with organic substances," added Andrey Zaitsev, a researcher from MSMU and a senior researcher at the laboratory of soil ecosystem functions, A.N. Severtsov Institute of Ecology and Evolution of the Russian Academy of Sciences.

Credit: 
Sechenov University

Silicones obtained at low temperatures with the help of air

Russian scientists have developed a new method for synthesizing para-carboxyphenylsiloxanes, a unique class of organosilicon compounds. The resulting compounds are promising for creating self-healing, electrically conductive, heat- and frost-resistant silicones.

Organosilicon compounds, and especially materials based on them such as silicones, are among the most in-demand products. Their outstanding ability to withstand extreme thermal and mechanical stress makes it possible to use silicones for sealing and protecting many components in aircraft and rocket construction. The strength and durability of silicones enable their application in medicine, the food industry, and many other fields of human life.

Though many silicone materials have already been created and their fields of application have been found, scientists believe that their potential has not been fully revealed to date. This is due to one of the central problems in the modern chemistry of silicones, namely, the synthesis of organosilicon products with a "polar" functional group (-C(O)OH, -OH, -NH2, etc.) in an organic substituent. Such a moiety allows one to introduce other substituents much more easily, to tune the ability of a compound to repel water or to form stable aqueous emulsions, and to impart other "super-capabilities" to a material. This opens unique prospects for the subsequent modification of these compounds in order to synthesize new copolymers, self-healing and conductive materials, and compounds for the storage and delivery of drugs and fuels. Even a small modification of a compound would also allow one to solve the problems of low mechanical strength and "incompatibility" of silicones with polymers such as polyesters.

With rare exception, the classical methods for synthesizing silicones (first monomers, then polymers) do not allow one to obtain functional organosilicon substrates. As a rule, these methods are either applicable to a narrow range of substrates or are time-consuming, expensive and involve multiple stages.

In recent years, there have been an increasing number of publications on the oxidation and functionalization of organic compounds involving molecular oxygen, i.e., a "green", simple and available oxidant. A number of industrially important processes already rely on this approach. However, despite all the advantages, these processes generally feature low selectivity and require rather drastic conditions (elevated temperature, high pressure, etc.).

A team of scientists from the A.N. Nesmeyanov Institute of Organoelement Compounds of the Russian Academy of Sciences (INEOS RAS), in collaboration with colleagues from across the Russian Federation, used a combination of metallic and organic reaction accelerators (catalysts), which allowed them to solve these problems: the reaction conditions were milder and high process selectivity was achieved. The reaction proceeds with molecular oxygen, in the liquid phase and at temperatures slightly above room temperature, whereas many industrial processes are performed in the gas phase under harsh conditions. Even now, the method can be scaled to produce gram quantities of a required compound. According to the scientists, this is very important, since chemists can rarely offer a reaction that is ready for applied use right away.

"Thus, we suggested a highly efficient method based on aerobic metal- and organo-catalyzed oxidation of starting para-tolylsiloxanes to para-carboxyphenylsiloxanes. This approach is based on "green", commercially available, simple and inexpensive reagents and employs mild reaction conditions: molecular oxygen as the oxidant, a process temperature of 40-60°?, atmospheric pressure", - says Dr. Ashot Arzumanyan, the leader and one of the contributors of this study, Senior Scientist of the Laboratory of organosilicon compounds named after K.A. Andrianov, INEOS RAS.

Furthermore, it has been shown that the suggested method is applicable to the oxidation of organic derivatives (alkylarenes) to the corresponding acids and ketones, as well as hydridosilanes to silanols (and/or siloxanols).

The scientists also studied whether materials can be obtained on the basis of para-carboxyphenylsiloxanes, including an analogue of such an industrially important polymer as PET that is used, for example, to bottle water and other beverages, to obtain fibers for clothes and for technical applications, etc.
"The compounds that we obtained open prospects for the creation of self-healing, electrically conductive, heat- and frost-resistant and mechanically strong silicones. They can also serve as a base for developing new hybrid materials that may find use in catalysis, drug delivery, fuel storage, and in other fields of science, technology and medicine", Ashot notes.

Credit: 
AKSON Russian Science Communication Association

How Russian gender inequality is reproduced on social media

Researchers from the Higher School of Economics analyzed 62 million public posts on VK, the most popular Russian social networking site, and found that both men and women mention sons more often than daughters. They also found that posts featuring sons receive 1.5 times more likes. The results have been published in the Proceedings of the National Academy of Sciences of the United States of America.

In recent years, social media have become an important source of data for researchers. In particular, they allow researchers to observe everyday social interactions and to gain insights into the reproduction of gender inequality.

Elizaveta Sivak and Ivan Smirnov used public posts about children made by 635,665 users from Saint Petersburg on the most popular Russian social networking site VK. Common topics for such posts included celebrations of different achievements and important events (19%); expression of love, affection, and pride (26%); and reports on spending time with the children (27%).

The results demonstrate gender imbalance: there are 20% more posts about sons than about daughters on social media. Sons are more often mentioned by both men and women. This difference cannot be explained by the sex ratio at birth alone (106 boys to 100 girls in Russia), thus indicating gender preference in sharing information about children. Previous studies have shown that children's books are dominated by male central characters; in textbooks, females are given fewer lines of text; and in movies, on average, twice as many male characters as female ones are in front of the camera. Gender imbalance in public posts may send yet another message that girls are less important and interesting than boys and deserve less attention.
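
As a quick check on that claim, the surplus implied by the birth sex ratio alone can be worked out in a couple of lines (the figures below are the ones quoted in the paragraph above):

```python
# Back-of-the-envelope check: if posting rates were identical for sons and
# daughters, the birth sex ratio alone (106 boys per 100 girls) would yield
# only ~6% more posts about sons, far below the observed 20%.
boys_per_100_girls = 106
expected_surplus = boys_per_100_girls / 100 - 1   # 0.06, i.e. 6%
observed_surplus = 0.20                           # reported in the study
print(f"expected from sex ratio alone: {expected_surplus:.0%}")
print(f"observed on VK:                {observed_surplus:.0%}")
```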

The researchers also found that posts about sons receive, on average, 1.5 times more likes. Mothers' posts about daughters receive an average of 6.7 likes from women and 1.1 likes from men; their posts about sons get 10.7 likes from women and 1.8 from men. Fathers' posts about daughters receive 5.3 likes from women and 2.6 from men; their posts about sons receive 6.7 likes from women and 3.7 from men.

This means that women like posts more often than men do, that women prefer posts written by women while men prefer those written by men, and, most importantly, that both women and men more often like posts mentioning sons.

'This imbalance could send a signal that girls are less significant than boys. The fact that posts about sons get more likes only enhances this effect,' says Ivan Smirnov, the co-author of the paper and the head of the Computational Social Science Lab at HSE.

'The gender preference in sharing information about children may seem quite harmless compared with other layers of gender disparity. However, given the widespread popularity of social media, even moderate bias might accumulate. Millions of users are exposed to a gender-biased news feed on a daily basis and, without even noticing, receive the reaffirmation that paying more attention to sons is normal.'

Credit: 
National Research University Higher School of Economics

New CDC study examines effects of smoking status on ART clinical outcomes

image: Journal of Women's Health is a core multidisciplinary journal dedicated to the diseases and conditions that hold greater risk for or are more prevalent among women, as well as diseases that present differently in women.

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, January 23, 2019--A new Centers for Disease Control and Prevention (CDC) study found that smoking in the three months prior to assisted reproductive technology (ART) treatment was associated with higher adjusted odds of cycle cancellation resulting in no embryo transfer and cancellations before fresh oocyte retrieval or frozen embryo transfer. Associations between smoking and selected ART clinical outcomes are described in an article published in Journal of Women's Health, a peer-reviewed publication from Mary Ann Liebert, Inc., publishers.

This was a retrospective cohort study which used National ART Surveillance System data on ART treatments performed between 2009 and 2013 in all U.S. states and territories. In "Smoking and Clinical Outcomes of Assisted Reproductive Technologies," Karilynn Rockhill, MPH, CDC (Atlanta, GA) and Oak Ridge Institute for Science and Education (Tennessee) and colleagues from the CDC found that more than 12,000 ART cycles during this period were potentially exposed to smoking. Although smoking increased the odds of cycle cancellation, associations with other clinical outcomes were not significant.

"As smoking may reduce the chance of becoming pregnant with ART, it is important that providers discuss with women the effects of smoking on fertility and pregnancy outcomes," states Susan G. Kornstein, MD, Editor-in-Chief of Journal of Women's Health and Executive Director of the Virginia Commonwealth University Institute for Women's Health, Richmond, VA. "Women should be offered information about effective smoking cessation interventions and support to help them quit smoking before they start ART treatment."

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

Emergency caesareans put new mothers at higher risk of developing postnatal depression

A new study has revealed that first-time mothers who give birth via unplanned caesarean section are 15% more likely to experience postnatal depression.

The author of the study is calling for more mental health support for women whose babies are delivered via emergency caesarean section, or C-section - a surgical procedure usually carried out because of complications during labour.

The major new study, from the University of York, provides new evidence that emergency C-sections put new mothers at greater risk of experiencing mental health problems after giving birth.

The number of C-sections performed has increased dramatically in many developed countries over the past few decades.

Out of the 165,000 births in England each year, there are currently around 25,000 unplanned caesarean deliveries.

Author of the study, Dr Valentina Tonei from the Department of Economics at the University of York, said: "The findings of this study are striking because they provide evidence of a causal relationship between emergency C-sections and postnatal depression.

"This has important implications for public health policy, with new mothers who give birth this way in need of increased support.

"The effects of postnatal depression can be far reaching, with previous studies suggesting that it can have a negative effect, not just on the health of the mother and her relationships with her partner and family members, but also on the baby's development. Mothers who experience postnatal depression are also less likely to go on to have more children."

While previous studies have often been based on small sample sizes from single hospitals, the study looked at data from 5,000 first-time mothers from the UK Millennium Cohort Study, a representative study of the UK population.

The study isolated the effects of emergency C-section on mothers' psychological well-being in the first nine months after delivery by taking other factors, such as differences in the resource and staffing levels in hospitals and the mental health history of the mothers, into account.

By focusing on first-time mothers, the effects of previous birthing experiences were also eliminated.

Dr Tonei added: "Unplanned caesareans may have a particularly negative psychological impact on mothers because they are unexpected, usually mentally and physically stressful and associated with a loss of control and unmatched expectations.

"While the financial costs associated with this surgical procedure are well recognised, there has been less focus on the hidden health costs borne by mothers and their families. We hope this new evidence brings the impact on mothers' mental health into the spotlight."

Credit: 
University of York

Forest soils need many decades to recover from fires and logging

image: Victoria's Mountain Ash forests in Australia.

Image: 
Tabitha Boyer, ANU

A landmark study from The Australian National University (ANU) has found that forest soils need several decades to recover from bushfires and logging - much longer than previously thought.

Lead researcher Elle Bowd from the ANU Fenner School of Environment and Society said the team found forest soils recovered very slowly over many years from these events - up to 80 years following a bushfire and at least 30 years after logging.

"We discovered that both natural and human disturbances can have incredibly long-lasting effects on forest soils that could impact plant communities and ecosystem function," said Ms Bowd, who is the lead author of the ANU team's Nature Geoscience paper.

Professor David Lindenmayer, also from the ANU team, said scientists had not known how long soils were impacted by bushfires and logging prior to this study.

"We thought forests could recover within 10 or 15 years, at most, after these sorts of events," said Professor Lindenmayer from the ANU Fenner School of Environment and Society.

"Almost 99 per cent of Victoria's Mountain Ash forests have either been logged or burnt in the past 80 years, so these forests are facing a huge uphill battle to restore themselves to their former glory."

The team collected 729 soil cores from 81 sites exposed to nine different disturbance histories in the Victorian Mountain Ash forests.

These forests generate nearly all of the water for the five million people living in Melbourne, store large amounts of biomass carbon and support timber, pulpwood and tourism industries.

"It's very likely that other forests around the world are facing similarly big challenges in terms of soil recovery following bushfires and logging," Ms Bowd said.

"Soil temperatures can exceed 500 degrees Celsius during high-intensity fires and can result in the loss of soil nutrients.

"Logging can expose the forest floor, compact soils, and alter soil structure, reducing vital soil nutrients. These declines are more severe in areas that have experienced multiple fires and logging."

Big, old trees in these forests take more than a century to recover from disturbances, and forest soils may take a similar amount of time to be restored.

"To best preserve the vital functions soils have in forests, land management and policy need to consider the long-lasting impacts of disturbances on forest soils, and reduce future disturbances such as clearcut logging."

Credit: 
Australian National University

Without habitat management, small land parcels do not protect birds

image: Some of the birds that were captured and banded in the research included, top row from left, Carolina wren and chestnut-sided warbler and, bottom row from left, Baltimore oriole, blue jay and common yellowthroat.

Image: 
Julian Avery

Designating relatively small parcels of land as protected areas for wildlife with no habitat management -- which has frequently been done in urban-suburban locales around the world -- likely does not benefit declining songbird species, according to a team of researchers who studied a long-protected northeastern virgin forest plot.

They reached their conclusion after comparing bird population data collected in the 1960s in Hutcheson Memorial Forest, a unique, uncut 40-acre tract owned by Rutgers University in central New Jersey, to bird numbers found there in recent years. In the 1950s, when Rutgers received the land, a deed restriction explicitly prohibited habitat and wildlife management.

This single site is typical of protected areas established worldwide in the last decade, researchers noted, because 68 percent of the 35,694 terrestrial protected areas added to the World Protected Area Network from 2007 to 2017 are equal in size to or smaller than Hutcheson Memorial Forest.

In the study, researchers tracked bird compositional changes using a "within-season repeat sampling protocol." Using the same locations and methods employed 40 years before to collect birds, researchers documented species gains and losses through time. Using national Breeding Bird Survey data, they also contrasted songbird numbers in the protected area to the surrounding region's bird population.

The researchers found that nearly half the species found in the forest at the time of initial protection are now gone, and that yearly forest species composition is highly dynamic. Ground nesting and migratory species were more likely to be missing than were canopy breeders, cavity nesters and year-round residents.

Regional population declines explained differences in local extinction probability across species, indicating that the study population, to some extent, mirrored larger regional dynamics, researchers said. However, the abundance of a substantial number of species declined within the forest while experiencing no regional declines, or even regional increases, in abundance.

"We believe this study provides a nice window into what is happening in small, urban-suburban protected bird areas around the world," said research team member Julian Avery, assistant research professor of wildlife ecology and conservation in Penn State's College of Agricultural Sciences. "It gave us an unprecedented look into how the bird community has changed through time."

When Avery was a doctoral student at Rutgers in 2007, he played a key role in getting the bird-banding station restarted in Hutcheson Memorial Forest, duplicating activities conducted during the 1960s. After he graduated and joined Penn State's faculty, researchers continued to capture and recapture birds with mist nets in the exact same spots with the same frequency as was done 40 years before.

Species that were often present in Hutcheson Memorial Forest in the 1960s but are no longer present include the ovenbird, brown thrasher and red-eyed vireo. Some species that were less common historically are now very common, such as the common yellowthroat and hairy woodpecker. Species commonly found in the research area both then and now include eastern towhee, gray catbird, red-bellied woodpecker and Carolina wren.

The findings of the research, recently published in Biodiversity and Conservation, suggest that even with protected status, small forest fragments may not provide the conservation benefits that protection is meant to provide. A major reason why, Avery believes, is that in many cases with habitat protection there is a subsequent lack of wildlife and habitat management.

As a result, over-abundant deer can overbrowse and decimate the forest understory, and invasive plant species also begin to colonize, eliminating needed cover for ground-nesting birds. The lack of sound forestry practices, invasive plant removal and prescribed fire, in the long run, has the consequence of degrading habitat -- and bird populations suffer.

"We have learned that an active approach is much better for the habitat and the birds -- if you don't do something, provide some management, then you prevent the existing plant community from replacing itself," Avery said. "With the Hutcheson Memorial Forest, the donors didn't want anybody to harvest the ancient trees or change the forest. That sentiment was well-meaning, but when the deer are eating the native plants such as oak and hickory and avoiding the invasives, through time, we are not getting replacement of the desired, original forest. Habitat conservation needs effective management."

Credit: 
Penn State

Children's Respiratory Symptoms Can Last Up to Three Weeks

Symptoms of children's respiratory tract infections, including runny nose, dry cough, and sore throat, can seem never-ending. According to new research, it takes 23 days for 90 percent of children to recover from respiratory tract infection symptoms. Researchers used a novel online study design to follow 485 children in 331 families in Bristol, England as they fell ill with respiratory tract infections (n=197 respiratory tract infections) without requiring them to visit their family physician. Overall, median duration of symptoms was 9 days. For children three years of age or younger, median symptom duration was 11 days compared to seven days for older children. Children whose parents reported lower respiratory tract symptoms (such as wet cough and wheeze) had median symptom duration of 12 days compared to eight days for those who had only upper respiratory tract symptoms (such as runny nose and sore throat). Among children with only upper respiratory tract symptoms, the most persistent symptom was runny nose, while the fastest symptom to resolve was earache. For children with at least one lower respiratory tract symptom, all symptoms persisted for three weeks; runny nose and wet cough were the most severe symptoms. One in 12 parents sought help from their family physician. These findings, the authors suggest, could inform primary care practice, public health interventions, and, ultimately, parents regarding the concerning symptoms for which they should consult their primary care physician.

Respiratory Tract Infections in Children in the Community: Prospective Online Inception Cohort Study

Professor Alastair D. Hay, et al
University of Bristol, Bristol, United Kingdom

Repeated Inactivated Flu Vaccine Does Not Have Negative Effects

Repeated inactivated influenza vaccine immunization in children with pre-existing medical conditions has no negative impact on, and may even enhance, long-term protection against respiratory illness. A study from the Netherlands examined data for 4,183 children aged 6 months to 18 years with pre-existing conditions who received inactivated influenza vaccine at least once from 2004-2015. Adjusted estimates showed lower odds for respiratory illness in immunized children with prior inactivated influenza vaccine compared to children immunized for the first time. These findings suggest that there is residual protection from earlier inactivated influenza vaccinations. This is particularly relevant for children with pre-existing medical conditions who receive inactivated influenza vaccines repeatedly during childhood.

Impact of Repeated Influenza Immunization on Respiratory Illness in Children With Preexisting Medical Conditions

Marieke L.A. de Hoog, PhD, et al
University Medical Center Utrecht, Utrecht, The Netherlands

Burnout Predicts Turnover in Clinicians, not Staff

Burnout is alarmingly high among primary care clinicians and staff, but is it related to turnover of personnel? New research finds that burnout contributes to turnover among primary care clinicians, but not staff. A study of 740 primary care clinicians and staff in two health systems compared 2013-14 survey data on burnout and employee engagement (the likelihood of recommending their clinic as a place to work) with 2016 employment roster data. Fifty-three percent of both clinicians and staff reported burnout, while only one-third (32 percent of clinicians and 35 percent of staff) reported feeling highly engaged in their work. By 2016, 30 percent of clinicians and 41 percent of staff no longer worked in primary care in the same healthcare system. Clinicians who reported burnout in 2013-14 were more likely to leave the health system by 2016, taking into account their clinical time and length of time they had worked in the system. In regression models, neither burnout nor employee engagement predicted turnover for staff. The high rates of turnover, the authors suggest, have important implications for patient care. Continuity of care, which is a fundamental principle of primary care, is difficult to maintain in environments with frequent clinician and staff turnover. Furthermore, turnover is expensive for health care organizations. Although reducing burnout may help decrease rates of turnover among clinicians, the authors urge health care organizations and policymakers concerned about primary care employee turnover to understand its multifactorial causes and develop effective retention strategies for clinicians and staff.

Burnout and Health Care Workforce Turnover

Rachel Willard-Grace, MPH, et al
University of California, San Francisco, San Francisco, California

Physicians More Satisfied When They Can Address Patients' Social Needs

When primary care practices try to help patients address social and economic needs, their efforts might have an unexpected benefit. According to a new study, primary care physicians who practice in a setting prepared to manage patients with social needs have significantly higher job satisfaction than other physicians. Among 890 US physicians responding to the Commonwealth Fund's 2015 International Health Policy Survey of Primary Care Doctors, those who reported working in a practice able to manage patients' social needs had significantly higher job satisfaction, were more satisfied with amount of time spent with patients, and thought that the quality of medical care patients receive had improved. Feeling that it was easy to coordinate patients' care with social services or other community resources was also significantly associated with higher job satisfaction, personal and relative income satisfaction, satisfaction with amount of time spent with patients, and outlook on patients' quality of medical care. The authors call on health systems to include clinician satisfaction, which is closely tied to issues of burnout and retention, in their calculations of the costs and benefits of responding to patients' social needs.

Practice Capacity to Address Patients' Social Needs and Physician Satisfaction and Perceived Quality of Care

Matthew S. Pantell, MD, MS, et al
University of California, San Francisco, San Francisco, California

Psychological Therapies and Gradual Tapering Aid in Discontinuing Antidepressants

When discontinuing antidepressants, the risk of relapse or recurrence is significantly reduced by combining cognitive behavior therapy with gradual tapering of the medication. An analysis of existing research found that, at 2 years, the risk of relapse or recurrence was lower with cognitive behavior therapy plus tapering (15-25 percent) than with clinical management plus tapering (35-80 percent). Relapse/recurrence rates were similar for mindfulness-based cognitive therapy with tapering and for maintenance antidepressants. In two studies that prompted primary care clinicians to initiate discontinuation with antidepressant tapering guidance, six percent and seven percent of patients discontinued, compared to eight percent with usual care. Six studies of psychological or psychiatric treatment plus tapering reported cessation rates of between 40 percent and 95 percent. Two studies reported a higher risk of discontinuation symptoms with abrupt termination of medication. The authors note that cognitive behavior therapy seems to improve discontinuation rates compared to primary care clinician management of tapering with brief guidance; however, patient access to such therapy may be limited. They call for exploration of psychologically informed digital support for discontinuing antidepressants to complement care provided by primary care clinicians.

Managing Antidepressant Discontinuation: A Systematic Review

Professor Tony Kendrick, et al
University of Southampton, Southampton, United Kingdom

Computerized Adaptive Testing Can Screen for Depression and Anxiety in Primary Care

Computerized adaptive testing is a valid tool for screening for major depressive disorder in primary care and offers a format that is well received by patients. New research compared computerized adaptive tests, which personalize assessments by adaptively varying questions based on previous responses, with widely used paper screening tools (Patient Health Questionnaire-9, Patient Health Questionnaire-2, and Generalized Anxiety Disorder-7) and with a semi-structured interview, which is generally considered the gold standard in psychiatric assessment. The diagnostic accuracy of the Computerized Adaptive Diagnostic Test for Major Depressive Disorder was similar to the PHQ-9 and higher than the PHQ-2. Compared to the interview, the accuracy of the Computerized Adaptive Test-Anxiety Inventory was similar to the Generalized Anxiety Disorder-7 for assessing anxiety severity. Participants preferred using tablet computers (53 percent) over the interview (33 percent) and paper-and-pencil questionnaires (14 percent). The majority of participants (64 percent) rated the paper-and-pencil questionnaire as their least preferred screening method. The widespread use of electronic health records in primary care presents new opportunities to leverage electronic tools for screening, the authors suggest, while multidimensional item response theory, used in computerized adaptive testing, can increase the efficiency of assessing mental and physical health.
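
To make the adaptive idea concrete, here is a minimal sketch of how such a test can select each next question. It assumes a simple one-parameter (Rasch) item response model and an invented four-item bank; it is not the actual item bank or algorithm of the instruments named above, which use larger, multidimensional models:

```python
import math

# Hypothetical item bank: question text -> difficulty (in logits).
ITEM_BANK = {
    "I felt down or hopeless":     -1.0,
    "I had trouble sleeping":      -0.3,
    "I felt worthless":             0.8,
    "I had thoughts of self-harm":  1.8,
}

def prob_endorse(theta: float, difficulty: float) -> float:
    """Rasch model: probability of endorsing an item at severity theta."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def next_item(theta: float, asked: list) -> str:
    """Choose the unasked item with maximum Fisher information at theta;
    for the Rasch model, information equals p * (1 - p)."""
    def info(item):
        p = prob_endorse(theta, ITEM_BANK[item])
        return p * (1 - p)
    return max((i for i in ITEM_BANK if i not in asked), key=info)

def update_theta(theta: float, difficulty: float, answer: int) -> float:
    """Crude severity update: step toward the evidence (answer is 0 or 1)."""
    return theta + 0.5 * (answer - prob_endorse(theta, difficulty))

# Simulated respondent who endorses the first two items offered:
theta, asked = 0.0, []
for answer in (1, 1, 0, 0):
    item = next_item(theta, asked)
    asked.append(item)
    theta = update_theta(theta, ITEM_BANK[item], answer)
print(f"estimated severity after {len(asked)} items: {theta:+.2f} logits")
```

Real instruments estimate severity by maximum likelihood or Bayesian updating and stop once the estimate is precise enough; the loop above only illustrates the select-answer-update cycle that makes the testing adaptive.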

Validation of the Computerized Adaptive Test for Mental Health in Primary Care

Neda Laiteerapong, MD, et al
The University of Chicago, Chicago, Illinois

New Family Physicians Feel Better Prepared but Report a Narrower Scope of Practice

Recent family medicine residency graduates feel better prepared to provide a variety of procedures and clinical services than their predecessors but report a narrower scope of practice. These findings are the result of a University of Washington survey of family medicine graduates in affiliated programs in five states, with a focus on two cohorts: those who graduated residency between 2010 and 2013 (n=408) and an earlier cohort who graduated between 1996 and 1999 (n=293). The survey addressed 26 services and procedures that graduates might provide in their practice and how prepared they feel to provide those services. Researchers found that the earlier cohort had a similar or significantly higher proportion of graduates practicing almost all listed procedures and services compared to the later cohort; only OB ultrasound and end-of-life care were more common among more recent graduates. The pattern of findings was reversed when comparing graduates who felt more than adequately prepared for practice; a greater proportion of those in the later cohort reported feeling prepared in most areas compared to earlier graduates. For example, 52 percent of the earlier cohort reported providing nursing home care, compared to 33 percent of the later cohort, but 59 percent of the later cohort felt more than adequately prepared to provide such care, compared to 27 percent of earlier graduates. According to the authors, these findings suggest that training has improved over the last decade, but that scope of practice is declining for reasons unrelated to training. Changes are likely due to a variety of factors, including the evolution of clinical practice and differences in practice size and type, including a trend toward larger, multispecialty groups in which family physicians may not be required (or allowed) to practice a wide array of procedures. The decline in scope of practice, the authors state, has negative implications for the breadth and richness of physician practice and for patients' access to and quality of care. According to the authors, family medicine educators may need to adapt their training to a new generation of practice realities and physician preferences.

Changes in Preparation and Practice Patterns Among New Family Physicians

Amanda K.H. Weidner, MPH, et al
University of Washington, Seattle, Washington

Ultrasound Is an Increasingly Important Tool in Family Medicine/General Practice

Family physicians and general practitioners perform ultrasonography for a variety of conditions and with satisfactory levels of accuracy. This is according to an analysis of existing research, the first comprehensive systematic review of the use of ultrasonography by family physicians and general practitioners. Ultrasonography has been used for a variety of conditions, most often in focused abdominal and obstetric scans. The extent of training programs varied from 2 to 320 hours. Competence in some types of focused ultrasound scans could be attained with only a few hours of training. Focused point-of-care ultrasound scans were reported to have higher diagnostic accuracy and to cause less harm than more comprehensive ultrasound scans or screening examinations. In studies assessing quality, participants generally scanned with a satisfactory level of accuracy, with quality depending on the extent of the examination and the anatomical area being scanned. Some focused scans had higher levels of diagnostic accuracy, required less training and were associated with less harm, whereas more extensive examinations were associated with lower quality scans and potential harms. The authors anticipate that point-of-care ultrasound will be increasingly important for family physicians/general practitioners in diagnosis, choice of treatment, and referral. These study results, they note, can help inform curricula and future exploration of the use of point-of-care ultrasound in family medicine/general practice.

Point-of-Care Ultrasound in General Practice: A Systematic Review

Camilla Aakjær Andersen, MD, et al
Aalborg University, Aalborg East, Denmark

Could Integration of Social and Medical Care Worsen Health and Increase Health Inequity?

At a time when health care is increasingly focused on the relationship between patients' social and medical needs, a provocative new essay proposes that this focus may have unintended consequences. In fact, the authors state, there is a risk that some of these efforts could worsen health and widen health inequities. Examples include attempts by the Centers for Medicare & Medicaid Services (CMS) to encourage states to explore work requirements as a condition for Medicaid eligibility, with a rationale based on the health benefits of work and work promotion. According to the authors, such efforts could reduce access to health care by serving as a disincentive to Medicaid enrollment. Other examples include the growing use of social data for commercial health care purposes, which could augment insurance coverage bias and exclusion; and new research on how social deprivation affects biological susceptibility to mental and physical illness, which could shift issues like poverty from the social to the medical realm. To address these issues, the authors call for, "A new dialogue...about both the opportunities and potential consequences of bringing information about patients' social circumstances into a market-based health care system."

Integrating Social and Medical Care: Could it Worsen Health and Increase Inequity?

Laura M. Gottlieb, MD, MPH, et al
University of California, San Francisco, San Francisco, California

Incorporating Community Organizing Into Clinical Practice

A family physician reflects on her journey from community organizer to primary care clinician. As a clinician, she remains inspired by community organizing--a model for driving social change and improving public health--but has found it challenging to incorporate into clinical practice. She proposes a model for how clinicians and practices can proactively partner with community organizing groups and facilitate referrals to help patients directly engage in transforming the root causes of their health challenges. This model shifts the focus from the patient as an individual agent of change to the community and offers important lessons to clinicians interested in community health equity.

When "Patient-Centered" is Not Enough: A Call for Community-Centered Medicine

Juliana E. Morris, MD, EdM
University of California, San Francisco, San Francisco, California

The Past, Present and Future of Research in Primary Care

Two articles in this issue of Annals of Family Medicine explore the history of research in family medicine and primary care, while a third considers its future.

Low- and Middle-Income Countries Establish Primary Care Research Priorities

In low- and middle-income countries, primary care research priorities include integration of care at the public/private interface, secondary care, community services, and models of care and finance to promote equitable access to care. These priorities were developed by a three-round modified Delphi expert panel of primary care practitioners and academics in low- and middle-income countries sampled from global networks using web-based surveys. They generated an initial list of more than 1,000 research ideas, which researchers synthesized into 36 organizational and 31 finance questions. The final four prioritized questions on organization address primary/secondary care transitions, horizontal integration within a multidisciplinary team, integration of private and public sectors, and ways to support successfully functioning primary health care teams. Corresponding finance questions address payment systems to increase access and availability of primary care, mechanisms to encourage governments to invest in primary care, the ideal proportion of a health care budget devoted to primary care, and factors to improve workforce distribution. Panelists have developed country-specific research implementation plans for prioritized questions, which will be presented to potential research funders.

Primary Care Research Priorities in Low- and Middle-Income Countries

Professor Felicity Goodyear-Smith, et al
University of Auckland, Auckland, New Zealand

Research Publications About Primary Care Have Increased But Remain a Small Proportion of Total

Since 1974, the number of research publications addressing primary care has increased, but still represents a small proportion of publications in the MEDLINE database. According to a bibliometric analysis of research output in 21 countries between 1974 and 2017, the United States and the United Kingdom had the highest volume of research publications about primary care, followed by Canada and Australia. There was significant growth in publications from countries in Southern, Eastern and Western Europe. During the same time period, the United Kingdom and Australia had the largest share of publications in primary care among all publications that appeared in MEDLINE. When compared to the total number of MEDLINE publications in 2017, however, primary care publications still represented only a small proportion of the total. The authors suggest that examining factors associated with increased research output may help define priorities in primary care research.

Development of Primary Care Research in North America, Europe, and Australia From 1974 to 2017

Gladys Ibanez, MD, PhD, et al
INSERM, Institut Pierre Louis d'épidémiologie et de Santé Publique, Paris, France

Early Attitudes Toward Research in Family Medicine Reflected the Specialty's Countercultural Roots

While family medicine research has experienced tremendous growth in the past five decades, research was not a priority when the specialty was established in 1969. An examination of archival and secondary sources suggests that the low priority placed on research in family medicine's early years was due to internal and external factors, including family physicians' desire to differentiate themselves from the prevailing specialty environment; the lack of a clear identity for the new specialty; the non-laboratory nature of family medicine research; reliance on information from other specialties; and a focus on establishing an academic presence. A strong culture of generalist knowledge is crucial in assuring family medicine's future and strengthening its ability to improve the health of individuals, families, and communities, the author suggests.

Unfinished Business: The Role of Research in Family Medicine

Robin S. Gotler, MA
Case Western Reserve University, Cleveland, Ohio

Innovations in Primary Care

Innovations in Primary Care are brief one-page articles that describe novel innovations from health care's front lines. In this issue:

Expanding the Use of Botulinum Toxin in Primary Care for Chronic Migraine - Primary care and neurology departments partnered to train and credential primary care clinicians to provide patients with botulinum toxin injections for chronic migraine headaches.

Partnering Research Fellows and Clinicians in Practice Settings - A family medicine postdoctoral research fellowship places researchers alongside clinicians in clinical settings, providing on-site research expertise for clinicians wanting to transform clinical questions into research and quality improvement projects.

Credit: 
American Academy of Family Physicians

Not only do Gulf of Aqaba corals survive climate change but their offspring may too

image: Jessica Bellworthy monitoring coral health on the reef.

Image: 
M. Kahana

While coral reefs worldwide are suffering severe damage due to global warming and ocean acidification, there is one place in the world where they are surviving these harsh conditions - the Gulf of Aqaba in the Red Sea.

The researchers who discovered this phenomenon are now reporting a new, promising finding: even when parent corals from the Gulf of Aqaba experience increased temperature and ocean acidification stress during the peak reproductive period, they not only maintain normal physiological function but also achieve the same reproductive output and produce offspring that function and survive as well as those produced under today's ambient water conditions.

While only one species has been tested for one reproductive cycle so far, this is a success story that shows that corals in the Gulf of Aqaba may persist through climate change, according to Jessica Bellworthy, a doctoral student who carried out the study in the laboratory of Prof. Maoz Fine, of Bar-Ilan University's Mina and Everard Goodman Faculty of Life Sciences, along with scientists from Ecole Polytechnique Fédérale de Lausanne (EPFL). The study was recently published in the Journal of Experimental Biology.

Using a Red Sea Simulator developed in Israel, the researchers tightly manipulated water conditions in Prof. Fine's lab at an 80-aquarium, high-tech facility located at the Interuniversity Institute for Marine Science in Eilat to mimic the ocean under a severe climate change scenario.

"To our surprise but also joy, there were no detected differences in the number or the quality of the offspring produced under ocean acidification and warming scenario compared to the present day ambient control conditions," says Bellworthy.

Previous work in Prof. Fine's lab showing the corals' resistance focused solely on their adult life stages. This led the researchers to investigate the impact of climate change and ocean acidification on the reproductive performance of the adult population and on next generation corals.

"Corals around the world are already suffering mass mortality as a result of anomalously high water temperatures," says Bellworthy. "In the Gulf of Aqaba we have noted a population that withstands thermal stress way beyond what is expected in this century. Furthermore, this study begins to show that this thermal resistance not only applies during the adult life phase but also during the early life stages, which are often considered much more vulnerable and sensitive. This adds support to previous suggestions from our lab and others that the Gulf of Aqaba may be a refuge for corals in the face of climate change."

Over the coming year the researchers plan to run a similar experiment combining global and local harmful factors, such as ocean warming and heavy metal pollution, to investigate how local factors affect the ability of coral reefs to withstand climate change.

Credit: 
Bar-Ilan University

Rising temperatures may safeguard crop nutrition as climate changes

image: Soybeans grow under heaters used to mimic futuristic conditions. Their seeds suggest that rising temperatures may actually improve nutrition but decrease yields, according to a new study.

Image: 
Claire Benjamin/RIPE Project

Recent research has shown that rising carbon dioxide levels will likely boost yields, but at the cost of nutrition. A new study in Plant Journal from the University of Illinois, U.S. Department of Agriculture Agricultural Research Service (USDA-ARS), and Donald Danforth Plant Science Center suggests that this is an incomplete picture of the complex environmental interactions that will affect crops in the future--and rising temperatures may actually benefit nutrition but at the expense of lower yields.

Two years of field trials show that increasing temperatures by about 3 degrees Celsius may help preserve seed quality, offsetting the effects of carbon dioxide that make food less nutritious. In soybeans, elevated carbon dioxide levels decreased the amount of iron and zinc in the seed by about 8 to 9 percent, but increased temperatures had the opposite effect.

"Iron and zinc are essential for both plant and human health," said Ivan Baxter, a principal investigator at the Danforth Center. "Plants have multiple processes that affect the accumulation of these elements in the seeds, and environmental factors can influence these processes in different ways, making it very hard to predict how our changing climate will affect our food."

"This study shows that a trade-off between optimizing yields for global change and seed nutritional quality may exist," said co-principal investigator Carl Bernacchi, a scientist at the USDA-ARS, which funded the research along with the USDA National Institute of Food and Agriculture.

The team tested the soybeans in real-world field conditions at the Soybean Free-Air Concentration Experiment (SoyFACE), an agricultural research facility at Illinois that is equipped to artificially increase carbon dioxide and temperature to futuristic levels.

"It's a very controlled way of altering the growing environment of crops in agronomically relevant situations where the plants are planted and managed exactly like other fields in the Midwestern United States," Bernacchi said, who is also an assistant professor of plant biology and crop sciences at Illinois' Carl R. Woese Institute for Genomic Biology.

Next, they plan to design experiments to figure out the mechanisms responsible for this effect.

Credit: 
Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign

Ecological benefits of part-night lighting revealed

Switching off street lights to save money and energy could have a positive knock-on effect on our nocturnal pollinators, according to new research.

A study, led by experts from Newcastle and York universities, has shown that turning off the lights even for just part of the night is effective at restoring the natural behaviour of moths.

The important role moths play in the pollination of plants - potentially even including key food crops such as peas, soybean and oilseed rape - is often overlooked. But recent studies show that moths supplement the day-time work of bees and other pollinating insects.

Night-lighting disrupts nocturnal pollination by attracting moths upwards, away from the fields and hedgerows, so they spend less time feeding and therefore pollinating. But in this latest study, published today in Ecosphere, the team found there was no difference in pollination success between part-night lighting and full darkness.

Dr Darren Evans, Reader in Ecology and Conservation at Newcastle University, who supervised the study, said that at a time when local authorities are switching off the street lamps to save money, this study highlighted the environmental benefits of part-night lighting.

"Artificial light at night is an increasingly important driver of global environmental change and sky brightness is increasing by about 6% a year," he explains.

"Understanding the ecological impact of this artificial light on the ecosystem is vital.

"We know that light pollution significantly alters moth activity and this in turn is disrupting their role as pollinators. But what our study showed was that while full-night lighting caused significant ecological disruption, part-night lighting did not appear to have any strong effect on pollination success or quality."

Street light switch off

Ecological light pollution is increasingly linked to adverse effects on human health and wildlife. Disrupting the natural patterns of light and dark, artificial light "has the potential to affect every level of biological organisation," explains Evans, from cells to whole communities.

In the last decade, many local authorities have changed their street lighting regime in a bid to cut costs and save energy. This includes switching off or dimming the lights at certain times of the night as well as replacing the traditional high-pressure sodium (HPS) bulbs with energy-efficient light-emitting diodes (LEDs).

In the study, the team analysed the impact of a range of scenarios on the pollination of moth-pollinated flowers placed underneath street lights. These included both types of lighting (HPS and LED), run either all night or switched off at midnight. Results were compared to pollination under natural darkness.

They found that, regardless of the type of light, full-night lighting caused the greatest ecological disruption. There was no difference between LED and HPS bulbs in the part-night scenarios, and in both cases the disruption to the plants' pollination relative to full darkness was minimal.

Lead author Dr Callum Macgregor, a Postdoctoral Research Associate from the University of York, said:

"Often, as conservationists, we have to make difficult trade-offs between development and environmental protection.

"However, our study suggests that turning off street lights in the middle of the night is a win-win scenario, saving energy and money for local authorities whilst simultaneously helping our nocturnal wildlife."

Credit: 
Newcastle University

A new low-latency congestion control for cellular networks

image: From left are Professor Kyunghan Lee, Shinik Park, and Junseon Kim in the School of Electrical and Computer Engineering at UNIST.

Image: 
UNIST

Heavy traffic on narrow roads is a leading cause of road congestion, and the same applies to communication networks: when more data is injected than the network has capacity to carry, communication latency rises. This has a devastating effect on 5G-based internet services such as self-driving cars, autonomous robots, and remote surgery.

A research team, led by Professor Kyunghan Lee in the School of Electrical and Computer Engineering at UNIST, has proposed a novel technique that can reduce congestion in the network environment. The team presented their latest work, 'ExLL: An ultra low-latency congestion protocol for mobile cellular network,' at ACM CoNEXT '18, held in Heraklion, Greece, on December 6, 2018.

According to the research team, the new technique is noted for its superior performance over Google's BBR (Bottleneck Bandwidth and Round-trip propagation time), which is regarded as the best low-latency transmission protocol available. The performance evaluation and verification of the protocol were undertaken both at home and abroad in collaboration with Professor Sangtae Ha's research team at the University of Colorado Boulder, which operates a mobile communication network testing facility.

Communication delay occurs when more data is sent than the network can handle: the excess data accumulates in network buffers and transmission is delayed, a phenomenon known as 'bufferbloat.' When bufferbloat occurs in a data center or a mobile communication network, packets are held up and both the efficiency of data exchange and service quality suffer.
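
As a rough illustration of why queued data translates into delay (a back-of-the-envelope model, not a figure from the study), the latency added by a standing queue is simply the backlog divided by the link rate:

$$ d_{\text{queue}} = \frac{Q}{R} $$

For example, Q = 5 MB of backlog on an R = 100 Mbit/s link adds (5 × 8) / 100 = 0.4 seconds of delay--far too much for interactive services such as remote surgery or vehicle control.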

One solution is a 'low-latency transmission protocol': a technique that gauges the state of the network and adjusts the amount of data transmitted so as to keep delay low. To estimate how much data the network can handle (the network bandwidth), conventional protocols probe the network by increasing or decreasing the amount transmitted per unit time and watching how the delay responds. Even Google's BBR, however, has not reached the ideal: sending the maximum data the network allows (the maximum data rate) while keeping delay at its minimum.

"It is difficult to achieve the ideal performance due to the inefficiency of the search itself in the existing technique," said Shinik Park, the first author of the study. "This technology is a smart phone, "We have solved the problem by changing only the transmission protocol of mobile communication terminals."

The researchers focused on pinning down the 'allowed network capacity' needed for an efficient low-latency transmission protocol. If the sender transmits only as much data as the network can deliver to the mobile device, data will not accumulate unnecessarily. To do this, the team observed the pattern of packets received by the mobile device and directly inferred the bandwidth of the cellular network.

"In LTE networks, the pattern of the amount of data received per millisecond and the interval of arrival of packets during this time is different," says Junseon Kim in the Ph.D. program in Computer Science. "This depends on the resources allocated to the base station and the channel conditions, so it is possible to know the allowed network capacity by analyzing it, and if the observation time is different, it can be applied to the 5G network."

Once the allowed network capacity (the baseline) is known, the next step is straightforward. The mobile device transmits the calculated baseline to the server, and the server uses it to control how much data it sends. If the current throughput is well below the allowed capacity, the sending rate is increased rapidly; as throughput approaches the capacity, it is increased only in fine steps.
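
That two-regime update rule might look like the following minimal Python sketch (the threshold and gain values are illustrative assumptions; the paper's actual control law may differ):

```python
def next_send_rate(current_mbps, capacity_mbps,
                   coarse_gain=1.25, fine_gain=1.02, threshold=0.8):
    """Ramp quickly while well below the inferred capacity, then
    probe only in fine steps as the rate approaches it. Capping at
    the inferred capacity keeps queues -- and latency -- from building."""
    if current_mbps < threshold * capacity_mbps:
        return min(current_mbps * coarse_gain, capacity_mbps)  # coarse regime
    return min(current_mbps * fine_gain, capacity_mbps)        # fine regime
```

Because the rate never deliberately overshoots the inferred capacity, such a controller avoids the queue-filling probe phases that give conventional loss-based congestion control its latency spikes.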

"ExLL can eliminate inefficiencies in the search process and implement ultra-low latency networking," says Professor Lee. "It is an existing low-latency transmission protocol that plays a major role in remote surgery, remote drone control, and 5G-based autonomous navigation."

Meanwhile, research on low-latency transmission protocols has developed steadily since the bufferbloat concept was defined in 2011, culminating in Google's BBR. Google has included BBR among its transport protocols since 2016 and recently added it to the Google Cloud Platform (GCP). This is reminiscent of Google's own protocols, SPDY and QUIC, being adopted in the HTTP/2 and HTTP/3 web transmission standards.

Credit: 
Ulsan National Institute of Science and Technology(UNIST)

New class of solar cells, using lead-free perovskite materials

image: From left are Kwang Min Kim, Byung Man Kim, HyeonOh Shin, and Professor Tae Hyuk Kwon in the School of Natural Science at UNIST.

Image: 
UNIST

Lead-based perovskites have already gained much attention as promising materials for low-cost, high-efficiency solar cells. However, the intrinsic instability and toxicity of lead (Pb) have raised serious concerns about the viability of Pb-based perovskites, hindering large-scale commercialization of solar cells and similar devices based on these materials. Pb-free perovskites were recently proposed as an alternative to counter the toxicity of lead-based perovskites, but they have so far been of limited use owing to their lower efficiencies.

A recent study, led by Professor Tae Hyuk Kwon in the School of Natural Science at UNIST, has taken a major step toward the development of a new generation of solar cells using lead-free perovskites. With its promising electronic properties, the new perovskite material has been demonstrated to function as a charge regenerator in dye-sensitized solar cells, enhancing both overall efficiency and stability. Published in the November 2018 issue of Advanced Materials, the findings open new possibilities for the application of lead-free perovskites in solar cells.

Among the various alternatives to lead, the research team used the vacancy-ordered double perovskite Cs2SnI6. Despite its promising outlook, the surface states of Cs2SnI6 and their function remain largely unclear, so a comprehensive study is needed to clarify these features for the future design of Cs2SnI6-based devices.

Through this work, the team examined the charge transfer mechanism of Cs2SnI6 with the aim of clarifying the function of its surface state. For this purpose, a three-electrode system was developed to observe charge transfer through the surface state of Cs2SnI6. Cyclic voltammetry and Mott-Schottky analyses were also used to probe the surface state of Cs2SnI6, whose potential is related to its bandgap.
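
For reference, Mott-Schottky analysis relates the space-charge capacitance C at a semiconductor/electrolyte interface to the applied potential V. For an n-type semiconductor, the textbook relation (standard electrochemistry, not a formula quoted from this paper) is

$$ \frac{1}{C^{2}} = \frac{2}{e\,\varepsilon\,\varepsilon_{0}\,N_{D}\,A^{2}}\left(V - V_{\text{fb}} - \frac{k_{B}T}{e}\right) $$

where A is the electrode area, N_D the donor density, and V_fb the flat-band potential. Extrapolating the linear region of 1/C² versus V to zero yields V_fb, which is how capacitance measurements connect the surface-state potential to the material's band positions.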

Their analysis demonstrated that the surface state of Cs2SnI6 is highly redox active and can be effectively charged and discharged in the presence of iodide redox mediators. In addition, the preparation of a charge regenerator system based on Cs2SnI6 confirmed that charge transfer occurs through its surface state.

"In case of Cs2SnI6, charge transfer occurred through the surface state of Cs2SnI6," says HyeonOh Shin in the Combined MS./Ph.D in Chemistry at UNIST. "This will aid in the design of future electronic and energy devices, using Pb-free perovskites."

Based on this strategy, the research team engineered hybrid solar cells using a Cs2SnI6-based charge regenerator for organic dye-sensitized solar cells (DSSCs). Such solar cells generate electric current as the oxidized organic dye returns to its original state.

"Due to a high volume of electrical charges in organic dyes that show high connectivity with the surface state of Cs2SnI6, more electric current were generated," says Byung-Man Kim in the Department of Chemistry at UNIST, another lead author of this study. "Consequently, Cs2SnI6 shows efficient charge transfer with a thermodynamically favorable charge acceptor level, achieving a 79% enhancement in the photocurrent density compared with that of a conventional liquid electrolyte."

The study has attracted considerable attention among researchers for clarifying the function of the Cs2SnI6 surface state. The results suggest that this surface state is the main charge transfer pathway in the presence of a redox mediator and should be considered in future designs of Cs2SnI6-based devices.

Credit: 
Ulsan National Institute of Science and Technology(UNIST)