Tech

Welcome indoors, solar cells

image: The organic solar cell optimized to convert ambient indoor light to electricity.

Image: 
Thor Balkhed

Swedish and Chinese scientists have developed organic solar cells optimised to convert ambient indoor light to electricity. The power they produce is low, but is probably enough to feed the millions of products that the internet of things will bring online.

As the internet of things expands, millions of products are expected to come online, both in public spaces and in homes. Many of these will be sensors that detect and measure moisture, particle concentrations, temperature and other parameters. For this reason, the demand for small and cheap sources of renewable energy is increasing rapidly, in order to reduce the need for frequent and expensive battery replacements.

This is where organic solar cells come in. Not only are they flexible, cheap to manufacture and suitable for production over large areas in a printing press, they have one further advantage: the light-absorbing layer consists of a mixture of donor and acceptor materials, which gives considerable freedom to tune the solar cells so that they are optimised for different spectra - for light of different wavelengths.

Researchers in Beijing, China, led by Jianhui Hou, and Linköping, Sweden, led by Feng Gao, have now together developed a new combination of donor and acceptor materials, with a carefully determined composition, to be used as the active layer in an organic solar cell. The combination absorbs exactly the wavelengths of light that surround us in our living rooms, at the library and in the supermarket.

The researchers describe two variants of an organic solar cell in an article in Nature Energy, where one variant has an area of 1 cm2 and the other 4 cm2. The smaller solar cell was exposed to ambient light at an intensity of 1000 lux, and the researchers observed that as much as 26.1% of the energy of the light was converted to electricity. The organic solar cell delivered a high voltage of above 1 V for more than 1000 hours in ambient light that varied between 200 and 1000 lux. The larger solar cell still maintained an energy efficiency of 23%.
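For a sense of scale, the reported efficiencies can be turned into a rough output-power estimate. The sketch below is a back-of-envelope calculation assuming an irradiance of about 0.3 mW/cm2 at 1000 lux of white LED light; that conversion factor is an assumption for illustration, not a value given in the article.

```python
# Rough estimate of the power such a cell delivers indoors.
# Assumes 1000 lux of white LED light corresponds to roughly 0.3 mW/cm^2 of
# irradiance -- an assumed conversion (lux-to-irradiance depends on the
# light spectrum), not a figure from the article.

IRRADIANCE_MW_PER_CM2 = 0.3  # assumed irradiance at 1000 lux of white LED light

def cell_power_uw(area_cm2, efficiency):
    """Electrical output in microwatts for a given cell area and efficiency."""
    return area_cm2 * IRRADIANCE_MW_PER_CM2 * efficiency * 1000.0

print(f"1 cm2 cell at 26.1%: ~{cell_power_uw(1.0, 0.261):.0f} microwatts")
print(f"4 cm2 cell at 23.0%: ~{cell_power_uw(4.0, 0.230):.0f} microwatts")
# Tens to hundreds of microwatts -- the range many low-power IoT sensors need.
```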

"This work indicates great promise for organic solar cells to be widely used in our daily life for powering the internet of things", says Feng Gao, senior lecturer in the Division of Biomolecular and Organic Electronics at Linköping University.

"We are confident that the efficiency of organic solar cells will be further improved for ambient light applications in coming years, because there is still a large room for optimization of the materials used in this work", Jianhui Hou, professor at the Institute of Chemistry, Chinese Academy of Sciences, underlines.

The result is a further advance in research within the field of organic solar cells. In the summer of 2018, for example, the scientists, together with colleagues from a number of other universities, published rules for the construction of efficient organic solar cells. That article, published in Nature Materials, brought together 25 researchers from seven universities, and the research was led by Feng Gao. These rules have proven to be useful along the complete pathway to efficient solar cells for indoor use.

Spin-off company

The Biomolecular and Organic Electronics research group at Linköping University, under the leadership of Olle Inganäs (now professor emeritus), has for many years been a world leader in the field of organic solar cells. A few years ago, Olle Inganäs and his colleague Jonas Bergqvist, who is co-author of the articles in Nature Materials and Nature Energy, founded, and are now co-owners of, a company that focuses on commercialising solar cells for indoor use.

Credit: 
Linköping University

Don't make major decisions on an empty stomach, research suggests

image: This is Ben Vincent, lead author.

Image: 
University of Dundee

We all know that food shopping when hungry is a bad idea, but new research from the University of Dundee suggests that people might want to avoid making any important decisions about the future on an empty stomach.

The study, carried out by Dr Benjamin Vincent from the University's Psychology department, found that hunger significantly altered people's decision-making, making them impatient and more likely to settle for a small reward that arrives sooner than a larger one promised at a later date.

Participants in an experiment designed by Dr Vincent were asked questions relating to food, money and other rewards when satiated and again when they had skipped a meal.

While it was perhaps unsurprising that hungry people were more likely to settle for smaller food incentives that arrived sooner, the researchers found that being hungry actually changes preferences for rewards entirely unrelated to food.

This indicates that a reluctance to defer gratification may carry over into other kinds of decisions, such as financial and interpersonal ones. Dr Vincent believes it is important that people know that hunger might affect their preferences in ways they don't necessarily predict.

There is also a danger that people experiencing hunger due to poverty may make decisions that entrench their situation.

"We found there was a large effect, people's preferences shifted dramatically from the long to short term when hungry," he said. "This is an aspect of human behaviour which could potentially be exploited by marketers so people need to know their preferences may change when hungry.

"People generally know that when they are hungry they shouldn't really go food shopping because they are more likely to make choices that are either unhealthy or indulgent. Our research suggests this could have an impact on other kinds of decisions as well. Say you were going to speak with a pensions or mortgage advisor - doing so while hungry might make you care a bit more about immediate gratification at the expense of a potentially more rosy future.

"This work fits into a larger effort in psychology and behavioural economics to map the factors that influence our decision making. This potentially empowers people as they may forsee and mitigate the effects of hunger, for example, that might bias their decision making away from their long term goals."

Dr Vincent and his co-author and former student Jordan Skrynka tested 50 participants twice - once when they had eaten normally and once having not eaten anything that day.

For three different types of rewards, when hungry, people expressed a stronger preference for smaller hypothetical rewards to be given immediately rather than larger ones that would arrive later.

The researchers noted that when people were offered a reward now or double that reward in the future, they were normally willing to wait 35 days for the doubled reward, but when hungry this plummeted to only 3 days.
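One common way to quantify such a shift (the article does not say which model the study used, so the model choice here is an illustrative assumption) is hyperbolic discounting, in which a reward of amount A delayed by D days is valued at V = A / (1 + kD). Indifference between one unit now and two units after delay D then implies k = 1/D:

```python
# Hyperbolic-discounting sketch: indifference between 1 unit now and 2 units
# after D days implies 1 = 2 / (1 + k*D), i.e. k = 1/D. The delays come from
# the article; the model choice is an assumption for illustration.

def implied_discount_rate(delay_days):
    """Per-day discount rate k implied by indifference at the given delay."""
    return 1.0 / delay_days

k_sated = implied_discount_rate(35)   # normally willing to wait ~35 days
k_hungry = implied_discount_rate(3)   # when hungry, only ~3 days

print(f"sated:  k ~ {k_sated:.3f} per day")
print(f"hungry: k ~ {k_hungry:.3f} per day")
print(f"hunger raises the implied discount rate ~{k_hungry / k_sated:.0f}-fold")
```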

The work builds on a well-known psychological study where children were offered one marshmallow immediately or two if they were willing to wait 15 minutes.

Those children who accepted the initial offering were classed as more impulsive than those who could delay gratification and wait for the larger reward. In the context of the Dundee study, this indicates that hunger makes people more impulsive even when the decisions they are asked to make will do nothing to relieve their hunger.

"We wanted to know whether being in a state of hunger had a specific effect on how you make decisions only relating to food or if it had broader effects, and this research suggests decision-making gets more present-focused when people are hungry," said Dr Vincent.

"You would predict that hunger would impact people's preferences relating to food, but it is not yet clear why people get more present-focused for completely unrelated rewards.

"We hear of children going to school without having had breakfast, many people are on calorie restriction diets, and lots of people fast for religious reasons. Hunger is so common that it is important to understand the non-obvious ways in which our preferences and decisions may be affected by it."

Credit: 
University of Dundee

Algae and bacteria team up to increase hydrogen production

image: These are the researchers that worked on this study.

Image: 
University of Córdoba

Amid the fight against climate change and the search for a sustainable future, the idea of a future society running on hydrogen fuel has emerged. This fuel of the future could power cars and engines (in some cases it already does), but without pollution or the drawbacks of batteries, since hydrogen is much easier to store than electrical energy.

In order to bring that future closer, a team from the Biochemistry and Molecular Biology Department at the University of Cordoba has been searching for ways to increase hydrogen production by using microorganisms, specifically microalgae and bacteria.

In this vein, researchers Neda Fakhimi, Alexandra Dubini and David González Ballester were able to increase hydrogen production by combining the unicellular green alga Chlamydomonas reinhardtii with the bacterium Escherichia coli. The teamwork of alga and bacterium resulted in 60% more hydrogen than the two organisms produce when working separately.

When the alga works on its own, it produces hydrogen via photosynthesis, whereas the bacteria make hydrogen via sugar fermentation. The key to the synergy between algae and bacteria is acetic acid. This acid, which also gives vinegar its smell and taste, is excreted by the bacteria during hydrogen production. Its accumulation around the bacteria is a problem: it brings fermentation to a halt and, with it, the bacteria's hydrogen production. That is where the microalga comes into play, as it takes advantage of the acetic acid to produce more hydrogen. Thus, the microalga benefits from what the bacteria do not want, and together they become more efficient.

The potential of the algae-bacteria combination has been demonstrated, and this opens the door to industrial use, since the sugar added for bacterial fermentation in the lab could be replaced by waste streams in the real world. In other words, the partnership between algae and bacteria could use industrial waste and dirty water to produce hydrogen and decontaminate them at the same time.

Combining bioremediation (the use of microorganisms for decontamination) with the production of hydrogen for use as a biofuel closes the sustainability loop in a society where these concerns are ever more present.

Credit: 
University of Córdoba

Deaths halved among infarct patients attending Heart School

Patients who attend 'Heart School', as almost every patient in Sweden is invited to do after a first heart attack, live longer than non-participating patients. This is shown in a new study, by researchers at Uppsala University, published in the European Journal of Preventive Cardiology.

Patient education is an important aspect of rehabilitation after a heart attack (myocardial infarction). Core components of Sweden's 'Heart School' are individual counselling and group sessions focused on lifestyle-related, modifiable risks. Thus, patients are taught the importance of maintaining a healthy diet, being physically active and giving up smoking. Almost all patients with a first-time myocardial infarction are invited to participate. However, Heart School attendance is voluntary and fewer than half the patients choose to join.

This study represents the first scientific evaluation of Heart School in relation to mortality after myocardial infarction. To investigate the relationship between Heart School participation and how long patients survive after a first heart attack, the researchers used ten years' data from the nationwide Swedish heart registry SWEDEHEART (Swedish Web-system for Enhancement and Development of Evidence-based care in Heart disease Evaluated According to Recommended Therapies) and the Swedish Cause of Death Register. Socioeconomic variables were obtained from Statistics Sweden (SCB).

The researchers' material comprised 47,907 patients who had their first heart attack in 2006-2015 and subsequently went to the first CR follow-up visit. Among them, time to total death (from all causes) and death from cardiovascular causes within two and five years after the attack were investigated. The data enabled the scientists to control for a range of other important confounding variables, such as demographic and socioeconomic factors, and other aspects of the patients' cardiac health.

After adjusting for confounding variables, the researchers found that attendance at Heart School was associated with a markedly lower risk (in time-to-event analyses) not only of total mortality, but also of cardiovascular mortality. With up to two years' follow-up, the Heart School participants' risk of dying was reduced by 47% (50% risk reduction for death from cardiovascular causes). After up to five years, the follow-up results showed a 38% lower mortality risk (43% lower for death from cardiovascular causes).
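To make these percentages concrete: a reported risk reduction of X% corresponds to a ratio of (1 - X/100) relative to non-participants. A minimal sketch of that conversion, assuming the figures above come from adjusted hazard ratios (the article does not state the exact measure):

```python
# Convert the reported risk reductions into the implied ratios versus
# non-participants. Assumes "risk reduced by X%" means a ratio of (1 - X/100);
# the underlying statistical model is not specified in this article.

reported_reductions_pct = {
    "all-cause death, 2-year follow-up": 47,
    "cardiovascular death, 2-year follow-up": 50,
    "all-cause death, 5-year follow-up": 38,
    "cardiovascular death, 5-year follow-up": 43,
}

for outcome, reduction in reported_reductions_pct.items():
    ratio = 1 - reduction / 100
    print(f"{outcome}: implied ratio ~ {ratio:.2f}")
```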

"We can say that Heart School attendance was associated with almost halved mortality, both total and specifically cardiovascular, after a first myocardial infarction," says John Wallert, licensed psychologist and doctoral student at the Department of Women's and Children's Health, Uppsala University.

The results were consistent across several sensitivity analyses, including varying the dates of Heart School attendance, supplementary checks for participation in other cardiac rehabilitation programmes, restriction to patients who completed the full cardiac rehabilitation programme, and stratification by gender and age.

"We were a little surprised at how robust the results were. In this study, thanks to Sweden's exceptional registry data, we have the means of controlling for not only clinical and demographic factors, but also factors related to self-selection and socioeconomic variables, such educational attainment and income. Data also provided the statistical power to achieve precise estimates and to allow for a range of sensitivity analyses

"Now we want to determine whether the association of attending Heart School with mortality is genuinely one of cause and effect. Ideally, we'll find this out in a large enough randomised clinical trial, preferably a registry-based one," Wallert says.

Credit: 
Uppsala University

New study shows common carp aquaculture in Neolithic China dating back 8,000 years

image: This is co-author Junzo Uchiyama preparing to measure common carp removed from the paddy field.

Image: 
Mark Hudson

In a recent study, an international team of researchers analyzed fish bones excavated from the Early Neolithic Jiahu site in Henan Province, China. By comparing the body-length distributions and species-composition ratios of the bones with findings from other East Asian sites and from present-day aquaculture, the researchers provide evidence of managed carp aquaculture at Jiahu dating back to 6200-5700 BC.

Despite the growing importance of farmed fish for economies and diets around the world, the origins of aquaculture remain unknown. The Shijing, the oldest surviving collection of ancient Chinese poetry, mentions carp being reared in a pond circa 1140 BC, and historical records describe carp being raised in artificial ponds and paddy fields in East Asia by the first millennium BC. But considering rice paddy fields in China date all the way back to the fifth millennium BC, researchers from Lake Biwa Museum in Kusatsu, Japan, the Max Planck Institute for the Science of Human History in Jena, Germany, the Sainsbury Institute for the Study of Japanese Arts and Cultures in Norwich, U.K., and an international team of colleagues set out to discover whether carp aquaculture in China was practiced earlier than previously thought.

Carp farming goes way back in Early Neolithic Jiahu

Jiahu, located in Henan, China, is known for the early domestication of rice and pigs, as well as the early development of fermented beverages, bone flutes, and possibly writing. This history of early development, combined with archaeological findings suggesting the presence of large expanses of water, made Jiahu an ideal location for the present study.

Researchers measured 588 pharyngeal carp teeth extracted from fish remains at Jiahu corresponding to three separate Neolithic periods, and compared the body-length distributions with findings from other sites and with a modern sample of carp raised in Matsukawa Village, Japan. While the remains from the first two periods revealed unimodal patterns of body-length distribution peaking at or near carp maturity, the remains of Period III (6200-5700 BC) displayed a bimodal distribution, with one peak at 350-400 mm corresponding to sexual maturity and another at 150-200 mm.
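The article does not describe the statistical test behind the unimodal/bimodal distinction. One standard way to check it is to compare one- and two-component Gaussian mixture fits by BIC; the sketch below does this on simulated body lengths, not the actual Jiahu measurements:

```python
# Compare unimodal vs bimodal fits to a sample of body lengths (mm) using
# Gaussian mixtures and BIC. The lengths are simulated for illustration, not
# the Jiahu measurements, and the paper's own statistical approach may differ.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
lengths = np.concatenate([
    rng.normal(175, 20, 120),   # a group of immature fish (150-200 mm peak)
    rng.normal(375, 30, 180),   # a group around sexual maturity (350-400 mm peak)
]).reshape(-1, 1)

for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(lengths)
    print(f"{k} component(s): BIC = {gm.bic(lengths):.0f}")
# A clearly lower BIC for the two-component fit is consistent with the
# bimodal, managed-harvest pattern described above.
```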

This bimodal distribution identified by researchers was similar to that documented at the Iron Age Asahi site in Japan (circa 400 BC - AD 100), and is indicative of a managed system of carp aquaculture that until now was unidentified in Neolithic China. "In such fisheries," the study notes, "a large number of cyprinids were caught during the spawning season and processed as preserved food. At the same time, some carp were kept alive and released into confined, human regulated waters where they spawned naturally and their offspring grew by feeding on available resources. In autumn, water was drained from the ponds and the fish harvested, with body-length distributions showing two peaks due to the presence of both immature and mature individuals."

Species-composition ratios support findings, indicate cultural preferences

The size of the fish wasn't the only piece of evidence researchers found supporting carp management at Jiahu. In East Asian lakes and rivers, crucian carp are typically more abundant than common carp, but common carp comprised roughly 75% of cyprinid remains found at Jiahu. This high proportion of less-prevalent fish indicates a cultural preference for common carp and the presence of aquaculture sophisticated enough to provide it.

Based on the analysis of carp remains from Jiahu and data from previous studies, researchers hypothesize three stages of aquaculture development in prehistoric East Asia. In Stage 1, humans fished the marshy areas where carp gather during spawning season. In Stage 2, these marshy ecotones were managed by digging channels and controlling water levels and circulation so that the carp could spawn and the juveniles could later be harvested. Stage 3 involved constant human management, including using spawning beds to control reproduction and fish ponds or paddy fields to manage adolescents.

Although rice paddy fields have not yet been identified at Jiahu, carp aquaculture appears to have developed alongside wet rice agriculture, and the coevolution of the two is an important topic for future research.

Credit: 
Max Planck Institute of Geoanthropology

3 in 5 parents say their teen has been in a car with a distracted teen driver

image: More than half of parents in a new national poll believe their teen has been in an unsafe situation riding with a teen driver.

Image: 
C.S. Mott Children's Hospital National Poll on Children's Health at the University of Michigan

ANN ARBOR, Mich. -- It's a highly anticipated rite of passage for many high schoolers - finally getting to drive your friends around.

But having teens who carpool with peers can be a nerve-wracking experience for many parents, with more than half in a new national poll saying their child has probably been in an unsafe situation as a passenger with a teen driver.

Parents' top safety concerns include distracted driving caused by loud music (46%), a cell phone (42%) or other teens in the car (39%), according to the C.S. Mott Children's Hospital National Poll on Children's Health at the University of Michigan.

Some parents also noted unsafe conditions in which their teen rode with a teen driver who was speeding (45%), too tired to drive safely (14%) or impaired by alcohol or drugs (5%).

Despite such concerns, teens riding with teens is common. One in three parents say their teens are passengers with teen drivers at least once or twice a week.

"When teens start driving, there is rightfully a big focus on the safety of the drivers themselves," says poll co-director and Mott pediatrician Gary Freed, M.D. "But our poll suggests that parents should play an active role in not only preparing teens to be safe drivers - but to be safety-minded passengers when riding with friends."

The nationally representative report is based on responses from 877 parents who had at least one child aged 14 to 18.

Many parents did report limiting their teen's risk as a passenger with another teen driver, with over half saying they made this effort in bad weather (68%), after midnight (67%) or if the driver had less than six months of driving experience (53%).

Parents also try to limit their teen riding with more than two other teens in the car (48%), after dark (45%) and on the highway (41%).

"Many parents recognize that teens' minimal experience on the road is a disadvantage when there's a change in driving conditions," Freed says. "Some parents try to reduce their teens' risks in potentially hazard situations, such as late nights or bad weather."

Car crashes are the leading cause of death and injury for teens. More than half of teens who die in car crashes are not behind the wheel and their chances of being in a fatal accident are much higher when there is a teen driver, according to national statistics. Lack of experience can result in drivers not always reacting quickly to changes in road or driving conditions or paying as close attention to other cars or pedestrians as needed to stay safe, Freed says.

Freed encourages parents to talk to their kids about being responsible passengers. This could include offering to hold the driver's phone, lowering the radio volume, asking the driver to slow down if necessary and even asking to get out of the vehicle if they feel unsafe.

"Parents should try to empower their teen to be proactive in avoiding common situations that cause distractions to the driver and also speak up to stop any unsafe activities," Freed says.

"Safe driving should be a shared responsibility for both teen drivers and passengers as the risks are high for each."

Credit: 
Michigan Medicine - University of Michigan

Subgroup of colorectal cancer patients ID'd: Do poorly, could benefit from immunotherapy

DUARTE, Calif. -- While the medical community agrees that immune cells inside a tumor lead to improved health outcomes, for a subset of colorectal cancer patients, having too much of a good thing - too many immune cells - is a strong predictor of disease recurrence and reduced chances of survival, according to new research from City of Hope, a world-renowned independent research and treatment center for cancer, diabetes and other life-threatening diseases.

"Having immune cells in tumors is widely recognized as a good thing, but we found that too much of a good thing is actually bad," said Peter P. Lee, M.D., chair of the Department of Immuno-Oncology at City of Hope and senior author of a study published in the Journal of Clinical Investigation on Sept. 16.

The researchers examined colorectal cancer, the third-leading cause of cancer-related deaths in the United States.

"This study is the first report of immune infiltrated tumors with poor health outcomes and is counter to the standard belief in the field," said Lee, the Billy and Audrey L. Wilder Professor in Cancer Immunotherapeutics. "This is a new way to look at colorectal tumors and is a reminder that physicians cannot base treatment merely on established, one-size-fits all treatment templates."

City of Hope physician-scientists are working on precision medicine research so that their patients can receive more bespoke treatment. They analyzed public genomic data sets from The Cancer Genome Atlas and NCBI Gene Expression Omnibus and validated their findings with data from 71 City of Hope patients diagnosed with Stage 3 colorectal cancer.

About 10% of these City of Hope colorectal cancer patients had a cornucopia of immune cells turned on, including CD8+ T cells. However, this group of patients all relapsed. In fact, they relapsed even earlier than patients with few or no immune cells in their tumors. The problem appeared to be that their immune system was on overdrive; their immune checkpoint proteins - the molecules that put the immune system in neutral - were also in overdrive. The result is like two trains colliding: no one going anywhere fast.

Based on a series of analyses and cross-validation tests, the researchers were able to stratify the patients into four categories. Patients with high levels of immune cell infiltration and high levels of the immune checkpoint protein PD-L1 were two to three times more likely to die from colorectal cancer than their peers who had similarly high immune cell infiltration but low levels of PD-L1.
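A minimal sketch of the kind of two-marker stratification described here, with hypothetical column names, cut-offs and data (the study's actual scores and thresholds are not given in this article):

```python
# Stratify patients into four groups by CD8+ T-cell infiltration and PD-L1
# expression. All column names, thresholds and values are hypothetical.

import pandas as pd

patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "cd8_score":  [0.9, 0.8, 0.2, 0.1],   # immune-cell infiltration score
    "pdl1_score": [0.7, 0.1, 0.6, 0.2],   # PD-L1 expression score
})

CD8_CUTOFF, PDL1_CUTOFF = 0.5, 0.5        # hypothetical cut-offs

cd8_high = patients["cd8_score"] >= CD8_CUTOFF
pdl1_high = patients["pdl1_score"] >= PDL1_CUTOFF
patients["group"] = (
    cd8_high.map({True: "CD8-high", False: "CD8-low"})
    + " / "
    + pdl1_high.map({True: "PD-L1-high", False: "PD-L1-low"})
)
# The CD8-high / PD-L1-high group is the one the study flags as high risk.
print(patients[["patient_id", "group"]])
```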

Although the study needs to be duplicated in a prospective study with a larger sample size, the researchers propose that PD-L1 expression and CD8 combined score could be used as a biomarker to identify colorectal cancer patients who may require more aggressive monitoring and treatment.

"Patients in this subgroup may be good candidates for immunotherapy to reduce the chances of disease recurrence," said Marwan Fakih, M.D., co-director of the Gastrointestinal Cancer Program at City of Hope and lead author of the study.

About 60% of the patients in this subgroup had tumors that are "microsatellite instable," a disease category that points to people who typically have had positive responses to an immunotherapy that uses immune checkpoint inhibitors.

"Those in this subgroup who have high immune cell infiltration and a high immune-suppressive tumor microenvironment should be considered for enrollment in clinical trials that use immune checkpoint inhibitors," Fakih said. "If we continue to treat these patients with standard of care, they will continue to have a poor prognosis. We should use what we learned in this study to improve their chances of survival."

The study provides a more nuanced understanding of Immunoscore, a recent benchmark used to predict risk of colorectal cancer recurrence. Most colorectal cancer patients who have tumors with high CD8+ T-cell infiltration still have favorable outcomes. However, City of Hope researchers found that if those patients also have high levels of PD-L1 expression, they may be mislabeled as having a good prognosis by Immunoscore, which relies solely on expression of CD3 and CD8 immune T cells.

City of Hope scientists are beginning to apply the same techniques used in this study to examine data on breast cancer patients. They plan to use the same process to analyze other cancers such as melanoma and lung cancer.

Credit: 
City of Hope

Commonly used drug for Alzheimer's disease doubles risk of hospitalization

A drug commonly used to manage symptoms of Alzheimer disease and other dementias -- donepezil -- is associated with a two-fold higher risk of hospital admission for rhabdomyolysis, a painful condition of muscle breakdown, compared with several other cholinesterase inhibitors, found a study in CMAJ (Canadian Medical Association Journal).

Dementia is a growing problem, with almost 10 million newly diagnosed cases every year around the world.

The study, led by researchers at Western University's Schulich School of Medicine & Dentistry and Lawson Health Research Institute, looked at ICES data from 2002 to 2017 on 220 353 patients aged 66 years or older in Ontario, Canada, with a new prescription for donepezil, rivastigmine or galantamine, three cholinesterase inhibitors used to manage dementia and Alzheimer disease.

Researchers found that donepezil was associated with a two-fold higher risk of hospitalization for rhabdomyolysis, a serious condition that can result in kidney disease. The relative risk was small but statistically significant.

"The findings of this population-based cohort study support regulatory agency warnings about the risk of donepezil-induced rhabdomyolysis," writes Dr. Jamie Fleet, a postgraduate year 4 resident in physical medicine and rehabilitation now at McMaster University, Hamilton, Ontario, with coauthors. "Reassuringly, the 30-day incidence of a hospital admission with rhabdomyolysis after initiating donepezil remains low.

"Risk of rhabdomyolysis with donepezil compared with rivastigmine or galantamine: a population-based cohort study" is published September 16, 2019.

Credit: 
Canadian Medical Association Journal

Light and sound in silicon chips: The slower the better

image: Top-view microscope image of a surface acoustic wave photonic device in silicon on insulator. A grating of gold stripes (right) is used to drive acoustic waves, which then affect light in standard waveguides.

Image: 
D. Munk, M. Katzman, M. Hen, M. Priel, M. Feldberg, T. Sharabani, S. Levy, A. Bergman, and A. Zadok

Integrated circuits in silicon enable our digital era. The capabilities of electronic circuits have been extended even further with the introduction of photonics: components for the generation, guiding and detection of light. Together, electronics and photonics support entire systems for data communication and processing, all on a chip. However, there are certain things that even electrical and optical signals can't do simply because they move too fast.

Sometimes moving slowly is actually better, according to Prof. Avi Zadok of Bar-Ilan University's Faculty of Engineering and Institute of Nanotechnology and Advanced Materials. "Important signal processing tasks, such as the precise selection of frequency channels, require that data is delayed over time scales of tens of nano-seconds. Given the fast speed of light, optical waves propagate over many meters within these timeframes. One cannot accommodate such path lengths in a silicon chip. It is unrealistic. In this race, fast doesn't necessarily win."

The problem, in fact, is a rather old one. Analog electronic circuits have been facing similar challenges in signal processing for sixty years. An excellent solution was found in the form of acoustics: A signal of interest is converted from the electrical domain to the form of an acoustic wave. The speed of sound, of course, is slower than that of light by a factor of 100,000. Acoustic waves acquire the necessary delays over tens of micro-meters instead of meters. Such path lengths are easily accommodated on-chip. Following propagation, the delayed signal can be converted back to electronics.

In a new work published today (September 16, 2019) in the journal Nature Communications, Zadok and co-workers carry over this principle to silicon-photonic circuits.

"There are several difficulties with introducing acoustic waves to silicon chips," says doctoral student Dvir Munk, of Bar-Ilan University, who participated in the study. "The standard layer structure used for silicon photonics is called silicon on insulator. While this structure guides light very effectively, it cannot confine and guide sound waves. Instead, acoustic waves just leak away." Due to this difficulty, previous works that combine light and sound waves in silicon do not involve the standard layer structure. Alternatively, hybrid integration of additional, nonstandard materials was necessary.

"That first challenge can be overcome by using acoustic waves that propagate at the upper surface of the silicon chip," continues Munk. "These surface acoustic waves do not leak down as quickly. Here, however, there is another issue: Generation of acoustic waves usually relies on piezo-electric crystals. These crystals expand when a voltage is applied to them. Unfortunately, this physical effect does not exist in silicon, and we much prefer to avoid introducing additional materials to the device."

As an alternative, students Munk, Moshe Katzman and coworkers relied on the illumination of metals. "Incoming light carries the signal of interest," explains Katzman. "It irradiates a metal pattern on the chip. The metals expand and contract, and strain the silicon surface below. With proper design, that initial strain can drive surface acoustic waves. In turn, the acoustic waves pass across standard optical waveguides in the same chip. Light in those waveguides is affected by the surface waves. In this way, the signal of interest is converted from one optical wave to another via acoustics. In the meantime, significant delay is accumulated within very short reach."

The concept combines light and sound in standard silicon with no suspension of membranes or use of piezo-electric crystals. Acoustic frequencies up to 8 GHz are reached; however, the concept is scalable to 100 GHz. The working principle is applicable to any substrate, not only silicon. Applications are presented as well: the concept is used in narrowband filters of input radio-frequency signals. The highly selective filters make use of 40-nanosecond-long delays. "Rather than use five meters of waveguide, we achieve this delay within 150 microns," says Munk.
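The numbers in this paragraph follow directly from delay = length / velocity. The sketch below back-calculates the wave velocities implied by the quoted figures; note that the resulting ratio compares sound with light inside the waveguide, which is why it is smaller than the factor of 100,000 quoted earlier for light in vacuum.

```python
# delay = length / velocity, so velocity = length / delay.
# Uses the figures quoted in the article: a 40 ns delay that would need
# ~5 m of optical waveguide but only ~150 micrometres of acoustic path.

DELAY_S = 40e-9          # 40 nanoseconds

optical_path_m = 5.0     # waveguide length that would otherwise be needed
acoustic_path_m = 150e-6 # acoustic path actually used (150 micrometres)

v_optical = optical_path_m / DELAY_S    # ~1.3e8 m/s, light in the waveguide
v_acoustic = acoustic_path_m / DELAY_S  # ~3750 m/s, surface acoustic wave

print(f"implied optical group velocity:  {v_optical:.2e} m/s")
print(f"implied acoustic wave velocity:  {v_acoustic:.0f} m/s")
print(f"sound is ~{v_optical / v_acoustic:,.0f} times slower on this chip")
```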

Prof. Zadok summarizes: "Acoustics is a missing dimension in silicon chips because acoustics can complete specific tasks that are difficult to do with electronics and optics alone. For the first time we have added this dimension to the standard silicon photonics platform. The concept combines the communication and bandwidth offered by light with the selective processing of sound waves."

One potential application of such devices is in future cellular networks, widely known as 5G. Digital electronics alone might not be enough to support the signal processing requirements in such networks. Light and sound devices might do the trick.

Credit: 
Bar-Ilan University

Using smart sensor technology in building design

image: This is Dr. Maryam Abhari, San Diego State University.

Image: 
San Diego State University

Have lights ever turned on automatically when you walked into a room? Does the air conditioner in the conference room turn on when a certain number of people enter?

In today's world, spaces with motion and temperature "smart sensors" are common and generally improve our overall well-being. Often, the data gathered from these sensors is stored and analyzed in order to improve future architectural building design processes.

However, research conducted by Dr. Maryam Abhari (who is a registered architect) and Dr. Kaveh Abhari of San Diego State University indicated that while the information and technology exist to assist architects in designing structures that offer more efficient space and energy management, they seldom take advantage of those available resources. "We've found recent studies showing that smart sensors are frequently added to a building after it is built, rather than being used as a source of inspiration and insight during the design process," said Kaveh Abhari. "We wanted to know why architects and design professionals were not only slow to integrate smart sensor design and data into their work, but why they were also slow to adapt to new technologies as well."

The researchers, both of whom are faculty members in the management information systems department (MIS) at SDSU's Fowler College of Business, initially interviewed 29 architecture professionals to identify why they were slow to integrate technology into their design processes. They interviewed only those professionals who either had experience with smart sensors or had attended educational sessions on smart sensor technology (SST). Their findings were presented at the 52nd Hawaii International Conference on System Sciences.

"We found there existed real or perceived barriers which prevented architects from utilizing technologies in their design processes," reported Maryam Abhari. "The barrier most cited by those professionals we surveyed was the increased time required by the incorporation of SST into the design process. Other barriers included the lack of technical knowledge, aversion to potentially increased costs, mistrust of the data, and the loss of control over the design process."

However, the researchers also found that the many advantages of utilizing SST in the design process were overlooked or ignored by the architects. "One of the key advantages to utilizing SST in the design process is that it offers insight into enhancing efficient space planning, ventilation, safety features and energy and water efficiency," pointed out Maryam Abhari. "If architects were willing to collect and analyze the data from environments that needed to be renovated, remodeled or rebuilt, SST would have the potential to disclose the pros and cons of the existing design which could have a positive impact on the project redesign, but, unfortunately, this frequently doesn't happen."

The researchers embarked on a second study using the responses of 236 architecture professionals to measure their experience with SST, their commitment to learning and collaborating with SST experts, and their intention to adopt SST.

Their results showed that while 70 percent of those surveyed said they trusted the SST data, only 10 percent considered themselves proficient enough to incorporate SST into their design projects. Nevertheless, 60 percent said they would consider using SST data or working with IT professionals on their projects.

Although the majority of the architects in the second study understood that they must inevitably adapt to SST, the researchers determined that the most effective way to overcome the barriers named by the architects was to educate them about the value of SST data and of incorporating the technology into their design process. They noted that the architects responded more positively to education about value than to assurances about the perceived risks of adaptation.

And while there may be monetary costs involved with SST data adaptation, the functionality enhancement of the completed project may be well worth the price. "At the very least, experience with SST adds credentials to architects' professional practice and boosts their professional reputation," noted Kaveh Abhari.

Credit: 
San Diego State University

Reduce, reuse, recycle: The future of phosphorus

image: All crops need phosphorus for healthy growth. Phosphorus is a building block of plant protein. Working to reduce, reuse and recycle phosphorus will make a more sustainable food system.

Image: 
Included in photo

When Hennig Brandt discovered the element phosphorus in 1669, it was a mistake. He was really looking for gold. But his mistake was a very important scientific discovery. What Brandt couldn't have realized was the importance of phosphorus to the future of farming.

Phosphorus is one of the necessary ingredients for healthy crop growth and yields. When farms were smaller and self-sufficient, farmers harvested their crops, and nutrients rarely left the farm. The family or animals consumed the food, and the farmer could spread manure from their animals onto the soil to rebuild nutrients. This was a fairly closed-loop phosphorus cycle.

But, as the world's population increased, so did food and nutrition needs. More of a farmer's harvest, and therefore nutrients, was sold off the farm. Agriculture adapted by developing many new growing methods, as well as fertilizers. Most phosphorus fertilizers use the world's supply of phosphate rock as a main ingredient. That main modern source is a finite resource and it's running out. Phosphate rock is also hard to mine and process.

"There is an urgent need to increase phosphorus use efficiency in agroecosystems," says Kimberley Schneider, a research scientist with Agriculture and Agri-Food Canada. "There are many chemical, physical and biological processes that affect the availability of phosphorus to crops." This is why farmers place great importance in having enough phosphorus for their crops.

Crop breeding and cultivar selection

Different plants can use phosphorus more efficiently than others. "Phosphorus use efficiency is the ability to yield more crop per unit of phosphorus taken up by the plant," explains Schneider. "There is potential for crop breeders to develop new varieties that use phosphorus in even more efficient ways. They can also breed crops that work with mycorrhizal fungi in the soil to help increase their phosphorus absorption. Focusing on breeding plants that work well in low phosphorus soils will take an interdisciplinary approach."

Cropping system design and phosphorus use efficiency

Since some crops can increase soil phosphorus availability for future crops, growers could focus on crop rotations that take advantage of this. Cover crops and green manures can also contribute to phosphorus availability in many conditions. For example, one study found sorghum did well with phosphorus use after alfalfa or red clover, but not after sweet clover. Getting the right combinations for the right crops and fields will be important.

Soil organic matter's role in mineralizing phosphorus

Soil organic matter is known to indicate soil health. It can improve plant phosphorus availability by allowing for greater root access to phosphorus and by releasing plant available phosphorus. Currently, soil organic matter is not part of the soil fertility measurements on farms, so this is an area of future research potential.

Naturally occurring soil fungi to the rescue

Many soils contain one or more types of friendly fungus called arbuscular mycorrhizal fungi. They work with plant roots to exchange "life chores." The fungi help free up phosphorus and other nutrients, while the plants make sugar compounds that the fungi use for growth. Of course, the fungi and roots must be able to be near one another for this exchange to happen. Researchers are looking at the promise of building up and better utilizing mycorrhizal fungi populations in soils.

Recycling and recovering phosphorus

Phosphorus is one of the more common elements on Earth. Yet, it is a limiting factor in crop yields. Excess phosphorus in the wrong place - streams, lakes and other waterbodies - causes pollution. How did this come to be?

Let's trace the "life cycle" of a phosphorus molecule. Most phosphate rock is mined on the continents of Europe and Africa, although some deposits are available elsewhere. After it is made into fertilizer, this phosphorus is then moved to farms. From there, the phosphorus is used by a plant to make a product, perhaps a soybean. The soybean is removed from the farm and manufactured into tofu. It is then transported to your local grocery store, where you buy it and bring it home. If you live in a city, after you enjoy your meal of fried tofu, the waste your body produces flushes down the toilet. If you live in a rural area, it goes into the septic system.

Thus, the life cycle of this illustrative phosphorus molecule shows a broken cycle. The molecule originates far away from its final resting place. Because of modern day life, the phosphorus cycle that used to exist on farms is broken. The more urban society becomes, the more broken the phosphorus cycle is - unless scientists come up with answers to close the loops again.

Agricultural scientists are working with wastewater managers to develop ways to put those deserving phosphorus molecules back to work on the farm. "While most currently available phosphorus recovery technologies may not seem economically viable, the environmental and social benefits are important," says Schneider. "There are also other valuable products of phosphorus recovery, such as organic matter, other nutrients, and even water."

"Increasing phosphorus use efficiency in agroecosystems must be a priority to reduce reliance on fertilizer and to minimize the effects on the environment," says Schneider. "There are many possibilities for the agricultural system to improve the use of phosphorus. The outcome will be an agroecosystem that still feeds the world, while protecting the natural resources that help us grow our food and live healthy lives."

This article was recently published in a special section in the Journal of Environmental Quality called Celebrating the 350th Anniversary of Discovering Phosphorus--For Better or Worse.

The American Society of Agronomy and Soil Science Society of America are celebrating Phosphorus Week September 15-21, 2019, to raise awareness of phosphorus issues, and their importance in our food and urban systems. In addition to this web story, they have created five blogs with further information:

1. The discovery and general uses of phosphorus

2. Why is phosphorus needed on farms

3. What are sources of phosphorus for crops

4. What are the challenges regarding phosphorus use

5. Ten things we can do to manage phosphorus better

Credit: 
American Society of Agronomy

Combination of wood fibers and spider silk could rival plastic

image: Silk is a natural protein that can also be produced synthetically. It has good abilities and versatile possibilities.

Image: 
Eeva Suorlahti

Achieving strength and extensibility at the same time has so far been a great challenge in material engineering: increasing strength has meant losing extensibility and vice versa. Now Aalto University and VTT Technical Research Centre of Finland researchers have succeeded in overcoming this challenge, inspired by nature.

The researchers created a truly new bio-based material by gluing together wood cellulose fibres and the silk protein found in spider web threads. The result is a very firm and resilient material which could in the future serve as a possible replacement for plastic, be used in bio-based composites, and find applications in medicine (for example, surgical fibres), the textile industry and packaging.

According to Aalto University Professor Markus Linder, nature offers great ingredients for developing new materials, such as the firm and readily available cellulose and the tough and flexible silk used in this research. The advantage with both of these materials is that, unlike plastic, they are biodegradable and do not damage nature the way microplastics do.

'Our researchers just need to be able to reproduce these natural properties', adds Linder, who was also leading the research.

'We used birch tree pulp, broke it down to cellulose nanofibrils and aligned them into a stiff scaffold. At the same time, we infiltrated the cellulosic network with a soft and energy dissipating spider silk adhesive matrix,' says Research Scientist Pezhman Mohammadi from VTT.

Silk is a natural protein which is excreted by animals like silkworms and also found in spider web threads. The spider web silk used by Aalto University researchers, however, is not actually taken from spider webs but is instead produced by the researchers using bacteria with synthetic DNA.

'Because we know the structure of the DNA, we can copy it and use this to manufacture silk protein molecules which are chemically similar to those found in spider web threads. The DNA has all this information contained in it', Linder explains.

'Our work illustrates the new and versatile possibilities of protein engineering. In future, we could manufacture similar composites with slightly different building blocks and achieve a different set of characteristics for other applications. Currently we are working on making new composite materials as implants, impact-resistant objects and other products,' says Pezhman.

Credit: 
Aalto University

Efficient organic solar cells with a low energy loss enabled by a quinoxaline-based acceptor

image: Materials, photoelectric property and photovoltaic performance.

Image: 
©Science China Press

Organic photovoltaics (OPVs) have attracted much attention because of their advantages in low-cost, large-area fabrication and their great potential for flexible and semi-transparent devices. However, compared with inorganic and perovskite solar cells, OPVs show relatively low photoelectric conversion efficiencies. This is generally attributed to the intrinsically low dielectric constants of organic materials, which result in large energy losses (Eloss = Eg - qVoc, where Eg is the optical bandgap of the photoactive layer, Voc is the open-circuit voltage of the photovoltaic device and q is the elementary charge).

For traditional bulk-heterojunction OPVs that use fullerene derivatives as the electron acceptors, the Eloss values are always higher than 0.6 eV, which limits power conversion efficiencies (PCEs) to less than 12%. With the rapid development of fused-ring electron acceptors, especially those with an acceptor (A)-donor (D)-acceptor (A) arrangement, PCEs of OPV devices quickly surpassed 12% and even reached 16% within a very short period, with quite a few of these devices showing Eloss values below 0.6 eV.

Although energy losses below 0.5 eV are common in inorganic and perovskite solar cells, high-performance OPVs with Eloss values of less than 0.5 eV remain quite rare to date, which means that Eloss is still the key factor limiting the photovoltaic efficiency of the OPV technique. Recently, a research team led by Prof. Xiaozhang Zhu from the Institute of Chemistry, Chinese Academy of Sciences, designed and synthesized an electron acceptor, AQx, by fusing a quinoxaline moiety with a quinoid-resonance effect into the D-A system. The optical bandgap of AQx is estimated to be 1.35 eV from the absorption onset in thin film, 918 nm. Cyclic voltammetry was performed to evaluate the frontier orbital energy levels of AQx: the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energy levels are -5.58 and -3.85 eV, respectively.

When matched with the middle-bandgap polymer donor PBDB-TF, as-cast devices exhibit a maximum PCE of up to 10.99%, and the optimized AQx-based device shows a much improved short-circuit current (Jsc), from 20.06 to 22.18 mA cm-2, and fill factor (FF), from 59.34% to 67.14%, with a comparable Voc of 0.893 V, delivering the highest PCE of 13.31%. The team measured the corresponding current density-voltage (J-V) and external quantum efficiency (EQE) curves. The AQx-based device shows a broad and high EQE response in the 300-900 nm region, which is consistent with the absorption spectra of the corresponding photoactive components and matches well with the photon flux spectrum of solar radiation. The Jsc values calculated by integrating the EQE spectra under different conditions agree with those obtained from the J-V curves to within 2-3%.

The researchers calculated the Eloss of the AQx-based devices according to the equation Eloss = Eg - qVoc, in which Eg is determined from the EQE spectra. The Eloss values of the blend films processed under different conditions are all well below 0.47 eV and reach 0.45 eV for the most optimized device, the smallest value reported so far for binary OPVs with PCEs over 13%.
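As a quick check of the quoted energy loss, the formula Eloss = Eg - qVoc can be evaluated directly from the figures in this article. The sketch below estimates Eg from the thin-film absorption onset (918 nm); the paper's own 0.45 eV value is slightly lower because its Eg is taken from the EQE spectra.

```python
# Back-of-envelope check of the energy loss: E_loss = E_g - q*V_oc.
# E_g is estimated here from the thin-film absorption onset; the paper's
# reported 0.45 eV uses an E_g derived from the EQE spectra instead.

HC_EV_NM = 1239.84            # h*c in eV*nm

absorption_onset_nm = 918     # AQx thin-film absorption onset
voc_volts = 0.893             # V_oc of the optimized device

e_gap_ev = HC_EV_NM / absorption_onset_nm   # ~1.35 eV optical bandgap
e_loss_ev = e_gap_ev - voc_volts            # q*V_oc in eV equals V_oc in volts

print(f"E_g    ~ {e_gap_ev:.2f} eV")
print(f"E_loss ~ {e_loss_ev:.2f} eV")       # ~0.46 eV with this E_g estimate
```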

Credit: 
Science China Press

Gutsy effort to produce comprehensive study of intestinal gases

A source of embarrassment to some, or pure comedy to others, flatulence and the gases of the intestines are increasingly seen as playing an important role in our digestive health.

A paper led by UNSW Sydney and published in Nature Reviews Gastroenterology & Hepatology has examined all available literature on gastrointestinal gases, their interactions with the microbiome of the gut, their associated disorders and the way that they can be measured and analysed.

Lead author Professor Kourosh Kalantar-Zadeh, who is an ARC Laureate Fellow with UNSW's School of Chemical Engineering, says the purpose of the study is to lift the lid on the various gases of the gut and show how vital they are for human health.

"This is about providing knowledge to people about the importance of gases in the gut," he says.

"Rather than laughing about it or feeling embarrassed about this subject, actually there is good reason to take this very seriously.

"Even Benjamin Franklin wrote about this more than 200 years ago. He was one of the first to propose that different types of foods have different effects on our gut health, which can be measured by smelling the resulting farts - although I'm not so sure about his methods."

Indeed, Franklin wrote a letter to the Royal Academy of Brussels where he proposed "To discover some Drug wholesome & not disagreable, to be mix'd with our common Food, or Sauces, that shall render the natural Discharges of Wind from our Bodies, not only inoffensive, but agreable as Perfumes".

While Franklin's challenge continues to elude modern pharmacology, a change of diet to avoid foods rich in sulphide - such as broccoli, cauliflower, eggs, beef, and garlic - could reduce the malodorous nature of our gaseous emissions.

Gas profiles

In the paper published today, the authors examine each of the main gases that are found in the gastrointestinal system.

"Interestingly, the gases in most abundance throughout the digestive system - nitrogen, oxygen, carbon dioxide, hydrogen and even methane - are odourless," Professor Kalantar-zadeh says.

By contrast, smelly sulphide compound gases exist in trace amounts in the colon. Nitrogen and oxygen end up in the gut by being swallowed and carbon dioxide can be chemically produced in the stomach.

"The rest are mostly by-products of the microbiome - the colonies of bacteria living in our intestines - as they break down carbohydrates, fats and proteins."

With the exception of nitrogen, the gases found in the intestines have also been linked with various gut diseases including malabsorption of food, irritable bowel syndrome (IBS), inflammatory bowel diseases (IBD) and even colon cancer, especially when the gas profiles deviate from the norm.

"Adjustment of diet is generally the first port of call to mitigate these disorders as we can modulate the gases by eating different types of foods," Professor Kalantar-Zadeh says.

Gas-sensing technology

The UNSW team, together with their partners at Monash University and startup company Atmo Biosciences, is commercialising a revolutionary tool to analyse the gastrointestinal gases in vivo (within the body) in the form of an ingestible capsule loaded with gas-sensing technology. The capsule can detect gaseous biomarkers as it passes through the gut, all the while transmitting the captured data wirelessly to the cloud for aggregation and analysis.

Traditionally, testing and measuring of the various gases has ranged from non-invasive in vitro (i.e., in the laboratory) gut simulators and indirect breath testing through to colonic or small-intestine tube insertion, a much more invasive method used to capture stool or gas samples.

But the capsule developed by Professor Kalantar-Zadeh and the team gets around the problem of invasiveness while also ensuring the gases can be analysed in their natural environment. The ingestible capsule can simultaneously detect oxygen and hydrogen concentrations as it moves through the gastrointestinal tract and wirelessly transmit the data to an external receiver.

"There is no other tool that can do what this capsule does," Professor Kalantar-Zadeh says.

"In our early trials, the capsule has accurately shown the onset of food-related fermentation in the gut, which would be immensely valuable for clinical studies of food digestion and normal gut function."

Professor Kalantar-Zadeh says a trial is currently underway by Atmo Biosciences to test the commercial version of the capsule, the results of which will be detailed in a future research paper.

Credit: 
University of New South Wales

Accounting for influencing factors when estimating suicide rates among US youth

Bottom Line: Trends described using unadjusted suicide rates may be skewed because such rates are affected by differences in age and year of birth. This secondary analysis of data included the total population and suicide deaths by single year of age from 10 to 19 and by sex from 1999 to 2017, and accounted for those factors. Unadjusted suicide rates for females were 1.6 per 100,000 in 1999 and 3.5 per 100,000 in 2017, while adjusted rates that accounted for differences in age and year of birth increased from 1.7 per 100,000 in 1999 to 4.2 per 100,000 in 2017. Unadjusted rates for males were 7.4 per 100,000 in 1999 and 10.7 per 100,000 in 2017, while adjusted rates were 4.9 per 100,000 in 1999 and 8.7 per 100,000 in 2017. A limitation of the study is its use of data and coding in which the misclassification of suicide deaths cannot be completely ruled out.
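For readers unfamiliar with rate adjustment, the sketch below illustrates the general idea behind it using direct age standardization with made-up numbers. It only shows why crude and adjusted rates can differ; it is not the authors' method, which additionally accounted for year of birth.

```python
# Direct age standardization with made-up numbers -- an illustration of why
# crude and adjusted rates differ, NOT the authors' age-and-cohort adjustment.

deaths =     {"10-14": 150, "15-19": 900}              # hypothetical counts
population = {"10-14": 10_500_000, "15-19": 9_500_000} # hypothetical populations
standard_weights = {"10-14": 0.5, "15-19": 0.5}        # fixed standard weights

crude_rate = 1e5 * sum(deaths.values()) / sum(population.values())
adjusted_rate = 1e5 * sum(
    standard_weights[age] * deaths[age] / population[age] for age in deaths
)

print(f"crude rate:    {crude_rate:.1f} per 100,000")
print(f"adjusted rate: {adjusted_rate:.1f} per 100,000")
```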

Authors: Bin Yu, M.D., M.P.H., University of Florida, Gainesville, and coauthors

(doi:10.1001/jamanetworkopen.2019.11383)

Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network