
Trees with grassy areas soften summer heat

image: One location was the Lehrer-Wirth-Strasse in Munich, where measuring instruments were attached to black locust trees (Robinia) below the foliage.

Image: 
F. Rahman/TUM

Trees cool their surroundings, and "heat islands" such as Munich benefit from this effect. However, the degree of cooling depends greatly on the tree species and the local conditions. In a recent study, scientists at the Technical University of Munich (TUM) compared two species of urban trees.

It is cooler under black locusts, especially on hot summer days. This has significant implications for landscape architecture and urban planning: "Tree species such as the black locust that consume little water can provide a higher cooling effect if they are planted on grass lawns," Dr. Mohammad Rahman from TUM explained. "The surrounding soil remains moister thanks to the trees, the grass dissipates additional heat through the evaporation of water and thus reduces the temperature near the ground." This is an important finding obtained by the team led by Humboldt research fellow Rahman.

A look under the treetops

Trees are considered to be nature's air conditioners, making them the most practical way of alleviating the heat in cities such as Munich. The Bavarian capital is the third-largest and the most densely populated city in Germany. It has an air temperature up to six degrees Celsius warmer than its rural surroundings. A team from the Chair for Strategic Landscape Planning and Management and the Chair of Forest Growth and Yield Science at TUM has now used combined sensor and storage devices (data loggers) to investigate how the microclimate develops below urban treetops in particular.

This was carried out on summer days with varying temperatures at different locations in Munich -- close to the East Station of Munich and Messestadt Riem. With the little-leaved linden, the 2016 tree of the year, and the black locust -- also known as false acacia -- they selected two popular but contrasting urban tree species to analyze the complex interplay of location factors, current weather conditions, and tree type. In light of climate change, the focus was on the cooling effect on very hot days.

Black locusts need less water -- and are therefore better suited for cities

A comparison makes the research team's analysis clearer: the output of a mechanical air conditioner is between one and ten kilowatts (kW); that of a linden tree is up to 2.3 kW. This cooling capacity is fed by various processes: the dense treetops provide shade, and the leaf surfaces reflect the short-wave rays of the sun and use part of that radiation for transpiration.

These cooling mechanisms are common to all plants, including grass. However, with their bigger, denser canopies and higher water loss through the stomata of their leaves, linden trees use a large share of the intercepted radiation to evaporate water, and hence cool the surrounding microclimate more effectively.
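As a rough illustration of how transpiration converts into cooling power (a back-of-envelope sketch, not a calculation from the study), the latent heat of vaporization of water can be combined with an assumed daily water use per tree; the 80 litres per day used below is an illustrative assumption.

```python
# Back-of-envelope estimate of transpirational cooling (illustrative only).
# Assumption: a mature urban linden transpires ~80 L of water on a warm day
# (a hypothetical figure, not taken from the TUM study).
LATENT_HEAT_J_PER_KG = 2.45e6      # latent heat of vaporization of water
water_per_day_kg = 80              # assumed daily transpiration; 1 L ~ 1 kg
seconds_per_day = 24 * 60 * 60

cooling_power_kw = water_per_day_kg * LATENT_HEAT_J_PER_KG / seconds_per_day / 1000
print(f"Average cooling power: {cooling_power_kw:.1f} kW")   # ~2.3 kW
```

The result is of the same order as the 2.3 kW quoted above for a linden tree.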

The luxuriantly blooming black locust differs in several respects: its crown is less dense, its leaf surface is smaller, and its transpiration is therefore lower. That makes the linden tree more effective at cooling on mild summer days. However, the black locust needs less water than the linden, which draws more water out of the soil during periods of intense heat. On grass lawns, the additional cooling provided by the grass surface under black locust trees therefore appears more effective. With climate change and increasing drought, lawns will either have to be watered to maintain the higher cooling effect under species such as the linden, or less water-demanding species will need to be found. For paved surfaces, on the other hand, the dense shade of linden trees provides the better cooling.

As Mohammad Rahman summarized: "On very hot days, city dwellers have a cooler time on grass lawns under trees with a less dense crown and a lower water requirement."

Credit: 
Technical University of Munich (TUM)

Human-like walking mechanics evolved before the genus Homo

image: Footprints from (A) a modern human walking normally, (B) a modern human walking with a stooped posture known as the "bent knees, bent hip," or BKBH, posture, and (C) 3.6 million-year-old hominin footprints found in Laetoli, Tanzania. The team's analysis suggests ancient hominins probably walked in a way that is very similar to modern humans.

Image: 
David Raichlen, University of Arizona.

Ever since scientists realized that humans evolved from a succession of primate ancestors, the public imagination has been focused on the inflection point when those ancestors switched from ape-like shuffling to walking upright as we do today. Scientists have long been focused on the question, too, because the answer is important to understanding how our ancestors lived, hunted and evolved.

A close examination of 3.6-million-year-old hominin footprints discovered in Laetoli, Tanzania, suggests our ancestors evolved the hallmark trait of extended-leg, human-like bipedalism substantially earlier than previously thought.

"Fossil footprints are truly the only direct evidence of walking in the past," said David Raichlen, PhD, associate professor at the University of Arizona. "By 3.6 million years ago, our data suggest that if you can account for differences in size, hominins were walking in a way that is very similar to living humans. While there may have been some nuanced differences, in general, these hominins probably looked like us when they walked."

Raichlen will present the research at the American Association of Anatomists annual meeting during the 2018 Experimental Biology meeting, held April 21-25 in San Diego.

The species that comprises modern humans, Homo sapiens sapiens, emerged roughly 200,000-300,000 years ago. The genus Homo is thought to have emerged about 2-2.5 million years ago. The term hominin is used to refer to a broader set of ancestors that existed before that, although there is debate about the nature of the species included in that grouping and the relationships among them.

It is thought that hominins began walking on two legs around 7 million years ago, but based on the way other primates evolved, it is considered likely that these early ancestors retained a crouched, bent-legged walking posture for some time.

Raichlen and his team use a variety of methods to reconstruct walking mechanics based on fossilized footprints and skeletons of early human ancestors. Their most recent results use a combination of experimental data and morphological studies to show that the footprints at Laetoli are consistent with fully upright, human-like bipedal walking.

In one experiment, the team compared the depth and shape of the Laetoli footprints to those left by eight volunteers--modern humans--walking in either an upright or stooped posture (in which the knees and hips are bent). When they analyzed the impression made by the toe versus the heel, which reflects how the center of pressure moves along your foot as you take a step, they found the footprints at Laetoli were much more similar to the footprints made by modern humans walking upright.

Walking upright with the legs fully extended uses less energy than bipedal walking in a more ape-like crouched manner, allowing one to endure longer journeys. This suggests that the switch to a more human-like gait likely had something to do with how our ancestors found food--and how far they had to travel to find it.

"The data suggest that by this time in our evolutionary history, selection for reduced energy expenditures during walking was strong," said Raichlen. "This work suggests that, by 3.6 million years ago, climate and habitat changes likely led to the need for ancestral hominins to walk longer distances during their daily foraging bouts. Selection may have acted at this time to improve energy economy during locomotion, generating the human-like mechanics we employ today."

Although the evidence is strong that hominins were walking upright by 3.6 million years ago, the exact stage when the locomotion of our ancestors diverged from that of modern-day apes remains unknown, Raichlen said. Answering that will likely require following in more--even older--footprints.

David Raichlen will present this research on Sunday, April 22, from 4:30-5 p.m. in Room 11B, San Diego Convention Center (abstract). Contact the media team for more information or to obtain a free press pass to attend the meeting.

Credit: 
Experimental Biology

Large Candida auris outbreak linked to multi-use thermometers in UK ICU

Madrid, Spain: Outbreaks of the fungal pathogen Candida auris (C. auris) in healthcare settings, particularly in intensive care units (ICUs), may be linked to multi-use patient equipment, such as thermometers, according to research presented at the 28th European Congress of Clinical Microbiology and Infectious Diseases (ECCMID) [1].

Researchers examined one of the largest outbreaks of the emerging drug-resistant fungal pathogen C. auris to date. The outbreak occurred in Oxford University Hospitals' Neurosciences Intensive Care Unit (NICU) in the United Kingdom. In investigating the possible source of the outbreak, researchers found that a major route for spreading the fungus was multi-use patient monitoring equipment, such as axillary thermometers, which are used to measure temperature in the armpit. These thermometers had been used in 57 of the 66 patients, or 86%, who had been admitted to the NICU before being diagnosed with C. auris. Use of these thermometers remained a strong risk factor for acquiring C. auris after the research team controlled for other factors, such as how long a patient remained in the NICU, how unwell they were and their blood test results.
Presenting author Dr David Eyre from the Nuffield Department of Medicine at the University of Oxford said: "Despite a bundle of infection control interventions, the outbreak was only controlled following removal of the temperature probes."

Between 2 February 2015 and 31 August 2017, the researchers analysed 70 patients who were either colonised with C. auris, meaning they had the fungus but showed no signs of illness, or infected, meaning they did show symptoms. Sixty-six patients, or 94%, had been admitted to the NICU before being diagnosed. Seven patients developed invasive infections, but none died directly as a result of a C. auris infection. Most patients were colonised for between one to two months. There was no evidence that C. auris was associated with increased rates of death when adjusting for age, sex and the reason the patient had been originally admitted to the ward.

C. auris is an emerging fungal pathogen, which means its presence is growing in the population, and it can be responsible for infections in wounds and the bloodstream. The reasons for C. auris spreading are not well understood, but this study offers hope of controlling the fungus' rise. The researchers found that the fungus in this outbreak was resistant to common treatments. C. auris is typically resistant to many of the available antifungal drugs; in Oxford this included fluconazole and related drugs, and occasionally amphotericin.
C. auris was rarely detected in the general ward environment. However, researchers were able to both culture samples from the medical equipment and see it on the surface of temperature probes using a scanning electron microscope.

They were able to analyse the fungal samples' genetic information and determine that the fungus found on the equipment matched that found in the patients' samples. It appears that the fungus was able to survive on the hospital equipment despite the hygiene standards in place.

"This reinforces the need to carefully investigate the environment, and in particular multi-use patient equipment, in any unexplained healthcare-associated outbreak," Eyre concluded. The team have successfully controlled the outbreak.

Credit: 
European Society of Clinical Microbiology and Infectious Diseases

Graphene sets a new record on squeezing light to one atom

image: Artistic impression of the squeezed light (plasmon) in between the metal and graphene, separated by a one-atom thick dielectric.

Image: 
ICFO

In a recent study published in Science, researchers at ICFO - The Institute of Photonic Sciences in Barcelona, Spain, along with other members of the Graphene Flagship, reached the ultimate level of light confinement. They have been able to confine light down to a space one atom thick, the smallest confinement possible. This will pave the way to ultra-small optical switches, detectors and sensors.

Light can function as an ultra-fast communication channel, for example between different sections of a computer chip, but it can also be used for ultra-sensitive sensors or on-chip nanoscale lasers. There is currently much research into how to further shrink devices that control and guide light.

New techniques searching for ways to confine light into extremely tiny spaces, much smaller than current ones, have been on the rise. Researchers had previously found that metals can compress light below the wavelength-scale (diffraction limit), but more confinement would always come at the cost of more energy loss. This fundamental issue has now been overcome.

"Graphene keeps surprising us: nobody thought that confining light to the one-atom limit would be possible. It will open a completely new set of applications, such as optical communications and sensing at a scale below one nanometer," said ICREA Professor Frank Koppens at ICFO - The Institute of Photonic Sciences in Barcelona, Spain, who led the research.

This team of researchers including those from ICFO (Spain), University of Minho (Portugal) and MIT (USA) used stacks of two-dimensional materials, called heterostructures, to build up a new nano-optical device. They took a graphene monolayer (which acts as a semi-metal), and stacked onto it a hexagonal boron nitride (hBN) monolayer (an insulator), and on top of this deposited an array of metallic rods. They used graphene because it can guide light in the form of plasmons, which are oscillations of the electrons, interacting strongly with light.

"At first we were looking for a new way to excite graphene plasmons. On the way, we found that the confinement was stronger than before and the additional losses minimal. So we decided to go to the one atom limit with surprising results," said David Alcaraz Iranzo, the lead author from ICFO.

By sending infra-red light through their devices, the researchers observed how the plasmons propagated in between the metal and the graphene. To reach the smallest space conceivable, they decided to reduce the gap between the metal and graphene as much as possible to see if the confinement of light remained efficient, i.e. without additional energy losses. Strikingly, they saw that even when a monolayer of hBN was used as a spacer, the plasmons were still excited, and could propagate freely while being confined to a channel just one atom thick. They managed to switch this plasmon propagation on and off, simply by applying an electrical voltage, demonstrating the control of light guided in channels smaller than one nanometer.
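To put that channel into perspective, a standard free-space comparison can be made (an illustrative estimate, not from the paper; the infrared wavelength of roughly 1.5 micrometres is an assumed value):

```latex
% Illustrative comparison, assuming infrared light of wavelength ~1.5 um
d_{\mathrm{diffraction}} \approx \frac{\lambda}{2} \approx 750\,\mathrm{nm},
\qquad
d_{\mathrm{channel}} \approx 0.3\,\mathrm{nm}\ \text{(one hBN monolayer)},
\qquad
\frac{d_{\mathrm{diffraction}}}{d_{\mathrm{channel}}} \approx 2.5 \times 10^{3}.
```

In other words, the plasmon channel is thousands of times thinner than the smallest spot into which the same light could be focused in free space.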

This enables new opto-electronic devices that are just one nanometer thick, such as ultra-small optical switches, detectors and sensors. Due to the paradigm shift in optical field confinement, extreme light-matter interactions can now be explored that were not accessible before. The atom-scale toolbox of two-dimensional materials has now also proven applicable for many types of new devices where both light and electrons can be controlled even down to the scale of a nanometer.

Professor Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship, and Chair of its Management Panel, added "While the flagship is driving the development of novel applications, in particular in the field of photonics and optoelectronics, we do not lose sight of fundamental research. The impressive results reported in this paper are a testimony to the relevance for cutting edge science of the Flagship work. Having reached the ultimate limit of light confinement could lead to new devices with unprecedented small dimensions."

Credit: 
Graphene Flagship

For heavy lifting, use exoskeletons with caution

image: For a study at The Ohio State University, an experimental subject operates a hand tool while wearing an exoskeleton.

Image: 
Photo by Eric Weston, courtesy of The Ohio State University.

COLUMBUS, Ohio--You can wear an exoskeleton, but it won't turn you into a superhero.

That's the finding of a study in which researchers tested a commercially available exoskeleton--a mechanical arm attached to a harness--that's typically worn by workers to help them carry heavy objects hands-free.

In the journal Applied Ergonomics, the researchers report that the device relieved stress on the arms just as it was supposed to--but it increased stress on the back by more than 50 percent.

There are tradeoffs with all exoskeletons on the market today, because they inherently change the way we move, said William Marras, director of The Ohio State University Spine Research Institute and Honda Chair Professor of Integrated Systems Engineering at Ohio State.

"The simplest way to describe it is like dancing with a really bad partner," he said. "Someone is tugging and pulling on you in directions you're not expecting, and your body has to compensate for that. And the way you compensate is by recruiting different muscles to perform the task."

For the study, 12 volunteers used two different pneumatic tools, a torque wrench and an impact wrench, as they might in industry. They used the wrenches with and without the aid of the exoskeleton.

The torque wrench weighed about 10 pounds, while the impact wrench weighed 30 pounds. When participants wore the exoskeleton, the wrenches were supported by the mechanical arm, which transferred the weight to a vest-like harness. The participants then only had to grip the wrench and move it up or forward as they might to tighten bolts in a factory.

Over the course of a few hours, researchers measured the forces on the volunteers' back muscles and spine. They found that wearing the exoskeleton increased compressive spinal loads up to nearly 53 percent compared to not wearing it. Stress on different muscles in the torso increased anywhere from 56 percent to 120 percent while wearing it.

"This exoskeleton is meant to offload weight from your arms, so for your arms it's great," said Gregory Knapik, senior researcher at the institute. "The problem is, the weight of the tool, the weight of the mechanical arm and the weight of the vest you're wearing--that all goes to your back. At the end of the day, you're just trading one problem for a potentially even worse problem."

The volunteers didn't seem to notice the extra strain on their backs, but they did notice that they were uncomfortable, chiefly because of the stiff metal rods that lined the harness and prevented them from moving normally.

"You see people wearing this same exoskeleton all the time--workers in industry, camera people at sporting events--so you'd think they'd be more comfortable. But, no," said Knapik. "People hated it for the short time that they wore it. Every single person said they would never wear this if they didn't have to."

Given that the study participants had to wear the harness for only part of a day, the researchers expect that the stresses would be higher for someone who had to wear it for an entire work shift, day after day.

The manufacturer of this particular exoskeleton is aware that it can cause back fatigue. Like makers of similar products, it recommends that users undergo muscle conditioning to prevent injury while wearing it.

Passive exoskeletons, like the one tested in this study, contain braces and springs to help support areas of the body. Active exoskeletons, like those worn by Iron Man or Ripley in the movie Aliens, are just now starting to become a reality. They contain motors that aid movement--almost, Marras said, "like power steering for the body." He and Knapik will be testing the spinal loads caused by just such a powered exoskeleton this fall.

Credit: 
Ohio State University

Selection of a pyrethroid metabolic enzyme CYP9K1 by malaria control activities

Researchers from LSTM, with partners from a number of international institutions, have shown the rapid selection of a novel P450 enzyme leading to insecticide resistance in a major malaria vector.

In a paper published in PNAS, the team of researchers, led by LSTM's Professor Janet Hemingway, describe how operational malaria control activities, and a move back to pyrethroid-based indoor residual spray (IRS), have led to the selection of metabolic resistance in Anopheles gambiae on Bioko Island, Equatorial Guinea. Resistance involves the cytochrome P450 enzyme CYP9K1, for which a role in pyrethroid resistance is established for the first time.

Professor Hemingway said: "In theory it should be easier to eliminate malaria from an island than from a country on mainland Africa, and while we have seen the near total decline of two of the island's vectors, Anopheles funestus and Anopheles coluzzii, this study shows how difficult that can be. The proof that this particular enzyme is also a marker for resistance to pyrethroids gives us a further opportunity to monitor the rate of insecticide resistance and make evidence-based decisions in line with national plans."

Since 2004, IRS and long-lasting insecticide-treated bed nets (LLINs) have reduced malaria parasite prevalence in children on Bioko Island, which up until that point had been among the highest in Africa. After target site-based (kdr) pyrethroid resistance was detected in 2004 and rose in frequency, the carbamate bendiocarb was introduced for IRS. When subsequent research showed that kdr alone was not operationally significant and that activity of metabolic resistance genes appeared to be absent, pyrethroid IRS was reintroduced in 2012.

This reintroduction, along with the mass distribution of LLINs in 2007 and 2014/15, dramatically reduced the An. funestus and An. coluzzii populations, but in An. gambiae it appears to have driven an increase in kdr frequency along with the selection of metabolic resistance through strongly increased expression of P450s, especially CYP9K1. This prompted a return to bendiocarb IRS in 2016 to prevent a resurgence in cases of malaria. "Even with the decline of other mosquito vectors, our study shows the difficulty posed by insecticide resistance in terms of malaria elimination," continued Professor Hemingway. "The rapid evolution of resistance following the reintroduction of pyrethroid IRS, along with the movement of infected people from the mainland, highlights the importance of monitoring for changes in vector populations. This is especially important if we are to maintain the massive gains we have made in reducing malaria prevalence in Africa, 80% of which is attributable to vector control."

Credit: 
Liverpool School of Tropical Medicine

Faster walking heart patients are hospitalized less

Ljubljana, Slovenia - 20 April 2018: Faster walking patients with heart disease are hospitalised less, according to research presented today at EuroPrevent 2018, a European Society of Cardiology congress, and published in the European Journal of Preventive Cardiology.1,2

The three-year study was conducted in 1,078 hypertensive patients, of whom 85% also had coronary heart disease and 15% also had valve disease.

Patients were then asked to walk 1 km on a treadmill at what they considered to be a moderate intensity.3 Based on their speed, patients were classified as slow, intermediate or fast walkers, with average speeds of 2.6, 3.9 and 5.1 km/hour, respectively. A total of 359 patients were slow walkers, 362 were intermediate and 357 were fast walkers.

The researchers recorded the number of all-cause hospitalisations and length of stay over the next three years. Participants were flagged by the regional Health Service Registry of the Emilia-Romagna Region, which collects data on all-cause hospitalisation.

Study author Dr Carlotta Merlo, a researcher at the University of Ferrara, Ferrara, Italy, said: "We did not exclude any causes of death because walking speed has significant consequences for public health. Reduced walking speed is a marker of limited mobility, which is a precursor of disability, disease, and loss of autonomy." 4,5

During the three year period, 182 of the slow walkers (51%) had at least one hospitalisation, compared to 160 (44%) of the intermediate walkers, and 110 (31%) of the fast walkers.

The slow, intermediate and fast walking groups spent a total of 4,186, 2,240, and 990 days in hospital over the three years, respectively.

The average length of hospital stay for each patient was 23, 14, and 9 days for the slow, intermediate and fast walkers, respectively.

Each 1 km/hour increase in walking speed resulted in a 19% reduction in the likelihood of being hospitalised during the three-year period. Compared to the slow walkers, fast walkers had a 37% lower likelihood of hospitalisation in three years.
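For a rough consistency check (an illustrative calculation that assumes the 19% per-km/hour reduction compounds multiplicatively), the roughly 2.5 km/hour gap between the average slow and fast walking speeds gives

```latex
(1 - 0.19)^{\,5.1 - 2.6} = 0.81^{2.5} \approx 0.59,
```

i.e. about a 41% lower likelihood of hospitalisation, broadly in line with the 37% reported for fast versus slow walkers.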

Dr Merlo said: "The faster the walking speed, the lower the risk of hospitalisation and the shorter the length of hospital stay. Since reduced walking speed is a marker of limited mobility, which has been linked to decreased physical activity,4 we assume that fast walkers in the study are also fast walkers in real life."

She continued: "Walking is the most popular type of exercise in adults. It is free, does not require special training, and can be done almost anywhere. Even short, but regular, walks have substantial health benefits. Our study shows that the benefits are even greater when the pace of walking is increased."

Credit: 
European Society of Cardiology

Molecular motor: Four states of rotation

With the help of ultrafast spectroscopy and quantum mechanical calculations, Ludwig-Maximilians-Universitaet (LMU) in Munich researchers have characterized the complete rotational cycle of the light-driven, chemical motor molecule hemithioindigo.

Chemist Dr. Henry Dube, heading an Emmy Noether Junior Research Group, has developed a molecular machine based on the molecule hemithioindigo (HTI). It exhibits unidirectional rotational motion about a specific chemical bond when exposed to light. In collaboration with his colleagues in the DFG-funded Collaborative Research Center (SFB) 749 - Prof. Eberhard Riedle (Chair of Experimental Physics - BioMolekulare Optik) and Regina de Vivie-Riedle (Professor of Theoretical Physics) - he has now resolved the dynamics of the entire rotational mechanism. The findings appear in the Journal of the American Chemical Society (JACS).

Hemithioindigo contains a central carbon-carbon double bond (C=C). This type of bond is capable of undergoing a reversible, light-dependent structural change known as photo-isomerization, which is normally not directional. In previous work, Dube had shown that HTI can serve as the basis for a molecular motor whose motion can be controlled with exquisite precision. In the HTI-based molecular motor a succession of photo-isomerization and thermal helix-inversion steps causes the central double bond to rotate unidirectionally at a rate of up to 1 kHz at room temperature. While most other chemical motors require high-energy ultraviolet light to power them, the HTI motor can be driven with visible light. This feature extends its range of application and increases its potential for use in biological and medical contexts.

The team has now characterized the dynamics of unidirectional rotation in the HTI motor using a variety of ultrafast spectroscopic techniques to distinguish the intermediate states in the rotation cycle. By comparing these results with detailed quantum mechanical calculations of the possible reaction pathways, they were able to construct a precise quantitative model of the operation of this nanomachine. The results show that the rotation remains unidirectional even at room temperature and reveal how the rate of rotation can be most effectively increased. The full rotation cycle resolves into four conformational and energy states, and the probabilities and rates of the transitions between them were determined for the first time. The relevant timescales for these transitions vary from picoseconds (10^-12 s) to milliseconds (10^-3 s). All the relevant steps were successfully monitored spectroscopically under the same conditions, i.e. over a range spanning nine orders of magnitude. "Our comprehensive analysis yields unprecedented functional insight into the operation of such molecular motors. We now have a complete picture of the rotational motion of this molecule, which we can exploit to develop novel approaches to motor design that make better use of light energy and are thus more efficient," says Dube.
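As an order-of-magnitude sketch of why the rotation rate tops out near 1 kHz (an illustration, not a calculation from the paper): the cycle rate is set by its slowest steps, and with the picosecond photo-isomerizations contributing negligibly next to thermal steps on the order of a millisecond,

```latex
f_{\mathrm{rotation}} \approx \frac{1}{\sum_i \tau_i}
\approx \frac{1}{\tau_{\mathrm{thermal}}}
\sim \frac{1}{10^{-3}\,\mathrm{s}} = 1\,\mathrm{kHz},
```

consistent with the room-temperature rate quoted above.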

Credit: 
Ludwig-Maximilians-Universität München

Integrating optical components into existing chip designs

image: Researchers have developed a technique for assembling on-chip optics and electronics separately, which enables the use of more modern transistor technologies.

Image: 
Amir Atabaki

CAMBRIDGE, MASS.--Two and a half years ago, a team of researchers led by groups at MIT, the University of California at Berkeley, and Boston University announced a milestone: the fabrication of a working microprocessor, built using only existing manufacturing processes, that integrated electronic and optical components on the same chip.

The researchers' approach, however, required that the chip's electrical components be built from the same layer of silicon as its optical components. That meant relying on an older chip technology in which the silicon layers for the electronics were thick enough for optics.

In the latest issue of Nature, a team of 18 researchers, led by the same MIT, Berkeley, and BU groups, reports another breakthrough: a technique for assembling on-chip optics and electronics separately, which enables the use of more modern transistor technologies. Again, the technique requires only existing manufacturing processes.

"The most promising thing about this work is that you can optimize your photonics independently from your electronics," says Amir Atabaki, a research scientist at MIT's Research Laboratory of Electronics and one of three first authors on the new paper. "We have different silicon electronic technologies, and if we can just add photonics to them, it'd be a great capability for future communications and computing chips. For example, now we could imagine a microprocessor manufacturer or a GPU manufacturer like Intel or Nvidia saying, 'This is very nice. We can now have photonic input and output for our microprocessor or GPU.' And they don't have to change much in their process to get the performance boost of on-chip optics."

Light appeal

Moving from electrical communication to optical communication is attractive to chip manufacturers because it could significantly increase chips' speed and reduce power consumption, an advantage that will grow in importance as chips' transistor count continues to rise: The Semiconductor Industry Association has estimated that at current rates of increase, computers' energy requirements will exceed the world's total power output by 2040.

The integration of optical - or "photonic" - and electronic components on the same chip reduces power consumption still further. Optical communications devices are on the market today, but they consume too much power and generate too much heat to be integrated into an electronic chip such as a microprocessor. A commercial modulator - the device that encodes digital information onto a light signal - consumes between 10 and 100 times as much power as the modulators built into the researchers' new chip.

It also takes up 10 to 20 times as much chip space. That's because the integration of electronics and photonics on the same chip enables Atabaki and his colleagues to use a more space-efficient modulator design, based on a photonic device called a ring resonator.

"We have access to photonic architectures that you can't normally use without integrated electronics," Atabaki explains. "For example, today there is no commercial optical transceiver that uses optical resonators, because you need considerable electronics capability to control and stabilize that resonator."

Atabaki's co-first-authors on the Nature paper are Sajjad Moazeni, a PhD student at Berkeley, and Fabio Pavanello, who was a postdoc at the University of Colorado at Boulder when the work was done. The senior authors are Rajeev Ram, a professor of electrical engineering and computer science at MIT; Vladimir Stojanovic, an associate professor of electrical engineering and computer sciences at Berkeley; and Milos Popovic, an assistant professor of electrical and computer engineering at Boston University. They're joined by 12 other researchers at MIT, Berkeley, Boston University, the University of Colorado, the State University of New York at Albany, and Ayar Labs, an integrated-photonics startup that Ram, Stojanovic, and Popovic helped found.

Sizing crystals

In addition to millions of transistors for executing computations, the researchers' new chip includes all the components necessary for optical communication: modulators; waveguides, which steer light across the chip; resonators, which separate out different wavelengths of light, each of which can carry different data; and photodetectors, which translate incoming light signals back into electrical signals.

Silicon - which is the basis of most modern computer chips - must be fabricated on top of a layer of glass to yield useful optical components. The difference between the refractive indices of the silicon and the glass - the degrees to which the materials bend light - is what confines light to the silicon optical components.
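The index contrast can be made concrete with typical textbook values at telecom wavelengths (illustrative figures, not quoted in the article): silicon has a refractive index of about 3.48 and silica glass about 1.44, so light travelling in the silicon and striking the interface beyond the critical angle is totally internally reflected,

```latex
\theta_c = \arcsin\!\left(\frac{n_{\mathrm{glass}}}{n_{\mathrm{Si}}}\right)
\approx \arcsin\!\left(\frac{1.44}{3.48}\right) \approx 24^{\circ},
```

which is what keeps the guided light inside the silicon waveguides and resonators.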

The earlier work on integrated photonics, which was also led by Ram, Stojanovic, and Popovic, involved a process called wafer bonding, in which a single, large crystal of silicon is fused to a layer of glass deposited atop a separate chip. Because the new work instead deposits silicon of varying thickness directly on top of the glass, it must make do with so-called polysilicon, which consists of many small crystals of silicon.

Single-crystal silicon is useful for both optics and electronics, but in polysilicon, there's a tradeoff between optical and electrical efficiency. Large-crystal polysilicon is efficient at conducting electricity, but the large crystals tend to scatter light, lowering the optical efficiency. Small-crystal polysilicon scatters light less, but it's not as good a conductor.

Using the manufacturing facilities at SUNY-Albany's Colleges for Nanoscale Sciences and Engineering, the researchers tried out a series of recipes for polysilicon deposition, varying the type of raw silicon used and the processing temperatures and times, until they found one that offered a good tradeoff between electronic and optical properties.

"I think we must have gone through more than 50 silicon wafers before finding a material that was just right," Atabaki says.

Credit: 
Massachusetts Institute of Technology

Novel discoveries on aggressive NK-cell leukemia pave the way for new treatments

An international research consortium led by researchers from the University of Helsinki, Finland, has discovered new information about a rare form of leukemia called aggressive NK-cell leukemia. Potential new treatment options were identified; these are highly warranted, as the disease currently tends to lead to the rapid death of patients.

The study was published in Nature Communications.

Aggressive NK-cell leukemia (ANKL) is a cancer in which the leukemia cells derive from natural killer cells, which under normal conditions are part of our immune system. The disease is very rare and aggressive: with the current treatment options (cytostatic drugs and hematopoietic stem cell transplantation), patients usually survive only a couple of months. This leukemia type is more common in the Asian population. However, related diseases such as NK/T-cell lymphomas also occur in western countries.

Together with Japanese, South-Korean, Taiwanese and US research teams, the researchers from the University of Helsinki aimed to discover which genetic defects are typical in this type of leukemia.

"ANKL patients often had mutations in the STAT3 and DDX3X genes which points towards partly shared genetic background with other NK- and T-cell malignancies," says Professor Satu Mustjoki whose group initially discovered somatic STAT3 mutations in LGL leukemia.

By comparing the exome sequencing data from ANKL patients to previously published datasets from NK/T-cell lymphoma patients, researchers also uncovered novel gene amplifications in the JAK-STAT signaling pathway. In some cases, the amplified regions in the genome also included the PD-L1 gene which has therapeutic potential. In other lymphoma types, tumors with amplifications in the PD-L1 gene have responded well to novel immune checkpoint inhibitor therapies.

Researchers also aimed to discover novel potential drugs for the treatment of ANKL by testing the ability of over 400 different drugs to kill malignant and normal NK cells in cell culture conditions. Some potential drug candidates were discovered: NK cells were especially sensitive to drugs that inhibit JAK tyrosine kinases and anti-apoptotic BCL family members. JAK inhibitors block the same signaling pathway in which genetic alterations were discovered in ANKL patients.

"JAK inhibitors, currently used in the treatment of rheumatoid arthritis and some other hematological diseases, could potentially improve the treatment of various NK-cell malignancies," says MD/PhD student Olli Dufva who is the first author in the publication.

Credit: 
University of Helsinki

New strategies for hospitals during mass casualty incidents

A community's ability to cope with mass casualty incidents (MCIs) depends heavily on the capacity and capability of its hospitals to handle a sudden surge of patients with resource-intensive and specialized needs.

In a recent paper published in the journal Disaster Medicine and Public Health Preparedness, authors Mersedeh TariVerdi and Elise Miller-Hooks from George Mason University, and Thomas Kirsch from the National Center for Disaster Medicine and Public Health, present a whole-hospital simulation model that replicates medical staff, resources and space in order to investigate hospital responsiveness to MCIs. Using simulation software, designed experiments were conducted to measure functionality, impact and transient system behavior. Diversion of patients to alternative facilities and modified triage were also investigated. Several important conclusions emerged from these analyses:

1) Response capability can depend on the patient arrival pattern and injury types; regional response planning can help a hospital with this.
2) Trauma level I hospitals could provide more space in the emergency department and operating rooms by increasing the number of beds in an internal general ward, whereas a trauma level III hospital could provide a better response by increasing the capacity of its emergency department.

The suggested strategies for expanding capacity were found to be highly effective overall, especially when combined.
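To make the idea of a capacity simulation concrete, the sketch below is a minimal discrete-event queueing model of a single emergency department during a patient surge, written with the SimPy library. It is not the authors' whole-hospital model; the bed count, arrival rate and treatment times are arbitrary illustrative assumptions.

```python
# Minimal ED surge sketch (illustrative; not the model from the paper).
# Assumptions: 20 ED beds, exponential inter-arrival and treatment times.
import random
import simpy

random.seed(42)

def patient(env, name, ed_beds, treatment_time):
    arrival = env.now
    with ed_beds.request() as bed:          # queue for a free ED bed
        yield bed
        wait = env.now - arrival
        print(f"{name}: waited {wait:5.1f} min for a bed")
        yield env.timeout(treatment_time)   # occupy the bed during treatment

def surge(env, ed_beds, n_patients, mean_interarrival, mean_treatment):
    for i in range(n_patients):
        env.process(patient(env, f"patient-{i:03d}", ed_beds,
                            random.expovariate(1 / mean_treatment)))
        yield env.timeout(random.expovariate(1 / mean_interarrival))

env = simpy.Environment()
ed_beds = simpy.Resource(env, capacity=20)                      # assumed bed count
env.process(surge(env, ed_beds, n_patients=100,
                  mean_interarrival=2.0, mean_treatment=90.0))  # minutes
env.run()
```

Increasing the capacity argument and re-running is the simulation analogue of the bed-expansion strategies listed above.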

According to Dr. Kirsch: "Recent mass-casualty incidents, like the mass shooting in Las Vegas with over 500 casualties, have demonstrated the importance of improving hospital preparedness for these events. Perhaps more important is to use these models to help prepare an entire municipal healthcare system, because few individual hospitals can care for more than a couple dozen acutely injured people."

Dr. Miller-Hooks reports that "the George Mason team (TariVerdi and Miller-Hooks) is currently studying the effectiveness of formalized collaboration strategies through which resources, including staff and supplies, can be shared across hospitals. We believe such measures can be crucial to patient welfare in MCIs of such [Las Vegas] magnitude."

Credit: 
Society for Disaster Medicine and Public Health, Inc.

Researchers find new way of exploring the afterglow from the Big Bang

Researchers have developed a new way to improve our knowledge of the Big Bang by measuring radiation from its afterglow, called the cosmic microwave background radiation. The new results predict the maximum bandwidth of the universe, that is, the maximum rate at which any change can occur in the universe.

The cosmic microwave background (CMB) is a reverberation or afterglow left from when the universe was about 300,000 years old. It was first discovered in 1964 as a ubiquitous faint noise in radio antennas. In the past two decades, satellite-based telescopes have started to measure it with great accuracy, revolutionizing our understanding of the Big Bang.

Achim Kempf, a professor of applied mathematics at the University of Waterloo and Canada Research Chair in the Physics of Information, led the work to develop the new calculation, jointly with Aidan Chatwin-Davies and Robert Martin, his former graduate students at Waterloo.

"It's like video on the Internet," said Kempf. "If you can measure the CMB with very high resolution, this can tell you about the bandwidth of the universe, in a similar way to how the sharpness of the video image on your Skype call tells you about the bandwidth of your internet connection."

The study appears in a special issue of Foundations of Physics dedicated to the material Kempf presented at the Vatican Observatory in Rome last year. The international workshop, entitled Black Holes, Gravitational Waves and Spacetime Singularities, gathered 25 leading physicists from around the world to present, collaborate and inform on the latest theoretical progress and experimental data on the Big Bang. Kempf's invitation was the result of an earlier paper in Physical Review Letters, a leading journal in the field.

"This kind of work is highly collaborative," said Kempf, also an affiliate at the Perimeter Institute for Theoretical Physics. "It was great to see at the conference how experimentalists and theoreticians inspire each other's work."

While at the Vatican, Kempf and other researchers in attendance also shared their work with the Pope.

"The Pope has a great sense of humor and had a good laugh with us on the subject of dark matter," said Kempf.

Teams of astronomers are currently working on even more accurate measurements of the cosmic microwave background. By using the new calculations, these upcoming measurements might reveal the value of the universe's fundamental bandwidth, thereby telling us also about the fastest thing that ever happened, the Big Bang.

Credit: 
University of Waterloo

How environmental pollutants and genetics work together in rheumatoid arthritis

ANN ARBOR, Mich. - It has been known for more than three decades that individuals with a particular version of a gene -- human leukocyte antigen (HLA) -- have an increased risk for rheumatoid arthritis.

Meanwhile, in recent years, there has been growing interest in the relationship between rheumatoid arthritis and environmental factors, such as cigarette smoking. In smokers who develop rheumatoid arthritis, the disease hits harder. Smokers who also carry the HLA gene variant have an even higher likelihood of developing RA, and their disease is more severe. For these patients, this means not only greater pain and swelling, but also more severe bone destruction -- a lesser known and more dangerous aspect of the disease.

In a new mouse study, Michigan Medicine researchers probed the relationship between these two factors: the HLA gene and environmental pollutants.

"We found a particular enzyme that acts as a channel, or pathway, in the cell for a conversation between the two culprits, so they work together to do greater damage. Individually they are bad, but together, they're worse," says Joseph Holoshitz, M.D., professor of internal medicine and associate chief for research in the Division of Rheumatology at the University of Michigan School of Medicine.

The work is published in the Proceedings of the National Academy of Sciences.

Factors at work

Cigarettes are one of the top environmental concerns with rheumatoid arthritis. But many other environmental pollutants can also help trigger the condition. For example, living in urban areas or near highways is linked with RA, regardless of cigarette use.

The chemical dioxin may be to blame. It's the same contaminant that was found in soil near a Dow Chemical plant in Midland, Michigan. "One scenario is that air pollution from vehicles on highways produces dioxin or other pollutants. Dioxin is just one of many chemicals that similarly activate this pathway," says Holoshitz.

Dioxin also has been shown to increase severity in an experimental model of another autoimmune disease, multiple sclerosis.

"We've shown in this study that the interaction between dioxin and the HLA gene variant activates events known to be associated with rheumatoid arthritis. And we've demonstrated quite convincingly that this facilitates bone destruction," says Holoshitz.

Bone degeneration in rheumatoid arthritis is caused by hyperactivity of certain bone cells called osteoclasts, which absorb bone tissue. "In our research with the combination of dioxin and the HLA gene variant, we saw that osteoclasts are overactive and overabundant, and that bone is destroyed because of it," says Holoshitz.

Currently, the treatments available for rheumatoid arthritis focus primarily on the inflammation but do not directly target bone destruction, says Holoshitz. "Once we have better drugs that directly and specifically address bone destruction in this disease, we'll have better treatment."

Says Holoshitz: "As a separate project, we have a couple of early-stage drug candidates that block the HLA gene-activated pathway and are effective in preventing bone damage. These drugs almost completely inhibit experimental rheumatoid arthritis and bone damage in mice.

"By understanding the mechanisms, we may be able to develop better inhibitors to prevent disease and identify therapeutic targets for new treatment strategies," says Holoshitz.

Credit: 
Michigan Medicine - University of Michigan

Low total testosterone in men widespread, linked to chronic disease

ANN ARBOR, Mich. - A male's total testosterone level may be linked to more than just sexual health and muscle mass preservation, a new study finds. Low amounts of the hormone could also be associated with chronic disease, even among men 40 years of age and younger.

"If we look at data for men from a population level, it has become evident over time that chronic disease is on the rise in older males," says Mark Peterson, Ph.D., M.S., FACSM, lead author of the study and assistant professor of physical medicine and rehabilitation at Michigan Medicine. "But we're also finding that a consequence of being obese and physically inactive is that men are seeing declines in testosterone even at younger ages."

In a study published in Scientific Reports, Peterson and colleagues examined this relationship among testosterone, age and chronic disease.

"Previous research in the field has shown that total testosterone deficiency in men increases with age, and studies have shown that testosterone deficiency is also associated with obesity-related chronic diseases," Peterson says. "But it hasn't been previously understood what the optimal levels of total testosterone should be in men at varying ages, and to what effect those varying levels of the hormone have on disease risk across the life span."

The study's basis came from previous work from other researchers that appeared to define normal ranges of testosterone but didn't use population-representative cohorts, he says.

"Previous studies used clinical cohorts that were not reflective of the current male population in the United States," Peterson says. "The cohorts they used enforced strict guidelines for patients that were accepted into the cohort. Therefore, those patients tended to be much healthier."

Peterson and colleagues, then, leveraged a population sample that was much more representative of males in the United States today.

Multimorbidity across age groups

Using data from the National Health and Nutrition Examination Survey, the research team examined the extent to which hypogonadism is prevalent among men of all ages.

Of the 2,399 men in the survey who were at least 20 years old, 2,161 had complete information on demographics (e.g., age, ethnicity and household income), chronic disease diagnoses, blood samples obtained for total testosterone, grip strength and lab results for cardiometabolic disease risk factors.

Peterson and team then examined prevalence of nine chronic conditions, including type 2 diabetes, arthritis, cardiovascular disease, stroke, pulmonary disease, high triglycerides, hypercholesterolemia, hypertension and clinical depression.

The researchers studied the prevalence of multimorbidity, or when two or more of the chronic conditions were present, among three age groups (young, middle-aged and older men) with and without testosterone deficiency. They found that low total testosterone was associated with multimorbidity in all age groups -- but it was more prevalent among young and older men with testosterone deficiency.

"We also found a large dose-response relationship between the age-specific low total testosterone and moderate total testosterone levels and multimorbidity, even after adjusting for obesity and muscle strength capacity," Peterson says. "Which means that men should be concerned about declining total testosterone, even if it has not reached a level to warrant a clinical diagnosis (

Co-author Aleksandr Belakovskiy, M.D., a resident in family medicine at Michigan Medicine, who helped to design and carry out the study, notes that the results show the need for further testing and research.

"This study showed a robust association between testosterone and multiple medical morbidities that could influence the way we think about testosterone in general practice," Belakovskiy says. "While these findings cannot prove causation, it does spark the need for better clinical awareness and more research."

The team hopes the study and its results can serve as a public service announcement for men.

"A lot of men may not be aware of the risk factors for testosterone deficiency because of their current lifestyle," Peterson says. "And more importantly, that declining levels could be contributing to a silent decline in overall health and increased risk for chronic disease."

Credit: 
Michigan Medicine - University of Michigan

Hurricane Harvey: Dutch-Texan research shows most fatalities occurred outside flood zones

image: An aerial view shows extensive flooding from Harvey in a residential area in southeast Texas, Aug. 31, 2017.

Image: 
Air National Guard photo by Staff Sgt. Daniel J. Martinez

A Dutch-Texan team found that most Houston-area drowning deaths from Hurricane Harvey occurred outside the zones designated by government as being at higher risk of flooding: the 100- and 500-year floodplains. Harvey, one of the costliest storms in US history, hit southeast Texas on 25 August 2017 causing unprecedented flooding and killing dozens. Researchers at Delft University of Technology in the Netherlands and Rice University in Texas published their results today in the European Geosciences Union journal Natural Hazards and Earth System Sciences.

"It was surprising to me that so many fatalities occurred outside the flood zones," says Sebastiaan Jonkman, a professor at Delft's Hydraulic Engineering Department who led the new study.

Drowning caused 80% of Harvey deaths, and the research showed that only 22% of fatalities in Houston's 4,600-square-kilometre district, Harris County, occurred within the 100-year floodplain, a mapped area that is used as the main indicator of flood risk in the US.

Flood zones, or floodplains, are low-lying areas surrounding rivers and streams that are subject to flooding. To assess flood risk for insurance purposes and to set development standards, US authorities outline floodplains for 100- and 500-year floods. These events have a 1% probability (100-year flood) and a 0.2% probability (500-year) of occurring in any given year.
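A quick way to read those return periods (a standard calculation, not taken from the study): if each year carries an independent 1% chance of a 100-year flood, the probability of experiencing at least one such flood over, say, a 30-year mortgage is

```latex
P = 1 - (1 - 0.01)^{30} \approx 0.26,
```

roughly one in four, which is why these mapped zones carry so much weight in insurance and development decisions.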

"Hurricane Harvey was much larger than a 100- or 500-year flood, so flooding outside of these boundaries was expected," says Jonkman. Rainfall totals in the week after the hurricane made landfall were among the highest recorded in US history, with over 1000 mm of rain falling in just three days in large parts of both Harris and surrounding counties. As a result, a report by Delft University found that "unprecedented flooding occurred over an area the size of the Netherlands."

Nonetheless, it was surprising for the researchers to find that so many of Harvey's fatalities happened outside the designated floodplains given that these zones are expected to be "reasonable predictors of high-risk areas," according to Jonkman.

The research began within days of the storm: "We wanted to identify lessons that could be learned, for both Texas and the Netherlands, from Harvey's impact and the local and government response to the flooding," says study co-author Antonia Sebastian, a postdoctoral research associate at Rice University's Severe Storm Prediction, Education and Evacuation from Disasters (SSPEED) Center, who was based at Delft University when Harvey struck.

The team compiled a database of fatalities, using official government records and media sources, which they analysed in the Natural Hazards and Earth System Sciences study published today. They concluded that at least 70 deaths occurred as a consequence of Hurricane Harvey, including 37 in Harris County. Of the Harris County deaths, eight were in the 100-year floodplain, 10 more fell within the larger 500-year floodplain, and 19 were recovered outside the 100- and 500-year zones. "The number of fatalities outside of the floodplains highlights how widespread flooding from Harvey really was," says Sebastian.

The new study also shows that most fatalities - over 80% - were drownings, many occurring either in vehicles or when people were swept away while trying to get out of their cars. Six people died when their boat capsized during a rescue. The second largest causes of death were electrocution and lack of medical treatment, responsible for 6% of fatalities each.

About 70% of those killed by Harvey were men. The team thinks the reason behind the high percentage of male fatalities could be that men tend to show more risk-taking behaviour, such as driving through flooded crossings or taking part in rescues.

The researchers hope their findings encourage authorities to identify high risk areas outside of the designated floodplains and to take preventive measures to reduce the number of victims in future floods, including closing low water crossings and underpasses during extreme flood events.

Jonkman says that the current flood maps will need to be improved, but that floodplains should not be abandoned as an indicator of high-risk areas. "Better communication of their purpose and limitations would help reduce risk."

Credit: 
European Geosciences Union