Tech

SUTD develops revolutionary reversible 4D printing with research collaborators

video: Reversible 4D Printing

Image: 
SUTD

Imagine having your curtains extend or retract automatically without you needing to lift a finger.

Reversible 4D printing technology could make such 'smart curtains' a reality without the use of any sensors or electrical devices; instead, the curtains would change shape in response to the varying heat levels at different times of the day.

4D printing refers to the ability of 3D-printed objects to change their shape over time in response to heat or water, while the reversibility aspect allows them to revert to their original shape. However, returning an object to its original shape usually requires manually stretching or pulling it, which can be laborious and time-consuming.

In recent years, there have been breakthroughs in the study of reversible 4D printing, in which the object recovers its original shape without any human intervention. These approaches have usually relied on a hydrogel as the stimulus to achieve reversible 4D printing.

However, hydrogel lacks mechanical strength, which limits its use in load-bearing applications. Other research that used multiple layers of material as an alternative to hydrogel only made the procedure for enabling reversible actuation more tedious.

To address these challenges, researchers from the Singapore University of Technology and Design collaborated with Nanyang Technological University to revolutionise 4D printing by making it reversible, without the need for hydrogel or human intervention (refer to video). The paper has been published in the journal Engineering.

This research used only two materials, VeroWhitePlus and TangoBlackPlus, which are more readily available and more compatible with polyjet 3D printing than hydrogel. The researchers also showed in their paper that the materials retain considerable mechanical strength during and after actuation.

In this process, an elastomer is swollen with ethanol, replacing the function of a swelling hydrogel in inducing stress on the transition material. When heated, the transition material changes to a second shape. Once the ethanol has been dried out of the elastomer, heating the transition material again allows it to revert to its original shape, as the elastomer pulls the transition material back using the elastic energy stored in it after drying.

The elastomer plays a dual function in this whole process. It is used to induce stress in the programming stage and store elastic energy in the material during the recovery stage.

This reversible 4D printing process has also proven to be more precise in returning the material to its original shape than manually stretching or inducing stress on it. While still in its infancy, this breakthrough opens up a wide variety of future applications as more mechanisms and more printable materials become available.

"While reversible 4D printing in itself is a great advancement, being able to use a more robust material while ensuring a more precise reversal during shape change is revolutionary as it allows us to produce complex structures that cannot easily be achieved through conventional fabrication. By relying on environmental conditions instead of electricity, it makes it a game changer across various industries, completely changing the way we design, create, package and ship products," said Professor Chua Chee Kai, lead researcher and Head of Engineering Product Development in SUTD.

Credit: 
Singapore University of Technology and Design

Scientists find far higher than expected rate of underwater glacial melting

image: An autonomous kayak surveys the ocean in front of the 20-mile-long LeConte Glacier in Alaska. The kayak measures ocean currents and water properties to study the underwater melting of the glacier and track meltwater as it spreads in the ocean.

Image: 
David Sutherland/University of Oregon

Tidewater glaciers, the massive rivers of ice that end in the ocean, may be melting underwater much faster than previously thought, according to a Rutgers co-authored study that used robotic kayaks.

The findings, which challenge current frameworks for analyzing ocean-glacier interactions, have implications for the rest of the world's tidewater glaciers, whose rapid retreat is contributing to sea-level rise.

The study, published in the journal Geophysical Research Letters, surveyed the ocean in front of the 20-mile-long LeConte Glacier in Alaska. The seaborne robots made it possible for the first time to analyze plumes of meltwater - the water released when snow or ice melts - where glaciers meet the ocean. It is a dangerous area for ships because of ice calving, when slabs of ice break off the glacier, crash into the water and spawn huge waves.

"With the kayaks, we found a surprising signal of melting: Layers of concentrated meltwater intruding into the ocean that reveal the critical importance of a process typically neglected when modeling or estimating melt rates," said lead author Rebecca Jackson, a physical oceanographer and assistant professor in the Department of Marine and Coastal Sciences in the School of Environmental and Biological Sciences at Rutgers University-New Brunswick. Jackson led the study when she was at Oregon State University.

Two kinds of underwater melting occur near glaciers. Where freshwater discharge drains at the base of a glacier (from upstream melt on the glacier's surface), vigorous plumes result in discharge-driven melting. Away from these discharge outlets, the glacier melts directly into the ocean waters in a regime called ambient melting.

The study follows one published last year in the journal Science that measured glacier melt rates by pointing sonar at the LeConte Glacier from a distant ship. The researchers found melt rates far higher than expected but couldn't explain why. The new study found for the first time that ambient melting is a significant part of the underwater mix.

Before these studies, scientists had few direct measurements of melt rates for tidewater glaciers and had to rely on untested theory to get estimates and model ocean-glacier interactions. The studies' results challenge those theories, and this work is a step toward better understanding of submarine melt - a process that must be better represented in the next generation of global models that evaluate sea-level rise and its impacts.

Researchers at Oregon State University, University of Alaska Southeast, University of Oregon and University of Alaska Fairbanks contributed to the study.

Credit: 
Rutgers University

Hermetically sealed semi-conductors

image: HZDR researchers have developed a new method to protect semi-conductors made of sensitive materials from contact with air and chemicals. This makes it possible to integrate these ultra-thin layers into electronic components without impairing their performance.

Image: 
HZDR / Sahneweiß

Tomorrow's electronics are getting ever smaller. Researchers are thus searching for tiny components that function reliably in increasingly compact configurations. Promising candidates include the chemical compounds indium selenide (InSe) and gallium selenide (GaSe). In the form of ultra-thin layers, they form two-dimensional (2D) semi-conductors. But, so far, they have hardly been used because they degrade when they come into contact with air during manufacturing. Now, a new technique allows the sensitive material to be integrated into electronic components without losing its desired properties. The method, which has been described in the journal ACS Applied Materials and Interfaces (DOI: 10.1021/acsami.9b13442), was developed by Himani Arora, a doctoral candidate in physics at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR).

"We managed to make encapsulated transistors based on indium selenide and gallium selenide," reports Dr. Artur Erbe, head of the "Transport in Nanostructures" group at HZDR's Institute of Ion Beam Physics and Materials Research. "The encapsulation technique protects the sensitive layers from external impacts and preserves its performance." For encapsulation, the scientists use hexagonal boron nitride (hBN). It is ideal for the purpose because it can be formed into a thin layer and is also inert, so it does not respond to its environment.

Indium and gallium selenide are seen as promising candidates for various applications in areas such as high-frequency electronics, optoelectronics and sensor technology. These materials can be made into flake-like films only 5 to 10 atomic layers thick which can be used to produce electronic components of extremely small dimensions.

During encapsulation, the 2D flakes are arranged between two layers of hexagonal boron nitride and thus completely enclosed. The upper hBN layer is responsible for outward insulation, the lower one for maintaining distance to the substrate. The technique was originally developed by the group of James Hone at Columbia University in New York where Himani Arora learned it during a research visit. The doctoral student subsequently continued to work on the topic at HZDR's International Helmholtz Research School (IHRS) NanoNet.

Applying contacts without lithography

One of the particularly big challenges posed by the encapsulation technique was to apply external contacts to the semi-conductors. The usual method of evaporation deposition using a photomask is unsuitable because during this process the sensitive materials come into contact both with chemicals and with air and thus degrade. So, the HZDR researchers employed a lithography-free contacting technique involving metal electrodes made of palladium and gold embedded in hBN foil. This means the encapsulation and the electric contact with the 2D layer underneath can be achieved concurrently.

"In order to produce the contacts, the desired electrode pattern is etched onto the hBN layer so that the holes created can be filled with palladium and gold by means of electron beam evaporation," Himani Arora explains. "Then you laminate the hBN foil with the electrodes onto the 2D flake." When there are several contacts on an hBN wafer, contact with several circuits can be made and measured. For later application, the components will be stacked in layers.

As the experiments have shown, complete encapsulation with hexagonal boron nitride protects the 2D layers from decomposition and degradation and ensures long-term quality and stability. The encapsulation technique developed at HZDR is robust and easy to apply to other complex 2D materials. This opens up new paths for fundamental studies as well as for integrating these materials into technological applications. The new two-dimensional semi-conductors are cheap to produce and can be used in various applications, such as detectors that measure light wavelengths. Another example would be as couplers between light and electric current, generating light or switching transistors using light.

Credit: 
Helmholtz-Zentrum Dresden-Rossendorf

Neural effects of acute stress on appetite: a magnetoencephalography study

image: figure1

Image: 
Osaka City University

Stress is prevalent in modern society and can affect human health through its effects on appetite. However, knowledge about the neural mechanisms underlying the alteration of subjective appetite caused by acute stress in humans remains limited. We focused on the effects on appetite of stress caused by anticipating critical personal events, such as school examinations and public speaking engagements, and aimed to clarify the neural mechanisms by which acute stress affects appetite in healthy, non-obese males during fasting.

In total, 22 fasted male volunteers participated in two experiments (stress and control conditions) on different days. The participants performed a stress-inducing speech-and-mental-arithmetic task under both conditions and then viewed images of food, during which their neural activity was recorded using magnetoencephalography (MEG). In the stress condition, the participants were told that they would have to perform the speech-and-mental-arithmetic task again after viewing the food images; however, this second task was never actually performed. Subjective levels of stress and appetite were then assessed using a visual analog scale. Electrocardiography was performed to assess an index of heart rate variability reflecting sympathetic nerve activity.

The findings showed that subjective stress levels and sympathetic nerve activity increased during the MEG recording in the stress condition, whereas appetite gradually increased during the MEG recording only in the control condition. The decrease in alpha (8-13 Hz) band power in the frontal pole caused by viewing the food images was greater in the stress condition than in the control condition, suggesting that acute stress can suppress the increase in appetite and that this suppression is associated with altered neural activity in the frontal pole.
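
As a rough illustration of the kind of measure involved (a minimal sketch, not the study's actual analysis pipeline), alpha band power can be estimated from a single sensor's time series by integrating its power spectral density over 8-13 Hz; all sampling parameters and the synthetic signal below are invented for the example.

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(signal, fs, band=(8.0, 13.0)):
    """Estimate band power (signal units^2) via Welch's power spectral density."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))  # ~0.5 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

# Synthetic example: a 10 Hz alpha-like rhythm plus noise, sampled at 1000 Hz
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
meg_like = 1e-12 * np.sin(2 * np.pi * 10 * t) + 1e-13 * rng.standard_normal(t.size)

print(f"alpha power: {alpha_band_power(meg_like, fs):.3e}")
```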

The frontal pole is reported to be involved in thinking about and planning future actions, as well as in the cognitive control of appetite. We therefore speculate that, in our study, the participants' expectation of the forthcoming speech-and-mental-arithmetic task activated the frontal pole for thinking and planning, and that this activation interfered with the appetite-regulating processes also subserved by the frontal pole, resulting in the suppression of appetite under the stress induced in our experiment. These findings provide valuable clues for a further understanding of the neural mechanisms by which acute stress affects appetite and could contribute to the development of methods to prevent and reduce the adverse health effects of stress.

Credit: 
Osaka City University

Mountain vegetation dries out Alpine water fluxes

image: In an average summer, less water evaporates through vegetation; in a hot summer accompanied by dry conditions, the opposite is true, which amplifies the lack of water in streams.

Image: 
ETH Zurich / Michael Stuenzi

Until now, scientists assumed that most plants suffer from water stress during droughts: they close their stomata to retain water, stop growing and, in the worst case, wither. As a result, there is a decrease in evaporation and transpiration of water from vegetation, soil and water surfaces - a process that experts call evapotranspiration. "But despite dry and warm conditions, droughts are not occurring at higher altitudes in, say, forested mountain areas," says Simone Fatichi, senior assistant at the ETH Zurich Institute of Environmental Engineering.

Analyses of observations and computer model simulations from the heatwave of summer 2003 (and recent hot and dry summers) indicate that, during droughts, mountain forests and grasslands at higher elevations release even more water into the air than in "normal" periods of growth with average temperatures and sufficient precipitation.

This is because warmth and abundant sunshine promote vegetation growth. But at the same time, the vegetation has a higher metabolism, and so it essentially sucks every last drop of water from the ground in order to grow. For that reason, evapotranspiration was much greater than expected at higher altitudes during the droughts studied.

Green water predominates in dry and warm summers

Fatichi and his colleagues have now investigated this phenomenon across large areas in the European Alps for the first time, with the help of a computer model. This enabled them to quantify the share of "green" water, i.e. water that reaches the air through evapotranspiration, in proportion to that of "blue" water, the water that runs off into streams, rivers and lakes.
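
In rough terms (a standard catchment water balance, not a formula quoted from the study), the partition the researchers quantified can be written, neglecting changes in storage, as

\[
P \;\approx\; ET + Q,
\]

where \(P\) is annual precipitation, \(ET\) is evapotranspiration (the "green" water) and \(Q\) is runoff (the "blue" water); a hotter, drier summer shifts the split towards \(ET\) at the expense of \(Q\).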

The researchers populated their model with data recorded at more than 1,200 stations throughout the Alpine region that measure, among other things, meteorological parameters and river runoff.

On the basis of their simulation, Fatichi and his doctoral student Theodoros Mastrotheodoros calculated that in forested mountain areas 1,300-3,000 metres above sea level, evapotranspiration rates were above average in large parts of the Alps during the heatwave of 2003.

That summer, Alpine water fluxes were on average only half their usual volume and, according to the ETH researchers' calculations, one third of this runoff deficit was attributable to evapotranspiration. Fatichi emphasises that "it is therefore the vegetation at this altitude that was instrumental in draining the half-dry rivers and streams."

Global warming amplifies evapotranspiration

As part of their investigation, the researchers also simulated a temperature rise of 3 degrees in the Alpine region - a scenario that could become reality by the end of this century and that could increase annual evaporation rates by as much as a further 6 percent. In terms of precipitation, the additional evaporated water would be comparable to an average annual decline across the Alps of 45 litres per square metre - which corresponds to 3-4 percent of annual precipitation. This shows that at the annual scale - in contrast to warm summers - precipitation and its changes are by far the most important factors determining runoff volumes.
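
A quick back-of-the-envelope check (my arithmetic, not the researchers'): 45 litres per square metre is 45 mm of water, so the stated 3-4 percent share implies an average annual Alpine precipitation of roughly

\[
\frac{45\ \text{mm}}{0.03\text{--}0.04} \;\approx\; 1100\text{--}1500\ \text{mm per year},
\]

which is broadly consistent with typical values for the Alpine region.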

As a result, discharge volumes in rivers and streams will come under even greater pressure in future. "As summers become warmer and drier, we'll see a shift towards more green and less blue water," Fatichi says. In the long term, this will endanger the supply of water to the lower-lying regions in and around the Alps, he explains.

Several factors play into this scenario: global warming is expected to result in a general reduction in precipitation, glaciers are set to dwindle and, in dry and warm summers, evapotranspiration will intensify the problem of lower runoff volumes.

Such circumstances could cast some doubt over the Alps' role as the "water towers of Europe". Four of Europe's major rivers, the Rhine, Rhône, Inn and Po, originate in the Alps. Together, they supply around 170 million people with water and play a crucial role in power generation and agriculture. A large part of Europe therefore depends on the blue water from the Alps, leading Fatichi to ask the question, "can we really afford to allow the volumes of this water to decline?"

Credit: 
ETH Zurich

Newspaper 'hierarchy' of injury glamorises war

British newspapers are routinely glamorising combat by creating a moral separation between combat and non-combat injuries, according to new research published in the journal Media, War and Conflict.

Academics from Anglia Ruskin University's Veterans and Families Institute for Military Social Research (VFI) examined the reporting of injuries sustained by British military personnel during the height of the UK's war in Afghanistan in 2009, and a comparison period in 2014, in all daily and Sunday UK national newspapers.

They found that representation of injured personnel differed substantially between articles reporting on combat and non-combat injuries, with wounds suffered in battle being framed as more 'heroic' than those sustained in other situations, such as during training or in road traffic accidents.

Newspapers tended to provide factual descriptions of non-combat injuries, but in reports of wounds suffered in battle, there was a tendency to add emotive terms, such as "horrific" or "harrowing", and provide more details and context.

Figures from the Ministry of Defence show that 2,201 personnel were admitted to the Field Hospital at Camp Bastion between 2009 and 2014 with combat injuries. During the same period, 2,019 were admitted as a result of non-battle injuries, including crushing accidents, accidental small arms fire, slips, trips and falls, demonstrating the wide variety of injuries sustained by military personnel during times of conflict.

Lead author Dr Nick Caddick, Senior Research Fellow at Anglia Ruskin University (ARU), said: "The media plays a key role in how the public understands war and it generates and amplifies the heroic rhetoric that sticks to soldiers and veterans during times of conflict.

"The consequences of media framing are rarely benign and can skew the perception of combat. Media constructs and reinforces powerful meanings about particular topics or social groups, such as injured soldiers and veterans.

"We found that reporting describing combat injuries was highly charged, sensational and emotive. At the same time, bland, factual descriptions were used when reporting on military personnel serving in Afghanistan whose injuries were not sustained on the battlefield. Glamorising combat injuries as a more worthy form of heroic sacrifice obscures the reality that there is nothing glamorous about the often hideous day-to-day realities of war and its aftermath.

"It is worth emphasising that deployment to a warzone is not the only military activity that carries a risk of death and injury. Using language in this way may create risks to the mental health of soldiers and veterans who have received non-combat injuries, as they may feel that they are somehow less worthy or valued by the population than those who have been wounded in battle."

Credit: 
Anglia Ruskin University

A nanoscale lattice of palladium and yttrium makes for a superlative carbon-linking catalyst

image: Proposed reaction paths for the Suzuki cross-coupling process.

Image: 
Nature Communications

A group of materials scientists at Tokyo Institute of Technology has shown that a palladium-based intermetallic electride, Y3Pd2, can improve the efficiency of carbon-carbon cross-coupling reactions. Their findings point the way to a more sustainable world through catalysis.

Researchers at Tokyo Institute of Technology (Tokyo Tech) have developed an electride[1] material composed of yttrium and palladium (Y3Pd2) as a catalyst for Suzuki cross-coupling reactions. These reactions are among the most widely used for the formation of carbon-carbon bonds in organic and medicinal chemistry.

Y3Pd2 was predicted to be an effective electride based on theoretical calculations, explains Tian-Nan Ye, an assistant professor at Tokyo Tech's Materials Research Center for Element Strategy and first author of the study published in Nature Communications. "In an electride, anionic electrons are trapped in interstitial sites and typically host a strong electron donation effect," he says. "This feature motivated us to apply Y3Pd2 as a Suzuki coupling reaction catalyst as the reaction barrier of the rate-determining step can be suppressed through electron transfer from the electride to the substrates."

In lab tests, the catalytic activity of Y3Pd2 was shown to be ten times higher than that achieved by a pure Pd catalyst, and the activation energy was reduced by 35%.
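
As a rough guide to why a lower activation energy matters so much (a textbook Arrhenius argument, not a calculation from the paper), the rate constant depends exponentially on the barrier:

\[
k = A\,e^{-E_a/RT}, \qquad
\frac{k_{\mathrm{Y_3Pd_2}}}{k_{\mathrm{Pd}}} \approx \frac{A_{\mathrm{Y_3Pd_2}}}{A_{\mathrm{Pd}}}\, e^{\left(E_{a,\mathrm{Pd}} - E_{a,\mathrm{Y_3Pd_2}}\right)/RT},
\]

so even a modest reduction in \(E_a\) can translate into a several-fold increase in rate, with differences in the pre-exponential factor and the number of accessible active sites also contributing.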

What makes Y3Pd2 so efficient and stable is the successful incorporation of active Pd atoms in an intermetallic electride lattice. "The stabilized Pd active sites in our crystalline lattice solve the problems of aggregation and leaching that have commonly occurred in other systems reported so far," says Ye. "This makes our catalyst extremely robust and stable for long-term usage, without deactivation."

The reusability of the catalyst (up to 20 cycles) and the relative ease with which Pd atoms can be recovered represents an important step to achieving greater sustainability in the chemical industry.

The idea of combining yttrium and palladium was sparked by the work of Jens Kehlet Nørskov, now at Stanford University, says Ye. In 2009, Nørskov and co-workers published groundbreaking findings on catalysts made of platinum alloyed with early transition metals, including yttrium. Since then, many groups have been investigating new combinations of intermetallic compounds (consisting of a rare earth metal and an active transition metal), with the goal of developing much more efficient catalysts for the chemical industry.

Through a series of calculations and experimental studies, Ye and his team showed that Y3Pd2 has a strong electron-donating effect associated with a low work function and high carrier density -- features that enable the catalyst to work at a much lower activation energy than that of a pure Pd catalyst.

One remaining challenge is the relatively low surface area of Y3Pd2. To tackle this issue, the team used a pulverizing technique called ball-milling[2] and compared catalytic activity using different solvents such as heptane and ethanol. In all of the samples investigated so far, the team found that the Suzuki coupling reaction rate increased in proportion to the increase in surface area. These initial results are "very promising," says Ye, suggesting that "catalytic performance could be improved through further nanocrystallization."

Credit: 
Tokyo Institute of Technology

To make amino acids, just add electricity

image: A demonstration flow reactor constructed by researchers at Kyushu University continuously converts source materials into amino acids through a reaction driven by electricity. By choosing the right combination of electrocatalyst and source materials, the researchers achieved highly efficient synthesis of amino acids. This method for producing amino acids is less resource intensive than current methods, and similar methods may one day be used for providing people living in space with some of the essential nutrients they need to survive.

Image: 
Szabolcs Arany, Kyushu University

New research from Kyushu University in Japan could one day help provide humans living away from Earth some of the nutrients they need to survive in space or even give clues to how life started.

Researchers at the International Institute for Carbon-Neutral Energy Research reported a new process using electricity to drive the efficient synthesis of amino acids, opening the door for simpler and less-resource-intensive production of these key components for life.

In addition to being the basic building blocks of proteins, amino acids are also involved in various functional materials such as feed additives, flavor enhancers, and pharmaceuticals.

However, most current methods for artificially producing amino acids are based on fermentation using microbes, a process that is time and resource intensive, making it impractical for production of these vital nutrients in space-limited and resource-restricted conditions.

Thus, researchers have been searching for efficient production methods driven by electricity, which can be generated from renewable sources, but efforts so far have used electrodes of toxic lead or mercury or expensive platinum and resulted in low efficiency and selectivity.

Takashi Fukushima and Miho Yamauchi now report in Chemical Communications that they succeeded in efficiently synthesizing several types of amino acids using abundant materials.

"The overall reaction is simple, but we needed the right combination of starting materials and catalyst to get it to actually work without relying on rare materials," says Yamauchi.

The researchers settled on a combination of titanium dioxide as the electrocatalyst and organic acids known as alpha-keto acids as the key source material. Titanium dioxide is abundantly available on Earth, and alpha-keto acids can be easily extracted from woody biomass.

Placing the alpha-keto acid and a source of nitrogen, such as ammonia or hydroxylamine, in a water-based solution and running electricity through it using two electrodes, one of which was titanium dioxide, led to synthesis of seven amino acids--alanine, glycine, aspartic acid, glutamic acid, leucine, phenylalanine, and tyrosine--with high efficiency and high selectivity even under mild conditions.
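
As an illustrative example (a simplified overall scheme added here, not an equation from the paper), the electrochemical reductive amination of pyruvic acid, the alpha-keto acid corresponding to alanine, with ammonia can be written as

\[
\mathrm{CH_3COCOOH} + \mathrm{NH_3} + 2\,\mathrm{H^+} + 2\,e^- \;\longrightarrow\; \mathrm{CH_3CH(NH_2)COOH} + \mathrm{H_2O},
\]

with the other amino acids following analogously from their corresponding alpha-keto acids.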

Hydrogen, which is also needed as part of the reaction, was generated during the process as a natural result of running electricity between electrodes in water.

In addition to demonstrating the reaction, the researchers also built a flow reactor that can electrochemically synthesize the amino acids continuously, indicating the possibilities for scaling up production in the future.

"We hope that our approach will provide useful clues for the future construction of artificial carbon and nitrogen cycles in space," comments Yamauchi.

"Electrochemical processes are also believed to have played a role in the origin of life by producing fundamental chemicals for life through non-biological pathways, so our findings may also contribute to the elucidation of the mystery of the creation of life," she adds.

Credit: 
Kyushu University

NEJM: transcatheter aortic valve replacement shows similar safety outcomes as open-heart surgery

image: Raj Makkar, MD, led a multicenter national study comparing outcomes for minimally invasive heart valve replacement to open-heart surgery.

Image: 
Photo by Cedars-Sinai

LOS ANGELES (Jan. 29, 2020) -- A new study from the Smidt Heart Institute at Cedars-Sinai and other centers nationwide shows that patients who underwent minimally invasive transcatheter aortic-valve replacement (TAVR) had similar key 5-year clinical outcomes of death and stroke as patients who had traditional open-heart surgery to replace the valve.

The study, known as PARTNER 2A, compares long-term outcomes of the two different approaches to treating aortic stenosis, a common heart problem affecting some 12% of people over 65. The research, involving more than 2,000 patients, was published today by The New England Journal of Medicine and will appear in the Feb. 27 print edition.

"The results of this study are encouraging because TAVR was comparable in terms of outcomes," said study lead author Raj Makkar, MD, vice president of Cardiovascular Innovation and Intervention at Cedars-Sinai. "These findings allow patients to have more peace of mind and undergo a less invasive procedure. Unlike surgery, TAVR is now often done without or with minimal anesthesia and with next-day discharge from the hospital."

During the TAVR procedure, an interventional cardiologist inserts a replacement valve into a catheter and guides it through an artery to the patient's heart, where a balloon is expanded to press the valve into place.

According to the American Heart Association, nearly 1.5 million people in the U.S. have aortic valve stenosis, a narrowing or hardening of the aortic valve caused by calcium buildup on the heart valve flaps. Patients with a severe form of the disease experience symptoms including shortness of breath, chest pain, fatigue and fainting. Many cannot even walk enough to perform basic daily activities.

Aortic stenosis often results in death within three years of diagnosis. Replacing the aortic valve - either through the TAVR procedure or surgery - restores patients' life expectancy to normal. During 2019, more than 100,000 patients underwent TAVR procedures in the U.S., compared to about 50,000 patients who had aortic valve replacement with open-heart surgery.

Study investigators at 57 medical centers followed approximately 1,000 aortic stenosis patients who underwent the minimally invasive procedure and 1,000 patients who underwent open-heart surgery. The patients were followed for 5 years, beginning in 2011. Investigators found:

There was no statistical difference between the two groups in mortality rates or disabling stroke rates;

At five years follow-up, both TAVR patients and open-heart surgery patients experienced similar improvements in disease-specific quality-of-life measures;

The rate of aortic valve reintervention over 5 years was 3.2% for TAVR patients compared to 0.8% with surgery. Nonetheless, reintervention with TAVR was associated with lower mortality than surgery.

Patients who had TAVR performed using a transfemoral approach (from the groin to the heart) and open-heart surgery patients both had better outcomes than patients who underwent TAVR performed through an incision in the chest area.

"This landmark study clarifies the long-term pros and cons of surgical versus percutaneous interventions for the aortic valve," said Eduardo Marbán, MD, PhD, executive director of the Smidt Heart Institute. "As one of the top enrolling sites, the Smidt Heart Institute is particularly proud of Dr. Makkar's leadership of this exceptionally talented team of clinical investigators."

Makkar believes the results would be even more striking with the TAVR valve now in use by physicians, thanks to improvements in technology and implantation techniques. Newer valves come with a skirt designed to reduce leakage around the valve and are available in more sizes, which is important for correctly sizing the valve implant.

Credit: 
Cedars-Sinai Medical Center

It's closeness that counts: how proximity affects the resistance of graphene

image: View into the scanning tunnelling microscope showing its metal tip very close to a surface under investigation

Image: 
Georg A Traeger/Anna Sinterhauf - University of Göttingen

Graphene is often seen as the wonder material of the future. Scientists can now grow perfect graphene layers on square centimetre-sized crystals. A research team from the University of Göttingen, together with the Chemnitz University of Technology and the Physikalisch-Technische Bundesanstalt Braunschweig, has investigated the influence of the underlying crystal on the electrical resistance of graphene. Contrary to previous assumptions, the new results show that the process known as the 'proximity effect' varies considerably at a nanometre scale. The results have been published in Nature Communications.

The composition of graphene is very simple. It is a single atomic layer of carbon atoms arranged in a honeycomb structure. The three-dimensional form is already an integral part of our everyday lives: we see it in the lead of an ordinary pencil for instance. However, the two-dimensional material graphene was not synthesized in the laboratory until 2004. To determine the electrical resistance of graphene at the smallest scale possible, the physicists used a "scanning tunnelling microscope". This can make atomic structures visible by scanning the surface with a fine metal tip. The team also used the tip of the scanning tunnelling microscope to measure the voltage drop and thus the electrical resistance of the tiny graphene sample.

Depending on the distance that they measured, the researchers determined very different values for the electrical resistance. They cite the proximity effect as the reason for this. "The spatially varying interaction between graphene and the underlying crystal means that we measure different electrical resistances depending on the exact position," explains Anna Sinterhauf, first author and doctoral student at the Faculty of Physics at the University of Göttingen.

At low temperatures of 8 Kelvin, which is around minus 265 degrees Centigrade, the team found variations in local resistance of up to 270 percent. "This result suggests that the electrical resistance of graphene layers epitaxially grown on a crystal surface cannot simply be worked out from an average taken from values measured at a larger scale," explains Dr Martin Wenderoth, head of the working group. The team assumes that the proximity effect might also play an important role for other two-dimensional materials.

Credit: 
University of Göttingen

Hybrid technique to produce stronger nickel for auto, medical, manufacturing

image: Purdue University innovators have created a hybrid technique to fabricate a new form of nickel.

Image: 
Qiang Li/Purdue University

WEST LAFAYETTE, Ind. - Nickel is a widely used metal in the manufacturing industry for both industrial and advanced material processes. Now, Purdue University innovators have created a hybrid technique to fabricate a new form of nickel that may help the future production of lifesaving medical devices, high-tech devices and vehicles with strong corrosion-resistant protection.

The Purdue technique involves a process in which high-yield electrodeposition is applied to certain conductive substrates. The Purdue team's work is published in the December edition of Nanoscale.

One of the biggest challenges manufacturers face with nickel is dealing with the regions within the metal where the crystalline grains intersect, known as grain boundaries. Conventional grain boundaries can strengthen metals where high strength is demanded.

However, they often act as stress concentrators and are vulnerable sites for electron scattering and corrosion attack. As a result, conventional boundaries often decrease ductility, corrosion resistance and electrical conductivity.

Another type of boundary, much less common in metals such as nickel because of its high stacking-fault energy, is the twin boundary. The unique nickel, in a single-crystal-like form, contains a high density of ultrafine twin structures but few conventional grain boundaries.

The Purdue researchers have shown that this particular nickel offers improved strength and ductility as well as better corrosion resistance. Those properties are important for manufacturers across several industries - including automotive, gas, oil and micro-electro-mechanical devices.

"We developed a hybrid technique to create nickel coatings with twin boundaries that are strong and corrosion-resistant," said Xinghang Zhang, a professor of materials engineering in Purdue's College of Engineering. "We want our work to inspire others to invent new materials with fresh minds."

The Purdue researchers' solution is to use a single-crystal substrate as a growth template, in conjunction with a designed electrochemical recipe, to promote the formation of twin boundaries and inhibit the formation of conventional grain boundaries. The high-density twin boundaries yield a high mechanical strength exceeding 2 GPa, a low corrosion current density of 6.91 × 10⁻⁸ A cm⁻², and a high polarization resistance of 516 kΩ.

"Our technology enables the manufacturing of nanotwinned nickel coatings with high-density twin boundaries and few conventional grain boundaries, which leads to superb mechanical, electrical properties and high corrosive resistance, suggesting good durability for applications at extreme environments," said Qiang Li, a research fellow in materials engineering and member of the research team. "Template and specific electrochemical recipes suggest new paths for boundary engineering and the hybrid technique can be potentially adopted for large-scale industrial productions."

Potential applications for this Purdue technology include the semiconductor and automotive industries, which require metallic materials with advanced electric and mechanical properties for manufacturing. The nanotwinned nickel can be applied as corrosion-resistant coatings for the automobile, gas and oil industries.

The new nickel hybrid technique could potentially be integrated into the micro-electro-mechanical systems (MEMS) industry after careful engineering design. MEMS medical devices are used in critical care departments and other hospital areas to monitor patients.

The relevant pressure sensors and other functional small-scale components in MEMS require the use of materials with superior mechanical and structural stability and chemical reliability.

Credit: 
Purdue University

Blind as a bat? The genetic basis of echolocation in bats and whales

image: Many species of bat and toothed whales (including dolphins) evolved echolocation independently.

Image: 
Vishu Vishuma (left), Darin Ashby (right)

Clicks, squeaks, chirps, and buzzes...though they may be difficult to distinguish to our ears, such sounds are used by echolocating animals to paint a vivid picture of their surroundings. By generating a sound and then listening to how the sound waves bounce off of objects around them, these animals are able to "see" using sound. While a number of species engage in some form of echolocation, including some birds, shrews, and even humans, the echolocation systems of bats and toothed whales (including dolphins, porpoises, killer whales, and sperm whales) are exquisitely sophisticated. Echolocation evolved independently in these animals (Fig. 1) under conditions of poor visibility--the night sky for bats and deep underwater for toothed whales--enabling them to hunt for prey and navigate in complete darkness. It is a fascinating example of convergent evolution, the process by which distantly related organisms evolve similar features or adaptations. To better understand how echolocation evolved in these species, a new study in Genome Biology and Evolution, titled "Evolutionary basis of high-frequency hearing in the cochleae of echolocators revealed by comparative genomics," takes advantage of advances in genomic analysis to investigate the origin and evolution of high-frequency hearing, an adaptation that allows echolocators to perceive ultrasonic signals.

The interpretation of sound begins in the cochlea, a hollow, spiral-shaped bone in the inner ear that converts sound vibrations into nerve impulses. Because gaining the ability to hear very high frequencies likely required changes to the cochlea, the authors of the study, led by Keping Sun and Jiang Feng from Northeast Normal University and Jilin Agricultural University in China, analyzed the genes expressed in the cochleae of three bat species that use different forms of echolocation: constant-frequency, frequency-modulated, and tongue-click echolocation (Wang et al., 2020). They then compared these gene sequences to those from 16 other mammals, including other echolocating and non-echolocating bats, as well as both echolocating and non-echolocating whales. According to Sun and Feng, this allowed them to "provide for the first time a comprehensive understanding of the genetic basis underlying high-frequency hearing in the cochleae of echolocating bats and whales."

Through their comparative analysis, the researchers identified 34 genes involved in hearing or auditory perception that showed evidence for positive selection in echolocating species. This included 12 genes involved in bone formation that may help regulate the bone density of the cochlea to enable high-frequency hearing. It also included several genes with antioxidant activity that may help protect the ear from damage and hearing loss caused by chronic exposure to high-intensity noise. (Bat calls can reach 120 decibels, louder than a rock concert and above the human pain threshold. It is lucky for us that they are too high-pitched for us to hear.)

The study also revealed large numbers of parallel or convergent mutations between pairs of echolocating mammals, where the same genetic changes had occurred independently in distantly related echolocators. Interestingly, there were significantly more of these parallel/convergent mutations between pairs of echolocators than when comparing an echolocator to an equally distant non-echolocator (Fig. 2). This suggests that some of these mutations may play a key role in high-frequency hearing and echolocation.
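
To make the idea of parallel/convergent mutations concrete, here is a minimal sketch (an illustration only, not the authors' pipeline): a site counts as parallel/convergent between two echolocating lineages if both carry the same amino acid that differs from the inferred ancestral state. The sequences and the reference to a prestin-like protein below are invented for the example.

```python
# Toy sketch: count aligned sites where two echolocating lineages share the
# same derived residue relative to an inferred ancestral sequence -- a
# simplified proxy for parallel/convergent substitutions.

def convergent_sites(seq_a: str, seq_b: str, ancestral: str) -> int:
    """Count aligned positions where seq_a and seq_b share a derived residue."""
    assert len(seq_a) == len(seq_b) == len(ancestral)
    count = 0
    for a, b, anc in zip(seq_a, seq_b, ancestral):
        if "-" in (a, b, anc):           # skip alignment gaps
            continue
        if a == b and a != anc:          # same change in both lineages
            count += 1
    return count

# Hypothetical aligned fragments of a hearing gene (e.g. a prestin-like protein)
ancestral_seq = "MKTLLVAGGA"
echo_bat      = "MKSLLVAGGA"   # derived S at position 3
echo_whale    = "MKSLLVAGGA"   # same derived S -> convergent
non_echo      = "MKTLLIAGGA"   # different change -> not counted

print(convergent_sites(echo_bat, echo_whale, ancestral_seq))   # 1
print(convergent_sites(echo_bat, non_echo, ancestral_seq))     # 0
```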

As noted by Sun and Feng, confirming such a hypothesis requires functional assays for each candidate gene that may underlie echolocation, such as suppressing the expression of hearing genes using RNA interference technology. Unfortunately, such experiments can be costly and both time- and labor-intensive (and in the case of whales, nearly impossible to implement). Despite the remaining challenges, however, the authors are grateful that comprehensive studies like theirs are now possible thanks to recent advances in genomic technologies. "This is an exciting time for the study of adaptive evolution of echolocation in mammals. More and more genetic and genomic data sets have been published, providing insights into the evolutionary basis of echolocation."

Credit: 
SMBE Journals (Molecular Biology and Evolution and Genome Biology and Evolution)

An ultrafast microscope for the quantum world

image: Resolution taken to the extreme: Using a combination of ultrashort laser pulses (red) and a scanning tunnelling microscope, researchers at the Max Planck Institute for Solid State Research are filming processes in the quantum world. They focus the laser flashes on the tiny gap between the tip of the microscope and the sample surface, thus resolving the tunnelling process in which electrons (blue) overcome the gap between the tip and the sample. In this way, they achieve a temporal resolution of several hundred attoseconds when imaging quantum processes such as an electronic wave packet (coloured wave) with atomic spatial resolution.

Image: 
Dr. Christian Hackenberger

The operation of components for future computers can now be filmed in HD quality, so to speak. Manish Garg and Klaus Kern, researchers at the Max Planck Institute for Solid State Research in Stuttgart, have developed a microscope for the extremely fast processes that take place on the quantum scale. This microscope - a sort of HD camera for the quantum world - allows the precise tracking of electron movements down to the individual atom. It should therefore provide useful insights when it comes to developing extremely fast and extremely small electronic components, for example.

The processes taking place in the quantum world represent a challenge for even the most experienced of physicists. For example, the things taking place inside the increasingly powerful components of computers or smartphones not only happen extremely quickly but also within an ever-smaller space. When it comes to analysing these processes and optimising transistors, for example, videos of the electrons would be of great benefit to physicists. To achieve this, researchers need a high-speed camera that exposes each frame of this "electron video" for just a few hundred attoseconds. An attosecond is a billionth of a billionth of a second; in that time, light can only travel the length of a water molecule. For a number of years, physicists have used laser pulses of a sufficiently short length as an attosecond camera.
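
The scale is easy to check (simple arithmetic, not taken from the article): in one attosecond, light covers

\[
d = c\,\Delta t \approx 3\times10^{8}\ \mathrm{m\,s^{-1}} \times 10^{-18}\ \mathrm{s} = 3\times10^{-10}\ \mathrm{m} = 0.3\ \mathrm{nm},
\]

which is roughly the diameter of a water molecule.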

In the past, however, an attosecond image delivered only a snapshot of an electron against what was essentially a blurred background. Thanks to the work of Klaus Kern, Director at the Max Planck Institute for Solid State Research, and Manish Garg, a scientist in Kern's department, researchers can now also identify precisely where the filmed electron is located, down to the individual atom.

Ultrashort laser pulses combined with a scanning tunnelling microscope

To do this, the two physicists use ultrashort laser pulses in conjunction with a scanning tunnelling microscope. The latter achieves atomic-scale resolution by scanning a surface with a tip that itself is ideally made up of just a single atom. Electrons tunnel between the tip and the surface - that is, they cross the intervening space even though they actually don't have enough energy to do so. As the effectiveness of this tunnelling process depends strongly on the distance the electrons have to travel, it can be used to measure the space between the tip and a sample and therefore to depict even individual atoms and molecules on a surface. Until now, however, scanning tunnelling microscopes did not achieve sufficient temporal resolution to track electrons.
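
The strong distance dependence mentioned above is usually expressed by the textbook relation for the tunnelling current (a standard result, not specific to this work):

\[
I \;\propto\; e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2 m \phi}}{\hbar},
\]

where \(d\) is the tip-sample distance, \(m\) the electron mass and \(\phi\) the effective barrier height. For typical work functions of a few electronvolts, \(\kappa \approx 1\ \text{Å}^{-1}\), so the current changes by roughly an order of magnitude for every ångström change in distance.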

"By combining a scanning tunnelling microscope with ultrafast pulses, it was easy to use the advantages of the two methods to compensate for their respective disadvantages," says Manish Garg. The researchers fire these extremely short pulses of light at the microscope tip - which is positioned with atomic precision - to trigger the tunnelling process. As a result, this high-speed camera for the quantum world can now also achieve HD resolution.

Paving the way for light-wave electronics, which is millions of times faster

With the new technique, physicists can now measure exactly where electrons are at a specific time down to the individual atom and to an accuracy of a few hundred attoseconds. For example, this can be used in molecules that have had an electron catapulted out of them by a high-energy pulse of light, leading the remaining negative charge carriers to rearrange themselves and possibly causing the molecule to enter into a chemical reaction with another molecule. "Filming electrons in molecules live, and on their natural spatial and temporal scale, is vital in order to understand chemical reactivity, for example, and the conversion of light energy within charged particles, such as electrons or ions," says Klaus Kern, Director at the Max Planck Institute for Solid State Research.

Moreover, the technique not only allows researchers to track the path of electrons through the processors and chips of the future, but can also lead to a dramatic acceleration of the charge carriers: "In today's computers, electrons oscillate at a frequency of a billion hertz," says Klaus Kern. "Using ultrashort light pulses, it may be possible to increase their frequency to a trillion hertz." With this turbo booster for light waves, researchers could clear the way for light-wave electronics, which is millions of times faster than current computers. Therefore, the ultrafast microscope not only films processes in the quantum world, but also acts as the Director by interfering with these processes.

Credit: 
Max-Planck-Gesellschaft

Swing feel in the lab

image: Jazz is not only about the groove, but also the swing. Music experts are still debating what is so special about this swing feel. As an interdisciplinary team of Göttingen-based researchers recently discovered, microtiming deviations play no role in this.

Image: 
unsplash

In 1931, Duke Ellington and Irving Mills even dedicated a song to the phenomenon of swing, which they called "It Don't Mean a Thing, If It Ain't Got That Swing". Yet, to this day, the question of what exactly makes a jazz performance swing has not really been answered. A team drawn from the Max Planck Institute for Dynamics and Self-Organization in Göttingen and the University of Göttingen recently carried out an empirical study into the role played by microtiming in this process - a topic that has hitherto been controversial among music experts and musicologists. Experts refer to tiny deviations from a precise rhythm as "microtiming deviations". The project team has now resolved the controversy about the role of microtiming deviations in the swing feel by using digital jazz piano recordings with manipulated microtiming, which were rated by 160 professional and amateur musicians with respect to swing feel.

Jazz, but also rock and pop music can literally sweep listeners along, causing them to tap their feet involuntarily or move their heads in time with the rhythm. In addition to this phenomenon, which is known as "groove", jazz musicians have been using the concept of swing since the 1930s, not just as a style, but also as a rhythmic phenomenon. However, to this day musicians still find it hard to say what swing actually is. In his introduction "What is Swing?", for example, Bill Treadwell wrote: "You can feel it, but you just can't explain it". Musicians and many music fans possess an intuitive feel for what swing means. But thus far, musicologists have mainly characterized one of its rather obvious features unequivocally: rather than sounding successive eighth notes for the same length of time, the first is held longer than the second (the swing note). The swing ratio, i.e., the duration ratio of these two notes, is often close to 2:1, and it has been found that it tends to get shorter at higher tempos and longer at lower tempos.
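
As a concrete illustration (my arithmetic, not an example from the study): at a tempo of 120 beats per minute, each quarter-note beat lasts 500 ms, so a 2:1 swing ratio divides the pair of eighth notes into roughly

\[
\tfrac{2}{3} \times 500\ \text{ms} \approx 333\ \text{ms} \quad \text{and} \quad \tfrac{1}{3} \times 500\ \text{ms} \approx 167\ \text{ms}.
\]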

Recordings with the original and systematically manipulated timing

Musicians and musicologists also discuss rhythmic fluctuations as one of the particular characteristics of swing. Soloists, for instance, occasionally play distinctly after the beat for short spells, or in a laid-back fashion to use the technical jargon. But is this necessary for the swing feel, and what role do much smaller timing fluctuations play that escape the conscious attention of even experienced listeners? Some musicologists have long held the opinion that it is only thanks to such microtiming deviations (for example between different instruments) that jazz swings. Researchers from the Max Planck Institute for Dynamics and Self-Organization and the University of Göttingen recently came to a different conclusion based on their empirical study. They suggest that jazz musicians feel the swing slightly more when the swing ratio fluctuates as little as possible during a performance.

Dissatisfaction with the fact that the essence of swing remains a mystery was what motivated the researchers, led by Theo Geisel, Emeritus Director of the Max Planck Institute for Dynamics and Self-Organization, to conduct the study: "If jazz musicians can feel it but not precisely explain it", says Geisel, himself a jazz saxophonist, "we should be able to characterize the role of microtiming deviations operationally by having experienced jazz musicians evaluate recordings with the original and systematically manipulated timings".

Microtiming deviations are not an essential component of swing

Accordingly, the team recorded twelve pieces played by a professional jazz pianist over pre-generated, precisely timed bass and drum rhythms and manipulated the timing in three different ways. First, they eliminated all of the pianist's microtiming deviations throughout the piece, i.e. they "quantized" his performance; second, they doubled the microtiming deviations; and in the third manipulation, they inverted them. Thus, if the pianist played a swing note 3 milliseconds before the average swing note for that piece in the original version, the researchers shifted the note by the same amount, i.e. 3 milliseconds behind the average swing note, in the inverted version. Subsequently, in an online survey, 160 professional and amateur musicians rated the extent to which the manipulated pieces sounded natural or flawed and, in particular, the degree of swing in the various versions.
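
One plausible reading of these three manipulations, written out as a minimal sketch (an illustration only, not the study's actual processing code; all timing values are invented):

```python
import numpy as np

# Toy sketch: given note onset times and nominal grid times for one piece,
# build the quantized, doubled and inverted versions described above.
# Deviations are taken relative to the piece's average deviation.

def manipulate_timing(onsets, grid):
    """Return quantized, doubled and inverted versions of the performance."""
    onsets = np.asarray(onsets, dtype=float)
    grid = np.asarray(grid, dtype=float)
    deviations = onsets - grid                 # microtiming deviations
    mean_dev = deviations.mean()               # average deviation for the piece
    return {
        "quantized": grid + mean_dev,                              # no fluctuations
        "doubled":   grid + mean_dev + 2 * (deviations - mean_dev),
        "inverted":  grid + mean_dev - (deviations - mean_dev),
    }

# Example: three swing notes played slightly early/late against the grid
grid_times  = [0.333, 0.833, 1.333]            # seconds
onset_times = [0.330, 0.836, 1.333]            # -3 ms, +3 ms, 0 ms deviations
for name, version in manipulate_timing(onset_times, grid_times).items():
    print(name, np.round(version, 3))
```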

"We were surprised", says Theo Geisel, "because, on average, the participants in the online study rated the quantized versions, i.e. those with no microtiming deviations, as being slightly more swinging than the originals. So, microtiming deviations are not a necessary component of swing". Pieces with doubled microtiming deviations were rated by the survey participants as being the least swinging. "Contrary to our original expectation, inverting the temporal microtiming deviations had a negative influence on the ratings for only two pieces", says York Hagmayer, a psychologist at the University of Göttingen. The amount of swing each participant attributed to the pieces also depended on their individual musical backgrounds. Regardless of the piece and version, professional jazz musicians generally gave slightly lower swing ratings.

At the end of the study, the researchers asked the participants for their opinions on what makes a piece swing. The respondents named further factors such as dynamic interactions between the musicians, accentuation, and the interplay between rhythm and melody. "What became clear was that, whilst rhythm does play a major role, other factors, which should be investigated in further research, are also important", says Annika Ziereis, first author of the paper along with George Datseris.

Credit: 
Max-Planck-Gesellschaft

Researchers rank 'smartest' schools of fish when it comes to travel formations

image: A research team from New Jersey Institute of Technology (NJIT) and New York University (NYU) has showcased a new mathematical model capable of determining what formations give a school's swimmers the biggest advantage when it comes to energy efficiency and speeds, particularly when compared to school-less fishes. This image shows the comparison of school formations and fluid flows examined by the team's model: (a) in-line formation; (b) phalanx; (c) rectangular lattice; (d) diamond lattice.

Image: 
NJIT

The concert of motion that fish schools are famous for isn't merely an elaborate display of synchronized swimming. Their seemingly telepathic collective movement is part of a time-tested strategy for improving the group's chances for survival as a whole, from defense against predators to food-finding and mating.

A study published in Physical Review X is offering new details that show how the aquatic flows created by certain schools of fish can benefit each of their individual members in yet another way -- hydrodynamically.

A research team from New Jersey Institute of Technology (NJIT) and New York University (NYU) has showcased a new mathematical model capable of determining what formations give a school's swimmers the biggest advantage when it comes to energy efficiency and speeds, particularly when compared to school-less fishes.

The researchers say the study offers a physical picture that illustrates how swimmers in fish schools are influenced through the constant connection between each swimmer's flapping wings and the persistent flow vortices generated by the collective.

"There is a lot of scientific literature that has focused on the dynamics of fish schools and social interactions that shape them, such as the need to take up formations for predator avoidance for instance," said Anand Oza, assistant professor at NJIT's Department of Mathematics and one of the study's authors. "Often neglected, however, has been fluid dynamics ... 'can fluid flows actually influence the structure of schools?'. What I find exciting is that with this study we can now quantitatively point to how hydrodynamics can help or even hinder a school."

The team examined four common fish school formation types in motion: in-line (single-file) formations, side-by-side "phalanx" formations, rectangular "lattice" formations, and diamond lattice formations.

By applying experimental data from previous studies conducted at NYU to their model, the team captured a range of subtle hydrodynamic interactions that occur within various fish schools, showing how much energy was exerted by each fish from their flapping movements as they swam within their formation. The team's model also kept track of the forces due to small whirlpool-like vortices the swimmers shed with every stroke, showing how much the fishes were propelled along by vortex flows generated by their schoolmates.

Overall, the team's computer simulations revealed that schools formed in a single-line across (phalanx) received marginal speed and energy savings over solitary swimmers, while in-line and rectangular lattice formations offered substantial improvements. However, the team observed that fish organized in a diamond lattice formation received the greatest hydrodynamic advantage.

"Finding that the diamond formation is best was not altogether surprising, but what we learned is that all diamonds are not equal ... the geometry does matter. Generally, the thinner the diamond formation, the better the performance," explained Oza.

Oza now says their team hopes to develop their model further to study similar dynamics in bird flocks. The results could have engineering applications in energy harvesting and propulsion, perhaps in ways that may be useful for developing more efficient wind farms.

"We need to further validate our model and conduct more tests, but ideally I could see conceptually similar models used to help determine how to arrange wind turbines together to get the best output of energy," said Oza. "We'd also like to use this model to look how vortices and fluid-mediated memory can influence the collective behavior of densely packed or disordered schools and flocks. That is an exciting look forward that hasn't been explored a lot."

Credit: 
New Jersey Institute of Technology