Tech

Simple 'smart' glass reveals the future of artificial vision

image: Researchers: Zongfu Yu (left), Ang Chen (center) and Erfan Khoram (right) developed the concept for a 'smart' piece of glass that recognizes images without any external power or circuits.

Image: 
Photo by Sam Million-Weaver

MADISON -- The sophisticated technology that powers face recognition in many modern smartphones someday could receive a high-tech upgrade that sounds -- and looks -- surprisingly low-tech.

This window to the future is none other than a piece of glass. University of Wisconsin-Madison engineers have devised a method to create pieces of "smart" glass that can recognize images without requiring any sensors or circuits or power sources.

"We're using optics to condense the normal setup of cameras, sensors and deep neural networks into a single piece of thin glass," says UW-Madison electrical and computer engineering professor Zongfu Yu.

Yu and colleagues published details of their proof-of-concept research today in the journal Photonics Research.

Embedding artificial intelligence inside inert objects is a concept that, at first glance, seems like something out of science fiction. However, it's an advance that could open new frontiers for low-power electronics.

Now, artificial intelligence gobbles up substantial computational resources (and battery life) every time you glance at your phone to unlock it with face ID. In the future, one piece of glass could recognize your face without using any power at all.

"This is completely different from the typical route to machine vision," says Yu.

He envisions pieces of glass that look like translucent squares. Tiny strategically placed bubbles and impurities embedded within the glass would bend light in specific ways to differentiate among different images. That's the artificial intelligence in action.

For their proof of concept, the engineers devised a method to make glass pieces that identified handwritten numbers. Light emanating from an image of a number enters at one end of the glass, and then focuses to one of nine specific spots on the other side, each corresponding to individual digits.
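In code terms, the readout amounts to finding which output spot receives the most light. The sketch below is illustrative only, assuming the nine-spot layout described above; the intensity values are invented and this is not the team's software:

```python
import numpy as np

# Hypothetical light intensities measured at the nine output spots of the
# glass; each spot stands for one digit class, as described in the article.
spot_intensities = np.array([0.02, 0.01, 0.05, 0.71, 0.03, 0.04, 0.02, 0.06, 0.04])

def read_out_digit(intensities: np.ndarray) -> int:
    """Return the index of the brightest focal spot.

    The 'computation' happens as light propagates through the glass;
    reading the answer is just seeing where the light ends up.
    """
    return int(np.argmax(intensities))

print(read_out_digit(spot_intensities))  # -> 3: the glass 'saw' a 3
```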

The glass was dynamic enough to detect, in real-time, when a handwritten 3 was altered to become an 8.

"The fact that we were able to get this complex behavior with such a simple structure was really something," says Erfan Khoram, a graduate student in Yu's lab.

Designing the glass to recognize numbers was similar to a machine-learning training process, except that the engineers "trained" an analog material instead of digital codes. Specifically, the engineers placed air bubbles of different sizes and shapes as well as small pieces of light-absorbing materials like graphene at specific locations inside the glass.

"We're accustomed to digital computing, but this has broadened our view," says Yu. "The wave dynamics of light propagation provide a new way to perform analog artificial neural computing"

One advantage of this approach is that the computation is completely passive and intrinsic to the material, meaning one piece of image-recognition glass could be used hundreds of thousands of times.

"We could potentially use the glass as a biometric lock, tuned to recognize only one person's face" says Yu. "Once built, it would last forever without needing power or internet, meaning it could keep something safe for you even after thousands of years."

Additionally, it works at literally the speed of light, because the glass distinguishes among different images by distorting light waves.

Although the up-front training process could be time consuming and computationally demanding, the glass itself is easy and inexpensive to fabricate.

In the future, the researchers plan to determine if their approach works for more complex tasks, such as facial recognition.

"The true power of this technology lies in its ability to handle much more complex classification tasks instantly without any energy consumption," says Ming Yuan, a collaborator on the research and professor of statistics at Columbia University. "These tasks are the key to create artificial intelligence: to teach driverless cars to recognize a traffic signal, to enable voice control in consumer devices, among numerous other examples."

Unlike human vision, which is mind-bogglingly general in its capabilities to discern an untold number of different objects, the smart glass could excel in specific applications -- for example, one piece for number recognition, a different piece for identifying letters, another for faces, and so on.

"We're always thinking about how we provide vision for machines in the future, and imagining application specific, mission-driven technologies." says Yu. "This changes almost everything about how we design machine vision."

Credit: 
University of Wisconsin-Madison

Robot uses machine learning to harvest lettuce

image: A robotic harvester, dubbed the 'Vegebot', has been trained to identify and harvest iceberg lettuce, a crop which has so far resisted automation.

Image: 
University of Cambridge

A vegetable-picking robot that uses machine learning to identify and harvest a commonplace, but challenging, agricultural crop has been developed by engineers.

The 'Vegebot', developed by a team at the University of Cambridge, was initially trained to recognise and harvest iceberg lettuce in a lab setting. It has now been successfully tested in a variety of field conditions in cooperation with G's Growers, a local fruit and vegetable co-operative.

Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are particularly challenging to harvest mechanically. The results are published in The Journal of Field Robotics.

Crops such as potatoes and wheat have been harvested mechanically at scale for decades, but many other crops have to date resisted automation. Iceberg lettuce is one such crop. Although it is the most common type of lettuce grown in the UK, iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.

"Every field is different, every lettuce is different," said co-author Simon Birrell from Cambridge's Department of Engineering. "But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops."

"At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it's very physically demanding," said co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr Fumiya Iida.

The Vegebot first identifies the 'target' crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested, and finally cuts the lettuce from the rest of the plant without crushing it so that it is 'supermarket ready'. "For a human, the entire process takes a couple of seconds, but it's a really challenging problem for a robot," said co-author Josie Hughes.

The Vegebot has two main components: a computer vision system and a cutting system. The overhead camera on the Vegebot takes an image of the lettuce field and first identifies all the lettuces in the image, and then for each lettuce, classifies whether it should be harvested or not. A lettuce might be rejected because it's not yet mature, or it might have a disease that could spread to other lettuces in the harvest.
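As a rough sketch of that two-stage pipeline (detect every lettuce, then decide whether each one should be harvested), the snippet below uses stand-in functions invented for illustration; the Vegebot's actual software is not published in this article:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Lettuce:
    x: int              # centre of the lettuce in the overhead image (pixels)
    y: int
    harvestable: bool   # classifier verdict: mature and disease-free

def detect_lettuces(frame) -> List[Lettuce]:
    """Stage 1 (stand-in): localise every lettuce in the overhead image.
    A real system would run a trained object detector here."""
    return [Lettuce(120, 80, True), Lettuce(340, 210, False)]

def select_for_harvest(frame) -> List[Lettuce]:
    """Stage 2: keep only heads judged ready; immature or diseased
    lettuces are left in the field, as described in the article."""
    return [lettuce for lettuce in detect_lettuces(frame) if lettuce.harvestable]

ready = select_for_harvest(frame=None)   # dummy frame for illustration
print(f"{len(ready)} lettuce(s) flagged for cutting")
```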

The researchers developed and trained a machine learning algorithm on example images of lettuces. Once the Vegebot could recognise healthy lettuces in the lab, it was then trained in the field, in a variety of weather conditions, on thousands of real lettuces.

A second camera on the Vegebot is positioned near the cutting blade, and helps ensure a smooth cut. The researchers were also able to adjust the pressure in the robot's gripping arm so that it held the lettuce firmly enough not to drop it, but not so firm as to crush it. The force of the grip can be adjusted for other crops.

"We wanted to develop approaches that weren't necessarily specific to iceberg lettuce, so that they can be used for other types of above-ground crops," said Iida, who leads the team behind the research.

In future, robotic harvesters could help address problems with labour shortages in agriculture, and could also help reduce food waste. At the moment, each field is typically harvested once, and any unripe vegetables or fruits are discarded. However, a robotic harvester could be trained to pick only ripe vegetables, and since it could harvest around the clock, it could perform multiple passes on the same field, returning at a later date to harvest the vegetables that were unripe during previous passes.

"We're also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields," said Hughes. "We've still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech."

Credit: 
University of Cambridge

First observation of native ferroelectric metal

image: Ferroelectric domains in a WTe2 single crystal (PFM imaging).

Image: 
FLEET

In a paper released today in Science Advances, UNSW researchers describe the first observation of a native ferroelectric metal.

The study represents the first example of a native metal with bistable and electrically switchable spontaneous polarization states - the hallmark of ferroelectricity.

"We found coexistence of native metallicity and ferroelectricity in bulk crystalline tungsten ditelluride (WTe2) at room temperature," explains study author Dr Pankaj Sharma.

"We demonstrated that the ferroelectric state is switchable under an external electrical bias and explain the mechanism for 'metallic ferroelectricity' in WTe2 through a systematic study of the crystal structure, electronic transport measurements and theoretical considerations."

"A van der Waals material that is both metallic and ferroelectric in its bulk crystalline form at room temperature has potential for new nano-electronics applications," says author Dr Feixiang Xiang.

FERROELECTRIC BACKGROUNDER

Ferroelectricity can be considered an analogue of ferromagnetism. A ferromagnetic material displays permanent magnetism and, in layperson's terms, is simply a 'magnet' with a north and a south pole. A ferroelectric material likewise displays an analogous electrical property, a permanent electric polarisation, which originates from electric dipoles consisting of equal but oppositely charged ends or poles. In ferroelectric materials, these electric dipoles exist at the unit-cell level and give rise to a non-vanishing permanent electric dipole moment.

This spontaneous electric dipole moment can be repeatedly transitioned between two or more equivalent states or directions upon application of an external electric field - a property utilised in numerous ferroelectric technologies, for example nano-electronic computer memory, RFID cards, medical ultrasound transducers, infrared cameras, submarine sonar, vibration and pressure sensors, and precision actuators.

Conventionally, ferroelectricity has been observed in materials that are insulating or semiconducting rather than metallic, because conduction electrons in metals screen-out the static internal fields arising from the dipole moment.

THE STUDY

The study, reporting a room-temperature ferroelectric semimetal, was published in Science Advances in July 2019.

Bulk single-crystalline tungsten ditelluride (WTe2), which belongs to a class of materials known as transition metal dichalcogenides (TMDCs), was probed by spectroscopic electrical transport measurements, conductive-atomic force microscopy (c-AFM) to confirm its metallic behaviour, and by piezo-response force microscopy (PFM) to map the polarisation, detecting lattice deformation due to an applied electric field.

Ferroelectric domains - i.e., regions with oppositely oriented polarization - were directly visualised in freshly cleaved WTe2 single crystals.

Spectroscopic PFM measurements with a top electrode in a capacitor geometry were used to demonstrate switching of the ferroelectric polarization.

The study was supported by funding from the Australian Research Council through the ARC Centre of Excellence in Future Low-Energy Electronics Technologies (FLEET), and the work was performed in part using facilities of the NSW Nodes of the Australian National Fabrication Facility, with the assistance of the Australian Government Research Training Program Scholarship scheme.

First-principles density functional theory (DFT) calculations (University of Nebraska) confirmed the experimental findings of the electronic and structural origins of the ferroelectric instability of WTe2, supported by the National Science Foundation.

FERROELECTRIC STUDIES AT FLEET

Ferroelectric materials are keenly studied at FLEET (the ARC Centre of Excellence in Future Low-Energy Electronics Technologies) for their potential use in low-energy electronics, 'beyond CMOS' technology.

The switchable electric dipole moment of ferroelectric materials could for example be used as a gate for the underlying 2D electron system in an artificial topological insulator.

The very close (sub-nanometre) proximity of a ferroelectric's electric dipole moment to the electron gas in the atomic crystal ensures more effective switching, overcoming a limitation of conventional semiconductors, where the conducting channel is buried tens of nanometres below the surface.

Topological materials are investigated within FLEET's Research theme 1, which seeks to establish ultra-low resistance electronic paths with which to create a new generation of ultra-low energy electronics.

FLEET is an ARC-funded research centre bringing together over a hundred Australian and international experts to develop a new generation of ultra-low energy electronics, motivated by the need to reduce the energy consumed by computing.

Credit: 
ARC Centre of Excellence in Future Low-Energy Electronics Technologies

A new way of making complex structures in thin films

CAMBRIDGE, MA -- Self-assembling materials called block copolymers, which are known to form a variety of predictable, regular patterns, can now be made into much more complex patterns that may open up new areas of materials design, a team of MIT researchers say.

The new findings appear in the journal Nature Communications, in a paper by postdoc Yi Ding, professors of materials science and engineering Alfredo Alexander-Katz and Caroline Ross, and three others.

"This is a discovery that was in some sense fortuitous," says Alexander-Katz. "Everyone thought this was not possible," he says, describing the team's discovery of a phenomenon that allows the polymers to self-assemble in patterns that deviate from regular symmetrical arrays.

Self-assembling block copolymers are materials whose chain-like molecules, which are initially disordered, will spontaneously arrange themselves into periodic structures. Researchers had found that if there was a repeating pattern of lines or pillars created on a substrate, and then a thin film of the block copolymer was formed on that surface, the patterns from the substrate would be duplicated in the self-assembled material. But this method could only produce simple patterns such as grids of dots or lines.

In the new method, there are two different, mismatched patterns. One is from a set of posts or lines etched on a substrate material, and the other is an inherent pattern that is created by the self-assembling copolymer. For example, there may be a rectangular pattern on the substrate and a hexagonal grid that the copolymer forms by itself. One would expect the resulting block copolymer arrangement to be poorly ordered, but that's not what the team found. Instead, "it was forming something much more unexpected and complicated," Ross says.

There turned out to be a subtle but complex kind of order -- interlocking areas that formed slightly different but regular patterns, of a type similar to quasicrystals, which don't quite repeat the way normal crystals do. In this case, the patterns do repeat, but over longer distances than in ordinary crystals. "We're taking advantage of molecular processes to create these patterns on the surface" with the block copolymer material, Ross says.

This potentially opens the door to new ways of making devices with tailored characteristics for optical systems or for "plasmonic devices" in which electromagnetic radiation resonates with electrons in precisely tuned ways, the researchers say. Such devices require very exact positioning and symmetry of patterns with nanoscale dimensions, something this new method can achieve.

Katherine Mizrahi Rodriguez, who worked on the project as an undergraduate, explains that the team prepared many of these block copolymer samples and studied them under a scanning electron microscope. Yi Ding, who worked on this for his doctoral thesis, "started looking over and over to see if any interesting patterns came up," she says. "That's when all of these new findings sort of evolved."

The resulting odd patterns are "a result of the frustration between the pattern the polymer would like to form, and the template," explains Alexander-Katz. That frustration leads to a breaking of the original symmetries and the creation of new subregions with different kinds of symmetries within them, he says. "That's the solution nature comes up with. Trying to fit in the relationship between these two patterns, it comes up with a third thing that breaks the patterns of both of them." They describe the new patterns as a "superlattice."

Having created these novel structures, the team went on to develop models to explain the process. Co-author Karim Gadelrab PhD '19 says, "The modeling work showed that the emergent patterns are in fact thermodynamically stable, and revealed the conditions under which the new patterns would form."

Ding says "We understand the system fully in terms of the thermodynamics," and the self-assembling process "allows us to create fine patterns and to access some new symmetries that are otherwise hard to fabricate."

He says this removes some existing limitations in the design of optical and plasmonic materials, and thus "creates a new path" for materials design.

So far, the work the team has done has been confined to two-dimensional surfaces, but in ongoing work they are hoping to extend the process into the third dimension, says Ross. "Three dimensional fabrication would be a game changer," she says. Current fabrication techniques for microdevices build them up one layer at a time, she says, but "if you can build up entire objects in 3-D in one go," that would potentially make the process much more efficient.

Credit: 
Massachusetts Institute of Technology

How trees could save the climate

image: Figure A shows the total land available that can support trees across the globe (total of current forested areas and forest cover potential available for restoration).

Image: 
ETH Zurich / Crowther Lab

The Crowther Lab at ETH Zurich investigates nature-based solutions to climate change. In their latest study the researchers showed for the first time where in the world new trees could grow and how much carbon they would store. Study lead author and postdoc at the Crowther Lab Jean-François Bastin explains: "One aspect was of particular importance to us as we did the calculations: we excluded cities or agricultural areas from the total restoration potential as these areas are needed for human life."

Reforest an area the size of the USA

The researchers calculated that under the current climate conditions, Earth's land could support 4.4 billion hectares of continuous tree cover. That is 1.6 billion hectares more than the currently existing 2.8 billion hectares. Of these 1.6 billion hectares, 0.9 billion hectares fulfill the criterion of not being used by humans. This means that there is currently an area the size of the US available for tree restoration. Once mature, these new forests could store 205 billion tonnes of carbon: about two thirds of the 300 billion tonnes of carbon that has been released into the atmosphere as a result of human activity since the Industrial Revolution.
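As a quick back-of-the-envelope check, the figures above fit together as:

```latex
\[
4.4 - 2.8 = 1.6 \ \text{billion ha of possible new tree cover}, \qquad
\frac{205\ \text{Gt C stored}}{300\ \text{Gt C emitted}} \approx 0.68 \approx \tfrac{2}{3}.
\]
```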

According to Prof. Thomas Crowther, co-author of the study and founder of the Crowther Lab at ETH Zurich: "We all knew that restoring forests could play a part in tackling climate change, but we didn't really know how big the impact would be. Our study shows clearly that forest restoration is the best climate change solution available today. But we must act quickly, as new forests will take decades to mature and achieve their full potential as a source of natural carbon storage."

Russia best suited for reforestation

The study also shows which parts of the world are most suited to forest restoration. The greatest potential can be found in just six countries: Russia (151 million hectares); the US (103 million hectares); Canada (78.4 million hectares); Australia (58 million hectares); Brazil (49.7 million hectares); and China (40.2 million hectares).

Many current climate models are wrong in expecting climate change to increase global tree cover, the study warns. It finds that there is likely to be an increase in the area of northern boreal forests in regions such as Siberia, but tree cover there averages only 30 to 40 percent. These gains would be outweighed by the losses suffered in dense tropical forests, which typically have 90 to 100 percent tree cover.

Look at Trees!

A tool on the Crowther Lab website enables users to look at any point on the globe, and find out how many trees could grow there and how much carbon they would store. It also offers lists of forest restoration organisations. The Crowther Lab will also be present at this year's Scientifica (website available in German only) to show the new tool to visitors.

The Crowther Lab uses nature as a solution to: 1) better allocate resources - identifying those regions which, if restored appropriately, could have the biggest climate impact; 2) set realistic goals - with measurable targets to maximise the impact of restoration projects; and 3) monitor progress - to evaluate whether targets are being achieved over time, and take corrective action if necessary.

Credit: 
ETH Zurich

Area for restoring trees far greater than imagined and 'best climate change solution available'

In the first study to quantify how many trees the Earth can support, where, and how much carbon they could store, researchers report that Earth could support enough additional trees to cut carbon levels in the atmosphere by nearly 25% - levels not seen for almost a century. "We all knew restoring forests could play a part in tackling climate change, but we had no scientific understanding of what impact this could make," said study coauthor Thomas Crowther. "Our study shows clearly that forest restoration is the best climate change solution available today."

Because trees capture and remove carbon dioxide (CO2) from the atmosphere, widespread reforestation has been considered one of the most effective weapons against climate change. According to the most recent Intergovernmental Panel on Climate Change (IPCC) report, an additional 1 billion hectares of forest will be required to limit global warming to 1.5 degrees Celsius by 2050. However, it remains unclear if these restoration goals are achievable because researchers do not know how much tree cover might be possible under current or future climate conditions.

Here, to explore this, Jean-François Bastin, Tom Crowther and colleagues leveraged a unique global dataset of forest observations spanning nearly 80,000 forests, combined with the mapping software of Google Earth Engine, which they used to generate a predictive model to map potential tree cover worldwide under current conditions. Excluding existing trees, agricultural and urban areas, they suggest Earth's ecosystems could support an additional 0.9 billion hectares of tree cover, which, once matured, could sequester more than 200 gigatons of carbon, or two-thirds of man-made carbon emissions. The global map of reforestation their study provides is essential for setting more effective global-scale restoration targets, and for guiding local-scale restoration projects, the authors say. In a related Perspective, Robin Chazdon and Pedro Brancalion underscore the need to act quickly within a narrowing window of time, as currently forested areas continue to decline, and as reforestation efforts become more challenging in a warmer world.

Credit: 
American Association for the Advancement of Science (AAAS)

Call for green burial corridors alongside roads, railways and country footpaths

A leading public health expert is calling for a strategic initiative to develop green burial corridors alongside major transport routes because British graveyards and cemeteries are rapidly running out of room. With 500,000 deaths annually in England and Wales, it is likely that there will be no burial space left within five years.

Writing in the Journal of the Royal Society of Medicine, Professor John Ashton points to the recent announcement of a scheme to plant 130,000 trees in urban areas as a contribution to reducing pollution and global warming. While lacking in ambition, he writes, it gives a clue as to what might be possible by joining up the dots of green environmentalism and human burial.

The environmental and human health impacts of the fluids and materials used in embalming and coffins are a matter of growing interest and concern, writes Prof Ashton, and resonate with the recent move towards simpler funeral approaches, not least green funerals with biodegradable regalia and coffins in woodland areas.

With little prospect of finding burial space for those who seek it, he writes, there is a real opportunity of stepping up to the mark as boldly as the Victorians did with the Metropolitan Burial Act of 1852.

Prof Ashton concludes: "A glimpse of what might be possible with political will and imagination can be seen by what has happened alongside long-forgotten canals by neglect and default where wildlife corridors have evolved over time. It is time to revisit the public health roots of human burial and connect them to a new vision for a planet fit for future generations."

Credit: 
SAGE

'Eyes' for the autopilot

Automatic landings have long been standard procedure for commercial aircraft. While major airports have the infrastructure necessary to ensure the safe navigation of the aircraft, this is usually not the case at smaller airports. Researchers at the Technical University of Munich (TUM) and TU Braunschweig have now demonstrated a completely automatic landing with vision assisted navigation that functions properly without the need for ground-based systems.

At large airports the Instrument Landing System (ILS) makes it possible for commercial aircraft to land automatically with great precision. Antennas send radio signals to the autopilot to make sure it navigates to the runway safely. Procedures are also currently being developed that will allow automatic landing based on satellite navigation. Here too a ground-based augmentation system is required.

However, systems like these are not available for general aviation at smaller airports, which is a problem in cases of poor visibility, when aircraft simply cannot fly. "Automatic landing is essential, especially in the context of the future role of aviation," says Martin Kügler, research associate at the TUM Chair of Flight System Dynamics. This applies, for example, when automated aircraft transport freight and, of course, when passengers use automated flying taxis.

Camera-based optical reference system

In the project "C2Land", supported by the German federal government, TUM researchers have partnered with Technische Universität Braunschweig to develop a landing system which lets smaller aircraft land without assistance from ground-based systems.

The autopilot uses GPS signals to navigate. The problem: GPS signals are susceptible to measurement inaccuracies, for example due to atmospheric disturbances. The GPS receiver in the aircraft can't always reliably detect such interferences. As a result, current GPS approach procedures require the pilots to take over control at an altitude of no less than 60 meters and land the aircraft manually.

In order to make completely automated landings possible, the TU Braunschweig team designed an optical reference system: A camera in the normal visible range and an infrared camera that can also provide data under conditions with poor visibility. The researchers developed custom-tailored image processing software that lets the system determine where the aircraft is relative to the runway based on the camera data it receives.

TUM research aircraft features Fly-by-Wire system

The TUM team developed the entire automatic control system of TUM's own research aircraft, a modified Diamond DA42. The aircraft is equipped with a Fly-by-Wire system enabling control by means of an advanced autopilot, also developed by the TUM researchers.

In order to make automatic landings possible, additional functions were integrated in the software, such as comparison of data from the cameras with GPS signals, calculation of a virtual glide path for the landing approach as well as flight control for various phases of the approach.
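As a rough illustration of the "compare camera data with GPS" idea (not the C2Land implementation, whose algorithms are not detailed here), one simple approach is to blend the two position estimates, trusting the vision fix more as the aircraft nears the runway:

```python
def fuse_position(gps_xy, vision_xy, vision_weight):
    """Blend a GPS fix with a camera-derived, runway-relative fix.

    vision_weight in [0, 1]: 0 trusts GPS alone, 1 trusts the cameras alone.
    This is an illustrative complementary blend, not the project's algorithm.
    """
    w = max(0.0, min(1.0, vision_weight))
    (gx, gy), (vx, vy) = gps_xy, vision_xy
    return ((1 - w) * gx + w * vx, (1 - w) * gy + w * vy)

# Example: close to touchdown the cameras resolve the runway well,
# so their estimate gets most of the weight.
print(fuse_position(gps_xy=(12.0, -3.5), vision_xy=(11.2, -3.1), vision_weight=0.8))
```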

Successful landing in Wiener Neustadt

In late May the team was able to watch as the research aircraft made a completely automatic landing at the Diamond Aircraft airfield. Test pilot Thomas Wimmer is completely convinced by the landing system: "The cameras already recognize the runway at a great distance from the airport. The system then guides the aircraft through the landing approach on a completely automatic basis and lands it precisely on the runway's centerline."

Credit: 
Technical University of Munich (TUM)

Are self-driving cars really the answer for older drivers?

With more of us living longer, driving is becoming increasingly important in later life, helping us to stay independent, socially connected and mobile.

But driving is also one of the biggest challenges facing older people. Age-related problems with eyesight, motor skills, reflexes, and cognitive ability increase the risk of an accident or collision, and the increased frailty of older drivers means they are more likely to be seriously injured or killed as a result.

"In the UK, older drivers are tending to drive more often and over longer distances but as the task of driving becomes more demanding we see them adjust their driving to avoid difficult situations," explains Dr Shuo Li, an expert in Intelligent Transport Systems at Newcastle University, UK.

"Not driving in bad weather when visibility is poor, avoiding unfamiliar cities or routes and even planning journeys that avoid right-hand turns are some of the strategies we've seen older drivers take to minimise risk. But this can be quite limiting for people."

Self-driving cars, says Li, are seen as a potential game-changer for this age group. Fully automated, they are unlikely to require a licence and could negotiate bad weather and unfamiliar cities without any input from the driver.

But, says Li, it's not as clear cut as it seems.

"There are several levels of automation, ranging from zero where the driver has complete control, through to level five where the car is in charge," he explains.

"We're some way off level five but level three may be a trend just around the corner. This will allow the driver to be completely disengaged - they can sit back and watch a film, eat, even talk on the phone.

"But, unlike level four or five, there are still some situations where the car would ask the driver to take back control and at that point, they need to be switched on and back in driving mode within a few seconds.

"For younger people that switch between tasks is quite easy but as we age, it becomes increasingly more difficult and this is further complicated if the conditions on the road are poor."

Led by Professor Phil Blythe and Dr Li, the Newcastle University team have been researching the time it takes for older drivers to take back control of an automated car in different scenarios, and also the quality of their driving in these different situations.

Using the University's state-of-the-art DriveLAB simulator, 76 volunteers were divided into two different age groups (20-35 and 60-81).

They experienced automated driving for a short period and were then asked to 'take-back' control of a highly automated car and avoid a stationary vehicle on a motorway, a city road, and in bad weather conditions when visibility was poor.

The starting point in all situations was 'total disengagement' - turned away from the steering wheel, feet out of the foot well, reading aloud from an iPad.

The time taken to regain control of the vehicle was measured at three points: when the driver was back in the correct position (reaction time), when they made an 'active input' such as braking or taking the steering wheel (take-over time), and finally the point at which they registered the obstruction and indicated to move out and avoid it (indicator time).

"In clear conditions, the quality of driving was good but the reaction time of our older volunteers was significantly slower than the younger drivers," says Li. "Even taking into account the fact that the older volunteers in this study were a really active group, it took about 8.3 seconds for them to negotiate the obstacle compared to around 7 seconds for the younger age group. At 60mph that means our older drivers would have needed an extra 35m warning distance - that's equivalent to the length of 10 cars.

"But we also found older drivers tended to exhibit worse takeover quality in terms of operating the steering wheel, the accelerator and the brake, increasing the risk of an accident."

In bad weather, the team saw the younger drivers slow down more, bringing their reaction times more in line with the older drivers, while driving quality dropped across both age groups. In the city scenario, this resulted in 20 collisions and critical encounters among the older participants compared to 12 among the younger drivers.

The research team also explored older drivers' opinions and requirements towards the design of automated vehicles after gaining first-hand experience with the technologies on the driving simulator.

Older drivers were generally positive towards automated vehicles but said they would want to retain some level of control over their automated cars. They also felt they required regular updates from the car, similar to a SatNav, so the driver has an awareness of what's happening on the road and where they are even when they are busy with another activity.

The research team are now looking at what changes and improvements could be made to the vehicles to overcome some of these problems and better support older drivers when automated cars hit our roads.

Newcastle University's Professor Phil Blythe, who led the study and is the UK's Chief Scientific Advisor for the Department for Transport, said:

"I believe it is critical that we understand how new technology can support the mobility of older people and, more importantly, that new transport systems are designed to be age friendly and accessible.

"The research here on older people and the use of automated vehicles is only one of many questions we need to address regarding older people and mobility.

"Two pillars of the Government's Industrial strategy are the Future of Mobility Grand Challenge and the Ageing Society Grand Challenge.

"Newcastle University is at the forefront of ensuring that these challenges are fused together to ensure we shape future mobility systems for the older traveller, who will be expecting to travel well into their eighties and nineties."

Credit: 
Newcastle University

New dairy cattle breeding method increases genetic selection efficiency

Brazilian scientists at São Paulo State University (UNESP) collaborating with colleagues at the University of Maryland and the United States Department of Agriculture (USDA) have developed a dairy cattle breeding method that adds a new parameter to genetic selection and conserves or even improves a population's genetic diversity.

The study, which is published in the Journal of Dairy Science, was funded by the São Paulo Research Foundation - FAPESP and the USDA.

Besides genetic value associated with milk, fat and protein yields, the new method also takes into consideration the variance in gametic diversity and what the authors call "relative predicted transmitting ability," defined as an individual animal's capacity to transmit its genetic traits to the next generation based on this variance.

"Not all progeny of highly productive animals inherit this quality. The new method selects animals that will produce extremely productive offspring," said Daniel Jordan de Abreu Santos, who conducted the study while he was a postdoctoral fellow at UNESP's School of Agricultural and Veterinary Sciences (FCAV) in Jaboticabal, São Paulo State.

Santos is currently doing more postdoctoral research at the University of Maryland in the United States. This is his second stint at Maryland, where he was previously a research intern with a scholarship from the FAPESP.

"Gamete diversity variance is generated by the separation of homologous chromosomes and the rate of recombination between genes linked to them. It isn't accounted for by the traditional selection method," Santos told.

The new method estimates the probability of the transmission of traits to the next generation on the basis of the genetic data of a parent or the possible combinations in a given mating.

Although it was developed for the selection of any species, in this study, the method was applied to Holstein and Jersey dairy cattle because of the volume and quality of the available data.

In computer simulations, the method produced genetic gains of up to 16% in ten generations of Holstein cattle, compared with a control group for which gamete diversity was not a factor.

Genomics

The study was possible because matings can now be simulated using large genomics databases with genetic details for the animals involved, including genes associated with certain traits of interest in breeding programs.

Based on these data, scientists can estimate the possible combinations of the parents' genetic material and predict the traits of their progeny. However, traits are not uniformly distributed among offspring.

Animals with the desired traits may produce offspring with very high or low levels of these same traits. As a result, the predicted traits of progeny in terms of milk, meat or fat yield are only an average of the parents' traits. The new method enables scientists to estimate the substantial variation around this average.

"It's now possible to predict which animals will produce highly productive offspring, above the expected average, before they mate. Gamete diversity is the factor that generates this estimate, determining the animal's capacity to transmit the traits of interest to its progeny," Santos said.

To apply the theory to an actual breeding program, Santos used data for over 160,000 Jersey cattle and approximately 1.4 million Holstein cattle from the database of the USDA's Agricultural Research Service.

Holstein is the world's major dairy breed and accounts for 90% of all US dairy cattle.

Software developed by the researchers was used to calculate the variation in all possible chromosome combinations and enabled them to separate individual animals with more or less gamete diversity variance.

"These variations can be used to select animals for specific purposes," Santos said. "You can select animals to have more homogeneous progeny, which you might want to do in order to obtain traits such as birth weight, or more heterogeneous progeny and hence some offspring that are more productive than the expected average."

Beyond guaranteeing that successive generations are as productive as their parents, the new parameter promises to produce major genetic gains in progeny bred from the same source animals.

The researchers also investigated the possible impact of the method on actual dairy herds. The match between variance based on genomic data and the actual variance observed in adult female progeny reached 90% in the case of 400 offspring per sire.

"This is cutting-edge science and will take some time to arrive in Brazil, but its current significance for us is that we import a lot of genetic material from US Holstein for breeding purposes and a large proportion of the Holstein genetic base in Brazil comes from semen produced in the US," said Humberto Tonhati, Full Professor at FCAV-UNESP and principal investigator for the study in Brazil.

The new parameter also helps mitigate the reductive impact of selection on a population's genetic diversity. In the case of livestock such as cattle, genetic variability tends to be low because of inbreeding.

"The new method offers a means of maintaining genetic variability. Individuals considered better by the traditional method will often be endogamous - offspring from the same gene pool - but when the new parameter is taken into account, they can no longer be classed among the best," Santos said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

CFTR inhibition: The key to treating bile acid diarrhea?

Estimates are that roughly 1 percent of people in Western countries may have bile acid diarrhea, including patients with Crohn's disease, ileal resection, diarrhea-predominant irritable bowel syndrome (IBS-D), and chronic functional diarrhea. Current management for bile acid diarrhea has demonstrated limited efficacy, with some therapies producing significant side effects. A recent study published in The FASEB Journal explored the efficacy of CFTR (cystic fibrosis transmembrane conductance regulator) inhibition to reduce excessive secretion in the colon due to bile acids.

Researchers studied cell culture models of human intestinal cells. Based on prior studies showing that chenodeoxycholic acid (CDCA) increases secretion in the colon and is linked to CFTR activation, researchers added CDCA to the cultures, which caused an increase in CFTR activity. They then added CFTR inhibitors to the cultures and used short-circuit current measurement to measure CFTR activity. Researchers found that CFTR inhibitors including investigational drug (R)-BPO-27 were able to fully block the increased CFTR activity.

"Bile acid diarrhea is a common problem for which therapeutic options are limited and often ineffective," said Alan S. Verkman, MD, PhD, a researcher within the Departments of Medicine and Physiology at the University of California, San Francisco. "Inhibition of chloride and fluid secretion in the intestine by a CFTR inhibitor offers a new therapeutic option to reduce diarrhea associated with excess bile acids."

Motivated by data implicating CFTR as a major determinant of bile acid secretion, the research team also tested the CFTR inhibitor in a rat model of bile acid diarrhea involving direct infusion of bile acids into the colon. Once again, (R)-BPO-27 was effective; this time, in reducing the increased stool water content.

"To see a physiologist as talented as Alan Verkman, and his colleagues, enter this field and offer such insights is a significant advance," said Thoru Pederson, PhD, Editor-in-Chief of The FASEB Journal.

Credit: 
Federation of American Societies for Experimental Biology

Molecular thumb drives: Researchers store digital images in metabolite molecules

image: In a step toward molecular storage systems that could hold vast amounts of data in tiny spaces, Brown University researchers have shown it's possible to store image files in solutions of common biological small molecules.

Image: 
Jacob Rosenstein et al.

PROVIDENCE, R.I. [Brown University] -- DNA molecules are well known as carriers of huge amounts of biological information, and there is growing interest in using DNA in engineered data storage devices that can hold vastly more data than our current hard drives. But new research shows that DNA isn't the only game in town when it comes to molecular data storage.

A study led by Brown University researchers shows that it's possible to store and retrieve data stored in artificial metabolomes -- arrays of liquid mixtures containing sugars, amino acids and other types of small molecules. For a paper published in the journal PLOS ONE, the researchers showed that they could encode kilobyte-scale image files into metabolite solutions and read the information back out again.

"This is a proof-of-concept that we hope makes people think about using wider ranges of molecules to store information," said Jacob Rosenstein, a professor in Brown's School of Engineering and senior author of the study. "In some situations, small molecules like the ones we used here can have even greater information density than DNA."

Another potential advantage, Rosenstein says, stems from the fact that many metabolites can react with each other to form new compounds. That creates the potential for molecular systems that not only store data, but also manipulate it -- performing computations within metabolite mixtures.

The idea behind molecular computing grows out of an increasing need for more data storage capacity. By 2040, the world will have produced as much as 3 septillion (that's 3 followed by 24 zeros) bits of data by some estimates. Storing, searching and processing all of that data is a daunting challenge, and there simply may not be enough chip-grade silicon on Earth to do this with traditional semiconductor chips. Funded by a contract with the Defense Advanced Research Projects Agency (DARPA), a group of engineers and chemists at Brown has been working on a variety of techniques for using small molecules to create new information systems.

For this new study, the group wanted to see if artificial metabolomes could be a data-storage option. In biology, a metabolome is the full array of molecules an organism uses to regulate its metabolism.

"It's not hard to recognize that cells and organisms use small molecules to transmit information, but it can be harder to generalize and quantify," said Eamonn Kennedy, a postdoctoral associate at Brown and first author of the study. "We wanted to demonstrate how a metabolome can encode precise digital information."

The researchers assembled their own artificial metabolomes -- small liquid mixtures with different combinations of molecules. The presence or absence of a particular metabolite in a mixture encodes one bit of digital data, a zero or a one. The number of molecule types in the artificial metabolome determines the number of bits each mixture can hold. For this study, the researchers created libraries of six and 12 metabolites, meaning each mixture could encode either six or 12 bits. Thousands of mixtures are then arrayed on small metal plates in the form of nanoliter-sized droplets. The contents and arrangement of the droplets, precisely placed by a liquid-handling robot, encodes the desired data.
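A minimal sketch of this presence/absence encoding, assuming a 12-metabolite library as described above; the grouping and padding choices here are illustrative, not the exact scheme used in the paper:

```python
def bytes_to_bits(data: bytes):
    """Flatten a byte string into a list of bits (most significant bit first)."""
    return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

def encode_spots(data: bytes, bits_per_spot: int = 12):
    """Group the bit stream into per-spot presence/absence patterns.

    Each spot on the plate holds one mixture; bit i of a pattern says
    whether metabolite i is present (1) or absent (0) in that mixture.
    """
    bits = bytes_to_bits(data)
    bits += [0] * (-len(bits) % bits_per_spot)   # pad the final spot with zeros
    return [bits[i:i + bits_per_spot] for i in range(0, len(bits), bits_per_spot)]

spots = encode_spots(b"Hi")   # 16 bits -> 2 spots of 12 bits (last one padded)
print(spots)
```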

The plates are then dried, leaving tiny spots of metabolite molecules, each holding digital information. The data can then be read out using a mass spectrometer, which can identify the metabolites present at each spot on the plate and decode the data.

The researchers used the technique to successfully encode and retrieve a variety of image files of sizes up to 2 kilobytes. That's not big compared to the capacity of modern storage systems, but it's a solid proof-of-concept, the researchers say. And there's plenty of potential for scaling up. The number of bits in a mixture increases with the number of metabolites in an artificial metabolome, and there are thousands of known metabolites available for use.

There are some limitations, the researchers point out. For example, many metabolites chemically interact with each other when placed in the same solution, and that could result in errors or loss of data. But that's a bug that could ultimately become a feature. It may be possible to harness those reactions to manipulate data -- performing in-solution computations.

"Using molecules for computation is a tremendous opportunity, and we are only starting to figure out how to take advantage of it," said Brenda Rubenstein, a Brown assistant professor of chemistry and co-author of the study.

"Research like this challenges what people see as being possible in molecular data systems," Rosenstein said. "DNA is not the only molecule that can be used to store and process information. It's exciting to recognize that there are other possibilities out there with great potential."

Credit: 
Brown University

With little training, machine-learning algorithms can uncover hidden scientific knowledge

image: Vahe Tshitoyan, Anubhav Jain, Leigh Weston, and John Dagdelen were among the participants in a text-mining project that used machine learning to analyze 3.3 million abstracts from materials science papers.

Image: 
Marilyn Chung/Berkeley Lab

Sure, computers can be used to play grandmaster-level chess, but can they make scientific discoveries? Researchers at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have shown that an algorithm with no training in materials science can scan the text of millions of papers and uncover new scientific knowledge.

A team led by Anubhav Jain, a scientist in Berkeley Lab's Energy Storage & Distributed Resources Division, collected 3.3 million abstracts of published materials science papers and fed them into an algorithm called Word2vec. By analyzing relationships between words the algorithm was able to predict discoveries of new thermoelectric materials years in advance and suggest as-yet unknown materials as candidates for thermoelectric materials.

"Without telling it anything about materials science, it learned concepts like the periodic table and the crystal structure of metals," said Jain. "That hinted at the potential of the technique. But probably the most interesting thing we figured out is, you can use this algorithm to address gaps in materials research, things that people should study but haven't studied so far."

The findings were published July 3 in the journal Nature. The lead author of the study, "Unsupervised Word Embeddings Capture Latent Knowledge from Materials Science Literature," is Vahe Tshitoyan, a Berkeley Lab postdoctoral fellow now working at Google. Along with Jain, Berkeley Lab scientists Kristin Persson and Gerbrand Ceder helped lead the study.

"The paper establishes that text mining of scientific literature can uncover hidden knowledge, and that pure text-based extraction can establish basic scientific knowledge," said Ceder, who also has an appointment at UC Berkeley's Department of Materials Science and Engineering.

Tshitoyan said the project was motivated by the difficulty making sense of the overwhelming amount of published studies. "In every research field there's 100 years of past research literature, and every week dozens more studies come out," he said. "A researcher can access only a fraction of that. We thought, can machine learning do something to make use of all this collective knowledge in an unsupervised manner - without needing guidance from human researchers?"

'King - queen + man = ?'

The team collected the 3.3 million abstracts from papers published in more than 1,000 journals between 1922 and 2018. Word2vec took each of the approximately 500,000 distinct words in those abstracts and turned each into a 200-dimensional vector, or an array of 200 numbers.

"What's important is not each number, but using the numbers to see how words are related to one another," said Jain, who leads a group working on discovery and design of new materials for energy applications using a mix of theory, computation, and data mining. "For example you can subtract vectors using standard vector math. Other researchers have shown that if you train the algorithm on nonscientific text sources and take the vector that results from 'king minus queen,' you get the same result as 'man minus woman.' It figures out the relationship without you telling it anything."

Similarly, when trained on materials science text, the algorithm was able to learn the meaning of scientific terms and concepts such as the crystal structure of metals based simply on the positions of the words in the abstracts and their co-occurrence with other words. For example, just as it could solve the equation "king - queen + man," it could figure out that for the equation "ferromagnetic - NiFe + IrMn" the answer would be "antiferromagnetic."
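With a Word2vec model trained on such abstracts (for example via the gensim library; the model file name below is hypothetical), that analogy query reduces to a few lines of vector arithmetic:

```python
from gensim.models import Word2Vec

# Hypothetical file: a Word2vec model trained on the 3.3 million abstracts.
model = Word2Vec.load("matsci_abstracts_word2vec.model")

# "ferromagnetic - NiFe + IrMn": add and subtract word vectors, then look
# for the nearest word. The article reports the answer is "antiferromagnetic".
result = model.wv.most_similar(positive=["ferromagnetic", "IrMn"],
                               negative=["NiFe"], topn=3)
print(result)
```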

Word2vec was even able to learn the relationships between elements on the periodic table when the vector for each chemical element was projected onto two dimensions.

Predicting discoveries years in advance

So if Word2vec is so smart, could it predict novel thermoelectric materials? A good thermoelectric material can efficiently convert heat to electricity and is made of materials that are safe, abundant and easy to produce.

The Berkeley Lab team took the top thermoelectric candidates suggested by the algorithm, which ranked each compound by the similarity of its word vector to that of the word "thermoelectric." Then they ran calculations to verify the algorithm's predictions.
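In the same spirit, the ranking step amounts to sorting candidate compound tokens by the cosine similarity of their word vectors to the vector for "thermoelectric"; the candidate list and model file below are again hypothetical, for illustration only:

```python
from gensim.models import Word2Vec

model = Word2Vec.load("matsci_abstracts_word2vec.model")  # hypothetical file
candidates = ["Bi2Te3", "SnSe", "CuGaTe2"]                # illustrative tokens

# Rank candidates by how close their vectors sit to the word "thermoelectric".
ranked = sorted(candidates,
                key=lambda c: model.wv.similarity(c, "thermoelectric"),
                reverse=True)
for compound in ranked:
    print(compound, round(float(model.wv.similarity(compound, "thermoelectric")), 3))
```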

Of the top 10 predictions, they found all had computed power factors slightly higher than the average of known thermoelectrics; the top three candidates had power factors at above the 95th percentile of known thermoelectrics.

Next they tested if the algorithm could perform experiments "in the past" by giving it abstracts only up to, say, the year 2000. Again, of the top predictions, a significant number turned up in later studies - four times more than if materials had just been chosen at random. For example, three of the top five predictions trained using data up to the year 2008 have since been discovered and the remaining two contain rare or toxic elements.

The results were surprising. "I honestly didn't expect the algorithm to be so predictive of future results," Jain said. "I had thought maybe the algorithm could be descriptive of what people had done before but not come up with these different connections. I was pretty surprised when I saw not only the predictions but also the reasoning behind the predictions, things like the half-Heusler structure, which is a really hot crystal structure for thermoelectrics these days."

He added: "This study shows that if this algorithm were in place earlier, some materials could have conceivably been discovered years in advance." Along with the study the researchers are releasing the top 50 thermoelectric materials predicted by the algorithm. They'll also be releasing the word embeddings needed for people to make their own applications if they want to search on, say, a better topological insulator material.

Up next, Jain said the team is working on a smarter, more powerful search engine, allowing researchers to search abstracts in a more useful way.

The study was funded by Toyota Research Institute. Other study co-authors are Berkeley Lab researchers John Dagdelen, Leigh Weston, Alexander Dunn, and Ziqin Rong, and UC Berkeley researcher Olga Kononova.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Ovarian and breast cancer research finds new ways BRCA1 gene functions

Research led by the University of Birmingham has found important new ways that the BRCA1 gene functions which could help develop our understanding of the development of ovarian and breast cancers.

The research, published in Nature today (July 3rd), was led by experts at the University of Birmingham's Birmingham Centre for Genome Biology and Institute of Cancer and Genomic Sciences and is part of a five-year research project which is playing a pivotal role in identifying and understanding breast cancer genes.

First author Manolo Daza-Martin, of the University of Birmingham, explained: "No two people are born the same and, as a result, we all have slightly different chances of developing diseases during our lifetimes - this is the result of natural variation in our genes.

"On top of this natural variation, about one in a thousand people inherit from one of their parents a damaged, or 'mutated', copy of a gene called BRCA1.

"Previous research has shown us that in cells the BRCA1 gene makes a protein that helps repair damage to broken DNA. Therefore, people who inherit a faulty BRCA1 gene are less able to repair damage that inevitably accumulates in their DNA over time - putting them at higher risk of ovarian and breast cancer.

Hollywood actress Angelina Jolie had a double mastectomy and announced she would also have her ovaries removed in 2013 after testing positive for the BRCA1 genetic mutation. Her mother died of cancer at the age of 56.

DNA damage can also occur when cells have difficulty copying their DNA, leaving it vulnerable to breakage. BRCA1 helps protect DNA when the copying machinery gets stuck, but it was not known how. Now University of Birmingham researchers, in collaboration with scientists at Imperial College London, have found that BRCA1 changes shape in order to protect vulnerable DNA until the copying machinery can be restarted. In addition, the researchers found that in some patients with a personal or family history of breast and ovarian cancer, the protective role of BRCA1 in DNA copying is disabled - while its break-repair function is still active.

Joint corresponding author Dr Ruth Densham, of the University of Birmingham's Institute of Cancer and Genomic Sciences, added: "BRCA1 is like a DNA Damage Scene Coordinator, whose role is to coordinate emergency response units at a damage site in order to help repair. It was surprising to find out that BRCA1 changes shape depending on the type of damage it finds at the scene, and this shape change alters the way the cell responds."

Lead and corresponding author Professor Jo Morris, also of the University of Birmingham's Institute of Cancer and Genomic Sciences, said: "Our research could be important for understanding how cancers develop and means we may have identified a new way of suppressing tumours.

"We are long way from it, but ultimately this may alter how cancer patients are treated. We will now continue this important research into the role of BRCA1's DNA copying function in cancer development."

The average woman in the UK has a 12.5 per cent chance of developing breast cancer at some point in her life.

About one in 20 (five per cent) of the 50,000 women diagnosed with breast cancer every year carries an inherited gene fault like BRCA1.

A female BRCA1 carrier has between a 60 and 90 per cent chance of developing breast cancer, and around a 40 to 60 per cent chance of ovarian cancer. The precise figure for an individual woman will vary according to several things, such as her age, the number of affected family members, and the exact nature of the fault in the gene.

Credit: 
University of Birmingham

More 'reactive' land surfaces cooled the Earth down

image: Zermatt in the Western Alps.

Image: 
F. von Blanckenburg

From time to time, there have been long periods of cooling in Earth's history. Temperatures had already been falling for more than ten million years before the last ice age began about 2.5 million years ago. At that time, the northern hemisphere was covered with massive ice sheets and glaciers. A geoscientific paradigm, widespread for over twenty years, explains this cooling with the formation of large mountain ranges such as the Andes, the Himalayas and the Alps. As a result, the paradigm suggests, more rock weathering took place. This in turn removed more carbon dioxide (CO2) from the atmosphere, so that the 'greenhouse effect' weakened and the atmosphere cooled. This and other processes eventually led to the 'Ice Age'.

In a new study, Jeremy Caves-Rugenstein from ETH Zurich, Dan Ibarra from Stanford University and Friedhelm von Blanckenburg from the GFZ German Research Centre for Geosciences in Potsdam were able to show that this paradigm cannot be upheld. According to the paper, weathering was constant over the period under consideration. Instead, increased 'reactivity' of the land surface has led to a decrease in CO2 in the atmosphere, thus cooling the Earth. The researchers published the results in the journal Nature.

A second look after isotope analysis

The process of rock weathering, and especially the chemical weathering of rocks with carbonic acid, has controlled the Earth's climate for billions of years. Carbonic acid is produced when CO2 dissolves in rainwater. Weathering thus removes CO2 from the Earth's atmosphere, precisely to the extent that volcanic gases supply the atmosphere with it. The paradigm that has been widespread so far states that, with the formation of the large mountain ranges in the last 15 million years, erosion processes increased - and with them the CO2-binding rock weathering. Indeed, geochemical measurements in ocean sediments show that the proportion of CO2 in the atmosphere decreased strongly during this phase.
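For readers who want the chemistry behind "CO2-binding rock weathering", the textbook carbonate-silicate schematic is shown below. These are standard geochemistry reactions, not equations taken from the paper: carbonic acid formed from rainwater CO2 attacks silicate rock, and the dissolved products are later buried on the seafloor as carbonate, so one CO2 molecule is removed from the atmosphere per unit of silicate weathered.

```latex
% Standard carbonate--silicate weathering schematic (not from the paper).
\begin{align*}
\mathrm{CO_2 + H_2O} &\rightleftharpoons \mathrm{H_2CO_3} \\
\mathrm{CaSiO_3 + 2\,H_2CO_3} &\rightarrow \mathrm{Ca^{2+} + 2\,HCO_3^- + SiO_2 + H_2O} \\
\mathrm{Ca^{2+} + 2\,HCO_3^-} &\rightarrow \mathrm{CaCO_3 + CO_2 + H_2O} \\
\text{net:}\quad \mathrm{CaSiO_3 + CO_2} &\rightarrow \mathrm{CaCO_3 + SiO_2}
\end{align*}
```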

"The hypothesis, however, has a big catch," explains Friedhelm von Blanckenburg of GFZ. "If the atmosphere had actually lost as much CO2 as the weathering created by erosion would have caused, it would hardly have had any CO2 left after less than a million years. All water would have had frozen to ice and life would have had a hard time to survive. But that was not the case."

That these doubts are justified was already shown by von Blanckenburg and his colleague Jane Willenbring in a 2010 study, which likewise appeared in Nature. "We used measurements of the rare isotope beryllium-10, produced by cosmic radiation in the Earth's atmosphere, and its ratio to the stable isotope beryllium-9 in ocean sediment to show that the weathering of the land surface had not increased at all," says Friedhelm von Blanckenburg.

The land's surface has become more 'reactive'

In the newly published study, Caves-Rugenstein, Ibarra and von Blanckenburg additionally used data on stable isotopes of the element lithium in ocean sediments as an indicator of weathering processes. They wanted to find out how, despite constant rock weathering, the amount of CO2 in the atmosphere could have decreased. They entered their data into a computer model of the global carbon cycle.

Indeed, the model results showed that the potential of the land surface to weather increased, but not the speed at which it actually weathered. The researchers call this weathering potential the 'reactivity' of the land surface. "Reactivity describes how easily chemical compounds or elements take part in a reaction," explains Friedhelm von Blanckenburg. If there are more unweathered, and therefore more reactive, rocks at the surface, they can in total remove as much CO2 from an atmosphere containing little of it as heavily weathered rocks would from an atmosphere containing a lot. The decrease in atmospheric CO2 that is responsible for the cooling can thus be explained without an increased speed of weathering.
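The logic can be made concrete with a toy mass balance. The sketch below is not the authors' carbon-cycle model; it simply assumes that at steady state the weathering CO2 sink balances the volcanic CO2 source, and that the sink can be written as reactivity times a rate constant times pCO2 raised to some power. Under those assumptions, a more reactive land surface forces a lower steady-state CO2 level even though the weathering flux itself never changes. All constants and the functional form are illustrative.

```python
# Toy steady-state balance (assumptions, not the paper's model):
#   F_volc = R * K * pCO2**N
# where R is the land-surface "reactivity". With F_volc, K and N fixed,
# a larger R implies a lower steady-state pCO2 at the same weathering flux.
F_VOLC = 6.0e12   # volcanic CO2 input, mol C per year (illustrative magnitude)
K = 1.0e10        # lumped rate constant (arbitrary units for this sketch)
N = 0.5           # assumed sensitivity of weathering to pCO2

def steady_state_pco2(reactivity: float) -> float:
    """Solve F_volc = reactivity * K * pCO2**N for pCO2 (arbitrary units)."""
    return (F_VOLC / (reactivity * K)) ** (1.0 / N)

for r in (1.0, 1.5, 2.0):   # with N = 0.5, doubling reactivity quarters pCO2
    print(f"reactivity {r:.1f} -> steady-state pCO2 {steady_state_pco2(r):.1f}")
```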

"However, a geological process is needed to rejuvenate the land surface and make it more 'reactive'," says Friedhelm von Blanckenburg."This does not necessarily have to be the formation of large mountains. Similarly, tectonic fractures, a small increase in erosion or the exposure of other types of rock may have caused more material with weathering potential to show at the surface. In any case, our new hypothesis must trigger geological rethinking regarding the cooling before the last ice age."

Credit: 
GFZ GeoForschungsZentrum Potsdam, Helmholtz Centre