Tech

Tokyo Tech: Hosono's story of IGZO TFT development features in Nature Electronics

image: Progress of IGZO TFT research and implementation into displays.

Image: 
Hideo Hosono, Tokyo Institute of Technology

Each issue of the journal Nature Electronics contains a column called "Reverse Engineering," which examines the development of an electronic device now in widespread use from the viewpoint of the main inventor. So far, it has featured creations such as the DRAM, DVD, CD, and Li-ion rechargeable battery. The July 2018 column tells the story of the IGZO thin film transistor (TFT) through the eyes of Professor Hideo Hosono of Tokyo Tech's Institute of Innovative Research (IIR), who is also director of the Materials Research Center for Element Strategy.

TFTs based on an oxide of indium (In), gallium (Ga), and zinc (Zn), known as IGZO, made possible high-resolution, energy-efficient displays that had not been seen before. IGZO's electron mobility is 10 times that of hydrogenated amorphous silicon, which was previously used exclusively for displays. In addition, its off-current is extremely low and it is transparent, allowing light to pass through. IGZO has been applied to drive liquid crystal displays, such as those on smartphones and tablets. Three years ago, it was also used to drive large OLED televisions, which was considered a major breakthrough. This market is rapidly expanding, as can be seen from the products released by South Korean and Japanese electronics manufacturers, which now dominate store shelves.

The electron conductivity of transition metal oxides has long been known, but modulating their current with electric fields had not been demonstrated. In the 1960s, it was reported that current modulation was possible when zinc oxide, tin oxide, and indium oxide were formed into TFT structures. Their performance, however, was poor, and reports of research on such oxide TFTs were mostly nonexistent until around 2000. A new field called oxide electronics, which examines oxides as electronic materials, came into existence in the early noughties. A hub for this research was the present-day Laboratory for Materials and Structures within IIR, and research into zinc oxide TFTs soon spread worldwide. However, since the thin film was polycrystalline, there were problems with its characteristics and stability, and no practical applications were achieved.

Application in displays, unlike in CPUs, requires the ability to form a thin, homogeneous film on a large substrate -- as amorphous materials allow -- together with a dramatic increase in electric current at a low gate voltage when the thin film is subjected to an electric field. However, while amorphous materials were the optimal choice for forming thin, homogeneous films, structural disorder caused high carrier concentrations and other problems that for the most part prevented electric current modulation by electric fields. The only exception was amorphous silicon containing a large amount of hydrogen, reported in 1975. TFTs made of this material were applied to drive liquid crystal displays, which grew into a giant 10-trillion-yen industry. However, electron mobility was still two to three orders of magnitude lower than that of crystalline silicon -- no better than 0.5 to 1 cm² V⁻¹ s⁻¹. Amorphous semiconductors, therefore, were easy to produce but were seen as having much inferior electronic properties.
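To see why the mobility gap matters for displays, a simple illustration (not from the article) uses the standard long-channel, linear-region expression for TFT drain current, I_D = mu * C_ox * (W/L) * (V_GS - V_T) * V_DS: drive current, and hence how quickly a pixel can be charged, scales directly with mobility. All numerical values below are assumed for illustration only.

# Illustrative sketch (not from the article): how TFT drive current scales with
# channel mobility in the standard long-channel, linear-region approximation
#   I_D = mu * C_ox * (W / L) * (V_GS - V_T) * V_DS
# All numeric values below are assumed for illustration only.

def drain_current(mu_cm2_Vs, c_ox_F_cm2, w_over_l, v_gs, v_t, v_ds):
    """Linear-region drain current (A) for a thin-film transistor."""
    return mu_cm2_Vs * c_ox_F_cm2 * w_over_l * (v_gs - v_t) * v_ds

C_OX = 1.0e-8      # gate capacitance per area, F/cm^2 (assumed)
W_OVER_L = 10.0    # channel width/length ratio (assumed)
V_GS, V_T, V_DS = 5.0, 1.0, 0.1   # bias conditions in volts (assumed)

i_asi = drain_current(0.7, C_OX, W_OVER_L, V_GS, V_T, V_DS)    # a-Si:H, ~0.5-1 cm^2/Vs
i_igzo = drain_current(10.0, C_OX, W_OVER_L, V_GS, V_T, V_DS)  # amorphous IGZO, ~10 cm^2/Vs

print(f"a-Si:H TFT current: {i_asi:.2e} A")
print(f"IGZO TFT current:   {i_igzo:.2e} A")
print(f"ratio: {i_igzo / i_asi:.1f}x")  # scales directly with mobility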

Hosono focused his attention on oxides with a highly ionic bonding nature: the series made up of non-transition metals belonging to the p-block of the periodic table. In this series of materials, the bottom of the conduction band, which serves as the path for electrons, is made up mainly of spherically symmetric metal s-orbitals with a large spatial spread. Because of this, the degree of orbital overlap, which governs how easily electrons can move, is not sensitive to the bond-angle variation that is an intrinsic feature of amorphous materials.

The professor realized that this characteristic might allow amorphous materials to reach mobilities comparable to those of polycrystalline thin films. He experimented accordingly and was able to find some examples. In 1995, he presented his idea and examples at the 16th International Conference on Amorphous Semiconductors, and the paper was published in its proceedings the following year. After proving the hypothesis through experiments and calculations, he started test-producing TFTs. Many combinations of elements fulfilled the conditions of the hypothesis. IGZO was selected because it had a stable crystalline phase that is easy to prepare, and its specific local structure around Ga suggested that carrier concentration could be suppressed. In 2003, Hosono and his collaborators reported in Science that crystalline epitaxial thin films could reach a mobility of around 80 cm² V⁻¹ s⁻¹. The following year, they reported in Nature that amorphous thin films could also reach a mobility of around 10 cm² V⁻¹ s⁻¹.

Following these findings, research on amorphous oxide semiconductors and their TFTs began increasing rapidly around the world -- not only within the communities of the Society for Information Display (SID) and the International Conference on Amorphous Semiconductors. This activity has continued, and Hosono's two papers have now been cited over 2,000 and 5,000 times respectively. The total citations of the patents associated with these inventions now exceed 9,000. Products with displays incorporating these TFTs have been available to general consumers since 2012. In particular, large OLED televisions, which appeared around 2015, became possible only because of the unique characteristics of amorphous IGZO TFTs -- their high mobility and their ability to easily form a thin, homogeneous film over a large area. Such displays are installed on the first floor of the Materials Research Center for Element Strategy and in the foyer of the Laboratory for Materials and Structures at Tokyo Tech. Application of IGZO TFTs to large high-definition LCD televisions is expected to start soon.

Credit: 
Tokyo Institute of Technology

Thinking about quitting Facebook? There's a demographic analysis for that

People are either Facebook users or they are not.

Facebook user data can be used to draw conclusions about general social phenomena.

According to Eric P.S. Baumer, who studies human-computer interaction, the simple statements above are, in fact, not so simple--nor are they true.

Baumer's new research takes a fine-grained look at Facebook use and non-use, using a more nuanced approach than has previously been undertaken. Rather than employing a simple binary of two categories--use and non-use--Baumer's study examines the demographic and socioeconomic factors that shape Facebook use and non-use, sorting respondents into four categories:

current user, who currently has and uses a Facebook account;

deactivated, who has temporarily deactivated her/his account but could technically reactivate at any time;

considered deactivating, who has considered deactivating her/his account but never actually done so; and

never used, who has never had a Facebook account.

Baumer analyzed data collected by Cornell's Survey Research Institute in 2015 for the Cornell National Social Survey. The data set includes responses from 1,000 U.S. households gleaned from a phone survey of adults 18 years or older. Through probabilistic modeling, Baumer sought to identify predictors for the four different types of Facebook use and non-use. He presented his findings at the 2018 ACM Conference on Human Factors in Computing Systems in Montreal, Canada, on April 25. The findings were also published in a paper titled "Socioeconomic Inequalities in the Non/use of Facebook."

Of the factors he explored, eight emerged as predictors of use and non-use: age, gender, marital status, whether the respondent had looked for work in the past four weeks, household income, race, weight and social ideology (liberal to conservative). The strongest predictors, he found, were age, gender, whether the respondent had looked for work in the past four weeks and household income.
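The article does not spell out Baumer's exact model, but as a rough illustration of how a four-category use/non-use outcome can be related to predictors like these, here is a minimal sketch assuming a multinomial logistic regression fitted with statsmodels on synthetic data; the variable names and values are invented stand-ins, not the Cornell survey.

# Hypothetical sketch: one way to model four categories of Facebook non/use
# from survey predictors. The data, column names, and model family here are
# illustrative assumptions; the published analysis may differ in its details.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "seeking_work": rng.integers(0, 2, n),
    "household_income": rng.normal(60, 25, n),  # in $1,000s, synthetic
})
# Outcome: 0=current user, 1=deactivated, 2=considered deactivating, 3=never used
y = rng.integers(0, 4, n)

X = sm.add_constant(df)
model = sm.MNLogit(y, X).fit(disp=False)

# Exponentiated coefficients are odds ratios relative to the baseline category
print(np.exp(model.params))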

The research shows that current Facebook use is more common among respondents who are: middle aged (40 to 60), female, not seeking employment, of Asian descent, or currently married. Deactivation, either actual or considered, is more common among respondents who are younger, seeking employment, or not married. Respondents most likely to have never had an account are older, male, from a lower-income household, Black or African-American, more socially conservative, or of lower weight.

Baumer says his findings suggest how socioeconomic factors might work in concert.

"My analysis reveals that individuals from lower-income households are less likely ever to have had a Facebook account," says Baumer. "Yet, social networks have been shown to play an important role in fostering 'social capital,' which can be leveraged for accomplishing certain tasks, including securing employment. Also, respondents who had looked for work within the last four weeks were more likely to have deactivated their Facebook accounts--eliminating a potential resource in their job search."

"Facebook, rather than acting as a democratizer," writes Baumer, "may be perpetuating existing social inequalities."

Other demographic results from the study include:

Age: Older respondents were more likely to have never had a Facebook account. Older respondents who did have an account were less likely to have deactivated or to have considered deactivating. For example, Baumer's analysis reveals that every one-year increase in age increases the odds of having never had a Facebook account by 4.6%.

Younger respondents are more likely to have either deactivated or considered deactivating their Facebook account, while they are simultaneously less likely to be a current user. The probability of deactivation, considered or actual, drops as age increases, while the probability of never having had an account goes up.
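Because odds ratios multiply, the reported 4.6% per-year figure compounds quickly with age. A quick illustration, assuming that per-year odds ratio of 1.046:

# Quick check of how a per-year odds ratio compounds (illustrative only).
# The article reports that each additional year of age raises the odds of
# never having had a Facebook account by 4.6%, i.e. an odds ratio of 1.046.
per_year_or = 1.046

for years in (1, 10, 20, 40):
    print(f"{years:2d} years older -> odds multiplied by {per_year_or ** years:.2f}")
# 10 years: ~1.57x, 20 years: ~2.46x, 40 years: ~6.04x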

"Rather than try Facebook and leave, older respondents never had an account in the first place," writes Baumer in the paper.

Gender: Female respondents were 2.656 times more likely than male respondents to be a current user rather than never having used Facebook. However, gender did not significantly predict deactivation, either considered or actual. This result aligns with prior findings that social media use is more common among female respondents.

Employment: A respondent's current employment status did not emerge as a significant predictor. However, the model does include whether the respondent looked for work in the past four weeks. Respondents who had looked for work were 2.030 times more likely to have deactivated their account and 2.276 times more likely to have considered deactivating.

Household Income: A respondent's household income had no significant effect on deactivation, either considered or actual. However, respondents with lower household incomes were more likely to never have had a Facebook account.

Race: In the final model, only two racial categories have a significant impact, and each of those only significantly impacts a single type of use and/or non-use. First, respondents who identify as Asian are only 0.278 times as likely (i.e., 3.597 times less likely) to have considered deactivating their account. These respondents are also 0.238 times as likely (i.e., 4.202 times less likely) to have never had a Facebook account.

Second, respondents who identified as Black were more likely never to have had a Facebook account. This finding aligns with previous research suggesting Facebook use is relatively less common among African Americans.

Marital Status: Being married (as opposed to single, divorced, widowed, etc.) decreases the chance of considering deactivation and reduces the odds of actually deactivating almost by half.

Social Ideology: Self-identified conservative respondents were more likely never to have had a Facebook account. Each move toward the conservative end of the response scale corresponded to being 1.152 times more likely to have never had an account. Social ideology has only a slight impact on the probability of deactivation, either considered or actual.

Weight: Heavier respondents were less likely to have considered deactivating their account and to have never had an account. Lower-weight respondents are more likely only to consider deactivating, while higher-weight respondents are more likely to have actually deactivated.

His analysis, says Baumer, also provides specific details about the types of populations that researchers are, and are not, studying when analyzing data from social media.

"The analysis helps to explain the ways that Facebook, and likely all social media, are not representative of the broader population," says Baumer. "Facebook users are more likely older, female, higher income earners, married and ideologically liberal."

Credit: 
Lehigh University

Computing power solves molecular mystery

image: Researchers had to study almost 100,000 simulation images of this type before they were able to identify what triggers the water molecules to split. Lots of computing power went into those simulations.

Image: 
NTNU

Chemical reactions take place around us all the time - in the air we breathe, the water we drink, and in the factories that make products we use in everyday life. And those reactions happen way faster than you can imagine.

Given optimal conditions, molecules can react with each other in a quadrillionth of a second.

Industry is constantly striving to achieve faster and better chemical processes. Producing hydrogen, which requires splitting water molecules, is one example.

In order to improve the processes we need to know how different molecules react with each other and what triggers the reactions.

Challenging, even with computer simulations

Computer simulations help make it possible to study what happens during a quadrillionth of a second.

So if the sequence of a chemical reaction is known, or if the triggers that initiate the reaction occur frequently, the steps of the reaction can be studied using standard computer simulation techniques.

But this is often not the case in practice. Molecular reactions frequently behave differently. Optimal conditions are often not present - like with water molecules used in hydrogen production - and this makes reactions challenging to investigate, even with computer simulations.

Until recently, we haven't known what initiates the splitting of water molecules. What we do know is that a water molecule has a life span of ten hours before it splits. Ten hours may not sound like a long time, but compared to the molecular time scale - a quadrillionth of a second - it's really long.

This makes it super challenging to figure out the mechanism that causes water molecules to divide. It's like looking for a needle in a huge haystack.
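A back-of-the-envelope calculation makes the scale of the problem concrete, assuming a typical molecular-dynamics timestep of about one femtosecond (the "quadrillionth of a second" mentioned above):

# Back-of-the-envelope estimate (assumed numbers): why waiting for the
# reaction in a plain molecular-dynamics run is hopeless.
waiting_time_s = 10 * 3600   # ~10 hours before a water molecule splits
timestep_s = 1e-15           # typical MD timestep, ~1 femtosecond (assumed)

steps_needed = waiting_time_s / timestep_s
print(f"timesteps needed on average: {steps_needed:.1e}")   # ~3.6e+19 steps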

Combining two techniques

NTNU researchers have recently found a way to identify the needle in just such a haystack. In their study, they combined two techniques that had not previously been used together.

By using their special simulation method, the researchers first managed to simulate exactly how water molecules split.

"We started looking at these ten thousand simulation films and analysing them manually, trying to find the reason why water molecules split," says researcher Anders Lervik at NTNU's Department of Chemistry. He carried out his work with Professor Titus van Erp.

Huge amounts of data

"After spending a lot of time studying these simulation films, we found some interesting relationships, but we also realized that the amount of data was too massive to investigate everything manually.

The researchers used a machine learning method to discover the causes that trigger the reaction. This method has never been used for simulations of this type. Through this analysis, the researchers discovered a small number of variables that describe what initiates the reactions.
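The article does not name the specific machine learning method, so the sketch below is only a generic illustration of the idea: label simulation snapshots as reactive or non-reactive, fit a classifier on candidate descriptors, and see which descriptors carry the predictive signal. All data and descriptor names here are synthetic.

# Hypothetical sketch of the general idea (the study's actual method and
# descriptors are not specified in this article): label simulation snapshots
# as reactive or non-reactive, fit a classifier on candidate descriptors,
# and ask which descriptors carry most of the predictive signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5000
# Synthetic candidate descriptors (e.g. bond lengths, angles, coordination numbers)
X = rng.normal(size=(n, 6))
# Synthetic labels: here only descriptors 0 and 3 actually matter
y = (X[:, 0] + 0.8 * X[:, 3] + 0.3 * rng.normal(size=n) > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for i, imp in enumerate(clf.feature_importances_):
    print(f"descriptor {i}: importance {imp:.3f}")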

What they found provides detailed knowledge of the causative mechanism, as well as ideas for ways to improve the process.

Finding ways for industrial chemical reactions to happen faster and more efficiently has taken a significant step forward with this research. It offers great potential for improving hydrogen production.

Credit: 
Norwegian University of Science and Technology

New evidence supports radical treatment of widespread form of malaria

Darwin, Australia: A team of malaria experts from a large international research collaboration has published results supporting the need for a radical cure strategy to tackle one of the most debilitating forms of malaria caused by the Plasmodium vivax parasite.

Vivax malaria affects more than 13 million people each year, with an estimated 40% of the world's population at risk of contracting the infection across regions stretching from South America to South-East Asia. In some regions P. vivax has become resistant to standard treatment with chloroquine. The problem is compounded by vivax's ability to lie dormant in the liver for long periods of time before causing recurrent infections that have an enduring impact on people's lives and livelihoods.

Led by a team at Menzies School of Health Research in Australia, the study has assembled individual patient data from clinical trials conducted since 2000, investigating the effect of chloroquine dosing, combined with the partner drug primaquine, and the risk of recurrent malaria across different settings. The study published today in the international journal The Lancet Infectious Diseases is the result of a collaboration between more than 50 international researchers under the auspices of the Worldwide Antimalarial Resistance Network (WWARN).

"Our findings highlight the substantial benefit of a modest increase in the dose of chloroquine in children aged under 5 years and the importance of combining primaquine with chloroquine to have a better chance of curing patients." explains Dr Rob Commons, PhD student at the Menzies School of Health Research and part of the WWARN Clinical Group.

"This analysis of more than 5,000 patients from 37 studies, across 17 countries, is the largest individual patient data meta-analysis of P. vivax clinical trials to date. Our results show chloroquine is currently given in lower doses than recommended, with as many as 35% of patients in trials given less than the WHO recommended 25 mg/kg. We also know from our analysis that these patients are more likely to fail treatment" confirms Dr Commons.

"The study highlights the need for clinicians in affected areas to provide radical cure to kill the blood and liver stage of the vivax parasite and ensure patients can recover quickly. We also want to prevent transmission of the parasite to other people and reduce the global burden of this disease" adds Professor Ric Price, Head of the Clinical Group at the Worldwide Antimalarial Resistance Network (WWARN).

"This research team has highlighted some important potential adjustments are needed to ensure all patients, especially small children, are given the best chance of recovery from vivax malaria." concludes Prof Kevin Baird, Head of the Eijkman-Oxford Clinical Research Unit (EOCRU) in Jakarta, Indonesia.

The effect of chloroquine dose and primaquine on Plasmodium vivax recurrence: a WWARN systematic review and individual patient pooled meta-analysis. The Lancet Infectious Diseases. THELANCETID-D-18-00317R1.

Credit: 
Centre for Tropical Medicine & Global Health

Blood test might help reduce unnecessary CT scans after traumatic brain injury

Peer reviewed / Observational study / Humans

A high sensitivity blood test might help doctors rule out traumatic intracranial injuries like brain haemorrhage and contusion before resorting to CT scanning, according to a large, multicentre observational trial published in The Lancet Neurology journal.

The novel blood test was administered within 12 hours of a suspected traumatic brain injury (TBI), and measured levels of two biomarker proteins which are released into the bloodstream following a brain injury. In the study, the test correctly identified 99.6% of patients who did not have a traumatic intracranial injury on head CT scans among over 1,900 adults (mostly with mild TBI) presenting to emergency departments in the USA and Europe.

Current practice for mild TBI involves a series of checklists of symptoms and signs--known as clinical decision rules--that a treating physician will look for to decide whether a CT scan is necessary. One of the most important clinical guides for determining the need for a CT scan is the patient's initial level of alertness--measured using the Glasgow Coma Scale score (GCS)--with some guidelines recommending a head CT for anyone with a less than perfect GCS score of 15.[1]

The blood test investigated in this study was able to predict which patients did not have a brain injury visible on CT scan with very high accuracy, even among those with a GCS less than 15. Further research to determine the extent to which the biomarker test complements decision rules, as well as its impact on health care costs and patient throughput, will be key to understanding the test's usefulness in clinical practice.

"Based on the results of this multicentre study, routine use of the new biomarker test in emergency departments could reduce head CT scans by a third in acutely head injured patients thought to be in need of CT scanning, avoiding unnecessary CT-associated costs and radiation exposure, with a very low false-negative rate", says Dr Jeffrey Bazarian at the University of Rochester School of Medicine in Rochester, New York (USA) who co-led the research. [2]

"Our results suggest that patients with mild TBI (initial GCS of 14 or 15) who have no other indication for a CT (such as a focal neurologic deficit), and who have a negative test can safely avoid a CT scan. Those patients with a positive test have a 10% chance of an intracranial lesion and most clinicians would get a CT scan of their head to determine if an intracranial injury exists, and define it further. The extent to which these biomarker results can be applied to patients presenting with more severe injury, that is in those with a GCS less than 14, requires further confirmation." [2]

Based on early, unpublished results of the study, the US Food and Drug Administration has approved the commercial use of this blood-based brain biomarker test, making it the first clinically approved test of its kind in North America.

TBI occurs when an external force such as a bump or blow to the head disrupts the normal function of the brain. Leading causes include falls, motor vehicle accidents, and assaults. An estimated 54-60 million people worldwide suffer a TBI every year. In the USA, TBI is responsible for more than 2.5 million visits to the emergency department every year, most of which involve milder TBIs like concussion.

Currently, doctors use CT scans to detect traumatic intracranial injuries, usually bleeding, which sometimes require immediate neurosurgery. Over 20 million head CT scans are performed each year in the US alone. However, CT scans reveal such injuries in less than 10% of those with milder head injuries, which make up three-quarters of all TBIs, and there is concern about the high radiation dose associated with CT scans of the head, which can increase the risk of cancer.

Previous research has highlighted the potential of blood-based brain injury biomarkers to predict patients at high risk of intracranial injuries and in need of CT scanning. S100B is a well-accepted biomarker for TBI and is already in clinical use in Europe. Two other proteins--ubiquitin C-terminal hydrolase-L1 (UCH-L1) and glial fibrillary acidic protein (GFAP)--have also emerged as promising predictors of head CT results in small studies.

To provide more evidence, the researchers conducted a prospective study of 1959 adults (aged 18 or over) attending emergency departments between December 2012 and March 2014 with suspected TBIs at 22 sites in the USA and Europe. The ALERT-TBI study directly compared the results of the new biomarker test combining UCH-L1 and GFAP with head CT scan results.

Participants had a Glasgow Coma Scale score ranging from 9-15, and the majority (98%) had a mild TBI, with a score of 14-15 (GCS scores range from 3 [deep coma] to 15 [full consciousness]). All participants received a head CT as part of standard emergency care and had blood samples taken within 12 hours of injury.

Results showed that 125 (6%) of participants had intracranial injuries detected on CT and eight (<1%) had injuries that were neurosurgically manageable. 1288 (66%) of patients had a positive GFAP and UCH-L1 test result, and 671 (34%) had a negative result.

The new biomarker test was positive in 97.6% of patients with a traumatic injury on head CT scan (sensitivity), whilst the probability that a patient with a negative test result was truly injury free was 99.6% (negative predictive value).
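Those two figures follow from an ordinary two-by-two comparison of test results against CT findings. The sketch below reconstructs them from the totals reported in the article; the individual cell counts are inferred rather than quoted, so treat them as approximate.

# Reconstruction of the reported accuracy figures from a 2x2 table.
# Cell counts are inferred from the totals in the article (125 CT-positive
# injuries, 671 negative biomarker tests), so treat them as approximate.
ct_positive = 125        # participants with traumatic injury on head CT
true_positive = 122      # biomarker test positive among CT-positive (inferred)
false_negative = ct_positive - true_positive            # = 3
negative_tests = 671     # participants with a negative biomarker test
true_negative = negative_tests - false_negative         # = 668

sensitivity = true_positive / ct_positive               # ~0.976
npv = true_negative / negative_tests                    # ~0.996
print(f"sensitivity: {sensitivity:.1%}, negative predictive value: {npv:.1%}")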

Given that there were ten times as many positive GFAP and UCH-L1 tests as positive CT scans among patients with TBIs, the authors speculate that these two proteins may be detecting more subtle degrees of injury not visible on CT scans.

According to co-lead author Dr Peter Biberthaler from the Technical University of Munich, Munich, Germany, "The majority of patients presenting with mild traumatic brain injuries like concussion do not have visible traumatic intracranial injuries on CT scans. Given the GFAP and UCH-L1 biomarker test's inherent simplicity, requiring only a blood draw, and its reliability at predicting the absence of intracranial injuries, we are hopeful of its future role in ruling out the need for CT scans in these patients." [2]

The authors note several limitations, including that the study did not evaluate the test's predictive ability for clinical outcomes such as prolonged post-concussive symptoms, cognitive impairment, and decreased functional status. They also note that it did not attempt to assess the test's diagnostic accuracy compared with currently used biomarkers (ie, S100B) and clinical decision rules for triaging CT scanning. Finally, the sensitivity analysis comparing the diagnostic performance of the biomarker test for each of the proteins separately suggested that GFAP alone might perform as well as the two proteins combined, and requires further validation.

Writing in a linked Comment, Professor Andrew Maas from Antwerp University Hospital and University of Antwerp, Belgium, and Dr Hester Lingsma from Erasmus University Medical Center in the Netherlands note that "several scientific and clinically relevant questions remain unanswered, despite ALERT-TBI being a pivotal study in the quest for objective parameters to aid in the evaluation of TBI in emergency departments." They write, "the only clinically relevant question for any new diagnostic test in mild TBI is does the test add value (eg, better outcomes or reduced costs) to currently used biomarkers and decision rules?...Inexplicably, this evaluation was not done...Our interpretation is that the added value of the test in clinical practice might well be small or even absent, and we strongly encourage the authors to prove us wrong. That would constitute a true addition to science and clinical practice."

Credit: 
The Lancet

Scientists unlock the properties of new 2D material

A new two-dimensional material has become a reality, thanks to a team of Danish and Italian scientists.

The research, led by physicists at Aarhus University, succeeded in the first experimental realisation and structural investigation of single-layer vanadium disulphide (VS2). It is published today in the journal 2D Materials.

VS2 is one of a diverse group of compounds known as transition metal dichalcogenides (TMDs). Many of these can assume a layered crystal structure from which atomically thin crystalline sheets can be isolated. The electronic properties of the single-atomic-layer crystals can differ in important ways from those of the layered bulk crystals.

Lead author Dr Charlotte Sanders of Aarhus University explained the importance of the new findings: "Theoretical studies suggest that single-layer VS2 might exhibit very interesting physics, including magnetism and strong correlations. It might also host charge density wave states, as does bulk VS2. However, making VS2 is difficult and the single layer has not been successfully made before now.

"In fact, magnetism in single-layer materials has only recently been observed, and is still quite rare. So, the possibility that this material might be magnetic is exciting."

To make the single layer of VS2, the researchers evaporated vanadium onto a clean gold surface at room temperature. They then heated the sample in the presence of sulphur-containing molecules that react with the vanadium to produce the VS2. The team measured the properties of the samples using low-energy electron diffraction, scanning tunnelling microscopy, and X-ray photoelectron spectroscopy.

Significantly, the team also discovered a new and unpredicted vanadium sulphide compound. Most 2D materials can, in theory (although not necessarily in practice), be derived from bulk layered crystals. However, there is no 3D material that has similar stoichiometry and crystal structure to those of the new compound, which is formed when single-layer VS2 is depleted of sulphur by heating.

In consideration of the likely magnetic properties of related vanadium compounds, the new material might be another candidate for two-dimensional magnetism.

"The new material's electronic structure, along with possible charge density wave phases and magnetic ordering, remain to be explored, and an interesting open question is how its properties differ from those of stoichiometric single-layer VS2," said Dr Sanders.

Credit: 
IOP Publishing

Researchers use nanotechnology to improve the accuracy of measuring devices

Scientists from the Higher School of Economics and the Federal Scientific Research Centre 'Crystallography and Photonics' have synthesized multi-layered nanowires in order to study their magnetoresistance properties. Improving this effect will allow scientists to increase the accuracy of various measuring instruments, such as compasses and radiation monitors. The results of the study have been published in the paper 'Structure of Cu/Ni Nanowires Obtained by Matrix Synthesis.'

One of the unique features of artificial nanostructures is the giant magnetoresistance effect in thin layers of metal. This effect is exploited in various electronic devices.

The scientists synthesized multi-layered copper and nickel nanowires, in order to study their characteristics, which depend on the layers' composition and geometry. 'We expect that the transition to multi-layered nanowires will increase the giant magnetoresistance effect considerably. Today, we are 'choosing' the method of nanowire synthesis, in order to get this effect', said Ilia Doludenko, Moscow Institute of Electronics and Mathematics (MIEM HSE) graduate and one of the authors.

To determine the correlation between the synthesis parameters and the crystal structure, the scholars synthesized nanowires of different lengths. The nanowire length was determined by the number of deposition cycles; one nickel layer and one copper layer were deposited in each cycle. The size of the nanowires was determined using a scanning electron microscope (SEM). The number of pairs of layers in the nanowires was found to be 10, 20, or 50, according to the number of electrodeposition cycles.

When the length of the nanowire was compared to the number of layers, it turned out that the relationship between the two was nonlinear. The average lengths of the nanowires composed of 10, 20, and 50 pairs of layers were, respectively, 1.54 μm, 2.6 μm, and 4.75 μm. The synthesized nanowires all had a grain structure with crystallites of different sizes, from 5-20 nm to 100 nm. Large, bright reflections were mainly due to the metals (Ni and Cu), while diffuse rings and small reflections were generally related to the presence of copper oxides.
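The nonlinearity is easy to see by dividing each average length by its number of layer pairs, as in this quick calculation using the figures quoted above:

# Quick check of the nonlinearity noted above: average length per pair of
# Cu/Ni layers shrinks as more electrodeposition cycles are used.
lengths_um = {10: 1.54, 20: 2.6, 50: 4.75}   # pairs of layers -> average length (um)

for pairs, length in lengths_um.items():
    print(f"{pairs:2d} pairs: {length:.2f} um total, {length / pairs * 1000:.0f} nm per pair")
# 10 pairs -> ~154 nm/pair, 20 -> ~130 nm/pair, 50 -> ~95 nm/pair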

An elemental analysis confirmed the presence of alternating Ni and Cu layers in all of the nanowires in the study. However, the mutual arrangement of layers may differ. Ni and Cu layers in the same nanowire may be oriented perpendicular to its axis or be at a particular angle. The individual units of the same nanowire may have different thicknesses. The thickness of individual units in nanowires is in the range of 50-400 nm.

According to the study authors, this heterogeneity depends on the parameters of the pore and decreases closer to the pore mouth. This leads to an increase in current, enhancement of the deposition rate and, as a result, an increase in the deposited layer thickness. Another possible reason is the difference in the diffusion mobilities of ions of the different metals. This explains the nonlinear relationship between the nanowire length and the number of layers mentioned above. The study of the composition of particular units demonstrated that copper units consist mainly of copper, while nickel is almost entirely absent. Nickel units, on the other hand, always contain a certain amount of copper. This amount may sometimes be as high as 20%.

The relevance of these findings relates to the potential creation of more accurate and cheaper detectors of motion, speed, position, current and other parameters. Such instruments could be used in the car industry, or to produce or improve medical devices and radiation monitors and electronic compasses.

Credit: 
National Research University Higher School of Economics

A protein that promotes compatibility between chromosomes after fertilization

image: Maternal (♀) and paternal (♂) chromosomes in a recently fertilized fruit fly egg. DNA is in blue; the paternal chromosomes are also labelled in green.

Image: 
Paulo Navarro Costa

A research team from the Center for Biomedical Research (CBMR), at the University of Algarve (UAlg), and Instituto Gulbenkian de Ciência (IGC), led by Rui Gonçalo Martinho (UAlg) and Paulo Navarro-Costa (UAlg and IGC) has identified the mechanism by which the fertilized egg balances out the differences between chromosomes inherited from the mother and the father. The study, now published in the scientific journal EMBO reports*, may pave the way for future developments in the clinical management of infertile couples.

The fertilization of an egg by a sperm cell marks the beginning of a new life. However, many of the molecular mechanisms behind this extraordinary process remain a mystery.

It is well known that mother and father pass on their genetic information in a different manner. While the maternal chromosomes in the egg are still undergoing division, the paternal chromosomes carried by the sperm have both completed their division and been substantially compacted to fit into the small volume of the sperm cell. The mechanisms through which the fertilized egg levels these differences between parental chromosomes - an essential aspect for the correct initiation of embryo development - are largely unknown.

The close partnership between the University of Algarve and IGC teams uncovered a protein called dMLL3/4 that allows the fertilized egg to ensure both the correct division of the maternal chromosomes and the unpacking of the paternal genetic information.

" dMLL3/4 is a gene expression regulator, therefore, it has the ability to instruct cells to perform different functions. We observed that dMLL3/4 promotes, still during egg development, the expression of a set of genes that will later be essential for balancing out differences between the chromosomes inherited from the mother and from the father," explains Paulo Navarro-Costa.

"These results open the door to new diagnostic approaches to female infertility, and to possible improvements in embryo culture media formulations for assisted reproduction techniques," adds Paulo Navarro-Costa.

"The dMLL3/4 protein was identified using fruit flies (Drosophila melanogaster) as a model organism, which again reinforces the importance of basic research and the use of model organisms as critical stepping-stones for translational research and the improvement of human health", concludes Rui Martinho.

Credit: 
Instituto Gulbenkian de Ciencia

Where Martian dust comes from

image: A portion of the Medusae Fossae Formation on Mars showing the effect of billions of years of erosion. The image was acquired by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter.

Image: 
NASA/JPL/University of Arizona

The dust that coats much of the surface of Mars originates largely from a single thousand-kilometer-long geological formation near the Red Planet's equator, scientists have found.

A study published in the journal Nature Communications found a chemical match between dust in the Martian atmosphere and the surface feature, called the Medusae Fossae Formation.

"Mars wouldn't be nearly this dusty if it wasn't for this one enormous deposit that is gradually eroding over time and polluting the planet, essentially," said co-author Kevin Lewis, an assistant professor of Earth and planetary science at the Johns Hopkins University.

In the film The Martian, a dust storm leads to a series of events that strands an astronaut played by actor Matt Damon. As in the movie, dust on Mars has caused problems for real missions, including the Spirit Mars exploration rover. The fine, powdery stuff can get into expensive instruments and obscure solar panels needed to power equipment.

On Earth, dust is separated from soft rock formations by forces of nature including wind, water, glaciers, volcanoes and meteor impacts. For more than 4 billion years, however, streams of water and moving glaciers have likely made but a small contribution to the global dust reservoir on Mars. While meteor craters are a common feature on the fourth planet from the sun, the fragments created by the impacts typically are bigger than the fine particles that comprise Martian dust.

"How does Mars make so much dust, because none of these processes are active on Mars?" said lead author Lujendra Ojha, a postdoctoral fellow in Lewis' lab. Although these factors may have played a role in the past, something else is to blame for the large swathes of dust surrounding Mars now, he said.

Ojha and the science team looked at the dust's chemical composition. Landers and rovers far apart on the planet have all reported surprisingly similar data about the dust. "Dust everywhere on the planet is enriched in sulfur and chlorine and it has this very distinct sulfur-to-chlorine ratio," Ojha said.

They also studied data captured by the spacecraft Mars Odyssey, which has orbited the planet since 2001. Ojha and his colleagues were able to pinpoint the MFF region as having an abundance of sulfur and chlorine, as well as a match to the ratio of sulfur to chlorine in Mars dust.

Earlier findings suggest that the MFF had a volcanic origin. Once about 50 percent of the size of the continental United States, the formation has been eroded by wind to roughly 20 percent of that size. Even so, it is the largest known volcanic deposit in our solar system.

Wind-carved ridges known as yardangs are the remnants of erosion. By calculating how much of the MFF has been lost over the past 3 billion years, the scientists could approximate the current quantity of dust on Mars -- enough to form a global layer 2 to 12 meters thick.
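For a rough sense of the arithmetic behind that figure, a global layer is simply the eroded volume spread over Mars' surface area (about 1.44e14 square meters). The sketch below works backwards from the reported 2-12 m range, so the volumes it prints are implied by that range rather than quoted from the study.

# Back-of-the-envelope check: a global dust layer is just eroded volume
# divided by Mars' surface area (~1.44e14 m^2). The volumes below are
# implied by the reported 2-12 m range, not quoted from the study.
MARS_SURFACE_AREA_M2 = 1.44e14

for thickness_m in (2, 12):
    volume_km3 = thickness_m * MARS_SURFACE_AREA_M2 / 1e9   # m^3 -> km^3
    print(f"{thickness_m:2d} m global layer ~ {volume_km3:,.0f} km^3 of eroded material")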

Dust particles can also affect Martian climate by absorbing solar radiation, resulting in lower temperatures at the ground level and higher ones in the atmosphere. This temperature contrast can create stronger winds, leading to more dust being lifted from the surface.

While seasonal dust storms happen every Martian year (twice as long as an Earth year), global dust storms can form about every 10 or so years.

"It just explains, potentially, one big piece of how Mars got to its current state," Lewis said.

Credit: 
Johns Hopkins University

Abnormal gene copying seen in tauopathy fruit fly models

SAN ANTONIO, Texas -- It sounds like science fiction: Nefarious genes clone themselves and settle their rogue copies in distant outposts of the galaxy (namely, our DNA), causing disease.

But it's a real phenomenon, and in research published July 23, scientists at UT Health San Antonio revealed that this genetic copy-and-paste activity is significantly increased in fruit fly models of tauopathies--neurodegenerative disorders that include Alzheimer's disease.

The researchers also discovered that lamivudine, an anti-retroviral drug approved for HIV and hepatitis B, decreased the copy-making and reduced the death of neuron cells in the brains of the fruit flies.

This research, published in Nature Neuroscience, suggests a potential novel avenue to treat the memory-robbing disease, which impacts 5.7 million Americans who have an Alzheimer's diagnosis and the millions more who provide care for them.

The researchers are from the Sam & Ann Barshop Institute for Longevity & Aging Studies, the Glenn Biggs Institute for Alzheimer's & Neurodegenerative Diseases, and the Department of Cell Systems & Anatomy at UT Health San Antonio.

The team identified "transposable element" activation as a key factor in neuron death in tauopathies. These disorders are marked by deposits of tau protein in the brain. There are more than 20 tauopathies, including Alzheimer's.

Lamivudine limited expression of genes that make DNA retrotransposons, which are the gene elements that clone themselves and insert the copies into a new spot, said Bess Frost, Ph.D., assistant professor of cell systems & anatomy and member of the Barshop and Biggs institutes at UT Health San Antonio.

"We know that these genes are copying themselves at higher levels in the tauopathy fly model," Dr. Frost said. "And we know we can stop that from happening by giving them this drug."

It's thought that the copy-and-paste activity is an effect that follows tau deposit accumulation. Ultimately in the disease course, neurons die.

"The toxic tau can be present, but if we give this drug and it blocks the transposable element activity, it's enough to decrease the amount of brain cells that are dying in the fly model," Dr. Frost said.

The researchers will study whether the drug could have the same effect in a human tauopathy. So far they have clues.

"We wanted to know if the transposable element activity was relevant to a human tauopathy, so we analyzed data obtained from a public-private program called the Accelerating Medicines Partnership," Dr. Frost said.

Transposable elements were found to be expressed at higher levels in the data drawn from human samples of Alzheimer's disease and another tauopathy, progressive supranuclear palsy. This gene expression is the first step before the copying activity can occur and will be further studied, Dr. Frost said.

The team believes the fruit fly and human findings are relevant not just to Alzheimer's disease but to all of the less common tauopathies, as well.

Normal fruit flies live about 70 days. The tauopathy model lives about 30 to 40 days, and researchers observe brain cell death at about 10 days, Dr. Frost said.

Credit: 
University of Texas Health Science Center at San Antonio

Generation of random numbers by measuring phase fluctuations from a laser diode with a silicon-on-insulator chip

image: Researchers created a chip-based device measuring a millimeter square that can potentially generate quantum-based random numbers at gigabit per second speeds. The small square to the right of the penny contains all the optical components of the random number generator.

Image: 
Francesco Raffaelli, University of Bristol

WASHINGTON -- Researchers have shown that a chip-based device measuring a millimeter square could be used to generate quantum-based random numbers at gigabit per second speeds. The tiny device requires little power and could enable stand-alone random number generators or be incorporated into laptops and smart phones to offer real-time encryption.

"While part of the control electronics is not integrated yet, the device we designed integrates all the required optical components on one chip," said first author Francesco Raffaelli, University of Bristol, United Kingdom. "Using this device by itself or integrating it into other portable devices would be very useful in the future to make our information more secure and to better protect our privacy."

Random number generators are used to encrypt data transmitted during digital transactions such as buying products online or sending a secure e-mail. Today's random number generators are based on computer algorithms, which can leave data vulnerable if hackers figure out the algorithm used.

In The Optical Society (OSA) journal Optics Express, the researchers report a quantum random number generator based on randomly emitted photons from a diode laser. Because the photon emission is inherently random, it is impossible to predict the numbers that will be generated.

"Compared to other integrated quantum random number generators demonstrated recently, ours can accomplish very high generation rates with relatively low optical powers," said Raffaelli. "Using less power to produce random numbers helps avoid problems such as excess heat on the chip."

Silicon photonics

The new chip was enabled by developments in silicon photonics technology, which uses the same semiconductor fabrication techniques used to make computer chips to fabricate optical components in silicon. It is now possible to fabricate waveguides into silicon that can guide light through the chip without losing the light energy along the way. These waveguides can be integrated onto a chip with electronics and integrated detectors that operate at very high speeds to convert the light signals into information.

The new chip-based random number generator takes advantage of the fact that under certain conditions a laser will emit photons randomly. The device converts these photons into optical power using a tiny device called an interferometer. Very small photodetectors integrated into the same chip then detect the optical power and convert it into a voltage that can be turned into random numbers.
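The chain described above runs from random photon phase to interferometer output to detector voltage to numbers. The toy sketch below illustrates only that principle, using a pseudorandom stand-in for the physical phase noise; it is not the Bristol device's actual signal chain or post-processing.

# Toy model of the principle only (not the Bristol device's signal chain):
# random laser phase -> interferometer output intensity -> digitized bits.
import numpy as np

rng = np.random.default_rng()          # stands in for the physical randomness
n_samples = 100_000

phase = rng.uniform(0.0, 2.0 * np.pi, n_samples)        # random phase fluctuations
intensity = 0.5 * (1.0 + np.cos(phase))                 # ideal interferometer output
voltage = intensity + rng.normal(0.0, 0.01, n_samples)  # photodetector plus noise

bits = (voltage > np.median(voltage)).astype(np.uint8)  # one comparator bit per sample
print("fraction of ones:", bits.mean())                 # ~0.5 for unbiased output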

"Despite the advancements in silicon photonics, there is still light lost inside the chip, which leads to very little light reaching the detectors," said Raffaelli. "This required us to optimize all the parameters very precisely and design low noise electronics to detect the optical signal inside the chip."

The new chip-based device not only brings portability advantages but is also more stable than the same device made using bulk optics. This is because interferometers are very sensitive to environmental conditions such as temperature and it is easier to control the temperature of a small chip. It is also far easier to precisely reproduce thousands of identical chips using semiconductor fabrication, whereas reproducing the necessary precision with bulk optics is more difficult.

Testing the chip

To experimentally test their design, the researchers had a foundry fabricate the random number generator chip. After characterizing the optical and electronic performance, they used it for random number generation. They estimate a potential randomness generation rate of nearly 2.8 gigabits per second for their device, which would be fast enough to enable real-time encryption.

"We demonstrated random number generation using about a tenth of the power used in other chip-based quantum random number generator devices," said Raffaelli. "Our work shows the feasibility of this type of integrated platform."

Although the chip containing the optical components is only one millimeter square, the researchers used an external laser, which provides the source of randomness, along with electronics and measurement tools that required an optical table. They are now working to create a portable device about the size of a mobile phone that contains both the chip and the necessary electronics.

Credit: 
Optica

Scientists develop new materials that move in response to light

video: 'Petals' of a mechanical flower in a magnetic field bend toward light as each petal is illuminated in turn.

Image: 
SilkLab, Tufts University

MEDFORD/SOMERVILLE, Mass. (July 23, 2018)--Researchers at Tufts University School of Engineering have developed magnetic elastomeric composites that move in different ways when exposed to light, raising the possibility that these materials could enable a wide range of products that perform simple to complex movements, from tiny engines and valves to solar arrays that bend toward the sunlight. The research is described in an article published today in the Proceedings of the National Academy of Sciences.

In biology, there are many examples where light induces movement or change - think of flowers and leaves turning toward sunlight. The light actuated materials created in this study are based on the principle of the Curie temperature - the temperature above which certain materials lose their magnetic properties. By heating and cooling a magnetic material, one can turn its magnetism off and on. Biopolymers and elastomers doped with ferromagnetic CrO2 heat up when exposed to laser light or sunlight, temporarily losing their magnetic properties until they cool down again. The basic movements of the material, shaped into films, sponges, and hydrogels, are induced by nearby permanent magnets or electromagnets and take the form of bending, twisting, and expansion.
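As a toy illustration of that Curie-temperature switching (not the authors' model), the sketch below treats the composite as simply ferromagnetic below an assumed Curie temperature of about 390 K, roughly the value usually quoted for CrO2, and non-magnetic above it; the temperatures used are illustrative.

# Toy model of the actuation principle only: magnetization switches off above
# the Curie temperature and recovers on cooling. The Curie temperature used
# here (~390 K, roughly CrO2's) and the temperature profile are illustrative.
T_CURIE_K = 390.0

def is_magnetic(temperature_k: float) -> bool:
    """Crude on/off picture: ferromagnetic below T_Curie, not above."""
    return temperature_k < T_CURIE_K

for label, temp in [("ambient", 300.0), ("laser on", 420.0), ("cooled again", 310.0)]:
    state = "responds to the magnet" if is_magnetic(temp) else "magnetically 'off'"
    print(f"{label:12s} ({temp:.0f} K): {state}")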

"We could combine these simple movements into more complex motion, like crawling, walking, or swimming," said Fiorenzo Omenetto, Ph.D., corresponding author of the study and the Frank C. Doble Professor of Engineering in the School of Engineering at Tufts. "And these movements can be triggered and controlled wirelessly, using light."

Omenetto's team demonstrated some of these complex movements by constructing soft grippers that capture and release objects in response to light illumination. "One of the advantages of these materials is that we can selectively activate portions of a structure and control them using localized or focused light," said Meng Li, the first author of the paper, "And unlike other light actuated materials based on liquid crystals, these materials can be fashioned to move either toward, or away from the direction of the light. All of these features add up to the ability to make objects large and small with complex, coordinated movements."

To demonstrate this versatility, the researchers constructed a simple "Curie engine". A light actuated film was shaped into a ring and mounted on a needle post near a permanent magnet. When a laser is focused onto a fixed spot on the ring, it locally demagnetizes that portion of the ring, creating an unbalanced net force that causes the ring to turn. As the ring turns, the demagnetized spot cools and regains its magnetization while a new spot is illuminated and demagnetized, causing the engine to rotate continuously.

Materials used to create the light actuated materials include polydimethylsiloxane (PDMS), a widely used transparent elastomer often shaped into flexible films, and silk fibroin, a versatile biocompatible material with excellent optical properties that can be shaped into a wide range of forms - from films to gels, threads, blocks and sponges.

"With additional material patterning, light patterning and magnetic field control, we could theoretically achieve even more complicated and fine-tuned movements, such as folding and unfolding, microfluidic valve switching, micro and nano-sized engines and more," said Omenetto.

Credit: 
Tufts University

NIST unblinded me with science: New application of blue light sees through fire

video: This video compares two different views of a laboratory fire spread test, one in normal light and the other with NIST's narrow-spectrum illumination system that uses ordinary blue light to see through the flames. In this example, blue-light imaging allows researchers to observe, track and measure the charring of the wood sample.

Image: 
Shot by J. Gales/York University and edited by D. Sawyer/NIST

Researchers at the National Institute of Standards and Technology (NIST) have demonstrated that ordinary blue light can be used to significantly improve the ability to see objects engulfed by large, non-smoky natural gas fires--like those used in laboratory fire studies and fire-resistance standards testing.

As described in a new paper in the journal Fire Technology, the NIST blue-light imaging method can be a useful tool for obtaining visual data from large test fires where high temperatures could disable or destroy conventional electrical and mechanical sensors.

The method provides detailed information to researchers using optical analysis such as digital image correlation (DIC), a technique that compares successive images of an object as it deforms under the influence of applied forces such as strain or heat. By precisely measuring the movement of individual pixels from one image to the next, scientists gain valuable insight about how the material responds over time, including behaviors such as strain, displacement, deformation and even the microscopic beginnings of failure.
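As a rough illustration of the core operation behind DIC (not NIST's actual software), the sketch below finds how far a small reference patch has moved between two synthetic frames using normalized cross-correlation; real DIC adds subpixel interpolation, full strain-field computation and much more.

# Minimal sketch of the core idea behind digital image correlation (DIC):
# find where a small reference patch from frame 1 reappears in frame 2.
# Synthetic images and a brute-force search; real DIC software is far more
# sophisticated (subpixel interpolation, strain fields, etc.).
import numpy as np

rng = np.random.default_rng(2)
frame1 = rng.random((200, 200))
shift = (3, 5)                                   # true displacement (rows, cols)
frame2 = np.roll(frame1, shift, axis=(0, 1))     # frame 2 = shifted copy of frame 1

r0, c0, size = 80, 80, 21                        # reference patch in frame 1
patch = frame1[r0:r0 + size, c0:c0 + size]

best, best_score = None, -np.inf
for dr in range(-10, 11):
    for dc in range(-10, 11):
        cand = frame2[r0 + dr:r0 + dr + size, c0 + dc:c0 + dc + size]
        score = np.corrcoef(patch.ravel(), cand.ravel())[0, 1]  # normalized correlation
        if score > best_score:
            best, best_score = (dr, dc), score

print("estimated displacement (rows, cols):", best)   # recovers (3, 5)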

However, using DIC to study how fire affects structural materials presents a special challenge: How does one get images with the level of clarity needed for research when bright, rapidly moving flames are between the sample and the camera?

"Fire makes imaging in the visible spectrum difficult in three ways, with the signal being totally blocked by soot and smoke, obscured by the intensity of the light emitted by the flames, and distorted by the thermal gradients in the hot air that bend, or refract, light," said Matt Hoehler, a research structural engineer at NIST's National Fire Research Laboratory (NFRL) and one of the authors of the new paper. "Because we often use low-soot, non-smoky gas fires in our tests, we only had to overcome the problems of brightness and distortion."

To do that, Hoehler and colleague Chris Smith, a research engineer formerly with NIST and now at Berkshire Hathaway Specialty Insurance, borrowed a trick from the glass and steel industry where manufacturers monitor the physical characteristics of materials during production while they are still hot and glowing.

"Glass and steel manufacturers often use blue-light lasers to contend with the red light given off by glowing hot materials that can, in essence, blind their sensors," Hoehler said. "We figured if it works with heated materials, it could work with flaming ones as well."

Hoehler and Smith used commercially available and inexpensive blue light-emitting diode (LED) lights with a narrow-spectrum wavelength around 450 nanometers for their experiment.

Initially, the researchers placed a target object behind the gas-fueled test fire and illuminated it in three ways: by white light alone, by blue light directed through the flames and by blue light with an optical filter placed in front of the camera. The third option proved best, reducing the observed intensity of the flame by 10,000-fold and yielding highly detailed images.

However, just seeing the target wasn't enough to make the blue-light method work for DIC analysis, Hoehler said. The researchers also had to reduce the image distortion caused by the refraction of light by the flame--a problem akin to the "broken pencil" illusion seen when a pencil is placed in a glass of water.

"Luckily, the behaviors we want DIC to reveal, such as strain and deformation in a heated steel beam, are slow processes relative to the flame-induced distortion, so we just need to acquire a lot of images, collect large amounts of data and mathematically average the measurements to improve their accuracy," Hoehler explained.

To validate the effectiveness of their imaging method, Hoehler and Smith, along with Canadian collaborators John Gales and Seth Gatien, applied it to two large-scale tests. The first examined how fire bends steel beams and the other looked at what happens when partial combustion occurs, progressively charring a wooden panel. For both, the imaging was greatly improved.

"In fact, in the case of material charring, we feel that blue-light imaging may one day help improve standard test methods," Hoehler said. "Using blue light and optical filtering, we can actually see charring that is normally hidden behind the flames in a standard test. The clearer view combined with digital imaging improves the accuracy of measurements of the char location in time and space."

Hoehler also has been involved in the development of a second method for imaging objects through fire with colleagues at NIST's Boulder, Colorado, laboratories. In an upcoming NIST paper in the journal Optica, the researchers demonstrate a laser detection and ranging (LADAR) system for measuring volume change and movement of 3D objects melting in flames, even through moderate amounts of soot and smoke.

Credit: 
National Institute of Standards and Technology (NIST)

Mandate patient access to primary care medical records

Canada's provincial governments should mandate patient access to their electronic medical records, argue the authors of a commentary in CMAJ (Canadian Medical Association Journal).

"[W]e believe that Canada's provincial governments should mandate patient portals of access to electronic medical records, as such a commitment to health information transparency would herald a new era in patient empowerment," write Dr. Iris Gorfinkel, PrimeHealth Clinical Research -- Family Practice, and Dr. Joel Lexchin, Faculty of Health, York University, Toronto, Ontario.

Although the Supreme Court of Canada ruled in 1993 that patients have the right to their personal health information, patients face obstacles to accessing primary care records, such as filling out authorization forms, fees and long waits.

Most family physicians use electronic medical records, yet it is difficult for other clinicians to access information on their patients. Some hospitals have electronic portals, such as MyChart, which allow patients to access information on results, reports and other information.

Physician workloads, government funding, costs, security and use by patients who are not technologically adept are some of the challenges to be addressed.

"Fully patient-centred care can begin only when patients are able to access their primary care records and share them with their physicians when most needed. Without this ability, patients and their families suffer needlessly, physicians are less effective and the cost-effectiveness of our universal health system is diminished."

Credit: 
Canadian Medical Association Journal