
Story tips from the Department of Energy's Oak Ridge National Laboratory, August 2019

Computing--"Seeing" in real time

Oak Ridge National Laboratory is training next-generation cameras called dynamic vision sensors, or DVS, to interpret live information--a capability that has applications in robotics and could improve autonomous vehicle sensing. Unlike a traditional digital camera that records large amounts of information in frames, a DVS transmits per-pixel changes in light intensity. Individual pixel locations are recorded and time stamped to the microsecond, creating data "events" that are processed by a neuromorphic network--a type of intelligent, energy-efficient computing architecture. "Because the DVS records only changes in what it sees, there is no redundant data," said ORNL intern Kemal Fidan, a participant in the DOE Science Undergraduate Laboratory Internships (SULI) program, who spent his summer learning about dynamic vision sensors under ORNL's Robert Patton. This capability makes the sensors fast, power efficient and effective across a wide range of light intensities. Fidan's project taught a DVS to recognize human gestures such as waving, clapping and two-finger peace signs in real time. --Abby Bower [Contact: Sara Shoemaker, (865) 576-9219; shoemakerms@ornl.gov]

Video: https://www.youtube.com/watch?v=w9b9dvt-NYs&feature=youtu.be

Caption: Dynamic vision sensors at Oak Ridge National Laboratory were trained to detect and recognize 11 different gestures, like waving and clapping, in real time. The resulting image shows movement on the pixel level. Credit: Kemal Fidan/Oak Ridge National Laboratory, U.S. Dept. of Energy
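
For readers unfamiliar with event-based vision, the short sketch below shows the general idea of accumulating per-pixel, time-stamped change events into a frame-like "motion image" such as the one in the video. It is a generic illustration with made-up event values and image size, not ORNL's code or the actual DVS data format.

# Generic sketch (not ORNL's code): turn a stream of DVS "events" into an image.
# Each event records a per-pixel change in light intensity with a microsecond
# timestamp, so a static scene produces no data at all.
import numpy as np

def accumulate_events(events, width=128, height=128, window_us=33_000):
    """Sum event polarities per pixel over a short time window to build a
    sparse 'motion image' like the one shown in the video."""
    frame = np.zeros((height, width), dtype=np.int32)
    if not events:
        return frame
    t0 = events[0][2]
    for x, y, t_us, polarity in events:   # polarity: +1 brighter, -1 darker
        if t_us - t0 > window_us:
            break
        frame[y, x] += polarity
    return frame

# A waving hand shows up as a band of nonzero pixels; the motionless
# background stays zero, which is why there is "no redundant data".
demo_events = [(10, 20, 0, +1), (11, 20, 150, -1), (12, 21, 300, +1)]
print(accumulate_events(demo_events).sum())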

Fusion--Tungsten rising

Using additive manufacturing, scientists experimenting with tungsten at Oak Ridge National Laboratory hope to unlock new potential of the high-performance heat-transferring material used to protect components from the plasma inside a fusion reactor. Fusion requires hydrogen isotopes to reach millions of degrees. Tungsten, the metal with the highest melting point, holds promise to withstand extreme temperatures at the edge of this reaction, yet it is brittle and difficult to machine. The ORNL team is using an additive manufacturing technique called electron beam melting to build innovative tungsten fusion components with complex, unique geometries that can't be made through traditional manufacturing. "The electron beam allows us to better control the heat distribution as tungsten is printed layer by layer," said ORNL's Betsy Ellis, who led the initial experiments. "After more testing, this method may result in a better quality, full-density structure that's less prone to cracking." [Contact: Sara Shoemaker, (865) 576-9219; shoemakerms@ornl.gov]

Video: https://www.youtube.com/watch?v=IqNVjedrpxw&feature=youtu.be

Caption: Using electron beam melting, ORNL researchers are building innovative tungsten fusion components with complex, unique geometries that can't be made through traditional manufacturing. The electron beam allows for better control of heat distribution as tungsten is 3D printed layer by layer. Credit: Betsy Ellis/Oak Ridge National Laboratory, U.S. Dept. of Energy

Climate--Energy demand shuffle

A detailed study by Oak Ridge National Laboratory estimated how much more--or less--energy United States residents might consume by 2050 relative to predicted shifts in seasonal weather patterns across the country. ORNL's Deeksha Rastogi and colleagues used a series of sophisticated climate models run on supercomputing resources at ORNL to estimate changes in household energy demand over a 40-year period. They found that prolonged periods of increased temperatures are expected to drive a rise in electricity needs while shorter, milder cold seasons could reduce natural gas demand. The team's paper also posits that climate shifts could impact a projected decrease in residential energy demand in some rural areas. "Our results provide a highly comprehensive look at residential energy consumption that we hope will influence future analysis to understand residents' choices and behavior relative to climate and socioeconomic conditions," Rastogi said. [Contact: Sara Shoemaker, (865) 576-9219; shoemakerms@ornl.gov]

Images: https://www.ornl.gov/sites/default/files/2019-07/Summer_CDD_Change_ORNL.gif
and https://www.ornl.gov/sites/default/files/2019-07/Winter_HDD_Change_ORNL.gif

Caption: A detailed study of climate models at Oak Ridge National Laboratory estimated how much more, or less, energy United States residents might consume over the summer and winter months relative to predicted shifts in seasonal weather patterns across the country. The study was projected through the year 2050. Credit: Deeksha Rastogi/Oak Ridge National Laboratory, U.S. Dept. of Energy
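
The image filenames above refer to cooling and heating degree days (CDD and HDD), standard summary measures that link outdoor temperature to cooling and heating demand. The snippet below is a generic illustration of how such degree days are computed; the 65-degree-Fahrenheit base is a common convention and is not a value taken from the ORNL study.

# Generic cooling/heating degree-day calculation (illustrative only; the
# 65 F base temperature is a common convention, not a value from the study).
def degree_days(daily_mean_temps_f, base_f=65.0):
    cdd = sum(max(0.0, t - base_f) for t in daily_mean_temps_f)  # drives cooling (electricity) demand
    hdd = sum(max(0.0, base_f - t) for t in daily_mean_temps_f)  # drives heating (natural gas) demand
    return cdd, hdd

# A longer, hotter summer raises CDD; a shorter, milder winter lowers HDD,
# matching the shifts in household energy demand described above.
print(degree_days([90, 88, 72, 50, 40]))  # -> (55.0, 40.0)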

Supercomputing--Galactic winds demystified

Using the Titan supercomputer at Oak Ridge National Laboratory, a team of astrophysicists created a set of galactic wind simulations of the highest resolution ever performed. The simulations will allow researchers to gather and interpret more accurate, detailed data that elucidates how galactic winds affect the formation and evolution of galaxies. Brant Robertson of the University of California, Santa Cruz, and Evan Schneider of Princeton University developed the simulation suite to better understand galactic winds--outflows of gas released by supernova explosions--which could help explain variations in their density and temperature distributions. The improved set of galactic wind simulations will be incorporated into larger cosmological simulations. "We now have a much clearer idea of how the high speed, high temperature gas produced by clusters of supernovae is ejected after mixing with the cooler, denser gas in the disk of the galaxy," Schneider said. --Elizabeth Rosenthal [Contact: Katie Bethea, (865) 576-8039; betheakl@ornl.gov]

Image: https://www.ornl.gov/sites/default/files/2019-07/Robertson%5B2%5D.png

Caption: A galactic wind simulation depicts the galactic disk composed of interstellar gas and stars, shown in red, and the outflows, shown in blue, captured using Titan and the Cholla astrophysics code. Credit: Evan Schneider/Princeton University and Brant Robertson/UC Santa Cruz.

Video: https://www.youtube.com/watch?v=z5lyxPotGkQ&feature=youtu.be

Caption: This video visualizes galactic winds driven by star clusters distributed throughout the galactic disk. Credit: Evan Schneider/Princeton University and Brant Robertson/UC Santa Cruz.

Clean water--Sunny desalination

A new method developed at Oak Ridge National Laboratory improves the energy efficiency of a desalination process known as solar-thermal evaporation. Scientists fashioned tubular devices using nanoengineered graphite foam coated with carbon nanoparticles and a superhydrophobic material that increased the absorption of light energy directly from the sun. The thermal energy was used to heat seawater, producing freshwater vapor that was fed into a condenser to make potable water. The system removed more than 99.5% of salt in simulated seawater and reached 64% solar-thermal conversion efficiency. "Using this improved solar-absorbing material for interfacial water evaporation represents a more efficient use of solar energy," said ORNL's Gyoung Jang. The experiment is detailed in the journal Global Challenges. The method can be scaled up into modular systems to provide clean water at remote locations or to serve communities affected by hurricanes or floods. [Contact: Stephanie Seay, (865) 576-9894; seaysg@ornl.gov]

Image: https://www.ornl.gov/sites/default/files/2019-07/hydrophopicDesal04.jpg

Caption: A new method developed at Oak Ridge National Laboratory improves the energy efficiency of a desalination process known as solar-thermal evaporation. Credit: Andy Sproles/Oak Ridge National Laboratory, U.S. Dept. of Energy

Credit: 
DOE/Oak Ridge National Laboratory

Drop of ancient seawater rewrites Earth's history

image: A diagrammatic representation of the Earth in the Archaean showing subducted ocean floor carrying its chemical signature into the deep mantle. The signature, which includes water and chlorine, is preserved in melt inclusions contained within olivine and carried back up to the surface within komatiite lava flows.

Image: 
Figure created by research team for news release. Wits University

The remains of a microscopic drop of ancient seawater have helped rewrite the history of Earth's evolution by re-establishing when plate tectonics started on the planet.

Plate tectonics is Earth's vital - and unique - continuous recycling process that directly or indirectly controls almost every function of the planet, including atmospheric conditions, mountain building (the forming of continents), natural hazards such as volcanoes and earthquakes, the formation of mineral deposits and the maintenance of our oceans. It is the process by which the planet's large continental plates continuously move and the top layers of the Earth (the crust) are recycled into the mantle and replaced by new layers through processes such as volcanic activity.

Plate tectonics was previously thought to have started about 2.7 billion years ago, but an international team of scientists has used the microscopic leftovers of a drop of water that was transported into the Earth's deep mantle - through plate tectonics - to show that the process started 600 million years earlier. An article on their research, which shows that plate tectonics started on Earth 3.3 billion years ago, was published in the journal Nature on 16 July.

"Plate tectonics constantly recycles the planet's matter, and without it the planet would look like Mars," says Professor Allan Wilson from the Wits School of Geosciences, who was part of the research team.

"Our research showing that plate tectonics started 3.3 billion years ago now coincides with the period that life started on Earth. It tells us where the planet came from and how it evolved."

Earth is the only planet in our solar system that is shaped by plate tectonics and without it the planet would be uninhabitable.

For their research, the team analysed pieces of rock melt, called komatiite - named after its type occurrence in the Komati River near Barberton in Mpumalanga - that are the leftovers from the hottest magma ever produced in the first quarter of Earth's existence (the Archaean). While most of the komatiites were obscured by later alteration and exposure to the atmosphere, small droplets of the molten rock were preserved in a mineral called olivine. This allowed the team to study a perfectly preserved piece of ancient lava.

"We examined a piece of melt that was 10 microns (0.01mm) in diameter, and analysed its chemical indicators such as H2O content, chlorine and deuterium/hydrogen ratio, and found that Earth's recycling process started about 600 million years earlier than originally thought," says Wilson. "We found that seawater was transported deep into the mantle and then re-emerged through volcanic plumes from the core-mantle boundary."

The research allows insight into the first stages of plate tectonics and the start of stable continental crust.

"What is exciting is that this discovery comes at the 50th anniversary of the discovery of komatiites in the Barberton Mountain Land by Wits Professors, the brothers Morris and Richard Viljoen," says Wilson.

Credit: 
University of the Witwatersrand

Is your supercomputer stumped? There may be a quantum solution

image: This is Chia Cheng 'Jason' Chang

Image: 
Marilyn Chung/Lawrence Berkeley National Laboratory

Some math problems are so complicated that they can bog down even the world's most powerful supercomputers. But a wild new frontier in computing that applies the rules of the quantum realm offers a different approach.

A new study led by a physicist at Lawrence Berkeley National Laboratory (Berkeley Lab), published in the journal Scientific Reports, details how a quantum computing technique called "quantum annealing" can be used to solve problems relevant to fundamental questions in nuclear physics about the subatomic building blocks of all matter. It could also help answer other vexing questions in science and industry.

Seeking a quantum solution to really big problems

"No quantum annealing algorithm exists for the problems that we are trying to solve," said Chia Cheng "Jason" Chang, a RIKEN iTHEMS fellow in Berkeley Lab's Nuclear Science Division and a research scientist at RIKEN, a scientific institute in Japan.

"The problems we are looking at are really, really big," said Chang, who led the international team behind the study. "The idea here is that the quantum annealer can evaluate a large number of variables at the same time and return the right solution in the end."

The same problem-solving algorithm that Chang devised for the latest study, and that is available to the public via open-source code, could potentially be adapted and scaled for use in systems engineering and operations research, for example, or in other industry applications.

Classical algebra with a quantum computer

"We are cooking up small 'toy' examples just to develop how an algorithm works. The simplicity of current quantum annealers is that the solution is classical - akin to doing algebra with a quantum computer. You can check and understand what you are doing with a quantum annealer in a straightforward manner, without the massive overhead of verifying the solution classically."

Chang's team used a commercial quantum annealer located in Burnaby, Canada, called the D-Wave 2000Q that features superconducting electronic elements chilled to extreme temperatures to carry out its calculations.

Access to the D-Wave annealer was provided via the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory (ORNL). "These methods will help us test the promise of quantum computers to solve problems in applied mathematics that are important to the U.S. Department of Energy's scientific computing mission," said Travis Humble, director of ORNL's Quantum Computing Institute.

Quantum data: A one, a zero, or both at the same time

There are currently two of these machines in operation that are available to the public. They work by applying a common rule of physics: systems tend to seek out their lowest-energy state. For example, in a landscape of steep hills and deep valleys, a person traversing the terrain would tend to end up in the deepest valley, as it takes the least energy to settle there and a lot of energy to climb out.

The annealer applies this rule to calculations. In a typical computer, memory is stored in a series of bits that hold either a one or a zero. But quantum computing introduces a new paradigm in calculations: quantum bits, or qubits. With qubits, information can exist as either a one, a zero, or both at the same time. This trait makes quantum computers better suited to solving some problems with a very large number of possible variables that must be considered for a solution.

Each of the qubits used in the latest study ultimately produces a result of either a one or a zero by applying the lowest-energy-state rule, and researchers tested the algorithm using up to 30 logical qubits.

The algorithm that Chang developed to run on the quantum annealer can solve polynomial equations - equations that combine numbers and variables and are set equal to zero. A variable can represent any number within a large range.
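
As a rough, purely classical illustration of that lowest-energy-state idea (this is not Chang's published algorithm and does not use the D-Wave software stack), the sketch below encodes a small unknown in three binary "qubits", defines an energy function that is zero exactly when a simple polynomial equation is satisfied, and finds the minimum-energy bit assignment by brute force. On the annealer, the same energy function would be expressed as couplings between qubits, and the hardware would relax into the low-energy configuration rather than enumerating it.

# Toy, classical stand-in for the lowest-energy-state idea (not Chang's
# open-source algorithm and not the D-Wave API). Encode an unknown integer x
# in three binary variables, define an "energy" that is zero exactly when the
# polynomial equation 3x - 15 = 0 holds, and search for the minimum.
from itertools import product

def energy(bits):
    b0, b1, b2 = bits
    x = 4 * b2 + 2 * b1 + b0          # binary encoding of x in [0, 7]
    return (3 * x - 15) ** 2          # penalty is zero only at the solution

best = min(product((0, 1), repeat=3), key=energy)
x = 4 * best[2] + 2 * best[1] + best[0]
print(best, x, energy(best))          # -> (1, 0, 1) 5 0, i.e. x = 5 solves 3x - 15 = 0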

When there are 'fewer but very dense calculations'

Berkeley Lab and neighboring UC Berkeley have become a hotbed for R&D in the emerging field of quantum information science, and last year announced the formation of a partnership called Berkeley Quantum to advance this field.

Chang said that the quantum annealing approach used in the study, also known as adiabatic quantum computing, "works well for fewer but very dense calculations," and that the technique appealed to him because the rules of quantum mechanics are familiar to him as a physicist.

The data output from the annealer was a series of solutions for the equations sorted into columns and rows. This data was then mapped into a representation of the annealer's qubits, Chang explained, and the bulk of the algorithm was designed to properly account for the strength of the interaction between the annealer's qubits. "We repeated the process thousands of times" to help validate the results, he said.

"Solving the system classically using this approach would take an exponentially long time to complete, but verifying the solution was very quick" with the annealer, he said, because it was solving a classical problem with a single solution. If the problem was quantum in nature, the solution would be expected to be different every time you measure it.

Real-world applications for a quantum algorithm

As quantum computers are equipped with more qubits that allow them to solve more complex problems more quickly, they can also potentially lead to energy savings by reducing the use of far larger supercomputers that could take far longer to solve the same problems.

The quantum approach brings within reach direct and verifiable solutions to problems involving "nonlinear" systems - in which the outcome of an equation does not match up proportionately to the input values. Nonlinear equations are problematic because they may appear more unpredictable or chaotic than other "linear" problems that are far more straightforward and solvable.

Chang sought the help of quantum-computing experts both in the U.S. and in Japan to develop the successfully tested algorithm. He said he is hopeful the algorithm will ultimately prove useful to calculations that can test how subatomic quarks behave and interact with other subatomic particles in the nuclei of atoms.

While it will be an exciting next step to work to apply the algorithm to solve nuclear physics problems, "This algorithm is much more general than just for nuclear science," Chang noted. "It would be exciting to find new ways to use these new computers."

Credit: 
DOE/Lawrence Berkeley National Laboratory

Ancient plankton help researchers predict near-future climate

The Mauna Loa Observatory in Hawai'i recently recorded the highest concentration of carbon dioxide, or CO2, in human history. In fact, the last time CO2 levels surpassed 400 parts per million was during the Pliocene, a geological epoch between two and five million years ago, when oceans surged 50 feet higher and small icecaps barely clung to the poles.

"The Pliocene wasn't a world that humans and our ancestors were a part of," said University of Arizona associate professor of geosciences Jessica Tierney. "We just started to evolve at the end of it."

Now that we've reached 415 parts per million CO2, Tierney thinks the Pliocene can be used to understand the climate shifts of the very near future. Past studies attempted this, but a nagging discrepancy between climate models and fossil data from that part of Earth's history muddled any potential insights.

A new study, published today in Geophysical Research Letters, used a different, more reliable type of fossil data than past studies. Tierney is the lead on the study, which resolved the discrepancy between fossil data and climate model simulations and showed that, yes, the Pliocene is a good analog for future climate predictions.

The Pliocene Problem

Before the industrial revolution, CO2 levels hovered around 280 ppm. For perspective, it took over two million years for CO2 levels to naturally decline from 400 ppm to pre-industrial levels. In just over 150 years, humanity has caused those levels to rebound.

Past proxy measurements of Pliocene sea surface temperatures led scientists to conclude that a warmer Earth caused the tropical Pacific Ocean to be stuck in an equatorial weather pattern called El Niño.

Normally, as the trade winds sweep across the surface of the Pacific Ocean from east to west, warm water piles up in the western Pacific, leaving the eastern side of the ocean about 7 to 9 degrees Fahrenheit cooler. But during an El Niño, the temperature difference between the east and west drops to just under 2 degrees, influencing weather patterns around the world, including in Southern Arizona. El Niños typically occur about every three to seven years, Tierney said.

The problem is, climate models of the Pliocene, which included CO2 levels of 400 ppm, couldn't seem to simulate a permanent El Niño without making funky, unrealistic changes to model conditions.

"This paper was designed to revisit that concept of the permanent El Niño and see if it really holds up against a reanalysis of the data," she said. "We find it doesn't hold up."

Fat Thermometers

About 20 years ago, scientists found they could deduce past temperatures based on chemical analysis of a specific kind of fossilized shell of a type of plankton called foraminifera.

"We don't have thermometers that can go to the Pliocene, so we have to use proxy data instead," Tierney said.

Since then, scientists have learned that foraminifera measurements can be skewed by ocean chemistry, so Tierney and her team instead used a different proxy measurement - the fat produced by another plankton called coccolithophores. When the environment is warm, coccolithophores produce a slightly different kind of fat than when it's cold, and paleoclimatologists like Tierney can read the changes in the fat, preserved in ocean sediments, to deduce sea-surface temperatures.

"This is a really commonly used and reliable way to look at past temperatures, so a lot of people have made these measurements in the Pliocene. We have data from all over the world," she said. "Now we use this fat thermometer that we know doesn't have complications, and we're sure we can get a cleaner result."

El Niñ...Nope

Tierney and her team found that the temperature difference between the eastern and western sides of the Pacific did decrease, but it was not pronounced enough to qualify as a full-fledged permanent El Niño.

"We didn't have a permanent El Niño, so that was a bit of an extreme interpretation of what happened," she said. "But there is a reduction in the east-west difference - that's still true."

The eastern Pacific warmed relative to the western side, which caused the trade winds to slacken and changed precipitation patterns. Dry places like Peru and Arizona might have been wetter. These Pliocene results agree with what climate models predict for the future as CO2 levels reach 400 ppm.

This is promising because now the proxy data matches the Pliocene climate models. "It all checks out," Tierney said.

The Pliocene, however, was during a time in Earth's history when the climate was slowly cooling. Today, the climate is getting hotter very quickly. Can we really expect a similar climate?

"The reason today sea levels and ice sheets don't quite match the climate of the Pliocene is because it takes time for ice sheets to melt," Tierney said. "However, the changes in the atmosphere that happen in response to CO2 - like the changes in the trade winds and rainfall patterns - can definitely occur within the span of a human life."

Credit: 
University of Arizona

Despite treatment, elderly cancer patients have worse outcomes if HIV-positive

TAMPA, Fla. (August 1, 2019) - Elderly cancer patients who are HIV-positive, particularly those with prostate and breast cancers, have worse outcomes compared to cancer patients in the same age range who do not have HIV. A Moffitt Cancer Center researcher, in collaboration with investigators at the National Cancer Institute, Duke University, and Johns Hopkins Bloomberg School of Public Health, took a closer look at the disparity, factoring in whether or not cancer treatment had an impact on outcomes among this patient population. Their findings were published today in JAMA Oncology.

"Previous studies have shown that HIV-infected cancer patients are more likely to die from their cancer than HIV-uninfected cancer patients. However, those studies have not been able to take into account detailed information on the treatments patients may have received, , including the exact type or timing of treatment," said Anna E. Coghill, Ph.D, M.P.H., assistant member of the Cancer Epidemiology Department at Moffitt.

Using the Surveillance, Epidemiology, and End Results Medicare-linked data, the researchers evaluated 288 HIV-infected and 307,980 HIV-uninfected patients, ages 65 years or older, who were diagnosed with non-advanced colorectal, lung, prostate or breast cancer and received stage-appropriate cancer treatment during the year after their cancer diagnosis.

Results showed that cancer-specific mortality was higher in HIV-infected cancer patients compared with their HIV-uninfected counterparts in breast and prostate cancers. Furthermore, they found that HIV-infected women were nearly twice as likely to experience disease relapse or death after successfully completing initial cancer therapy.

"As the HIV population continues to age, the association of HIV infection with poor breast and prostate cancer outcomes will become more important, especially because prostate cancer is projected to become the most common malignancy in the HIV population by 2020," said Coghill. "It is why we are stressing the need for more research on clinical strategies to improve outcomes for HIV-infected cancer patients."

Credit: 
H. Lee Moffitt Cancer Center & Research Institute

'Green' taxes

It is believed that carbon dioxide emissions into the atmosphere are mainly regulated by 'direct' economic instruments - the carbon tax and the Emissions Trading System (ETS). However, a comparative analysis has shown that 'indirect' instruments, such as excise taxes on motor fuel and other energy taxes, have had no less impact than their 'direct' counterparts and, over time, have been even more effective. This is the conclusion drawn by Ilya Stepanov, researcher at the Higher School of Economics, in his article, 'Taxes in the Energy Sector and Their Role in Reducing Greenhouse Gas Emissions'.

Context

In 2015, 197 countries signed (and the majority have already ratified) the Paris Agreement, thereby enacting the world community's pledge to move towards low-carbon development. This transition marks a major change for the energy sector, which is responsible for two thirds of the world's greenhouse gas emissions. The efficiency of energy production is gradually increasing, and conditions of inter-fuel competition are changing in favor of low-carbon energy sources.

A widely held belief in the scientific literature is that the price of carbon plays a major role in climate policy. It is determined either by an appropriate tax, or through the Emissions Trading System (ETS). Both methods are considered 'direct' economic instruments of climate policy, but they appeared relatively recently.

Initially, the use of fossil fuels was regulated exclusively by energy taxes. The first duties on gasoline appeared in Denmark and Sweden in 1917 and 1924, respectively. Since 1957, fiscal policy has been extended to other types of hydrocarbons, including petroleum products and coal. The main purpose of energy taxes was to regulate the import of energy resources and to ensure stable revenue for the state budget. Environmental motives began to appear only in the 1980s. European countries are the world's leaders in 'green' taxation. At first, the emphasis of fiscal regulation was on combating local air pollution, and later on global climate change. Carbon regulations were first implemented in the 1990s and became widespread only in the last decade.

The first carbon tax was introduced in Finland in 1991. Today, the tax is collected in 16 European countries.

The world's first emissions trading system was launched in 2005. Initially it covered 24 European countries; now it includes 31.

Indirect vs Direct Instruments

The fundamental difference between energy taxes and a carbon tax or the ETS is that energy taxes only indirectly contribute to reducing greenhouse gas emissions. While the 'direct' rate is calculated per unit of emissions, the 'indirect' rate is set in proportion to the amount of energy used, not to the carbon it contains.

'Indirect' taxes are applied much more widely than 'direct' taxes, covering more sectors of the economy and sources of emissions. If energy tax rates were to be changed, the impact on reducing greenhouse gas emissions might be greater than that of 'direct' regulatory measures.

In the EU, most of the energy taxation comes from the transport industry. Taxes and excise taxes on motor fuels (diesel, gasoline, kerosene, fuel oil, etc.) form the bulk of tax revenue from the 'indirect' regulation of greenhouse gas emissions.

Taxes on energy products have a fundamental impact on the development of the industry in the EU. 'The share of excise taxes on gasoline and diesel in the final cost of production amounts to more than 30% on average, and for some European countries it is above 50%,' the author writes.

For most European countries, 'indirect' measures generate more tax revenue than 'direct' ones - almost 1.5 times more in Norway, almost twice as much in Sweden, and more than five times as much in Denmark. This is because 'direct' regulatory instruments still have relatively limited coverage: on average, in European countries, the carbon tax covers no more than 25% of carbon dioxide emissions, and the ETS covers only 45%.

The Study

To understand which fiscal instrument is more effective in reducing greenhouse gas emissions, Ilya Stepanov analyzed panel data for 30 European countries from 1995 to 2016.

The researcher first calculated the so-called 'explicit' carbon price for each country. It consists of a carbon tax, the ETS and a collection of other energy charges. He found that the greatest tax burden on carbon dioxide emissions from burning fossil fuels is in Sweden, Finland, Denmark and Malta. In these countries, the 'explicit' carbon price reaches 96 to 117 euros per ton of CO2. The lowest fiscal burden was found in Poland (37 euros per ton), Bulgaria (27 euros per ton) and Hungary (4 euros per ton).

'Countries with a low fiscal burden on greenhouse gas emissions have GDPs with a high carbon intensity, while countries with a high 'explicit' carbon price have the opposite,' the author observes.

Stepanov analyzed how changes in different components of the 'explicit' carbon price affected carbon dioxide emissions. The results showed that both 'direct' and 'indirect' price signals negatively affect the carbon intensity of a country's GDP. Thus, an increase in 'direct' fiscal instruments by 1% during 1995-2016, on average, led to a reduction in the carbon intensity of a country's GDP by 2.3%. For comparison, an increase of 1% in 'indirect' price signals resulted in a 4% reduction in the carbon intensity of GDP.
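
To make those average estimates concrete, the short calculation below applies them to hypothetical 1% and 5% increases in each price signal. It is a back-of-the-envelope illustration of the reported elasticities, not a reproduction of the study's panel regression, and the linear extrapolation beyond small changes is only approximate.

# Back-of-the-envelope use of the reported average elasticities (illustrative
# only; not the study's panel regression, and linear extrapolation is crude).
DIRECT_ELASTICITY = -2.3    # % change in carbon intensity per 1% 'direct' price increase
INDIRECT_ELASTICITY = -4.0  # % change per 1% 'indirect' (energy tax) price increase

def intensity_change(price_increase_pct, elasticity):
    return price_increase_pct * elasticity

for pct in (1, 5):
    print(pct, intensity_change(pct, DIRECT_ELASTICITY), intensity_change(pct, INDIRECT_ELASTICITY))
# -> 1 -2.3 -4.0  and  5 -11.5 -20.0 (percent changes in carbon intensity of GDP)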

However, when comparing the results of an analysis of the periods of 1995 to 2016 and 2005 to 2016, the difference in the impact of 'indirect' and 'direct' fiscal instruments becomes less significant. The researcher connects this with the introduction of the European Emissions Trading System in 2005 and the spread of the carbon tax to a greater number of countries.

Credit: 
National Research University Higher School of Economics

Old cells, new tricks -- important clue to AML diagnosis and cure discovered

Around 22,000 people will be diagnosed this year in the US with acute myeloid leukemia (AML), the second most common type of leukemia diagnosed in adults and children, and the most aggressive of the leukemias. Less than one third of AML patients survive five years beyond diagnosis.

Researchers from Australia's Monash University have discovered a key reason why this disease is so difficult to treat and therefore cure.

The study, led by Associate Professor Ross Dickins from the Australian Centre for Blood Diseases, is published today in the prestigious journal Cell Stem Cell. The paper identifies an important new concept relevant to clinicians involved in the diagnosis and treatment of AML patients.

Acute myeloid leukemia is characterised by an overproduction of immature white blood cells that fail to mature properly. These leukaemia cells crowd the bone marrow, preventing it from making normal blood cells. In turn this causes anaemia, infections, and if untreated, death.

Acute myeloid leukemia remains a significant health problem, with poor outcomes despite chemotherapy and stem cell transplantation.

For decades it has been thought that AML growth is driven by a sub-population of immature cancer cells called 'leukaemia stem cells', which lose their cancerous properties when they mature. Hence there has been growing international interest in developing therapies aimed at forcing immature cancer cells to 'grow up'.

Using genetically engineered mouse models and human AML cells, Associate Professor Dickins and his Monash-led team have found that maturation of AML cells is not unidirectional as originally thought, but can instead be reversible.

The team, which also includes researchers from the Walter and Eliza Hall Institute of Medical Research and several international collaborators, found that even mature AML cells can 'turn back the clock' to become immature again.

This plasticity means that even mature AML cells can make a major contribution to future leukaemia progression and therapy resistance.

The discovery has significant implications for the way that AML is treated, according to Associate Professor Dickins. "The AML field has traditionally accepted a model where leukemia maturation is a one-way street," he said.

"By demonstrating reversible leukaemia maturation, our study raises doubts around therapeutic strategies that specifically target just leukaemia stem cells. It highlights the need to eradicate all tumour cells irrespective of maturation state."

These new findings are important because several new drugs that force leukaemia cells to mature have now entered the clinic. The study by Dickins and colleagues will help clinicians re-shape their thinking to find ways to eradicate mature leukaemia cells along with their immature counterparts.

Credit: 
Monash University

Study examines direct-to-consumer stem cell clinics in 6 Southwestern states

The direct-to-consumer stem cell marketplace has come under increasing scrutiny, but relatively little is known about the clinics that deliver these treatments or how the treatments they offer align with the expertise of the practitioners providing them. In a paper published August 1 in the journal Stem Cell Reports, investigators offer a detailed characterization of nearly 170 stem cell businesses across six southwestern states. The study focused on Arizona, California, Colorado, Nevada, New Mexico, and Utah, where the researchers estimate that about one-third of all stem cell clinics in the US are located.

"Previous studies have built up a broad picture of the direct-to-consumer stem cell industry," says Emma Frow, an assistant professor in the School for the Future of Innovation in Society and the School of Biological and Health Systems Engineering at Arizona State University, co-first author on the paper along with David Brafman, also an assistant professor of bioengineering at Arizona State University.

"We took a deeper dive into a smaller number of clinics and found that there's a lot of variation among the businesses offering these services," she says. "About 25% focus exclusively on stem cells, but many others are facilities like orthopedic and sports medicine clinics that have added stem cells to their roster of services on offer. For these clinics, it's very difficult to know how much of their business comes from stem cell treatments."

The researchers conducted extensive online searches for stem cell clinics in the six states. "There's no exhaustive list of all the clinics that exist," Frow says. "This is a lively marketplace, with businesses opening and closing and changing their names." For the 169 businesses they identified, they catalogued the treatments being offered, the medical conditions these clinics purported to treat, and the types of cells they claimed to use. For the 25% of clinics focused solely on stem cells, they also looked at the stated expertise of the care providers at these clinics in relation to the medical conditions they offer to treat with stem cells.

The researchers found that orthopedic, inflammatory, and pain conditions were the main types of medical conditions treated with stem cells at direct-to-consumer stem cell clinics in the Southwest. Frow notes that these types of conditions "tend to be chronic problems that often are not curable. The market has really capitalized on targeting conditions that are hard to manage with existing therapies."

Earlier studies have shown a lower percentage of clinics treating inflammatory conditions. "This could mean that the number of clinics treating inflammatory conditions is on the rise or that, in the Southwest, there is more focus on treating inflammatory conditions than in other parts of the US," Frow suggests.

The researchers also found differences in the degree to which the listed expertise of care providers at stem cell clinics matched the medical conditions they treat with stem cells. For example, they identified that specialists in orthopedics and sports medicine were more likely to restrict stem cell treatments to conditions related to their medical specialties, while care providers listing specialties in cosmetic or alternative medicine were more likely to treat a much wider range of conditions with stem cells.

Public discussions of direct-to-consumer stem cell treatments usually treat clinics as though their business models were all similar, but this study highlights some key differences across these clinics. "We think it makes a difference whether a business is focused solely on stem cells or offers it as one treatment among many," Frow says. "And we think it's important to pay attention to the medical qualifications and expertise of the care providers offering stem cell treatments. Just because someone is board certified doesn't necessarily mean they are qualified to provide stem cell treatments. You really need to ask what they are board certified in and whether their medical expertise is well-matched to the condition you are seeking treatment for."

Recent moves by the FDA to tighten up its guidelines and restrict the practices of these clinics have generated a lot of attention. The authors of this study see their work as contributing to these discussions. "We want to bring more transparency to discussions of the direct-to-consumer stem cell marketplace and to empower consumers to figure out what kinds of questions to ask when they're considering treatment," Frow says. "We also want to help the scientific community get a better understanding of the situation and to help the FDA and state medical boards think through their priorities with regards to regulating the market."

Credit: 
Cell Press

Turtle embryos play a role in determining their own sex

image: This image shows a turtle embryo.

Image: 
Ye et al. / Current Biology

In certain turtle species, the temperature of the egg determines whether the offspring is female or male. But now, new research shows that the embryos have some say in their own sexual destiny: they can move around inside the egg to find different temperatures. The study, publishing August 1 in the journal Current Biology, examines how this behavior may help turtles offset the effects of climate change.

"We previously demonstrated that reptile embryos could move around within their egg for thermoregulation, so we were curious about whether this could affect their sex determination," says corresponding author Wei-Guo Du, professor at the Chinese Academy of Sciences. "We wanted to know if and how this behavior could help buffer the impact of global warming on offspring sex ratios in these species."

Du and his colleagues incubated turtle eggs under a range of temperatures both in the laboratory and in outdoor ponds. They found that a single embryo could experience a temperature gradient of up to 4.7°C within its egg. This is significant because any shift larger than 2°C can massively change the offspring sex ratio of many turtle species, Du said.

In half of the eggs, they applied capsazepine, a chemical that blocked temperature sensors, to prevent behavioral thermoregulation. After the eggs hatched, the researchers found that the embryos without behavioral thermoregulation had developed as either almost all males or almost all females, depending on the incubation temperatures. In contrast, embryos that were able to react to nest temperatures moved around inside their eggs; about half of them developed as males and the other half as females.

"The most exciting thing is that a tiny embryo can influence its own sex by moving within the egg," Du says.

By moving around the egg to find what Richard Shine, a professor at Macquarie University of Australia and one of the co-authors, calls the "Goldilocks Zone"--where the temperature is not too hot and not too cold--the turtles can shield against extreme thermal conditions imposed by changing temperatures and produce a relatively balanced sex ratio. "This could explain how reptile species with temperature-dependent sex determination have managed to survive previous periods in Earth history when temperatures were far hotter than at present," he says.

But this behavior has limitations, Du says, depending on the conditions of the egg and the embryo itself. "Embryonic thermoregulation can be limited if the thermal gradient within an egg is too small, or if the embryo is too large to move around or too young to have developed these abilities yet," he says.

Additionally, the behavior cannot buffer the impact of episodes of extremely high temperatures, which are predicted to increase with climate change, Du says.

"The embryo's control over its own sex may not be enough to protect it from the much more rapid climate change currently being caused by human activities, which is predicted to cause severe female-biased populations," he says. "However, the discovery of this surprising level of control in such a tiny organism suggests that in at least some cases, evolution has conferred an ability to deal with such challenges."

Du says that this study indicates that these species may have some ways not yet discovered to buffer this risk. "Our future studies will explore the adaptive significance of embryonic thermoregulation as well as the other behavioral and physiological strategies adopted by embryos and mothers to buffer the impact of climate warming on turtles."

Credit: 
Cell Press

Repairing harmful effects of inbreeding could save the iconic Helmeted Honeyeater

Habitat destruction results in wildlife populations that are small, made up of relatives, and have low genetic variation.

Breeding between relatives (inbreeding) has harmful effects, called 'inbreeding depression', often experienced as a shorter life, poorer breeding success, or even death.

Not surprisingly then, most animals avoid breeding with their relatives. But when populations become too small, it becomes impossible to find a mate who is not some kind of relation.

Research published today in Current Biology by a collaborative research team led by Monash University reveals just how much damage is done by inbreeding in the critically endangered Helmeted Honeyeater.

Professor Paul Sunnucks from Monash University's School of Biological Sciences, who led the study, said the findings have wide-ranging implications for wildlife management.

"Our study combines over 30 years of demanding fieldwork and advanced genetics to quantify how much harm is done by inbreeding in the last wild population of the Helmeted Honeyeater, and identifies ways forward," Professor Sunnucks said.

The Monash-led study involved collaboration with Zoos Victoria, the Victorian Department of Environment, Land, Water and Planning (DELWP), and other conservation partners, with funding from the Australian Research Council. The Helmeted Honeyeater, named for its 'helmet' of head feathers, is a much-loved State emblem found only in a small region of the State of Victoria.

Since European settlement of Australia, a staggering 99% of the floodplain forest essential for Helmeted Honeyeaters has been converted to agricultural land and towns. Consequently, only 50 wild Helmeted Honeyeaters remained by 1989. Thanks to conservation actions including captive breeding at Healesville Sanctuary and habitat restoration, there are now about 230 free-living Helmeted Honeyeaters, living precariously in a single location, Yellingbo Nature Conservation Reserve.

The Helmeted Honeyeater would now very likely be extinct if not for those 30 years of conservation actions involving DELWP, Zoos Victoria, Parks Victoria, Melbourne Water, and hundreds of passionate volunteers centred on the Friends of the Helmeted Honeyeater.

"Most Helmeted Honeyeaters over that time have been given coloured leg-bands so that their success in life and love can be followed," said DELWP Senior Ornithologist Bruce Quin, who led the monitoring.

The result is a detailed account of how long each of the birds lived and how many offspring they had in their lifetimes. Combining this information on breeding success with advanced genetic analysis, the research team could quantify the profound damage caused to Helmeted Honeyeaters by inbreeding: the most inbred birds produced only one-tenth as many young as the least inbred.

"Clearly, inbreeding depression is likely to impact the population's chances of survival," said the paper's first author Dr Katherine Harrisson, a Monash PhD graduate now at La Trobe University, and the Arthur Rylah Institute (DELWP).

While inbreeding depression is a big problem, it can be reduced by bringing in 'new blood' from a closely-related population. Such 'gene pool mixing' is an emerging approach to help threatened species. But the wild population of Helmeted Honeyeater is the last of its kind, so where can new genes come from?

Helmeted Honeyeaters are the most distinctive subspecies of the widespread Yellow-tufted honeyeater. In careful trials of gene pool mixing, Zoos Victoria has cross-bred Helmeted Honeyeaters with members of the most similar other subspecies. "Mixing the two subspecies in captivity is going very well, with no signs of genetic or other problems," said Dr Michael Magrath, a Senior Research Manager from Zoos Victoria. "We have plans to release the first out-crossed birds into the wild population at Yellingbo soon," he said.

Professor Sunnucks said that all being well, gene pool mixing could help overcome the burden of inbreeding depression and bolster an enduring recovery of the Helmeted Honeyeater.

Credit: 
Monash University

UC researchers unlock cancer cells' feeding mechanism, central to tumor growth

image: Atsuo Sasaki, Ph.D., associate professor at the UC College of Medicine

Image: 
Colleen Kelley / UC Creative Services

CINCINNATI--An international team led by researchers from the University of Cincinnati and Japan's Keio and Hiroshima universities has discovered the energy production mechanism of cancerous cells that drives the growth of the nucleolus and causes tumors to rapidly multiply.

The findings, published Aug. 1 in the journal Nature Cell Biology, could lead to the development of new cancer treatments that would stop tumor growth by cutting the energy supply to the nucleolus.

"The nucleolus is the 'eye' of the cancer storm that ravages patients' bodies. Being able to control the eye would be a true game-changer in cancer treatment," said Atsuo Sasaki, PhD, associate professor at the UC College of Medicine and one of the research team's lead investigators.

The nucleolus, located near the center of the nucleus, produces ribosomes. The discovery that cancerous cells have enlarged nucleoli occurred over 100 years ago, and studies have since shown that nucleolus enlargement results in significant ribosome increases, propelling protein synthesis to mass produce cancer cells, according to Sasaki. But, exactly how the nucleolus produces a massive amount of ribosome in cancerous cells has largely remained a mystery, he says.

"Nucleolus enlargement is a telltale sign of cancer, and its size has long been used as a yardstick to determine how advanced cancer is in patients," says Sasaki. "Now, our research team knows that the nucleolus quickly expands by devouring Guanosine Triphosphate, or GTP, a nucleotide and one of the building blocks needed for to create RNA, which is prevalent in cancerous cells."

"We were surprised to find out that among all types of energy that could be used for cell growth, it's GTP that spikes and plays the most crucial role in ribosome increases that are associated with nucleolus enlargement in cancer cells. We knew right away that this was a substantial discovery that would require a sweeping range of expertise to understand what it truly meant," Sasaki adds.

Sasaki says researchers saw an elevated level of inosine monophosphate dehydrogenase (IMPDH) in cancer cells, which accelerates GTP production and in turn fuels nucleolus growth. This is a major step toward solving the mystery surrounding nucleolus growth in cancer cells, he says.

To conduct the research, the multidisciplinary team zeroed in on the energy production pathways of malignant brain tumors, particularly glioblastoma, the deadliest type of brain cancer, in animal models, followed by cohort studies of human specimens. The results showed a significant increase of GTP, a form of cellular energy, in glioblastoma. A deeper look at brain tumor cells determined that the significantly elevated level of IMPDH in cancer cells accelerates GTP production.

The discovery of this close relationship between IMPDH and the nucleolus prompted Sasaki's team to develop a new method of metabolic analysis. The method enabled the researchers to obtain critical data demonstrating that GTP produced by IMPDH activity is used for the nucleolus's ribosome synthesis, and it led to the discovery of a clear correlation between IMPDH inhibition and the suppression of glioblastoma cell growth, which prolonged the lives of animal models.

"Thanks to our multinational cross-disciplinary collaboration and the team's hard work, we were able to unlock the mechanism through which cancerous cells hijack GTP metabolism to take control over the nucleoli. We are excited to continue our research on GTP for the development of therapies to annihilate the 'eye of cancer' in patients," Sasaki says.

Credit: 
University of Cincinnati

Largest ever study finds links in epilepsy genes

Researchers and patients from Austin Health and the University of Melbourne have been involved in the largest ever study looking at the genetic sequences of people with epilepsy.

The international research, published this week in the American Journal of Human Genetics, involved almost 18,000 people worldwide and identified rare genetic variations that are associated with a higher risk of epilepsy.

Professor Sam Berkovic, Director of Epilepsy with Austin Health and Laureate Professor with the University of Melbourne, said the study found there were genetic links shared by both severe forms of epilepsy and less severe forms of the disease.

"There are approximately 50 million people across the world with epilepsy, a condition that causes repeated seizures due to excessive electrical activity in the brain," Professor Berkovic said.

"Epilepsy comes in a number of different forms ranging from less common variations such as developmental and epileptic encephalopathies that cause severe symptoms, to other, less severe forms such as genetic generalised epilepsy and non-acquired focal epilepsy that account for up to 40 per cent of cases.

"This research is important because the more we understand the genes that are linked to epilepsy, the better we can tailor treatments to reduce the symptoms and let patients live more active lives."

The study brought together more than 200 researchers from across the world to better understand the genetics of the disease.

Researchers used sequencing to look at the genes of 17,606 people from across 37 sites in Europe, North America, Australasia and Asia and found rare genetic variations that are associated with both severe and less severe forms of epilepsy.

The coordination of the clinical data occurred in Melbourne with the gene sequencing performed at the Broad Institute, Boston, led by Dr Benjamin Neale.

1370 patients from Austin Health and the University of Melbourne were part of the study, which was five times larger than any previous research looking at the gene sequencing of epilepsy patients.

"Genetic sequencing has significantly improved our understanding of the risk factors association with epilepsy in recent years," Professor Berkovic said.

"This study shows that more and less severe forms of the disease share similar genetic features, and the more we understand these features the better chance we have to personalise the care we give to patients.

"There are already plans in place to double the size of the study in the next year to further explore the significance of the genetic variations that are linked with epilepsy."

Credit: 
University of Melbourne

Super-resolution microscopy sheds light on how dementia protein becomes dysfunctional

image: The signalling protein Fyn moving and forming clusters in living brain cells - viewed using super-resolution microscopy.

Image: 
Meunier Lab, University of Queensland

University of Queensland researchers have used super-resolution microscopy to observe key molecules at work inside living brain cells, further unravelling the puzzle of memory formation and the elusive causes of dementia.

Professors Frédéric Meunier and Jürgen Götz, of the Clem Jones Centre for Ageing and Dementia Research at UQ's Queensland Brain Institute, found that Tau, a protein involved in Alzheimer's disease, affects the organisation of the signalling protein Fyn, which plays a critical role in memory formation.

"One of the distinguishing features of Alzheimer's disease is the tangles of Tau protein that form inside brain cells, but this is the first time anyone has demonstrated that Fyn nanoclustering is affected by Tau," Professor Götz said.

Professor Meunier said single molecule imaging in living brain cells allowed unprecedented access to the organisation of key proteins in small nanoclusters that were not detectable previously.

"We have shown that Tau controls the Fyn nanoclustering in dendrites, where the communication between brain cells occurs," Professor Meunier said.

"When Tau is mutated, Fyn makes aberrantly large clusters, thereby altering nerve signals and contributing to dysfunction of the synapse-junctions between nerve cells."

Professor Meunier's team used the super-resolution single molecule imaging technique to see how Tau and its mutants control Fyn nanoclustering.

Professor Meunier went on to investigate a different mutant of Tau found in families with a very high risk of developing frontotemporal dementia and found that Fyn was over-clustered in the spines of dendrites.

"Imagine that you have clustering of Fyn, a signalling molecule, throughout your life; it's going to give rise to an over-signalling problem -- this could be one of the ways in which Fyn is toxic to cells," he said.

"The spines of the dendrites are critical to how nerve cells communicate with each other and underpin memory and learning."

Exactly what causes Alzheimer's and other forms of dementia is still a mystery, but Fyn is linked to both the plaques of amyloid protein that form between brain cells, and tangles of Tau protein that form inside brain cells -- two distinguishing features of Alzheimer's disease.

"Super-resolution single molecule imaging gives us an unprecedented insights into what is happening in living nerve cells, with the aim of understanding the biology behind these complex and debilitating diseases," Professor Meunier said.

Credit: 
University of Queensland

Drug combo heralds major shift in chronic lymphocytic leukemia treatment

A combination of two drugs keeps patients with chronic lymphocytic leukemia (CLL) disease-free and alive longer than the current standard of care, according to a phase-3 clinical trial of more than 500 participants conducted at Stanford Medicine and multiple other institutions.

The results of the trial are likely to change how most people with the common blood cancer are treated in the future, the researchers believe.

"I saw a marked improvement in my symptoms within two weeks of starting treatment, with little or no side effects," said trial participant Dan Rosenbaum, 57. "It's so unbelievable it is almost hard to talk about."

"These results will fully usher the treatment of chronic lymphocytic leukemia into a new era," said Tait Shanafelt, MD, professor of medicine at Stanford. "We've found that this combination of targeted treatments is both more effective and less toxic than the previous standard of care for these patients. It seems likely that, in the future, most patients will be able to forego chemotherapy altogether."

Shanafelt, who is the Jeanie and Stewart Ritchie Professor, is the lead author of the study, which will be published Aug. 1 in The New England Journal of Medicine. The senior author is Martin Tallman, MD, chief of the leukemia service at Memorial Sloan Kettering Cancer Center.

Currently, CLL patients who are fit enough to tolerate aggressive treatment are treated intravenously with a combination of three drugs: fludarabine and cyclophosphamide, which kill both healthy and diseased cells by interfering with DNA replication, and rituximab, which specifically targets the B cells that run amok in the disease. But fludarabine and cyclophosphamide can cause significant side effects, including severe blood complications and life-threatening infections, that are difficult for many patients to tolerate.

Rituximab plus ibrutinib

The new drug combination pairs rituximab with another drug, ibrutinib, which also specifically targets B cells.

In the trial, 529 participants with newly diagnosed chronic lymphocytic leukemia were randomly assigned in a 2:1 ratio to receive either six courses of ibrutinib and rituximab, followed by ibrutinib until their disease progressed, or six courses of traditional chemotherapy consisting of the drugs fludarabine, cyclophosphamide and rituximab.
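To picture the 2:1 assignment described above, here is a minimal simulation sketch in Python. It is purely illustrative: the arm labels and the permuted-block approach are our own assumptions, not the trial's published randomization procedure. It simply shows how 529 participants end up split roughly two-to-one between the arms.

    import random

    # Hypothetical arm labels; two experimental slots and one control slot per block.
    ARMS = ["ibrutinib + rituximab", "ibrutinib + rituximab", "FCR chemotherapy"]

    def block_randomize(n_participants, seed=42):
        """Assign participants in shuffled blocks of three, giving a
        2:1 split between experimental and control arms overall."""
        rng = random.Random(seed)
        assignments = []
        while len(assignments) < n_participants:
            block = ARMS[:]      # copy: two experimental, one control
            rng.shuffle(block)   # randomize order within the block
            assignments.extend(block)
        return assignments[:n_participants]

    arms = block_randomize(529)
    print(arms.count("ibrutinib + rituximab"), arms.count("FCR chemotherapy"))
    # Prints roughly 353 and 176, close to the 2:1 ratio described above.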

The researchers followed each of the participants, who were recruited at one of more than 180 study sites across the country, for nearly three years and logged the length of both their "progression-free survival," or the period during which their disease did not progress, and their overall survival.

Rosenbaum, a partner in a global strategy consulting firm and avid tennis player, was one of the participants randomly assigned to receive the experimental treatment. He noticed a difference in his symptoms almost immediately.

"I hadn't realized how fatigued I had become," he said of the weeks preceding his treatment. "I could barely play a single set of tennis, and I would be wiped out for days afterward. My lymph nodes were so swollen it was impossible to button the top button of my shirt collar. But within the first week of starting treatment, I noticed I had a little more spring in my step. After 10 days, there was a marked improvement in the size of my lymph glands. And after six weeks, my tumors were no longer detectable by physical exam."

The researchers found that 89.4% of those participants who received the experimental drug combination had still not had leukemia progression about three years later versus 72.9% of those who received the traditional chemotherapy combination.

Difference in overall survival rate

They also saw a statistically significant difference in overall survival between the two groups; 98.8% of the people randomly assigned to receive the new drug combination were alive after three years versus 91.5% of those who had received the traditional treatment.

Although the incidence of serious treatment-related adverse events was similar between the two groups, infectious complications occurred more frequently in the group receiving the traditional treatment.

"I have two children, and I thought carefully about participating in a clinical trial," Rosenbaum said. "But when I learned that the traditional treatment carries a small but not insignificant mortality risk due to secondary infections, the decision became more clear. I've experienced minimal side effects from the combination of ibrutinib and rituximab that have been very manageable. It's been a life-changing experience."

"This is one of those situations we don't often have in oncology," Shanafelt said. "The new treatment is both more effective and better tolerated. This represents a paradigm shift in how these patients should be treated. We can now relegate chemotherapy to a fallback plan rather than a first-line course of action."

Credit: 
Stanford Medicine

Baby spiders really are watching you

image: This is a jumping spider.

Image: 
Joseph Fuqua II/UC Creative Services

Baby jumping spiders can hunt prey just like their parents do because their vision is nearly as good.

A study published in the journal Vision Research helps explain how animals the size of a bread crumb fit all the complex architecture of adult eyes into a much tinier package. 

"Spiderlings can adopt prey-specific hunting strategies. They can solve problems. They're clever about navigating their environment," said Nathan Morehouse, a biologist with the University of Cincinnati. "This suggests their eyes are providing as much high-quality information when they're small as when they're large. And that was a puzzle.

"We thought the adults were pressing the limits of what was physically possible with vision. And then you have babies that are a hundredth that size," Morehouse said. "That makes us wonder how they're accomplishing all of this?"

John Thomas Gote, a University of Pittsburgh student and the study's lead author, said vision develops far differently in spiders compared to people.

"For humans, it takes three to five years before babies can reach the visual acuity of adults," Gote said. "Jumping spiders achieve this as soon as they exit the nest.

The research found that baby spiders have the same number of photoreceptors as adults, just packed differently to fit into a smaller space. These 8,000 photoreceptors are smaller than those found in adults, and many are shaped like long cylinders so more of them fit side by side, maintaining the visual acuity that helps spiders distinguish objects at a distance.

Morehouse said it's not just the number but the placement of these photoreceptors in the eyes that help an animal make sense of its world.

"If you have a photoreceptor spaced every 5 degrees, you'll only see things that are more than 5 degrees apart. But if they're 1 degree apart, you can distinguish things that are closer together," he said.

Humans can distinguish objects less than 0.007 degrees apart.

"We're some of the best at this in the animal kingdom. We can tell something a meter apart more than a kilometer away," Morehouse said.

So if you looked down from the top of the Eiffel Tower, you could easily tell a person's hat from their belt, he said.
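These acuity figures follow from simple geometry: the angle two objects subtend is roughly their separation divided by the viewing distance. The short Python sketch below is a back-of-the-envelope check of that geometry, not part of the study; the function name is ours.

    import math

    def angular_separation_deg(separation_m, distance_m):
        """Angle, in degrees, subtended by two points `separation_m` apart
        when viewed from `distance_m` away."""
        return math.degrees(math.atan2(separation_m, distance_m))

    # Two objects a meter apart, seen from a kilometer away, subtend about
    # 0.057 degrees, well above the roughly 0.007-degree human limit quoted
    # above, so they remain distinguishable.
    print(round(angular_separation_deg(1, 1_000), 3))   # about 0.057

    # Distance at which a one-meter separation shrinks to 0.007 degrees:
    # about 8,185 meters, i.e. several kilometers.
    print(round(1 / math.tan(math.radians(0.007))))     # about 8185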

Many jumping spiders have extraordinary vision, such as tetrachromacy, the ability to see four colors (ultraviolet, red, blue and green). That's even better than our typical trichromatic vision that limits most of us to seeing variations of red, green and blue. Scientists are studying whether some people can see even more colors.

UC researchers used their custom-made micro-ophthalmoscope, similar to those used by eye doctors, to peer into the tiny eyes of baby spiders.

"The hardest part is to handle these tiny, fragile creatures with utter care as to not harm them while measuring them," UC biologist Elke Buschbeck said.

Building and refining her lab's micro-ophthalmoscope took years of work, she said.

"It's a powerful, one-of-a-kind research tool that allows us to engage in several exciting projects that were not possible before," she said.

Researchers used the equipment to map the light-sensitive cells and their spatial relationships.

"We knew the angular distance between photoreceptors to understand the spatial acuity of eyes or how the spiders can see patterns in the world and distinguish objects," Morehouse said.

They also conducted a microscopic analysis, or histology, of the spider eyes. Like most creatures in the animal kingdom, jumping spiders begin life with eyes that are much larger in proportion to the rest of their bodies.

"And the bigger the eye, the better it functions. The more light it captures and the more in focus objects are," Morehouse said. "So one strategy is to start life as close to your adult eye size as you can."

This growth pattern, called negative ontogenetic allometry, explains why some puppies have enormous feet.

"People say, 'That's going to be a big dog' because of the size of the puppy's feet. A puppy grows into its feet in the same way that jumping spiders grow into their eyes," Morehouse said.

Arguably, the large eyes also make baby animals look darn cute, he said. And the fuzzy little jumping spiders with their solicitous, upturned postures already have that going for them, according to some Twitter fans.

One drawback of baby spiders' eyes is they capture less light than those of the adults. That means they can't see as well in dim conditions. This makes baby spiders conspicuous to field researchers working in the dark understory, he said.

"One thing you pick up on is spiderlings act a little drunk. They're a bit stumbly," he said. "They seem a little impaired. And it's probably because the world is a little dimmer, like you're walking through the house with the lights off bumping your shins on the furniture."

Morehouse said that, as a Carnegie-ranked research institution, UC also encourages and fosters undergraduate research.

"With good science and hard work, insights can come from anywhere. Undergraduates can be part of meaningful, world-class research," Morehouse said.

Credit: 
University of Cincinnati