Tech

FSU researchers help discover new genetic variants that cause heart disease in infants

image: From left, College of Medicine graduate student Jamie Johnston, College of Medicine Associate Professor Jose Pinto, College of Medicine graduate student Maicon Landim-Vieira and Department of Biological Science Professor P. Bryant Chase.

Image: 
Photo courtesy of P. Bryant Chase.

Florida State University researchers working in an international collaboration have identified new genetic variants that cause heart disease in infants, and their research has led to novel insights into the role of a protein that affects how the heart pumps blood. It is a discovery that could lead to new treatments for people suffering from heart disease.

In two separate papers, Jose Pinto, an associate professor in the College of Medicine, and P. Bryant Chase, a professor in the Department of Biological Science, worked with doctoral students Jamie Johnston and Maicon Landim-Vieira to explore a disease that caused the heart to pump with too little force. Their work was published in the Journal of Biological Chemistry and in Frontiers in Physiology.

The researchers discovered new interactions within parts of a protein called troponin. Troponin has three parts -- troponin C, troponin I and troponin T -- that work together to regulate the heart's pumping of blood. The FSU researchers uncovered interactions of troponin C with portions of troponin T that can decrease the force of the heartbeat, something scientists had not previously noticed.

"All of these proteins, they work like an orchestra," Pinto said. "What is the main thing for an orchestra? To be in harmony, in balance. You need to have a good balance and you need to be in harmony, otherwise you will not produce good music. If one of these proteins is not in sync with the other proteins, you will not have your orchestra in harmony or balanced well, and then that will lead to the disease."

Most previous work had focused on interactions between troponin C and troponin I, or between troponin T and another protein called tropomyosin. The new interaction between troponin C and troponin T is "an interaction that will modulate how much force the heart generates in each heartbeat," Pinto said. "If you increase the number of these interactions, most likely you decrease contraction of the heart, and if you prevent these interactions, very likely you increase the force of contraction in each heartbeat."

But science sometimes leads to more questions than answers. A related study by the same FSU researchers reported a new combination of genetic variants in a different part of troponin C that also caused heart disease in infants. Rather than uncovering new interactions among the parts of troponin, this study led researchers to conclude that there must be an unknown role for troponin, possibly in the cell nucleus, Chase said.

In that research, DNA sequencing showed that a mother and a father had different variants that both affected the troponin C protein. Although their cell function was altered in such a way that researchers expected them to have heart problems, they did not show signs of heart disease. Their children, however, had both variants, and though their cell functioning appeared to be more normal, they developed deadly heart disease.

"Some experiments provide a lot of immediate insight, but other times we find out that we just don't understand everything that we think we do," Chase said. "As much as we've learned, as much as we do understand, there's a lot more that's unknown. And it's those times that can eventually lead to brand new, unexpected insights."

Understanding the interactions between the parts of the troponin protein and also troponin's various roles in heart cells will help guide new treatments for heart disease, both for the disease caused by the specific genetic variants the researchers discovered and for heart disease in general.

"These diseases are caused by seemingly small changes in the DNA," Chase said. "There are genetic technologies to reverse that, to introduce the common DNA sequence, but applications of genetic technologies to human disease are in their infancy and there's not a surefire and ethical way to apply changes in the genome to all the heart patients who could benefit from it. I'm sure there will be ways to correct genetic variants for a number of diseases, but the medical community is only just beginning to find out how to do that safely for people."

Credit: 
Florida State University

How waves of 'clutches' in the motor cortex help our brains initiate movement

For decades, scientists have wondered why specific cells in the brain that control movement fire when people simply plan or imagine making a movement, or observe someone else making a movement - but do not actually move themselves.

Now, University of Chicago scientists working on this mystery have discovered that signals in the motor cortex act like a series of clutches when it comes to moving, and that these signals can be disrupted to slow the brain's initiation of movement.

The findings, published in the journal Neuron, potentially could lead one day to treatments for people with Parkinson's disease, a movement disorder.

"This work provides the first evidence that large-scale, spatially organized brain patterns are behaviorally relevant," said neuroscientist Nicho Hatsopoulos, PhD, a professor of organismal biology and anatomy and the senior author of the study.

It's long been known that when a person thinks about or plans a movement, neurons fire in the motor cortex and create a signal called a beta oscillation. Hatsopoulos compares the function of this signal to a clutch in a car with a manual transmission: If you push in a clutch pedal, then press on the gas, the car engine will rev -- but it won't move because the car is not in gear. Likewise, if you simply imagine moving your arm or observe someone else moving their arm, this signal in your motor cortex is maintained or even intensifies -- but you don't move your arm. It's only when you're ready to actually move that the beta oscillations cease -- essentially, the clutch engages the engine to the transmission of the car -- and your arm moves.

Hatsopoulos and his team have discovered that this 'clutch' signal in the motor cortex is better understood as not one, but rather multiple clutches that engage in an organized spatial pattern that can begin at either end of the motor cortex and terminate at the other. Every time a movement is initiated, this organized wave of clutches -- in actuality, groups of firing neurons -- engages.

"While this clutch-like mechanism has been previously observed at single sites in the motor cortex, we've discovered that movement initiation is associated with a propagating wave of clutches across the cortical surface," said Hatsopoulos. "Moreover, we've provided the first causal evidence that this wave is a necessary condition for movement initiation."

The researchers studied three rhesus macaque monkeys who were rewarded with juice each time they won a video game. The game required the monkeys to use a joystick to move a cursor across a screen to a target. Electrodes implanted in the arm/hand area of the monkeys' motor cortices recorded the neuronal activity of the arm movement involved in manipulating the joystick.

By electrically microstimulating multiple sites in the arm/hand area of the motor cortex to create waves of stimulation, the researchers were able to disrupt the monkeys' reaction time under certain conditions. When they applied stimulation in a way that followed the natural wave of the clutches releasing, the monkeys' initiation of movement remained unchanged. But when they stimulated the cells in the opposite direction of the wave, reaction time slowed.

"This study provides for the first time a characterization of this clutch-like mechanism on a trial-by-trial basis," said Karthikeyan Balasubramanian, PhD, a senior researcher in the Department of Organismal Biology and Anatomy, who led the study. "Moreover, our stimulation results suggest that we are causally disrupting the wave-like neural dynamics when we stimulate against the natural wave that is linked to movement initiation."

The stimulation approach could perhaps one day aid people with diseases like Parkinson's by helping them initiate movement through spatio-temporally organized, electrical stimulation of electrodes in their motor cortices. Importantly, this novel stimulation approach may be useful in understanding large-scale neural patterns throughout the brain.

The team is now studying whether similar patterns of signals occur in the motor cortex when moving the tongue, and whether movement initiation of the tongue can also be manipulated through microstimulation.

Credit: 
University of Chicago Medical Center

Topology protects light propagation in photonic crystal

image: Electron microscopy image of topological photonic crystals in a perforated slab of silicon. The top and bottom crystal structures differ slightly; along the boundary between two parts (dotted line) light can be guided. The disparate mathematical description (topology) of the light fields in the two crystals prescribes that their boundary has to conduct light; that conduction is thus 'topologically protected'.

Image: 
AMOLF

Researchers at research institute AMOLF and TU Delft have seen light propagate in a special material without suffering from reflections. The material, a photonic crystal, consists of two parts that each have a slightly different pattern of perforations. Light can propagate along the boundary between these two parts in a special way: it is 'topologically protected' and, therefore, does not bounce back at imperfections. Even when the boundary forms a sharp corner, the light follows it without a problem. "For the first time, we have seen these fascinating light waves move at the technologically relevant scale of nanophotonics," says Ewold Verhagen, group leader at AMOLF. The results are published on March 6th in the scientific journal Science Advances.

Topological insulators: special electronics

Verhagen and his collaborator Kobus Kuipers from TU Delft were inspired by electronic materials, where so-called topological insulators form a new class of materials with remarkable behavior. Where most materials are either conductive for electrons or not (which makes them an insulator), topological insulators exhibit a strange form of conduction. "The inside of a topological insulator does not allow electron propagation, but along the edge electrons can move freely", says Verhagen. "Importantly, the conduction is 'topologically protected'; the electrons are not impacted by disorder or imperfections that would typically reflect them. So the conduction is profoundly robust."

Translation to photonics

In the past decade, scientists have tried to find this behavior for the conduction of light as well. "We really wanted to accomplish topological protection of light propagation at the nanoscale and thus open the door to guiding light on optical chips without it being hindered by scattering at imperfections and sharp corners", says Verhagen.

For their experiments, the researchers used two-dimensional photonic crystals with two slightly different hole patterns. The 'edge' that enables light conduction is the interface between the two hole patterns. "Light conduction at the edge is possible because light in these photonic crystals can be described by specific shapes, or more accurately, by topology," Kuipers says. "The topology of the two different hole patterns differs, and precisely this property allows light conduction at the boundary, similar to electrons in topological insulators. Because the topology of both hole patterns is locked, light conduction cannot be revoked; it is 'topologically protected'."

Imaging topological light

The researchers managed to image light propagation with a microscope and saw that it behaved as predicted. Moreover, they witnessed the topology, or mathematical description, in the observed light. Kuipers: "For these light waves the polarization of light rotates in a certain direction, analogous to the spin of electrons in topological insulators. The spinning direction of light determines the direction in which this light propagates. Because polarization cannot easily change, the light wave can even flow around sharp corners without reflecting or getting scattered, as would happen in a regular waveguide."

Technological relevance

The researchers are the first to directly observe the propagation of topologically protected light on the technologically relevant scale of nanophotonic chips. By purposely using silicon chips and light of a wavelength similar to that used in telecommunications, Verhagen expects to increase the application prospects. "We are now going to investigate if there are any practical or fundamental boundaries to topological protection and which functionalities on an optical chip we could improve with these principles. The first thing we are thinking of is to make the integrated light sources on a photonic chip more reliable. This is important in view of energy efficient data processing or 'green ICT'. Also, to efficiently transfer small packages of quantum information, the topological protection of light can be useful."

Credit: 
AMOLF

Skills training opens 'DOORS' to digital mental health for patients with serious mental illness

March 6, 2020 - Digital technologies, especially smartphone apps, have great promise for increasing access to care for patients with serious mental illness such as schizophrenia. A new training program, called DOORS, can help patients get the full benefit of innovative digital mental health tools, reports a study in the March issue of Journal of Psychiatric Practice. The journal is published in the Lippincott portfolio by Wolters Kluwer.

While most patients with serious mental illness now have access to smartphones, a "second digital divide" has become apparent: patients may lack the skills needed to effectively use digital technologies to support mental health, according to the report by John Torous, MD, Director of the Division of Digital Psychiatry at the Beth Israel Deaconess Medical Center, Boston, and colleagues. They write, "The Digital Opportunities for Outcomes in Recovery Services (DOORS) program represents an evidence-based effort to formally bridge this new digital divide and deliver on the potential of digital mental health."

Training Helps Patients Choose and Effectively Use Digital Mental Health Apps

Today, there are thousands of mobile apps designed to help patients with mental illness to monitor and self-manage their symptoms, connect with care, and even predict relapse. However, experience has shown that patients need training in "core competencies, autonomy, and skills required to effectively utilize these novel tools to improve mental health," according to the authors. "We find that people are interested and excited to use their phones towards recovery - but are often not provided with the hands-on training or support to feel confident in using technology as part of care. Now with DOORS we can help people unlock the potential of digital health," notes study author Erica Camacho.

The DOORS program was developed as a pragmatic, hands-on approach to provide training and functional education in digital mental health skills for patients with serious mental illness. Based on self-determination theory, DOORS targeted three key elements, all directed toward the common goal of strengthening the therapeutic alliance between patients and mental health professionals:

Competence - The DOORS program helped patients to develop smartphone skills, evaluate and select digital health tools, and learn how to use digital tools to gain insights into their everyday experiences. A key focus was on learning the most important factors in evaluating health apps - for example, using apps with good privacy protection and a reputable developer.

Autonomy - Patients learned how to use apps to support their personal recovery and set wellness goals, using the data collected by the apps to guide behavior change. For example, using smartphone step counters and exercise apps helped patients meet goals for physical activity: an important approach to reducing symptoms.

Relatedness - Group participants were able to share and learn from each other about digital health tools and strategies, and to work with clinical staff in developing their skills and using digital mental health toward enhancing their personal recovery.

Dr. Torous and colleagues share their experience using DOORS in two settings, or "clubhouses," for people with mental illness: a first episode psychosis (FEP) group and a chronic-phase schizophrenia group. A version of DOORS for younger patients in the FEP group focused more on autonomy; a modified version for patients in the chronic-phase schizophrenia group, who were more familiar with using smartphone apps, focused more on competency. Both groups learned to use a free and open-source app called mindLAMP ("learn, assess, manage, prevent") to monitor their mental health. "Because of continued interest and demand, the groups are still running today at these sites and many new ones as well," notes Elena Rodriguez-Villa of the Beth Israel Deaconess Medical Center team, who currently teaches two DOORS groups.

For both versions, the researchers developed manuals for clinicians leading the DOORS groups, including detailed session outlines, handouts, and references. "We hope that, by sharing our facilitator manuals freely online, others will develop, expand, and customize DOORS to suit the needs of their patients," Dr. Torous and coauthors write. Both manuals, future updates, and more resources like slide sets for running groups are available at https://www.digitalpsych.org/.

"Bridging the second digital divide between people with serious mental illness and those without by offering new skills and resources to help people to take full advantage of digital health tools is becoming a global health priority," the researchers conclude.

"DOORS represents one approach toward addressing this gap and ensuring equal access, opportunity, and value of digital health tools for improving care for all patients," comments Dr. Torous. "We are excited for others to join us, expand the program, and create an evolving learning community."

Credit: 
Wolters Kluwer Health

West Coast Dungeness crab stable or increasing even with intensive harvest, research shows

image: Crab numbers off Central California have climbed five times higher than past decades.

Image: 
NOAA Fisheries/NWFSC

The West Coast Dungeness crab fishery doesn't just support the most valuable annual harvest of seafood on the West Coast. It's a fishery that just keeps on giving.

Fishermen from California to Washington have caught almost all the available legal-size male Dungeness crab each year over the last few decades. However, the crab population has either remained stable or continued to increase, according to the first thorough population estimate of the West Coast Dungeness stocks.

"The catches and abundance in Central California especially are increasing, which is pretty remarkable to see year after year," said Kate Richerson, a research scientist at NOAA Fisheries' Northwest Fisheries Science Center in Seattle. Richerson is the lead author of the new study published in the journal Fisheries Research. "There's reason to be optimistic that this fishery will continue to be one of the most productive and on the West Coast."

Other recent research has suggested that the West Coast's signature shellfish could suffer in the future from ocean acidification and other effects related to climate change. That remains a concern, Richerson said, but the study did not detect obvious signs of population-level impacts yet.

Fishing Regulation Success

The secret to the success of the Dungeness crab fishery may be the way fishing regulations protect the crab populations' reproductive potential. Male Dungeness crabs mature and begin reproducing one to two years before they can be caught, so crabs can reproduce even with heavy fishing pressure. Female Dungeness crab can store sperm for more than a year, allowing them to reproduce even in the absence of numerous males. Fishermen must also return females to the water, further protecting the reproductive capacity of the population.

"The management system that is used for Dungeness crab seems to be a perfect fit for their life history because it allows the population to reproduce and grow even with the intensive harvest," Richerson said.

Natural Variability

Crab numbers and reproduction rates do vary from year to year, mostly because of ocean conditions. That also may have contributed to the increasing numbers in Central California. They have risen over the last two decades and now average nearly five times the abundance estimates from 1970 to 2000.

Central California crab numbers have increased enough that they are now closer to the size of populations in Northern California, coastal Washington, and Oregon. Those populations do not show the same growth trends as those in Central California, but remain stable overall.

However, a previous increase in the Central California landings from the 1930s to the late 1950s was followed by a dramatic crash around 1960. Catches remained low until the 1980s and then rebounded. Researchers believe those fluctuations likely reflected changing ocean conditions, and could happen again.

"If this is true, the recent increase in Central California crab abundance may be reversed when the system again shifts to a period of later spring transitions," the scientists wrote. "This is likely to have a large impact on the fishery, as well as other interlinked fisheries in the area."

Credit: 
NOAA Fisheries West Coast Region

Biomarker tests for decision-making on chemotherapy for breast cancer: No evidence of transferability

Following a benefit assessment in 2016 and an addendum in 2018, the German Institute for Quality and Efficiency in Health Care (IQWiG) has again examined biomarker-based tests for women with primary breast cancer. These tests aim to identify patients who could omit adjuvant chemotherapy because they have a low risk of recurrence, i.e. they can assume that the cancer will not return after successful initial treatment.

In 2019, the Federal Joint Committee (G-BA) introduced the Oncotype DX test into standard care for certain women without lymph node involvement. The G-BA commissioned the Institute to search for, present and assess the current state of knowledge on a biomarker-based strategy for decision-making on adjuvant systemic chemotherapy in primary breast cancer. If the conclusion on the benefit of Oncotype DX could be transferred to other tests, these tests could also be introduced into standard care.

IQWiG did not find any other randomized controlled trial (RCT) relevant to this question, but identified a few prognosis and concordance studies. On the basis of these studies, the Institute does not consider a transfer of the conclusion on the benefit of Oncotype DX to other tests to be viable - especially because the tests assign different patients to the group "low risk of recurrence".

No evidence of concordance

Several tests based on biomarkers (such as the expression profiles of different genes) are available; these tests are designed to help certain breast cancer patients decide for or against adjuvant chemotherapy. However, so far an RCT showing a hint of a benefit is available only for Oncotype DX. This conclusion could be transferred to other tests if they were concordant with Oncotype DX, that is, if they assigned a low, medium or high risk of recurrence to approximately the same women.

This concordance of risk classifications was examined in 7 studies. However, none of the studies applied the same test thresholds as in the RCT on Oncotype DX, which makes it difficult to assess the transferability of the conclusion on the benefit of Oncotype DX. Furthermore, no distinction was made between patients over and under 50 years of age or between post- and premenopausal patients, which would also have been useful in the evaluation.

The agreement of the test results was only between 43 and 74%, which means that they varied greatly in the risk assessment of the tested women. IQWiG's Director Jürgen Windeler concluded: "The tests therefore assign different patients to the risk groups. This can only mean that they overlook a number of women who could omit chemotherapy without relevantly increasing their risk of recurrence - and in return suggest that many other women could omit chemotherapy, even though it can by no means be ruled out that the cancer will return."
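
To illustrate what such an agreement figure means in practice, here is a minimal sketch (our illustration, not IQWiG's data; the patients and risk labels are invented) of how the percent agreement between two tests' risk classifications can be computed:

```python
# Minimal sketch of percent agreement between two biomarker tests.
# All patient data below are invented, purely for illustration.

# Risk category assigned to the same ten patients by two different tests:
test_a = ["low", "low", "high", "medium", "low", "high", "low", "medium", "low", "high"]
test_b = ["low", "medium", "high", "low", "low", "high", "medium", "medium", "low", "low"]

agreement = sum(a == b for a, b in zip(test_a, test_b)) / len(test_a)
print(f"agreement: {agreement:.0%}")  # 60% -- within the reported 43-74% range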

Prognoses comparable, informative value limited

IQWiG also considered 12 prospectively planned cohort studies with an observation period of at least 5 years. However, in 7 of these studies it is not ensured that tumour samples were missing purely by chance. The possibility of a systematic, i.e. disease-related, selection reduces the certainty of the results of these studies.

The mortality of low-risk groups after omission of chemotherapy was examined in 4 studies. Similar rates were shown for women without lymph node involvement after Oncotype DX (maximum 7 to 14%) and after 3 other tests (11 to 13%). At 5 to 10% (Oncotype DX) and 6 to 10% (5 other tests), the risk of metastasis (risk of distant recurrence) after omission of chemotherapy, which was investigated in 10 studies, was also comparable.

However, the proportions of patients without lymph node involvement who were assigned to the low-risk groups differed markedly in these prognosis studies. Daniel Fleer from the Non-Drug Interventions Department, who was responsible for the rapid report, notes: "The spectrum ranges from 20 to 86% of women. Together with the sometimes low certainty of results in these studies, this also questions the transferability of a conclusion on benefit from one of these tests to the others."

Credit: 
Institute for Quality and Efficiency in Health Care

Showing robots how to do your chores

Training interactive robots may one day be an easy job for everyone, even those without programming expertise. Roboticists are developing robots that can learn new tasks solely by observing humans. At home, you might someday show a domestic robot how to do routine chores. In the workplace, you could train robots like new employees, showing them how to perform many duties.

Making progress on that vision, MIT researchers have designed a system that lets these types of robots learn complicated tasks that would otherwise stymie them with too many confusing rules. One such task is setting a dinner table under certain conditions.

At its core, the researchers' "Planning with Uncertain Specifications" (PUnS) system gives robots the humanlike planning ability to simultaneously weigh many ambiguous -- and potentially contradictory -- requirements to reach an end goal. In doing so, the system always chooses the most likely action to take, based on a "belief" about some probable specifications for the task it is supposed to perform.

In their work, the researchers compiled a dataset with information about how eight objects -- a mug, glass, spoon, fork, knife, dinner plate, small plate, and bowl -- could be placed on a table in various configurations. A robotic arm first observed randomly selected human demonstrations of setting the table with the objects. Then, the researchers tasked the arm with automatically setting a table in a specific configuration, in real-world experiments and in simulation, based on what it had seen.

To succeed, the robot had to weigh many possible placement orderings, even when items were purposely removed, stacked, or hidden. Normally, all that would confuse robots too much. But the researchers' robot made no mistakes over several real-world experiments, and only a handful of mistakes over tens of thousands of simulated test runs.

"The vision is to put programming in the hands of domain experts, who can program robots through intuitive ways, rather than describing orders to an engineer to add to their code," says first author Ankit Shah, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro) and the Interactive Robotics Group, who emphasizes that their work is just one step in fulfilling that vision. "That way, robots won't have to perform preprogrammed tasks anymore. Factory workers can teach a robot to do multiple complex assembly tasks. Domestic robots can learn how to stack cabinets, load the dishwasher, or set the table from people at home."

Joining Shah on the paper are AeroAstro and Interactive Robotics Group graduate student Shen Li and Interactive Robotics Group leader Julie Shah, an associate professor in AeroAstro and the Computer Science and Artificial Intelligence Laboratory.

Bots hedging bets

Robots are fine planners in tasks with clear "specifications," which help describe the task the robot needs to fulfill, considering its actions, environment, and end goal. Learning to set a table by observing demonstrations, however, is full of uncertain specifications. Items must be placed in certain spots, depending on the menu and where guests are seated, and in certain orders, depending on an item's immediate availability or social conventions. Present approaches to planning are not capable of dealing with such uncertain specifications.

A popular approach to planning is "reinforcement learning," a trial-and-error machine-learning technique that rewards and penalizes a robot for its actions as it works to complete a task. But for tasks with uncertain specifications, it's difficult to define clear rewards and penalties. In short, robots never fully learn right from wrong.

The researchers' system, called PUnS (for Planning with Uncertain Specifications), enables a robot to hold a "belief" over a range of possible specifications. The belief itself can then be used to dish out rewards and penalties. "The robot is essentially hedging its bets in terms of what's intended in a task, and takes actions that satisfy its belief, instead of us giving it a clear specification," Ankit Shah says.

The system is built on "linear temporal logic" (LTL), an expressive language that enables robotic reasoning about current and future outcomes. The researchers defined templates in LTL that model various time-based conditions, such as what must happen now, must eventually happen, and must happen until something else occurs. The robot's observations of 30 human demonstrations for setting the table yielded a probability distribution over 25 different LTL formulas. Each formula encoded a slightly different preference -- or specification -- for setting the table. That probability distribution becomes its belief.
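
As a rough illustration of that idea (a sketch under our own assumptions, not the authors' code; the formulas and probabilities are invented), the belief can be pictured as a probability distribution over candidate LTL specifications:

```python
# Sketch: a PUnS-style "belief" as a probability distribution over candidate
# LTL formulas (F = eventually, G = always, U = until). The formulas and
# probabilities here are invented; the real system inferred a distribution
# over 25 formulas from 30 human demonstrations.

belief = {
    "F(plate_placed)": 0.40,                   # the plate must eventually be placed
    "(!fork_placed) U plate_placed": 0.35,     # no fork until the plate is down
    "F(fork_placed) & F(spoon_placed)": 0.25,  # fork and spoon each placed eventually
}

assert abs(sum(belief.values()) - 1.0) < 1e-9  # a belief is a proper distribution

# One simple way to act on the belief: follow its most probable specification.
most_likely = max(belief, key=belief.get)
print(most_likely)  # -> F(plate_placed)
```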

"Each formula encodes something different, but when the robot considers various combinations of all the templates, and tries to satisfy everything together, it ends up doing the right thing eventually," Ankit Shah says.

Following criteria

The researchers also developed several criteria that guide the robot toward satisfying the entire belief over those candidate formulas. One, for instance, satisfies the most likely formula, discarding everything apart from the template with the highest probability. Others satisfy the largest number of unique formulas without considering their overall probability, or satisfy several formulas that together represent the highest total probability. Another simply minimizes error, so the system ignores formulas with a high probability of failure.

Designers can choose any one of the four criteria to preset before training and testing. Each has its own tradeoff between flexibility and risk aversion. The choice of criteria depends entirely on the task. In safety critical situations, for instance, a designer may choose to limit possibility of failure. But where consequences of failure are not as severe, designers can choose to give robots greater flexibility to try different approaches.

With the criteria in place, the researchers developed an algorithm to convert the robot's belief -- the probability distribution pointing to the desired formula -- into an equivalent reinforcement learning problem. This model will ping the robot with a reward or penalty for an action it takes, based on the specification it's decided to follow.
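
A minimal sketch of what such a conversion might look like (hypothetical code, not the paper's algorithm; `satisfies` stands in for an LTL evaluator the real system would provide):

```python
# Hypothetical sketch of converting a belief over LTL formulas into a reward.
# `satisfies(trace, formula)` stands in for an LTL model checker that reports
# whether an executed action trace satisfies a formula; it is assumed here.

def reward(trace, belief, satisfies, criterion="weighted"):
    if criterion == "most_likely":
        # Score only the single most probable formula, ignoring the rest.
        top = max(belief, key=belief.get)
        return 1.0 if satisfies(trace, top) else -1.0
    if criterion == "weighted":
        # Score the probability mass of all satisfied formulas, so the robot
        # hedges its bets across the candidate specifications.
        mass = sum(p for f, p in belief.items() if satisfies(trace, f))
        return 2.0 * mass - 1.0  # scaled to [-1, 1]
    raise ValueError(f"unknown criterion: {criterion}")
```

A criterion rewarding total satisfied probability mass is more permissive than one insisting on the single most likely formula, mirroring the flexibility-versus-risk tradeoff described above.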

In simulations asking the robot to set the table in different configurations, it only made six mistakes out of 20,000 tries. In real-world demonstrations, it showed behavior similar to how a human would perform the task. If an item wasn't initially visible, for instance, the robot would finish setting the rest of the table without the item. Then, when the fork was revealed, it would set the fork in the proper place. "That's where flexibility is very important," Shah says. "Otherwise it would get stuck when it expects to place a fork and not finish the rest of the table setup."

Next, the researchers hope to modify the system to help robots change their behavior based on verbal instructions, corrections, or a user's assessment of the robot's performance. "Say a person demonstrates to a robot how to set a table at only one spot. The person may say, 'do the same thing for all other spots,' or, 'place the knife before the fork here instead,'" Shah says. "We want to develop methods for the system to naturally adapt to handle those verbal commands, without needing additional demonstrations."

Credit: 
Massachusetts Institute of Technology

Exploring the deep tissues using photoacoustic imaging

image: Image1

Image: 
Chulhong Kim (POSTECH)

Photoacoustic imaging has gained global attention for capturing images without causing pain or using ionizing radiation. Recently, many researchers have focused on imaging deep tissues in order to bring photoacoustic imaging into clinical diagnosis and practice.

Prof. Chulhong Kim of Creative IT Engineering at POSTECH and his student Byullee Park conducted joint research with Prof. Hyungwoo Kim and Kyung Min Lee of Cheonnam National University and proposed a new contrast agent for the photoacoustic imaging of deep tissues. They used a nickel-based nanoparticle as a contrast agent that absorbs light at a wavelength of 1,064 nm, and they obtained images of tissues at depths of up to 3.4 cm in live animals, the deepest imaging reported at this wavelength to date.

The principle of photoacoustic imaging is that light absorbed by tissues causes an instantaneous thermoelastic expansion of the tissue. This expansion generates sound-wave (photoacoustic) signals that are detected by ultrasound sensors and converted into images. Conventional optical microscopy can only image tissues to a depth of about 1 mm. The photoacoustic imaging system, by contrast, produces optical-contrast images of deep tissues in animals and humans.

However, despite intense research, observing the deep tissues of various organs more closely remains challenging for photoacoustic imaging. It is difficult to deliver enough light at short wavelengths (650-900 nm) to deep tissues in the body at an affordable cost. For this reason, the commercial and clinical translation of photoacoustic imaging has been slow.

To overcome this limitation, the research team introduced a nickel-based nanoparticle contrast agent that absorbs light specifically and strongly at a wavelength of 1,064 nm to observe deep tissues. They verified the biocompatibility of the nickel-based nanoparticles and, by injecting them, obtained photoacoustic images of deep tissues (3.4 cm depth), including the lymph nodes, gastrointestinal tracts, and bladders of live rats.

The first author of the paper, Byullee Park, said, "This research is different from previous studies that used short wavelengths. We used long-wavelength lasers and were able to minimize damage to the tissues. We were also able to obtain images of deep tissues by delivering light to organs located deep inside the animal."

When this newly developed photoacoustic imaging technique is applied in clinical practice, it can help diagnose diseases of deep organs by producing images noninvasively and without the risk of exposure to radiation, unlike imaging methods that require radiation, such as computed tomography (CT). Furthermore, lasers at a wavelength of 1,064 nm are comparatively economical and can be used with other commercial ultrasound machines, raising expectations for early clinical application.

"Our research is the first example of imaging the deepest tissues in the body among all the research papers on photoacoustic imaging so far. It is very meaningful that it has taken a step further to clinical feasibility of photoacoustic imaging," said Prof. Chulhong Kim, the corresponding author of the paper.

Credit: 
Pohang University of Science & Technology (POSTECH)

Comprehensive review of heterogeneously integrated 2D materials

image: Schematic illustration of the newly emerged 2D heterostructures research with various heterogeneous integration of 2D materials.

Image: 
Author

In a paper published in NANO, a group of researchers from Sungkyunkwan University, South Korea provides a comprehensive review of heterogeneously integrated two-dimensional (2D) materials, drawing on an extensive library of atomic 2D materials with selectable material properties that opens up fascinating possibilities for the design of novel functional devices.

Since the discovery of graphene by Andre Geim and Konstantin Novoselov, 2D materials, e.g., graphene, black phosphorus (BP), transition metal dichalcogenides (TMDCs), and hexagonal boron nitride (h-BN), have attracted extensive attention due to their broad physical properties and wide range of applications in electronic and optoelectronic devices. Research on these 2D materials has matured to the point where an extensive library of atomically thin 2D materials with selectable material properties has been created and continues to grow.

By combining or stacking these 2D materials, it is possible to construct 2D heterostructures, which are built by directly stacking individual monolayers comprising different materials. Each monolayer within a 2D heterostructure is highly stable, due to strong covalent bonds between the atoms within that monolayer. However, the forces between the monolayers that keep them stacked one above the other are relatively weak van der Waals interactions. Because of this, each of the monolayers retains its intrinsic properties. Moreover, unlike in conventional semiconductor heterostructures, where component material selection is restricted to materials with similar lattice structures, the lattice mismatch requirements of stacked heterostructures can be relaxed due to the weakness of the van der Waals forces. This means that one can combine insulating, semiconducting, or metallic 2D materials to form a single 2D heterostructure despite their different lattice structures.

When a monolayer is stacked in combination with other monolayers made out of different materials, a variety of new heterostructures with atomically thin 2D heterojunctions can be created. Heterostructures made from a particular combination of materials will have a certain set of physical characteristics depending on which materials they are made from. The unusual physical characteristics of 2D heterostructures make them suitable for use in a wide range of applications.

In this review, various 2D heterostructures are discussed along with an explanation of novel electronic and optoelectronic properties, advanced synthesis technical developments, as well as new functional applications available. It provides an understanding of the current research trends in 2D materials, so as to explore future possibilities for nanomaterial research.

Credit: 
World Scientific

Making puffer fish toxin in a flask

In Japan, puffer fish is considered a delicacy, but the tickle to the taste buds comes with a tickle to the nerves: fugu contains tetrodotoxin, a strong nerve toxin. In low doses, tetrodotoxin is being tested in clinical trials as a replacement for opioids in relieving cancer-related pain. In the journal Angewandte Chemie, scientists have introduced a new route for the total synthesis (complete production of a natural product from simple starting materials) of tetrodotoxin.

Eating fugu initially elicits a light prickling in the mouth, which can have a relaxing or euphoric effect--assuming the cook knows what he or she is doing. If the fish is incorrectly prepared, things can end badly: Tetrodotoxin blocks the voltage-gated sodium channels, and thus nerve impulses. This may result in paralysis and even difficulty breathing. In the EU, the importation and preparation of fugu as food is forbidden. In Japan and other countries, a number of strict laws regulate the preparation and consumption of puffer fish products. However, there are occasional deaths.

In very low doses, tetrodotoxin is a pain-reliever and could be used to treat severe pain, such as in the treatment of cancer. Early clinical studies are underway. It is thus important to develop a simple, reliable synthetic method to provide access to tetrodotoxin and structurally related compounds--for research and eventually robust and inexpensive production.

Tetrodotoxin has a unique, highly complex, cage-like structure (a tricyclic orthoester) as well as a cyclic guanidine component. Guanidine is an important component of many biological molecules, including arginine. The tetrodotoxin framework is highly oxidized and has five hydroxy groups (-OH) as substituents. A number of different total syntheses of tetrodotoxin have previously been published, including one from researchers led by Satoshi Yokoshima at the Nagoya University (Japan) in 2017. Now Yokoshima and his team have introduced another, novel total synthesis.

The key step is a Diels-Alder reaction between a known starting compound (an enone) and a silicon-containing component (a siloxyldiene) to make a tricyclic intermediate with the right spatial (steric) arrangement to allow for proper attachment of the hydroxy groups and later, construction of the "cage". Formation of the guanidine component begins with introduction of an amino group--either by a conventional four-step method or a three-step reaction sequence based on a newly developed conversion of a terminal alkyne to a nitrile. Finally, the "bridges" needed for formation of the cage are built up over several steps. A cross-coupling reaction was used for introducing a carbon substituent (hydroxymethyl group) on the cage. Employing other components for the cross coupling reaction might lead to producing structurally related molecules.

Credit: 
Wiley

Neuroscientists discover new structure of important protein in the brain

image: Animation of the four stages of the LeuT transporter put together for a full cycle.

Image: 
University of Copenhagen

After five years of experimentation, researchers from the University of Copenhagen have succeeded in crystallising and mapping a novel conformation of LeuT, a bacterial protein that belongs to the same family of proteins as the brain's so-called neurotransmitter transporters.

These transporters are special proteins that sit in the cell membrane. Like a kind of vacuum cleaner, they reuptake some of the neurotransmitters that nerve cells release when sending a signal to one another.

Some drugs or substances work by blocking the transporters, increasing the amount of certain neurotransmitters outside the nerve cells. For example, antidepressants inhibit the reuptake of the neurotransmitter serotonin, while a narcotic such as cocaine inhibits the reuptake of the neurotransmitter dopamine.

'Transporters are extremely important for regulating the signalling between neurons in the brain and thus the balance of how the whole system works. You cannot do without them', says Kamil Gotfryd, first author and Associate Professor at the Department of Biomedical Sciences who, during the study, was a postdoc at the Department of Neuroscience.

'Not only does the new discovery give us additional basic scientific knowledge about the complex transporter proteins. It also has perspectives in relation to developing pharmacological methods, with which we can change the function of transporters. In other words, the discovery may lead to better drugs', he adds.

From bacteria to human brains

Evolutionarily, transporters derive from the most primitive bacteria, which developed them to absorb nutrients, such as amino acids, from the environment in order to survive.

Since then, specialised transporters have developed to perform a variety of functions. For example, to transport neurotransmitters into neurons in the human brain. Still, the basic principle is the same, namely that the transporter functions by alternately opening and closing to the interior and exterior of a cell, respectively.

When a transporter is open outwardly, it may capture transmitter substances or amino acids. Thereafter, the protein uses sodium ions to change its structure so that it will close outwardly and instead open to the interior of the cell where the transported substance is released and absorbed.

Full cycle

In recent years, X-ray crystallography has enabled researchers to map three stages of the transporter mechanism: outwardly open, outwardly occluded and inwardly open.

In order for the cycle to be complete, researchers have long concluded that there must also be an inwardly occluded stage of the protein. However, since this structure is unstable, it has long been difficult to freeze it and thus be able to map it.

But now, after many trials, researchers at the University of Copenhagen have succeeded in retaining a transporter for the amino acid leucine - a LeuT - in precisely that stage.

'We have been working on this for five years, and no matter what we did, we never got the structure we wanted. But suddenly it happened', says Professor and Head of Department Ulrik Gether of the Department of Neuroscience.

'Our study is in fact - I would say - 'the missing link'. This structure has been missing and it has been important to understand the entire cycle which the transporter is going through', he adds.

A key to more discoveries

Ulrik Gether explains that the key to solving the long-standing mystery was partly a mutation of the transporter and partly a replacement of the substance leucine by the related, but slightly larger phenylalanine molecule.

The combination, so to say, held the transporter long enough in the desired position for researchers to purify, crystallize, and map its structure.

At the same time, Ulrik Gether explains that the high degree of similarity between different types of transporters allows researchers to draw parallels to the transporters of a wide range of other neurotransmitters.

'Now that we know more about LeuT, the result may be transferred to other transporters of other neurotransmitters. We believe that we can generalise and create better models for, for example, the dopamine, serotonin and GABA transporters, which are targets for drugs to treat ADHD, depression and epilepsy, respectively', says Ulrik Gether.

According to the Head of the Department, the next step is to continue working with the transporters found in human nerve cells.

Credit: 
University of Copenhagen - The Faculty of Health and Medical Sciences

Scientists propose nanoparticles that can treat cancer with magnetic fluid hyperthermia

image: SCAMT Lab, ITMO University

Image: 
ITMO.NEWS

A group of Russian scientists have synthesized manganese-zinc ferrite nanoparticles that can potentially be used in cancer treatment. Due to their unique magnetic properties, the particles can serve as deactivators of affected cells while having almost no negative impact on healthy tissues. The results have been published in the Journal of Sol-Gel Science and Technology.

One of the most important global goals in today's medicine is finding ways to combat cancer. Currently, there are several kinds of treatments with differing effectiveness and various side effects. In most cases, the treatment harms not only cancer cells but also the adjacent healthy tissues or the body at large.

Magnetic fluid hyperthermia is a promising method that can help alleviate the side effects of cancer treatment. This method involves introducing a sol containing magnetic nanoparticles into a tumor and then exposing it to an alternating magnetic field. This heats the nanoparticles and leads to the deactivation of cancer cells. However, the majority of the materials used for this purpose are toxic to the body. What is more, the particles continue to heat up to relatively high temperatures, which entails serious damage to healthy tissues.

These problems could be solved by special nanoparticles whose magnetic properties change depending on the temperature. In physics, there is a notion known as the Curie temperature (also called the Curie point): the temperature at which a sharp decrease in magnetization is observed.

"When the Curie temperature is reached, a ferromagnetic changes into a paramagnetic, consequently the particles cease to be as susceptible to the magnetic field and their further heating stops," explains Vasilii Balanov, a Master's student at ITMO University and one of the research's authors. "When the temperature drops back again, the particles resume their heating. Essentially, we observe a self-management of temperature in a narrow range. If we select a composition that experiences such a transition at the temperature we need, then it could prove effective for magnetic fluid hyperthermia."

For the material, the scientists opted for ferrites: compounds of iron(III) oxide, Fe2O3, with oxides of other metals. Generally, thanks to their properties, these materials are widely applied in computer technologies, but, as it turns out, they can also be used for medical purposes.

"We took the particles with the general formula Zn(x)Mn(1-x)Fe2O4, in which zinc and manganese are selected in a certain proportion," expounds Vasilii Balanov. "They don't have a toxic effect on the body, and with the right ratio of manganese and zinc we were able to achieve a Curie temperature in the range of 40-60 degrees Celsius. This temperature allows us to deactivate cancer cells, concurrently, the short-term thermal contact is relatively harmless to healthy tissues."

As of now, the scientists have already synthesized the nanoparticles and studied their magnetic properties. The experiments confirmed that the material doesn't heat up above 60 degrees Celsius when exposed to a variable magnetic field. Coming next will be the experiments on living cells and, if these are successful, on animals.

Credit: 
ITMO University

Super magnets from a 3D printer

Magnetic materials are an important component of mechatronic devices such as wind power stations, electric motors, sensors and magnetic switch systems. Magnets are usually produced using rare earths and conventional manufacturing methods. A team of researchers at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) has worked together with researchers from the Graz University of Technology, the University of Vienna and the research institution Joanneum Research to produce specially designed magnets using a 3D printer. The results were published in the journal Materials.

Permanent magnets are incorporated into a number of mechatronic applications. Traditional manufacturing methods such as sintering or injection moulding are not always able to cope with increasing miniaturisation and the resulting geometric requirements for magnets, a trend which is set to continue in the future. Additive manufacturing processes offer the necessary freedom of design.

The research team, involving Prof. Dr. Jörg Franke from the Institute for Factory Automation and Production Systems at FAU, has now succeeded in creating super magnets using laser-based 3D printing. Metallic powder of the magnetic material is added layer by layer and the particles are joined by melting.

The process allows magnets to be printed with a relatively high density at the same time as controlling their microstructure. This allows researchers to tailor the magnetic properties to suit the required application exactly.

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg

Argonne's pioneering user facility to add magic number factory

One of the big questions in physics and chemistry is: how were the heavy elements from iron to uranium created? The Argonne Tandem Linac Accelerator System (ATLAS) at the U.S. Department of Energy’s (DOE) Argonne National Laboratory is being upgraded with new capabilities to help find the answer to that question and many others.

Of five DOE Office of Science user facilities at Argonne, ATLAS is the longest lived. “Inaugurated in 1978, ATLAS is ever changing and developing new technological advances and responding to emerging research opportunities,” says ATLAS director Guy Savard. It is now being outfitted with an “N = 126 factory,” scheduled to go online later this year. This new capability will soon be producing beams of heavy atomic nuclei consisting of 126 neutrons. This is made possible, in part, by the addition of a cooler-buncher that cools the beam and converts it from continuous to bunched.

For many decades, ATLAS has been a leading U.S. facility for nuclear structure research and is the world-leading facility in the provision of stable beams for nuclear structure and astrophysics research. ATLAS can accelerate beams ranging across the elements, from hydrogen to uranium, to high energies, then it smashes them into targets for studies of various nuclear structures.

Since its inception, ATLAS has brought together the world’s leading scientists and engineers to solve some of the most complex scientific problems in nuclear physics and astrophysics. In particular, it has been instrumental in determining properties of atomic nuclei, the core of matter and the fuel of stars.

The forthcoming N = 126 factory will be generating beams of atomic nuclei with a “magic number” of neutrons, 126. As Savard explains, “Physics has seven magic numbers: 2, 8, 20, 28, 50, 82 and 126. Atomic nuclei with these numbers of neutrons or protons are exceptionally stable. This stability makes them ideal for research purposes in general.”

Scientists at ATLAS will be generating N = 126 nuclei to test a reigning theory of astrophysics — that the rapid capture of neutrons during the explosion and collapse of massive stars and the collision of neutron stars is responsible for the formation of about half the heavy elements from iron through uranium.

The N = 126 factory will accelerate a beam composed of a xenon isotope with 82 neutrons into a target composed of a platinum isotope with 120 neutrons. The resulting collisions will transfer neutrons from the xenon beam into the platinum target, yielding isotopes with 126 neutrons or close to that number. These very heavy, neutron-rich isotopes are then directed to experimental stations for study.
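
The bookkeeping behind those isotopes is simple nucleon arithmetic, sketched below using the figures given in this article (the proton numbers are standard: Z = 54 for xenon, Z = 78 for platinum):

```python
# Nucleon arithmetic for the N = 126 factory, using the numbers above.
Z_XE, N_XE = 54, 82    # xenon beam with 82 neutrons   -> xenon-136
Z_PT, N_PT = 78, 120   # platinum target, 120 neutrons -> platinum-198

print(f"beam:   Xe-{Z_XE + N_XE}")   # Xe-136
print(f"target: Pt-{Z_PT + N_PT}")   # Pt-198

# Transferring neutrons from beam to target reaches the magic number:
needed = 126 - N_PT
print(f"transfer {needed} neutrons -> Pt-{Z_PT + N_PT + needed} with N = 126")
```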

“The planned studies at ATLAS will provide the first data on neutron-rich isotopes with around 126 neutrons and should play a critical role in understanding the formation of heavy elements, the last stage in the evolution of stars,” said Savard. “These and other studies will keep ATLAS at the frontier of science.”

Credit: 
DOE/Argonne National Laboratory

A filter for cleaner qubits

image: Schematic of a Josephson quantum filter (JQF). The data qubit (DQ) to be protected and the JQF are directly coupled to a semi-infinite waveguide, through which control pulses for the DQ are applied.

Image: 
Department of Physics, College of Liberal Arts and Sciences, TMDU

Researchers at the Tokyo Medical and Dental University (TMDU), RIKEN, and the University of Tokyo propose an improved method for isolating the qubits in a quantum computer from the external environment, which may help usher in the era of practical quantum computing.

Tokyo, Japan - A research team at the Tokyo Medical and Dental University (TMDU), RIKEN, and the University of Tokyo has demonstrated how to increase the lifetime of qubits inside quantum computers by using an additional "filter" qubit. This work may help make higher-fidelity quantum computers that can be used in financial, cryptographic, and chemistry applications.

Quantum computers are poised to make a large impact in a variety of fields, from internet security to drug development. Instead of being limited to binary 0s and 1s of classical computers, the qubits in quantum computers can take on values that are arbitrary superpositions of the two. This allows quantum computers the potential to solve certain problems, like cracking cryptographic ciphers, much faster than current machines.

However, there is a fundamental tradeoff between the lifetime of the qubit superpositions and the processing speed. This is because the qubits must be carefully shielded from interacting with the environment, or the fragile superposition will snap back to being just a one or a zero in a process called decoherence. To delay this loss of quantum fidelity, qubits in quantum computers are coupled only weakly to the control line through which the qubit control pulses are applied. Unfortunately, such weak coupling limits the speed at which computations can be run.

Now, the team at the Tokyo Medical and Dental University (TMDU) theoretically show how coupling a second "filter" qubit to the control line can greatly reduce the noise and spontaneous radiative losses that lead to decoherence. This allows the connections to be strong, which lends itself to faster cycle times.

"In our solution, the filter qubit acts like a nonlinear mirror, which completely reflects radiation from the qubit due to destructive interference but transmits strong control pulses due to absorption saturation" says first author Kazuki Koshino.

This research helps bring about a future in which quantum computers can be found in every business and research lab. Many operational research firms would like to use quantum computers to solve optimization problems that were considered too intensive for conventional computers, while chemists would like to use them to simulate the motion of atoms inside molecules.

"Quantum computers are improved day by day by companies including IBM and Google. As they become faster and more robust, they can be even more widespread," says senior author Yasunobu Nakamura.

Credit: 
Tokyo Medical and Dental University