Tech

Smaller, faster, greener

When you think about your carbon footprint, what comes to mind? Driving and flying, probably. Perhaps home energy consumption or those daily Amazon deliveries. But what about watching Netflix or having Zoom meetings? Ever thought about the carbon footprint of the silicon chips inside your phone, your smartwatch, or the countless other devices in your home?

Every aspect of modern computing, from the smallest chip to the largest data center, comes with a carbon price tag. For the better part of a century, the tech industry and the field of computation as a whole have focused on building smaller, faster, more powerful devices -- but few have considered their overall environmental impact.

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) are trying to change that.

"Over the next decade, the demand, number and types of devices is only going to grow," said Udit Gupta, a PhD candidate in Computer Science at SEAS. "We want to know what impact that will have on the environment and how we, as a field, should be thinking about how we adopt more sustainable practices."

Gupta, along with Gu-Yeon Wei, the Robert and Suzanne Case Professor of Electrical Engineering and Computer Science, and David Brooks, the Haley Family Professor of Computer Science, will present a paper on the environmental footprint of computing at the IEEE International Symposium on High-Performance Computer Architecture on March 3rd, 2021.

The SEAS research is part of a collaboration with Facebook, where Gupta is an intern, and Arizona State University.

The team not only explored every aspect of computing, from chip architecture to data center design, but also mapped the entire lifetime of a device, from manufacturing to recycling, to identify the stages where the most emissions occur.

The team found that most emissions related to modern mobile and data-center equipment come from hardware manufacturing and infrastructure.

"A lot of the focus has been on how we reduce the amount of energy used by computers, but we found that it's also really important to think about the emissions from just building these processors," said Brooks. "If manufacturing is really important to emissions, can we design better processors? Can we reduce the complexity of our devices so that manufacturing emissions are lower?"

Take chip design, for example.

Today's chips are optimized for size, performance and battery life. The typical chip is about 100 square millimeters of silicon and houses billions of transistors. But at any given time, only a portion of that silicon is being used. In fact, if all the transistors fired up at the same time, the device would exhaust its battery and overheat. This so-called dark silicon improves a device's performance and battery life, but it's wildly inefficient when you consider the carbon footprint of manufacturing the chip.
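
The team's point that manufacturing can dominate a device's lifetime emissions is easy to see with back-of-the-envelope arithmetic. The short Python sketch below makes the comparison concrete; every number in it is a hypothetical placeholder, not a figure from the paper:

    # Back-of-the-envelope lifetime carbon accounting for a small device.
    # All numbers are hypothetical placeholders, not figures from the study.
    embodied_kg_co2 = 30.0     # emissions from manufacturing (kg CO2e)
    device_power_w = 2.0       # average power draw while in use (watts)
    hours_per_day = 4.0
    lifetime_years = 3.0
    grid_kg_per_kwh = 0.4      # grid carbon intensity (kg CO2e per kWh)

    # Operational emissions over the device's lifetime.
    energy_kwh = device_power_w / 1000 * hours_per_day * 365 * lifetime_years
    operational_kg_co2 = energy_kwh * grid_kg_per_kwh   # roughly 3.5 kg CO2e

    total = embodied_kg_co2 + operational_kg_co2
    print(f"embodied: {embodied_kg_co2 / total:.0%} of lifetime emissions")

With these illustrative inputs, manufacturing accounts for roughly 90% of lifetime emissions -- which is why shrinking or simplifying the chip itself, and not just trimming its power draw, can matter so much.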

"You have to ask yourself, what is the carbon impact of that added performance," said Wei. "Dark silicon offers a boost in energy efficiency but what's the cost in terms of manufacturing? Is there a way to design a smaller and smarter chip that uses all of the silicon available? That is a really intricate, interesting, and exciting problem."

The same issues face data centers. Today, data centers, some of which span many millions of square feet, account for 1 percent of global energy consumption, a number that is expected to grow.

As cloud computing continues to grow, decisions about where to run applications -- on a device or in a data center -- are being made based on performance and battery life, not carbon footprint.

"We need to be asking what's greener, running applications on the device or in a data center," said Gupta. "These decisions must optimize for global carbon emissions by taking into account application characteristics, efficiency of each hardware device, and varying power grids over the day."

The researchers are also challenging industry to look at the chemicals used in manufacturing.

Adding environmental impact to the parameters of computational design requires a massive cultural shift at every level of the field, from undergraduate CS students to CEOs.

To that end, Brooks has partnered with Embedded EthiCS, a Harvard program that embeds philosophers directly into computer science courses to teach students how to think through the ethical and social implications of their work. Brooks is including an Embedded EthiCS module on computational sustainability in COMPSCI 146: Computer Architecture this spring.

The researchers also hope to partner with faculty from Environmental Science and Engineering at SEAS and the Harvard University Center for the Environment to explore how to enact change at the policy level.

"The goal of this paper is to raise awareness of the carbon footprint associated with computing and to challenge the field to add carbon footprint to the list of metrics we consider when designing new processes, new computing systems, new hardware, and new ways to use devices. We need this to be a primary objective in the development of computing overall," said Wei.

The paper was co-authored by Sylvia Lee, Jordan Tse, Hsien-Hsin S. Lee and Carole-Jean Wu from Facebook and Young Geun Kim from Arizona State University.

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Cooperative eco-driving automation improves energy efficiency and safety

image: Modeling shows that an algorithmic controller in your car that talks to stoplights and integrates HD maps delivers energy savings and a safer driving environment. Simulation results show that the cooperative automated eco-driving algorithm saves energy -- 7% under light traffic and 23% under heavy traffic.

Image: 
Sarah Atkinson/Michigan Tech

Imagine you're driving up a hill toward a traffic light. The light is still green, so you're tempted to accelerate to make it through the intersection before the light changes. Then a device in your car receives a signal from the controller mounted at the intersection, alerting you that the light will change in two seconds -- clearly not enough time to beat the light. You take your foot off the gas pedal and decelerate, saving fuel. You feel safer, too, knowing you didn't run a red light and potentially cause a collision in the intersection.
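
Stripped to its essentials, the decision in that scenario is a comparison between the time you need to reach the intersection and the time the signal has left. The Python sketch below is our toy illustration of that comparison, not the eco-driving controller from the Michigan Tech study:

    # Toy intersection decision: compare time-to-intersection with the
    # signal's remaining green time (an illustration, not the study's model).
    def advise(distance_m, speed_mps, green_remaining_s):
        """Advise 'proceed' or 'coast' approaching a signalized intersection."""
        time_to_intersection_s = distance_m / speed_mps
        if time_to_intersection_s <= green_remaining_s:
            return "proceed"   # the light will still be green on arrival
        return "coast"         # lift off the gas early and save fuel

    # Example: 60 m from the stop line at 15 m/s (54 km/h), and the
    # light changes in 2 seconds.
    print(advise(60, 15, 2))   # -> coast (needs 4 s, has only 2 s)

A production controller would also fold in braking limits, hill grades, and surrounding traffic -- the kind of information the HD-map-based framework described below brings in.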

Connected and automated vehicles, which can communicate vehicle to vehicle (V2V) and between vehicles and roadway infrastructure such as traffic signals and stop signs (V2I), promise to save energy and improve safety. In a new study published in Transportation Research Part B, engineers from Michigan Technological University propose a modeling framework for V2V and V2I cooperative driving.

Cooperative driving helps cars and their drivers navigate safely and efficiently. The framework uses an eco-driving algorithm that prioritizes saving fuel and reducing emissions. The automated algorithm accounts for location-based traffic control devices and roadway constraints using maps and geographic information. The research is led by Kuilin Zhang, associate professor of civil and environmental engineering and affiliated associate professor of computer science at Michigan Tech, along with Shuaidong Zhao '18, now a senior quantitative analyst at National Grid.

For the past three years, Houghton, Michigan, has been home to roadside units installed on five of the city's traffic signals that make V2I communication possible. Zhang conducted a simulation analysis using real traffic signal phasing and timing messages from the Ann Arbor connected vehicle test environment and plans to expand testing in the Houghton area.

"The whole idea of cooperative driving automation is that the signals in the intersection tell your car what's happening ahead," Zhang said. "The sensor at the intersection can benefit all connected vehicles passing through the intersection. The automated eco-driving algorithm improves the driving decisions of the connected and automated vehicles."

The simulation results show that the cooperative automated eco-driving algorithm saves energy -- 7% under light traffic and 23% under heavy traffic along the corridor.

"The stop and go, stop and go, it may use a lot of energy," Zhang said. "The concept of eco-driving incorporates how the vehicle makes driving decisions using data not only from vehicles in front of it, but also with information given from a traffic signal."

Zhang's model pulls in high-definition (HD) maps, which use a connected vehicle's hardware and software to provide down-to-the-centimeter accuracy in navigation. HD maps incorporate multiple types of environmental sensing: long-range radar, lidar, camera footage, short/medium-range radar and ultrasound.

Zhang said that for autonomous driving, it's important to know landmarks to control the car's driving, as well as hill grades; using a hill to slow or accelerate a car can also increase energy savings. It's easy to conserve energy on a straight highway; on busy arterial streets with traffic and stoplights, energy conservation isn't so simple. On city streets, Zhang and Zhao's online predictive connected and automated eco-driving model considers traffic control devices and road geometry constraints under light and heavy traffic conditions.

Credit: 
Michigan Technological University

New GSA bulletin articles published ahead of print in February

Boulder, Colo., USA: Several articles were published online ahead of print
for GSA Bulletin in February. Topics include earthquake cycles in
southern Cascadia, fault dynamics in the Gulf of Mexico, debris flow after
wildfires, the assembly of Rodinia, and the case for no ring fracture in
Mono basin.

Jurassic evolution of the Qaidam Basin in western China: Constrained by
stratigraphic succession, detrital zircon U-Pb geochronology and Hf isotope
analysis

Tao Qian; Zongxiu Wang; Yu Wang; Shaofeng Liu; Wanli Gao ...

Abstract:
The formation and evolution of an intracontinental basin triggered by the
subduction or collision of plates at continental margins can record
intracontinental tectonic processes. As a typical intracontinental basin
during the Jurassic, the Qaidam Basin in western China records how this
extensional basin formed and evolved in response to distant subduction or
collisional processes and tectonism caused by stresses transmitted from
distant convergent plate margins. The Jurassic evolution of the Qaidam
Basin, in terms of basin-filling architecture, sediment dispersal pattern
and basin properties, remains speculative; hence, these uncertainties need
to be revisited. An integrated study of the stratigraphic succession,
conglomerates, U-Pb geochronology, and Hf isotopes of detrital zircons was
adopted to elucidate the Jurassic evolutionary process of the Qaidam Basin.
The results show that a discrete Jurassic terrestrial succession
characterized by alluvial fan, braided stream, braided river delta, and
lacustrine deposits developed on the western and northern margins of the
Qaidam Basin. The stratigraphic succession, U-Pb age dating, and Hf isotope
analysis, along with the reconstructed provenance results, suggest
small-scale distribution of Lower Jurassic sediments deposited via
autochthonous sedimentation on the western margin of the basin, with
material mainly originating from the Altyn Tagh Range. Lower Jurassic
sediments in the western segment of the northern basin were shed from the
Qilian Range (especially the South Qilian) and Eastern Kunlun Range, and
coeval sediments in the eastern segment of the northern basin originated
from the Quanji massif. During the Middle-Late Jurassic, the
primary source areas were the Qilian Range and Eastern Kunlun Range, which
fed material to the whole basin. The Jurassic sedimentary environment in
the Qaidam Basin evolved from a series of small-scale, scattered, and
rift-related depressions distributed on the western and northern margins
during the Early Jurassic to a larger, extensive, and unified depression
occupying the whole basin in the Middle Jurassic. The Altyn Tagh Range rose
to a certain extent during the Early Jurassic but lacked large-scale
strike-slip tectonism throughout the Jurassic. At that time, the North
Qaidam tectonic belt had not yet been uplifted and did not shed material
into the basin during the Jurassic. The Qaidam Basin experienced
intracontinental extensional tectonism with a northeast-southwest trend
throughout the Jurassic in response to far-field effects driven by the
sequential northward or northeastward amalgamation of blocks to the
southern margin of the Qaidam Block and successive accretion of the
Qiangtang Block and Lhasa Block onto the southern Eurasian margin during
the Late Triassic−Early Jurassic and Late Jurassic−Early Cretaceous,
respectively.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35757.1/595025/Jurassic-evolution-of-the-Qaidam-Basin-in-western  

Late Mesoproterozoic low-P/T−type metamorphism in the North Wulan
terrane: Implications for the assembly of Rodinia

Lu Wang; Stephen T. Johnston; Nengsong Chen; Heng Wang; Bin Xia ...

Abstract:
Regional metamorphism provides critical constraints for unravelling
lithosphere evolution and geodynamic settings, especially in an orogenic
system. Recently, there has been a debate on the Rodinia-forming Tarimian
orogeny within the Greater Tarim block in NW China. The North Wulan
terrane, involved in the Paleozoic Qilian orogen, was once part of the
Greater Tarim block. This investigation of petrography, whole-rock and
mineral geochemistry, phase equilibrium modeling, and in situ monazite U-Pb
dating of garnetite, pelitic gneiss, and quartz schist samples from the
Statherian−Calymmian unit of the North Wulan terrane provides new
constraints on the evolutionary history of the Greater Tarim block at the
end of the Mesoproterozoic during the assembly of Rodinia. The studied
samples yielded three monazite U-Pb age groups of ca. 1.32 Ga, 1.1 Ga, and
0.45 Ga that are interpreted to be metamorphic in origin. The tectonic
significance of the early ca. 1.32 Ga metamorphism is uncertain and may
indicate an extensional setting associated with the final breakup of
Columbia. The ca. 1.1 Ga low-pressure, high-temperature (low-P/T)−type
granulite-facies metamorphism is well preserved and characterized by a
clockwise P-T path with a minimum estimate of ∼840−900 °C and ∼7−11 kbar
for peak metamorphism, followed by postpeak
decompression and cooling. A tectonothermal disturbance occurred at ca.
0.45 Ga, but with limited influence on the preexisting mineral compositions
of the studied samples. The characteristics of the metamorphism indicate an
arc−back-arc environment with ongoing subduction of oceanic lithosphere at
ca. 1.1 Ga. Combined with previous studies, we suggest that the Greater
Tarim block probably experienced a prolonged subduction-to-collision
process at ca. 1.1−0.9 Ga during the assembly of Rodinia, with a position
between western Laurentia and India−East Antarctica.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35629.1/594988/Late-Mesoproterozoic-low-P-T-type-metamorphism-in  

Imaging the Late Triassic lithospheric architecture of the Yidun
Terrane, eastern Tibetan Plateau: Observations and interpretations

Qiong-Yao Zhan; Di-Cheng Zhu; Qing Wang; Peter A. Cawood; Jin-Cheng Xie ...

Abstract:
The present-day lithospheric architecture of modern and ancient orogens can
be imaged by geophysical techniques. For ancient orogens, unravelling their
architecture at the time of formation is hindered by later tectono-magmatic
events. In this paper, we use spatial variations in radiogenic isotopic
compositions of Late Triassic magmatism from the Yidun Terrane, eastern
Tibetan Plateau, to establish its lithospheric architecture during the
Triassic. Comprehensive geochemical and isotopic data of Late Triassic
magmatic rocks from four transects across the Yidun Terrane document
eastward enrichment in whole-rock Nd, Sr, and zircon Hf isotopic
compositions. Mafic and felsic rocks of major plutons show coherent and
nonlinear trends in the Zr and P2O5 systematics and
have limited variation of isotopic compositions. This indicates that Late
Triassic magmatic differentiation was dominated by fractionation of
mantle-derived mafic magmas. The spatial isotopic trends result from
changing mantle sources, including variable contributions of isotopically
depleted asthenospheric mantle and isotopically enriched subcontinental
lithospheric mantle (SCLM) to magma sources. The spatial variation of
mantle sources suggests a westward thinning of the SCLM during the
Triassic. We propose that this architecture is most likely associated with
eastward subduction of oceanic lithosphere of the Jinshajiang Ocean located
at the west of the Yidun Terrane, immediately prior to the Late Triassic
magmatism.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35778.1/594989/Imaging-the-Late-Triassic-lithospheric  

Cambrian magmatic flare-up, central Tibet: Magma mixing in
proto-Tethyan arc along north Gondwanan margin

Pei-yuan Hu; Qing-guo Zhai; Peter A. Cawood; Guo-chun Zhao; Jun Wang ...

Abstract:
Accompanying Gondwana assembly, widespread but diachronous Ediacaran−early
Paleozoic magmatism of uncertain origin occurred along the supercontinent’s
proto-Tethyan margin. We report new geochemical, isotopic, and
geochronological data for Cambrian magmatic rocks (ca. 500 Ma) from the
Gondwana-derived North Lhasa terrane, located in the present-day central
Tibetan Plateau. The magmatic rocks are composed of basalts, gabbros,
quartz monzonites, granitoids (with mafic microgranular enclaves), and
rhyolites. Nd-Hf isotopic and whole-rock geochemical data indicate that
these rocks were probably generated by mixing of mantle-derived mafic and
crust-derived felsic melts. The mantle end-member volumes of mafic,
intermediate, and felsic rocks are ∼75%−100%, 50%−60%, and 0−30%,
respectively. Integration of our new data with previous studies suggests
that the North Lhasa terrane experienced long-term magmatism through the
Ediacaran to Ordovician (ca. 572−483 Ma), with a magmatic flare-up at ca.
500 Ma. This magmatism, in combination with other Ediacaran−early Paleozoic
magmatism along the proto-Tethyan margin, was related to an Andean-type
arc, with the magmatic flare-up event related to detachment of the oceanic
slab following collisional accretion of Asian microcontinental fragments to
northern Gondwana. Diachroneity of the proto-Tethyan arc system along the
northern Gondwanan margin (ca. 581−531 Ma along the Arabian margin and ca.
512−429 Ma along the Indian-Australian margin) may have been linked to
orogenesis within Gondwana. The North Lhasa terrane was probably involved
in both Arabian and Indian-Australian proto-Tethyan Andean-type orogens,
based on its paleogeographic location at the northern end of the East
African orogen.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35859.1/594953/Cambrian-magmatic-flare-up-central-Tibet-Magma  

Rift inheritance controls the switch from thin- to thick-skinned
thrusting and basal décollement re-localization at the
subduction-to-collision transition

Stefano Tavani; Pablo Granado; Amerigo Corradetti; Giovanni Camanni;
Gianluca Vignaroli ...

Abstract:
In accretionary convergent margins, the subduction interface is formed by a
lower plate décollement above which sediments are scraped off and
incorporated into the accretionary wedge. During subduction, the basal
décollement is typically located within or at the base of the sedimentary
pile. However, the transition to collision implies the accretion of the
lower plate continental crust and deformation of its inherited rifted
margin architecture. During this stage, the basal décollement may remain
confined to shallow structural levels as during subduction or re-localize
into the lower plate middle-lower crust. Modes and timing of such
re-localization are still poorly understood. We present cases from the
Zagros, Apennines, Oman, and Taiwan belts, all of which involve a former
rifted margin and point to a marked influence of inherited rift-related
structures on the décollement re-localization. A deep décollement level
occurs in the outer sectors of all of these belts, i.e., in the zone
involving the proximal domain of pre-orogenic rift systems. Older—and
shallower—décollement levels are preserved in the upper and inner zones of
the tectonic pile, which include the base of the sedimentary cover of the
distal portions of the former rifted margins. We propose that thinning of
the ductile middle crust in the necking domains during rifting, and its
complete removal in the hyperextended domains, hampered the development of
deep-seated décollements during the inception of shortening. Progressive
orogenic involvement of the proximal rift domains, where the ductile middle
crust was preserved upon rifting, favors its reactivation as a décollement
in the frontal portion of the thrust system. Such décollement eventually
links to the main subduction interface, favoring underplating and the
upward motion of internal metamorphic units, leading to their final
emplacement onto the previously developed tectonic stack.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35800.1/594954/Rift-inheritance-controls-the-switch-from-thin-to  

No ring fracture in Mono Basin, California

Wes Hildreth; Judy Fierstein; Juliet Ryan-Davis

Abstract:
In Mono Basin, California, USA, a near-circular ring fracture 12 km in
diameter was proposed by R.W. Kistler in 1966 to have originated as the
protoclastic margin of the Cretaceous Aeolian Buttes pluton, to have been
reactivated in the middle Pleistocene, and to have influenced the arcuate
trend of the chain of 30 young (62−0.7 ka) rhyolite domes called the Mono
Craters. In view of the frequency and recency of explosive eruptions along
the Mono chain, and because many geophysicists accepted the ring fracture
model, we assembled evidence to test its plausibility. The shear zone
interpreted as the margin of the Aeolian Buttes pluton by Kistler is 50−400
m wide but is exposed only along a 7-km-long set of four southwesterly
outcrops that subtend only a 70° sector of the proposed ring. The southeast
end of the exposed shear zone is largely within the older June Lake pluton,
and at its northwest end, the contact of the Aeolian Buttes pluton with a
much older one crosses the shear zone obliquely. Conflicting attitudes of
shear structures are hard to reconcile with intrusive protoclasis. Also
inconsistent with the margin of the ovoid intrusion proposed by Kistler,
unsheared salients of the pluton extend ∼1 km north of its postulated
circular outline at Williams Butte, where there is no fault or other
structure to define the northern half of the hypothetical ring. The shear
zone may represent regional Cretaceous transpression rather than the margin
of a single intrusion. There is no evidence for the Aeolian Buttes pluton
along the aqueduct tunnel beneath the Mono chain, nor is there evidence for
a fault that could have influenced its vent pattern. The apparently arcuate
chain actually consists of three linear segments that reflect Quaternary
tectonic influence and not Cretaceous inheritance. A rhyolitic magma
reservoir under the central segment of the Mono chain has erupted many
times in the late Holocene and as recently as 700 years ago. The ring
fracture idea, however, prompted several geophysical investigations that
sought a much broader magma body, but none identified a low-density or
low-velocity anomaly beneath the purported 12-km-wide ring, which we
conclude does not exist.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35747.1/594955/No-ring-fracture-in-Mono-Basin-California  

Timing and amount of southern Cascadia earthquake subsidence over the
past 1700 years at northern Humboldt Bay, California, USA

Jason S. Padgett; Simon E. Engelhart; Harvey M. Kelsey; Robert C. Witter;
Niamh Cahill ...

Abstract:
Stratigraphic, lithologic, foraminiferal, and radiocarbon analyses indicate
that at least four abrupt mud-over-peat contacts are recorded across three
sites (Jacoby Creek, McDaniel Creek, and Mad River Slough) in northern
Humboldt Bay, California, USA (∼40.8°N, 124.2°W). The stratigraphy records
subsidence during past megathrust earthquakes at the southern Cascadia
subduction zone ∼40 km north of the Mendocino Triple Junction. Maximum and
minimum radiocarbon ages on plant macrofossils from above and below
laterally extensive (>6 km) contacts suggest regional synchroneity of
subsidence. The shallowest contact has radiocarbon ages that are consistent
with the most recent great earthquake at Cascadia, which occurred at 250
cal yr B.P. (1700 CE). Using Bchron and OxCal software, we model ages for
the three older contacts of ca. 875 cal yr B.P., ca. 1120 cal yr B.P., and
ca. 1620 cal yr B.P. For each of the four earthquakes, we analyze
foraminifera across representative mud-over-peat contacts selected from
McDaniel Creek. Changes in fossil foraminiferal assemblages across all four
contacts reveal sudden relative sea-level (RSL) rise (land subsidence) with
submergence lasting from decades to centuries. To estimate subsidence
during each earthquake, we reconstructed RSL rise across the contacts using
the fossil foraminiferal assemblages in a Bayesian transfer function. The
coseismic subsidence estimates are 0.85 ± 0.46 m for the 1700 CE
earthquake, 0.42 ± 0.37 m for the ca. 875 cal yr B.P. earthquake, 0.79 ±
0.47 m for the ca. 1120 cal yr B.P. earthquake, and ≥0.93 m for the ca.
1620 cal yr B.P. earthquake. The subsidence estimate for the ca. 1620 cal
yr B.P. earthquake is a minimum because the pre-subsidence paleoenvironment
likely was above the upper limit of foraminiferal habitation. The
subsidence estimate for the ca. 875 cal yr B.P. earthquake is less than
half (<50%) of the subsidence estimates for the other contacts and suggests that
subsidence magnitude varied over the past four earthquake cycles in
southern Cascadia.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35701.1/594743/Timing-and-amount-of-southern-Cascadia-earthquake  

Fault kinematics: A record of tectono-climatically controlled
sedimentation along passive margins, an example from the U.S. Gulf of
Mexico

Abah P. Omale; Juan M. Lorenzo; Ali AlDhamen; Peter D. Clift; A. Alexander
G. Webb

Abstract:
Faults offsetting sedimentary strata can record changes in sedimentation
driven by tectonic and climatic forcing. Fault kinematic analysis is
effective at evaluating changes in sediment volumes at salt/shale-bearing
passive margins where sediment loading drives faulting. We explore these
processes along the northern Gulf of Mexico. Incremental throw along 146
buried faults studied across onshore Louisiana revealed continual Cenozoic
fault reactivation punctuated by inactive periods along a few faults. Fault
scarp heights measured from light detection and ranging (LiDAR) data are
interpreted to show that Cenozoic fault reactivation continued through the
Pleistocene. The areas of highest fault throw and maximum sediment
deposition shifted from southwest Louisiana in the early Miocene to
southeast Louisiana in the middle−late Miocene. These changes in the locus
of maximum fault reactivation and sediment deposition were controlled by
changing tectonics and climate in the source areas. Early Miocene fault
throw estimates indicate a depocenter farther east than previously mapped
and support the idea that early Miocene Appalachian Mountain uplift and
erosion routed sediment to southeast Louisiana. By correlating changes in
fault throw with changes in sediment deposition, we suggest that (1) fault
kinematic analysis can be used to evaluate missing sediment volumes because
fault offsets can be preserved despite partial erosion, (2) fault throw
estimates can be used to infer changes in past tectonic and climate-related
processes driving sedimentation, and (3) these observations are applicable
to other passive margins with mobile substrates and faulted strata within
overfilled sedimentary basins.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35623.1/594599/Fault-kinematics-A-record-of-tectono-climatically  

An extreme climate gradient-induced ecological regionalization in the
Upper Cretaceous Western Interior Basin of North America

Landon Burgener; Ethan Hyland; Emily Griffith; Helena Mitášová; Lindsay E.
Zanno ...

Abstract:
The Upper Cretaceous Western Interior Basin of North America provides a
unique laboratory for constraining the effects of spatial climate patterns
on the macroevolution and spatiotemporal distribution of biological
communities across geologic timescales. Previous studies suggested that
Western Interior Basin terrestrial ecosystems were divided into distinct
southern and northern communities, and that this provincialism was
maintained by a putative climate barrier at ∼50°N paleolatitude; however,
this climate barrier hypothesis has yet to be tested. We present mean
annual temperature (MAT) spatial interpolations for the Western Interior
Basin that confirm the presence of a distinct terrestrial climate barrier
in the form of a MAT transition zone between 48°N and 58°N paleolatitude
during the final 15 m.y. of the Cretaceous. This transition zone was
characterized by steep latitudinal temperature gradients and divided the
Western Interior Basin into warm southern and cool northern biomes.
Similarity analyses of new compilations of fossil pollen and leaf records
from the Western Interior Basin suggest that the biogeographical
distribution of primary producers in the Western Interior Basin was heavily
influenced by the presence of this temperature transition zone, which in
turn may have impacted the distribution of the entire trophic system across
western North America.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35904.1/594464/An-extreme-climate-gradient-induced-ecological  

Debris flow initiation from ravel-filled channel bed failure following
wildfire in a bedrock landscape with limited sediment supply

Marisa C. Palucis; Thomas P. Ulizio; Michael P. Lamb

Abstract:
Steep, rocky landscapes often produce large sediment yields and debris
flows following wildfire. Debris flows can initiate from landsliding or
rilling in soil-mantled portions of the landscape, but there have been few
direct observations of debris flow initiation in steep, rocky portions of
the landscape that lack a thick, continuous soil mantle. We monitored a
steep, first-order catchment that burned in the San Gabriel Mountains,
California, USA. Following fire, but prior to rainfall, much of the
hillslope soil mantle was removed by dry ravel, exposing bedrock and
depositing ∼0.5 m of sandy sediment in the channel network. During a
one-year recurrence rainstorm, debris flows initiated in the channel
network, evacuating the accumulated dry ravel and underlying cobble bed,
and scouring the channel to bedrock. The channel abuts a plowed terrace,
which allowed a complete sediment budget, confirming that ∼95% of sediment
deposited in a debris flow fan matched that evacuated from the channel,
with a minor rainfall-driven hillslope contribution. Subsequent larger
storms produced debris flows in higher-order channels but not in the
first-order channel because of a sediment supply limitation. These
observations are consistent with a model for post-fire ravel routing in
steep, rocky landscapes where sediment was sourced by incineration of
vegetation dams—following ∼30 years of hillslope soil production since the
last fire—and transported downslope by dry processes, leading to a
hillslope sediment-supply limitation and infilling of low-order channels
with relatively fine sediment. Our observations of debris flow initiation
are consistent with failure of the channel bed alluvium due to grain size
reduction from dry ravel deposits that allowed high Shields numbers and
mass failure even for moderate intensity rainstorms.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35822.1/594456/Debris-flow-initiation-from-ravel-filled-channel  

The assembly of the South China and Indochina blocks: Constraints from
the Triassic felsic volcanics in the Youjiang Basin

Chengshi Gan; Yuejun Wang; Yuzhi Zhang; Xin Qian; Aimei Zhang

Abstract:
The Youjiang Basin is usually regarded as an important foreland basin in
the southern part of the South China Block that is related to the
convergence of the South China and Indochina blocks during the
Permian-Triassic. However, the nature of the basin remains controversial
due to questions about the subduction polarity and suture boundary between
these two blocks. Permian-Triassic felsic volcanics across the Dian-Qiong
and Song Ma suture zones could offer new insights into the convergent
processes of the South China and Indochina blocks. This study presents
detailed petrological, zircon U-Pb dating, and Hf-O isotope and whole-rock
geochemical analyses for the Triassic felsic volcanics of the Youjiang
Basin (northeast of the Dian-Qiong). The dacites and rhyolites from the
Beisi and Baifeng Formations were dated at ca. 240−245 Ma. All of the
felsic volcanics are characterized by high SiO2 (69.40−73.15 wt%),
FeOt/MgO, 10000*Ga/Al, TZr, and δ18O (9.7−11.8‰), and by negative εNd(t)
(from −9.6 to −12.3) and zircon εHf(t) (from −6.2 to −14.5), with A-type
granitoid geochemical affinities, suggesting the reworking of crustal rocks
in an extensional setting. Permian-Triassic felsic igneous rocks display
similar geochemical signatures across the Dian-Qiong suture zone, whereas
they show distinctive Sr-Nd and zircon Hf-O isotopes across the Song Ma
suture zone. The felsic igneous rocks to the northeast of the Song Ma
suture zone have much lower εNd(t) and higher δ18O
with negative zircon εHf(t) than those to the southwest, which
have positive zircon εHf(t). Combined with other geological and
geophysical features, it is inferred that the Song Ma suture zone was
probably the suture boundary between the South China and Indochina blocks,
and the Youjiang Basin was likely a peripheral foreland basin in response
to the southwestward convergence of the South China Block toward the
Indochina Block.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35816.1/594457/The-assembly-of-the-South-China-and-Indochina  

High-latitude ice and climate control on sediment supply across SW
Gondwana during the late Carboniferous and early Permian

N. Griffis; I. Montañez; R. Mundil; D. Le Heron; P. Dietrich ...

Abstract:
The response of sediment routing to climatic changes across
icehouse-to-greenhouse turnovers is not well documented in Earth’s
pre-Cenozoic sedimentary record. Southwest Gondwana hosts one of the
thickest and most laterally extensive records of Earth’s penultimate
icehouse, the late Paleozoic ice age. We present the first high-resolution
U-Pb zircon chemical abrasion−isotope dilution−thermal ionization mass
spectrometry (CA-ID-TIMS) analysis of late Paleozoic ice age deposits in
the Kalahari Basin of southern Africa, which, coupled with existing
CA-ID-TIMS zircon records from the Paraná and Karoo Basins, we used to
refine the late Paleozoic ice age glacial history of SW Gondwana. Key
findings from this work suggest that subglacial evidence in the Kalahari
region is restricted to the Carboniferous (older than 300 Ma), with
glacially influenced deposits culminating in this region by the earliest
Permian (296 Ma). The U-Pb detrital zircon geochronologic records from the
Paraná Basin of South America, which was located downstream of the Kalahari
Basin in the latest Carboniferous and Permian, indicate that large-scale
changes in sediment supplied to the Paraná were contemporaneous with shifts
in the SW Gondwana ice record. Gondwanan deglaciation events were
associated with the delivery of far-field, African-sourced sediments into
the Paraná Basin. In contrast, Gondwanan glacial periods were associated
with the restriction of African-sourced sediments into the basin. We
interpret the influx of far-field sediments into the Paraná Basin as an
expansion of the catchment area for the Paraná Basin during the
deglaciation events, which occurred in the latest Carboniferous (300−299
Ma), early Permian (296 Ma), and late early Permian (<284 Ma). The
coupled ice and detrital zircon records for this region of Gondwana present
opportunities to investigate climate feedbacks associated with changes in
freshwater and nutrient delivery to late Paleozoic ocean basins across the
turnover from icehouse to greenhouse conditions.

View article:
https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35852.1/594458/High-latitude-ice-and-climate-control-on-sediment  

Credit: 
Geological Society of America

Here's how insects coax plants into making galls

image: Hormaphis cornu aphids feed on witch hazel leaves and coax the plants into making galls.

Image: 
David Stern

Insects can reprogram plant growth, transforming ordinary plant parts into intricately patterned shelters that are safe havens for feeding and reproduction.

These structures, called galls, have fascinated biologists for centuries. They're crafted by a variety of insects, including some species of aphids, mites, and wasps. And they take on innumerable forms, each specific in shape and size to the insect species that created it -- from knobs to cone-shaped protrusions to long, thin spikes. Some even resemble flowers.

Insects create galls by manipulating the development of plants, but figuring out exactly how they perform this feat "feels like one of the great unsolved problems in biology," says David Stern, a group leader at the Howard Hughes Medical Institute's Janelia Research Campus. "How does an organism of one kingdom take control of the genome of an organism in another kingdom to completely reorganize its development, to produce a home for itself?"

Now, Stern and his colleagues have identified the first examples of insect genes that directly guide gall development. These genes are turned on in aphids' salivary glands and appear to direct gall formation when the insects spit their saliva into the plants. One gene the team identified determines whether such galls will be red or green, the researchers report in a paper published March 2 in Current Biology.

"I think they've discovered essentially new territory," says Patrick Abbot, a molecular ecologist at Vanderbilt University who wasn't involved in the work. There's a strong likelihood that similar genes are found in other insects, he says. "It makes me want to run to the lab and start looking back through my data."

Figuring out how to study gall formation has been a longstanding challenge, Stern says -- one that has interested him since he was a graduate student doing fieldwork in Malaysia. Gall-making insects aren't laboratory model organisms like fruit flies, and not as much is known about their genetics.

A few years ago, while wandering the woods of Janelia's riverside campus, Stern made a convenient observation. Hormaphis cornu aphids make galls on witch hazel trees, small flowering trees that are abundant on campus. Even on a single leaf, Stern noticed, some Hormaphis aphids were making green galls, while others were making red ones. It set up a natural experiment -- a chance to compare two visibly distinct kinds of galls and figure out what's genetically different between the aphids that make them.

When Stern and his team sequenced the genomes of aphids that made green galls and those that made red galls, they pinpointed a gene that varied between the two genomes. Aphids with one version of a gene that they named "determinant of gall color" made green galls; aphids with a different version made red ones. The finding piqued their curiosity, as the gene didn't look like any previously identified genes.

To dive deeper, they collected aphids from both witch hazel trees and river birch trees. (Hormaphis cornu aphids live on river birch trees in the summer, but don't make galls there.) Back in the lab, the researchers carefully dissected out the insects' tiny salivary glands. In these glands, the team hunted for genes that were turned on only in the aphids that made galls. The researchers found that the gene determinant of gall color was similar to hundreds of other genes that were all turned on specifically in the gall-forming aphids. Stern's team dubbed this group bicycle genes.

The gall-making aphids on the witch hazel trees switch on these genes to make BICYCLE proteins. The insects might spit these proteins into plant cells to reprogram leaf tissue into making a gall instead of normal plant parts, says Aishwarya Korgaonkar, a research scientist in the Stern lab who helped lead the project.

The team is now working to identify the plant molecules targeted by the aphids' BICYCLE proteins, says Korgaonkar. That could help them understand just how BICYCLE proteins goad plants into forming galls.

"After years of wondering what's going on, it's very rewarding to have something to show for it," Stern says.

Credit: 
Howard Hughes Medical Institute

Complex fluid dynamics may explain hydroplaning

image: Experimental setup for visualizing water flow in tire grooves, along with some sample results.

Image: 
Serge Simoens

WASHINGTON, March 2, 2021 -- When a vehicle travels over a wet or flooded road, water builds up in front of the tire and generates a lift force. In a phenomenon known as hydroplaning, this force can become large enough to lift the vehicle off the ground.

In Physics of Fluids, by AIP Publishing, scientists from the CNRS, the University of Lyon, and The Michelin Group use a laser imaging technique to study water flow in front of and through tire grooves.

To counteract hydroplaning, tread designs are chosen to drain water from the front of the tire without decreasing its ability to adhere to the road. Very few quantitative experimental studies of the movement of water through tire grooves have been done, so little is known about the exact flow patterns in these situations.

The only previously published work reporting quantitative velocity measurements in tire grooves was done with a high-speed camera and used millet seeds as water tracers. The seeds are about 1.5 millimeters in diameter, though, and provide poor contrast, so velocity information inside the grooves was not usable for a flow analysis.

Currently, research into hydroplaning uses a test track equipped with a transparent window embedded in the ground. The area above is flooded and a tire rolling over the window is observed with a high-speed camera.

The investigators developed a more sophisticated approach involving fluorescent seeding particles to visualize the flow and used a sheet of laser light to illuminate the area. The fluorescent particles were only 35 microns in diameter, about half the thickness of a human hair, and had a density close to that of water.

"The first remarkable feature of the flow inside grooves is the presence of white elongated filaments or columns," said author Damien Cabut. "This indicates the presence of a gaseous phase, possibly air bubbles or cavitation."

There are two phases in the grooves, liquid and gas, which complicates the analysis. The investigators found vortices and bubbles in some grooves. The authors showed the number of vortices inside a groove is related to the ratio of the groove's width to its height.

"One vortex creation mechanism could be linked to the flow around the sharp edge of the tire rib. This effect is similar to one observed for delta wings in aerodynamic lift," said Cabut.

The flow structure in the grooves was found to be similar across increasing vehicle speeds when distances and velocities were properly scaled. This self-similarity could have implications for predicting hydroplaning.

Cabut said more work needs to be done to understand the formation of vortices and the role of bubbles in the grooves. The experimental setup they developed will be a great help with that future work.

Credit: 
American Institute of Physics

Ecology: The scientific literature dominated by men and a handful of countries

Publishing in peer-reviewed scientific journals is crucial for the development of a researcher's career. The scientists who publish most often in the most prestigious journals generally acquire greater renown, as well as higher responsibilities. However, a team involving two CNRS researchers has just shown that the vast majority of scientific articles in the fields of ecology and conservation biology are authored by men working in a few Western countries. Men represent 90% of the 1,051 authors who have published most frequently in the 13 major scientific journals in the field since 1945, and three quarters of these men are affiliated with institutions in just five countries (the United States, Canada, Australia, the United Kingdom, and Germany). There are signs of improvement, however: women are increasingly among the authors who publish the most, representing 18% of the youngest authors but only 3% of the oldest. The geographic diversity of the countries in which authors work has also increased markedly, by 15%, since 1980. Published in Conservation Letters on 2 March 2021, this study calls for combating the discrimination engendered by the publication system and proposes concrete measures to halt the overrepresentation of men and Western countries.

Credit: 
CNRS

New study gives the most detailed look yet at the neuroscience of placebo effects

image: fMRI activity during pain is reduced in the areas shown in blue. Many of these are involved in constructing the experience of pain, including the feeling of suffering, and motivating actions to avoid it. Activity is increased in the areas shown in red and yellow. These are broadly involved in the control of cognition and memory. The involvement of these areas varied across studies, suggesting that different types of placebo effects involved different brain mechanisms.

Image: 
Image provided by M.Zunhammer et al.

A large proportion of the benefit that a person gets from taking a real drug or receiving a treatment to alleviate pain is due to an individual's mindset, not to the drug itself. Understanding the neural mechanisms driving this placebo effect has been a longstanding question. A meta-analysis published in Nature Communications finds that placebo treatments to reduce pain, known as placebo analgesia, reduce pain-related activity in multiple areas of the brain.

Previous work of this kind relied on small-scale studies, so until now researchers did not know whether the neural mechanisms underlying placebo effects observed to date would hold up across larger samples. This study represents the first large-scale mega-analysis, which looks at individual participants' whole-brain images. It enabled researchers to examine parts of the brain that they did not have sufficient resolution to see in the past. The analysis comprised 20 neuroimaging studies with 600 healthy participants. The results provide new insight on the size, localization, significance and heterogeneity of placebo effects on pain-related brain activity.

The research reflects an international collaborative effort by the Placebo Neuroimaging Consortium, led by Tor Wager, the Diana L. Taylor Distinguished Professor in Neuroscience at Dartmouth, and Ulrike Bingel, a professor at the Center for Translational Neuro- and Behavioral Sciences in the department of neurology at University Hospital Essen, with Matthias Zunhammer and Tamás Spisák of University Hospital Essen serving as co-authors. The meta-analysis is the second with this sample and builds on the team's earlier research using an established pain marker developed by Wager's lab.

"Our findings demonstrate that the participants who showed the most pain reduction with the placebo also showed the largest reductions in brain areas associated with pain construction," explains co-author Wager, who is also the principal investigator of the Cognitive and Affective Neuroscience Lab at Dartmouth. "We are still learning how the brain constructs pain experiences, but we know it's a mix of brain areas that process input from the body and those involved in motivation and decision-making. Placebo treatment reduced activity in areas involved in early pain signaling from the body, as well as motivational circuits not tied specifically to pain."

Across the studies in the meta-analysis, participants had indicated that they felt less pain; however, the team wanted to find out if the brain responded to the placebo in a meaningful way. Is the placebo changing the way a person constructs the experience of pain or is it changing the way a person thinks about it after the fact? Is the person really feeling less pain?

With the large sample, the researchers were able to confidently localize placebo effects to specific zones of the brain, including the thalamus and the basal ganglia. The thalamus serves as a gateway for sights and sounds and all kinds of sensory motor input. It has lots of different nuclei, which act like processing stations for different kinds of sensory input. The results showed that parts of the thalamus that are most important for pain sensation were most strongly affected by the placebo. In addition, parts of the somatosensory cortex that are integral to the early processing of painful experiences were also affected. The placebo effect also impacted the basal ganglia, which are important for motivation and connecting pain and other experiences to action. "The placebo can affect what you do with the pain and how it motivates you, which could be a larger part of what's happening here," says Wager. "It's changing the circuitry that's important for motivation."

The findings revealed that placebo treatments reduce activity in the posterior insula, one of the areas involved in the early construction of the pain experience. It is the only site in the cortex that can be stimulated to evoke the sense of pain. The major ascending pain pathway runs from parts of the thalamus to the posterior insula. The results provide evidence that the placebo affects the pathway through which pain is constructed.

Prior research has illustrated that with placebo effects, the prefrontal cortex is activated in anticipation of pain. The prefrontal cortex helps keep track of the context of the pain and maintain the belief that it exists. When the prefrontal cortex is activated, there are pathways that trigger opioid release in the midbrain that can block pain and pathways that can modify pain signaling and construction.

The team found that activation of the prefrontal cortex is heterogeneous across studies, meaning that no particular areas in this region were activated consistently or strongly across the studies. These differences across studies are similar to what is found in other areas of self-regulation, where different types of thoughts and mindsets can have different effects. For example, other work in Wager's laboratory has found that rethinking pain by using imagery and storytelling typically activates the prefrontal cortex, but mindful acceptance does not. Placebo effects likely involve a mix of these types of processes, depending on the specifics of how it is given and people's predispositions.

"Our results suggest that placebo effects are not restricted solely to either sensory/nociceptive or cognitive/affective processes, but likely involves a combination of mechanisms that may differ depending on the placebo paradigm and other individual factors," explains Bingel. "The study's findings will also contribute to future research in the development of brain biomarkers that predict an individual's responsiveness to placebo and help distinguish placebo from analgesic drug responses, which is a key goal of the new collaborative research center, Treatment Expectation."

Understanding the neural systems that produce and moderate placebo responses has important implications for clinical care and drug development. Placebo responses could be harnessed in a context-, patient-, and disease-specific manner. The placebo effect could also be leveraged alongside a drug, surgery, or other treatment, as it could potentially enhance patient outcomes.

Credit: 
Dartmouth College

Most older adults haven't gotten screened or tested for hearing loss, poll finds

image: Key findings from the National Poll on Healthy Aging's report on hearing loss screening and testing among adults over 50.

Image: 
University of Michigan

Eighty percent of Americans over 50 say their primary care doctor hasn't asked about their hearing in the past two years, and nearly as many -- 77% -- haven't had their hearing checked by a professional in that same time, according to a new national poll report.

That's despite a growing body of evidence about the importance of hearing to other aspects of life, from dementia and risk of falls to the ability to stay connected to friends and family.

Men were more likely than women to say they'd had a recent hearing screening or test, and so were people ages 65 to 80 compared with those in their pre-Medicare years, according to the findings from the National Poll on Healthy Aging, based at the University of Michigan's Institute for Healthcare Policy and Innovation. But even among men and those over 65, 72% hadn't been tested.

Older adults who said they were in fair or poor physical or mental health overall were less likely to have had their hearing tested in the past two years. This was despite the fact that they were more likely to experience hearing issues.

In all, 16% of the older adults polled said they had fair or poor hearing ability. But the percentage who reported they had fair or poor hearing rose to 28% among those who called their physical health fair or poor, and 31% among those who rated their mental health fair or poor.

"Hearing loss can occur throughout life, but the risk rises with age as our ears lose function. Many people don't realize they've lost hearing ability unless they're screened or tested," says Michael McKee, M.D., M.P.H., a family medicine physician and health services researcher at Michigan Medicine, U-M's academic medical center. "Age-related hearing loss can have wide-ranging consequences, and can be addressed with assistive technologies, yet these data show a major gap in detection, and disparities between groups."

McKee and Department of Family Medicine chair Philip Zazove, M.D., who both use cochlear implants, worked with the poll team to develop the questions and analyze the results contained in the new poll report.

The poll receives support from AARP and Michigan Medicine, and draws from the answers of a national sample of more than 2,000 adults aged 50 to 80.

In all, 6% of older adults said they currently use a device to aid their hearing, even though numerous studies show that at least 50% of older adults probably have some degree of hearing loss.

Zazove notes that health insurance plans vary widely in their coverage of hearing screening by primary care providers, hearing tests by audiologists and purchase of hearing aids and cochlear implants.

"Having to bear the cost of testing and devices can be a barrier to timely care, on top of the social stigma attached to age-related hearing loss and wearing a device," he says. "These findings spotlight a tremendous opportunity for primary care and audiology clinicians to partner better, and for health policy decisionmakers to engage on this issue."

Traditional Medicare does not cover routine hearing tests or devices, though it encourages primary care providers to use standard questionnaires about hearing during annual wellness visits. Medicare Advantage plans and employer-based insurance plans may cover some hearing-related services, while Medicaid coverage varies by state, and Veterans Health coverage is mainly for hearing issues connected to military service.

Despite the fact that most of them had not been screened or tested for hearing loss recently, 62% of the older adults polled felt it's somewhat or very important to have such tests every two years.

"These poll results are especially timely given the U.S. Food and Drug Administration's expected regulations regarding over-the-counter hearing aids, which could improve access but also make screening and testing more important for those who might seek to buy their own device without a prescription," says Preeti Malani, M.D., the director of the NPHA, who has training in geriatrics as well as infectious disease and is a physician at Michigan Medicine.

Congress directed the FDA to develop regulations for OTC hearing aids in 2017, and they were due to be unveiled in August 2020 but were delayed due to the COVID-19 pandemic.

"A person's ability to hear greatly affects how they interact with other people, loved ones, and the environment around them," says Alison Bryant, Ph.D., senior vice president of research for AARP. "It's discouraging to learn that the majority of adults over 50 are not getting their hearing tested regularly, and may not know that their hearing is declining."

Credit: 
Michigan Medicine - University of Michigan

Child abuse surges in times of crisis - the pandemic may be different

While natural disasters and economic recessions have traditionally triggered an uptick in child abuse, a new study suggests that cases may have declined in the first months of the pandemic, compared with the same timeframe in previous years.

In the study, led by UCSF Benioff Children's Hospitals and Children's Mercy Kansas City, researchers tracked the number of pediatric inpatients ages 5 and under in 52 children's hospitals nationwide for the first eight months of 2020. They found a steep decline in the number of ER visits and hospital admissions, including those requiring treatment for physical abuse. This started in mid-March - around the time some states issued shelter-in-place orders - according to the study, which was published March 1, 2021, in Pediatrics.

When the researchers looked at the proportion of patients whose abuse had resulted in admission to the ICU, along with other markers of severe injury, for the period from March 16 to Aug. 31, they found little difference from the same period in prior years.

"If the proportion of children diagnosed with more severe abusive injuries had increased during the pandemic, this would indicate that declines in physical abuse were driven by children with less severe abusive injuries not presenting for medical care or being missed by clinicians," said first author Sunitha Kaiser, MD, a pediatric hospitalist at UCSF Benioff Children's Hospitals and associate professor in the UCSF departments of pediatrics, and epidemiology and biostatistics.

"Instead, we found the severity of injuries was similar to pre-pandemic levels, which suggests that physical abuse may have decreased similarly across the full spectrum of severity," she said.

The researchers found that there was a lower percentage of physically abused infants needing ICU care during the pandemic period compared to the same timeframe in previous years: 15.4 percent versus 21.3 percent. The study found little difference between those timeframes in the proportion of abused children who had died in the hospital (about 2 percent), and the proportion of abused children admitted for abusive head trauma.

CARES, Eviction Protections May Have Prevented Child Abuse

While further studies may reveal different patterns, including the possibility that evidence of abuse may not be apparent for months to follow, Kaiser suggests that interventions such as financial stipends from the Coronavirus Aid, Relief, and Economic Security (CARES) Act and eviction protections may have alleviated adult stress, preventing spikes in violence toward children.

"Our take-home message is that policies that help reduce stress on families should continue to be prioritized to prevent unnecessary harms to children. Clinicians, teachers and caretakers should also continue to be very vigilant in suspecting and reporting potential abuse, because we know it is historically under-detected and under-reported."

Other explanations for the study's findings include failure by clinicians to identify abuse, a scenario that Kaiser says is less likely, because patient volumes dropped during the pandemic, potentially allowing doctors to dedicate more time to patients presenting with injuries of questionable cause.

A 2016 paper cited in the study found that the rate of abusive head trauma in children under 5 increased from 9.8 per 100,000 child years before 2007, to 15.6 per 100,000 child years during the recession of 2007 to 2009.

Credit: 
University of California - San Francisco

Plastic solar cells combine high-speed optical communication with indoor energy harvesting

image: a, Schematic of the OPV device architecture; b, fabricated OPV sample including eight individual cells and four common ground pads; c, block diagram of the multiple-input multiple output (MIMO) visible light data transmission system; d, experimental 2-by-2 MIMO setup with a single imaging lens; e, estimated and measured signal-to-noise ratio (SNR) of the two MIMO channels; f, adaptive bit loading applied to the orthogonal frequency division multiplexing (OFDM) data encoding scheme. The organic materials used in the OPV are PTB7-Th and EH-IDTBR. The subcarriers that exhibit the highest SNR are exposed to signals with up to 256 unique signal constellation points leading to the transmission of 8 (log2(256)) bits per transmission step. For comparison, on-off keying (OOK) would only allow one bit per transmission. In the 2-by-2 MIMO system, there are two independent channels and consequently, the maximum number of bits that can be transmitted per transmission step is 16 in the high SNR regions.

Image: 
by Iman Tavakkolnia, Lethy K. Jagadamma, Rui Bian, Pavlos P. Manousiadis, Stefan Videv, Graham A. Turnbull, Ifor D. W. Samuel and Harald Haas

Around the world there are currently more than 18 billion internet-connected mobile devices. In the next 10 years, anticipated growth in the Internet of Things (IoT) and in machine-type communication in general will lead to a world of hundreds of billions of data-connected objects. Such growth poses two very challenging problems:

How can we securely connect so many wireless devices to the Internet when the radio-frequency bandwidth has already become very scarce?

How can all these devices be powered?

Regular, manual charging of all mobile Internet-connected devices will not be feasible, and connection to the power-grid cannot be generally assumed. Therefore, many of these mobile devices will need to be able to harvest energy to become largely energy-autonomous.

In a new paper published in Light: Science & Applications, researchers from the University of Strathclyde and the University of St Andrews have demonstrated a plastic solar panel that combines indoor optical energy harvesting with the simultaneous reception of multiple high-speed data signals via multiple-input multiple-output (MIMO) visible light communications (VLC).

The research, led by Professor Harald Haas from the Strathclyde LiFi Research and Development Centre, and Professors Ifor Samuel and Graham Turnbull at the St Andrews Organic Semiconductor Centre, makes an important step towards the future realization of self-powered data-connected devices.

The research teams showed that organic photovoltaics (OPVs), solar cells made from similar plastic-like materials to those used in OLED smartphone displays, are suitable for high-speed optical data receivers that can also harvest power. Using an optimized combination of organic semiconductor materials, stable OPVs were designed and fabricated for efficient power conversion of indoor lighting. A panel of 4 OPV cells was then used in an optical wireless communication experiment, receiving a data rate of 363 Mb/s from an array of 4 laser diodes (each laser transmitting a separate signal), while simultaneously harvesting 11 mW of optical power.
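To put those numbers in context, here is a minimal back-of-the-envelope sketch in Python (illustrative only, not code from the paper) of how adaptive bit loading and MIMO multiply the bits carried per transmission step, using the figures quoted in the image caption above:

    import math

    def bits_per_symbol(constellation_size: int) -> int:
        """Bits carried by one subcarrier symbol, e.g. 256 points -> 8 bits."""
        return int(math.log2(constellation_size))

    # Figures from the caption: up to 256 constellation points on the best
    # subcarriers, and two independent channels in the 2-by-2 MIMO setup.
    per_channel = bits_per_symbol(256)   # 8 bits
    mimo_channels = 2
    print(per_channel * mimo_channels)   # 16 bits per transmission step

    # For comparison, on-off keying (OOK) carries a single bit per transmission.
    print(bits_per_symbol(2))            # 1 bit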

Prof Turnbull explained:

"Organic photovoltaics offers an excellent platform for indoor power harvesting for mobile devices. Their advantage over silicon is that the materials can be designed to achieve maximum quantum efficiency for typical LED lighting wavelengths. Combined with the data reception capability, this opens up a significant opportunity for self-powered Internet of Things devices."

Prof Haas added:

"Organic photovoltaic cells are very attractive because they are easily made and can be flexible, allowing mass integration into internet-connected devices. In addition, compared to inorganic detectors, OPVs have the potential to be significantly cheaper, which is a key driver to their large-scale commercial adoption.

Visible light communication provides unregulated, safe and vast resources to alleviate emerging wireless capacity bottlenecks. Of course, visible light can also provide energy. To achieve both objectives with a single device, new solar cells are needed. They must be capable of simultaneously harvesting energy and detecting data at high speeds. It is, therefore, essential to develop solar cells that have two key features: a) they exhibit a very large electrical bandwidth in the photovoltaic mode of operation, and b) have a large collection area to be able to collect a sufficient number of photons to achieve high signal-to-noise ratio (SNR) and harvest maximum energy from light. Regrettably, the two requirements are typically mutually exclusive because a large detector area results in a high capacitance and hence low electrical bandwidth.
In this research, we have overcome this fundamental limitation by using an array of OPV cells as a MIMO receiver to establish multiple parallel and independent data channels while being able to accumulate the harvested energies of all individual solar cells. To the best of our knowledge, this has never been shown before. This work therefore lays the foundation for the creation of a very large, massive MIMO solar cell receiver enabling hundreds and potentially thousands of individual data streams while using the huge collection area to harvest large amounts of energy from light (both data carrying and ambient light). It is imaginable to turn entire walls into a gigabit per second data detector while harvesting sufficient energy to power many distributed intelligent sensors, data processing and communication nodes."
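The area-versus-bandwidth trade-off Prof. Haas describes follows from the RC low-pass behavior of any photodetector: capacitance scales with collection area, and the 3 dB bandwidth is f = 1/(2*pi*R*C). The Python sketch below illustrates the scaling; the capacitance density and load resistance are assumed round numbers, not values from the study:

    import math

    def rc_bandwidth_hz(area_mm2: float,
                        cap_per_mm2_nF: float = 10.0,  # assumed capacitance density
                        load_ohm: float = 50.0) -> float:
        """3 dB bandwidth of an RC-limited photodetector: f = 1 / (2*pi*R*C)."""
        cap_farad = cap_per_mm2_nF * 1e-9 * area_mm2
        return 1.0 / (2.0 * math.pi * load_ohm * cap_farad)

    # Bandwidth falls in inverse proportion to detector area, which is why an
    # array of small cells outperforms one large cell as a data receiver.
    for area in (1.0, 10.0, 100.0):
        print(f"{area:6.1f} mm^2 -> {rc_bandwidth_hz(area) / 1e3:8.1f} kHz")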

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Desert beetle: a help for the drying planet

video: Desert beetle

Image: 
NCU/Anna Jaszczuk

The number of scientists whose work is inspired by nature is constantly growing. The lotus flower, with its ability to self-clean, is widely described in the literature and is one of the best examples of this trend. Researchers wondered why the flower behaves this way and studied its structure under the microscope. They concluded that the surface is highly hydrophobic, i.e. water remains on it as droplets. The droplets collect particles of dust and remove them as they roll off. The adhesion forces that would hold water on the flower are weak, while dirt attaches easily to the droplets, and the result is self-cleaning. Thanks to this observation, self-cleaning surfaces such as paints, roof tiles and textiles have been developed.

Rose petals, however, have a different structure. A water drop falling on a petal, whose surface is also hydrophobic, adheres and does not flow off. This petal effect is connected with a hydrophobic surface characterized by high adhesion.

Frogs able to walk on ceilings are another interesting case. Here the question arises: why does the frog not fall off a ceiling whose surface is rough? Scientists examined the structure of the frog's toe and reproduced it. A similar solution is now applied in self-adhesive envelopes: the glue is protected with a paper strip that can be peeled off with ease, yet once the glue contacts another kind of paper and the envelope is closed, it cannot be opened without cutting.

Nature has created even more complex systems. The desert beetle's armor, for example, has a dual character: it is both hydrophilic and hydrophobic, with some areas attracting water and others repelling it. Thanks to this, beetles survive in an environment as hostile as the desert; nothing sticks to their armor, wet sand in particular, and the water collected on the hydrophilic areas allows them to drink and survive.

- While watching television, I saw a program showing a beetle standing up on its legs and catching the morning dew. The insect gets water from the mist, says Dr hab. Joanna Kujawa, Prof. NCU, from the Faculty of Chemistry. - Because the remaining parts of the armor surface are covered with wax, the water flows down and the beetle is able to drink it and survive in such a harsh climate.

Researchers began to wonder how to transfer this natural solution to the laboratory, since a similar phenomenon is exploited in membrane distillation. - In this case, enzymes are taken up by the membrane through adsorption, i.e. a kind of surface adhesion, not through chemical bonding - explains Prof. Dr hab. Wojciech Kujawski from the Faculty of Chemistry, NCU. - If it is physical adsorption, desorption can easily occur, as the forces involved are weak.

The aim is to reinforce membranes: those held together by chemical bonds are more durable. They still degrade over time, but more slowly than membranes made by simply applying another layer. Chitosan has turned out to be a good solution, as the material is abundant and easily available on Earth. Chitin, which can readily be transformed into chitosan, occurs naturally in shells, e.g. shrimp shells. Huge quantities of seafood shells are discarded with no obvious use. The scientists from Toruń argue that the beetle armor structure can be mimicked and this stockpiled chitosan reused, a comprehensive approach that is also in line with the zero-waste trend.

Thanks to chitosan, water flows off even more readily, functioning like the wax on the beetle's armor. The chemists decided to attach the chitosan in the hydrophilic regions.

- Membrane distillation requires the membrane surface to be porous and hydrophobic - explains Prof. Kujawa. - You can find many examples of chitosan use in membranes, but nobody had ever attached it by chemical bonds. That opened a new perspective for us. If we bond chitosan, it stays in place, so the connection is stable.

In earlier work, the scientists first modified the chitosan and then attached it chemically to the membrane. In the current research, they first modified the membrane and then attached the chitosan. As a result, the membrane is more hydrophilic and passes more water.

- It is difficult to compare our results with those of other authors, since no papers on similar topics have been published - says Prof. Kujawa. - Researchers who physically applied chitosan to modified membranes also observed improvement, but not to the extent seen in our studies. Owing to this, we can tailor the material to a chosen process.

A membrane produced by physical modification is single-use only, as the chitosan is later washed off (eluted). - In the interest of knowledge, we performed a stability test of chemically modified membranes used for water desalination. The test was carried out over ten cycles, each lasting a few days - reveals Prof. Kujawa. - We noticed slight changes, but nothing fell apart.

The chemists from Toruń have also tested the membranes for their resistance to fouling, using fruit juices. On an unmodified surface, interaction with fruit pulp leaves residue and clogs the pores, so the membrane cannot be reused. On a surface containing chitosan, which additionally has bactericidal properties, completely different interactions occur: fruit pulp does not stick, and even if it does, it can easily be washed off under a stream of water, without chemicals. The solution can find many practical applications.

The NCU chemists have written several papers on the topic. The first, on the introduction of modified chitosan to a membrane, was published in Desalination. The next, on the introduction of chitosan to a modified membrane, was published in ACS Applied Materials & Interfaces.

The research is carried out in cooperation with a foreign partner, Prof. Samer Al-Gharabli from the Pharmaceutical and Chemical Engineering Department, German Jordanian University, Amman (Jordan). - As part of this cooperation, we conduct joint research on the design and formation of so-called "smart materials", intelligent separation materials with controlled properties for a wide range of applications - says Prof. Kujawa.

Building on these discoveries, the scientists want to develop membranes that simultaneously transport water and retain salts or other impurities even more efficiently. - This is obviously connected with water shortages on our planet - explains Prof. Kujawski. - In Poland, we will face the problem even earlier than the biggest pessimists expect. A few years ago, I took part in a seminar in Jordan where it was argued that water scarcity should be assessed not at the national level but at the level of the smallest administrative units. If you divide a country into smaller squares, it suddenly turns out that the percentage of the population affected by water shortages is growing rapidly. In Poland, there is access to water along the rivers, but when I visited Zakopane 20 years ago, I could already hear people saying 'save water, our streams are drying up'. Wells are contaminated and there are no fresh water sources, so the problems of drying up and falling groundwater levels are growing.

So scientists are looking for different ways of producing drinking water. At present, membrane techniques dominate, particularly reverse osmosis.

It is a pressure-driven process that uses non-porous membranes. - We apply a pressure of 60 bar and push water through them - explains Prof. Kujawski. The process is called reverse osmosis because in ordinary osmosis water flows from the dilute to the concentrated solution; here, water is pushed out of the concentrated solution through the membrane.
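A rough van 't Hoff estimate shows why 60 bar is sufficient for seawater. The sketch below, in Python, uses general textbook values (roughly 0.6 mol/L of NaCl dissociating into two ions), not figures from this article:

    # Van 't Hoff relation for osmotic pressure: pi = i * M * R * T
    R = 0.08314   # gas constant, L*bar/(mol*K)
    T = 298.0     # temperature, K
    i = 2.0       # van 't Hoff factor for fully dissociated NaCl
    M = 0.6       # mol/L, approximate salt content of seawater

    pi_bar = i * M * R * T
    print(f"osmotic pressure of seawater ~ {pi_bar:.0f} bar")
    # ~30 bar, so the 60 bar applied in reverse osmosis comfortably exceeds it.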

Currently, under water protection regulations, plants producing water by reverse osmosis are required to dispose of the residue, i.e. the concentrated brine. Installations were once located by the seaside, and the residue went straight back into the sea. Nowadays, other solutions are needed so that the brine can be reused. It can, for instance, be concentrated further until salt starts to crystallize; the salt thus produced can be used in various industrial processes, e.g. for chlorine or sodium hydroxide production. Around Toruń, chlorine is produced from brine in two big plants, in Włocławek and Inowrocław.

- Brine can also be processed by membrane distillation, and this is where our work with the beetles comes in - says Prof. Kujawski. - We apply hydrophobic, porous membranes, which transport vapor from the feed side to the receiving side; since salt is non-volatile, only the components that can evaporate pass through the membrane pores.

Although reverse osmosis is the dominant membrane technique among those currently applied, it is not trouble-free. The process works against an osmotic pressure that can be very high, and the applied pressure must exceed it; that pressure, applied from the outset, is the price of the process. In membrane distillation, by contrast, the energy input is significantly smaller, as the process relies on slightly different physicochemical properties. Distillation is particularly applicable in hot climates, in countries such as Italy, Spain and Greece, where solar panels can be used effectively. When a hotel located off the beaten track has to be supplied with water, a solar panel can be mounted on its roof to heat water for membrane distillation. As a result, the site is supplied with hot water on the one hand, and cool drinking water is condensed on the other. Drinking water can thus be produced cheaply, but only in limited amounts; with reverse osmosis, by comparison, we are talking about millions of liters daily.

Moreover, in countries with reasonably easy access to cheap energy, so-called electrodialysis can be used. This process employs special membranes that transport ions but not water: cations move towards the cathode, anions towards the anode, and the water remains.

There is also so-called natural (forward) osmosis, applicable to wastewater purification and water recycling. Water permeates the membrane from the dilute solution to the concentrated one, which becomes diluted during the process; the water then has to be recovered from it by an additional method.

As a process, membrane distillation has been known for 50 years. Although it attracted scientists' attention in the early 1970s, it has been applied commercially for less than twenty years, and only in small-scale, low-capacity installations producing drinking water for single houses or hotels. In Europe, membrane distillation is most extensively researched in Almeria, Spain. - In Spain, the process is powered by solar energy - says Prof. Kujawski. - The Spanish have a huge mirror array that collects sunlight and heats not only water but also metals. The heat is used for the process and, additionally, the efficiency of different configurations is examined. I had an opportunity to visit the place a few years ago, and I have to admit it is impressive.

Chemists can reassure us that people already drink desalinated seawater, not necessarily in Poland, but in Israel, for example, where reverse osmosis is used to produce drinking water; hotels in the Maldives are equipped with membrane distillation systems. - In America, tribes leading a nomadic lifestyle still exist - says Prof. Kujawski. - Scientists from one university fitted a school bus with solar panels on the roof and a membrane distillation system inside. They travel and produce water for the nomads, who move across areas where the available water is contaminated with elements such as arsenic.

It needs to be emphasized that water obtained by membrane distillation is distilled water and requires mineralization before consumption. As the scientists joke, it is bare and has to be dressed.

It is difficult to say outright whether producing drinking water from seawater is expensive; it all depends on the amounts we want to produce and the technology we choose. The countries on the Persian Gulf applied thermal methods, the first developed for producing drinking water from seawater, in which seawater is repeatedly evaporated and condensed. The process is highly energy-consuming, but the heating potential in those countries is huge. Later, in the early 1960s, the first membranes were developed, and soon after they were used for filtration. - You need to remember that when we lack drinking water, we will try to get it regardless of the price - summarizes Prof. Kujawski.

Credit: 
Nicolaus Copernicus University in Torun

TPU scientists develop efficient method to create high-strength materials for flexible electronics

image: TPU researchers Raul David Rodriguez Contreras and Evgeniya Sheremet

Image: 
TPU researchers Raul David Rodriguez Contreras and Evgeniya Sheremet

TPU researchers, jointly with their colleagues from foreign universities, have developed a method that allows laser-driven integration of metals into polymers to form electrically conductive composites. The findings are presented in the article Ultra-Robust Flexible Electronics by Laser-Driven Polymer-Nanomaterials Integration, published in the journal Advanced Functional Materials (Q1, IF 16.836).

"Currently developing breakthrough technologies such as the Internet of Things, flexible electronics, brain-computer interfaces will have a great impact on society in the next few years. The development of these technologies requires crucially new materials that exhibit superior mechanical, chemical and electric stability, comparatively low cost to produce on a large scale, as well as biocompatibility for certain applications. In this context, polymers and a globally widespread polyethylene terephthalate (PET), in particular, are of special interest. However, conventional methods of polymers modification to add the required functionality, as a rule, change conductivity of the entire polymer volume, which significantly limits their application for complex topologies of 3-manifolds,"Raul David Rodriguez Contreras, Professor of the TPU Research School of Chemistry and Applied Biomedical Sciences, says.

The scientists proposed their own method. First, aluminum nanoparticles are deposited on PET substrates; then the samples are irradiated with laser pulses, so that a conductive composite forms locally in the irradiated areas. The researchers chose aluminum because it is a cheap and readily available metal. Silver is frequently used as a conductor for flexible electronics, so the samples with aluminum nanoparticles were compared with a silver conductive paste and with graphene-based materials.

"Mechanical stability tests (abrasion, impact and stripping tests) proved that composites based on aluminum nanoparticles surpass other materials. Moreover, the material structure itself turned out to be very interesting. During laser processing, aluminium carbide is formed on sample surfaces. Furthermore, polymers induce the formation of graphene-like carbon structures. We did not expect this effect. Besides, by adjusting laser power, we can control material conductivity. In practice, using a laser, it is possible to "draw" almost any conductive structure on polymer surface and make it locally conductive,"Evgeniya Sheremet, Professor of the TPU Research School of High-Energy Physics, explains.

According to the scientists, this is the first time laser integration of metals into polymers has been used in flexible electronics. There are methods based on laser-driven "metal explosion", which fires metal into polymers at high speed, but they are more complicated to implement technologically. The TPU method involves two basic technological steps: application of nanoparticles onto the polymer surface, followed by laser processing. In addition, the method is applicable to a wide variety of materials.

"What can it be used for? First, it can be used for flexible electronics. One of the problems in this field is a low mechanical stability of products. There are many approaches to improve it. However, normally, the obtained materials would not have passed our tests. There is also photocatalysis, flexible sensors for robotics, light-emitting diodes and biomedical products among the potential fields of application," the article authors explain.

Further on, the research team plans to test the new method on other materials, such as silver, copper and carbon nanotubes, and to use various polymers. Scientists from TPU, the University of Electronic Science and Technology of China, the Leibniz Institute of Polymer Research Dresden and the University of Amsterdam took part in the research. The project is supported by the TPU Competitiveness Enhancement Program VIU-ISHFVP-198/2020.

Credit: 
Tomsk Polytechnic University

Ultra-fast electron measurement provides important findings for the solar industry

image: In the FLASH I experimental hall "Albert Einstein"

Image: 
DESY / Heiner Mueller-Elsner

The key is the ultra-fast flashes of light with which the team led by Dr. Friedrich Roth works at FLASH in Hamburg, the world's first free-electron laser in the X-ray region. "We took advantage of the special properties of this X-ray source and extended them with time-resolved X-ray photoemission spectroscopy (TR-XPS). This method is based on the external photoelectric effect, for the explanation of which Albert Einstein received the Nobel Prize in Physics in 1921."

"For the first time, we were able to directly analyze the specific charge separation and subsequent processes when light hits a model system such as an organic solar cell. We were also able to determine the efficiency of the charge separation in real-time," explains Dr. Roth from the Institute of Experimental Physics at TU Bergakademie Freiberg.

Using photon science for better solar cells

In contrast to previous methods, the researchers were able to identify a previously unobserved channel for charge separation. "With our measurement method, we can carry out a time-resolved, atom-specific analysis. This gives us a fingerprint that can be assigned to the associated molecule. We can see when the electrons energized by the optical laser arrive at the acceptor molecule, how long they stay and when or how they disappear again," says Prof. Serguei Molodtsov, explaining the measurement method. He heads the research group "Structural Research with X-ray Free Electron Lasers (XFELs) and Synchrotron Radiation" at the Freiberg Institute of Experimental Physics and is a Scientific Director at the European X-ray Free Electron Laser (EuXFEL).

Analyzing weak points and increasing quantum efficiency

Real-time analysis and the measurement of internal parameters are important aspects of basic research from which the solar industry, in particular, can benefit. "With our measurements, we draw important conclusions about the interfaces at which free charge carriers are formed or lost and thus weaken the performance of solar cells," adds Dr. Roth. From the Freiberg researchers' findings, optimization possibilities can be derived at the molecular level or in materials science, and the quantum efficiency of newly emerging photovoltaic and photocatalytic systems can be improved. The quantum efficiency describes the ratio of the charge carriers collected (the generated current) to the photons incident on the cell. The team published the results in the journal Nature Communications.
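For readers who want the arithmetic behind that definition, the standard textbook relation links quantum efficiency to the spectral responsivity R, the photocurrent per watt of incident light. The example values in this Python sketch are illustrative, not measurements from the Freiberg study:

    # External quantum efficiency from responsivity: EQE = R * E_photon / q,
    # i.e. electrons collected per incident photon.
    H = 6.626e-34   # Planck constant, J*s
    C = 2.998e8     # speed of light, m/s
    Q = 1.602e-19   # elementary charge, C

    def eqe(responsivity_a_per_w: float, wavelength_nm: float) -> float:
        photon_energy_j = H * C / (wavelength_nm * 1e-9)
        return responsivity_a_per_w * photon_energy_j / Q

    # An assumed responsivity of 0.5 A/W at 650 nm gives an EQE of roughly 95%.
    print(f"EQE = {eqe(0.5, 650):.2f}")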

Credit: 
University of Freiberg / TU Bergakademie Freiberg

A mechanism by which cells build 'mini-muscles' underneath their nucleus identified

image: On the left: Super-resolution microscopy image of a migrating human osteosarcoma cell. Magenta marks focal adhesions, and green non-muscle myosin II (NMII). F-actin bundles are shown in grey. On the right: Schematics of the cortical stress fiber assembly process. A myosin II pulse orchestrates the initial assembly and maturation of a cortical stress fiber, followed by focal adhesion maturation at the ends of the contractile bundle. Note that the color-coding is different between the two panels.

Image: 
Jaakko Lehtimäki

Research groups at the University of Helsinki have uncovered how the motor protein myosin, which is responsible for contraction of skeletal muscles, also functions in non-muscle cells to build contractile structures at the inner face of the cell membrane. This is the first time such 'mini-muscles', also known as stress fibers, have been seen to emerge spontaneously through myosin-driven reorganization of the pre-existing actin filament network in cells. Defects in the assembly of these 'mini-muscles' lead to multiple disorders in humans, and in the most severe cases contribute to cancer progression.

A new study published in eLife drills into the core mechanisms of stress fiber assembly and reveals how stress fibers can be built directly at the cell cortex: a specialized network of actin filaments on the inner face of the cell membrane. The research, carried out in the groups of Academy Professor Pekka Lappalainen at the HiLIFE Institute of Biotechnology and Docent Sari Tojkander at the Faculty of Veterinary Medicine, University of Helsinki, uncovers that myosin pulses, previously connected to shape changes in epithelial tissues during animal development, can template the assembly of stress fibers at the cell cortex. In this process, non-muscle myosin II, a close relative of the protein responsible for muscle contraction, is locally and transiently recruited to the cortex, where it organizes the initially mesh-like actin filament network into parallel rod-like structures. These structures then drive the growth and maturation of focal adhesions at both ends of the actomyosin bundle, finally creating a stress fiber at the cell cortex (see Figure).

"Previous studies from our group at University of Helsinki and other laboratories abroad demonstrated that stress fibers can arise at the front of the cell from small actin- and myosin-containing precursor structures, and that stress fibers disassemble at the back of the cell as it moves forward. Now we reveal a completely new mechanism by which stress fibers can form in cells, and provide an explanation for why 'mysterious' myosin pulses occur at the cell cortex," Lappalainen comments.

"Intriguingly, we also observed that this type of stress fiber generation was most prominent under the nucleus, which stores all genetic information and is the largest organelle in our cells. It could be that cortical stress fibers protect the nucleus or aid the movement of the nucleus along with the rest of the cell body," adds Dr. Jaakko Lehtimaki who is the lead author of this study.

The new findings add an important feature to the stress fiber toolbox. Cells in three-dimensional tissue environments rarely display the stress fiber precursors typically seen in cells migrating on a cell culture dish. The myosin pulse-mediated process thus enables the assembly of contractile structures in cells migrating in a variety of environments. Because myosin pulses have been witnessed in many different cell and tissue types, this might serve as a universal mechanism for local force production in non-muscle tissues.

The role of myosin and actin proteins

The most abundant components of our muscles are myosin motor proteins and rod-like filaments assembled from the protein actin. Coordinated 'crawling' of myosin motor proteins along actin filaments is the principal mechanism that generates the force for muscle contraction. Such myosin-based force production is not limited to muscles, however, because cells in other tissues of our bodies have similar contractile structures. These 'mini-muscles' of non-muscle cells, called stress fibers, are composed of the same central players (actin and myosin) as the contractile units of muscles.

Within our bodies, skeletal muscles attach to bones via tendons, whereas special adhesion structures named focal adhesions connect stress fibers to the cell's surroundings. This enables stress fibers to sense and transmit forces between cells and their environment. In addition to being the major force-sensitive structures in cells, stress fibers are important for proper differentiation, that is, the specialization of cells for different tasks in the body. They also protect the nucleus when a cell is migrating in a challenging three-dimensional tissue environment. Consequently, defects in stress fiber assembly contribute to multiple disorders, such as atherosclerosis, neuropathies, and cancer progression.

Credit: 
University of Helsinki

University launches isolated power supply chip with new design

image: Solution of the fully integrated isolated power chip in this work.

Image: 
PAN Dongfang

Recently, a research group led by Professor CHENG Lin from the School of Microelectronics, University of Science and Technology of China, has made significant progress in fully integrated isolated power chip design. They proposed a chip based on glass fan-out wafer-level packaging (FOWLP), achieving 46.5% peak conversion efficiency and a power density of 50 mW/mm2.
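Taken together, the two quoted figures fix the chip's power budget. A short Python sketch makes this concrete; the 1 mm2 transformer area is an assumed example, not a dimension given in the article:

    # Power budget implied by the quoted figures: 46.5% peak conversion
    # efficiency and 50 mW/mm2 output power density.
    efficiency = 0.465
    power_density_mw_per_mm2 = 50.0
    area_mm2 = 1.0                   # hypothetical on-chip transformer area

    p_out_mw = power_density_mw_per_mm2 * area_mm2
    p_in_mw = p_out_mw / efficiency  # power drawn on the primary side
    print(f"{p_out_mw:.0f} mW delivered from about {p_in_mw:.0f} mW input")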

Compared with traditional isolated power supply chips, the new design interconnects the receiving and transmitting dies through a micro-transformer fabricated in the redistribution layer, so no additional transformer chip is needed. This does away with the three or even four chips required in existing designs and thereby greatly improves the efficiency of the isolated power supply.

In addition, they proposed a gate voltage control technique based on a variable capacitor, which keeps the peak gate voltage within the optimal safe range even across a wide supply voltage range.

This design effectively improves the conversion efficiency and power density of the chip, providing a new solution for the design of isolated power chips in the future.

Credit: 
University of Science and Technology of China