
Parents can soon use QR codes to reveal heavy metal content in baby food

Published on theconversation.com – C. Michael White, Distinguished Professor of Pharmacy Practice, University of Connecticut – 2025-02-14 07:41:00

It’s impossible to eliminate heavy metals from baby food entirely, but testing can help consumers make informed decisions.
Jeff Greenberg via Getty Images

C. Michael White, University of Connecticut

Parents across the U.S. should soon be able to determine how much lead, arsenic, cadmium and mercury are in the food they feed their babies, thanks to a California law, the first of its kind, that took effect this year.

As of Jan. 1, 2025, every company that sells baby food products in California is required to test for these four heavy metals every month. That comes five years after a congressional report warned about the presence of dangerously high levels of lead and other heavy metals in baby food.

Every baby food product packaged in jars, pouches, tubs and boxes sold in California must carry a QR code on its label that consumers can scan to check the most recent heavy metal readings, although many are not yet complying.

Because companies seldom package products for a single state, parents and caregivers across the country will be able to scan these QR codes or go online to the companies’ websites and see the results.

I am a pharmacist researcher who has studied heavy metals in mineral supplements, dietary supplements and baby food for several years. My research highlights how prevalent these toxic agents are in everyday products such as baby food. I believe the new California law offers a solid first step in giving people the ability to limit the intake of these substances.

How do heavy metals get into foods?

Soil naturally contains heavy metals. The earth formed as a hot molten mass. As it cooled, heavier elements settled into its center regions, called the mantle and core. Volcanic eruptions in certain areas have brought these heavy metals to the surface over time. The volcanic rock erodes to form heavy metal-laden soil, contaminating nearby water supplies.

Another major source of soil contamination is the exhaust from fossil fuels, and in particular leaded gasoline. Some synthetic fertilizers contribute, too.

Heavy metals in the soil can pass into foods via several routes. Plants grown for foods such as sweet potatoes, carrots, apples, cinnamon, rice and plant-based protein powders are especially good at drawing these metals out of contaminated soil.

Sometimes the contamination happens after harvesting. For example, local water that contains heavy metals is often used to rinse debris and bugs off natural products, such as leaves used to make a widely used supplement called kratom. When the water evaporates, the heavy metals are retained on the surface. Sometimes drying products in the open air, such as cacao beans for dark chocolate, allows dust laden with heavy metals to stick to their surface.

Producers can reduce heavy metal contamination in food in several ways, ranging from modestly to highly effective. First, they can reserve more contaminated areas for growing crops that are less prone to taking in heavy metals from the soil, such as peppers, beans, squash, melons and cucumbers, and conversely grow more susceptible crops in less-contaminated areas. They can also dry plants on uncontaminated soil and filter heavy metals out of water before washing produce.

Producers are starting to use genetic engineering and crossbreeding to develop versions of susceptible plants that take up fewer heavy metals through their roots, but this approach is still in its early stages.

Sweet potatoes and other root vegetables are especially susceptible to absorbing heavy metals from soil.
skaman306 via Getty Images

How much is too much?

Although there is no entirely safe level of chronic heavy metal ingestion, heavy metals are all around us and are impossible to avoid completely.

In January 2025, the U.S. Food and Drug Administration released its first-ever guidance for manufacturers that sets limits on the amount of lead that baby food can contain. But the FDA guidance does not require companies to adhere to the limits.

In that guidance, the FDA suggested a limit of 10 parts per billion of lead for baby foods that contain fruits, vegetables, meats or combinations of those items, with or without grains. Yogurts, custards and puddings should have the same cutoff, according to the agency. Root vegetables and dry infant cereals, meanwhile, should contain less than 20 parts per billion of lead. The FDA guidance doesn’t apply to some products babies frequently consume, such as formula, teething crackers and other snacks.
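Parts per billion translate directly into micrograms per kilogram, so these cutoffs can be turned into an absolute amount of lead per serving. Here is a minimal arithmetic sketch; the serving sizes are illustrative assumptions, not figures from the FDA guidance:

```python
# Convert a limit in parts per billion (ppb) to micrograms of lead per serving.
# In food, 1 ppb equals 1 microgram of contaminant per kilogram of product.

def lead_per_serving_ug(limit_ppb: float, serving_grams: float) -> float:
    """Micrograms of lead in one serving at a given ppb concentration."""
    return limit_ppb * (serving_grams / 1000.0)  # ug/kg * kg = ug

# A typical 4-ounce (113-gram) jar of puree at the 10 ppb limit:
print(lead_per_serving_ug(10, 113))  # ~1.13 micrograms of lead
# A 15-gram serving of dry infant cereal at the 20 ppb limit:
print(lead_per_serving_ug(20, 15))   # ~0.3 micrograms of lead
```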

The agency has not defined firm limits for the consumption of other heavy metals, but its campaign against heavy metals in baby food, called Closer to Zero, reflects the principle that the lower the dose, the better.

That campaign also laid out plans to propose limits for other heavy metals such as arsenic and mercury.

Modestly exceeding the agency’s recommended limits for lead or arsenic a few times a month is unlikely to have noticeable negative health effects. However, chronically ingesting too much lead or inorganic arsenic can negatively affect childhood health, including cognitive development, and can cause softening of the bones.

How California’s QR codes can help parents and other caregivers

It’s unclear how many products consistently exceed these recommendations.

A 2018 study by Consumer Reports found that 33 of 50 products had concerning levels of at least one heavy metal. In 2023, researchers repeated testing on seven of the failing products and found that heavy metal levels were now lower in three, the same in one, and slightly higher in three.

Because these tests assess products bought and tested at one specific time, they may not reflect the average heavy metal content in the same product over the entire year. These levels can vary over time if the manufacturer sources ingredients from different parts of the country or the world at different times of the year.

Video: Consumers can call up heavy metal testing results with their smartphones at the grocery store.

That’s where California’s new law can help. The law requires manufacturers to gather and divulge real-time information on heavy metal contamination monthly. By scanning a QR code on a box of Gerber Teether Snacks or a jar of Beech Nut Naturals sweet potato puree, parents and caregivers can call up test results on a smartphone and learn how much lead, arsenic, cadmium and mercury were found in those specific products manufactured recently. These test results can also be accessed by entering a product’s name or batch number on the manufacturer’s website.
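The law does not dictate what data the QR code itself must encode; in practice, the code resolves to a manufacturer web page keyed to the product and batch. As a purely hypothetical sketch of how a producer might generate such a label code, using the open-source qrcode Python package (the URL scheme and batch format below are invented for illustration):

```python
# Generate a package-label QR code pointing to a batch-specific results page.
# Requires: pip install qrcode[pil]
import qrcode

# Hypothetical results URL; each manufacturer defines its own scheme.
product_id = "sweet-potato-puree"
batch_number = "2025-02-A17"
url = f"https://example-babyfood.com/heavy-metals/{product_id}?batch={batch_number}"

img = qrcode.make(url)    # build the QR code image
img.save("label_qr.png")  # print-ready PNG for the product label
```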

Slow rollout

In an investigation by Consumer Reports and a child advocacy group called Unleaded Kids, only four of 28 companies were fully compliant with the California law as of early this year. Some noncompliant companies had built no reporting infrastructure at all; some had launched websites but had not yet posted any heavy metal data; and others posted results but, lacking the required QR codes on their packaging, made consumers enter batch numbers to find them.

The law requires companies to provide this information for foods produced after Jan. 1, 2025, with no provisions for extensions, and the major producers agreed not only to comply for California residents but also to provide the results nationwide. California enforces the law by embargoing misbranded baby food products, issuing penalties, and suspending or revoking registrations and licenses.

When companies’ testing and reporting systems are fully up and running, a quick scan at the grocery store will allow consumers to adapt their purchases to minimize infants’ exposures to heavy metals. Initially, parents and caregivers may find it overwhelming to decide between one chicken and rice product that is higher in lead but lower in arsenic than a competitor’s product, for example.

However, they may also encounter instances where one baby food product clearly contains less of three heavy metals and only slightly more for the fourth heavy metal than a comparable product from a different manufacturer. That information can more clearly inform their choice.

Regardless of the readings, health experts advise parents and caregivers not to eliminate all root vegetables, apples and rice but instead to feed babies a wide variety of foods.

C. Michael White, Distinguished Professor of Pharmacy Practice, University of Connecticut

This article is republished from The Conversation under a Creative Commons license. Read the original article.


What causes the powerful winds that fuel dust storms, wildfires and blizzards? A weather scientist explains

Published on theconversation.com – Chris Nowotarski, Associate Professor of Atmospheric Science, Texas A&M University – 2025-03-20 07:49:00

When huge dust storms like this one in the Phoenix suburbs in 2022 hit, it’s easy to see the power of the wind.
Christopher Harris/iStock Images via Getty Plus

Chris Nowotarski, Texas A&M University

Windstorms can seem like they come out of nowhere, hitting with a sudden blast. They might be hundreds of miles long, stretching over several states, or just in your neighborhood.

But they all have one thing in common: a change in air pressure.

Just like air rushing out of your car tire when the valve is open, air in the atmosphere is forced from areas of high pressure to areas of low pressure.

The stronger the difference in pressure, the stronger the winds that will ultimately result.

On this forecast for March 18, 2025, from the National Oceanic and Atmospheric Administration, ‘L’ represents low-pressure systems. The shaded area over New Mexico and west Texas represents strong winds and low humidity that combine to raise the risk of wildfires.
NOAA Weather Prediction Center

Other forces related to the Earth’s rotation, friction and gravity can also alter the speed and direction of winds. But it all starts with this change in pressure over a distance – what meteorologists like me call a pressure gradient.
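The article keeps the math qualitative, but a back-of-the-envelope calculation shows how a pressure gradient becomes wind: the acceleration of air is the pressure difference divided by air density and distance. The numbers below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope pressure gradient force per unit mass:
# a = delta_p / (rho * delta_x), ignoring friction and Earth's rotation.

rho = 1.2            # near-surface air density, kg/m^3
delta_p = 400.0      # pressure difference: 4 hPa = 400 Pa
delta_x = 100_000.0  # distance over which pressure changes: 100 km, in m

accel = delta_p / (rho * delta_x)  # m/s^2
wind_after_1h = accel * 3600       # m/s gained from rest in one hour

print(f"{accel:.4f} m/s^2 -> ~{wind_after_1h:.0f} m/s after one hour")
# ~0.0033 m/s^2 -> ~12 m/s (about 27 mph). In reality, friction and the
# Coriolis effect balance the gradient long before winds grow unchecked.
```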

So how do we get pressure gradients?

Strong pressure gradients ultimately owe their existence to the simple fact that the Earth is round and rotates.

Because the Earth is round, the sun is more directly overhead during the day at the equator than at the poles. This means more energy reaches the surface of the Earth near the equator. And that causes the lower part of the atmosphere, where weather occurs, to be both warmer and have higher pressure on average than the poles.

Nature doesn’t like imbalances. As a result of this temperature difference, strong winds develop at high altitudes over midlatitude locations, like the continental U.S. This is the jet stream, and even though it’s several miles up in the atmosphere, it has a big impact on the winds we feel at the surface.

Wind speed and direction in the upper atmosphere on March 14, 2025, show waves in the jet stream. Downstream of a trough in this wave, winds diverge and low pressure can form near the surface.
NCAR

Because Earth rotates, these upper-altitude winds blow from west to east. Waves in the jet stream – a consequence of Earth’s rotation and variations in the surface land, terrain and oceans – can cause air to diverge, or spread out, at certain points. As the air spreads out, the number of air molecules in a column decreases, ultimately reducing the air pressure at Earth’s surface.

The pressure can drop quite dramatically over a few days or even just a few hours, leading to the birth of a low-pressure system – what meteorologists call an extratropical cyclone.

The opposite chain of events, with air converging at other locations, can form high pressure at the surface.

In between these low-pressure and high-pressure systems is a strong change in pressure over a distance – a pressure gradient. And that pressure gradient leads to strong winds. Earth’s rotation causes these winds to spiral around areas of high and low pressure. These highs and lows are like large circular mixers, with air blowing clockwise around high pressure and counterclockwise around low pressure. This flow pattern blows warm air northward toward the poles east of lows and cool air southward toward the equator west of lows.

A map illustrates lines of surface pressure, called isobars, with areas of high and low pressure marked for March 14, 2025. Winds are strongest when isobars are packed most closely together.
Plymouth State University, CC BY-NC-SA

As the waves in the jet stream migrate from west to east, so do the surface lows and highs, and with them, the corridors of strong winds.

That’s what the U.S. experienced in March 2025, when a strong extratropical cyclone produced winds stretching thousands of miles that whipped up dust storms, spread wildfires and triggered tornadoes and blizzards across the central and southern U.S.

Whipping up dust storms and spreading fires

The jet stream over the U.S. is strongest and often the most “wavy” in the springtime, when the south-to-north difference in temperature is often the strongest.

Winds associated with large-scale pressure systems can become quite strong in areas where there is limited friction at the ground, like the flat, less forested terrain of the Great Plains. One of the biggest risks is dust storms in arid regions of west Texas or eastern New Mexico, exacerbated by drought in these areas.

A dust storm hit Albuquerque, N.M., on March 18, 2025. Another dust storm a few days earlier in Kansas caused a deadly pileup involving dozens of vehicles on I-70.
AP Photo/Roberto E. Rosales

When the ground and vegetation are dry and the air has low relative humidity, high winds can also spread wildfires out of control.

Even more intense winds can occur when the pressure gradient interacts with terrain. Winds can sometimes rush faster downslope, as happens in the Rockies or with the Santa Ana winds that fueled devastating wildfires in the Los Angeles area in January.

Violent tornadoes and storms

Of course, winds can become even stronger and more violent on local scales associated with thunderstorms.

When thunderstorms form, hail and precipitation in them can cause the air to rapidly fall in a downdraft, causing very high pressure under these storms. That pressure forces the air to spread out horizontally when it reaches the ground. Meteorologists call these straight line winds, and the process that forms them is a downburst. Large thunderstorms or chains of them moving across a region can cause large swaths of strong wind over 60 mph, called a derecho.

Finally, some of nature’s strongest winds occur inside tornadoes. They form when the winds surrounding a thunderstorm change speed and direction with height. This can cause part of the storm to rotate, setting off a chain of events that may lead to a tornado and winds as strong as 300 mph in the most violent tornadoes.

Video: How a tornado forms. Source: NOAA.

Tornado winds are also associated with an intense pressure gradient. The pressure inside the center of a tornado is often very low and varies considerably over a very small distance.

It’s no coincidence that localized violent winds from thunderstorm downbursts and tornadoes often occur amid large-scale windstorms. Extratropical cyclones often draw warm, moist air northward on strong winds from the south, which is a key ingredient for thunderstorms. Storms also become more severe and may produce tornadoes when the jet stream is in close proximity to these low-pressure centers. In the winter and early spring, cold air funneling south on the northwest side of strong extratropical cyclones can even lead to blizzards.

So, the same wave in the jet stream can lead to strong winds, blowing dust and fire danger in one region, while simultaneously triggering a tornado outbreak and a blizzard in other regions.

Chris Nowotarski, Associate Professor of Atmospheric Science, Texas A&M University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


5 years on, true counts of COVID-19 deaths remain elusive − and research is hobbled by lack of data

Published on theconversation.com – Dylan Thomas Doyle, Ph.D. Candidate in Information Science, University of Colorado Boulder – 2025-03-20 07:47:00

National COVID-19 memorial wall for the five-year anniversary on March 11, 2025, in London, England.
Andrew Aitchison/In Pictures via Getty Images

Dylan Thomas Doyle, University of Colorado Boulder

In the early days of the COVID-19 pandemic, researchers struggled to grasp the rate of the virus’s spread and the number of related deaths. While hospitals tracked cases and deaths within their walls, the broader picture of mortality across communities remained frustratingly incomplete.

Policymakers and researchers quickly discovered a troubling pattern: Many deaths linked to the virus were never officially counted. A study analyzing data from over 3,000 U.S. counties between March 2020 and August 2022 found nearly 163,000 excess deaths from natural causes that were missing from official mortality records.

Excess deaths, meaning those that exceed the number expected based on historical trends, serve as a key indicator of underreported deaths during health crises. Many of these uncounted deaths were later tied to COVID-19 through reviews of medical records, death certificates and statistical modeling.
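At its core, the excess-death calculation is a subtraction: observed deaths minus the number expected from historical trends. A minimal sketch with invented figures (real analyses use seasonally adjusted statistical models rather than a flat average):

```python
# Excess deaths = observed deaths - expected deaths, where "expected" is
# estimated here as a simple average of prior years. Numbers are invented.

historical_annual_deaths = [2_810_000, 2_830_000, 2_850_000]  # three prior years
observed_pandemic_year = 3_380_000                            # hypothetical

expected = sum(historical_annual_deaths) / len(historical_annual_deaths)
excess = observed_pandemic_year - expected

print(f"Expected: {expected:,.0f}, Excess: {excess:,.0f}")
# Expected: 2,830,000, Excess: 550,000 -- deaths above the historical trend
```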

In addition, lack of real-time tracking for medical interventions during those early days slowed vaccine development by delaying insights into which treatments worked and how people were responding to newly circulating variants.

Five years since the beginning of COVID-19, new epidemics such as bird flu are emerging worldwide, and researchers are still finding it difficult to access the data about people’s deaths that they need to develop lifesaving interventions.

How can the U.S. mortality data system improve? I’m a technology infrastructure researcher, and my team and I design policy and technical systems to reduce inefficiency in health care and government organizations. By analyzing the flow of mortality data in the U.S., we found several areas of the system that could use updating.

Critical need for real-time data

A death record includes key details beyond just the fact of death, such as the cause, contributing conditions, demographics, place of death and sometimes medical history. This information is crucial for researchers to be able to analyze trends, identify disparities and drive medical advances.

Approximately 2.8 million death records are added to the U.S. mortality data system each year. But in 2022 – the most recent official count available – when the world was still in the throes of the pandemic, 3,279,857 deaths were recorded in the federal system. Still, this figure is widely considered to be a major undercount of true excess deaths from COVID-19.

In addition, real-time tracking of COVID-19 mortality data was severely lacking. This process involves the continuous collection, analysis and reporting of deaths from hospitals, health agencies and government databases by integrating electronic health records, lab reports and public health surveillance systems. Ideally, it provides up-to-date insights for decision-making, but during the COVID-19 pandemic, these tracking systems lagged and failed to generate comprehensive data.

Getting real-time COVID-19 data from hospitals and other agencies into the hands of researchers proved difficult.
Gerald Herbert/AP Photo

Without comprehensive data on prior COVID-19 infections, antibody responses and adverse events, researchers faced challenges designing clinical trials to predict how long immunity would last and optimize booster schedules.

Such data is essential in vaccine development because it helps identify who is most at risk, which variants and treatments affect survival rates, and how vaccines should be designed and distributed. And as part of the broader U.S. vital records system, mortality data is essential for medical research, including evaluating public health programs, identifying health disparities and monitoring disease.

At the heart of the problem is the inefficiency of government policy, particularly outdated public health reporting systems and slow data modernization efforts that hinder timely decision-making. These long-standing policies, such as reliance on paper-based death certificates and disjointed state-level reporting, have failed to keep pace with real-time data needs during crises such as COVID-19.

These policy shortcomings lead to delays in reporting and lack of coordination between hospital organizations, state government vital records offices and federal government agencies in collecting, standardizing and sharing death records.

History of US mortality data

The U.S. mortality data system has been cobbled together through a disparate patchwork of state and local governments, federal agencies and public health organizations over the course of more than a century and a half. It has been shaped by advances in public health, medical record-keeping and technology. From its inception to the present day, the mortality data system has been plagued by inconsistencies, inefficiencies and tensions between medical professionals, state governments and the federal government.

The first national efforts to track information about deaths began in the 1850s when the U.S. Census Bureau started collecting mortality data as part of the decennial census. However, these early efforts were inconsistent, as death registration was largely voluntary and varied widely across states.

In the early 20th century, the establishment of the National Vital Statistics System brought greater standardization to mortality data. For example, the system required all U.S. states and territories to standardize their death certificate format. It also consolidated mortality data at the federal level, whereas mortality data was previously stored at the state level.

However, state and federal reporting remained fragmented. For example, states had no uniform timeline for submitting mortality data, resulting in some states taking months or even years to finalize and release death records. Local or state-level paperwork processing practices also remained varied and at times contradictory.

Death record processing varies by state.
eric1513/iStock via Getty Images Plus

To begin to close gaps in reporting timelines to aid medical researchers, in 1981 the National Center for Health Statistics – a division of the Centers for Disease Control and Prevention – introduced the National Death Index. This is a centralized database of death records collected from state vital statistics offices, making it easier to access death data for health and medical research. The system was originally paper-based, with the aim of allowing researchers to track the deaths of study participants without navigating complex bureaucracies.

As time has passed, the National Death Index and state databases have become increasingly digital. The rise of electronic death registration systems in recent decades has improved processing speed when it comes to researchers accessing mortality data from the National Death Index. However, while the index has solved some issues related to gaps between state and federal data, other issues, such as high fees and inconsistency in state reporting times, still plague it.

Accessing the data that matters most

With the Trump administration’s increasing removal of CDC public health datasets, it is unclear whether policy reform for mortality data will be addressed anytime soon.

Experts fear that the removal of CDC datasets has now set a precedent for the Trump administration to cross further lines in its attempts to influence the research and data published by the CDC. The longer-term impact of the current administration’s public health policy on mortality data and disease response is not yet clear.

What is clear is that five years on from COVID-19, the U.S. mortality tracking system remains unequipped to meet emerging public health crises. Without addressing these challenges, the U.S. may not be able to respond quickly enough when the next crisis threatens American lives.

Dylan Thomas Doyle, Ph.D. Candidate in Information Science, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Atlantic sturgeon were fished almost to extinction − ancient DNA reveals how Chesapeake Bay population changed over centuries

Published on theconversation.com – Natalia Przelomska, Research Associate in Archaeogenomics, National Museum of Natural History, Smithsonian Institution – 2025-03-20 07:47:00

Sturgeon can be several hundred pounds each.
cezars/E+ via Getty Images

Natalia Przelomska, Smithsonian Institution and Logan Kistler, Smithsonian Institution

Sturgeons are one of the oldest groups of fishes. Sporting an armor of five rows of bony, modified scales called dermal scutes and a sharklike tail fin, this group of several-hundred-pound beasts has survived for approximately 160 million years. Because their physical appearance has changed very little over time, supported by a slow rate of evolution, sturgeon have been called living fossils.

Despite having survived through several geological time periods, many present-day sturgeon species are now threatened with extinction, with 17 of 27 species listed as “critically endangered.”

Conservation practitioners such as the Virginia Commonwealth University monitoring team are working hard to support recovery of Atlantic sturgeon in the Chesapeake Bay area. But it’s not clear what baseline population level people should strive toward restoring. How do today’s sturgeon populations compare with those of the past?

VCU monitoring team releases an adult Atlantic sturgeon back into the estuary.
Matt Balazik

We are a molecular anthropologist and a biodiversity scientist who focus on species that people rely on for subsistence. We study the evolution, population health and resilience of these species over time to better understand humans’ interaction with their environments and the sustainability of food systems.

For our recent sturgeon project, we joined forces with fisheries conservation biologist Matt Balazik, who conducts on-the-ground monitoring of Atlantic sturgeon, and Torben Rick, a specialist in North American coastal zooarchaeology. Together, we wanted to look into the past and see how much sturgeon populations have changed, focusing on the James River in Virginia. A more nuanced understanding of the past could help conservationists better plan for the future.

Sturgeon loomed large for millennia

In North America, sturgeon have played important subsistence and cultural roles in Native communities, which marked the seasons by the fishes’ behavioral patterns. Large summertime aggregations of lake sturgeon (Acipenser fulvescens) in the Great Lakes area inspired one folk name for the August full moon – the sturgeon moon. Woodland Era pottery remnants at archaeological sites from as long as 2,000 years ago show that the fall and springtime runs of Atlantic sturgeon (Acipenser oxyrinchus) upstream were celebrated with feasting.

Archaeologists uncover bony scutes – modified scales that resemble armor for the living fish – in places where people relied on sturgeon for subsistence.
Logan Kistler and Natalia Przelomska

Archaeological finds of sturgeon remains indicate that early colonial settlers in North America, notably those who established Jamestown in the Chesapeake Bay area in 1607, also prized these fish. When Captain John Smith was leading Jamestown, he wrote that “there was more sturgeon here than could be devoured by dog or man.” The fish may have helped the survival of this fortress-colony, which was both stricken with drought and locked in turbulent relationships with the Native inhabitants.

This abundance is in stark contrast to today, when sightings of migrating fish are sparse. Exploitation during the past 300 years was the key driver of Atlantic sturgeon decline. Demand for caviar drove the relentless fishing pressure throughout the 19th century. The Chesapeake was the second-most exploited sturgeon fishery on the Eastern Seaboard up until the early 20th century, when the fish became scarce.

Conservation biologists capture the massive fish for monitoring purposes, which includes clipping a tiny part of the fin for DNA analysis.
Matt Balazik

At that point, local protection regulations were established, but only in 1998 was a moratorium on harvesting these fish declared. Meanwhile, abundance of Atlantic sturgeon remained very low, which can be explained in part by their lifespan. Short-lived fish such as herring and shad can recover population numbers much faster than Atlantic sturgeon, which live for up to 60 years and take a long time to reach reproductive age – up to around 12 years for males and as many as 28 years for females.

To help manage and restore an endangered species, conservation biologists tend to split the population into groups based on ranges. The Chesapeake Bay is one of five “distinct population segments” designated for Atlantic sturgeon when the species was listed under the U.S. Endangered Species Act in 2012.

Since then, conservationists have pioneered genetic studies on Atlantic sturgeon, demonstrating through the power of DNA that natal river – where an individual fish is born – and season of spawning are both important for distinguishing subpopulations within each regional group. Scientists have also described genetic diversity in Atlantic sturgeon; more genetic variety suggests they have more capacity to adapt when facing new, potentially challenging conditions.
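Genetic diversity in studies like this one is commonly summarized as expected heterozygosity: the probability that two gene copies drawn at random from the population differ. A minimal sketch of that standard calculation, with made-up allele frequencies rather than the study’s actual data:

```python
# Expected heterozygosity at one locus: H = 1 - sum(p_i^2), where p_i are
# allele frequencies. Higher H means more genetic variety in the population.

def expected_heterozygosity(allele_freqs):
    assert abs(sum(allele_freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    return 1.0 - sum(p * p for p in allele_freqs)

# Illustrative only: a diverse historical sample vs. a depleted modern one.
ancient = [0.4, 0.3, 0.2, 0.1]  # four alleles still segregating
modern = [0.85, 0.15]           # most variants lost
print(expected_heterozygosity(ancient))  # 0.70
print(expected_heterozygosity(modern))   # 0.255
```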

The study focused on Atlantic sturgeon from the Chesapeake Bay region, past and present. The four archaeological sites included are highlighted.
Przelomska NAS et al., Proc. R. Soc. B 291: 20241145, CC BY

Sturgeon DNA, then and now

Archaeological remains are a direct source of data on genetic diversity in the past. We can analyze the genetic makeup of sturgeons that lived hundreds of years ago, before intense overfishing depleted their numbers. Then we can compare that baseline with today’s genetic diversity.

The James River was a great case study for testing out this approach, which we call an archaeogenomics time series. Having obtained information on the archaeology of the Chesapeake region from our collaborator Leslie Reeder-Myers, we sampled remains of sturgeon – their scutes and spines – at a precolonial-era site where people lived from about 200 C.E. to about 900 C.E. We also sampled from important colonial sites Jamestown (1607-1610) and Williamsburg (1720-1775). And we complemented that data from the past with tiny clips from the fins of present-day, live fish that Balazik and his team sampled during monitoring surveys.

Scientists separate Atlantic sturgeon scute fragments from larger collections of zooarchaeological remains, to then work on the scutes in a lab dedicated to studying ancient DNA.
Torben Rick and Natalia Przelomska

DNA tends to get physically broken up and biochemically damaged with age. So we relied on special protocols in a lab dedicated to studying ancient DNA to minimize the risk of contamination and enhance our chances of successfully collecting genetic material from these sturgeon.

Atlantic sturgeon have 122 chromosomes of nuclear DNA – more than five times the 23 chromosomes in a human haploid set. We focused on a few genetic regions, just enough to get an idea of the James River population groupings and how genetically distinct they are from one another.

We were not surprised to see that fall-spawning and spring-spawning groups were genetically distinct. What stood out, though, was how starkly different they were, which is something that can happen when a population’s numbers drop to near-extinction levels.

We also looked at the fishes’ mitochondrial DNA, a compact molecule that is easier to obtain ancient DNA from compared with the nuclear chromosomes. With our collaborator Audrey Lin, we used the mitochondrial DNA to confirm our hypothesis that the fish from archaeological sites were more genetically diverse than present-day Atlantic sturgeon.

Strikingly, we discovered that mitochondrial DNA did not always group the fish by season or even by their natal river. This was unexpected, because Atlantic sturgeon tend to return to their natal rivers for breeding. Our interpretation of this genetic finding is that over very long timescales – many thousands of years – changes in the global climate and in local ecosystems would have driven a given sturgeon population to migrate into a new river system, and possibly at a later stage back to its original one. This notion is supported by other recent documentation of fish occasionally migrating over long distances and mixing with new groups.

Our study used archaeology, history and ecology together to describe the decline of Atlantic sturgeon. Based on the diminished genetic diversity we measured, we estimate that the Atlantic sturgeon populations we studied are about a fifth of what they were before colonial settlement. Less genetic variability means these smaller populations have less potential to adapt to changing conditions. Our findings will help conservationists plan into the future for the continued recovery of these living fossils.

Natalia Przelomska, Research Associate in Archaeogenomics, National Museum of Natural History, Smithsonian Institution and Logan Kistler, Curator of Archaeobotany and Archaeogenomics, National Museum of Natural History, Smithsonian Institution

This article is republished from The Conversation under a Creative Commons license. Read the original article.
