Promising assisted reproductive technologies come with ethical, legal and social challenges – a developmental biologist and a bioethicist discuss IVF, abortion and the mice with two dads
Assisted reproductive technologies are medical procedures that help people who have difficulty conceiving, or are unable to conceive, biological children of their own. From in vitro fertilization to genetic screening to the creation of viable eggs from the skin cells of two male mice, each new development speaks to the potential of reproductive technologies to expand access to the experience of pregnancy.
Translating advances from the lab to the clinic, however, comes with challenges that go far beyond the purely technical.
Conversations around the ethics and implications of cutting-edge research often happen after the fact, when the science and technology have advanced beyond the point at which open dialogue could best protect affected groups. In the spirit of starting such cross-discipline conversations earlier, we invited developmental biologist Keith Latham of Michigan State University and bioethicist Mary Faith Marshall of the University of Virginia to discuss the ethical and technological potential of in vitro gametogenesis and assisted reproductive technology post-Roe.
How new are the ethical considerations raised by assisted reproductive technologies?
Keith
Every new technology raises many of the same questions, and likely new ones. On the safety and risk-benefit side of the ethical conversation, there’s nothing here that we haven’t dealt with since the 1970s with other reproductive technologies. But it’s important to keep asking questions, because the benefits are hugely dependent on the success rate. There are potential biological costs, but also possible social costs, financial costs, societal costs and many others.
Mary Faith
It’s probably been that way even longer. One of my mentors, Joseph Francis Fletcher, a pioneering bioethicist and Episcopal priest, wrote a book called “Morals and Medicine” in 1954. It was the first non-Roman Catholic treatment of bioethics. And he raised a lot of these issues there, including the technological imperative – the idea that because we can develop the technology to do something, we therefore should develop it.
Fletcher also said that the use of artifice, or human-made creations, is supremely human. That’s what we do: We figure out how things work and we develop new technologies like vaccines and heart-lung machines based on evolving scientific knowledge.
I think that in most cases, scientists should be involved in thinking about the implications of their work. But often, researchers focus more on the direct applications of their work than the potential indirect consequences.
Given the evolution of assisted reproductive technology, and the fact that its evolution is going to continue, I think one of the central questions to consider is, what are the goals of developing it? For assisted reproduction, it’s to help infertile people and people in nontraditional relationships have children.
What are some recent developments in the field of assisted reproductive technology?
Keith
One recent advance in assisted reproductive technology is the expansion of pre-implantation genetic testing methods, particularly DNA sequencing. Many genes come in different variants, or alleles, that can be inherited from each parent. Providers can determine whether an embryo bears a “bad” allele that may increase the risk of certain diseases and select embryos with “healthy” alleles.
Genetic screening raises several ethical concerns. For example, the parents’ genetic profiles could be unwillingly inferred from that of the embryo. This possibility may deter prospective parents from having children, and such knowledge could also affect any future child. The cost of screening and the potential need for additional cycles of IVF may also increase disparities.
There are also considerations about the accuracy of screening predictions without accounting for environmental effects, and what level of genetic risk is “serious” enough for an embryo to be excluded. More extensive screening also raises concerns about possible misuse for purposes other than disease prevention, such as production of “designer babies.”
In vitro gametogenesis involves making egg or sperm cells from other adult cells in the body.
At a genome-editing conference in March 2023, researchers announced that they were able to delete and duplicate whole chromosomes from the skin cells of male mice to make eggs. This method is one potential way to make eggs that do not carry genetic abnormalities.
They were very upfront that this was done at 1% efficiency in mice, which could be lower in humans. That means something bad happened to 99% of the embryos. The biological world is not typically binary, so a portion of that surviving 1% could still be abnormal. Just because the mice survived doesn’t mean they’re OK. I would say at this point, it would be unethical to try this on people.
As with some forms of genetic screening, using this technique to reduce the risk of one disease could inadvertently increase the risk of another. Determining that it is absolutely safe to duplicate a chromosome would require knowing every allele of every gene on that chromosome, and what each allele could do to the health of a person. That’s a pretty tall order, as that could involve identifying hundreds to thousands of genes, and the effects of all their variants may not be known.
Mary Faith
That raises the issue of efficacy and costs to yet another order of magnitude.
Keith
Genome editing with CRISPR technology in people carries similar concerns. Because of potential limitations in how precise the technology can be, it may be difficult for researchers to say they are absolutely 100% certain there won’t be off-target changes in the genome. Proceeding without that full knowledge could be risky.
But that’s where bioethicists need to come into play. Researchers don’t know what the full risk is, so how do you make that risk-benefit calculation?
Mary Faith
There’s the option of a voluntary global moratorium on using these technologies on human embryos. But somebody somewhere is still going to do it, because the technology is just sitting there, waiting to be moved forward.
How will the legal landscape affect the development and implementation of assisted reproductive technologies?
Mary Faith
Any research that involves human embryos is in some ways politicized, not only because the government provides funding to the basic science labs that conduct this research, but also because of the wide array of beliefs that members of the public at large have about when life begins or what personhood means.
The Dobbs decision, which overturned the constitutional right to an abortion, has implications for assisted reproduction and beyond. Most people who are pregnant don’t even know they’re pregnant at the earliest stages, and somewhere around 60% of those pregnancies end naturally because of genetic aberrations. Between 1973 and 2005, over 400 women were arrested for miscarriage in the U.S., and I think that number is going to grow. The implications for reproductive health care, and for assisted reproduction in the future, are challenging and frightening.
What will abortion restrictions mean for people who have multiple-gestation pregnancies, in which they carry more than one embryo at the same time? In order to have one healthy child born from that process, the other embryos often need to be removed so they don’t all die. In the past 40 years, the number of twin births doubled and triplet and higher-order births quadrupled, primarily because of fertility treatments.
In IVF, clinicians may transfer one, two or sometimes three embryos at a time. The cost of care for preterm birth, one possible outcome of a multiple-gestation pregnancy, can be high, on top of the cost of delivery. IVF clinics are increasingly transferring just one embryo to mitigate such concerns.
The life-at-conception bills that have been put forth in some U.S. state legislatures and Congress may contain language claiming they are not meant to prevent IVF. But the language of the bills could be extended to affect procedures such as IVF with pre-implantation genetic testing to detect chromosomal abnormalities, particularly when single-embryo transfer is the goal. Pre-implantation genetic testing has been increasing, with one study estimating that over 40% of all IVF cycles in the U.S. in 2018 involved genetic screening.
Could life-at-conception bills criminalize clinics that don’t transfer embryos known to be genetically abnormal? Freezing genetically abnormal embryos could avoid destroying them, but that raises questions of, to what end? Who would pay for the storage, and who would be responsible for those embryos?
How can we determine whether the risks outweigh the benefits when so much is unknown?
Keith
Conducting studies in animal models is an important first step. In some cases, it either hasn’t been done or hasn’t been done extensively. Even with animal studies, you have to recognize that mice, rabbits and monkeys are not human. Animal models may reduce some risks before a technology is used in people, but they won’t eliminate all risks, because of biological differences between species.
Mary Faith
The death of Jesse Gelsinger, a participant in a gene therapy clinical trial in 1999, led to a halt in all gene therapy clinical trials in the U.S. for a time. When the Food and Drug Administration investigated what went wrong, it found huge numbers of adverse events, in both humans and animals, that should have been reported to the NIH’s Recombinant DNA Advisory Committee but weren’t. Notably, the principal investigator of the trial was also the primary shareholder of the biotech company that made the drug being tested. That raises questions about the reality of oversight.
I think something like that earlier NIH advisory committee but for reproductive technologies would still be advisable. But researchers, policymakers and regulators need to learn from the lessons of the past to try to ensure that – especially in early-phase research – we’re very thoughtful about the potential risks and that research participants really understand what the implications are for participation in research. That would be one model for translating research from the animal into the human.
The FDA approved a gene therapy for a form of congenital vision loss in 2017. The child in this photo, then 8, first received gene therapy at age 4. Bill West/AP Photo
Keith
A process to make sure that the people conducting studies don’t have a conflict of interest, like having the potential to commercially profit from the technology, would be useful.
Caution, consensus and cooperation should not take second place to profit motives. Altering the human genome in a way that allows human-made genetic changes to be propagated throughout the population has a potential to alter the genetics of the human species as a whole.
Mary Faith
That raises the question of how long it will take for long-term effects to show. It’s one thing for an implanted embryo not to survive. But how long will it take to know whether there are effects that aren’t obvious at birth?
Keith
We’re still collecting long-term outcome data for people born using different reproductive technologies. So far there have been no obviously horrible consequences. But some abnormalities could take decades to manifest, and there are many variables to contend with.
One can arguably say that there’s substantial good in helping couples have babies. There can be a benefit to their emotional well-being, and reproduction is a natural part of human health and biology. And a lot of really smart, dedicated people are putting a lot of energy into making sure that the risks are minimized. We can also look to some of the practices and approaches to oversight that have been used over the past several decades.
Mary Faith
And international guidelines, such as those from the Council for International Organizations of Medical Sciences and other groups, can help guide the protection of human research subjects.
Keith
You hate to advocate for a world where the automatic response to anything new is “no, don’t do that.” My response is, “Show me it’s safe before you do it.” I don’t think that’s unreasonable.
Some people have a view that scientists don’t think about the ethics of research and what’s right and wrong, advisable or inadvisable. But we do think about it. I co-direct a research training program that includes teaching scientists how to responsibly and ethically conduct research, including speakers who specifically address the ethics of reproductive technologies. It is valuable to have a dialogue between scientists and ethicists, because ethicists will often think about things from a different perspective.
As people go through their scientific careers and see new technologies unfold over time, these discussions can help them develop a deeper appreciation and understanding of the broader impact of their research. It becomes our job to make sure that each generation of scientists is motivated to think about these things.
Mary Faith
It’s also really important to include stakeholders – people who are nonscientists, people who experience barriers to reproduction and people who are opposed to the idea – so they have a voice at the table as well. That’s how you get good policies, right? You have everyone who should be at the table, at the table.
Contaminated milk from one plant in Illinois sickened thousands with Salmonella in 1985 − as outbreaks rise in the US, lessons from this one remain true
theconversation.com – Michael Petros, Clinical Assistant Professor of Environmental and Occupational Health Sciences, University of Illinois Chicago – May 7, 2025
A valve that mixed raw milk with pasteurized milk at Hillfarm Dairy may have been the source of contamination. This was the milk processing area of the plant. AP Photo/Mark Elias
In 1985, contaminated milk in Illinois led to a Salmonella outbreak that infected hundreds of thousands of people across the United States and caused at least 12 deaths. At the time, it was the largest single outbreak of foodborne illness in the U.S. and remains the worst outbreak of Salmonella food poisoning in American history.
Many questions circulated during the outbreak. How could this contamination occur in a modern dairy farm? Was it caused by a flaw in engineering or processing, or was this the result of deliberate sabotage? What roles, if any, did politics and failed leadership play?
From my 50 years of working in public health, I’ve found that reflecting on the past can help researchers and officials prepare for future challenges. Revisiting this investigation and its outcome provides lessons on how food safety inspections go hand in hand with consumer protection and public health, especially as hospitalizations and deaths from foodborne illnesses rise.
Contamination, investigation and intrigue
The Illinois Department of Public Health and the U.S. Centers for Disease Control and Prevention led the investigation into the outbreak. The public health laboratories of the city of Chicago and state of Illinois were also closely involved in testing milk samples.
Investigators and epidemiologists from local, state and federal public health agencies found that specific lots of milk with expiration dates up to April 17, 1985, were contaminated with Salmonella. The outbreak may have been caused by a valve at a processing plant that allowed pasteurized milk to mix with raw milk, which can carry several harmful microorganisms, including Salmonella.
Overall, labs and hospitals in Illinois and five other Midwest states – Indiana, Iowa, Michigan, Minnesota and Wisconsin – reported over 16,100 cases of suspected Salmonella poisoning to health officials.
To make dairy products, skimmed milk is usually separated from cream, then blended back together in different proportions to achieve the desired fat content. While most dairies pasteurize their products after blending, Hillfarm Dairy in Melrose Park, Illinois, pasteurized the milk before blending it into various products such as skim milk and 2% milk.
Subsequent examination of the production process suggested that Salmonella may have grown in the threads of a screw-on cap used to seal an end of a mixing pipe. Investigators also found this strain of Salmonella 10 months earlier in a much smaller outbreak in the Chicago area.
The contaminated milk was produced at Hillfarm Dairy in Melrose Park, which was operated at the time by Jewel Companies Inc. During an April 3 inspection of the company’s plant, the Food and Drug Administration found 13 health and safety violations.
The legal fallout of the outbreak expanded when the Illinois attorney general filed suit against Jewel Companies Inc., alleging that employees at as many as 18 stores in the grocery chain violated water pollution laws when they dumped potentially contaminated milk into storm sewers. Later, a Cook County judge found Jewel Companies Inc. in violation of a court order to preserve milk products suspected of contamination and to maintain a record of what happened to milk returned to the Hillfarm Dairy.
Political fallout also ensued. The Illinois governor at the time, James Thompson, fired the director of the Illinois Department of Public Health when it was discovered that he was vacationing in Mexico at the onset of the outbreak and failed to return to Illinois. Notably, the health director at the time of the outbreak was not a health professional. Following this episode, the governor appointed public health professional and medical doctor Bernard Turnock as director of the Illinois Department of Public Health.
In 1987, after a nine-month trial, a jury determined that Jewel officials did not act recklessly when Salmonella-tainted milk caused one of the largest food poisoning outbreaks in U.S. history. No punitive damages were awarded to victims, and the Illinois Appellate Court later upheld the jury’s decision.
Raw milk is linked to many foodborne illnesses.
Lessons learned
History teaches more than facts, figures and incidents. It provides an opportunity to reflect on how to learn from past mistakes in order to adapt to future challenges. The largest Salmonella outbreak in the U.S. to date provides several lessons.
For one, disease surveillance is indispensable to preventing outbreaks, both then and now. People remain vulnerable to ubiquitous microorganisms such as Salmonella and E. coli, and early detection of an outbreak could stop it from spreading and getting worse.
Additionally, food production facilities can maintain a safe food supply with careful design and monitoring. Revisiting consumer protections can help regulators keep pace with new threats from new or unfamiliar pathogens.
Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader. –Staff Editor
Political Bias Rating: Centrist
The article provides an analytical, factual recounting of the 1985 Salmonella outbreak, with an emphasis on public health, safety standards, and lessons learned from past mistakes. It critiques the failures in leadership and oversight during the incident but avoids overt ideological framing. While it highlights political accountability, particularly the firing of a public health official and the appointment of a medical professional, it does so in a balanced manner without assigning blame to a specific political ideology. The content stays focused on the public health aspect and the importance of professional leadership, reflecting a centrist perspective in its delivery.
The 2002 sci-fi thriller “Minority Report” depicted a dystopian future where a specialized police unit was tasked with arresting people for crimes they had not yet committed. Directed by Steven Spielberg and based on a short story by Philip K. Dick, the drama revolved around “PreCrime” − a system informed by a trio of psychics, or “precogs,” who anticipated future homicides, allowing police officers to intervene and prevent would-be assailants from claiming their targets’ lives.
The film probes hefty ethical questions: How can someone be guilty of a crime they haven’t yet committed? And what happens when the system gets it wrong?
While there is no such thing as an all-seeing “precog,” key components of the future that “Minority Report” envisioned have become reality even faster than its creators imagined. For more than a decade, police departments across the globe have been using data-driven systems geared toward predicting when and where crimes might occur and who might commit them.
Far from an abstract or futuristic conceit, predictive policing is a reality. And market analysts are predicting a boom for the technology.
Predictive policing relies on artificial intelligence and data analytics to anticipate potential criminal activity before it happens. It can involve analyzing large datasets drawn from crime reports, arrest records and social or geographic information to identify patterns and forecast where crimes might occur or who may be involved.
Law enforcement agencies have used data analytics to track broad trends for many decades. Today’s powerful AI technologies, however, take in vast amounts of surveillance and crime report data to provide much finer-grained analysis.
Police departments use these techniques to help determine where they should concentrate their resources. Place-based prediction focuses on identifying high-risk locations, also known as hot spots, where crimes are statistically more likely to happen. Person-based prediction, by contrast, attempts to flag individuals who are considered at high risk of committing or becoming victims of crime.
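To make that distinction concrete, here is a minimal sketch of place-based prediction in Python: it bins past incident locations into a coarse grid and flags the busiest cells as hot spots. Everything in it (the coordinates, the cell size, the two-incident threshold) is invented purely for illustration; real systems ingest far richer data and use far more elaborate models.

```python
from collections import Counter

# Hypothetical past incident coordinates (longitude, latitude).
# These points are invented for illustration; real systems ingest
# crime reports, arrest records and other surveillance data.
incidents = [
    (-87.624, 41.881), (-87.625, 41.882), (-87.626, 41.881),
    (-87.701, 41.902), (-87.625, 41.880), (-87.624, 41.882),
]

CELL = 0.005  # grid cell size in degrees; an arbitrary choice for this sketch

def cell_of(lon: float, lat: float) -> tuple[int, int]:
    """Map a coordinate to a grid cell by truncating into CELL-sized bins."""
    return (int(lon / CELL), int(lat / CELL))

# Count past incidents per grid cell, then flag the busiest cells
# as "hot spots" -- here, any cell with two or more incidents.
counts = Counter(cell_of(lon, lat) for lon, lat in incidents)
hot_spots = [cell for cell, n in counts.most_common() if n >= 2]

print(hot_spots)  # the cells a department might patrol more heavily
```

Even this toy version makes the core criticism visible: the "forecast" is nothing more than a summary of where past incidents were recorded, so any bias in the historical reports flows straight into where officers are sent next.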
These types of systems have been the subject of significant public concern. Under a so-called “intelligence-led policing” program in Pasco County, Florida, the sheriff’s department compiled a list of people considered likely to commit crimes and then repeatedly sent deputies to their homes. More than 1,000 Pasco residents, including minors, were subject to random visits from police officers and were cited for things such as missing mailbox numbers and overgrown grass.
Lawsuits forced the Pasco County, Fla., Sheriff’s Office to end its troubled predictive policing program.
Four residents sued the county in 2021, and last year they reached a settlement in which the sheriff’s office admitted that it had violated residents’ constitutional rights to privacy and equal treatment under the law. The program has since been discontinued.
This is not just a Florida problem. In 2020, Chicago decommissioned its “Strategic Subject List,” a system in which police used analytics to predict which prior offenders were likely to commit new crimes or become victims of future shootings. In 2021, the Los Angeles Police Department discontinued its use of PredPol, a software program designed to forecast crime hot spots that was criticized for low accuracy and for reinforcing racial and socioeconomic biases.
Necessary innovations or dangerous overreach?
The failure of these high-profile programs highlights a critical tension: Even though law enforcement agencies often advocate for AI-driven tools for public safety, civil rights groups and scholars have raised concerns over privacy violations, accountability issues and the lack of transparency. And despite these high-profile retreats from predictive policing, many smaller police departments are using the technology.
Most American police departments lack clear policies on algorithmic decision-making and provide little to no disclosure about how the predictive models they use are developed, trained or monitored for accuracy or bias. A Brookings Institution analysis found that in many cities, local governments had no public documentation on how predictive policing software functioned, what data was used, or how outcomes were evaluated.
Predictive policing can perpetuate racial bias.
This opacity is what’s known in the industry as a “black box.” It prevents independent oversight and raises serious questions about the structures surrounding AI-driven decision-making. If a citizen is flagged as high-risk by an algorithm, what recourse do they have? Who oversees the fairness of these systems? What independent oversight mechanisms are available?
These questions are driving contentious debates in communities about whether predictive policing as a method should be reformed, more tightly regulated or abandoned altogether. Some people view these tools as necessary innovations, while others see them as dangerous overreach.
A better way in San Jose
But there is evidence that data-driven tools grounded in democratic values of due process, transparency and accountability may offer a stronger alternative to today’s predictive policing systems. What if the public could understand how these algorithms function, what data they rely on, and what safeguards exist to prevent discriminatory outcomes and misuse of the technology?
The city of San Jose, California, has embarked on a process that is intended to increase transparency and accountability around its use of AI systems. San Jose maintains a set of AI principles requiring that any AI tools used by city government be effective, transparent to the public and equitable in their effects on people’s lives. City departments also are required to assess the risks of AI systems before integrating them into their operations.
If implemented correctly, these measures can effectively open the black box, dramatically reducing the degree to which AI companies can hide their code or their data behind things such as protections for trade secrets. Enabling public scrutiny of training data can reveal problems such as racial or economic bias, which can be mitigated but are extremely difficult, if not impossible, to eradicate.
Research has shown that when citizens feel that government institutions act fairly and transparently, they are more likely to engage in civic life and support public policies. Law enforcement agencies are likely to have stronger outcomes if they treat technology as a tool – rather than a substitute – for justice.
Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader. –Staff Editor
Political Bias Rating: Center-Left
The article provides an analysis of predictive policing, highlighting both the technological potential and ethical concerns surrounding its use. While it presents factual information, it leans towards caution and skepticism regarding the fairness, transparency, and potential racial biases of these systems. The framing of these issues, along with an emphasis on democratic accountability, transparency, and civil rights, aligns more closely with center-left perspectives that emphasize government oversight, civil liberties, and fairness. The critique of predictive policing technologies without overtly advocating for their abandonment reflects a balanced but cautious stance on technology’s role in law enforcement.
Evolution has fostered many reproductive strategies across the spectrum of life. From dandelions to giraffes, nature finds a way.
One of those ways creates quite a bit of suffering for humans: pollen, the infamous male gametophyte of the plant kingdom.
In the Southeastern U.S., where I live, you know it’s spring when your car has turned yellow and pollen blankets your patio furniture and anything else left outside. Suddenly there are long lines at every car wash in town.
Even people who aren’t allergic to pollen – clearly an advantage for a pollination ecologist like me – can experience sneezing and watery eyes during the release of tree pollen each spring. Enough particulate matter in the air will irritate just about anyone, even if your immune system does not launch an all-out attack.
So, why is there so much pollen? And why does it seem to be getting worse?
2 ways trees spread their pollen
Trees don’t have an easy time in the reproductive game. As a tree, you have two options to disperse your pollen.
Option 1: Employ an agent, such as a butterfly or bee, that can carry your pollen to another plant of the same species.
The downside of this option is that you must invest in a showy flower display and a sweet scent to advertise yourself, and sugary nectar to pay your agent for its services.
A bee enjoys pollen from a cherry blossom. Pollen is a primary source of protein for bees. Ivan Radic/Flickr, CC BY
Option 2, the budget option, is much less precise: Get a free ride on the wind.
Wind was the original pollinator, evolving long before animal-mediated pollination. Wind requires neither a showy flower nor a nectar reward. What it does require for pollination to succeed is ample amounts of lightweight, small-diameter pollen.
Why wind-blown pollen makes allergies worse
Wind is not an efficient pollinator, however. The probability of one pollen grain landing in the right location – the stigma or ovule of another plant of the same species – is infinitesimally small.
Therefore, wind-pollinated trees must compensate for this inefficiency by producing copious amounts of pollen, and it must be light enough to be carried.
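The compensation is simple expected value. As a back-of-the-envelope sketch in Python, with numbers that are purely hypothetical: if each wind-borne grain had a one-in-a-million chance of landing on a compatible stigma, a tree aiming for roughly a thousand fertilized ovules would need on the order of a billion grains.

```python
# Back-of-the-envelope arithmetic with purely hypothetical numbers.
p_landing = 1e-6          # assumed chance one wind-borne grain reaches a compatible stigma
target_successes = 1_000  # assumed number of ovules the tree needs fertilized

# Expected successes for N grains is N * p_landing, so the N required
# to hit the target scales as 1 / p_landing.
grains_needed = target_successes / p_landing

print(f"{grains_needed:.0e} grains")  # 1e+09: a billion grains for a thousand successes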
For allergy sufferers, that can mean air filled with microscopic pollen grains that can get into your eyes, throat and lungs, sneak in through window screens and convince your immune system that you’ve inhaled a dangerous intruder.
Plants relying on animal-mediated pollination, by contrast, can produce heavier and stickier pollen to adhere to the body of an insect. So don’t blame the bees for your allergies – it’s really the wind.
Climate change has a role here, too
Plants initiate pollen release based on a few factors, including temperature and light cues. Many of our temperate tree species respond to cues that signal the beginning of spring, including warmer temperatures.
Studies have found that pollen seasons have intensified over the past three decades as the climate has warmed. One study that examined 60 locations across North America found pollen seasons expanded by an average of 20 days from 1990 to 2018 and pollen concentrations increased by 21%.
Climate change is also shifting wind and storm patterns. Anyone who has lived in the Southeast for the past couple of decades has likely noticed this: The region has more tornado warnings, more severe thunderstorms and more power outages. This is especially true in the mid-South, from Mississippi to Alabama.
Severity of wind and storm events mapped from NOAA data, 2012-2019, shows high activity over Mississippi and Alabama. Red areas have the most severe events. Christine Cairns Fortuin
Since wind is the vector of airborne pollen, windier conditions can also make allergies worse. Pollen remains airborne for longer on windy days, and it travels farther.
To make matters worse, increasing storm activity may be doing more than just transporting pollen. Storms can also break apart pollen grains, creating smaller particles that can penetrate deeper into the lungs.
Many allergy sufferers may notice worsening allergies during storms.
The peak of spring wind and storm season tends to correspond with the timing of the release of tree pollen that blankets our world in yellow. The effects of climate change, including longer pollen seasons, more pollen released, and shifts in windy days and storm severity, are helping to create the perfect pollen storm.
Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader. –Staff Editor
Political Bias Rating: Centrist
The content is a scientific and educational article focusing on the biology of pollen, its effects on allergies, and the influence of climate change on pollen production. It presents factual information supported by research studies and references, without taking a partisan stance. While it acknowledges climate change as a factor, the discussion remains grounded in scientific observation rather than political opinion, leading to a neutral, centrist tone.