Gravitational wave detector LIGO is back online after 3 years of upgrades – how the world’s most sensitive yardstick reveals secrets of the universe


When two massive objects – like black holes or neutron stars – merge, they warp space and time.
Mark Garlick/Science Photo Library via Getty Images

Chad Hanna, Penn State

After a three-year hiatus, scientists in the U.S. have just turned on detectors capable of measuring gravitational waves – tiny ripples in space itself that travel through the universe.

Unlike light waves, gravitational waves are nearly unimpeded by the galaxies, stars, gas and dust that fill the universe. This means that by measuring gravitational waves, astrophysicists like me can peek directly into the heart of some of the most spectacular phenomena in the universe.

Since 2020, the Laser Interferometer Gravitational-Wave Observatory – commonly known as LIGO – has been sitting dormant while it underwent some exciting upgrades. These improvements will significantly boost the sensitivity of LIGO and should allow the facility to observe more-distant objects that produce smaller ripples in spacetime.

Detecting more events that create gravitational waves means more opportunities for astronomers to also observe the light produced by those same events. Seeing an event through multiple channels of information, an approach called multi-messenger astronomy, provides astronomers with rare and coveted opportunities to learn about physics far beyond the realm of any laboratory testing.

A diagram showing the Sun and Earth warping space.
According to Einstein’s theory of general relativity, massive objects warp space around them.
vchal/iStock via Getty Images

Ripples in spacetime

According to Einstein’s theory of general relativity, mass and energy warp the shape of space and time. The bending of spacetime determines how objects move in relation to one another – what people experience as gravity.

Gravitational waves are created when massive objects like black holes or neutron stars merge with one another, producing sudden, large changes in space. The process of space warping and flexing sends ripples across the universe like a wave across a still pond. These waves travel out in all directions from a disturbance, minutely bending space as they do so and ever so slightly changing the distance between objects in their way.

When two massive objects – like black holes or neutron stars – get close together, they rapidly spin around each other and produce gravitational waves. The sound in this NASA visualization represents the frequency of the gravitational waves.

Even though the astronomical events that produce gravitational waves involve some of the most massive objects in the universe, the stretching and contracting of space is infinitesimally small. A strong gravitational wave passing through the Milky Way may only change the diameter of the entire galaxy by three feet (one meter).

The first gravitational wave observations

Though Einstein first predicted gravitational waves in 1916, scientists of that era had little hope of measuring the tiny changes in distance the theory postulated.

Around the year 2000, scientists at Caltech, the Massachusetts Institute of Technology and other universities around the world finished constructing what is essentially the most precise ruler ever built – the LIGO observatory.

An L-shaped facility with two long arms extending out from a central building.
The LIGO detector in Hanford, Wash., uses lasers to measure the minuscule stretching of space caused by a gravitational wave.
LIGO Laboratory

LIGO comprises two separate observatories, one located in Hanford, Washington, and the other in Livingston, Louisiana. Each observatory is shaped like a giant L, with two 2.5-mile-long (4-kilometer-long) arms extending from the center of the facility at 90 degrees to each other.

To measure gravitational waves, researchers shine a laser into the corner of the L, where the beam is split so that one beam travels down each arm, reflects off a mirror and returns to the corner. If a gravitational wave passes through the arms while the laser is shining, the two beams will return at ever so slightly different times. By measuring this difference, physicists can discern that a gravitational wave passed through the facility.
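To get a sense of the precision involved, here is a minimal Python sketch of the underlying arithmetic. The strain value is a representative order of magnitude for LIGO's detections, not a figure from any particular event:

```python
# Gravitational-wave strain h is a fractional change in length: h = dL / L.
ARM_LENGTH_M = 4_000   # each LIGO arm is about 4 kilometers long
STRAIN = 1e-21         # representative strain amplitude of a detectable wave

# Change in arm length the interferometer must resolve
delta_L = STRAIN * ARM_LENGTH_M
print(f"Arm length change: {delta_L:.0e} m")  # ~4e-18 m

# For scale: a proton is roughly 1e-15 m across, so this change is only
# a few thousandths of a proton's diameter.
print(f"Proton diameters: {delta_L / 1e-15:.3f}")
```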

LIGO began operating in the early 2000s, but it was not sensitive enough to detect gravitational waves. So, in 2010, the LIGO team temporarily shut down the facility to perform upgrades to boost sensitivity. The upgraded version of LIGO started collecting data in 2015 and almost immediately detected gravitational waves produced from the merger of two black holes.

Since 2015, LIGO has completed three observation runs. The first, run O1, lasted about four months; the second, O2, about nine months; and the third, O3, ran for 11 months before the COVID-19 pandemic forced the facilities to close. Starting with run O2, LIGO has been jointly observing with an Italian observatory called Virgo.

Between each run, scientists improved the physical components of the detectors and data analysis methods. By the end of run O3 in March 2020, researchers in the LIGO and Virgo collaboration had detected about 90 gravitational waves from the merging of black holes and neutron stars.

The observatories have not yet achieved their maximum design sensitivity. So, in 2020, both observatories shut down for upgrades yet again.

Two people in white lab outfits working on complicated machinery.
Upgrades to the mechanical equipment and data processing algorithms should allow LIGO to detect fainter gravitational waves than in the past.
LIGO/Caltech/MIT/Jeff Kissel, CC BY-ND

Making some upgrades

Scientists have been working on many technological improvements.

One particularly promising upgrade involved adding a 1,000-foot (300-meter) optical cavity to improve a technique called squeezing. Squeezing allows scientists to reduce detector noise using the quantum properties of light. With this upgrade, the LIGO team should be able to detect much weaker gravitational waves than before.

My teammates and I are data scientists in the LIGO collaboration, and we have been working on a number of upgrades to the software used to process LIGO data and to the algorithms that recognize signs of gravitational waves in that data. These algorithms work by searching for patterns that match theoretical models of millions of possible black hole and neutron star merger events. The improved algorithms should be able to pick out faint signs of gravitational waves from background noise in the data more easily than previous versions could.
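The heart of such a search can be illustrated with a toy matched filter: slide a template waveform across noisy data and look for a spike in correlation. The sketch below is only a schematic of the principle, not LIGO's production pipeline; the chirp-like template, noise level and injection point are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "chirp" template: a sinusoid whose frequency sweeps upward,
# loosely mimicking the signal from an inspiraling binary.
t = np.linspace(0, 1, 4096)
template = np.sin(2 * np.pi * (30 * t + 40 * t**2)) * np.hanning(t.size)

# Synthetic detector data: Gaussian noise plus a faint copy of the
# template buried at a known offset.
data = rng.normal(scale=1.0, size=16384)
offset = 9000
data[offset:offset + template.size] += 0.5 * template

# Matched filter: correlate the data against the template and find
# the offset where the correlation peaks.
snr = np.correlate(data, template, mode="valid")
best = int(np.argmax(np.abs(snr)))
print(f"Strongest match near sample {best} (signal injected at {offset})")
```

Even though the injected signal is half the amplitude of the noise and invisible by eye, the correlation against the template should pick out its location.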

A GIF showing a star brightening over a few days.
Astronomers have captured both the gravitational waves and light produced by a single event, the merger of two neutron stars. The change in light can be seen over the course of a few days in the top right inset.
Hubble Space Telescope, NASA and ESA

A hi-def era of astronomy

In early May 2023, LIGO began a short test run – called an engineering run – to make sure everything was working. On May 18, LIGO detected gravitational waves likely produced from a neutron star merging into a black hole.

LIGO’s 20-month observation run O4 will officially start on May 24, and LIGO will later be joined in the run by Virgo and a new Japanese observatory – the Kamioka Gravitational Wave Detector, or KAGRA.

While there are many scientific goals for this run, there is a particular focus on detecting and localizing gravitational waves in real time. If the team can identify a gravitational wave event, figure out where the waves came from and alert other astronomers to these discoveries quickly, it would enable astronomers to point other telescopes that collect visible light, radio waves or other types of data at the source of the gravitational wave. Collecting multiple channels of information on a single event – multi-messenger astrophysics – is like adding color and sound to a black-and-white silent film and can provide a much deeper understanding of astrophysical phenomena.

To date, astronomers have observed only one event in both gravitational waves and visible light – the merger of two neutron stars seen in 2017. But from this single event, physicists were able to study the expansion of the universe and confirm the origin of some of the universe’s most energetic events, known as gamma-ray bursts.

With run O4, astronomers will have access to the most sensitive gravitational wave observatories in history and hopefully will collect more data than ever before. My colleagues and I are hopeful that the coming months will result in one – or perhaps many – multi-messenger observations that will push the boundaries of modern astrophysics.

Chad Hanna, Professor of Physics, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Public health surveillance, from social media to sewage, spots disease outbreaks early to stop them fast

Published on November 21, 2024

Health officials work to connect the dots during the early stages of an outbreak.
Maxiphoto/iStock via Getty Images Plus

John Duah, Auburn University

A cluster of people talking on social media about their mysterious rashes. A sudden die-off of birds at a nature preserve. A big bump in patients showing up to a city’s hospital emergency rooms.

These are the kinds of events that public health officials are constantly on the lookout for as they watch for new disease threats.

Health emergencies can range from widespread infectious disease outbreaks to natural disasters and even acts of terrorism. The scope, timing or unexpected nature of these events can overwhelm routine health care capacities.

I am a public health expert with a background in strengthening health systems, infectious disease surveillance and pandemic preparedness.

Rather than winging it when an unusual health event crops up, health officials take a systematic approach. There are structures in place to collect and analyze data to guide their response. Public health surveillance is foundational for figuring out what’s going on and hopefully squashing any outbreak before it spirals out of control.

Tracking day by day

Indicator-based surveillance is the routine, systematic collection of specific health data from established reporting systems. It monitors trends over time; the goal is to detect anomalies or patterns that may signal a widespread or emerging public health threat.

Hospitals are legally required to report data on admissions and positive test results for specific diseases, such as measles or polio, to local health departments. The local health officials then compile the pertinent data and share it with state or national public health agencies, such as the U.S. Centers for Disease Control and Prevention.

When doctors diagnose a positive case of influenza, for example, they report it through the National Respiratory and Enteric Virus Surveillance System, which tracks respiratory and gastrointestinal illnesses. A rise in the number of cases could be a warning sign of a new outbreak. Likewise, the National Syndromic Surveillance Program collects anonymized data from emergency departments about patients who report symptoms such as fever, cough or respiratory distress.
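A bare-bones version of this kind of trend monitoring fits in a few lines of Python. The weekly counts and alert threshold below are invented for illustration; real surveillance systems use far more sophisticated statistical baselines:

```python
import statistics

# Hypothetical weekly case counts reported for one condition.
weekly_cases = [12, 15, 11, 14, 13, 16, 12, 14, 13, 41]

# Baseline computed from the historical weeks (all but the latest).
history = weekly_cases[:-1]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

# Flag the latest week if it sits far above the baseline.
latest = weekly_cases[-1]
z_score = (latest - mean) / stdev
if z_score > 3:  # threshold chosen for illustration
    print(f"Alert: {latest} cases this week (z = {z_score:.1f})")
```

Here the final week's 41 cases stand roughly 17 standard deviations above the baseline of about 13, exactly the kind of anomaly that would prompt a closer look.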

Public health officials keep an eye on wastewater as well. A variety of pathogens shed by infected people, who may be asymptomatic, can be identified in sewage. The CDC created the National Wastewater Surveillance System to help track the virus that causes COVID-19. Since the pandemic, it’s expanded in some areas to monitor additional pathogens, including influenza, respiratory syncytial virus (RSV) and norovirus. Wastewater surveillance adds another layer of data, allowing health officials to catch potential outbreaks in the community, even when many infected individuals show no symptoms and may not seek medical care.

Having these surveillance systems in place allows health experts to detect early signs of possible outbreaks and gives them time to plan and respond effectively.

Lots of people wearing PPE in a hospital hallway.
An extremely busy emergency room could be a signal that an outbreak is underway.
Jeffrey Basinger/Newsday via Getty Images

Watching for anything outside the norm

Event-based surveillance watches in real time for anything that could indicate the start of an outbreak.

This can look like health officials tracking rumors, news articles or social media mentions of unusual illnesses or sudden deaths. Or it can be emergency room reports of unusual spikes in numbers of patients showing up with specific symptoms.

Local health care workers, community leaders and the public all support this kind of public health surveillance when they report unexpected health events through hotlines and online forms or just call, text or email their public health department. Local health workers can assess the information and escalate it to state or national authorities.

Public health officials have their ears to the ground in these various ways simultaneously. When they suspect the start of an outbreak, a number of teams spring into action, deploying different, coordinated responses.

Collecting samples for more analysis

Once event-based surveillance has picked up an unusual report or a sudden pattern of illness, health officials try to gather medical samples to get more information about what might be going on. They may focus on people, animals or specific locations, depending on the suspected source. For example, during an avian flu outbreak, officials take swabs from birds, both live and dead, and blood samples from people who have been exposed.

Health workers collect material ranging from nose or throat swabs to fecal, blood or tissue samples to water and soil samples. Back in specialized laboratories, technicians analyze the samples, trying to identify a specific pathogen, determine whether it is contagious and evaluate how it might spread. Ultimately, scientists are trying to figure out the potential impact on public health.

Finding people who may have been exposed

Once an outbreak is detected, the priority quickly shifts to containment to prevent further spread. Public health officials turn into detectives, working to identify people who may have had direct contact with a known infected person. This process is called contact tracing.

Often, contact tracers work backward from a positive laboratory confirmation of the index case – that is, the first person known to be infected with a particular pathogen. Based on interviews with the patient and visits to places they had been, the local health department will reach out to people who may have been exposed. Health workers can then provide guidance about how to monitor potential symptoms, arrange testing or advise about isolating for a set amount of time to prevent further spread.
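Conceptually, contact tracing is a breadth-first walk outward from the index case through a network of exposures. Here is a minimal sketch of that idea with a made-up contact list; in practice the "graph" is assembled piecemeal from interviews and case records:

```python
from collections import deque

# Hypothetical exposure network: person -> people they had close contact with.
contacts = {
    "index_case": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["dave", "alice"],
    "carol": [],
    "dave": ["erin"],
    "erin": [],
}

def trace(start, max_degree=2):
    """Return everyone within max_degree contacts of the start case."""
    seen = {start}
    queue = deque([(start, 0)])
    exposed = []
    while queue:
        person, degree = queue.popleft()
        if degree == max_degree:
            continue  # stop expanding past the chosen depth
        for contact in contacts.get(person, []):
            if contact not in seen:
                seen.add(contact)
                exposed.append((contact, degree + 1))
                queue.append((contact, degree + 1))
    return exposed

print(trace("index_case"))
# [('alice', 1), ('bob', 1), ('carol', 2), ('dave', 2)]
```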

A truck advertising the ‘COVID Trace’ app.
Many states, including Nevada, set up contact tracing apps to help people determine whether they may have been exposed to the coronavirus.
Gabe Ginsberg/Experience Strategy Associates via Getty Images

Contact tracing played a pivotal role during the early days of the COVID-19 pandemic, helping health departments monitor possible cases and take immediate action to protect public health. By focusing on people who had been in close contact with a confirmed case, public health agencies could break the chain of transmission and direct critical resources to those who were affected.

Though contact tracing is labor- and resource-intensive, it is a highly effective method of stopping outbreaks before they become unmanageable. In order for contact tracing to be effective, though, the public has to cooperate and comply with public health measures.

Stopping an outbreak before it’s a pandemic

Ultimately, public health officials want to keep as many people as possible from getting sick. Strategies to try to contain an outbreak include isolating patients with confirmed cases, quarantining those who have been exposed and, if necessary, imposing travel restrictions. For cases involving animal-to-human transmission, such as bird flu, containment measures may also include strict protocols on farms to prevent further spread.

Health officials use predictive models and data analysis tools to anticipate spread patterns and allocate resources effectively. Hospitals can streamline infection control based on these forecasts, while health care workers receive timely updates and training in response protocols. This process ensures that everyone is informed and ready to act to maximize public safety.

No one knows what the next emerging disease will be. But public health workers are constantly scanning the horizon for threats and ready to jump into action.

John Duah, Assistant Professor of Health Services Administration, Auburn University

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Doctor’s bills often come with sticker shock for patients − but health insurance could be reinvented to provide costs upfront

Published on November 21, 2024

The price of the doctor’s visit you calculated online might not reflect what you’ll actually be billed.
CSA Images/Getty Images

Michal Horný, UMass Amherst

You have scheduled an appointment with a health care provider, but no matter how hard you try, no one seems to be able to reliably tell you how much that visit will cost you. Will you have to pay US$20, $1,000 – or even more?

Patients are increasingly on the hook for health care costs through deductibles, co-pays and other fees. As a result, patients are demanding credible cost information before appointments to choose where they seek care and control their budget.

Yet, in spite of recent legislation and regulations, upfront information on patient out-of-pocket costs is still difficult to obtain from both health care providers and insurers.

Predicting out-of-pocket costs

Why is it so difficult to tell patients in advance how much their care is going to cost?

This is a question health economists like me try to answer. Although the fundamental reason is simply the unpredictable nature of health care, the fact that it translates to unpredictable out-of-pocket costs for patients is a policy choice.

Health insurance plans in the U.S., such as Medicare and Medicare Advantage, as well as most individual and group plans, leave a portion of the cost of care for patients to settle out of pocket. This cost-sharing includes deductibles – the amount patients have to pay for services before their insurance kicks in – and coinsurance, a percentage of the cost of care that patients must pay after they have met their deductible.
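To see how these pieces interact, here is a small Python sketch with invented plan numbers; actual benefit designs vary widely:

```python
def out_of_pocket(bill, deductible_left, coinsurance_rate):
    """Patient's share of one bill under a deductible + coinsurance plan."""
    # The patient pays the full bill until the deductible is exhausted...
    paid_toward_deductible = min(bill, deductible_left)
    # ...then a percentage (coinsurance) of whatever remains.
    remainder = bill - paid_toward_deductible
    return paid_toward_deductible + remainder * coinsurance_rate

# Hypothetical plan: $1,000 of deductible remaining, 20% coinsurance after.
print(out_of_pocket(bill=400, deductible_left=1000, coinsurance_rate=0.20))   # 400.0
print(out_of_pocket(bill=2500, deductible_left=1000, coinsurance_rate=0.20))  # 1300.0
```

The key point: the patient’s share depends on the final bill, which is precisely the quantity that cannot be known before care is delivered.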

Understandably, most patients want to know their out-of-pocket costs before a doctor’s office visit or a trip to the hospital. However, the cost of care – and thus the percentage of the cost patients will pay – often isn’t available until after care has been delivered. This is because of the way health care providers are paid for their work.

Stethoscope lying on top of health insurance bill
How many health care services you’ll need for a given illness or procedure can be unpredictable.
DNY59/E+ via Getty Images

Health care providers typically seek payments for each patient retrospectively, based on the volume and intensity of services they have delivered. But both are hard to predict. A physician usually needs to see a patient before deciding how to address their health care needs. Sometimes, an extra test or imaging scan is needed to confirm a diagnosis or plan treatment.

Crucially, a variety of unexpected complications can occur even during routine procedures. Addressing these unforeseen complications often requires providing unanticipated services and involving other health care providers who might not have been part of the visit otherwise. And these extra services cost money.

As long as policymakers keep health care payments tied to the volume and intensity of performed medical services – which are uncertain – and patient cost-sharing tied to health care payments, patients will not be able to know what their out-of-pocket costs will be in advance. Simply making health care service prices publicly available will not change that.

What can be done to guarantee out-of-pocket costs before patients have their appointments?

Health care delivery as a supply chain

One idea researchers have proposed is to reorganize health care delivery into a supply chain. This would shift production risk to health care providers similarly to how other complex products are offered to consumers.

Consider air travel tickets. Consumers taking a flight from one city to another receive services from multiple entities, such as airlines, airports, aviation fuel suppliers and catering companies. Many of these entities face operational uncertainties such as departure delays or variable fuel consumption due to unpredictable weather. But airlines – as the final link in the supply chain – provide consumers with upfront prices for the entire trip.

The No Surprises Act reduces patient bills from out-of-network providers.

In health care, the principal provider from whom a patient seeks care could serve as the price-guaranteeing entity. They would collect a single, guaranteed price for the appointment and compensate other providers involved as needed. Some researchers have proposed aspects of this idea as a potential way to reduce surprise billing from out-of-network emergency physicians working at in-network hospitals.

However, such a major reorganization of health care delivery would be extremely challenging, as it would require all providers to enter into new contractual arrangements with one another. It would not only be a legal undertaking of unprecedented scale, but it could also end up being financially devastating for small physician practices.

Co-payment-only health plans

There are other approaches to providing patients with reliable, upfront prices that would not require a complete overhaul of the health care system. The U.S. already has much of the needed infrastructure in place: health insurance.

A primary purpose of health insurance is to protect beneficiaries from financial shocks. Health insurers could modify the benefit design of policies to ensure patients obtain guaranteed out-of-pocket cost information before receiving care.

One way to achieve that would be saying goodbye to deductibles and coinsurance and having insured patients pay for their care only in the form of co-payments – fixed dollar amounts per encounter, such as $20 per doctor’s visit, $35 per prescription drug fill or $500 per hospital stay. Some insurance plans already offer this.

However, this approach removes incentives for patients to seek care from providers that offer quality services at a low price. It also could potentially increase monthly health insurance costs, also called premiums.

A person with head in hand in front of a laptop, holding a medical bill as another person looks on.
Improving how health care is delivered could make for more transparent out-of-pocket costs for patients.
skynesher/E+ via Getty Images

Innovative health insurance design

Based on my own research, I propose that an alternative solution to providing patients with reliable, upfront prices could be implementing episode-based cost-sharing into health insurance plans.

Under this model, health insurers would create bundles of services that patients may receive during a health care visit. This approach would provide patients with a single upfront price for the entire bundle based only on factors known in advance, such as their health insurance benefits and who their principal health care provider is. For example, you would have a guaranteed price tag for the cost of going to the hospital to give birth to a child or replace a joint.
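In code, the contrast with the deductible-and-coinsurance arithmetic sketched earlier is stark: under episode-based cost-sharing, the patient’s price becomes a lookup on facts known before the visit. The episode names and dollar amounts here are invented for illustration:

```python
# Hypothetical episode bundles: (episode, plan tier) -> guaranteed patient price.
bundle_prices = {
    ("childbirth", "silver"): 2500,
    ("childbirth", "gold"): 1500,
    ("knee_replacement", "silver"): 3000,
    ("knee_replacement", "gold"): 2000,
}

def upfront_price(episode, plan_tier):
    """Guaranteed out-of-pocket price, knowable before the appointment."""
    return bundle_prices[(episode, plan_tier)]

# The patient's cost no longer depends on what happens during care;
# overruns from unforeseen complications are absorbed by the insurer.
print(upfront_price("childbirth", "gold"))  # 1500
```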

Any deviation from the ultimate cost of care due to unforeseen situations patients have little control over would be borne by the insurer. That is what insurers do for a living – they know how to manage risk. Such a modification to health insurance benefit design would protect patients from unexpected health care costs, while preserving the incentive to seek care with high-value providers. It would also help keep health insurance premiums intact.

Seeking care for a health concern is already stressful. It does not have to be more stressful because of cost uncertainty. Several approaches to help patients know how much their care is going to cost in advance are available for policymakers to consider. In the meantime, patients may need to pick up the phone, call their hospital billing office and hope that the amount they obtain will be close to the amount they will eventually find on their medical bills.

Michal Horný, Assistant Professor of Health Policy and Management, UMass Amherst

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Transplanting insulin-making cells to treat Type 1 diabetes is challenging − but stem cells offer a potential improvement

Published on November 20, 2024

The islets of Langerhans play a crucial role in blood sugar regulation.
Fayette A Reynolds/Berkshire Community College Bioscience Image Library via Flickr

Vinny Negi, University of Pittsburgh

Diabetes develops when the body fails to manage its blood glucose levels. One form of diabetes causes the body to not produce insulin at all. Called Type 1 diabetes, or T1D, this autoimmune disease happens when the body’s defense system mistakes its own insulin-producing cells as foreign and kills them. T1D can lead patients to lose an average of 32 years of healthy life.

Current treatment for T1D involves lifelong insulin injections. While effective, patients taking insulin risk developing low blood glucose levels, which can cause symptoms such as shakiness, irritability, hunger, confusion and dizziness. Severe cases can result in seizures or unconsciousness. Real-time blood glucose monitors and injection devices can help avoid low blood sugar levels by controlling insulin release, but they don’t work for some patients.

For these patients, a treatment called islet transplantation can help better control blood glucose by giving them both new insulin-producing cells as well as cells that prevent glucose levels from falling too low. However, it is limited by donor availability and the need to use immunosuppressive drugs. Only about 10% of T1D patients are eligible for islet transplants.

In my work as a diabetes researcher, my colleagues and I have found that making islets from stem cells can help overcome transplantation challenges.

History of islet transplantation

Islet transplantation for Type 1 diabetes was FDA approved in 2023 after more than a century of investigation.

Insulin-producing cells, also called beta cells, are located in regions of the pancreas called islets of Langerhans. They are present in clusters of cells that produce other hormones involved in metabolism, such as glucagon, which increases blood glucose levels; somatostatin, which inhibits insulin and glucagon; and ghrelin, which signals hunger. Anatomist Paul Langerhans discovered islets in 1869 while studying the microscopic anatomy of the pancreas, observing that these cell clusters stained distinctly from other cells.

The road to islet transplantation has faced many hurdles since pathologist Gustave-Édouard Laguesse first speculated about the role islets play in hormone production in the late 19th century. In 1893, researchers attempted to treat a 13-year-old boy dying of diabetes with a sheep pancreas transplant. While they saw a slight improvement in blood glucose levels, the boy died three days after the procedure.

Microscopy image of oblong blob of yellow and pink cells surrounded by violet cells
The islets of Langerhans, located in the pancreas and colored yellow here, secrete hormones such as insulin and glucagon.
Steve Gschmeissner/Science Photo Library via Getty Images

Interest in islet transplantation was renewed in 1972, when scientist Paul E. Lacy successfully transplanted islets in a diabetic rat. After that, many research groups tried islet transplantation in people, with limited or no success.

In 1999, transplant surgeon James Shapiro and his team in Edmonton, Canada, successfully transplanted islets into seven patients by using a large number of islets from two to three donors at once along with immunosuppressive drugs. Through the Edmonton protocol, these patients were able to manage their diabetes without insulin for a year. By 2012, over 1,800 patients had undergone islet transplants based on this technique, and about 90% survived through seven years of follow-up. The first FDA-approved islet transplant therapy is based on the Edmonton protocol.

Stem cells as a source of islets

Islet transplantation is now considered a minor surgery, in which islets are injected into a vein in the liver using a catheter. As simple as it may seem, the procedure comes with many challenges, including its high cost and the limited availability of donor islets. Transplantation also requires lifelong use of immunosuppressive drugs that allow the foreign islets to live and function in the body. But the use of immunosuppressants increases the risk of infections.

To overcome these challenges, researchers are looking into using stem cells to create an unlimited source of islets.

There are two kinds of stem cells scientists are using for islet transplants: embryonic stem cells, or ESCs, and induced pluripotent stem cells, or iPSCs. Both types can mature into islets in the lab.

Each has benefits and drawbacks.

There are ethical concerns regarding ESCs, since they are obtained from dead human embryos. Transplanting ESC-derived islets would also still require immunosuppressive drugs, limiting their use. Thus, researchers are working to either encapsulate ESC islets or genetically modify them to protect them from the body’s immune system.

Conversely, iPSCs are obtained from skin, blood or fat cells of the patient undergoing transplantation. Since the transplant involves the patient’s own cells, it bypasses the need for immunosuppressive drugs. But the cost of generating iPSC islets for each patient is a major barrier.

A long life with Type 1 diabetes is possible.

Stem cell islet challenges

While iPSCs could theoretically avoid the need for immunosuppressive drugs, this method still needs to be tested in the clinic.

T1D patients whose disease is caused by genetic mutations currently cannot use iPSC islets, since the cells taken to create the stem cells may carry the same disease-causing mutation as their islet cells. Many available gene-editing tools could potentially remove those mutations and generate functional iPSC islets.

In addition to the challenge of genetic tweaking, price is a major issue for islet transplantation. Transplanting islets made from stem cells is more expensive than insulin therapy because of higher manufacturing costs. Efforts to scale up the process and make it more cost effective include creating biobanks for iPSC matching. This would allow iPSC islets to be used for more than one patient, reducing costs by avoiding the need to generate freshly modified islets for each patient. Embryonic stem cell islets have a similar advantage, as the same batch of cells can be used for all patients.

There is also a risk of tumors forming from these stem cell islets after transplantation. So far, lab studies on rodents and clinical trials in people have rarely shown any cancer. This suggests the chances of these cells forming a tumor are low.

That being said, many rounds of research and development are required before stem cell islets can be used in the clinic. It is a laborious trek, but I believe a few more optimizations can help researchers beat diabetes and save lives.

Article updated to clarify that Type 1 diabetes causes the body to not produce insulin.

Vinny Negi, Research Scientist in Endocrinology and Metabolism, University of Pittsburgh

This article is republished from The Conversation under a Creative Commons license. Read the original article.
