
A brief history of Medicaid and America’s long struggle to establish a health care safety net


theconversation.com – Ben Zdencanovic, Postdoctoral Associate in History and Policy, University of California, Los Angeles – 2025-03-18 07:53:00

President Lyndon B. Johnson, left, next to former President Harry S. Truman, signs into law the measure creating Medicare and Medicaid in 1965.
AP Photo

Ben Zdencanovic, University of California, Los Angeles

The Medicaid system has emerged as an early target of the Trump administration’s campaign to slash federal spending. A joint federal and state program, Medicaid provides health insurance coverage for more than 72 million people, including low-income Americans and their children and people with disabilities. It also helps foot the bill for long-term care for older people.

In late February 2025, House Republicans advanced a budget proposal that would potentially cut US$880 billion from Medicaid over 10 years. President Donald Trump has backed that House budget despite repeatedly vowing on the campaign trail and during his team’s transition that Medicaid cuts were off the table.

Medicaid covers one-fifth of all Americans at an annual cost that, coincidentally, also totals about $880 billion, $600 billion of which is funded by the federal government. Economists and public health experts have argued that big Medicaid cuts would lead to fewer Americans getting the health care they need and would further strain low-income families’ finances.

As a historian of social policy, I recently led a team that produced the first comprehensive historical overview of Medi-Cal, California’s statewide Medicaid system. Like the broader Medicaid program, Medi-Cal emerged as a compromise after Democrats failed to achieve their goal of establishing universal health care in the 1930s and 1940s.

Instead, the United States developed its current fragmented health care system, with employer-provided health insurance covering most working-age adults, Medicare covering older Americans, and Medicaid as a safety net for at least some of those left out.

Health care reformers vs. the AMA

Medicaid’s history officially began in 1965, when President Lyndon B. Johnson signed the system into law, along with Medicare. But the seeds for this program were planted in the 1930s and 1940s. When President Franklin D. Roosevelt’s administration was implementing its New Deal agenda in the 1930s, many of his advisers hoped to include a national health insurance system as part of the planned Social Security program.

Those efforts failed after a heated debate. The 1935 Social Security Act created the old-age and unemployment insurance systems we have today, with no provisions for health care coverage.

Nevertheless, during and after World War II, liberals and labor unions backed a bill that would have added a health insurance program into Social Security.

Harry Truman assumed the presidency after Roosevelt’s death in 1945. He enthusiastically embraced that legislation, which evolved into the “Truman Plan.” The American Medical Association, a trade group representing most of the nation’s doctors, feared heightened regulation and government control over the medical profession. It lobbied against any form of public health insurance.

This PBS ‘Origin of Everything!’ video sums up how the U.S. wound up with its complex health care system.

During the late 1940s, the AMA poured millions of dollars into a political advertising campaign to defeat Truman’s plan. Instead of mandatory government health insurance, the AMA supported voluntary, private health insurance plans. Private plans such as those offered by Kaiser Permanente had become increasingly popular in the 1940s in the absence of a universal system. Labor unions began to demand them in collective bargaining agreements.

The AMA insisted that these private, employer-provided plans were the “American way,” as opposed to the “compulsion” of a health insurance system operated by the federal government. In widely distributed radio commercials and print ads, it derided universal health care as “socialized medicine.”

In the anticommunist climate of the late 1940s, these tactics proved highly successful at eroding public support for government-provided health care. Efforts to create a system that would have provided everyone with health insurance were soundly defeated by 1950.

JFK and LBJ

Private health insurance plans grew more common throughout the 1950s.

Federal tax incentives, as well as a desire to maintain the loyalty of their professional and blue-collar workers alike, spurred companies and other employers to offer private health insurance as a standard benefit. Healthy, working-age, employed adults – most of whom were white men – increasingly gained private coverage. So did their families, in many cases.

Everyone else – people with low incomes, those who weren’t working and people over 65 – had few options for health care coverage. Then, as now, Americans without private health insurance tended to have more health problems than those who had it, meaning that they also needed more of the health care they struggled to afford.

But this also made them risky and unprofitable for private insurance companies, which typically charged them high premiums or, more often, declined to cover them at all.

Health care activists saw an opportunity. Veteran health care reformers such as Wilbur Cohen of the Social Security Administration, having lost the battle for universal coverage, envisioned a narrower program of government-funded health care for people over 65 and those with low incomes. Cohen and other reformers reasoned that if these populations could get coverage in a government-provided health insurance program, it might serve as a step toward an eventual universal health care system.

While President John F. Kennedy endorsed these plans, they would not be enacted until Johnson was sworn in following JFK’s assassination. In 1965, Johnson signed a landmark health care bill into law under the umbrella of his “Great Society” agenda, which also included antipoverty programs and civil rights legislation.

That law created Medicare and Medicaid.

From Reagan to Trump

As Medicaid enrollment grew throughout the 1970s and 1980s, conservatives increasingly conflated the program with the stigma of what they dismissed as unearned “welfare.” In the 1970s, California Gov. Ronald Reagan developed his national reputation as a leading figure in the conservative movement in part through his high-profile attempts to cut and privatize Medicaid services in his state.

Upon assuming the presidency in the early 1980s, Reagan slashed federal funding for Medicaid by 18%. The cuts resulted in some 600,000 people who depended on Medicaid suddenly losing their coverage, often with dire consequences.

Medicaid spending has since grown, but the program has been a source of partisan debate ever since.

In the 1990s and 2000s, Republicans attempted to change how Medicaid was funded. Instead of having the federal government match state spending at rates tied to each state’s needs, they proposed a block grant system: The federal government would contribute a fixed amount to a state’s Medicaid budget, making it easier to constrain the program’s costs and potentially limiting how much health care it could fund.

These efforts failed, but Trump revived the idea during his first term. Block grants are also among the ideas House Republicans have floated since the start of Trump’s second term to achieve the spending cuts they seek.

Protesters in New York City object to Medicaid cuts sought by the first Trump administration in 2017.
Erik McGregor/LightRocket via Getty Images

The ACA’s expansion

The 2010 Affordable Care Act greatly expanded the Medicaid program by extending its coverage to adults with incomes at or below 138% of the federal poverty line. All but 10 states have joined the Medicaid expansion, which a U.S. Supreme Court ruling made optional.

As of 2023, Medicaid was the country’s largest source of public health insurance, making up 18% of health care expenditures and over half of all spending on long-term care. Medicaid covers nearly 4 in 10 children and 80% of children who live in poverty. Medicaid is a particularly crucial source of coverage for people of color and pregnant women. It also helps pay for low-income people who need skilled nursing and round-the-clock care to live in nursing homes.

In the absence of a universal health care system, Medicaid fills many of the gaps left by private insurance policies for millions of Americans. From Medi-Cal in California to Husky Health in Connecticut, Medicaid is a crucial pillar of the health care system. This makes the proposed House cuts easier said than done.

Ben Zdencanovic, Postdoctoral Associate in History and Policy, University of California, Los Angeles

This article is republished from The Conversation under a Creative Commons license. Read the original article.


How does your brain create new memories? Neuroscientists discover ‘rules’ for how neurons encode new information


theconversation.com – William Wright, Postdoctoral Scholar in Neurobiology, University of California, San Diego – 2025-04-17 13:00:00

Neurons that fire together sometimes wire together.
PASIEKA/Science Photo Library via Getty Images

William Wright, University of California, San Diego and Takaki Komiyama, University of California, San Diego

Every day, people are constantly learning and forming new memories. When you pick up a new hobby, try a recipe a friend recommended or read the latest world news, your brain stores many of these memories for years or decades.

But how does your brain achieve this incredible feat?

In our newly published research in the journal Science, we have identified some of the “rules” the brain uses to learn.

Learning in the brain

The human brain is made up of billions of nerve cells. These neurons conduct electrical pulses that carry information, much as computers use binary code to carry data.

These electrical pulses are communicated to other neurons through connections between them called synapses. Individual neurons have branching extensions known as dendrites that can receive thousands of electrical inputs from other cells. Dendrites transmit these inputs to the main body of the neuron, which then integrates all these signals to generate its own electrical pulses.

It is the collective activity of these electrical pulses across specific groups of neurons that forms the representations of different information and experiences within the brain.

Neurons are the basic units of the brain.
OpenStax, CC BY-SA

For decades, neuroscientists have thought that the brain learns by changing how neurons are connected to one another. As new information and experiences alter how neurons communicate with each other and change their collective activity patterns, some synaptic connections are made stronger while others are made weaker. This process of synaptic plasticity is what produces representations of new information and experiences within your brain.

In order for your brain to produce the correct representations during learning, however, the right synaptic connections must undergo the right changes at the right time. The “rules” that your brain uses to select which synapses to change during learning – what neuroscientists call the credit assignment problem – have remained largely unclear.

Defining the rules

We decided to monitor the activity of individual synaptic connections within the brain during learning to see whether we could identify activity patterns that determine which connections would get stronger or weaker.

To do this, we genetically encoded biosensors in the neurons of mice that would light up in response to synaptic and neural activity. We monitored this activity in real time as the mice learned a task that involved pressing a lever to a certain position after a sound cue in order to receive water.

We were surprised to find that the synapses on a neuron don’t all follow the same rule. Scientists have often assumed that neurons follow what are called Hebbian rules, under which neurons that consistently fire together wire together. Instead, we saw that synapses at different locations on the dendrites of the same neuron followed different rules to determine whether connections got stronger or weaker. Some adhered to the traditional Hebbian rule, strengthening their connections when input and output consistently fired together. Others changed in ways that were completely independent of the neuron’s own activity.

Our findings suggest that neurons, by simultaneously using two different sets of rules for learning across different groups of synapses, rather than a single uniform rule, can more precisely tune the different types of inputs they receive to appropriately represent new information in the brain.

In other words, by following different rules in the process of learning, neurons can multitask and perform multiple functions in parallel.
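To make the contrast concrete, here is a toy sketch in Python of a Hebbian update alongside an update that ignores the neuron’s own output. It is purely illustrative: the activity traces, learning rates and decay terms are invented, and this is not the model from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spike trains: True when a cell fires in a given time step.
steps = 1000
pre_a = rng.random(steps) < 0.2   # presynaptic input, group A
pre_b = rng.random(steps) < 0.2   # presynaptic input, group B
post = rng.random(steps) < 0.2    # the neuron's own output spikes

w_a, w_b = 0.5, 0.5  # synaptic weights
eta = 0.01           # learning rate (arbitrary)

for t in range(steps):
    # Hebbian rule for group A: strengthen only when the input and the
    # neuron fire in the same time step ("fire together, wire together"),
    # with a small decay when the input fires alone.
    if pre_a[t] and post[t]:
        w_a += eta
    elif pre_a[t]:
        w_a -= 0.25 * eta

    # Non-Hebbian rule for group B: the update depends only on
    # presynaptic activity, independent of the neuron's output.
    if pre_b[t]:
        w_b += 0.1 * eta

print(f"Hebbian synapse weight:     {w_a:.3f}")
print(f"Non-Hebbian synapse weight: {w_b:.3f}")
```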

Future applications

This discovery provides a clearer understanding of how the connections between neurons change during learning. Given that most brain disorders, including degenerative and psychiatric conditions, involve some form of malfunctioning synapses, this has potentially important implications for human health and society.

For example, depression may develop from an excessive weakening of the synaptic connections within certain areas of the brain that make it harder to experience pleasure. By understanding how synaptic plasticity normally operates, scientists may be able to better understand what goes wrong in depression and then develop therapies to more effectively treat it.

Changes to connections in the amygdala – colored green – are implicated in depression.
William J. Giardino/Luis de Lecea Lab/Stanford University via NIH/Flickr, CC BY-NC

These findings may also have implications for artificial intelligence. The artificial neural networks underlying AI have largely been inspired by how the brain works. However, the learning rules researchers use to update the connections within the networks and train the models are usually uniform and also not biologically plausible. Our research may provide insights into how to develop more biologically realistic AI models that are more efficient, have better performance, or both.
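As an illustration of what non-uniform rules might look like in an artificial network, here is a toy sketch in which half of a layer’s weights learn with a standard error-driven update and the other half with a local Hebbian update. The target mapping, learning rate and decay term are invented; this is a sketch of the general idea, not a method from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny linear layer: y = W @ x, with the output units split into two
# groups whose incoming weights learn under different rules.
n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))
eta = 0.01

for _ in range(2000):
    x = rng.normal(size=n_in)
    target = np.tanh(x[:n_out])  # invented target mapping
    y = W @ x
    error = target - y

    # Group 1 (first two units): error-driven delta rule, the kind of
    # uniform update typical of artificial neural networks.
    W[:2] += eta * np.outer(error[:2], x)

    # Group 2 (last two units): local Hebbian update that never sees
    # the error signal, with weight decay to keep it bounded.
    W[2:] += eta * (np.outer(y[2:], x) - 0.1 * W[2:])

print("Error of delta-rule units:", np.abs(error[:2]).mean().round(3))
print("Error of Hebbian units:   ", np.abs(error[2:]).mean().round(3))
```

The error-driven group converges toward the target while the Hebbian group drifts according to its own local statistics, which is one way to picture neurons multitasking with different rules at different synapses.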

There is still a long way to go before we can use this information to develop new therapies for human brain disorders. While we found that synaptic connections on different groups of dendrites use different learning rules, we don’t know exactly why or how. In addition, while the ability of neurons to simultaneously use multiple learning methods increases their capacity to encode information, what other properties this may give them isn’t yet clear.

Future research will hopefully answer these questions and further our understanding of how the brain learns.

William Wright, Postdoctoral Scholar in Neurobiology, University of California, San Diego and Takaki Komiyama, Professor of Neurobiology, University of California, San Diego

This article is republished from The Conversation under a Creative Commons license. Read the original article.


OpenAI beats DeepSeek on sentence-level reasoning


theconversation.com – Manas Gaur, Assistant Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County – 2025-04-17 07:42:00

DeepSeek’s language AI rocked the tech industry, but it comes up short on one measure.
Lionel Bonaventure/AFP via Getty Images

Manas Gaur, University of Maryland, Baltimore County

ChatGPT and other AI chatbots based on large language models are known to occasionally make things up, including scientific and legal citations. It turns out that measuring how accurate an AI model’s citations are is a good way of assessing the model’s reasoning abilities.

An AI model “reasons” by breaking down a query into steps and working through them in order. Think of how you learned to solve math word problems in school.

Ideally, to generate citations an AI model would understand the key concepts in a document, generate a ranked list of relevant papers to cite, and provide convincing reasoning for how each suggested paper supports the corresponding text. It would highlight specific connections between the text and the cited research, clarifying why each source matters.

The question is, can today’s models be trusted to make these connections and provide clear reasoning that justifies their source choices? The answer goes beyond citation accuracy to address how useful and accurate large language models are for any information retrieval purpose.

I’m a computer scientist. My colleagues – researchers from the AI Institute at the University of South Carolina, Ohio State University and University of Maryland, Baltimore County – and I have developed the Reasons benchmark to test how well large language models can automatically generate research citations and provide understandable reasoning.

We used the benchmark to compare the performance of two popular AI reasoning models, DeepSeek’s R1 and OpenAI’s o1. Though DeepSeek made headlines with its stunning efficiency and cost-effectiveness, the Chinese upstart has a way to go to match OpenAI’s reasoning performance.

Sentence specific

The accuracy of citations has a lot to do with whether the AI model is reasoning about information at the sentence level rather than paragraph or document level. Paragraph-level and document-level citations can be thought of as throwing a large chunk of information into a large language model and asking it to provide many citations.

In this process, the large language model overgeneralizes and misinterprets individual sentences. The user ends up with citations that explain the whole paragraph or document, not the relatively fine-grained information in the sentence.

Further, reasoning suffers when you ask a large language model to read through an entire document. These models mostly rely on memorized patterns, and they are typically better at finding those patterns at the beginning and end of longer texts than in the middle. This makes it difficult for them to fully understand all the important information throughout a long document.

Large language models get confused because paragraphs and documents hold a lot of information, which affects citation generation and the reasoning process. Consequently, reasoning from large language models over paragraphs and documents becomes more like summarizing or paraphrasing.

The Reasons benchmark addresses this weakness by examining large language models’ citation generation and reasoning.

Video: How DeepSeek R1 and OpenAI o1 compare generally on logic problems.

Testing citations and reasoning

Following the release of DeepSeek R1 in January 2025, we wanted to examine its accuracy in generating citations and its quality of reasoning and compare it with OpenAI’s o1 model. We created a paragraph that had sentences from different sources, gave the models individual sentences from this paragraph, and asked for citations and reasoning.

To start our test, we developed a small test bed of about 4,100 research articles around four key topics that are related to human brains and computer science: neurons and cognition, human-computer interaction, databases and artificial intelligence. We evaluated the models using two measures: F-1 score, which measures how accurate the provided citation is, and hallucination rate, which measures how sound the model’s reasoning is – that is, how often it produces an inaccurate or misleading response.
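To make these two measures concrete, here is a minimal Python sketch of how a citation F-1 score and a simple hallucination rate could be computed. The benchmark’s exact definitions may differ, and the example data are invented.

```python
def citation_f1(predicted: set[str], gold: set[str]) -> float:
    """F-1 score of a model's predicted citations against the correct set."""
    if not predicted or not gold:
        return 0.0
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: the model cites three papers for a sentence,
# two of which appear in the gold set of correct sources.
print(citation_f1({"paperA", "paperB", "paperD"},
                  {"paperA", "paperB", "paperC"}))  # ~0.67

# A simple hallucination rate: the fraction of responses that a human
# or automated judge flagged as inaccurate or misleading.
judgments = [False, True, False, False, True]  # True = hallucinated
print(sum(judgments) / len(judgments))  # 0.4
```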

Our testing revealed significant performance differences between OpenAI o1 and DeepSeek R1 across different scientific domains. OpenAI’s o1 did well connecting information between different subjects, such as understanding how research on neurons and cognition connects to human-computer interaction and then to concepts in artificial intelligence, while remaining accurate. Its performance metrics consistently outpaced DeepSeek R1’s across all evaluation categories, especially in reducing hallucinations and successfully completing assigned tasks.

OpenAI o1 was better at combining ideas semantically, whereas R1 focused on making sure it generated a response for every attribution task, which in turn increased hallucination during reasoning. OpenAI o1 had a hallucination rate of approximately 35% compared with DeepSeek R1’s rate of nearly 85% in the attribution-based reasoning task.

In terms of accuracy and linguistic competence, OpenAI o1 scored about 0.65 on the F-1 test, which means it was right about 65% of the time when answering questions. It also scored about 0.70 on the BLEU test, which measures how well a language model writes in natural language. These are pretty good scores.

DeepSeek R1 scored lower, with about 0.35 on the F-1 test, meaning it was right about 35% of the time. However, its BLEU score was only about 0.2, which means its writing wasn’t as natural-sounding as OpenAI o1’s. This shows that o1 was better at presenting the information in clear, natural language.
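BLEU works by comparing the overlap of short word sequences, or n-grams, between a generated sentence and a reference. Here is a minimal sketch using NLTK’s sentence_bleu; the sample sentences are invented, not drawn from the benchmark.

```python
# Requires: pip install nltk
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = "this paper introduces a benchmark for citation reasoning".split()
candidate = "this paper presents a benchmark for citation reasoning".split()

# sentence_bleu takes a list of tokenized reference sentences and one
# tokenized candidate; smoothing avoids zero scores on short sentences.
score = sentence_bleu(
    [reference],
    candidate,
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU: {score:.2f}")
```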

OpenAI holds the advantage

On other benchmarks, DeepSeek R1 performs on par with OpenAI o1 on math, coding and scientific reasoning tasks. But the substantial difference on our benchmark suggests that o1 provides more reliable information, while R1 struggles with factual consistency.

Though we included other models in our comprehensive testing, the performance gap between o1 and R1 specifically highlights the current competitive landscape in AI development, with OpenAI’s offering maintaining a significant advantage in reasoning and knowledge integration capabilities.

These results suggest that OpenAI still has a leg up when it comes to source attribution and reasoning, possibly due to the nature and volume of the data it was trained on. The company recently announced its deep research tool, which can create reports with citations, ask follow-up questions and provide reasoning for the generated response.

The jury is still out on the tool’s value for researchers, but the caveat remains for everyone: Double-check all citations an AI gives you.

Manas Gaur, Assistant Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Are twins allergic to the same things?


theconversation.com – Breanne Hayes Haney, Allergy and Immunology Fellow-in-Training, School of Medicine, West Virginia University – 2025-04-14 07:42:00

If one has a reaction to a new food, is the other more likely to as well?
BjelicaS/iStock via Getty Images Plus

Breanne Hayes Haney, West Virginia University

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


Are twins allergic to the same things? – Ella, age 7, Philadelphia


Allergies, whether spring sneezes due to pollen or trouble breathing triggered by a certain food, are caused by a combination of someone’s genes and the environment they live in.

The more things two people share, the higher their chances of being allergic to the same things. Twins are more likely to share allergies because of everything they have in common, but the story doesn’t end there.

I’m an allergist and immunologist, and part of my job is treating patients who have environmental, food or drug allergies. Allergies are really complex, and a lot of factors play a role in who gets them and who doesn’t.

What is an allergy?

Your immune system makes defense proteins called antibodies. Their job is to keep watch and attack any invading germs or other dangerous substances that get inside your body before they can make you sick.

An allergy happens when your body mistakes some usually harmless substance for a harmful intruder. These trigger molecules are called allergens.

Y-shaped antibodies are meant to grab onto any harmful germs, but sometimes they make a mistake and grab something that isn’t actually a threat: an allergen.
ttsz/iStock via Getty Images Plus

The antibodies stick like suction cups to the allergens, setting off an immune system reaction. That process leads to common allergy symptoms: sneezing; a runny or stuffy nose; itchy, watery eyes; a cough. These symptoms can be annoying but minor.

Allergies can also cause a life-threatening reaction called anaphylaxis that requires immediate medical attention. For example, if someone ate a food they were allergic to, and then had throat swelling and a rash, that would be considered anaphylaxis.

The traditional treatment for anaphylaxis is a shot of the hormone epinephrine into the leg muscle. Allergy sufferers can also carry an auto-injector to give themselves an emergency shot if a life-threatening reaction strikes. An epinephrine nasal spray, which also works very quickly, is now available too.

A person can be allergic to things outdoors, like grass or tree pollen and bee stings, or indoors, like pets and tiny bugs called dust mites that hang out in carpets and mattresses.

A person can also be allergic to foods. Food allergies affect 4% to 5% of the population. The most common are to cow’s milk, eggs, wheat, soy, peanuts, tree nuts, fish, shellfish and sesame. Sometimes people grow out of allergies, and sometimes they are lifelong.

Who gets allergies?

Each antibody has a specific target, which is why some people may only be allergic to one thing.

The antibodies responsible for allergies also take care of cleaning up any parasites that your body encounters. Thanks to modern medicine, people in the United States rarely deal with parasites. Those antibodies are still ready to fight, though, and sometimes they misfire at silly things, like pollen or food.

Hygiene and the environment around you can also play a role in how likely it is you’ll develop allergies. Basically, the more different kinds of bacteria that you’re exposed to earlier in life, the less likely you are to develop allergies. Studies have even shown that kids who grow up on farms, kids who have pets before the age of 5, and kids who have a lot of siblings are less likely to develop allergies. Being breastfed as a baby can also protect against having allergies.

Children who grow up in cities are more likely to develop allergies, probably due to air pollution, as are children who are around people who smoke.

Kids are less likely to develop food allergies if they try foods early in life rather than waiting until they are older. Sometimes a certain job can contribute to an adult developing environmental allergies. For example, hairdressers, bakers and car mechanics can develop allergies due to chemicals they work with.

Genetics can also play a huge role in why some people develop allergies. If a mom or dad has environmental or food allergies, their child is more likely to have allergies. Specifically for peanut allergies, if your parent or sibling is allergic to peanuts, you are seven times more likely to be allergic to peanuts!

Do you have an allergy twin in your family?
Ronnie Kaufman/DigitalVision via Getty Images Plus

Identical in allergies?

Back to the idea of twins: Yes, they can be allergic to the same things, but not always.

Researchers in Australia found that in 60% to 70% of twin pairs in one study, both twins had environmental allergies, and identical twins were more likely to share allergies than fraternal (nonidentical) twins. Identical twins share 100% of their genes, while fraternal twins share only about 50% of their genes, the same as any pair of siblings.

A lot more research has been done on the genetics of food allergies. One peanut allergy study found that identical twins were more likely to both be allergic to peanuts than fraternal twins were.

So, twins can be allergic to the same things, and it’s more likely that they will be, based on their shared genetics and growing up together. But twins aren’t automatically allergic to the exact same things.

Imagine if two twins were separated at birth and raised in different homes: one on a farm with pets and one in the inner city. What if one’s parents smoke and the other’s don’t? What if one lives with a lot of siblings and the other is an only child? They certainly could develop different allergies, or maybe not develop allergies at all.

Scientists like me are continuing to research allergies, and we hope to have more answers in the future.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Breanne Hayes Haney, Allergy and Immunology Fellow-in-Training, School of Medicine, West Virginia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
