How government and industry can team up to make the technology safer without hindering innovation

theconversation.com – Paulo Carvão, Senior Fellow, Mossavar-Rahmani Center for Business and Government, Harvard Kennedy School – 2025-03-07 07:17:00

One of President Donald Trump’s first executive orders in his second term called for developing an AI action plan.
Photo by Anna Moneymaker/Getty Images

Paulo Carvão, Harvard Kennedy School

Imagine a not-too-distant future where you let an intelligent robot manage your finances. It knows everything about you. It follows your moves, analyzes markets, adapts to your goals and invests faster and smarter than you can. Your investments soar. But then one day, you wake up to a nightmare: Your savings have been transferred to a rogue state, and they’re gone.

You seek remedies and justice but find none. Who’s to blame? The robot’s developer? The artificial intelligence company behind the robot’s “brain”? The bank that approved the transactions? Lawsuits fly, fingers point, and your lawyer searches for precedents, but finds none. Meanwhile, you’ve lost everything.

This is not the doomsday scenario of human extinction that some people in the AI field have warned could arise from the technology. It is a more realistic one and, in some cases, already present. AI systems are already making life-altering decisions for many people, in areas ranging from education to hiring and law enforcement. Health insurance companies have used AI tools to determine whether to cover patients’ medical procedures. People have been arrested based on faulty matches by facial recognition algorithms.

By bringing government and industry together to develop policy solutions, it is possible to reduce these risks and future ones. I am a former IBM executive with decades of experience in digital transformation and AI. I now focus on tech policy as a senior fellow at Harvard Kennedy School’s Mossavar-Rahmani Center for Business and Government. I also advise tech startups and invest in venture capital.

Drawing from this experience, my team spent a year researching a way forward for AI governance. We conducted interviews with 49 tech industry leaders and members of Congress, and analyzed 150 AI-related bills introduced in the last session of Congress. We used this data to develop a model for AI governance that fosters innovation while also offering protections against harms, like a rogue AI draining your life savings.

Striking a balance

The increasing use of AI in all aspects of people’s lives raises a new set of questions to which history has few answers. At the same time, the urgency to address how it should be governed is growing. Policymakers appear to be paralyzed, debating whether to let innovation flourish without controls or risk slowing progress. However, I believe that the binary choice between regulation and innovation is a false one.

Instead, it’s possible to chart a different approach that can help guide innovation in a direction that adheres to existing laws and societal norms without stifling creativity, competition and entrepreneurship.

Video: Bloomberg Intelligence analyst Tamlin Bason explains the regulatory landscape and the need for a balanced approach to AI governance.

The U.S. has consistently demonstrated its ability to drive economic growth. The American tech innovation system is rooted in entrepreneurial spirit, public and private investment, an open market and legal protections for intellectual property and trade secrets. From the early days of the Industrial Revolution to the rise of the internet and modern digital technologies, the U.S. has maintained its leadership by balancing economic incentives with strategic policy interventions.

In January 2025, President Donald Trump issued an executive order calling for the development of an AI action plan for America. My team and I have developed an AI governance model that can underpin an action plan.

A new governance model

Previous presidential administrations have waded into AI governance, including the Biden administration’s since-rescinded executive order. A growing number of AI regulations have also been passed at the state level. But the U.S. has mostly avoided imposing regulations on AI. This hands-off approach stems in part from a disconnect between Congress and industry, with each doubting the other’s understanding of the technologies requiring governance.

The industry is divided into distinct camps, with smaller companies allowing tech giants to lead governance discussions. Other contributing factors include ideological resistance to regulation, geopolitical concerns and the insufficient coalition-building that has marked past technology policymaking efforts. Yet our study showed that both parties in Congress favor a uniquely American approach to governance.

Congress agrees on extending American leadership, addressing AI’s infrastructure needs and focusing on specific uses of the technology – instead of trying to regulate the technology itself. How to do it? My team’s findings led us to develop the Dynamic Governance Model, a policy-agnostic and nonregulatory method that can be applied to different industries and uses of the technology. It starts with a legislative or executive body setting a policy goal and consists of three subsequent steps:

  1. Establish a public-private partnership in which public and private sector experts work together to identify standards for evaluating the policy goal. This approach combines industry leaders’ technical expertise and innovation focus with policymakers’ agenda of protecting the public interest through oversight and accountability. By integrating these complementary roles, governance can evolve together with technological developments.

  2. Create an ecosystem for audit and compliance mechanisms. This market-based approach builds on the standards from the previous step and executes technical audits and compliance reviews. Setting voluntary standards and measuring against them is good, but it can fall short without real oversight. Private sector auditing firms can provide oversight so long as those auditors meet fixed ethical and professional standards.

  3. Set up accountability and liability for AI systems. This step outlines the responsibilities that a company must bear if its products harm people or fail to meet standards. Effective enforcement requires coordinated efforts across institutions. Congress can establish legislative foundations, including liability criteria and sector-specific regulations. It can also create mechanisms for ongoing oversight or rely on existing government agencies for enforcement. Courts will interpret statutes and resolve conflicts, setting precedents. Judicial rulings will clarify ambiguous areas and contribute to a sturdier framework.

Benefits of balance

I believe that this approach offers a balanced path forward, fostering public trust while allowing innovation to thrive. In contrast to conventional regulatory methods that impose blanket restrictions on industry, like the one adopted by the European Union, our model:

  • is incremental, integrating learning at each step.
  • draws on the existing approaches used in the U.S. for driving public policy, such as competition law, existing regulations and civil litigation.
  • can contribute to the development of new laws without imposing excessive burdens on companies.
  • draws on past voluntary commitments and industry standards, and encourages trust between the public and private sectors.

The U.S. has long led the world in technological growth and innovation. Pursuing a public-private partnership approach to AI governance should enable policymakers and industry leaders to advance their goals while balancing innovation with transparency and responsibility. We believe that our governance model is aligned with the Trump administration’s goal of removing barriers for industry but also supports the public’s desire for guardrails.

Paulo Carvão, Senior Fellow, Mossavar-Rahmani Center for Business and Government, Harvard Kennedy School

This article is republished from The Conversation under a Creative Commons license. Read the original article.

As views on spanking shift worldwide, most US adults support it, and 19 states allow physical punishment in schools

theconversation.com – Christina Erickson, Associate Dean in the College of Nursing and Professional Disciplines, University of North Dakota – 2025-04-18 07:39:00

Spanking in the U.S. generally ends around age 12, when children become big enough to resist or fight back.
Sandro Di Carlo Darsa/Brand X Pictures via Getty Images

Christina Erickson, University of North Dakota

Nearly a half-century after the Supreme Court ruled that school spankings are permissible and not “cruel and unusual punishment”, many U.S. states allow physical punishment for students who have misbehaved.

Today, over a third of the states allow teachers to paddle or spank students. More than 100,000 students are paddled in U.S. schools each year.

Christina Erickson, an associate dean and professor of social work at the University of North Dakota, wrote a book on the subject: “Spanked: How Hitting Our Children is Harming Ourselves.” She discussed the scope of the practice and its effects with The Conversation.

What spanking legislation exists worldwide?

Around the world, 68 countries have banned the hitting of children in any form, including spanking. This movement began in 1979, when Sweden banned all forms of physical punishment, including spanking, in every setting, including the family home.

The pace of change quickened in the early 2000s when more countries adopted similar laws. For example, the legal language of countries like Nepal rests on an emerging definition of children as rights holders similar to adults and as humans worth protecting from harm.

Spanking in schools is legal in 19 states.
Maskot/Getty Images

What are US policies toward spanking?

Each state in the U.S. has its own child abuse laws, and all states, tribes and territories aim to protect children from abuse. But all state laws also allow parents to hit their children if it does not leave an injury or a mark.

A typical example is Oklahoma’s definition of child abuse and neglect. It includes an exception that permits parents to use ordinary force as a means of discipline, including spanking, using an implement like a switch or a paddle. However, leaving evidence of hitting, such as welts, bruises, swelling or lacerations, is illegal and considered child abuse in all states.

Parental spanking of children is considered distinct from other physical violence because of the relational context and the purpose. Laws entitle parents to hit their children for the purpose of teaching a lesson or punishing them to improve behavior. Children are the only individuals in society who can be hit by another person without the law regarding it as assault.

Spanking’s impact on a child is unfortunately similar to that of abusive hitting. Spanking has been labeled as an “Adverse Childhood Experience,” or ACE. These are events that cause poor health outcomes over the span of one’s life.

The practice of spanking also affects parents. Accepting spanking as a form of physical discipline puts parents at risk of escalating physical punishment to the point of abuse.

Parents who spank their child risk crossing into abuse and becoming entangled in a legal and child protection system that aims to protect children from harm. It is unclear what triggers a parent to cross over from discipline into abuse. Research shows that spanking a child at a young age, such as age 1, increases the chance of involvement by Child Protective Services by 33%.

Some school districts require permission from parents to allow disciplinary paddling in school, while others do not require any communication. State law does not ensure agreement between parents and school districts on what offenses warrant a paddling. Parents may feel they have no alternative but to keep their child in school, or fear reprisal from school administrators. Some students are old enough to denounce the punishment themselves.

Video: In this school district, physical punishment is used only when parents give written permission.

Is spanking considered the same as hitting?

The term spank conceals the concept of hitting and is so commonplace it goes unquestioned, despite the fact that it describes a grown adult hitting a person much smaller than themselves. The concept is further concealed because hitting a child’s bottom hides any injuries that may occur.

Types of hitting that are categorized as spanking have narrowed over the years but still persist. Some parents still use implements such as tree switches, wooden spoons, shoes or paddles to “spank” children, raising the chances for abuse.

Most spanking ends by the age of 12, partly because children this age are able to fight back. When a child turns 18, parental hitting becomes the same as hitting any other adult, a form of domestic violence or assault throughout the U.S.

There is no consistent understanding of what constitutes a spanking. The definition of spanking is unique to each family. The number of hits, whether the child is clothed and whether an implement is used all reflect geographic or familial differences in what counts as a spanking.

How do US adults view spanking?

People in the United States generally accept spanking as part of raising children: 56% of U.S. adults strongly agree or agree that “… it is sometimes necessary to discipline a child with a good, hard spanking.” This view has been slowly changing since 1986, when 83% of adults agreed with that statement.

The laws worldwide that protect children from being hit usually begin by prohibiting nonparental adults from hitting children. This is happening in the U.S. too, where 31 states have banned paddling in schools.

At a national level, efforts have been made to end physical punishment in schools. However, 19 states still allow spanking of children in public schools, a practice upheld by a 1977 Supreme Court ruling.

With the slow but steady drop in the share of parents who believe that children sometimes need a good, hard spanking, as well as the ban on paddling in schools in 31 states, one could argue that the U.S. is moving toward a reduction in spanking.

What does research say about spanking?

Spanking’s negative influence on children’s behavior has been documented for decades. Spanking seems to work in the moment when it comes to changing or stopping the immediate behavior, but the negative effects are hidden in the short term and occur later in the child’s life. Yet because the spanking seemed to work at the time, the parent doesn’t connect the continued bad behavior of the child to the spanking.

An abundance of research shows that spanking causes increased negative behaviors in childhood. Spanking lowers executive functioning for children, increases dating violence as teenagers and even increases struggles with mental health and substance abuse in adulthood. Spanking does not teach new or healthy behaviors, and is a stress-inducing event for the child and the adult hitting them.

No studies have shown positive long-term benefits from spanking. Because of the long-standing and expansive research findings showing a range of harm from spanking and the increased association with child abuse, the American Psychological Association recommends that parents should never spank their children.

What are some resources for parents?

Consider these questions when choosing a discipline method for your child:

  • Is your expectation of your child developmentally appropriate? One of the most common reasons parents spank is that they are expecting a behavior the child is not developmentally able to carry out.

  • Can the discipline you choose grow with your child? Nearly all spanking ends by age 12, when kids are big enough to fight back. Choose discipline methods you can use over the long term, such as additional chores, apologies, difficult conversations and others that can grow with your child.

  • Might there be another explanation for your child’s behavior, such as difficulty understanding, fear or miscommunication? Think of your child as a learner and use a growth mindset to help your child learn from their life experiences.

Parents are the leaders of their families. Good leaders show strength in nonthreatening ways, listen to others and explain their decisions. Don’t spoil your kids. But being firm does not have to include hitting.

Is spanking children good for parents?

Doubtful. Parents who hit their kids may be unaware that it influences their frustration in other relationships. Expressing aggression recharges an angry and short-tempered internal battery that spills over into other parts of an adult’s life.

Practicing calm when with your children will help you be calmer at work and in your other relationships. Listening to and speaking with a child about challenges, even from a very early age, is the best way to make it part of your relationship for the rest of your life.

Choose a method that allows you to grow. Parents matter too.

Christina Erickson, Associate Dean in the College of Nursing and Professional Disciplines, University of North Dakota

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How does your brain create new memories? Neuroscientists discover ‘rules’ for how neurons encode new information

theconversation.com – William Wright, Postdoctoral Scholar in Neurobiology, University of California, San Diego – 2025-04-17 13:00:00

Neurons that fire together sometimes wire together.
PASIEKA/Science Photo Library via Getty Images

William Wright, University of California, San Diego and Takaki Komiyama, University of California, San Diego

Every day, people are constantly learning and forming new memories. When you pick up a new hobby, try a recipe a friend recommended or read the latest world news, your brain stores many of these memories for years or decades.

But how does your brain achieve this incredible feat?

In our newly published research in the journal Science, we have identified some of the “rules” the brain uses to learn.

Learning in the brain

The human brain is made up of billions of nerve cells. These neurons conduct electrical pulses that carry information, much like how computers use binary code to carry data.

These electrical pulses are communicated to other neurons through connections between them called synapses. Individual neurons have branching extensions known as dendrites that can receive thousands of electrical inputs from other cells. Dendrites transmit these inputs to the main body of the neuron, which then integrates all of these signals to generate its own electrical pulses.

It is the collective activity of these electrical pulses across specific groups of neurons that form the representations of different information and experiences within the brain.

Neurons are the basic units of the brain.
OpenStax, CC BY-SA

For decades, neuroscientists have thought that the brain learns by changing how neurons are connected to one another. As new information and experiences alter how neurons communicate with each other and change their collective activity patterns, some synaptic connections are made stronger while others are made weaker. This process of synaptic plasticity is what produces representations of new information and experiences within your brain.

In order for your brain to produce the correct representations during learning, however, the right synaptic connections must undergo the right changes at the right time. The “rules” that your brain uses to select which synapses to change during learning – what neuroscientists call the credit assignment problem – have remained largely unclear.

Defining the rules

We decided to monitor the activity of individual synaptic connections within the brain during learning to see whether we could identify activity patterns that determine which connections would get stronger or weaker.

To do this, we genetically encoded biosensors in the neurons of mice that would light up in response to synaptic and neural activity. We monitored this activity in real time as the mice learned a task that involved pressing a lever to a certain position after a sound cue in order to receive water.

We were surprised to find that the synapses on a neuron don’t all follow the same rule. For example, scientists have often thought that neurons follow what are called Hebbian rules, where neurons that consistently fire together, wire together. Instead, we saw that synapses on different locations of dendrites of the same neuron followed different rules to determine whether connections got stronger or weaker. Some synapses adhered to the traditional Hebbian rule where neurons that consistently fire together strengthen their connections. Other synapses did something different and completely independent of the neuron’s activity.
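
To make the classic Hebbian idea concrete, here is a toy numerical sketch of a “fire together, wire together” update. It is purely illustrative and is not the model or analysis used in the study; the learning rate, firing threshold and activity values are arbitrary assumptions.

```python
import numpy as np

# Toy Hebbian update: synapses whose presynaptic input coincides with
# postsynaptic firing get stronger. Illustrative only; the values below
# are arbitrary and do not come from the study.
rng = np.random.default_rng(0)
learning_rate = 0.01
weights = rng.normal(0.0, 0.1, size=5)         # 5 synapses onto one model neuron

for _ in range(100):
    pre = (rng.random(5) > 0.5).astype(float)  # which inputs fire this step (0 or 1)
    post = float(weights @ pre > 0.2)          # neuron fires if total drive crosses a threshold
    weights += learning_rate * pre * post      # Hebbian rule: co-active synapses strengthen

print(weights.round(3))
```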

Our findings suggest that neurons, by simultaneously using two different sets of rules for learning across different groups of synapses, rather than a single uniform rule, can more precisely tune the different types of inputs they receive to appropriately represent new information in the brain.

In other words, by following different rules in the process of learning, neurons can multitask and perform multiple functions in parallel.

Future applications

This discovery provides a clearer understanding of how the connections between neurons change during learning. Given that most brain disorders, including degenerative and psychiatric conditions, involve some form of malfunctioning synapses, this has potentially important implications for human health and society.

For example, depression may develop from an excessive weakening of the synaptic connections within certain areas of the brain that make it harder to experience pleasure. By understanding how synaptic plasticity normally operates, scientists may be able to better understand what goes wrong in depression and then develop therapies to more effectively treat it.

Changes to connections in the amygdala – colored green – are implicated in depression.
William J. Giardino/Luis de Lecea Lab/Stanford University via NIH/Flickr, CC BY-NC

These findings may also have implications for artificial intelligence. The artificial neural networks underlying AI have largely been inspired by how the brain works. However, the learning rules researchers use to update the connections within the networks and train the models are usually uniform and also not biologically plausible. Our research may provide insights into how to develop more biologically realistic AI models that are more efficient, have better performance, or both.
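
For contrast, here is a minimal sketch of the kind of uniform update rule that artificial neural networks typically use: plain gradient descent applied identically to every connection. It illustrates the general idea only, not any specific training code, and the numbers are made up.

```python
import numpy as np

def sgd_step(weights: np.ndarray, gradient: np.ndarray, lr: float = 0.01) -> np.ndarray:
    # Every connection follows the same rule: w <- w - lr * dLoss/dw
    return weights - lr * gradient

weights = np.array([0.5, -0.2, 0.1])
gradient = np.array([0.3, -0.1, 0.05])  # gradients of some loss with respect to each weight
print(sgd_step(weights, gradient))      # approximately [0.497, -0.199, 0.0995]
```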

There is still a long way to go before we can use this information to develop new therapies for human brain disorders. While we found that synaptic connections on different groups of dendrites use different learning rules, we don’t know exactly why or how. In addition, while the ability of neurons to simultaneously use multiple learning methods increases their capacity to encode information, what other properties this may give them isn’t yet clear.

Future research will hopefully answer these questions and further our understanding of how the brain learns.

William Wright, Postdoctoral Scholar in Neurobiology, University of California, San Diego and Takaki Komiyama, Professor of Neurobiology, University of California, San Diego

This article is republished from The Conversation under a Creative Commons license. Read the original article.

OpenAI beats DeepSeek on sentence-level reasoning

theconversation.com – Manas Gaur, Assistant Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County – 2025-04-17 07:42:00

DeepSeek’s language AI rocked the tech industry, but it comes up short on one measure.
Lionel Bonaventure/AFP via Getty Images

Manas Gaur, University of Maryland, Baltimore County

ChatGPT and other AI chatbots based on large language models are known to occasionally make things up, including scientific and legal citations. It turns out that measuring how accurate an AI model’s citations are is a good way of assessing the model’s reasoning abilities.

An AI model “reasons” by breaking down a query into steps and working through them in order. Think of how you learned to solve math word problems in school.

Ideally, to generate citations an AI model would understand the key concepts in a document, generate a ranked list of relevant papers to cite, and provide convincing reasoning for how each suggested paper supports the corresponding text. It would highlight specific connections between the text and the cited research, clarifying why each source matters.

The question is, can today’s models be trusted to make these connections and provide clear reasoning that justifies their source choices? The answer goes beyond citation accuracy to address how useful and accurate large language models are for any information retrieval purpose.

I’m a computer scientist. My colleagues – researchers from the AI Institute at the University of South Carolina, Ohio State University and the University of Maryland, Baltimore County – and I have developed the Reasons benchmark to test how well large language models can automatically generate research citations and provide understandable reasoning.

We used the benchmark to compare the performance of two popular AI reasoning models, DeepSeek’s R1 and OpenAI’s o1. Though DeepSeek made headlines with its stunning efficiency and cost-effectiveness, the Chinese upstart has a way to go to match OpenAI’s reasoning performance.

Sentence specific

The accuracy of citations has a lot to do with whether the AI model is reasoning about information at the sentence level rather than paragraph or document level. Paragraph-level and document-level citations can be thought of as throwing a large chunk of information into a large language model and asking it to provide many citations.

In this process, the large language model overgeneralizes and misinterprets individual sentences. The user ends up with citations that explain the whole paragraph or document, not the relatively fine-grained information in the sentence.

Further, reasoning suffers when you ask the large language model to read through an entire document. These models mostly rely on memorized patterns, which they are typically better at finding at the beginning and end of longer texts than in the middle. This makes it difficult for them to fully understand all the important information throughout a long document.

Large language models get confused because paragraphs and documents hold a lot of information, which affects citation generation and the reasoning process. Consequently, reasoning from large language models over paragraphs and documents becomes more like summarizing or paraphrasing.

The Reasons benchmark addresses this weakness by examining large language models’ citation generation and reasoning.

Video: How DeepSeek R1 and OpenAI o1 compare generally on logic problems.

Testing citations and reasoning

Following the release of DeepSeek R1 in January 2025, we wanted to examine its accuracy in generating citations and its quality of reasoning and compare it with OpenAI’s o1 model. We created a paragraph that had sentences from different sources, gave the models individual sentences from this paragraph, and asked for citations and reasoning.

To start our test, we developed a small test bed of about 4,100 research articles around four key topics related to human brains and computer science: neurons and cognition, human-computer interaction, databases and artificial intelligence. We evaluated the models using two measures: F-1 score, which measures how accurate the provided citation is, and hallucination rate, which measures how sound the model’s reasoning is – that is, how often it produces an inaccurate or misleading response.
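
As a rough illustration of the first measure, the snippet below shows how an F-1 score over citations could be computed with the standard precision and recall formulas. It is a sketch under our own simplifying assumptions (exact matching of citation identifiers for a single sentence) and is not the benchmark’s actual scoring code.

```python
# Minimal sketch of a citation F-1 score using standard precision/recall.
# Helper names, identifiers and the exact-match assumption are illustrative only.

def citation_f1(predicted: set[str], gold: set[str]) -> float:
    """F-1 over exact-match citation identifiers for one sentence."""
    if not predicted or not gold:
        return 0.0
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: the model cites three papers, two of which match the gold references.
predicted = {"smith2021", "lee2019", "zhou2020"}
gold = {"smith2021", "lee2019"}
print(round(citation_f1(predicted, gold), 2))  # 0.8
```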

Our testing revealed significant performance differences between OpenAI o1 and DeepSeek R1 across different scientific domains. OpenAI’s o1 did well connecting information between different subjects, such as understanding how research on neurons and cognition connects to human-computer interaction and then to concepts in artificial intelligence, while remaining accurate. Its performance metrics consistently outpaced DeepSeek R1’s across all evaluation categories, especially in reducing hallucinations and successfully completing assigned tasks.

OpenAI o1 was better at combining ideas semantically, whereas R1 focused on making sure it generated a response for every attribution task, which in turn increased hallucination during reasoning. OpenAI o1 had a hallucination rate of approximately 35% compared with DeepSeek R1’s rate of nearly 85% in the attribution-based reasoning task.

In terms of accuracy and linguistic competence, OpenAI o1 scored about 0.65 on the F-1 test, which means it was right about 65% of the time when answering questions. It also scored about 0.70 on the BLEU test, which measures how well a language model writes in natural language. These are pretty good scores.

DeepSeek R1 scored lower, with about 0.35 on the F-1 test, meaning it was right about 35% of the time. However, its BLEU score was only about 0.2, which means its writing wasn’t as natural-sounding as OpenAI’s o1. This shows that o1 was better at presenting that information in clear, natural language.

OpenAI holds the advantage

On other benchmarks, DeepSeek R1 performs on par with OpenAI o1 on math, coding and scientific reasoning tasks. But the substantial difference on our benchmark suggests that o1 provides more reliable information, while R1 struggles with factual consistency.

Though we included other models in our comprehensive testing, the performance gap between o1 and R1 specifically highlights the current competitive landscape in AI development, with OpenAI’s offering maintaining a significant advantage in reasoning and knowledge integration capabilities.

These results suggest that OpenAI still has a leg up when it comes to source attribution and reasoning, possibly due to the nature and volume of the data it was trained on. The company recently announced its deep research tool, which can create reports with citations, ask follow-up questions and provide reasoning for the generated response.

The jury is still out on the tool’s value for researchers, but the caveat remains for everyone: Double-check all citations an AI gives you.

Manas Gaur, Assistant Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.
