
The Conversation

From diagnosing brain disorders to cognitive enhancement, 100 years of EEG have transformed neuroscience

Published on theconversation.com – Erika Nyhus, Associate Professor of Psychology and Neuroscience, Bowdoin College – 2024-07-02 07:28:40

The electroencephalogram allowed scientists to record and read brain activity.
Kateryna Kon/Science Photo Library via Getty Images

Erika Nyhus, Bowdoin College

Electroencephalography, or EEG, was invented 100 years ago. In the century since, this device for monitoring the brain’s electrical activity has had an incredible impact on how scientists study the human brain.

Since its first use, the EEG has shaped researchers’ understanding of cognition, from perception to memory. It has also been important for diagnosing and guiding treatment of multiple brain disorders, including epilepsy.

I am a cognitive neuroscientist who uses EEG to study how people remember events from their past. The EEG’s 100-year anniversary is an opportunity to reflect on this discovery’s significance in neuroscience and medicine.

Discovery of EEG

On July 6, 1924, psychiatrist Hans Berger performed the first EEG recording on a human, a 17-year-old boy undergoing neurosurgery. At the time, Berger and other researchers were performing electrical recordings on the brains of animals.

What set Berger apart was his obsession with finding the physical basis of what he called psychic energy, or mental effort, in people. Through a series of experiments spanning his early career, Berger measured brain volume and temperature to study changes in mental processes such as intellectual work, attention and desire.

He then turned to recording electrical activity. Though he recorded the first traces of EEG in the human brain in 1924, he did not publish the results until 1929. Those five intervening years were a tortuous phase of self-doubt about the source of the EEG signal in the brain and refining the experimental setup. Berger recorded hundreds of EEGs on multiple subjects, including his own children, with both experimental successes and setbacks.

This is among the first EEG readings published in Hans Berger's study. The top trace is the EEG, while the bottom is a 10 Hz reference trace.
Hans Berger/Über das Elektrenkephalogramm des Menschen. Archiv für Psychiatrie. 1929; 87:527-70 via Wikimedia Commons

Finally convinced of his results, he published a series of papers in the journal Archiv für Psychiatrie and had hopes of winning a Nobel Prize. Unfortunately, the research community doubted his results, and years passed before anyone else started using EEG in their own research.

Berger was eventually nominated for a Nobel Prize in 1940. But Nobels were not awarded that year in any category due to World War II and Germany’s occupation of Norway.

Neural oscillations

When many neurons are active at the same time, they produce an electrical signal strong enough to spread instantaneously through the conductive tissue of the brain, skull and scalp. EEG electrodes placed on the head can record these electrical signals.

Since the discovery of EEG, researchers have shown that neural activity oscillates at specific frequencies. In his initial EEG recordings in 1924, Berger noted the predominance of oscillatory activity that cycled eight to 12 times per second, or 8 to 12 hertz, which he named alpha oscillations. Since then, there have been many attempts to understand how and why neurons oscillate.
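To make concrete what a frequency band means, here is a minimal sketch, not from the article, showing how the dominant frequency of a synthetic alpha-like signal can be recovered with a Fourier transform; the sampling rate and noise level are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch, not from the article: recovering the dominant
# frequency of a synthetic 10 Hz "alpha-like" signal with a Fourier
# transform, the same principle used to separate EEG frequency bands.
rng = np.random.default_rng(0)
fs = 250                             # assumed sampling rate in Hz, typical for EEG
t = np.arange(0, 2, 1 / fs)          # two seconds of samples
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)      # frequency axis in Hz
dominant = freqs[np.argmax(spectrum)]        # peak falls in the 8-12 Hz alpha band
```

Real EEG analysis works the same way in principle: transform the recorded voltage into the frequency domain and measure power within bands such as theta (4 to 8 hertz) or alpha (8 to 12 hertz).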

Neural oscillations are thought to be important for effective communication between specialized brain regions. For example, theta oscillations that cycle at 4 to 8 hertz are important for communication between brain regions involved in memory encoding and retrieval in animals and humans.

Different frequencies of neural oscillations indicate different types of brain activity.
iStock via Getty Images Plus

Researchers then examined whether they could alter neural oscillations and therefore affect how neurons talk to each other. Studies have shown that many behavioral and noninvasive methods can alter neural oscillations and lead to changes in cognitive performance. Engaging in specific mental activities can induce neural oscillations in the frequencies those mental activities use. For example, my team’s research found that mindfulness meditation can increase theta frequency oscillations and improve memory retrieval.

Noninvasive brain stimulation methods can target frequencies of interest. For example, my team’s ongoing research found that brain stimulation at theta frequency can lead to improved memory retrieval.

EEG has also led to major discoveries about how the brain processes information in many other cognitive domains, including how people perceive the world around them, how they focus their attention, how they communicate through language and how they process emotions.

Diagnosing and treating brain disorders

EEG is commonly used today to diagnose sleep disorders and epilepsy and to guide brain disorder treatments.

Scientists are using EEG to see whether memory can be improved with noninvasive brain stimulation. Although the research is still in its infancy, there have been some promising results. For example, one study found that noninvasive brain stimulation at gamma frequency – 25 hertz – improved memory and neurotransmitter transmission in Alzheimer’s disease.

Researchers and clinicians use EEG to diagnose conditions like epilepsy.
BSIP/Collection Mix: Subjects via Getty Images

A new type of noninvasive brain stimulation called temporal interference uses two high frequencies that together drive neural activity at the difference between the stimulation frequencies. The high frequencies can better penetrate the brain and reach the targeted area. Researchers recently tested this method in people, using 2,000 hertz and 2,005 hertz to deliver a 5 hertz theta-frequency signal to a key brain region for memory, the hippocampus. This led to improvements in remembering the name associated with a face.
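The arithmetic behind temporal interference can be checked with a short sketch, illustrative only and not the researchers' code; the sampling rate is an assumption. Summing sinusoids at 2,000 and 2,005 hertz produces an amplitude envelope that beats at the 5 hertz difference.

```python
import numpy as np

# Illustrative sketch, not the researchers' code: two high-frequency
# sinusoids at 2,000 Hz and 2,005 Hz sum to a signal whose amplitude
# envelope beats at the 5 Hz difference frequency.
fs = 100_000                        # assumed sampling rate, Hz
t = np.arange(0, 1, 1 / fs)         # one second of samples
f1, f2 = 2_000.0, 2_005.0
combined = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# By the sum-to-product identity, combined = 2*sin(2*pi*2002.5*t)*cos(pi*5*t):
# a fast 2,002.5 Hz carrier modulated by a slow envelope term.
envelope_term = np.cos(np.pi * (f2 - f1) * t)

# The envelope term crosses zero once per beat, so counting its sign
# changes over one second gives the 5 Hz beat rate.
beats_per_second = np.count_nonzero(np.diff(np.signbit(envelope_term).astype(np.int8)))
```

The idea, as the article describes it, is that tissue responds to this slow envelope rather than the fast carriers, so the 5 hertz modulation is what reaches the targeted region.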

Although these results are promising, more research is needed to understand the exact role neural oscillations play in cognition and whether altering them can lead to long-lasting cognitive enhancement.

The future of EEG

The 100-year anniversary of the EEG provides an opportunity to consider what it has taught us about brain function and what this technique can do in the future.

In a survey commissioned by the journal Nature Human Behaviour, over 500 researchers who use EEG in their work were asked to make predictions on the future of the technique. What will be possible in the next 100 years of EEG?

Some researchers, including myself, predict that we’ll use EEG to diagnose and create targeted treatments for brain disorders. Others anticipate that an affordable, wearable EEG will be widely used to enhance cognitive function at home or will be seamlessly integrated into virtual reality applications. The possibilities are vast.

Erika Nyhus, Associate Professor of Psychology and Neuroscience, Bowdoin College

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Opioid-free surgery treats pain at every physical and emotional level

Published on theconversation.com – Heather Margonari, Lead Coordinator for the Opioid Free Pathway, University of Pittsburgh – 2024-11-25 07:42:00

Opioids have been an essential part of anesthesia, but they aren’t the only way to manage pain.

Hispanolistic/E+ via Getty Images

Heather Margonari, University of Pittsburgh; Jacques E. Chelly, University of Pittsburgh, and Shiv K. Goel, University of Pittsburgh

The opioid crisis remains a significant public health challenge in the United States. In 2022, over 2.5 million American adults had an opioid use disorder, and opioids accounted for nearly 76% of overdose deaths.

Some patients are fearful of using opioids after surgery due to concerns about dependence and potential side effects, even when appropriately prescribed by a doctor to manage pain. Surgery is often the first time patients receive an opioid prescription, and their widespread use raises concerns about patients becoming long-term users. Leftover pills from a patient’s prescriptions may also be misused.

Researchers like us are working to develop a personalized and comprehensive surgical experience that doesn’t use opioids. Our approach to opioid-free surgery addresses both physical and emotional well-being through effective anesthesia and complementary pain-management techniques.

What is opioid-free anesthesia?

Clinicians have used morphine and other opioids to manage pain for thousands of years. These drugs remain integral to anesthesia.

Most surgical procedures use a strategy called balanced anesthesia, which combines drugs that induce sleep and relax muscles with opioids to control pain. However, using opioids in anesthesia can lead to unwanted side effects, such as serious cardiac and respiratory problems, nausea and vomiting, and digestive issues.

Concerns over these adverse effects and the opioid crisis have fueled the development of opioid-free anesthesia. This approach uses non-opioid drugs to relieve pain before, during and after surgery while minimizing the risk of side effects and dependency. Studies have shown that opioid-free anesthesia can provide similar levels of pain relief to traditional methods using opioids.

Opioid-free anesthesia is currently based on a multimodal approach. This means treatments are designed to target various pain receptors beyond opioid receptors in the spinal cord. Multimodal analgesia uses a combination of at least two medications or anesthetic techniques, each relieving pain through distinct mechanisms. The aim is to effectively block or modulate pain signals from the brain, spinal cord and the nerves of the body.


Balanced anesthesia combines a number of different drugs to ensure a smooth surgery.

bymuratdeniz/E+ via Getty Images

For instance, nonsteroidal anti-inflammatory drugs, or NSAIDs, such as ibuprofen work by inhibiting COX enzymes that promote inflammation. Acetaminophen, or Tylenol, similarly inhibits COX enzymes. While both acetaminophen and NSAIDs primarily target pain at the surgical site, they can also exert effects at the spinal level after several days of use.

A class of drugs called gabapentinoids, which include gabapentin and pregabalin, target certain proteins to dampen nerve signal transmission. This decreases neuropathic pain by reducing nerve inflammation.

The anesthetic ketamine disrupts pain pathways that contribute to a condition called central sensitization. This disorder occurs when nerve cells in the spinal cord and brain amplify pain signals even when the original injury or source of pain has healed. As a result, normal sensations such as light touch or mild pressure may be perceived as painful, and painful stimuli may feel more intense than usual. By lessening pain sensitivity, ketamine can help reduce the risk of chronic pain.

Regional anesthesia involves injecting local anesthetics near nerves to block pain signals to the brain. This method allows patients to remain awake but pain-free in the numbed area, reducing the need for general anesthesia and its side effects. Common regional techniques include epidurals, spinal anesthesia and nerve blocks.

By activating different pain pathways simultaneously, multimodal approaches aim to enhance pain relief synergistically.

Psychology of pain perception

Psychological factors can significantly influence a patient’s perception of pain. Research indicates that mental health conditions such as anxiety, depression and sleep disturbances can increase pain levels by up to 50%. This suggests that addressing mood and sleep issues can be essential for pain management and improving overall patient well-being.

Psychological states can intensify the perception of pain by significantly influencing the neural pathways related to pain processing. For example, anxiety and stress activate the body’s fight or flight response, prompting the release of stress hormones that heighten nerve sensitivity. This can make pain feel more intense. Research has also found that higher anxiety levels before surgery are linked to increased anesthesia use during surgery and opioid consumption after surgery.


Addressing pain before an operation can make patients feel better post-op.

ljubaphoto/E+ via Getty Images

Complementary and alternative techniques that address psychological factors can reduce pain and opioid use by modulating pain transmission in the nervous system and activating neurochemical pathways that promote pain relief.

For example, aromatherapy uses essential oils to stimulate the olfactory system. This can help reduce pain perception and enhance overall well-being by evoking emotional responses and promoting relaxation.

Music therapy stimulates the auditory system, which can distract patients from pain, lower anxiety levels and foster emotional healing. This can ultimately lead to reduced pain perception.

Relaxation exercises, such as deep breathing and progressive muscle relaxation, activate the parasympathetic nervous system and help promote a state of rest. Engaging the parasympathetic system helps the body conserve energy, slow your heart rate, lower blood pressure and relieve muscle tension. This can lead to decreased pain sensitivity by promoting a state of calmness.

Acupuncture involves inserting thin needles into specific body points, stimulating the release of endorphins and other neurotransmitters. These molecules can interrupt pain signals and promote healing processes within the body.

Moving toward opioid-free surgery

Transitioning away from opioids in surgery requires a shift in both practice and mindset across the entire health care team. Beyond anesthesiologists, other providers, including surgeons, nurses and medical trainees, also use opioids in patient care. All providers would need to be open to using alternative pain management techniques throughout the surgical process.

In response to the increasing patient demand for opioid-free surgical care, our team at the University of Pittsburgh Medical Center launched the patient-initiated Opioid-Free Surgical Pain Management Program in May 2024. To address both the physical and emotional dimensions of pain while optimizing recovery and safety, we recruited surgeons, anesthesiologists, nurses, pharmacists and hospital administrators to participate in the initiative.

Over the course of six months, our team enrolled 109 patients, 79 of whom successfully underwent surgeries without opioids. Barriers to participating in the program included patients’ perception of severe pain, inadequately addressed stress and anxiety before the operation, and limited education about the program within the department.

However, subsequent refinements to the program – such as giving patients muscle relaxants while they were recovering from anesthesia – improved participation and reduced opioid use. Importantly, none of the 19 patients who received opioids while recovering in the hospital post-op required further opioid prescriptions at discharge.

These results reflect the promise of our pathway to minimize reliance on opioids while ensuring effective pain management. Enhanced psychological support for patients and education for providers in surgery departments can broaden the effectiveness of a comprehensive approach to managing pain.

Heather Margonari, Lead Coordinator for the Opioid Free Pathway, University of Pittsburgh; Jacques E. Chelly, Professor of Anesthesiology, Perioperative Medicine and Orthopedic Surgery, University of Pittsburgh, and Shiv K. Goel, Clinical Associate Professor of Anesthesiology, University of Pittsburgh

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Meat has a distinct taste, texture and aroma − a biochemist explains how plant-based alternatives mimic the real thing

Published on theconversation.com – Julie Pollock, Associate Professor of Chemistry, University of Richmond – 2024-11-25 07:37:00

Lots of restaurants and food manufacturers offer plant-based meat alternatives.

istetiana/Moment via Getty Images

Julie Pollock, University of Richmond

When you bite into a juicy hamburger, slice into the perfect medium-rare steak or gobble down a plateful of chicken nuggets, your senses are most likely responding to the food’s smell, taste, texture and color. For a long time, these four attributes set meat apart from other food groups.

But in recent years, food companies have started to focus on the development of meat alternatives. Many people believe that transitioning away from meat-heavy diets can help with environmental sustainability as well as improve their own health.

The two main focuses of research have been on plant-based meat alternatives and lab-grown meat. Both have interesting challenges. Lab-grown meat requires growing animal cells and generating a meat product. Plant-based meat alternatives use plant materials to recreate animal-like structures and flavors.

Major companies that have developed meat alternatives that consumers seem to enjoy include Impossible, Beyond Meat, Mosa Meat and Quorn.

From a scientific perspective, the development of plant-based meat alternatives is especially intriguing, because food manufacturers and researchers attempt to create products with similar textures, flavors, appearances and nutrient compositions to those juicy hamburgers or tender chicken fingers.

As a biochemist who teaches students about how food fuels our bodies, I find the composition and production of these products, and how they can mimic animal meat, intriguing.

Animal meats are composed primarily of protein, fat and water, with small amounts of carbohydrates, vitamins and minerals. The animal tissue consumed is typically muscle, which has a distinctive shape made from fibers of protein that are bundled together with connective tissue.


Muscles, which animal meat comes from, contain muscle fibers banded together by connective tissue.

OpenStax/Wikimedia Commons, CC BY-SA

The size and shape of the protein fibers influence the texture of the meat. The amount and identity of natural lipids – fats and oils – found within a specific muscle tissue can influence the protein structure, and therefore the flavor, tenderness and juiciness, of the meat. Meat products also have a high water content.

Typically, plant-based meat alternatives are made using nonanimal proteins, as well as chemical compounds that enhance the flavor, fats, coloring agents and binding agents. These products also contain more than 50% water. To produce plant-based meat alternatives, the ingredients are combined to mimic animal muscle tissue, and then supplemented with additives such as flavor enhancers.

Developing a meatlike texture

Most meat replacements are derived from soy protein because it is relatively cheap and easily absorbs both water and fat, binding these substances so they don’t separate. Some companies will use other proteins, such as wheat gluten, legumes – lentils, chickpeas, peas, beans – and proteins from seed oils.

Since most animal meats include some amount of fat, which adds flavor and texture to the product, plant-based meat alternative manufacturers will often add fats such as canola oil, coconut oil or sunflower oil to make the product softer and tastier.


Fats, like oil, don’t readily mix with water. They need to be emulsified to become one homogeneous substance.

FotografiaBasica/E+ via Getty Images

Proteins and fats don’t easily mix with water – that’s why the ingredients in salad dressings will sometimes separate into layers. When using these components, food manufacturers need to emulsify, or mix them, together. Emulsification is essential to making sure the proteins, fats and water form an integrated network with an appealing texture. Otherwise, the food product can end up greasy, spongy or just plain disgusting.

Many vegan meat alternatives also use gelling agents that bind water and fat. They help with emulsification because they contain starch, which interacts strongly with water and fat. This allows for more of a mixed network of the proteins, fats and water, making them meatier and more appealing to consumers.

Creating a product with a meatlike texture is not just a dump and stir process. Since animal meat is primarily muscle tissue, it has a unique spatial arrangement of the proteins, fats and water.

In order to mimic this structure, manufacturers use processes such as stretching, kneading, folding, layering, 3D printing and extrusion. Right now, the most popular processing method is extrusion.

Extrusion is a method in which the dry ingredients – plant proteins and fats – are fed into a machine along with a steady stream of water. The inner part of the machine rotates like a screw, mixing the ingredients and converting the structure of the plant proteins from spherical shapes into fibers.

Each plant protein behaves differently in the manufacturing process, so some plant-based meat alternatives might use different ingredients, depending on their structures.

Adding the savory flavor

Although the texture is essential, meat also has a distinctive savory and umami flavor.

A set of chemical reactions called Maillard browning helps develop the complex, rich flavor profiles of animal meats while they cook. So, additives such as yeast extracts, miso, mushrooms and spices can enhance the flavor of plant-based alternatives by allowing Maillard reactions to occur.

The aroma of cooked meats typically comes from chemical reactions between sugars and amino acids. Amino acids are the basic components of proteins. Lots of research has focused on attempting to replicate some of those reactions.

To promote these reactions, alternative meat developers will add browning agents, including specific amino acids such as cysteine, methionine and lysine, sugars and the vitamin thiamin. Adding natural smoke flavorings derived from hickory or mesquite can also give alternative meats a similar aroma.


Plant-based burgers made with more lentil or pea protein tend to look more brown and meatlike.

Bloomberg Creative/Bloomberg Creative Photos via Getty Images

Eating with the eyes

As the first-century Roman gourmand Apicius said, “We eat with our eyes first.”

That means that even if the texture is perfect and the flavors are on point, the consumer will still decide whether they want to buy and eat the vegan meat by the way it looks.

For this reason, food manufacturers will usually develop plant-based meat alternatives that look like classic meat dishes – hamburgers, meatballs, sausages or nuggets. They’ll also add natural coloring agents such as beetroot, annatto, caramel and vegetable juices that make plant-based alternatives look more like the color of traditional meat.

Plant proteins such as soy and wheat gluten do not brown like animal meat. So, some food manufacturers will increase the proportion of pea and lentil proteins they’re using, which makes the meat alternative look more brown while cooking.

With some research, it’s not too difficult to mimic the structure, texture, flavor and appearance of animal meats. But the question remains: Will people purchase and consume them?

It seems people do want plant-based meat. Countries all around the world have increased their demand for these products. In 2023, the global market was over US$7 billion, and it is predicted to grow by almost 20% by 2030.

Julie Pollock, Associate Professor of Chemistry, University of Richmond

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI harm is often behind the scenes and builds over time – a legal scholar explains how the law can adapt to respond

Published on theconversation.com – Sylvia Lu, Faculty Fellow and Visiting Assistant Professor of Law, University of Michigan – 2024-11-22 07:25:00

One AI harm is pervasive facial recognition, which erodes privacy.
DSCimage/iStock via Getty Images

Sylvia Lu, University of Michigan

As you scroll through your social media feed or let your favorite music app curate the perfect playlist, it may feel like artificial intelligence is improving your life – learning your preferences and serving your needs. But lurking behind this convenient facade is a growing concern: algorithmic harms.

These harms aren’t obvious or immediate. They’re insidious, building over time as AI systems quietly make decisions about your life without you even knowing it. The hidden power of these systems is becoming a significant threat to privacy, equality, autonomy and safety.

AI systems are embedded in nearly every facet of modern life. They suggest what shows and movies you should watch, help employers decide whom they want to hire, and even influence judges to decide who qualifies for a sentence. But what happens when these systems, often seen as neutral, begin making decisions that put certain groups at a disadvantage or, worse, cause real-world harm?

The often-overlooked consequences of AI applications call for regulatory frameworks that can keep pace with this rapidly evolving technology. I study the intersection of law and technology, and I’ve outlined a legal framework to do just that.

Slow burns

One of the most striking aspects of algorithmic harms is that their cumulative impact often flies under the radar. These systems typically don’t directly assault your privacy or autonomy in ways you can easily perceive. They gather vast amounts of data about people — often without their knowledge — and use this data to shape decisions affecting people’s lives.

Sometimes, this results in minor inconveniences, like an advertisement that follows you across websites. But as AI operates without addressing these repetitive harms, they can scale up, leading to significant cumulative damage across diverse groups of people.

Consider the example of social media algorithms. They are ostensibly designed to promote beneficial social interactions. Behind that facade, however, they silently track users’ clicks and compile profiles of their political beliefs, professional affiliations and personal lives. The data collected is used in systems that make consequential decisions — whether you are identified as a jaywalking pedestrian, considered for a job or flagged as a suicide risk.

Worse, their addictive design traps teenagers in cycles of overuse, leading to escalating mental health crises, including anxiety, depression and self-harm. By the time you grasp the full scope, it’s too late — your privacy has been breached, your opportunities shaped by biased algorithms, and the safety of the most vulnerable undermined, all without your knowledge.

This is what I call “intangible, cumulative harm”: AI systems operate in the background, but their impacts can be devastating and invisible.

Researcher Kumba Sennaar describes how AI systems perpetuate and exacerbate biases.

Why regulation lags behind

Despite these mounting dangers, legal frameworks worldwide have struggled to keep up. In the United States, a regulatory approach emphasizing innovation has made it difficult to impose strict standards on how these systems are used across multiple contexts.

Courts and regulatory bodies are accustomed to dealing with concrete harms, like physical injury or economic loss, but algorithmic harms are often more subtle, cumulative and hard to detect. The regulations often fail to address the broader effects that AI systems can have over time.

Social media algorithms, for example, can gradually erode users’ mental health, but because these harms build slowly, they are difficult to address within the confines of current legal standards.

Four types of algorithmic harm

Drawing on existing AI and data governance scholarship, I have categorized algorithmic harms into four legal areas: privacy, autonomy, equality and safety. Each of these domains is vulnerable to the subtle yet often unchecked power of AI systems.

The first type of harm is eroding privacy. AI systems collect, process and transfer vast amounts of data, eroding people’s privacy in ways that may not be immediately obvious but have long-term implications. For example, facial recognition systems can track people in public and private spaces, effectively turning mass surveillance into the norm.

The second type of harm is undermining autonomy. AI systems often subtly undermine your ability to make autonomous decisions by manipulating the information you see. Social media platforms use algorithms to show users content that maximizes a third party’s interests, subtly shaping opinions, decisions and behaviors across millions of users.

The third type of harm is diminishing equality. AI systems, while designed to be neutral, often inherit the biases present in their data and algorithms. This reinforces societal inequalities over time. In one infamous case, a facial recognition system used by retail stores to detect shoplifters disproportionately misidentified women and people of color.

The fourth type of harm is impairing safety. AI systems make decisions that affect people’s safety and well-being. When these systems fail, the consequences can be catastrophic. But even when they function as designed, they can still cause harm, such as social media algorithms’ cumulative effects on teenagers’ mental health.

Because these cumulative harms often arise from AI applications protected by trade secret laws, victims have no way to detect or trace the harm. This creates a gap in accountability. When a biased hiring decision or a wrongful arrest is made due to an algorithm, how does the victim know? Without transparency, it’s nearly impossible to hold companies accountable.

This UNESCO video features researchers from around the world explaining the issues around the ethics and regulation of AI.

Closing the accountability gap

Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, regulators could require an opt-in regime for data processing in firms’ facial recognition systems and allow users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology.

As AI systems become more widely used in critical societal functions – from health care to education and employment – the need to regulate harms they can cause becomes more pressing. Without intervention, these invisible harms are likely to continue to accumulate, affecting nearly everyone and disproportionately hitting the most vulnerable.

With generative AI multiplying and exacerbating AI harms, I believe it’s important for policymakers, courts, technology developers and civil society to recognize the legal harms of AI. This requires not just better laws, but a more thoughtful approach to cutting-edge AI technology – one that prioritizes civil rights and justice in the face of rapid technological advancement.

The future of AI holds incredible promise, but without the right legal frameworks, it could also entrench inequality and erode the very civil rights it is, in many cases, designed to enhance.

Sylvia Lu, Faculty Fellow and Visiting Assistant Professor of Law, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.
