
I unintentionally created a biased AI algorithm 25 years ago – tech companies are still making the same mistake


Facial recognition software misidentifies Black women more than other people.
JLco – Ana Suanes/iStock via Getty Images

John MacCormick, Dickinson College

In 1998, I unintentionally created a racially biased artificial intelligence algorithm. There are lessons in that story that resonate even more strongly today.

The dangers of bias and errors in AI algorithms are now well known. Why, then, has there been a flurry of blunders by tech companies in recent months, especially in the world of AI chatbots and image generators? Initial versions of ChatGPT produced racist output. The DALL-E 2 and Stable Diffusion image generators both showed racial bias in the pictures they created.

My own epiphany as a white male computer scientist occurred while teaching a computer science class in 2021. The class had just viewed a video poem by Joy Buolamwini, AI researcher and artist and the self-described poet of code. Her 2019 video poem “AI, Ain’t I a Woman?” is a devastating three-minute exposé of racial and gender biases in automatic face recognition systems – systems developed by tech companies like Google and Microsoft.

The systems often fail on women of color, incorrectly labeling them as male. Some of the failures are particularly egregious: The hair of Black civil rights leader Ida B. Wells is labeled as a “coonskin cap”; another Black woman is labeled as possessing a “walrus mustache.”

Echoing through the years

I had a horrible déjà vu moment in that computer science class: I suddenly remembered that I, too, had once created a racially biased algorithm. In 1998, I was a doctoral student. My project involved tracking the movements of a person’s head based on input from a video camera. My doctoral adviser had already developed mathematical techniques for accurately following the head in certain situations, but the system needed to be much faster and more robust. Earlier in the 1990s, researchers in other labs had shown that skin-colored areas of an image could be extracted in real time. So we decided to focus on skin color as an additional cue for the tracker.

a color video frame showing a young man entering a room with a red curve overlaying the image outlining his head
The author’s 1998 head-tracking algorithm used skin color to distinguish a face from the background of an image.
Source: John MacCormick, CC BY-ND

I used a digital camera – still a rarity at that time – to take a few shots of my own hand and face, and I also snapped the hands and faces of two or three other people who happened to be in the building. It was easy to manually extract some of the skin-colored pixels from these images and construct a statistical model for the skin colors. After some tweaking and debugging, we had a surprisingly robust real-time head-tracking system.
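
In modern terms, the approach amounts to fitting a simple probabilistic model to hand-labeled pixels. A minimal sketch of that general technique in Python – a single Gaussian over RGB values; the color space, model form and function names are illustrative assumptions, not the exact 1998 code – looks like this:

import numpy as np

def fit_skin_model(skin_pixels: np.ndarray):
    """Fit a Gaussian skin-color model.

    skin_pixels: (N, 3) array of RGB values manually extracted from
    labeled skin regions of the training photos.
    """
    mean = skin_pixels.mean(axis=0)          # (3,) average skin color
    cov = np.cov(skin_pixels, rowvar=False)  # (3, 3) spread of skin colors
    return mean, cov

def skin_score(pixels: np.ndarray, mean: np.ndarray, cov: np.ndarray):
    """Log-likelihood (up to a constant) of each pixel under the model.

    Higher scores mean a color sits closer, in Mahalanobis distance,
    to the skin colors the model was trained on.
    """
    diff = pixels.reshape(-1, 3) - mean
    inv = np.linalg.inv(cov)
    mahalanobis_sq = np.einsum("ij,jk,ik->i", diff, inv, diff)
    return -0.5 * mahalanobis_sq

A model like this inherits its notion of “skin” entirely from whoever happened to be photographed: if every training pixel comes from white skin, the Gaussian is centered there.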

Not long afterward, my adviser asked me to demonstrate the system to some visiting company executives. When they walked into the room, I was instantly flooded with anxiety: the executives were Japanese. In my casual experiment to see if a simple statistical model would work with our prototype, I had collected data from myself and a handful of others who happened to be in the building. But 100% of these subjects had “white” skin; the Japanese executives did not.

Miraculously, the system worked reasonably well on the executives anyway. But I was shocked by the realization that I had created a racially biased system that could have easily failed for other nonwhite people.

Privilege and priorities

How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.

Ten years before I created the head-tracking system, the scholar Peggy McIntosh proposed the idea of an “invisible knapsack” carried around by white people. Inside the knapsack is a treasure trove of privileges such as “I can do well in a challenging situation without being called a credit to my race,” and “I can criticize our government and talk about how much I fear its policies and behavior without being seen as a cultural outsider.”

In the age of AI, that knapsack needs some new items, such as “AI systems won’t give poor results because of my race.” The invisible knapsack of a white scientist would also need: “I can develop an AI system based on my own appearance, and know it will work well for most of my users.”

AI researcher and artist Joy Buolamwini’s video poem ‘AI, Ain’t I a Woman?’

One suggested remedy for white privilege is to be actively anti-racist. For the 1998 head-tracking system, it might seem obvious that the anti-racist remedy is to treat all skin colors equally. Certainly, we can and should ensure that the system’s training data represents the range of all skin colors as equally as possible.

Unfortunately, this does not guarantee that all skin colors observed by the system will be treated equally. The system must classify every possible color as skin or nonskin. Therefore, there exist colors right on the boundary between skin and nonskin – a region computer scientists call the decision boundary. A person whose skin color crosses over this decision boundary will be classified incorrectly.
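
To make that concrete, here is a follow-on sketch, reusing fit_skin_model and skin_score from the earlier illustration, with an arbitrary threshold:

# Thresholding the score splits every possible color into skin or nonskin.
# The surface where the score exactly equals the threshold is the decision
# boundary; the threshold value here is an arbitrary illustrative choice.
def classify(pixels, mean, cov, threshold=-4.0):
    return skin_score(pixels, mean, cov) > threshold

# Two nearly indistinguishable colors can land on opposite sides of the
# boundary and be treated differently; no choice of threshold avoids this.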

Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.

A simple analogy can explain this. Imagine you are given a choice between two tasks. Task A is to identify one particular type of tree – say, elm trees. Task B is to identify five types of trees: elm, ash, locust, beech and walnut. It’s obvious that if you are given a fixed amount of time to practice, you will perform better on Task A than Task B.

In the same way, an algorithm that tracks only white skin will be more accurate than an algorithm that tracks the full range of human skin colors. Even if they are aware of the need for diversity and fairness, scientists can be subconsciously affected by this competing need for accuracy.
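
A toy numerical illustration of that trade-off, under the assumed single-Gaussian model and with invented numbers: a model forced to cover five skin-tone clusters becomes far more diffuse than one covering a single cluster, so it overlaps more with background colors.

import numpy as np

rng = np.random.default_rng(0)

# One narrow skin-tone cluster vs. a diverse mix of five, on a single
# color channel; the cluster centers and widths are made up.
narrow = rng.normal(200, 5, 5_000)
diverse = np.concatenate([rng.normal(m, 5, 1_000) for m in (80, 110, 150, 190, 230)])

# A single Gaussian must inflate its spread to cover the diverse data,
# so any fixed acceptance band around its mean either misses many skin
# tones or sweeps in many background colors.
print(f"narrow spread:  {narrow.std():.0f}")   # about 5
print(f"diverse spread: {diverse.std():.0f}")  # about 46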

Hidden in the numbers

My creation of a biased algorithm was thoughtless and potentially offensive. Even more concerning, this incident demonstrates how bias can remain concealed deep within an AI system. To see why, consider a particular set of 12 numbers in a matrix of three rows and four columns. Do they seem racist? The head-tracking algorithm I developed in 1998 is controlled by a matrix like this, which describes the skin color model. But it’s impossible to tell from these numbers alone that this is in fact a racist matrix. They are just numbers, determined automatically by a computer program.

a matrix of numbers in three rows and four columns
This matrix is at the heart of the author’s 1998 skin color model. Can you spot the racism?
Source: John MacCormick, CC BY-ND
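
The layout of those 12 numbers isn’t spelled out here, but one plausible reading – an illustrative assumption, consistent with the Gaussian sketch above – is that they are simply the model’s parameters: a three-entry mean color packed alongside a 3×3 covariance.

import numpy as np

# Hypothetical packing of the skin model into a 3x4 matrix: the first
# column is the mean RGB color, the remaining 3x3 block the covariance.
# Twelve innocuous-looking numbers, yet together they determine which
# skin tones the tracker follows and which it loses.
def pack_model(mean: np.ndarray, cov: np.ndarray) -> np.ndarray:
    return np.hstack([mean.reshape(3, 1), cov])  # shape (3, 4)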

The problem of bias hiding in plain sight is much more severe in modern machine-learning systems. Deep neural networks – currently the most popular and powerful type of AI model – often have millions of numbers in which bias could be encoded. The biased face recognition systems critiqued in “AI, Ain’t I a Woman?” are all deep neural networks.

The good news is that a great deal of progress on AI fairness has already been made, both in academia and in industry. Microsoft, for example, has a research group known as FATE, devoted to Fairness, Accountability, Transparency and Ethics in AI. A leading machine-learning conference, NeurIPS, has detailed ethics guidelines, including an eight-point list of negative social impacts that must be considered by researchers who submit papers.

Who’s in the room is who’s at the table

On the other hand, even in 2023, fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.

The systems suffer from exactly the same problems as my 1998 head tracker. Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters.

So, how far has the AI field really come since it was possible, over 25 years ago, for a doctoral student to design and publish the results of a racially biased algorithm with no apparent oversight or consequences? It’s clear that biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.

These days it’s a cliché to say industry and academia need diverse groups of people “in the room” designing these algorithms. It would be helpful if the field could reach that point. But in reality, with North American computer science doctoral programs graduating classes that are only about 23% female and 3% Black and Latino, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.

That’s why the fundamental lessons of my 1998 head tracker are even more important today: It’s easy to make a mistake, it’s easy for bias to enter undetected, and everyone in the room is responsible for preventing it.

John MacCormick, Professor of Computer Science, Dickinson College

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI harm is often behind the scenes and builds over time – a legal scholar explains how the law can adapt to respond

Published on theconversation.com – 2024-11-22

One AI harm is pervasive facial recognition, which erodes privacy.
DSCimage/iStock via Getty Images

Sylvia Lu, University of Michigan

As you scroll through your social media feed or let your favorite music app curate the perfect playlist, it may feel like artificial intelligence is improving your life – learning your preferences and serving your needs. But lurking behind this convenient facade is a growing concern: algorithmic harms.

These harms aren’t obvious or immediate. They’re insidious, building over time as AI systems quietly make decisions about your life without you even knowing it. The hidden power of these systems is becoming a significant threat to privacy, equality, autonomy and safety.

AI systems are embedded in nearly every facet of modern life. They suggest what shows and movies you should watch, help employers decide whom to hire, and even influence judges’ sentencing decisions. But what happens when these systems, often seen as neutral, begin making decisions that put certain groups at a disadvantage or, worse, cause real-world harm?

The often-overlooked consequences of AI applications call for regulatory frameworks that can keep pace with this rapidly evolving technology. I study the intersection of law and technology, and I’ve outlined a legal framework to do just that.

Slow burns

One of the most striking aspects of algorithmic harms is that their cumulative impact often flies under the radar. These systems typically don’t directly assault your privacy or autonomy in ways you can easily perceive. They gather vast amounts of data about people — often without their knowledge — and use this data to shape decisions affecting people’s lives.

Sometimes, this results in minor inconveniences, like an advertisement that follows you across websites. But when AI systems keep operating without these repetitive harms being addressed, the harms can scale up, leading to significant cumulative damage across diverse groups of people.

Consider the example of social media algorithms. They are ostensibly designed to promote beneficial social interactions. However, behind their seemingly beneficial facade, they silently track users’ clicks and compile profiles of their political beliefs, professional affiliations and personal lives. The data collected is used in systems that make consequential decisions — whether you are identified as a jaywalking pedestrian, considered for a job or flagged as a suicide risk.

Worse, their addictive design traps teenagers in cycles of overuse, leading to escalating mental health crises, including anxiety, depression and self-harm. By the time you grasp the full scope, it’s too late — your privacy has been breached, your opportunities shaped by biased algorithms, and the safety of the most vulnerable undermined, all without your knowledge.

This is what I call “intangible, cumulative harm”: AI systems operate in the background, but their impacts can be devastating and invisible.

Researcher Kumba Sennaar describes how AI systems perpetuate and exacerbate biases.

Why regulation lags behind

Despite these mounting dangers, legal frameworks worldwide have struggled to keep up. In the United States, a regulatory approach emphasizing innovation has made it difficult to impose strict standards on how these systems are used across multiple contexts.

Courts and regulatory bodies are accustomed to dealing with concrete harms, like physical injury or economic loss, but algorithmic harms are often more subtle, cumulative and hard to detect. The regulations often fail to address the broader effects that AI systems can have over time.

Social media algorithms, for example, can gradually erode users’ mental health, but because these harms build slowly, they are difficult to address within the confines of current legal standards.

Four types of algorithmic harm

Drawing on existing AI and data governance scholarship, I have categorized algorithmic harms into four legal areas: privacy, autonomy, equality and safety. Each of these domains is vulnerable to the subtle yet often unchecked power of AI systems.

The first type of harm is eroding privacy. AI systems collect, process and transfer vast amounts of data, eroding people’s privacy in ways that may not be immediately obvious but have long-term implications. For example, facial recognition systems can track people in public and private spaces, effectively turning mass surveillance into the norm.

The second type of harm is undermining autonomy. AI systems often subtly undermine your ability to make autonomous decisions by manipulating the information you see. Social media platforms use algorithms to show users content that maximizes a third party’s interests, subtly shaping opinions, decisions and behaviors across millions of users.

The third type of harm is diminishing equality. AI systems, while designed to be neutral, often inherit the biases present in their data and algorithms. This reinforces societal inequalities over time. In one infamous case, a facial recognition system used by retail stores to detect shoplifters disproportionately misidentified women and people of color.

The fourth type of harm is impairing safety. AI systems make decisions that affect people’s safety and well-being. When these systems fail, the consequences can be catastrophic. But even when they function as designed, they can still cause harm, such as social media algorithms’ cumulative effects on teenagers’ mental health.

Because these cumulative harms often arise from AI applications protected by trade secret laws, victims have no way to detect or trace the harm. This creates a gap in accountability. When a biased hiring decision or a wrongful arrest is made due to an algorithm, how does the victim know? Without transparency, it’s nearly impossible to hold companies accountable.

This UNESCO video features researchers from around the world explaining the issues around the ethics and regulation of AI.

Closing the accountability gap

Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, regulators could require an opt-in regime for data processing by firms using facial recognition systems, and allow users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology.

As AI systems become more widely used in critical societal functions – from health care to education and employment – the need to regulate harms they can cause becomes more pressing. Without intervention, these invisible harms are likely to continue to accumulate, affecting nearly everyone and disproportionately hitting the most vulnerable.

With generative AI multiplying and exacerbating AI harms, I believe it’s important for policymakers, courts, technology developers and civil society to recognize the legal harms of AI. This requires not just better laws, but a more thoughtful approach to cutting-edge AI technology – one that prioritizes civil rights and justice in the face of rapid technological advancement.

The future of AI holds incredible promise, but without the right legal frameworks, it could also entrench inequality and erode the very civil rights it is, in many cases, designed to enhance.

Sylvia Lu, Faculty Fellow and Visiting Assistant Professor of Law, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Awkwardness can hit in any social situation – here are a philosopher’s 5 strategies to navigate it with grace

Published on theconversation.com – 2024-11-22

‘I don’t even know what to say to that.’
Catherine Falls Commercial/Moment via Getty Images

Alexandra Plakias, Hamilton College

The holidays offer many opportunities for awkward moments. Political discussions, of course, hold plenty of potential. But awkwardness can set in any time opinions differ, estrangements have caused lingering rifts, or behaviors veer toward the inappropriate.

Awkwardness is what happens in social interactions when you suddenly find yourself without a script to guide you through. Maybe the situation is new or catches you off guard. Maybe you don’t know what’s expected of you, or you aren’t sure what role you’re playing in the social drama around you. It’s characterized by feelings of self-consciousness, uncertainty and discomfort.

As a philosopher who studies moral psychology, I became interested in awkwardness because I wanted to understand the ways social discomfort stops people from engaging with difficult topics and challenging conversations. Awkwardness seems to inhibit people, even when their moral values suggest they should speak up. But it has a positive role to play, too – it can alert people to areas where their social norms are lacking or outdated.

People often blame themselves when things take a turn toward the awkward. But awkwardness is really a collective failure – people aren’t awkward, situations are. And they become awkward because you don’t have the resources to navigate your way through tricky social situations.

Awkwardness is often confused with embarrassment, but the two are different in important ways, and so are their remedies. Embarrassment is a response to a personal failing or gaffe, and the right response is to acknowledge it, own it and move on. Because awkwardness is caused by a lack of social guidance, you can try to anticipate and head it off before it happens, or you can respond to it by trying to develop better or clearer social scripts to help you – and others – navigate similar situations in the future.

After researching and writing an entire book on awkwardness, I’ve come to the conclusion that it’s not something we can – or should – avoid altogether. But there are a few strategies people can use to minimize awkwardness and deal with it when it does, inevitably, happen.

1. Know your goals, know your roles

Uncertainty is the oxygen of awkwardness. Before you engage in a potentially awkward or contentious interaction, ask yourself: What do I want to get out of this?

When you’re clear on your goals for the interaction, not only are you better able to perform your role in it, but you’re also giving clearer signals to others, helping them perform their roles in the unfolding social drama.

So, if you’re worried it’ll be awkward when your uncle starts in on his annual political rant, think about what you want the outcome to be. Do you want to convince him he’s wrong? Unlikely to happen. Do you want other family members to feel less anxious? Do you want your own views to be heard?

I’m not suggesting that some forethought will make things go smoothly or guarantee that no one’s feelings will be hurt. But it will help you feel more confident in your ability to navigate toward your desired outcome.

woman bringing pie to a family dinner table
Serving dessert could provide a lifeline to someone looking for a diversion.
Drazen Zigic/iStock via Getty Images Plus

2. There’s no ‘I’ in awkward

Awkward situations breed intense self-consciousness. This is both uncomfortable and counterproductive. By focusing on yourself, you’re not attuned to the people around you or the signals they’re sending – signals that could offer you a pathway out of the awkward situation. So make sure you’re paying attention to the other players in the drama, not just your own discomfort.

3. Plan, coordinate and be explicit

People do so much planning in other areas of their lives, yet they expect social interactions to just flow effortlessly. But like a vacation or a hike in the woods, sometimes a conversation goes better when you approach it with a map. Have some go-to topics or questions at hand.

And you don’t have to go it alone. If you’re worried about broaching a sensitive topic, or interacting with a particularly prickly guest, coordinate with a friend or relative.

If you expect to see someone with whom you have an unresolved relationship – an estranged family member, an old friend you ghosted – try to do some prep work in advance. Emails or letters can give people a chance to process reactions without putting them on the spot.

Even having a scripted activity on deck can make things less awkward. It doesn’t have to be anything formal, like a board game. Just keep some tasks available for guests who might otherwise lurk uncomfortably – like shaking up the salad dressing or putting forks on the table.

4. Laugh it off

If, despite your best efforts, awkwardness does strike, offer people a way out – they’ll probably grab it. This doesn’t need to be momentous; it could be a little joke, a small-talk topic, or even – and only if things get very desperate – knocking a spoon off the table to break the silence.

5. Consider the alternatives

These strategies might help you avoid awkwardness. But take a moment to consider whether you really want to. Awkwardness is the result of social uncertainty; it slows things down and curbs your confidence.

In its absence, other emotions can set in. Having things out in the open can be a relief, but it can also lead to anger, sadness and other feelings that might best be saved for another occasion.

So if things are awkward, it’s worth looking around to see what role that awkwardness is playing, and what might take its place if it’s gone.

Alexandra Plakias, Associate Professor of Philosophy, Hamilton College

This article is republished from The Conversation under a Creative Commons license. Read the original article.


No need to overload your cranberry sauce with sugar this holiday season − a food scientist explains how to cook with fewer added sweeteners

Published on theconversation.com – 2024-11-22

Fall means cranberry season − and sweet seasonal holiday dishes.
AP Photo/Sergei Grits

Rosemary Trout, Drexel University

The holidays are full of delicious and indulgent food and drinks. It’s hard to resist dreaming about cookies, specialty cakes, rich meats and super saucy side dishes.

Lots of the healthy raw ingredients used in holiday foods can end up overshadowed by sugar and starch. While adding extra sugar may be tasty, it’s not necessarily good for metabolism. Understanding the food and culinary science behind what you’re cooking means you can make a few alterations to a recipe and still have a delicious dish that’s not overloaded with sugar.

If you’re living with Type 1 diabetes in particular, the holidays may come with an additional layer of stress and volatile blood glucose levels. It’s no time for despair, though – it is the holidays, after all.

Cranberries are one seasonal, tasty fruit that can be modified in recipes to be more Type 1 diabetic-friendly – or friendly to anyone looking for a sweet dish without the extra sugar.

I am a food scientist and a Type 1 diabetic. Understanding food composition, ingredient interactions and metabolism has been a literal lifesaver for me.

Type 1 diabetes defined

Type 1 diabetes is all day every day, with no breaks during sleep, no holidays or weekends off, no remission and no cure. Type 1 diabetics don’t make insulin, a hormone required for survival that promotes the uptake of glucose, or sugar, into cells. The glucose in your cells then supplies your body with energy at the molecular level.

Consequently, Type 1 diabetics take insulin by injection, or via an insulin pump attached to their bodies, and hope that it works well enough to stabilize blood sugar and metabolism, minimize health complications over time and keep us alive.

Type 1 diabetics mainly consider the type and amount of carbohydrates in foods when figuring out how much insulin to take, but they also need to understand the protein and fat interactions in food to dose, or bolus, properly.
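
As a rough sketch of that carb-counting arithmetic – with hypothetical round-number parameters, since real insulin-to-carb ratios, correction factors and targets are individually prescribed and vary from person to person and hour to hour:

def bolus_units(carbs_g: float, glucose_mgdl: float,
                carb_ratio: float = 10.0,        # hypothetical: 1 unit covers 10 g of carbs
                correction_factor: float = 50.0, # hypothetical: 1 unit lowers glucose 50 mg/dL
                target_mgdl: float = 110.0) -> float:
    """Textbook mealtime bolus: carbs covered plus a high-glucose correction."""
    meal = carbs_g / carb_ratio
    correction = max(glucose_mgdl - target_mgdl, 0.0) / correction_factor
    return meal + correction

# Example: a side dish with 24 g of carbs at a glucose of 160 mg/dL:
# bolus_units(24, 160) -> 2.4 + 1.0 = 3.4 units (illustrative only)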

In addition to insulin, Type 1 diabetics don’t make another hormone, amylin, which slows gastric motility. This means food moves more quickly through our digestive tract, and we often feel very hungry. Foods that are high in fat, proteins and fiber can help to stave off hunger for a while.

Cranberries, a seasonal treat

Cranberries are native to North America and grow well in the Northeastern and Midwestern states, where they are in season between late September and December. They’re a staple on holiday tables all over the country.

A bowl of cranberries with the zest of an orange on top.
Cranberries are a classic Thanksgiving side dish, but cranberry sauce tends to contain a lot of sugar.
bhofack2/iStock via Getty Images

One cup of whole, raw cranberries contains about 46 calories (190 kilojoules). They are 87% water, with trace amounts of protein and fat, 12 grams of carbohydrates and just over 4 grams of soluble fiber. Soluble fiber combines well with water, which is good for digestive health and can slow the rise of blood glucose.

Cranberries are high in potassium, which helps with electrolyte balance and cell signaling, as well as other important nutrients such as antioxidants, beta-carotene and vitamin C. They also contain vitamin K, which helps with healthy blood clotting.

Cranberries’ flavor and aroma come from compounds in the fruit such as cinnamates that add cinnamon notes, vanillin for hints of vanilla, benzoates and benzaldehyde, which tastes like almonds.

Cranberries are high in pectin, a soluble fiber that forms a gel and is used as a setting agent in making jams and jellies, which is why they thicken readily with minimal cooking. Their beautiful red jewel-tone color comes from a class of compounds called anthocyanins and proanthocyanidins, which are associated with treating some types of infection.

They also contain phenolics, which are protective compounds produced by the plant. These compounds, which look like rings at the molecular level, interact with proteins in your saliva to produce a dry, astringent sensation that makes your mouth pucker. Similarly, a compound called benzoic acid naturally found in cranberries adds to the fruit’s sourness.

These chemical ingredients make them extremely sour and bitter, and difficult to consume raw. To mitigate these flavors and effects, most cranberry recipes call for lots of sugar.

All that extra sugar can make cranberry dishes hard to consume for Type 1 diabetics, because the sugars cause a rapid rise in blood glucose.

Cranberries without sugar?

Type 1 diabetics – or anyone who wants to reduce the added sugars they’re consuming – can try a few culinary tactics to lower their sugar intake while still enjoying this holiday treat.

Don’t cook your cranberries much longer after they pop: extended cooking concentrates some of the bitter compounds, making them more pronounced in your dish. Thanks to their pectin, you’ll still have a viscous cranberry liquid without the need for as much sugar.

A line of spoons, each heaped with a pile of powdered spice.
Adding spices to your cranberries can enhance the dish’s flavor without extra sugar.
klenova/iStock via Getty Images

Adding cinnamon, clove, cardamom, nutmeg and other warming spices gives the dish a depth of flavor. Adding heat with a spicy chili pepper can make your cranberry dish more complex while reducing sourness and astringency. Adding salt can reduce the cranberries’ bitterness, so you won’t need lots of sugar.

For a richer flavor and a glossy quality, add butter. Butter also lubricates your mouth, which tends to counteract the dish’s natural astringency. Other fats such as heavy cream or coconut oil work, too.

Adding chopped walnuts, almonds or hazelnuts can slow glucose absorption, so your blood glucose may not spike as quickly. Some new types of sweeteners, such as allulose, taste sweet but don’t raise blood sugar, requiring minimal to no insulin. Allulose has GRAS – generally recognized as safe – status in the U.S., but it isn’t approved as an additive in Europe.

This holiday season you can easily cut the amount of sugar added to your cranberry dishes and get the health benefits without a blood glucose spike.

Rosemary Trout, Associate Clinical Professor of Culinary Arts & Food Science, Drexel University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
