Connect with us

The Conversation

FTC probe of OpenAI: Consumer protection is the opening salvo of US AI regulation

Published

on

FTC probe of OpenAI: Consumer protection is the opening salvo of US AI regulation

The FTC probe of ChatGPT maker OpenAI aligns with concerns that members of Congress have expressed.
AP Photo/Michael Dwyer

Anjana Susarla, Michigan State University

The Federal Trade Commission has launched an investigation of ChatGPT maker OpenAI for potential violations of consumer protection laws. The FTC sent the company a 20-page demand for information in the week of July 10, 2023. The move comes as European regulators have begun to take action, and Congress is working on legislation to regulate the artificial intelligence industry.

The FTC has asked OpenAI to provide details of all complaints the company has received from users regarding “false, misleading, disparaging, or harmful” statements put out by OpenAI, and whether OpenAI engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm. The agency has asked detailed questions about how OpenAI obtains its data, how it trains its models, the processes it uses for human feedback, risk assessment and mitigation, and its mechanisms for privacy protection.

As a researcher of social media and AI, I recognize the immensely transformative potential of generative AI models, but I believe that these systems pose risks. In particular, in the context of consumer protection, these models can produce errors, exhibit biases and violate personal data privacy.

Hidden power

At the heart of chatbots such as ChatGPT and image generation tools such as DALL-E lies the power of generative AI models that can create realistic content from text, images, audio and video inputs. These tools can be accessed through a browser or a smartphone app.

Since these AI models have no predefined use, they can be fine-tuned for a wide range of applications in a variety of domains ranging from finance to biology. The models, trained on vast quantities of data, can be adapted for different tasks with little to no coding and sometimes as easily as by describing a task in simple language.

Given that AI models such as GPT-3 and GPT-4 were developed by private organizations using proprietary data sets, the public doesn’t know the nature of the data used to train them. The opacity of training data and the complexity of the model architecture – GPT-3 was trained on over 175 billion variables or “parameters” – make it difficult for anyone to audit these models. Consequently, it’s difficult to prove that the way they are built or trained causes harm.

Hallucinations

In language model AIs, a hallucination is a confident response that is inaccurate and seemingly not justified by a model’s training data. Even some generative AI models that were designed to be less prone to hallucinations have amplified them.

There is a danger that generative AI models can produce incorrect or misleading information that can end up being damaging to users. A study investigating ChatGPT’s ability to generate factually correct scientific writing in the medical field found that ChatGPT ended up either generating citations to nonexistent papers or reporting nonexistent results. My collaborators and I found similar patterns in our investigations.

Such hallucinations can cause real damage when the models are used without adequate supervision. For example, ChatGPT falsely claimed that a professor it named had been accused of sexual harassment. And a radio host has filed a defamation lawsuit against OpenAI regarding ChatGPT falsely claiming that there was a legal complaint against him for embezzlement.

Bias and discrimination

Without adequate safeguards or protections, generative AI models trained on vast quantities of data collected from the internet can end up replicating existing societal biases. For example, organizations that use generative AI models to design recruiting campaigns could end up unintentionally discriminating against some groups of people.

When a journalist asked DALL-E 2 to generate images of “a technology journalist writing an article about a new AI system that can create remarkable and strange images,” it generated only pictures of men. An AI portrait app exhibited several sociocultural biases, for example by lightening the skin color of an actress.

Data privacy

Another major concern, especially pertinent to the FTC investigation, is the risk of privacy breaches where the AI may end up revealing sensitive or confidential information. A hacker could gain access to sensitive information about people whose data was used to train an AI model.

Researchers have cautioned about risks from manipulations called prompt injection attacks, which can trick generative AI into giving out information that it shouldn’t. “Indirect prompt injection” attacks could trick AI models with steps such as sending someone a calendar invitation with instructions for their digital assistant to export the recipient’s data and send it to the hacker.

A man in a business suit stands with his right hand raised in a wood-paneled room.
OpenAI CEO Sam Altman testified before a Senate Judiciary subcommittee on May 16, 2023. AI regulation legislation is in the works, but the FTC beat Congress to the punch.
AP Photo/Patrick Semansky

Some solutions

The European Commission has published ethical guidelines for trustworthy AI that include an assessment checklist for six different aspects of AI systems: human agency and oversight; technical robustness and safety; privacy and data governance; transparency, diversity, nondiscrimination and fairness; societal and environmental well-being; and accountability.

Better documentation of AI developers’ processes can help in highlighting potential harms. For example, researchers of algorithmic fairness have proposed model cards, which are similar to nutritional labels for food. Data statements and datasheets, which characterize data sets used to train AI models, would serve a similar role.

Amazon Web Services, for instance, introduced AI service cards that describe the uses and limitations of some models it provides. The cards describe the models’ capabilities, training data and intended uses.

The FTC’s inquiry hints that this type of disclosure may be a direction that U.S. regulators take. Also, if the FTC finds OpenAI has violated consumer protection laws, it could fine the company or put it under a consent decree.The Conversation

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Did you miss our previous article…
https://www.biloxinewsevents.com/?p=268739

The Conversation

An 83-year-old short story by Borges portends a bleak future for the internet

Published

on

theconversation.com – Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis – 2024-11-19 07:22:00

Fifty years before the architecture for the web was created, Jorge Luis Borges had already imagined an analog equivalent.
Sophie Bassouls/Sygma via Getty Images

Roger J. Kreuz, University of Memphis

How will the internet evolve in the coming decades?

Fiction writers have explored some possibilities.

In his 2019 novel “Fall,” science fiction author Neal Stephenson imagined a near future in which the internet still exists. But it has become so polluted with misinformation, disinformation and advertising that it is largely unusable.

Characters in Stephenson’s novel deal with this problem by subscribing to “edit streams” – human-selected news and information that can be considered trustworthy.

The drawback is that only the wealthy can afford such bespoke services, leaving most of humanity to consume low-quality, noncurated online content.

To some extent, this has already happened: Many news organizations, such as The New York Times and The Wall Street Journal, have placed their curated content behind paywalls. Meanwhile, misinformation festers on social media platforms like X and TikTok.

Stephenson’s record as a prognosticator has been impressive – he anticipated the metaverse in his 1992 novel “Snow Crash,” and a key plot element of his “Diamond Age,” released in 1995, is an interactive primer that functions much like a chatbot.

On the surface, chatbots seem to provide a solution to the misinformation epidemic. By dispensing factual content, chatbots could supply alternative sources of high-quality information that aren’t cordoned off by paywalls.

Ironically, however, the output of these chatbots may represent the greatest danger to the future of the web – one that was hinted at decades earlier by Argentine writer Jorge Luis Borges.

The rise of the chatbots

Today, a significant fraction of the internet still consists of factual and ostensibly truthful content, such as articles and books that have been peer-reviewed, fact-checked or vetted in some way.

The developers of large language models, or LLMs – the engines that power bots like ChatGPT, Copilot and Gemini – have taken advantage of this resource.

To perform their magic, however, these models must ingest immense quantities of high-quality text for training purposes. A vast amount of verbiage has already been scraped from online sources and fed to the fledgling LLMs.

The problem is that the web, enormous as it is, is a finite resource. High-quality text that hasn’t already been strip-mined is becoming scarce, leading to what The New York Times called an “emerging crisis in content.”

This has forced companies like OpenAI to enter into agreements with publishers to obtain even more raw material for their ravenous bots. But according to one prediction, a shortage of additional high-quality training data may strike as early as 2026.

As the output of chatbots ends up online, these second-generation texts – complete with made-up information called “hallucinations,” as well as outright errors, such as suggestions to put glue on your pizza – will further pollute the web.

And if a chatbot hangs out with the wrong sort of people online, it can pick up their repellent views. Microsoft discovered this the hard way in 2016, when it had to pull the plug on Tay, a bot that started repeating racist and sexist content.

Over time, all of these issues could make online content even less trustworthy and less useful than it is today. In addition, LLMs that are fed a diet of low-calorie content may produce even more problematic output that also ends up on the web.

An infinite − and useless − library

It’s not hard to imagine a feedback loop that results in a continuous process of degradation as the bots feed on their own imperfect output.

A July 2024 paper published in Nature explored the consequences of training AI models on recursively generated data. It showed that “irreversible defects” can lead to “model collapse” for systems trained in this way – much like an image’s copy and a copy of that copy, and a copy of that copy, will lose fidelity to the original image.

How bad might this get?

Consider Borges’ 1941 short story “The Library of Babel.” Fifty years before computer scientist Tim Berners-Lee created the architecture for the web, Borges had already imagined an analog equivalent.

In his 3,000-word story, the writer imagines a world consisting of an enormous and possibly infinite number of hexagonal rooms. The bookshelves in each room hold uniform volumes that must, its inhabitants intuit, contain every possible permutation of letters in their alphabet.

Illustration of connected gold hexagons that expand endlessly into the horizon.
In Borges’ imaginary, endlessly expansive library of content, finding something meaningful is like finding a needle in a haystack.
aire images/Moment via Getty Images

Initially, this realization sparks joy: By definition, there must exist books that detail the future of humanity and the meaning of life.

The inhabitants search for such books, only to discover that the vast majority contain nothing but meaningless combinations of letters. The truth is out there –but so is every conceivable falsehood. And all of it is embedded in an inconceivably vast amount of gibberish.

Even after centuries of searching, only a few meaningful fragments are found. And even then, there is no way to determine whether these coherent texts are truths or lies. Hope turns into despair.

Will the web become so polluted that only the wealthy can afford accurate and reliable information? Or will an infinite number of chatbots produce so much tainted verbiage that finding accurate information online becomes like searching for a needle in a haystack?

The internet is often described as one of humanity’s great achievements. But like any other resource, it’s important to give serious thought to how it is maintained and managed – lest we end up confronting the dystopian vision imagined by Borges.The Conversation

Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More

The post An 83-year-old short story by Borges portends a bleak future for the internet appeared first on theconversation.com

Continue Reading

The Conversation

A better understanding of what people do on their devices is key to digital well-being

Published

on

theconversation.com – Rinanda Shaleha, Doctoral student in the College of Health and Human Development, Penn State – 2024-11-19 07:21:00

What you do on your screens matters as much as how much time you spend on them.
Klaus Vedfelt/DigitalVision via Getty Images

Rinanda Shaleha, Penn State

In an era where digital devices are everywhere, the term “screen time” has become a buzz phrase in discussions about technology’s impact on people’s lives. Parents are concerned about their children’s screen habits. But what if this entire approach to screen time is fundamentally flawed?

While researchers have made advances in measuring screen use, a detailed critique of the research in 2020 revealed major issues in how screen time is conceptualized, measured and studied. I study how digital technology affects human cognition and emotions. My ongoing research with cognitive psychologist Nelson Roque builds on that critique’s findings.

We categorized existing screen-time measures, mapping them to attributes like whether they are duration-based or context-specific, and are studying how they relate to health outcomes such as anxiety, stress, depression, loneliness, mood and sleep quality, creating a clearer framework for understanding screen time. We believe that grouping all digital activities together misses how different types of screen use affect people.

By applying this framework, researchers can better identify which digital activities are beneficial or potentially harmful, allowing people to adopt more intentional screen habits that support well-being and reduce negative mental and emotional health effects.

Screen time isn’t one thing

Screen time, at first glance, seems easy to understand: It’s simply the time spent on devices with screens such as smartphones, tablets, laptops and TVs. But this basic definition hides the variety within people’s digital activities. To truly understand screen time’s impact, you need to look closer at specific digital activities and how each affects cognitive function and mental health.

In our research, we divide screen time into four broad categories: educational use, work-related use, social interaction and entertainment.

For education, activities like online classes and reading articles can improve cognitive skills like problem-solving and critical thinking. Digital tools like mobile apps can support learning by boosting motivation, self-regulation and self-control.

But these tools also pose challenges, such as distracting learners and contributing to poorer recall compared with traditional learning methods. For young users, screen-based learning may even have negative impacts on development and their social environment.

Screen time for work, like writing reports or attending virtual meetings, is a central part of modern life. It can improve productivity and enable remote work. However, prolonged screen exposure and multitasking may also lead to stress, anxiety and cognitive fatigue.

Screen use for social connection helps people interact with others through video chats, social media or online communities. These interactions can promote social connectedness and even improve health outcomes such as decreased depressive symptoms and improved glycemic control for people with chronic conditions. But passive screen use, like endless social media scrolling, can lead to negative experiences such as cyberbullying, social comparison and loneliness, especially for teens.

Screen use for entertainment provides relaxation and stress relief. Mindfulness apps or meditation tools, for example, can reduce anxiety and improve emotional regulation. Creative digital activities, like graphic design and music production, can reduce stress and improve mental health. However, too much screen use may reduce well-being by limiting physical activity and time for other rewarding pursuits.

Context matters

Screen time affects people differently based on factors like mood, social setting, age and family environment. Your emotions before and during screen use can shape your experience. Positive interactions can lift your mood, while loneliness might deepen with certain online activities. For example, we found that differences in age and stress levels affect how readily people become distracted on their devices. Alerts and other changes distract users, which makes it more challenging to focus on tasks.

The social context of screen use also matters. Watching a movie with family can strengthen bonds, while using screens alone can increase feelings of isolation, especially when it replaces face-to-face interactions.

Family influence plays a role, too. For example, parents’ screen habits affect their children’s screen behavior, and structured parental involvement can help reduce excessive use. It highlights the positive effect of structured parental involvement, along with mindful social contexts, in managing screen time for healthier digital interactions.

A woman, man and child look at a tablet screen in a living room
Shared screen time with family and friends can boost well-being.
kate_sept2004/E+ via Getty Images

Consistency and nuance

Technology now lets researchers track screen use accurately, but simply counting hours doesn’t give us the full picture. Even when we measure specific activities, like social media or gaming, studies don’t often capture engagement level or intent. For example, someone might use social media to stay informed or to procrastinate.

Studies on screen time often vary in how they define and categorize it. Some focus on total screen exposure without differentiating between activities. Others examine specific types of use but may not account for the content or context. This lack of consistency in defining screen time makes it hard to compare studies or generalize findings.

Understanding screen use requires a more nuanced approach than tracking the amount of time people spend on their screens. Recognizing the different effects of specific digital activities and distinguishing between active and passive use are crucial steps. Using standardized definitions and combining quantitative data with personal insights would provide a fuller picture. Researchers can also study how screen use affects people over time.

For policymakers, this means developing guidelines that move beyond one-size-fits-all limits by focusing on recommendations suited to specific activities and individual needs. For the rest of us, this awareness encourages a balanced digital diet that blends enriching online and offline activities for better well-being.The Conversation

Rinanda Shaleha, Doctoral student in the College of Health and Human Development, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More

The post A better understanding of what people do on their devices is key to digital well-being appeared first on theconversation.com

Continue Reading

The Conversation

From using plant rinds to high-tech materials, bike helmets have improved significantly over the past 2 centuries

Published

on

theconversation.com – Jud Ready, Principal Research Engineer in Materials Science and Engineering, Georgia Institute of Technology – 2024-11-18 07:27:00

Modern bike helmets are made through complex materials engineering.
Johner Images via Getty Images

Jud Ready, Georgia Institute of Technology

Imagine – it’s the mid-1800s, and you’re riding your high-wheeled, penny-farthing bicycle down a dusty road. Sure, it may have some bumps, but if you lose your balance, you’re landing on a relatively soft dirt road. But as the years go by, these roads are replaced with pavement, cobblestones, bricks or wooden slats. All these materials are much harder and still quite bumpy.

As paved roads grew more common across the U.S. and Europe, bicyclists started to suffer gruesome skull fractures and other serious head injuries during falls.

As head injuries became more common, people started seeking out head protection. But the first bike helmets were very different than helmets of today.

I’m a materials engineer who teaches a course at Georgia Tech about materials science and engineering in sports. The class covers many topics, but particularly helmets, as they’re used in many different sports, including cycling, and the materials they’re made of play an important role in how they work. Over the decades, people have used a wide variety of materials to protect their heads while biking, and companies continue to develop new and innovative materials.

In the beginning, there was the pith helmet.

Pith helmets

The first head protection concept introduced to the biking world was a hat made from pith, which is the spongy rind found in the stem of sola plants, aeschynomene aspera. Pith helmet craftsmen would press the pith into sheets and laminate it across dome-shaped molds to form a helmet shape. Then, they’d cover the hats in canvas as a form of weatherproofing.

A hat made of a brown material with a flat rim.
Hats made out of pith were used by militaries as well as for head protection while biking.
Auckland Museum, CC BY-SA

Pith helmets were far from what we would consider a helmet today, but they persisted until the early 20th century, when bicycle-racing clubs emerged. Since pith helmets offered little to no ventilation, the racers began to use halo-shaped leather helmets. These had better airflow and were more comfortable, although they weren’t much better at protecting the head.

A bike helmet made from leather strips connected into a dome on the head of a mannequin.
Leather strip bike helmets were made in the 1930s.
Museums Victoria, CC BY-SA

Leather halo helmets

The initial concept for the halo helmet used a simple leather strip wrapped around the forehead. But these halo helmets quickly evolved, as riders arranged additional strips longitudinally from front to back. They wrapped the leather bands in wool.

For better head protection, the helmet makers then started adding more layers of leather strips to increase the helmet’s thickness. Eventually, they added different materials such as cotton, foam and other textiles into these leather layers for better protection.

While these had better airflow than the pith hats, the leather “hairnet” helmets continued to offer very little protection during a fall on a paved surface. And, like pith, the leather helmets degraded when exposed to sweat and rain.

Despite these drawbacks, leather strip helmets dominated the market for several decades as cycling continued to evolve throughout the 20th century.

Then, in the 1970s, a nonprofit dedicated to testing motorcycle helmets called the Snell Foundation released new standards for bike helmets. They set their standards so high that only lightweight motorcycle helmets could pass, which most bicyclists refused to wear.

New materials and new helmets

The motorcycle equipment manufacturing company Bell Motorsports responded to the new standards by releasing the Bell Biker in 1975. This helmet used expanded polystyrene, or EPS. EPS is the same foam used to manufacture styrofoam coolers. It’s lightweight and absorbs energy well.

Constructing the Bell Biker involved spraying EPS into a dome shaped mold. The manufacturers used small pellets of a very hard plastic – polycarbonate, or PC – to mold an outer shell and then adhere it to the outside of the EPS.

Mottled white foam
Expanded polystyrene, or EPS, is a foam used in styrofoam coolers as well as the core of bike helmets.
Tiia Monto/Wikimedia Commons, CC BY-SA

Unlike the pith and leather helmets, this design was lightweight, load bearing, impact absorbing and well ventilated. The PC shell provided a smooth surface so that during a fall, the helmet would skid along the pavement instead of getting jerked around and caught, which could cause abrupt head rotation and lead to concussions and other head and neck injuries.

Over the next two decades, as cycling became more popular, helmet manufacturers tried to strike the perfect balance between lightweight and ventilated helmets, while simultaneously providing impact protection.

In order to decrease weight, a company called Giro Sport Design created an all-EPS helmet covered by a thin lycra fabric cover instead of a hard PC shell. This design eliminated the weight of the PC shell and improved ventilation.

In 1989, a company called Pro Tec introduced a helmet with a nylon mesh infused in the EPS foam core. The nylon mesh dramatically increased the helmet’s structural support without the added weight of the PC shell.

A man standing by a bike wearing a green helmet that's made of a thin material with a long tail.
Many racing cyclists found teardrop-style helmets to be more aerodynamic.
Bongarts/Getty Images, CC BY-NC-ND

Meanwhile, as cycling became more competitive, many riders and manufacturers started designing more aerodynamic helmets using the existing materials. A revolutionary teardrop style helmet debuted in the 1984 Olympics.

Now, even casual biking enthusiasts will don teardrop helmets.

Helmets on the market today

Helmet makers continue to innovate. Today, many commercial brands use a hard polyethylene terephthalate, or PET, shell around the EPS foam in place of a PC shell to increase the helmet’s protection and lifespan, while decreasing cost.

Meanwhile, some brands still use PC shells. Instead of gluing them to the EPS foam, the shell serves as the mold itself, with the EPS expanding to fit inside it. Manufacturing helmets this way eliminates several process steps, as well as any gaps between the foam and shell. This process makes the helmet both stronger and cheaper to manufacture.

As helmets evolve to provide more protection with still lighter weight, materials called copolymers, such as acrylonitrile-butadiene-styrene, are replacing PC and PET shell materials.

Materials that are easier and cheaper to manufacture, such as expanded polyurethane and expanded polypropylene, are also starting to replace the ubiquitous EPS core.

Just as the leather and pith helmets would look strange to a cyclist today, a century from now, bike helmets could be made with entirely new and innovative materials.The Conversation

Jud Ready, Principal Research Engineer in Materials Science and Engineering, Georgia Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More

The post From using plant rinds to high-tech materials, bike helmets have improved significantly over the past 2 centuries appeared first on theconversation.com

Continue Reading

Trending