The Conversation

Disability community has long wrestled with ‘helpful’ technologies – lessons for everyone in dealing with AI

theconversation.com – Elaine Short, Assistant Professor of Computer Science, Tufts University – 2024-07-01 07:19:34

A robotic arm helps a disabled person paint a picture.

Jenna Schad/Tufts University

Elaine Short, Tufts University

You might have heard that artificial intelligence is going to revolutionize everything, save the world and give everyone superhuman powers. Alternatively, you might have heard that it will take your job, make you lazy and stupid, and make the world a cyberpunk dystopia.

Consider another way to look at AI: as an assistive technology – something that helps you function.

With that view, also consider a community of experts in giving and receiving assistance: the disability community. Many disabled people use technology extensively, both dedicated assistive technologies such as wheelchairs and general-use technologies such as smart home devices.

Equally, many disabled people receive professional and casual assistance from other people. And, despite stereotypes to the contrary, many disabled people regularly give assistance to the disabled and nondisabled people around them.

Disabled people are well experienced in receiving and giving social and technical assistance, which makes them a valuable source of insight into how everyone might relate to AI systems in the future. This potential is a key driver for my work as a disabled person and researcher in AI and robotics.

Actively learning to live with help

While virtually everyone values independence, no one is fully independent. Each of us depends on others to grow our food, care for us when we are ill, give us advice and emotional support, and help us in thousands of interconnected ways. Being disabled means having support needs that are outside what is typical and therefore those needs are much more visible. Because of this, the disability community has reckoned more explicitly with what it means to need help to live than most nondisabled people.

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can’t substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone.

The curb-cut effect – how technologies built for disabled people help everyone – has become a principle of good design.

This is sometimes called the curb-cut effect, after the way a ramp cut into a curb to help wheelchair users reach the sidewalk also benefits people with strollers, rolling suitcases and bicycles.

Partnering in assistance

You have probably had the experience of someone trying to help you without listening to what you actually need. For example, a parent or friend might “help” you clean and instead end up hiding everything you need.

Disability advocates have long battled this type of well-meaning but intrusive assistance – for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked, or by advocating for services that keep the disabled person in control.

The disability community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over.

A key goal of my lab’s work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. For example, most people find it difficult to use a joystick to move a robot arm: The joystick can only move from front to back and side to side, but the arm can move in almost as many ways as a human arm.

The author discusses her work on robots that are designed to help people.

To help, AI can predict what someone is planning to do with the robot and then move the robot accordingly. Previous research assumed that people would ignore this help, but we found that people quickly figured out that the system was doing something, actively worked to understand what it was doing, and tried to work with the system to get it to do what they wanted.
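One common way to build this kind of assistance – though not necessarily my lab's exact method – is to keep a small set of candidate goals, guess which one the user's joystick motion is aimed at, and blend an autonomous nudge toward that goal with the user's own command. The sketch below illustrates the idea in two dimensions; the goal positions, confidence rule and blending weight are made-up values for illustration.

```python
import numpy as np

# Candidate objects the arm might reach for (made-up 2D positions).
GOALS = [np.array([0.6, 0.2]), np.array([0.1, 0.8])]

def predict_goal(user_cmd, hand_pos, goals):
    """Guess which goal the joystick motion is aimed at: the goal whose
    direction from the hand best matches the commanded direction."""
    cmd_dir = user_cmd / (np.linalg.norm(user_cmd) + 1e-9)
    scores = []
    for g in goals:
        to_goal = g - hand_pos
        to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
        scores.append(float(np.dot(cmd_dir, to_goal)))   # cosine similarity
    best = int(np.argmax(scores))
    confidence = max(0.0, scores[best])   # 0 = no idea, 1 = perfectly aligned
    return goals[best], confidence

def blend(user_cmd, hand_pos, goals, max_assist=0.7):
    """Mix the user's command with an autonomous nudge toward the predicted
    goal. The cap on max_assist keeps the person in control."""
    goal, confidence = predict_goal(user_cmd, hand_pos, goals)
    to_goal = goal - hand_pos
    robot_cmd = to_goal / (np.linalg.norm(to_goal) + 1e-9) * np.linalg.norm(user_cmd)
    alpha = max_assist * confidence       # assist more when the guess is confident
    return (1 - alpha) * user_cmd + alpha * robot_cmd

hand = np.array([0.0, 0.0])
joystick = np.array([0.5, 0.1])           # user pushes roughly toward the first goal
print(blend(joystick, hand, GOALS))       # blended command bends toward that goal
```

Because the blended command never fully overrides the joystick, a person who disagrees with the robot's guess can still steer it elsewhere – the collaborative give-and-take described above.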

Most AI systems don’t make this easy, but my lab’s new approaches to AI empower people to influence robot behavior. We have shown that this results in better interactions in tasks that are creative, like painting. We also have begun to investigate how people can use this control to solve problems outside the ones the robots were designed for. For example, people can use a robot that is trained to carry a cup of water to instead pour the water out to water their plants.

Training AI on human variability

The disability-centered perspective also raises concerns about the huge datasets that power AI. The very nature of data-driven AI is to look for common patterns. In general, the better-represented something is in the data, the better the model works.

If disability means having a body or mind outside what is typical, then disability means not being well-represented in the data. Whether it’s AI systems designed to detect cheating on exams instead detecting students’ disabilities or robots that fail to account for wheelchair users, disabled people’s interactions with AI reveal how those systems are brittle.

One of my goals as an AI researcher is to make AI more responsive and adaptable to real human variation, especially in AI systems that learn directly from interacting with people. We have developed frameworks for testing how robust those AI systems are to real human teaching and explored how robots can learn better from human teachers even when those teachers change over time.

Thinking of AI as an assistive technology, and learning from the disability community, can help to ensure that the AI systems of the future serve people’s needs – with people in the driver’s seat.

Elaine Short, Assistant Professor of Computer Science, Tufts University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

An 83-year-old short story by Borges portends a bleak future for the internet

theconversation.com – Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis – 2024-11-19 07:22:00

Fifty years before the architecture for the web was created, Jorge Luis Borges had already imagined an analog equivalent.
Sophie Bassouls/Sygma via Getty Images

Roger J. Kreuz, University of Memphis

How will the internet evolve in the coming decades?

Fiction writers have explored some possibilities.

In his 2019 novel “Fall,” science fiction author Neal Stephenson imagined a near future in which the internet still exists. But it has become so polluted with misinformation, disinformation and advertising that it is largely unusable.

Characters in Stephenson’s novel deal with this problem by subscribing to “edit streams” – human-selected news and information that can be considered trustworthy.

The drawback is that only the wealthy can afford such bespoke services, leaving most of humanity to consume low-quality, noncurated online content.

To some extent, this has already happened: Many news organizations, such as The New York Times and The Wall Street Journal, have placed their curated content behind paywalls. Meanwhile, misinformation festers on social media platforms like X and TikTok.

Stephenson’s record as a prognosticator has been impressive – he anticipated the metaverse in his 1992 novel “Snow Crash,” and a key plot element of his “Diamond Age,” released in 1995, is an interactive primer that functions much like a chatbot.

On the surface, chatbots seem to provide a solution to the misinformation epidemic. By dispensing factual content, chatbots could supply alternative sources of high-quality information that aren’t cordoned off by paywalls.

Ironically, however, the output of these chatbots may represent the greatest danger to the future of the web – one that was hinted at decades earlier by Argentine writer Jorge Luis Borges.

The rise of the chatbots

Today, a significant fraction of the internet still consists of factual and ostensibly truthful content, such as articles and books that have been peer-reviewed, fact-checked or vetted in some way.

The developers of large language models, or LLMs – the engines that power bots like ChatGPT, Copilot and Gemini – have taken advantage of this resource.

To perform their magic, however, these models must ingest immense quantities of high-quality text for training purposes. A vast amount of verbiage has already been scraped from online sources and fed to the fledgling LLMs.

The problem is that the web, enormous as it is, is a finite resource. High-quality text that hasn’t already been strip-mined is becoming scarce, leading to what The New York Times called an “emerging crisis in content.”

This has forced companies like OpenAI to enter into agreements with publishers to obtain even more raw material for their ravenous bots. But according to one prediction, a shortage of additional high-quality training data may strike as early as 2026.

As the output of chatbots ends up online, these second-generation texts – complete with made-up information called “hallucinations,” as well as outright errors, such as suggestions to put glue on your pizza – will further pollute the web.

And if a chatbot hangs out with the wrong sort of people online, it can pick up their repellent views. Microsoft discovered this the hard way in 2016, when it had to pull the plug on Tay, a bot that started repeating racist and sexist content.

Over time, all of these issues could make online content even less trustworthy and less useful than it is today. In addition, LLMs that are fed a diet of low-calorie content may produce even more problematic output that also ends up on the web.

An infinite – and useless – library

It’s not hard to imagine a feedback loop that results in a continuous process of degradation as the bots feed on their own imperfect output.

A July 2024 paper published in Nature explored the consequences of training AI models on recursively generated data. It showed that “irreversible defects” can lead to “model collapse” for systems trained in this way – much like how a copy of an image, and a copy of that copy, and a copy of that copy, progressively loses fidelity to the original.
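The Nature study worked with large language models, but the feedback loop itself can be shown with a much smaller stand-in. The toy simulation below is a deliberately simplified sketch, not the paper's actual setup: each "generation" is trained only on text sampled from the previous generation, and rare words that happen to be missed in a sample are lost for good, so the tails of the distribution vanish first – the flavor of the copy-of-a-copy analogy.

```python
import random
from collections import Counter

# Generation 0: a toy "language" with a few common words and a long tail of rare ones.
vocab = [f"w{i}" for i in range(50)]
weights = [1.0 / (i + 1) for i in range(50)]      # Zipf-like frequencies

def next_generation(vocab, weights, n_samples=500):
    """Sample a training set from the current model, then refit the model to
    that sample alone. Words that never get sampled vanish for good."""
    sample = random.choices(vocab, weights=weights, k=n_samples)
    counts = Counter(sample)
    new_vocab = list(counts)
    new_weights = [counts[w] for w in new_vocab]
    return new_vocab, new_weights

for gen in range(1, 21):
    vocab, weights = next_generation(vocab, weights)
    if gen % 5 == 0:
        print(f"generation {gen:2d}: {len(vocab)} of 50 words survive")
# Typical run: the rare words steadily disappear, and each loss is irreversible -
# the tails of the original distribution are the first casualties.
```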

How bad might this get?

Consider Borges’ 1941 short story “The Library of Babel.” Fifty years before computer scientist Tim Berners-Lee created the architecture for the web, Borges had already imagined an analog equivalent.

In his 3,000-word story, the writer imagines a world consisting of an enormous and possibly infinite number of hexagonal rooms. The bookshelves in each room hold uniform volumes that must, its inhabitants intuit, contain every possible permutation of letters in their alphabet.

Illustration of connected gold hexagons that expand endlessly into the horizon.
In Borges’ imaginary, endlessly expansive library of content, finding something meaningful is like finding a needle in a haystack.
aire images/Moment via Getty Images
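Just how large is that haystack? Borges specifies the books' format in the story – 410 pages per volume, 40 lines per page, roughly 80 characters per line, drawn from an alphabet of 25 orthographic symbols – and a rough calculation from those figures (a sketch of the arithmetic, not anything stated in the story itself) shows why no search could ever succeed.

```python
from math import log10

symbols = 25                      # Borges' alphabet: 22 letters, comma, period, space
chars_per_book = 410 * 40 * 80    # pages x lines x characters per line = 1,312,000
digits = chars_per_book * log10(symbols)
print(f"distinct books: 25**{chars_per_book:,}, i.e. about 10^{digits:,.0f}")
# Roughly 10^1,834,097 possible books - versus an estimated 10^80 atoms
# in the observable universe.
```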

Initially, this realization sparks joy: By definition, there must exist books that detail the future of humanity and the meaning of life.

The inhabitants search for such books, only to discover that the vast majority contain nothing but meaningless combinations of letters. The truth is out there – but so is every conceivable falsehood. And all of it is embedded in an inconceivably vast amount of gibberish.

Even after centuries of searching, only a few meaningful fragments are found. And even then, there is no way to determine whether these coherent texts are truths or lies. Hope turns into despair.

Will the web become so polluted that only the wealthy can afford accurate and reliable information? Or will an infinite number of chatbots produce so much tainted verbiage that finding accurate information online becomes like searching for a needle in a haystack?

The internet is often described as one of humanity’s great achievements. But like any other resource, it’s important to give serious thought to how it is maintained and managed – lest we end up confronting the dystopian vision imagined by Borges.

Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

A better understanding of what people do on their devices is key to digital well-being

theconversation.com – Rinanda Shaleha, Doctoral student in the College of Health and Human Development, Penn State – 2024-11-19 07:21:00

What you do on your screens matters as much as how much time you spend on them.
Klaus Vedfelt/DigitalVision via Getty Images

Rinanda Shaleha, Penn State

In an era where digital devices are everywhere, the term “screen time” has become a buzz phrase in discussions about technology’s impact on people’s lives. Parents are concerned about their children’s screen habits. But what if this entire approach to screen time is fundamentally flawed?

While researchers have made advances in measuring screen use, a detailed critique of the research in 2020 revealed major issues in how screen time is conceptualized, measured and studied. I study how digital technology affects human cognition and emotions. My ongoing research with cognitive psychologist Nelson Roque builds on that critique’s findings.

We categorized existing screen-time measures, mapping them to attributes such as whether they are duration-based or context-specific, and we are studying how those measures relate to health outcomes such as anxiety, stress, depression, loneliness, mood and sleep quality. The aim is a clearer framework for understanding screen time. We believe that grouping all digital activities together misses how different types of screen use affect people.

By applying this framework, researchers can better identify which digital activities are beneficial or potentially harmful, allowing people to adopt more intentional screen habits that support well-being and reduce negative mental and emotional health effects.

Screen time isn’t one thing

Screen time, at first glance, seems easy to understand: It’s simply the time spent on devices with screens such as smartphones, tablets, laptops and TVs. But this basic definition hides the variety within people’s digital activities. To truly understand screen time’s impact, you need to look closer at specific digital activities and how each affects cognitive function and mental health.

In our research, we divide screen time into four broad categories: educational use, work-related use, social interaction and entertainment.

For education, activities like online classes and reading articles can improve cognitive skills like problem-solving and critical thinking. Digital tools like mobile apps can support learning by boosting motivation, self-regulation and self-control.

But these tools also pose challenges, such as distracting learners and contributing to poorer recall compared with traditional learning methods. For young users, screen-based learning may even have negative impacts on development and their social environment.

Screen time for work, like writing reports or attending virtual meetings, is a central part of modern life. It can improve productivity and enable remote work. However, prolonged screen exposure and multitasking may also lead to stress, anxiety and cognitive fatigue.

Screen use for social connection helps people interact with others through video chats, social media or online communities. These interactions can promote social connectedness and even improve health outcomes such as decreased depressive symptoms and improved glycemic control for people with chronic conditions. But passive screen use, like endless social media scrolling, can lead to negative experiences such as cyberbullying, social comparison and loneliness, especially for teens.

Screen use for entertainment provides relaxation and stress relief. Mindfulness apps or meditation tools, for example, can reduce anxiety and improve emotional regulation. Creative digital activities, like graphic design and music production, can reduce stress and improve mental health. However, too much screen use may reduce well-being by limiting physical activity and time for other rewarding pursuits.
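To make the framework concrete, here is a minimal sketch of what a categorized screen-time log could look like. The activity names, attribute fields and category labels are hypothetical examples built around the four categories described above, not the actual measures used in my research.

```python
from dataclasses import dataclass

# Hypothetical coding scheme: each logged activity gets one of the four
# categories above plus attributes the article highlights (active vs. passive
# use, shared vs. solitary). None of these labels come from the study itself.
@dataclass
class ScreenActivity:
    name: str
    minutes: int
    category: str     # "education" | "work" | "social" | "entertainment"
    active: bool      # actively engaging vs. passively consuming
    shared: bool      # used with other people or alone

day_log = [
    ScreenActivity("online lecture",        45, "education",     active=True,  shared=False),
    ScreenActivity("video meeting",         60, "work",          active=True,  shared=True),
    ScreenActivity("scrolling social feed", 50, "social",        active=False, shared=False),
    ScreenActivity("movie with family",     90, "entertainment", active=False, shared=True),
]

# Summarize by category instead of collapsing everything into one number.
totals = {}
for activity in day_log:
    totals[activity.category] = totals.get(activity.category, 0) + activity.minutes
print(totals)
print(sum(a.minutes for a in day_log if not a.active), "passive minutes")
```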

Context matters

Screen time affects people differently based on factors like mood, social setting, age and family environment. Your emotions before and during screen use can shape your experience. Positive interactions can lift your mood, while loneliness might deepen with certain online activities. For example, we found that differences in age and stress levels affect how readily people become distracted on their devices. Alerts and other on-screen changes pull at users’ attention, making it harder to focus on tasks.

The social context of screen use also matters. Watching a movie with family can strengthen bonds, while using screens alone can increase feelings of isolation, especially when it replaces face-to-face interactions.

Family influence plays a role, too. For example, parents’ screen habits affect their children’s screen behavior, and structured parental involvement can help reduce excessive use. Together with mindful social contexts, that kind of involvement supports healthier digital interactions.

A woman, man and child look at a tablet screen in a living room
Shared screen time with family and friends can boost well-being.
kate_sept2004/E+ via Getty Images

Consistency and nuance

Technology now lets researchers track screen use accurately, but simply counting hours doesn’t give us the full picture. Even when we measure specific activities, like social media or gaming, studies don’t often capture engagement level or intent. For example, someone might use social media to stay informed or to procrastinate.

Studies on screen time often vary in how they define and categorize it. Some focus on total screen exposure without differentiating between activities. Others examine specific types of use but may not account for the content or context. This lack of consistency in defining screen time makes it hard to compare studies or generalize findings.

Understanding screen use requires a more nuanced approach than tracking the amount of time people spend on their screens. Recognizing the different effects of specific digital activities and distinguishing between active and passive use are crucial steps. Using standardized definitions and combining quantitative data with personal insights would provide a fuller picture. Researchers can also study how screen use affects people over time.

For policymakers, this means developing guidelines that move beyond one-size-fits-all limits by focusing on recommendations suited to specific activities and individual needs. For the rest of us, this awareness encourages a balanced digital diet that blends enriching online and offline activities for better well-being.

Rinanda Shaleha, Doctoral student in the College of Health and Human Development, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

From using plant rinds to high-tech materials, bike helmets have improved significantly over the past 2 centuries

theconversation.com – Jud Ready, Principal Research Engineer in Materials Science and Engineering, Georgia Institute of Technology – 2024-11-18 07:27:00

Modern bike helmets are made through complex materials engineering.
Johner Images via Getty Images

Jud Ready, Georgia Institute of Technology

Imagine – it’s the mid-1800s, and you’re riding your high-wheeled, penny-farthing bicycle down a dusty road. Sure, it may have some bumps, but if you lose your balance, you’re landing on a relatively soft dirt road. But as the years go by, these roads are replaced with pavement, cobblestones, bricks or wooden slats. All these materials are much harder and still quite bumpy.

As paved roads grew more common across the U.S. and Europe, bicyclists started to suffer gruesome skull fractures and other serious head injuries during falls.

As head injuries became more common, people started seeking out head protection. But the first bike helmets were very different from the helmets of today.

I’m a materials engineer who teaches a course at Georgia Tech about materials science and engineering in sports. The class covers many topics, but particularly helmets, as they’re used in many different sports, including cycling, and the materials they’re made of play an important role in how they work. Over the decades, people have used a wide variety of materials to protect their heads while biking, and companies continue to develop new and innovative materials.

In the beginning, there was the pith helmet.

Pith helmets

The first head protection concept introduced to the biking world was a hat made from pith, which is the spongy rind found in the stem of sola plants, Aeschynomene aspera. Pith helmet craftsmen would press the pith into sheets and laminate it across dome-shaped molds to form a helmet shape. Then, they’d cover the hats in canvas as a form of weatherproofing.

A hat made of a brown material with a flat rim.
Hats made out of pith were used by militaries as well as for head protection while biking.
Auckland Museum, CC BY-SA

Pith helmets were far from what we would consider a helmet today, but they persisted until the early 20th century, when bicycle-racing clubs emerged. Since pith helmets offered little to no ventilation, the racers began to use halo-shaped leather helmets. These had better airflow and were more comfortable, although they weren’t much better at protecting the head.

A bike helmet made from leather strips connected into a dome on the head of a mannequin.
Leather strip bike helmets were made in the 1930s.
Museums Victoria, CC BY-SA

Leather halo helmets

The initial concept for the halo helmet used a simple leather strip wrapped around the forehead. But these halo helmets quickly evolved, as riders arranged additional strips longitudinally from front to back. They wrapped the leather bands in wool.

For better head protection, the helmet makers then started adding more layers of leather strips to increase the helmet’s thickness. Eventually, they added different materials such as cotton, foam and other textiles into these leather layers for better protection.

While these had better airflow than the pith hats, the leather “hairnet” helmets continued to offer very little protection during a fall on a paved surface. And, like pith, the leather helmets degraded when exposed to sweat and rain.

Despite these drawbacks, leather strip helmets dominated the market for several decades as cycling continued to evolve throughout the 20th century.

Then, in the 1970s, a nonprofit dedicated to testing motorcycle helmets called the Snell Foundation released new standards for bike helmets. They set their standards so high that only lightweight motorcycle helmets could pass, which most bicyclists refused to wear.

New materials and new helmets

The motorcycle equipment manufacturing company Bell Motorsports responded to the new standards by releasing the Bell Biker in 1975. This helmet used expanded polystyrene, or EPS. EPS is the same foam used to manufacture styrofoam coolers. It’s lightweight and absorbs energy well.

Constructing the Bell Biker involved spraying EPS into a dome-shaped mold. The manufacturers used small pellets of a very hard plastic – polycarbonate, or PC – to mold an outer shell and then adhere it to the outside of the EPS.

Mottled white foam
Expanded polystyrene, or EPS, is a foam used in styrofoam coolers as well as the core of bike helmets.
Tiia Monto/Wikimedia Commons, CC BY-SA

Unlike the pith and leather helmets, this design was lightweight, load bearing, impact absorbing and well ventilated. The PC shell provided a smooth surface so that during a fall, the helmet would skid along the pavement instead of getting jerked around and caught, which could cause abrupt head rotation and lead to concussions and other head and neck injuries.

Over the next two decades, as cycling became more popular, helmet manufacturers tried to strike the perfect balance between lightweight and ventilated helmets, while simultaneously providing impact protection.

In order to decrease weight, a company called Giro Sport Design created an all-EPS helmet with a thin Lycra fabric cover instead of a hard PC shell. This design eliminated the weight of the PC shell and improved ventilation.

In 1989, a company called Pro Tec introduced a helmet with a nylon mesh infused in the EPS foam core. The nylon mesh dramatically increased the helmet’s structural support without the added weight of the PC shell.

A man standing by a bike wearing a green helmet that's made of a thin material with a long tail.
Many racing cyclists found teardrop-style helmets to be more aerodynamic.
Bongarts/Getty Images, CC BY-NC-ND

Meanwhile, as cycling became more competitive, many riders and manufacturers started designing more aerodynamic helmets using the existing materials. A revolutionary teardrop style helmet debuted in the 1984 Olympics.

Now, even casual biking enthusiasts will don teardrop helmets.

Helmets on the market today

Helmet makers continue to innovate. Today, many commercial brands use a hard polyethylene terephthalate, or PET, shell around the EPS foam in place of a PC shell to increase the helmet’s protection and lifespan, while decreasing cost.

Meanwhile, some brands still use PC shells. Instead of gluing them to the EPS foam, the shell serves as the mold itself, with the EPS expanding to fit inside it. Manufacturing helmets this way eliminates several process steps, as well as any gaps between the foam and shell. This process makes the helmet both stronger and cheaper to manufacture.

As helmets evolve to provide more protection with still lighter weight, materials called copolymers, such as acrylonitrile-butadiene-styrene, are replacing PC and PET shell materials.

Materials that are easier and cheaper to manufacture, such as expanded polyurethane and expanded polypropylene, are also starting to replace the ubiquitous EPS core.

Just as the leather and pith helmets would look strange to a cyclist today, a century from now, bike helmets could be made with entirely new and innovative materials.

Jud Ready, Principal Research Engineer in Materials Science and Engineering, Georgia Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

