Online child safety laws could help or hurt – 2 pediatricians explain what’s likely to work and what isn’t

theconversation.com – Megan Moreno, Professor of Pediatrics, University of Wisconsin-Madison – 2024-04-04 07:44:57

Society has a complicated relationship with adolescents. We want to protect them as children and yet launch them into adulthood. Adolescents face risks from testing out independence, navigating peer relationships, developing an identity and making mistakes in these processes.

Today’s teens have new areas of risk and opportunity as they navigate the digital world, and this has led to debate over their social media use.

Concern about social media use by 13- to 17-year-olds has led to a patchwork of state initiatives as well as proposed federal legislation. Following the Surgeon General’s Advisory on Social Media and Youth Mental Health, issued on May 23, 2023, the Biden administration convened the Kids Online Health and Safety Task Force.

We are pediatricians who study child online behavior, and we are co-directors of the American Academy of Pediatrics Center of Excellence on Social Media and Youth Mental Health.

As we consider the role of the federal government in regulating teen social media use, we believe it is important to consider how to support adolescents’ drive for independence and social interactions, while protecting them from serious harm or having their identities commodified by powerful technology companies.

Without commenting on any specific piece of active legislation, here are the elements of any potential policy related to children and technology that we believe would be helpful, and those we are concerned could be harmful.

Ideal legislation

Key to any effective online child safety legislation is accountability, so that platforms are designed with the needs of children and adolescents in mind, rather than being driven by engagement and revenue goals.

Default privacy protections are also crucial. Young people often receive – and don’t want – contact from unknown adults. These are typically marketers or random strangers, dubbed “randos.” Teens often teach each other ways to try to be safe, leading to widespread practices that may or may not be effective.

Methods for stopping online child sexual exploitation are not adequate, and elements of proposed legislation could help by limiting who can contact teens outside of their known social circles. Making young users’ accounts private by default would allow them to have online interactions just with friends and communities they seek out. Encouraging collaboration among technology platforms to flag social media users who pose a threat and identify problematic practices is also crucial.

Another helpful element of child online safety legislation is requiring better access to and control over platform settings. One challenge for social media users of all ages is to find and navigate the different available settings. These could be standardized to be readily accessible rather than requiring multiple clicks to find protections buried in an app’s settings. Young people describe wanting more control in their platform use, including the ability to control their content, reset or update their algorithms, and delete data or accounts.

Prohibiting data collection from young people would also help. Behavioral data from digital breadcrumbs reveals a lot about users, which allows technology companies to sort them into categories to predict what they might buy or click on next. This practice is unethical because it can be used to exploit susceptibility to self-harm and low impulse control. It also is incompatible with the adolescent development ideal of exploration – teens are supposed to test things out, push boundaries and change. Teens are harmed when apps and sites nudge them in particular directions in order to profit from them.

Kids’ data privacy is a major concern.

Legislation could also require technology companies to take user-reported problems more seriously. The companies could make clear the process for reporting problematic content or people, and what steps they will take after a report. Anecdotally, we have both heard in our pediatric clinical practices that teens don’t make these reports because they don’t trust that anything will happen in response. There are several possible approaches, including direct reporting to platforms as well as designating an intermediary to receive reports about problematic interactions on platforms.

Legislation could also focus on limiting the impact of misinformation, a problem teens already encounter and one that is likely to grow with generative artificial intelligence. Platforms could be required to watermark AI-generated content. They could also curb the spread of untrustworthy content by identifying super-producers and applying rate limits so that a handful of accounts can’t clog everyone’s feeds.
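
As a rough illustration of the rate-limiting idea, here is a minimal sketch of a token-bucket limiter that caps how many posts a single high-volume account can push into feeds per hour. Everything here – the class name, the 20-posts-per-hour cap, the account ID – is a hypothetical example for illustration, not a description of any platform’s actual system.

```python
import time


class PostRateLimiter:
    """Token-bucket limiter: caps how many posts an account can publish per hour.

    The limit, refill logic and account IDs are illustrative assumptions only.
    """

    def __init__(self, max_posts_per_hour=20):
        self.capacity = float(max_posts_per_hour)
        self.refill_rate = max_posts_per_hour / 3600.0  # tokens added per second
        self.buckets = {}  # account_id -> (tokens_remaining, last_update_time)

    def allow_post(self, account_id):
        now = time.time()
        tokens, last = self.buckets.get(account_id, (self.capacity, now))
        # Refill tokens for the time elapsed since this account's last attempt.
        tokens = min(self.capacity, tokens + (now - last) * self.refill_rate)
        if tokens >= 1.0:
            self.buckets[account_id] = (tokens - 1.0, now)
            return True   # distribute the post normally
        self.buckets[account_id] = (tokens, now)
        return False      # over the limit: the post could be queued or down-ranked


# A "super-producer" account quickly exhausts its hourly allowance.
limiter = PostRateLimiter(max_posts_per_hour=20)
allowed = sum(limiter.allow_post("account_123") for _ in range(50))
print(f"{allowed} of 50 rapid posts allowed")
```

In practice a platform would presumably combine something like this with signals about whether an account repeatedly posts flagged content, but the basic mechanism is the same: past a threshold, additional posts stop being amplified.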

The federal government could also fund additional research. Despite the past decade of prolific social media research, there remains a lack of common data formats, metrics for measuring key concepts, and interventions to promote well-being. Funding for research, including projects that bring together investigators from government, academia and industry, should lead to progress and innovation in this area.

Finally, legislation could help advance age verification. To enhance protections for adolescents, platforms need to know whether a user is a young person. Age assurance and age verification are complicated topics that researchers, policymakers and technology developers are studying to determine how to verify age without compromising privacy. One option could be a new setting that allows a device to indicate to platforms, browsers and apps what age range the user is in, so that they can apply age-appropriate protections for young users.
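
To make that option concrete, here is a minimal sketch of how such a device-level age signal might work: the operating system shares only a coarse age range rather than a birth date, and an app maps it to protective defaults. The names, age bands and settings below are hypothetical assumptions for illustration; no existing operating system or platform API is being described.

```python
from dataclasses import dataclass
from enum import Enum


class AgeRange(Enum):
    """Coarse, privacy-preserving age bands a device might report (hypothetical)."""
    UNDER_13 = "under_13"
    TEEN_13_17 = "13_17"
    ADULT_18_PLUS = "18_plus"


@dataclass
class AccountDefaults:
    private_account: bool
    contact_from_strangers: bool
    personalized_ads: bool


def defaults_for(age_range):
    """Map a device-reported age band to protective default settings."""
    if age_range in (AgeRange.UNDER_13, AgeRange.TEEN_13_17):
        return AccountDefaults(private_account=True,
                               contact_from_strangers=False,
                               personalized_ads=False)
    return AccountDefaults(private_account=False,
                           contact_from_strangers=True,
                           personalized_ads=True)


# A device reporting only "13-17" - not a birth date - yields a private-by-default account.
print(defaults_for(AgeRange.TEEN_13_17))
```

The design point is that the signal is coarse: the platform learns an age band, not an identity, which is one way researchers have proposed reconciling protection with privacy.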

Legislation that would be harmful

Requiring parental permission would be harmful. This restrictive approach would limit access to safe places for many young people and exclude teens who are in unsupportive family settings. It also puts the burden on parents to be gatekeepers for every decision about platform access, which has the potential to increase family conflict.

Shutting down particular social media platforms is also problematic. Singling out individual platforms does not address the systemic, revenue-driven designs and business models that exist throughout the industry.

Thirteen is a common minimum age for social media platforms. Raising that minimum from 13 to 16 would also not be helpful. This proposal is not supported by clear evidence about what age range is best for all teens. It is developmentally appropriate for 13-year-olds to want to connect with their peers online.

Adolescents themselves support the idea that teens should meet developmental milestones before being allowed to use social media, and they acknowledge that individual teens may reach those milestones at different ages. In other words, some teens have no problems at age 13, while others will still struggle with social media at age 17. Age restrictions may also distract from making sure platforms are following guidelines and best practices for users of all ages.

Social media has its upsides and downsides for adolescents.

Limits of legislation

Young people often navigate online interaction with little help from adults. There’s a need for additional approaches to engage, educate and involve parents – and other adults who work with and care for young people – in supporting young people as they enter the online world.

There are numerous other critical areas of work, including bullying, mental health and parent burnout that need separate consideration. These areas are likely to need distinct policy approaches. But policy alone is not likely to solve all of these complex, intertwined issues that intersect in the digital world.

Moving forward

Legislation is a powerful approach to increase safety for young people online. It is important to recognize that teens themselves, as super-users in these spaces, have thoughtful ideas of their own about possible legislative and design elements to enhance their safety.

Families and adults who work with youth also need resources to better support adolescents. The Center of Excellence on Social Media and Youth Mental Health seeks to provide that support through a Q&A portal, ongoing learning opportunities and other resources.

Finally, adults must also be accountable for their own social media and technology use. Many teens report that parents’ social media use distracts from parent-child interaction and that adult social media use negatively affects them. To support young people, adults should model appropriate online behavior – including being able to set their own phones down to be present for the critical, often tumultuous, yet amazing stage of their adolescents’ development.


AI harm is often behind the scenes and builds over time – a legal scholar explains how the law can adapt to respond

theconversation.com – Sylvia Lu, Faculty Fellow and Visiting Assistant Professor of Law, University of Michigan – 2024-11-22 07:25:00

One AI harm is pervasive facial recognition, which erodes privacy.

Sylvia Lu, University of Michigan

As you scroll through your social media feed or let your favorite music app curate the perfect playlist, it may feel like artificial intelligence is improving your life – learning your preferences and serving your needs. But lurking behind this convenient facade is a growing concern: algorithmic harms.

These harms aren’t obvious or immediate. They’re insidious, building over time as AI systems quietly make decisions about your life without you even knowing it. The hidden power of these systems is becoming a significant threat to privacy, equality, autonomy and safety.

AI systems are embedded in nearly every facet of modern life. They suggest what shows and movies you should watch, help employers decide whom to hire, and even inform judges’ sentencing decisions. But what happens when these systems, often seen as neutral, begin making decisions that put certain groups at a disadvantage or, worse, cause real-world harm?

The often-overlooked consequences of AI applications call for regulatory frameworks that can keep pace with this rapidly evolving technology. I study the intersection of law and technology, and I’ve outlined a legal framework to do just that.

Slow burns

One of the most striking aspects of algorithmic harms is that their cumulative impact often flies under the radar. These systems typically don’t directly assault your privacy or autonomy in ways you can easily perceive. They gather vast amounts of data about people — often without their knowledge — and use this data to shape decisions affecting people’s lives.

Sometimes this results in minor inconveniences, like an advertisement that follows you across websites. But when AI systems keep operating without these repeated harms being addressed, the harms can scale up, causing significant cumulative damage across diverse groups of people.

Consider the example of social media algorithms. They are ostensibly designed to promote beneficial social interactions. However, behind their seemingly beneficial facade, they silently track users’ clicks and compile profiles of their political beliefs, professional affiliations and personal lives. The data collected is used in systems that make consequential decisions — whether you are identified as a jaywalking pedestrian, considered for a job or flagged as a suicide risk.

Worse, their addictive design traps teenagers in cycles of overuse, leading to escalating mental health crises, including anxiety, depression and self-harm. By the time you grasp the full scope, it’s too late — your privacy has been breached, your opportunities shaped by biased algorithms, and the safety of the most vulnerable undermined, all without your knowledge.

This is what I call “intangible, cumulative harm”: AI systems operate in the background, but their impacts can be devastating and invisible.

Researcher Kumba Sennaar describes how AI systems perpetuate and exacerbate biases.

Why regulation lags behind

Despite these mounting dangers, legal frameworks worldwide have struggled to keep up. In the United States, a regulatory approach emphasizing innovation has made it difficult to impose strict standards on how these systems are used across multiple contexts.

Courts and regulatory bodies are accustomed to dealing with concrete harms, like physical injury or economic loss, but algorithmic harms are often more subtle, cumulative and hard to detect. The regulations often fail to address the broader effects that AI systems can have over time.

Social media algorithms, for example, can gradually erode users’ mental health, but because these harms build slowly, they are difficult to address within the confines of current legal standards.

Four types of algorithmic harm

Drawing on existing AI and data governance scholarship, I have categorized algorithmic harms into four legal areas: privacy, autonomy, equality and safety. Each of these domains is vulnerable to the subtle yet often unchecked power of AI systems.

The first type of harm is eroding privacy. AI systems collect, process and transfer vast amounts of data, eroding people’s privacy in ways that may not be immediately obvious but have long-term implications. For example, facial recognition systems can track people in public and private spaces, effectively turning mass surveillance into the norm.

The second type of harm is undermining autonomy. AI systems often subtly undermine your ability to make autonomous decisions by manipulating the information you see. Social media platforms use algorithms to show users content that maximizes a third party’s interests, subtly shaping opinions, decisions and behaviors across millions of users.

The third type of harm is diminishing equality. AI systems, while designed to be neutral, often inherit the biases present in their data and algorithms. This reinforces societal inequalities over time. In one infamous case, a facial recognition system used by retail stores to detect shoplifters disproportionately misidentified women and people of color.

The fourth type of harm is impairing safety. AI systems make decisions that affect people’s safety and well-being. When these systems fail, the consequences can be catastrophic. But even when they function as designed, they can still cause harm, such as social media algorithms’ cumulative effects on teenagers’ mental health.

Because these cumulative harms often arise from AI applications protected by trade secret laws, victims have no way to detect or trace the harm. This creates a gap in accountability. When a biased hiring decision or a wrongful arrest is made due to an algorithm, how does the victim know? Without transparency, it’s nearly impossible to hold companies accountable.

This UNESCO video features researchers from around the world explaining the issues around the ethics and regulation of AI.

Closing the accountability gap

Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, firms could be required to obtain opt-in consent before processing people’s data with facial recognition systems, with users able to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology.

As AI systems become more widely used in critical societal functions – from health care to education and employment – the need to regulate harms they can cause becomes more pressing. Without intervention, these invisible harms are likely to continue to accumulate, affecting nearly everyone and disproportionately hitting the most vulnerable.

With generative AI multiplying and exacerbating AI harms, I believe it’s important for policymakers, courts, technology developers and civil society to recognize the legal harms of AI. This requires not just better laws, but a more thoughtful approach to cutting-edge AI technology – one that prioritizes civil rights and justice in the face of rapid technological advancement.

The future of AI holds incredible promise, but without the right legal frameworks, it could also entrench inequality and erode the very civil rights it is, in many cases, designed to enhance.

Sylvia Lu, Faculty Fellow and Visiting Assistant Professor of Law, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Awkwardness can hit in any social situation – here are a philosopher’s 5 strategies to navigate it with grace

theconversation.com – Alexandra Plakias, Associate Professor of Philosophy, Hamilton College – 2024-11-22 07:25:00

‘I don’t even know what to say to that.’

Alexandra Plakias, Hamilton College

The holidays offer many opportunities for awkward moments. Political discussions, of course, hold plenty of potential. But any time opinions differ, where estrangements have caused lingering rifts, or when behaviors veer toward the inappropriate, awkwardness can set in.

Awkwardness is what happens in social interactions when you suddenly find yourself without a script to guide you through. Maybe the situation is new or catches you off guard. Maybe you don’t know what’s expected of you, or you aren’t sure what role you’re playing in the social drama around you. It’s characterized by feelings of self-consciousness, uncertainty and discomfort.

As a philosopher who studies moral psychology, I’m interested in awkwardness because I wanted to understand the ways social discomfort stops people from engaging with difficult topics and challenging conversations. Awkwardness seems to inhibit people, even when their moral values suggest they should speak up. But it has a positive role to play, too – it can alert people to areas where their social norms are lacking or outdated.

People often blame themselves when things take a turn toward the awkward. But awkwardness is really a collective failure – people aren’t awkward, situations are. And they become awkward because you don’t have the resources to navigate your way through tricky social situations.

Awkwardness is often confused with embarrassment, but the two are different in important ways, and so are their remedies. Embarrassment is a response to a personal failing or gaffe, and the right response is to acknowledge it, own it and move on. Because awkwardness is caused by a lack of social guidance, you can try to anticipate and head it off before it happens, or you can respond to it by trying to develop better or clearer social scripts to help you – and others – navigate similar situations in the future.

After researching and writing an entire book on awkwardness, I’ve come to the conclusion that it’s not something we can – or should – avoid altogether. But there are a few strategies people can use to minimize awkwardness and deal with it when it does, inevitably, happen.

1. Know your goals, know your roles

Uncertainty is the oxygen of awkwardness. Before you engage in a potentially awkward or contentious interaction, ask yourself: What do I want to get out of this?

When you’re clear on your goals for the interaction, not only are you better able to perform your role in it, but you’re also giving clearer signals to others, helping them perform their roles in the unfolding social drama.

So, if you’re worried it’ll be awkward when your uncle starts in on his annual political rant, think about what you want the outcome to be. Do you want to convince him he’s wrong? Unlikely to happen. Do you want other family members to feel less anxious? Do you want your own views to be heard?

I’m not suggesting that some forethought will make things go smoothly or guarantee that no one’s feelings will be hurt. But it will help you feel more confident in your ability to navigate toward your desired outcome.

Serving dessert could provide a lifeline to someone looking for a diversion.

2. There’s no ‘I’ in awkward

Awkward situations breed intense self-consciousness. This is both uncomfortable and counterproductive. By focusing on yourself, you’re not attuned to the people around you or the signals they’re sending – signals that could offer you a pathway out of the awkward situation. So make sure you’re paying attention to the other players in the drama, not just your own discomfort.

3. Plan, coordinate and be explicit

People do so much planning in other areas of their lives, yet they expect social interactions to just flow effortlessly. But like a vacation or a hike in the woods, sometimes a conversation goes better when you approach it with a map. Have some go-to topics or questions at hand.

And you don’t have to go it alone. If you’re worried about broaching a sensitive topic, or interacting with a particularly prickly guest, coordinate with a friend or relative.

If you expect to see someone with whom you have an unresolved relationship – an estranged family member, an old friend you ghosted – try to do some prep work in advance. Emails or letters can give people a chance to process reactions without putting them on the spot.

Even having a scripted activity on deck can make things less awkward. It doesn’t have to be anything formal, like a board game. Just keep some tasks available for guests who might otherwise lurk uncomfortably – like shaking up the salad dressing or putting forks on the table.

4. Laugh it off

If, despite your best efforts, awkwardness does strike, offer people a way out – they’ll probably grab it. This doesn’t need to be momentous; it could be a little joke, a small-talk topic, or even – and only if things get very desperate – knocking a spoon off the table to break the silence.

5. Consider the alternatives

These strategies might help you avoid awkwardness. But take a moment to consider whether you really want to. Awkwardness is the result of social uncertainty; it slows things down and curbs your confidence.

In its absence, other emotions can set in. Having things out in the open can be a relief, but it can also lead to anger, sadness and other feelings that might best be saved for another occasion.

So if things are awkward, it’s worth looking around to see what role that awkwardness is playing, and what might take its place if it were gone.

Alexandra Plakias, Associate Professor of Philosophy, Hamilton College

This article is republished from The Conversation under a Creative Commons license. Read the original article.


No need to overload your cranberry sauce with sugar this holiday season − a food scientist explains how to cook with fewer added sweeteners

theconversation.com – Rosemary Trout, Associate Clinical Professor of Culinary Arts & Food Science, Drexel University – 2024-11-22 07:24:00

Fall means cranberry season – and sweet seasonal holiday dishes.

Rosemary Trout, Drexel University

The holidays are full of delicious and indulgent food and drinks. It’s hard to resist dreaming about cookies, specialty cakes, rich meats and super saucy side dishes.

Lots of the healthy raw ingredients used in holiday foods can end up overshadowed by sugar and starch. And while extra sugar may taste good, it’s not necessarily good for your metabolism. Understanding the food and culinary science behind what you’re cooking means you can make a few alterations to a recipe and still have a delicious dish that’s not overloaded with sugar.

If you’re a person living with Type 1 diabetes in particular, the holidays may come with an additional layer of stress and wild blood glucose swings. It’s no time for despair, though – it is the holidays, after all.

Cranberries are one seasonal, tasty fruit that can be modified in recipes to be more Type 1 diabetic-friendly – or friendly to anyone looking for a sweet dish without the extra sugar.

I am a food scientist and a Type 1 diabetic. Understanding food composition, ingredient interactions and metabolism has been a literal lifesaver for me.

Type 1 diabetes defined

Type 1 diabetes is all day, every day, with no breaks during sleep, no holidays or weekends off, no remission and no cure. Type 1 diabetics don’t make insulin, a hormone essential for life that promotes the uptake of glucose, or sugar, into cells. The glucose in your cells then supplies your body with energy at the molecular level.

Consequently, Type 1 diabetics take insulin by injection, or via an insulin pump attached to their bodies, and hope that it works well enough to stabilize blood sugar and metabolism, minimize health complications over time and keep us alive.

Type 1 diabetics mainly consider the type and amount of carbohydrates in foods when figuring out how much insulin to take, but they also need to understand the protein and fat interactions in food to dose, or bolus, properly.
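
For readers unfamiliar with carbohydrate counting, here is a deliberately simplified sketch of the arithmetic behind a mealtime bolus. The insulin-to-carbohydrate ratio, correction factor and glucose numbers are hypothetical placeholders; real values are individualized and set with a clinician, and, as noted above, fat and protein complicate the picture.

```python
def estimate_bolus(carbs_g,
                   insulin_to_carb_ratio=10.0,  # hypothetical: 1 unit covers 10 g of carbs
                   current_glucose=160.0,       # mg/dL, hypothetical reading
                   target_glucose=120.0,        # mg/dL, hypothetical target
                   correction_factor=50.0):     # hypothetical: 1 unit lowers glucose ~50 mg/dL
    """Simplified mealtime dose estimate: a carb dose plus a correction dose."""
    carb_dose = carbs_g / insulin_to_carb_ratio
    correction_dose = max(0.0, (current_glucose - target_glucose) / correction_factor)
    return round(carb_dose + correction_dose, 1)


# A side dish with 30 g of carbohydrates: 30/10 = 3.0 units, plus (160-120)/50 = 0.8 units.
print(estimate_bolus(carbs_g=30))  # 3.8
```

The point of the sketch is simply that every gram of carbohydrate in a dish shows up in this calculation, which is why trimming added sugar from a recipe directly reduces the insulin a Type 1 diabetic has to dose.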

In addition to insulin, Type 1 diabetics don’t make another hormone, amylin, which slows gastric motility. This means food moves more quickly through our digestive tract, and we often feel very hungry. Foods that are high in fat, proteins and fiber can help to stave off hunger for a while.

Cranberries, a seasonal treat

Cranberries are native to North America and grow well in the Northeastern and Midwestern states, where they are in season between late September and December. They’re a staple on holiday tables all over the country.

Cranberries are a classic Thanksgiving side dish, but cranberry sauce tends to contain a lot of sugar.

One cup of whole, raw cranberries contains about 46 calories. They are 87% water, with trace amounts of protein and fat, 12 grams of carbohydrates and just over 4 grams of soluble fiber. Soluble fiber combines well with water, which is good for digestive health and can slow the rise of blood glucose.

Cranberries are high in potassium, which helps with electrolyte balance and cell signaling, as well as other important nutrients such as antioxidants, beta-carotene and vitamin C. They also contain vitamin K, which helps with healthy blood clotting.

Cranberries’ flavor and aroma come from compounds in the fruit such as cinnamates that add cinnamon notes, vanillin for hints of vanilla, benzoates and benzaldehyde, which tastes like almonds.

Cranberries are high in pectin, a soluble fiber that forms a gel and is used as a setting agent in making jams and jellies, which is why they thicken readily with minimal cooking. Their beautiful red jewel-tone color comes from compounds called anthocyanins and proanthocyanidins, which are associated with treating some types of infection.

They also contain phenolics, which are protective compounds produced by the plant. These compounds, which look like rings at the molecular level, interact with proteins in your saliva to produce a dry, astringent sensation that makes your mouth pucker. Similarly, a compound called benzoic acid naturally found in cranberries adds to the fruit’s sourness.

These chemical ingredients make them extremely sour and bitter, and difficult to consume raw. To mitigate these flavors and effects, most cranberry recipes call for lots of sugar.

All that extra sugar can make cranberry dishes hard to consume for Type 1 diabetics, because the sugars cause a rapid rise in blood glucose.

Cranberries without sugar?

Type 1 diabetics – or anyone who wants to reduce the added sugars they’re consuming – can try a few culinary tactics to lower their sugar intake while still enjoying this holiday treat.

Don’t cook your cranberries much longer than it takes for them to pop. You’ll still have a viscous cranberry liquid without the need for as much sugar, since longer cooking concentrates some of the bitter compounds, making them more pronounced in your dish.

Adding spices to your cranberries can enhance the dish’s flavor without extra sugar.

Adding cinnamon, clove, cardamom, nutmeg and other warming spices gives the dish a depth of flavor. Adding heat with a spicy chili pepper can make your cranberry dish more complex while reducing sourness and astringency. Adding salt can reduce the cranberries’ bitterness, so you won’t need lots of sugar.

For a richer flavor and a glossy quality, add butter. Butter also lubricates your mouth, which tends to offset the dish’s natural astringency. Other fats such as heavy cream or coconut oil work, too.

Adding chopped walnuts, almonds or hazelnuts can slow glucose absorption, so your blood glucose may not spike as quickly. Some new types of sweeteners, such as allulose, taste sweet but don’t raise blood sugar, requiring minimal to no insulin. Allulose has GRAS – generally recognized as safe – status in the U.S., but it isn’t approved as an additive in Europe.

This holiday season, you can easily cut the amount of sugar added to your cranberry dishes and still get the health benefits without a blood glucose spike.

Rosemary Trout, Associate Clinical Professor of Culinary Arts & Food Science, Drexel University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
