
We’ve been here before: AI promised humanlike machines – in 1958


Frank Rosenblatt with the Mark I Perceptron, the first artificial neural network computer, unveiled in 1958.
National Museum of the U.S. Navy/Flickr

Danielle Williams, Arts & Sciences at Washington University in St. Louis

A roomsize computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.

The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past – and the reasons for them. While optimism drives progress, it’s worth paying attention to the history.

The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components. Modern-day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.

If the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction the next time around. Familiar modern AI systems work in much the same way: using a prediction-based format, large language models, or LLMs, produce impressive long-form text responses and associate images with text to generate new images from prompts. These systems get better and better as they interact more with users.
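
For readers curious what that error-driven updating looks like, here is a minimal sketch of the classic perceptron learning rule. It is purely illustrative: Rosenblatt's machine implemented the rule in analog hardware, not software, and the function and toy data below are invented for this example.

```python
# A minimal, illustrative sketch of the perceptron learning rule.
# Hypothetical example - the Mark I did this with wires and motors, not Python.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs, where label is +1 or -1."""
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Predict: +1 if the weighted sum crosses the threshold, else -1.
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation >= 0 else -1
            # Wrong answer? Nudge each connection toward the correct label -
            # the same error-driven adjustment the Mark I made in hardware.
            if prediction != label:
                weights = [w + lr * label * x for w, x in zip(weights, features)]
                bias += lr * label
    return weights, bias

# Learn a simple two-category rule: output +1 only when both inputs are +1.
data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print(weights, bias)
```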

A timeline of the history of AI starting in the 1940s.
Danielle J. Williams, CC BY-ND

AI boom and bust

In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that the world would “have a machine with the general intelligence of an average human being” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found.

It quickly became apparent that the AI systems knew nothing about their subject matter. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language – a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the perceived failure of the Perceptron.

However, by 1980, AI was back in business, and the first official AI boom was in full swing. There were new expert systems, AIs designed to solve problems in specific areas of knowledge, that could identify objects and diagnose diseases from observable data. There were programs that could make complex inferences from simple stories, the first driverless car was ready to hit the road, and robots that could read and play music performed for live audiences.

But it wasn’t long before the same problems stifled excitement once again. In 1987, the second AI winter hit. Expert systems were failing because they couldn’t handle novel information.

The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn't lead to an official boom, AI underwent substantial changes. Researchers began tackling the problem of knowledge acquisition with data-driven approaches to machine learning, letting systems learn from examples rather than rely on hand-coded expertise.

This time also marked a return to the neural-network-style perceptron, but this version was far more complex, dynamic and, most importantly, digital. The return to the neural network, along with the invention of the web browser and an increase in computing power, made it easier to collect images, mine for data and distribute datasets for machine learning tasks.

Familiar refrains

Fast forward to today and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term "artificial general intelligence" is used to describe the activities of LLMs such as those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious.

Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, so do some contemporary AI theorists regard today's artificial neural networks. In 2023, Microsoft published a paper saying that "GPT-4's performance is strikingly close to human-level performance."

Executives at big tech companies, including Meta, Google and OpenAI, have set their sights on developing human-level AI.
AP Photo/Eric Risberg

But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest.

For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to idioms, metaphors, rhetorical questions and sarcasm – unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.

Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently say it’s a snowplow 97% of the time.

Lessons to heed

In fact, it turns out that AI is quite easy to fool in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.

The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.

Danielle Williams, Postdoctoral Fellow in Philosophy of Science, Arts & Sciences at Washington University in St. Louis

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI-generated images can exploit how your mind works − here’s why they fool you and how to spot them


theconversation.com – Arryn Robbins, Assistant Professor of Psychology, University of Richmond – 2025-04-11 07:43:00

A beautiful kitchen to scroll past – but check out the clock.
Tiny Homes via Facebook

Arryn Robbins, University of Richmond

I’m more of a scroller than a poster on social media. Like many people, I wind down at the end of the day with a scroll binge, taking in videos of Italian grandmothers making pasta or baby pygmy hippos frolicking.

For a while, my feed was filled with immaculately designed tiny homes, fueling my desire for minimalist paradise. Then, I started seeing AI-generated images; many contained obvious errors such as staircases to nowhere or sinks within sinks. Yet, commenters rarely pointed them out, instead admiring the aesthetic.

These images were clearly AI-generated and didn’t depict reality. Did people just not notice? Not care?

As a cognitive psychologist, I’d guess “yes” and “yes.” My expertise is in how people process and use visual information. I primarily investigate how people look for objects and information visually, from the mundane searches of daily life, such as trying to find a dropped earring, to more critical searches, like those conducted by radiologists or search-and-rescue teams.

With my understanding of how people process images and notice − or don’t notice − detail, it’s not surprising to me that people aren’t tuning in to the fact that many images are AI-generated.

We’ve been here before

The struggle to detect AI-generated images mirrors past detection challenges such as spotting photoshopped images or computer-generated images in movies.

But there’s a key difference: Photo editing and CGI require intentional design by artists, while AI images are generated by algorithms trained on datasets, often without human oversight. The lack of oversight can lead to imperfections or inconsistencies that can feel unnatural, such as the unrealistic physics or lack of consistency between frames that characterize what’s sometimes called “AI slop.”

Despite these differences, studies show people struggle to distinguish real images from synthetic ones, regardless of origin. Even when explicitly asked to identify images as real, synthetic or AI-generated, accuracy hovers near the level of chance, meaning people do only a little better than if they had just guessed.

In everyday interactions, where you aren’t actively scrutinizing images, your ability to detect synthetic content might even be weaker.

Attention shapes what you see, what you miss

Spotting errors in AI images requires noticing small details, but the human visual system isn’t wired for that when you’re casually scrolling. Instead, while online, people take in the gist of what they’re viewing and can overlook subtle inconsistencies.

Visual attention operates like a zoom lens: You scan broadly to get an overview of your environment or phone screen, but fine details require focused effort. Human perceptual systems evolved to quickly assess environments for any threats to survival, with sensitivity to sudden changes − such as a quick-moving predator − sacrificing precision for speed of detection.

This speed-accuracy trade-off allows for rapid, efficient processing, which helped early humans survive in natural settings. But it’s a mismatch with modern tasks such as scrolling through devices, where small mistakes or unusual details in AI-generated images can easily go unnoticed.

People also miss things they aren’t actively paying attention to or looking for. Psychologists call this inattentional blindness: Focusing on one task causes you to overlook other details, even obvious ones. In the famous invisible gorilla study, participants asked to count basketball passes in a video failed to notice someone in a gorilla suit walking through the middle of the scene.

If you’re counting how many passes the people in white make, do you even notice someone walk through in a gorilla suit?

Similarly, when your focus is on the broader content of an AI image, such as a cozy tiny home, you’re less likely to notice subtle distortions. In a way, the sixth finger in an AI image is today’s invisible gorilla − hiding in plain sight because you’re not looking for it.

Efficiency over accuracy in thinking

Our cognitive limitations go beyond visual perception. Human thinking uses two types of processing: fast, intuitive thinking based on mental shortcuts, and slower, analytical thinking that requires effort. When scrolling, our fast system likely dominates, leading us to accept images at face value.

Adding to this issue is the tendency to seek information that confirms your beliefs or reject information that goes against them. This means AI-generated images are more likely to slip by you when they align with your expectations or worldviews. If an AI-generated image of a basketball player making an impossible shot jibes with a fan’s excitement, they might accept it, even if something feels exaggerated.

While not a big deal for tiny home aesthetics, these issues become concerning when AI-generated images may be used to influence public opinion. For example, research shows that people tend to assume images are relevant to accompanying text. Even when the images provide no actual evidence, they make people more likely to accept the text’s claims as true.

Misleading real or generated images can make false claims seem more believable and even cause people to misremember real events. AI-generated images have the power to shape opinions and spread misinformation in ways that are difficult to counter.

Beating the machine

While AI gets better at detecting AI, humans need tools to do the same. Here’s how:

  1. Trust your gut. If something feels off, it probably is. Your brain expertly recognizes objects and faces, even under varying conditions. Perhaps you’ve experienced what psychologists call the uncanny valley and felt unease with certain humanoid faces. This experience shows people can detect anomalies, even when they can’t fully explain what’s wrong.
  2. Scan for clues. AI struggles with certain elements: hands, text, reflections, lighting inconsistencies and unnatural textures. If an image seems suspicious, take a closer look.
  3. Think critically. Sometimes, AI generates photorealistic images with impossible scenarios. If you see a political figure casually surprising baristas or a celebrity eating concrete, ask yourself: Does this make sense? If not, it’s probably fake.
  4. Check the source. Is the poster a real person? Reverse image search can help trace a picture's origin. If the metadata is missing, it might be generated by AI; one way to peek at an image's metadata is sketched after this list.
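
As one concrete, hypothetical illustration of the metadata check in tip 4, here is a short Python sketch using the Pillow library to look for EXIF data. The filename is made up, and note the caveat: many platforms strip metadata on upload, so its absence is a weak hint, not proof of AI generation.

```python
# A minimal sketch of inspecting an image's EXIF metadata with Pillow.
# Caveat: social platforms often strip EXIF on upload, so missing metadata
# is a weak signal, not proof, that an image was AI-generated.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path):
    exif = Image.open(path).getexif()
    if not exif:
        return "No EXIF metadata found (generated, or stripped on upload)."
    # Map numeric tag IDs to readable names like 'Make', 'Model', 'DateTime'.
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

print(exif_summary("suspicious_photo.jpg"))  # hypothetical filename
```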

AI-generated images are becoming harder to spot. During scrolling, the brain processes visuals quickly, not critically, making it easy to miss details that reveal a fake. As technology advances, slow down, look closer and think critically.

Arryn Robbins, Assistant Professor of Psychology, University of Richmond

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Social media before bedtime wreaks havoc on our sleep − a sleep researcher explains why screens alone aren’t the main culprit


theconversation.com – Brian N. Chin, Assistant Professor of Psychology, Trinity College – 2025-04-08 07:49:00

Social media use before bedtime can be stimulating in ways that screen time alone is not.
Adam Hester/Tetra Images via Getty Images

Brian N. Chin, Trinity College

“Avoid screens before bed” is one of the most common pieces of sleep advice. But what if the real problem isn’t screen time − it’s the way we use social media at night?

Sleep deprivation is one of the most widespread yet overlooked public health issues, especially among young adults and adolescents.

Despite needing eight to 10 hours of sleep, most adolescents fall short, while nearly two-thirds of young adults regularly get less than the recommended seven to nine hours.

Poor sleep isn’t just about feeling tired − it’s linked to worsened mental health, emotion regulation, memory, academic performance and even increased risk for chronic illness and early mortality.

At the same time, social media is nearly universal among young adults, with 84% using at least one platform daily. While research has long focused on screen time as the culprit for poor sleep, growing evidence suggests that how often people check social media − and how emotionally engaged they are − matters even more than how long they spend online.

As a social psychologist and sleep researcher, I study how social behaviors, including social media habits, affect sleep and well-being. Sleep isn’t just an individual behavior; it’s shaped by our social environments and relationships.

And one of the most common yet underestimated factors shaping modern sleep? How we engage with social media before bed.

Emotional investment in social media

Beyond simply measuring time spent on social media, researchers have started looking at how emotionally connected people feel to their social media use.

Some studies suggest that the way people emotionally engage with social media may have a greater impact on sleep quality than the total time they spend online.

In a 2024 study of 830 young adults, my colleagues and I examined how different types of social media engagement predicted sleep problems. We found that frequent social media visits and emotional investment were stronger predictors of poor sleep than total screen time. Additionally, presleep cognitive arousal and social comparison played a key role in linking social media engagement to sleep disruption, suggesting that social media’s effects on sleep extend beyond simple screen exposure.

I believe these findings suggest that cutting screen time alone may not be enough − reducing how often people check social media and how emotionally connected they feel to it may be more effective in promoting healthier sleep habits.

How social media disrupts sleep

If you’ve ever struggled to fall asleep after scrolling through social media, it’s not just the screen keeping you awake. While blue light can delay melatonin production, my team’s research and that of others suggests that the way people interact with social media may play an even bigger role in sleep disruption.

Here are some of the biggest ways social media interferes with your sleep:

  • Presleep arousal: Doomscrolling and emotionally charged content on social media keep your brain in a state of heightened alertness, making it harder to relax and fall asleep. Whether it's political debates, distressing news or even exciting personal updates, emotionally stimulating content can trigger increased cognitive and physiological arousal that delays sleep onset.

  • Social comparison: Viewing idealized social media posts before bed can lead to upward social comparison, increasing stress and making it harder to sleep. People tend to compare themselves to highly curated versions of others’ lives − vacations, fitness progress, career milestones − which can lead to feelings of inadequacy and anxiety that disrupt sleep.

  • Habitual checking: Social media use after lights out is a strong predictor of poor sleep, as checking notifications and scrolling before bed can quickly become an automatic habit. Studies have shown that nighttime-specific social media use, especially after lights are out, is linked to shorter sleep duration, later bedtimes and lower sleep quality. This pattern reflects bedtime procrastination, where people delay sleep despite knowing it would be better for their health and well-being.

  • Fear of missing out, or FOMO: The urge to stay connected also keeps many people scrolling long past their intended bedtime, making sleep feel secondary to staying updated. Research shows that higher FOMO levels are linked to more frequent nighttime social media use and poorer sleep quality. The anticipation of new messages, posts or updates can create a sense of social pressure to stay online and reinforce the habit of delaying sleep.

Taken together, these factors make social media more than just a passive distraction − it becomes an active barrier to restful sleep. In other words, that late-night scroll isn’t harmless − it’s quietly rewiring your sleep and well-being.

How to use social media without sleep disruption

You don’t need to quit social media, but restructuring how you engage with it at night could help. Research suggests that small behavioral changes to your bedtime routine can make a significant difference in sleep quality. I suggest trying these practical, evidence-backed strategies for improving your sleep:

  • Give your brain time to wind down: Avoid emotionally charged content 30 to 60 minutes before bed to help your mind relax and prepare for sleep.

  • Create separation between social media and sleep: Set your phone to “Do Not Disturb” or leave it outside the bedroom to avoid the temptation of late-night checking.

  • Reduce mindless scrolling: If you catch yourself endlessly refreshing, take a small, mindful pause and ask yourself: “Do I actually want to be on this app right now?”

A brief moment of awareness can help break the habit loop.

Brian N. Chin, Assistant Professor of Psychology, Trinity College

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Providing farmworkers with health insurance is worth it for their employers − new research


theconversation.com – John Lowrey, Assistant Professor of Supply Chain and Health Sciences, Northeastern University – 2025-04-08 07:48:00

Farmworkers at Del Bosque Farms pick and pack melons on a mobile platform in Firebaugh, Calif., in July 2021.
AP Photo/Terry Chea

John Lowrey, Northeastern University; Timothy Richards, Arizona State University, and Zachariah Rutledge, Michigan State University

Agricultural employers who provide farmworkers with health insurance earn higher profits, even after accounting for the cost of that coverage. In addition, farmworkers who get health insurance through their employers are more productive and earn more money than those who do not.

These are the key findings from our study published in the March 2025 issue of the American Journal of Agricultural Economics.

To conduct this research, we crunched over three decades of data from the Labor Department's National Agricultural Workers Survey. We focused on California, the nation's largest producer of fruits, nuts and other labor-intensive agricultural products, from 1989 to 2022.

We determined that if 20% more farmworkers got health insurance coverage, they would have earned $23,063 a year in 2022, up from $22,482 if they did not. Their employers, meanwhile, would have earned $7,303 in net profits per worker annually in this same scenario, versus $6,598.

Why it matters

Roughly half of California’s agricultural employers are facing labor shortages at a time when the average age of U.S. farmworkers is also rising.

Some of them, including grape producers, are responding by investing more heavily in labor-saving equipment, which helps reduce the need for seasonal manual labor. However, automated harvesting isn’t yet a viable or affordable option for labor-intensive specialty crops such as melons and strawberries.

Despite labor shortages, agricultural employers may be reluctant to increase total compensation for farmworkers. They may also be wary of providing additional benefits such as health insurance for two main reasons.

First, seasonal workers are, by definition, transient, meaning that the employer who provides coverage may not necessarily be the same one who benefits from a healthier worker. Second, coverage is an upfront cost, while any payoff comes later and may never materialize if the worker moves on.

Most U.S. farmworkers are immigrants from Mexico or Central America. Roughly 42% are immigrants who are in the U.S. without legal authorization, down from 55% in the early 2000s.

As the share of farmworkers who are unauthorized immigrants has declined, the share who are U.S. citizens – including those born here – has grown and now stands at about 39%.

The low wages farmworkers earn offer little incentive for more U.S. citizens and permanent residents to take these jobs. These jobs might become more attractive if employers offered health care coverage to protect the health of the worker and their household.

Farmworkers who lack legal authorization to be in the U.S. are not eligible for private health insurance policies, and many can’t enroll in Medicaid, a government-run health insurance program that’s primarily for low-income Americans and people with disabilities. Regardless, some employers do take steps to help them gain access to health care services. As of 2025, a large share of farmworkers remain uninsured, including many citizens and immigrants with legal status.

Limited access to health care is an unfortunate reality for farmworkers, whose jobs are physically demanding and dangerous. In addition, farmworkers are paid at or near the minimum wage and are constantly searching for their next employment opportunity. This uncertainty causes high levels of stress, which can contribute to chronic health issues such as hypertension.

What still isn’t known

It is hard to estimate the effect of employer-provided health insurance on workers and employers, since labor market outcomes are a result of highly complex interactions.

For example, wages, productivity and how long someone keeps their job are highly interdependent variables determined by the interaction between what workers seek and what employers offer. And wages do not always reflect a worker’s skills and abilities, as some people are more willing to accept a job with low pay if their compensation includes good benefits such as health insurance.

The Research Brief is a short take about interesting academic work.

John Lowrey, Assistant Professor of Supply Chain and Health Sciences, Northeastern University; Timothy Richards, Professor of Agribusiness, Arizona State University, and Zachariah Rutledge, Assistant Professor of Agricultural, Food and Resource Economics, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
