(The Center Square) – Half of all American adults use large language models, two-thirds speak with them, and 1 in 4 say these artificial intelligence tools make moral judgments about right and wrong.
The findings are in a report released Wednesday by the Imagining the Digital Future research center at Elon University. ChatGPT, Gemini, Claude and Copilot are among the most popular brands, and human-like encounters with the tools are becoming more common.
The findings “show the degree to which LLMs are now being used in the way that people have used search engines for decades, including quick access to information, queries about products and services and getting news and information,” the report says. “This has enormous implications for media, marketing and the basic sale of goods and services. It also suggests the profound impact LLMs might have on political and civic processes.”
Lee Rainie directs the research center. The analysis shows many believe the large language model they use most “acts like it understands them at least some of the time. A third say the model they primarily use seems to have a sense of humor,” a release says.
In other findings, the survey documents large proportions of users saying they have had negative experiences, including laziness, cheating, confusion and dependence on the tools in place of their own critical thinking.
“One truly surprising finding,” a release says, is that, “contrary to the picture many have about how LLMs are used, our survey shows that the share of those who use the models for personal purposes significantly outnumber those who use them for work-related activities, even among workers.”
The sampling was conducted for Elon by the SSRS Opinion Panel platform. The survey reached 500 adults ages 18 and older and has a margin of error of plus or minus 5.1 percentage points at the 95% confidence level.
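For readers curious how a 500-person sample yields a margin of error near 5 points, the sketch below applies the textbook confidence-interval formula for a proportion under simple random sampling. It is a back-of-envelope check, not the pollster's own calculation; the slightly larger published figure of 5.1 points is the kind of result weighted panel surveys report once a design effect is included.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a confidence interval for a proportion,
    assuming simple random sampling (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Unweighted approximation for the survey's reported sample size.
n = 500
moe = margin_of_error(n)
print(f"n = {n}: +/- {moe * 100:.1f} percentage points")  # prints ~4.4

# Panel surveys such as SSRS's typically publish a larger figure
# (here, 5.1 points) because weighting the sample adds a design
# effect that inflates the effective variance.
```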