AI was everywhere in 2024’s elections, but deepfakes and misinformation were only part of the picture

Published on theconversation.com – Bruce Schneier, Adjunct Lecturer in Public Policy, Harvard Kennedy School – 2024-12-02 07:37:00

AI played many roles in 2024’s elections.

AP Photo/Paul Vernon

Bruce Schneier, Harvard Kennedy School and Nathan Sanders, Harvard University

It’s been the biggest year for elections in human history: 2024 is a “super-cycle” year in which 3.7 billion eligible voters in 72 countries had the chance to go to the polls. These are also the first AI elections, in which many feared that deepfakes and artificial intelligence-generated misinformation would overwhelm democratic processes. As 2024 draws to a close, it’s instructive to take stock of how democracy did.

In a Pew survey of Americans from earlier this fall, nearly eight times as many respondents expected AI to be used for mostly bad purposes in the 2024 election as those who thought it would be used mostly for good. There are real concerns and risks in using AI in electoral politics, but it definitely has not been all bad.

The dreaded “death of truth” has not materialized – at least, not due to AI. And candidates are eagerly adopting AI in many places where it can be constructive, if used responsibly. But because this all happens inside a campaign, and largely in secret, the public often doesn’t see all the details.

Connecting with voters

One of the most impressive and beneficial uses of AI is language translation, and campaigns have started using it widely. Local governments in Japan and California and prominent politicians, including Indian Prime Minister Narendra Modi and New York City Mayor Eric Adams, used AI to translate meetings and speeches for their diverse constituents.

Even when politicians themselves aren’t speaking through AI, their constituents might be using it to listen to them. Google rolled out free translation services for an additional 110 languages this summer, available to billions of people in real time through their smartphones.
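This kind of translation is no longer exotic or expensive. As a rough illustration of how accessible it has become, here is a minimal sketch using the open-source Hugging Face transformers library; the model choice and example sentence are illustrative assumptions of ours, not a description of any campaign’s actual tooling.

```python
# Minimal sketch: translating one line of a stump speech from English
# to Spanish with an off-the-shelf open-source model. The model name
# is an illustrative assumption, not any campaign's actual stack.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

line = "Thank you all for coming to tonight's town hall meeting."
result = translator(line)
print(result[0]["translation_text"])  # the Spanish rendering of the line
```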

Other candidates used AI’s conversational capabilities to connect with voters. U.S. politicians Asa Hutchinson, Dean Phillips and Francis Suarez deployed chatbots of themselves in their presidential primary campaigns. The fringe candidate Jason Palmer beat Joe Biden in the American Samoan primary, at least partly thanks to using AI-generated emails, texts, audio and video. Pakistan’s former prime minister, Imran Khan, used an AI clone of his voice to deliver speeches from prison.

Perhaps the most effective use of this technology was in Japan, where an obscure and independent Tokyo gubernatorial candidate, Takahiro Anno, used an AI avatar to respond to 8,600 questions from voters and managed to come in fifth among a highly competitive field of 56 candidates.

‘AI Steve’ was an AI persona who ran for office in the 2024 U.K. election.

Nuts and bolts

AIs have been used in political fundraising as well. Companies like Quiller and Tech for Campaigns market AIs to help draft fundraising emails. Other AI systems help candidates target particular donors with personalized messages. It’s notoriously difficult to measure the impact of these kinds of tools, and political consultants are cagey about what really works, but there’s clearly interest in continuing to use these technologies in campaign fundraising.

Polling has been highly mathematical for decades, and pollsters are constantly incorporating new technologies into their processes. Techniques range from using AI to distill voter sentiment from social networking platforms – something known as “social listening” – to creating synthetic voters that can answer tens of thousands of questions. Whether these AI applications will result in more accurate polls and strategic insights for campaigns remains to be seen, but there is promising research motivated by the ever-increasing challenge of reaching real humans with surveys.
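To make the “synthetic voter” idea concrete, here is a minimal sketch of how one might be built on top of a general-purpose language model: a demographic persona in the system prompt, survey questions as user messages. The persona, question and model name are illustrative assumptions, not a description of any pollster’s actual methodology.

```python
# Sketch of a "synthetic voter": a language model prompted with a
# demographic persona and asked survey questions. Everything here --
# persona, question, model choice -- is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are simulating a survey respondent: a 45-year-old suburban "
    "homeowner who follows local news and votes in most elections. "
    "Answer each survey question in character, in one short sentence."
)

def ask_synthetic_voter(question: str) -> str:
    """Pose one survey question to the simulated respondent."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_synthetic_voter("How important is road maintenance to your vote?"))
```

Scaled across thousands of personas, this is how a model can answer tens of thousands of questions cheaply; whether those answers track real voters is exactly the open question the research is probing.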

On the political organizing side, AI assistants are being used for such diverse purposes as helping craft political messages and strategy, generating ads, drafting speeches and helping coordinate canvassing and get-out-the-vote efforts. In Argentina in 2023, both major presidential candidates used AI to develop campaign posters, videos and other materials.

In 2024, similar capabilities were almost certainly used in a variety of elections around the world. In the U.S., for example, a Georgia politician used AI to produce blog posts, campaign images and podcasts. Even standard productivity software suites like those from Adobe, Microsoft and Google now integrate AI features that are unavoidable – and perhaps very useful to campaigns. Other AI systems help advise candidates looking to run for higher office.

Fakes and counterfakes

And there was AI-created misinformation and propaganda, even though it was not as catastrophic as feared. Days before a Slovakian election in 2023, fake audio discussing election manipulation went viral. This kind of thing happened many times in 2024, but it’s unclear if any of it had any real effect.

In the U.S. presidential election, there was a lot of press coverage after a robocall using a fake Joe Biden voice told New Hampshire voters not to vote in the Democratic primary, but it didn’t appear to make much of a difference in that vote. Similarly, AI-generated images from hurricane disaster areas didn’t seem to have much effect, and neither did a stream of AI-faked celebrity endorsements or viral deepfake images and videos misrepresenting candidates’ actions and seemingly designed to prey on their political weaknesses.

Russian intelligence services aimed to use AI to influence U.S. voters, but it’s not clear whether they had much success.

AI also played a role in protecting the information ecosystem. OpenAI used its own AI models to disrupt an Iranian foreign influence operation aimed at sowing division before the U.S. presidential election. While anyone can use AI tools today to generate convincing fake audio, images and text, and that capability is here to stay, tech platforms also use AI to automatically moderate content like hate speech and extremism. This is a positive use case, making content moderation more efficient and sparing humans from having to review the worst offenses, but there’s room for it to become more effective, more transparent and more equitable.
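At its core, AI-assisted moderation is a triage step: score each post with a classifier and send only the high-risk items to human reviewers. Here is a minimal sketch using an open toxicity model from Hugging Face; the model choice and threshold are illustrative assumptions, not any platform’s real pipeline.

```python
# Sketch of AI-assisted moderation triage: score posts with an open
# toxicity classifier and queue only high-scoring ones for human
# review. Model name and threshold are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def needs_human_review(post: str, threshold: float = 0.9) -> bool:
    """Flag a post when the top predicted label is toxic and confident."""
    result = classifier(post)[0]  # e.g. {"label": "toxic", "score": 0.98}
    return result["label"] == "toxic" and result["score"] >= threshold

posts = ["Thanks for organizing the cleanup!", "You people deserve to suffer."]
print([p for p in posts if needs_human_review(p)])
```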

There is potential for AI models to be much more scalable and adaptable to more languages and countries than organizations of human moderators. But the implementations to date on platforms like Meta demonstrate that a lot more work needs to be done to make these systems fair and effective.

One thing that didn’t matter much in 2024 was corporate AI developers’ prohibitions on using their tools for politics. Despite market leader OpenAI’s emphasis on banning political uses and its use of AI to automatically reject a quarter-million requests to generate images of political candidates, the company’s enforcement has been ineffective and actual use is widespread.

The genie is loose

All of these trends – both good and bad – are likely to continue. As AI gets more powerful and capable, it is likely to infiltrate every aspect of politics. This will happen whether the AI’s performance is superhuman or suboptimal, whether it makes mistakes or not, and whether the balance of its use is positive or negative. All it takes is for one party, one campaign, one outside group, or even an individual to see an advantage in automation.

Bruce Schneier, Adjunct Lecturer in Public Policy, Harvard Kennedy School and Nathan Sanders, Affiliate, Berkman Klein Center for Internet & Society, Harvard University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Will AI revolutionize drug development? Researchers explain why it depends on how it’s used

Published on theconversation.com – Duxin Sun, Associate Dean for Research, Charles Walgreen Jr. Professor of Pharmacy and Pharmaceutical Sciences, University of Michigan – 2025-01-03 07:33:00

A high drug failure rate is more than just a pattern recognition problem.

Thom Leach/Science Photo Library via Getty Images

Duxin Sun, University of Michigan and Christian Macedonia, University of Michigan

The potential of using artificial intelligence in drug discovery and development has sparked both excitement and skepticism among scientists, investors and the general public.

“Artificial intelligence is taking over drug development,” claim some companies and researchers. Over the past few years, interest in using AI to design drugs and optimize clinical trials has driven a surge in research and investment. AI-driven platforms like AlphaFold, whose developers shared the 2024 Nobel Prize in chemistry awarded for predicting protein structures and designing new proteins, showcase AI’s potential to accelerate drug development.

AI in drug discovery is “nonsense,” warn some industry veterans. They urge that “AI’s potential to accelerate drug discovery needs a reality check,” as AI-generated drugs have yet to demonstrate an ability to address the 90% failure rate of new drugs in clinical trials. Unlike the success of AI in image analysis, its effect on drug development remains unclear.

Behind every drug in your pharmacy are many, many more that failed.

nortonrsx/iStock via Getty Images Plus

We have been following the use of AI in drug development through our work as, respectively, a pharmaceutical scientist with experience in both academia and the pharmaceutical industry, and a former program manager at the Defense Advanced Research Projects Agency, or DARPA. We argue that AI in drug development is not yet a game-changer, nor is it complete nonsense. AI is not a black box that can turn any idea into gold. Rather, we see it as a tool that, when used wisely and competently, could help address the root causes of drug failure and streamline the process.

Most work using AI in drug development intends to reduce the time and money it takes to bring one drug to market – currently 10 to 15 years and US$1 billion to $2 billion. But can AI truly revolutionize drug development and improve success rates?

AI in drug development

Researchers have applied AI and machine learning to every stage of the drug development process. This includes identifying targets in the body, screening potential candidates, designing drug molecules, predicting toxicity and selecting patients who might respond best to the drugs in clinical trials, among others.

Between 2010 and 2022, 20 AI-focused startups discovered 158 drug candidates, 15 of which advanced to clinical trials. Some of these drug candidates were able to complete preclinical testing in the lab and enter human trials in just 30 months, compared with the typical 3 to 6 years. This accomplishment demonstrates AI’s potential to accelerate drug development.

Drug development is a long and costly process.

On the other hand, while AI platforms may rapidly identify compounds that work on cells in a Petri dish or in animal models, the success of these candidates in clinical trials – where the majority of drug failures occur – remains highly uncertain.

Unlike fields such as image analysis and language processing, which have large, high-quality datasets available for training AI models, AI in drug development is constrained by small, low-quality datasets. It is difficult to generate drug-related datasets on cells, animals or humans for millions to billions of compounds. While AlphaFold is a breakthrough in predicting protein structures, how precise it can be for drug design remains uncertain. Minor changes to a drug’s structure can greatly affect its activity in the body and thus how effective it is in treating disease.
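The structural-sensitivity problem is easy to illustrate with open cheminformatics tools. In the sketch below, which uses the open-source RDKit library, a one-group edit to ibuprofen leaves the two molecules looking nearly identical to a standard fingerprint similarity measure, even though such an edit can substantially change how a compound behaves in the body. The molecule pair is an illustrative assumption of ours.

```python
# Sketch: two molecules that a standard fingerprint similarity measure
# scores as close, despite a functional-group swap (acid -> amide)
# that can substantially change behavior in the body. Uses the
# open-source RDKit library; the molecule pair is illustrative.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

ibuprofen = Chem.MolFromSmiles("CC(C)Cc1ccc(cc1)C(C)C(=O)O")
variant = Chem.MolFromSmiles("CC(C)Cc1ccc(cc1)C(C)C(=O)N")  # acid -> amide

fp1 = AllChem.GetMorganFingerprintAsBitVect(ibuprofen, 2, nBits=2048)
fp2 = AllChem.GetMorganFingerprintAsBitVect(variant, 2, nBits=2048)

print(f"Tanimoto similarity: {DataStructs.TanimotoSimilarity(fp1, fp2):.2f}")
```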

Survivorship bias

Like AI, past innovations in drug development – computer-aided drug design, the Human Genome Project and high-throughput screening – have improved individual steps of the process over the past 40 years, yet drug failure rates haven’t improved.

Most AI researchers can tackle specific tasks in the drug development process when provided with high-quality data and particular questions to answer. But they are often unfamiliar with the full scope of drug development, reducing challenges to pattern recognition problems and the refinement of individual steps of the process. Meanwhile, many scientists with expertise in drug development lack training in AI and machine learning. These communication barriers can hinder scientists from moving beyond the mechanics of current development processes and identifying the root causes of drug failures.

Current approaches to drug development, including those using AI, may have fallen into a survivorship bias trap, overly focusing on less critical aspects of the process while overlooking the major problems that contribute most to failure. This is analogous to repairing damage to the wings of aircraft returning from the battlefields of World War II while neglecting the fatal vulnerabilities in the engines or cockpits of the planes that never made it back. Researchers often focus on improving a drug’s individual properties rather than on the root causes of failure.

While returning planes might survive hits to the wings, those with damage to the engines or cockpits are less likely to make it back.

Martin Grandjean, McGeddon, US Air Force/Wikimedia Commons, CC BY-SA

The current drug development process operates like an assembly line, relying on a checkbox approach with extensive testing at each step of the process. While AI may be able to reduce the time and cost of the lab-based preclinical stages of this assembly line, it is unlikely to boost success rates in the more costly clinical stages that involve testing in people. The persistent 90% failure rate of drugs in clinical trials, despite 40 years of process improvements, underscores this limitation.

Addressing root causes

Drug failures in clinical trials are not solely due to how these studies are designed; selecting the wrong drug candidates to test in clinical trials is also a major factor. New AI-guided strategies could help address both of these challenges.

Currently, three interdependent factors drive most drug failures: dosage, safety and efficacy. Some drugs fail because they’re too toxic, or unsafe. Other drugs fail because they’re deemed ineffective, often because the dose can’t be increased any further without causing harm.

We and our colleagues propose a machine learning system to help select drug candidates by predicting dosage, safety and efficacy based on five previously overlooked features of drugs. Specifically, researchers could use AI models to determine how specifically and potently the drug binds to known and unknown targets, the level of these targets in the body, how concentrated the drug becomes in healthy and diseased tissues, and the drug’s structural properties.
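In code, one way to read this proposal is as a multi-output model: five features in, three predictions (dosage, safety, efficacy) out. The sketch below uses scikit-learn with synthetic data; the feature encoding, the data and the random-forest choice are placeholders of ours, not the authors’ published system.

```python
# Hedged sketch of a five-features-in, three-outcomes-out candidate
# selector. Synthetic data and the random-forest choice are
# placeholders, not the authors' actual method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Five features per candidate, loosely following the article's list:
# [binding specificity, binding potency, target level in the body,
#  drug concentration in diseased vs. healthy tissue, structural score]
X = rng.random((500, 5))
# Three outcomes per candidate: [predicted dose, safety, efficacy]
y = rng.random((500, 3))

model = MultiOutputRegressor(RandomForestRegressor(n_estimators=200, random_state=0))
model.fit(X, y)

candidate = rng.random((1, 5))
dose, safety, efficacy = model.predict(candidate)[0]
print(f"dose={dose:.2f} safety={safety:.2f} efficacy={efficacy:.2f}")
```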

These features of AI-generated drugs could be tested in what we call phase 0+ trials, using ultra-low doses in patients with severe and mild disease. This could help researchers identify optimal drugs while reducing the costs of the current “test-and-see” approach to clinical trials.

While AI alone might not revolutionize drug development, it can help address the root causes of why drugs fail and streamline the lengthy process to approval.

Duxin Sun, Associate Dean for Research, Charles Walgreen Jr. Professor of Pharmacy and Pharmaceutical Sciences, University of Michigan and Christian Macedonia, Adjunct Professor in Pharmaceutical Sciences, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Marketing for cybersecurity products often leaves consumers less secure

Published on theconversation.com – Doug Jacobson, Professor of Electrical and Computer Engineering, Iowa State University – 2025-01-02 07:27:00

Scare tactics might help sell security products, but they can actually make you less safe.

selimaksan/E+ via Getty Images

Doug Jacobson, Iowa State University

You have likely seen multiple ads for products and services designed to make you more secure online. Whether you turn on your television, browse the web or get in-app notifications, you are likely to encounter cybersecurity technology marketed as the ultimate solution and the last line of defense against digital threats.

Cybersecurity is big business, and tech companies often sell their products based on fear. These campaigns are often rooted in what I call the technology vs. user cycle, a feedback loop that creates more problems than it solves.

It works like this: Cybersecurity companies often market their products using tactics that emphasize fear (“Hackers are coming for your data!”), blame (“It’s your fault if something happens!”) and complexity (“Only our advanced solution can protect you”). They perpetuate the idea that users are inherently not savvy enough to manage security independently and that the solution is to adopt the latest product or service.

As a cybersecurity researcher, I find that this approach often has unintended, harmful consequences for people. Rather than feeling empowered, users feel helpless, convinced that cybersecurity is beyond their understanding. They may even develop techno-stress, overwhelmed by the need to keep up with constant updates, new tools and never-ending warnings about threats.

Over time, this can breed apathy and resentment. Users might disengage, believing that no matter what they do, they’ll always be at risk. Ironically, this mindset makes them more vulnerable as they begin to overlook simple, practical steps they could take to protect themselves.

The cycle is self-perpetuating. As users feel less secure, they are more likely to demand new technology to solve their problem, further fueling the very marketing tactics that created their insecurity in the first place. Security providers, in turn, double down on promises of fix-all solutions, reinforcing the narrative that people can’t manage security without their products.

Ironically, as people grow dependent on security products, they can become less secure. They start ignoring basic practices, become apathetic to constant warnings, and put blind trust in solutions they don’t understand.

The result is that users remain stuck in a loop: they depend on technology but lack the confidence to use it safely, creating even more opportunities for people with malicious intent to exploit them.

Cybercrime evolution

I’ve worked in cybersecurity since the early 1990s and witnessed the field evolve over the decades. I’ve seen how adversaries adapt to new defenses and exploit people’s growing reliance on the internet. Two key shifts, in particular, stand out as pivotal moments in the evolution of cybercrime.

The first shift came with the realization that cybercrime could be immensely profitable. As society moved from paper checks and cash transactions to digital payments, criminals found that accessing and stealing money electronically was relatively easy. This transition to digital finance created opportunities for criminals to scale up their attacks, bypassing physical barriers and targeting the systems that underpin modern payment methods.

The second shift emerged over a decade ago as criminals targeted individuals directly rather than just going after businesses or governments. While attacks on companies, ransomware campaigns and critical infrastructure breaches still make headlines, there has also been a rise in attacks on everyday users. Cybercriminals have learned that people are often less prepared and more trusting than organizations, and so present lucrative opportunities.

This combination of digital financial systems and direct user targeting has redefined cybersecurity. It’s no longer just about protecting companies or critical infrastructure; it’s about ensuring the average person isn’t left defenseless. Yet, how cybersecurity technology is marketed and deployed often leaves users confused and feeling helpless.

Asking a knowledgeable friend or colleague is a good way to cut through the fear and confusion around cybersecurity.

Luis Alvarez/DigitalVision via Getty Images

User empowerment

The good news is that you have more power than you think. Cybersecurity doesn’t have to feel like an unsolvable puzzle or a job for experts alone. Instead of letting fear drive you into techno-stress or apathy, you can take matters into your own hands by leaning on trusted sources like community organizations, local libraries and tech-savvy friends.

These trusted voices can simplify the jargon, provide straightforward advice and help you make informed decisions. Imagine a world where you don’t have to rely on faceless companies for help but instead turn to a network of people who genuinely want to see you succeed.

I believe that cybersecurity vendors should offer tools and education that are inclusive, accessible and centered on real user needs. At the same time, people should actively engage with community-driven initiatives, adopt thoughtful security practices and rely on trusted resources for guidance. People feel more confident and capable when they surround themselves with people willing to teach and support them. Users can then adopt technology thoughtfully rather than rushing to buy every new product out of fear or disengaging completely.

This community-based approach goes beyond individual fixes. It creates a culture of shared responsibility and empowerment and helps create a more secure and resilient digital ecosystem.

Resources

Knowing where to find reliable information and support is essential to taking control of your cybersecurity and building your confidence. The following resource list includes trusted organizations, community programs and educational tools that can help you better understand cybersecurity, protect yourself against threats and even connect with local experts or peers for guidance.

Whether you’re looking to secure your devices, learn how to spot scams or stay informed about the latest digital threats, these resources are a great place to begin. Empowerment starts with taking that first step toward understanding your digital world.

Doug Jacobson, Professor of Electrical and Computer Engineering, Iowa State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Wildfire smoke’s health risks can linger long-term in homes that escape burning

Published on theconversation.com – Colleen E. Reid, Associate Professor of Geography, University of Colorado Boulder – 2024-12-23 11:00:00

The Marshall Fire spared some homes, shown here a day later, but smoke had blanketed the area.

Andy Cross/MediaNews Group/The Denver Post via Getty Images

Colleen E. Reid, University of Colorado Boulder

Three years ago, on Dec. 30, 2021, a wind-driven wildfire raced through two communities just outside Boulder, Colorado. In the span of about eight hours, more than 1,000 homes and businesses burned.

The fire left entire blocks in ash, but among them, pockets of houses survived, seemingly untouched. The owners of these homes may have felt relief at first. But fire damage can be deceiving, as many soon discovered.

When wildfires like the Marshall Fire reach the wildland-urban interface, they are burning both vegetation and human-made materials. Vehicles and buildings burn, along with all of the things inside them – electronics, paint, plastics, furniture.

Research shows that when human-made materials like these burn, the chemicals released are different from what is emitted when just vegetation burns. The smoke and ash can blow under doors and around windows in nearby homes, bringing in chemicals that stick to walls and other indoor surfaces and continue off-gassing for weeks to months, particularly in warmer temperatures.

The Marshall Fire swept through several neighborhoods in the towns of Louisville and Superior, Colo. In the homes left standing, residents dealt with lingering smoke and ash.

Michael Ciaglo/Getty Images

In a new study, my colleagues and I looked at the health effects people experienced when they returned to still-standing homes after the Marshall Fire. We also created a checklist for people to use after urban wildfires in the future to help them protect their health and reduce their risks when they return to smoke-damaged homes.

Tests in homes found elevated metals and VOCs

In the days after the Marshall Fire, residents quickly reached out to nearby scientists who study wildfire smoke and health risks at the University of Colorado Boulder and area labs. People wanted to know what was in the ash and causing the lingering smells inside their homes.

In homes we were able to test, my colleagues found elevated levels of metals and PAHs – polycyclic aromatic hydrocarbons – in the ash. We also found elevated VOCs – volatile organic compounds – in airborne samples. Some VOCs, such as dioxins, benzene, formaldehyde and PAHs, can be toxic to humans. Benzene is a known carcinogen.

People wanted to know whether the chemicals that got into their homes that day could harm their health.

At the time, we could find no information about physical health implications for people who have returned to smoke-damaged homes after a wildfire. To look for patterns, we surveyed residents affected by the fire six months, one year and two years afterward.

Symptoms 6 months after the fire

Even six months after the fire, we found that many people were reporting symptoms that aligned with health risks related to smoke and ash from fires.

More than half (55%) of the people who responded to our survey reported that they were experiencing at least one symptom six months after the blaze that they attributed to the Marshall Fire. The most common symptoms reported were itchy or watery eyes (33%), headache (30%), dry cough (27%), sneezing (26%) and sore throat (23%).

All of these symptoms, as well as having a strange taste in one’s mouth, were associated with people reporting that their home smelled differently when they returned to it one week after the fire.

Many survey respondents said that the smells decreased over time. Most attributed the improvement in smell to the passage of time, cleaning surfaces and air ducts, replacing furnace filters, and removing carpet, textiles and furniture from the home. Despite this, many still had symptoms.

We found that living near a large number of burned structures was associated with these health symptoms. For every 10 additional destroyed buildings within 820 feet (250 meters) of a person’s home, there was a 21% increase in headaches and a 26% increase in having a strange taste in their mouth.
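As a rough illustration of how an association like this is estimated, the sketch below fits a logistic regression of symptom reports on nearby destruction counts using simulated data. The data, model form and effect size here are illustrative assumptions of ours; the study’s actual analysis may have used different methods and covariates.

```python
# Illustrative sketch of estimating "X% higher odds per 10 destroyed
# buildings" with logistic regression. Data are simulated; the real
# study's model and covariates may differ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
buildings = rng.poisson(30, n)       # destroyed buildings within 250 m
log_odds = -1.0 + 0.019 * buildings  # ~21% higher odds per +10 buildings
headache = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

fit = sm.Logit(headache, sm.add_constant(buildings)).fit(disp=False)
odds_ratio_per_10 = np.exp(fit.params[1] * 10)
print(f"Estimated odds ratio per 10 extra buildings: {odds_ratio_per_10:.2f}")
```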

These symptoms align with what could be expected from exposure to the chemicals that we found in the ash and measured in the air inside the few smoke-damaged homes that we were able to study in depth.

Lingering symptoms and questions

There are still a lot of unanswered questions about the health risks from smoke- and ash-damaged homes.

For example, we don’t yet know what long-term health implications might look like for people living with lingering gases from wildfire smoke and ash in a home.

We found a significant decline in the number of people reporting symptoms one year after the fire. However, 33% of the people whose homes were affected still reported at least one symptom that they attributed to the fire. About the same percentage also reported at least one symptom two years after the fire.

We also could not measure the level of VOCs or metals that each person was exposed to. But we do think that reports of a change in the smell of a person’s home one week after the fire demonstrate the likely presence of VOCs in the home. That has health implications for people whose homes are exposed to smoke or ash from a wildfire.

Tips to protect yourself after future wildfires

Wildfires are increasingly burning homes and other structures as more people move into the wildland-urban interface, temperatures rise and fire seasons lengthen.

It can be confusing to know what to do if your home is one that survives a wildfire nearby. To help, my colleagues and I put together a website of steps to take if your home is ever infiltrated by smoke or ash from a wildfire.

Here are a few of those steps:

  • When you’re ready to clean your home, start by protecting yourself. Wear at least an N95 (or KN95) mask and gloves, goggles and clothing that covers your skin.

  • Vacuum floors, drapes and furniture. But avoid harsh chemical cleaners because they can react with the chemicals in the ash.

  • Clean your HVAC filter and ducts to avoid spreading ash further. Portable air cleaners with carbon filters can help remove VOCs.

A recent scientific study documents how cleaning all surfaces within a home can reduce VOC reservoirs and lower indoor air concentrations of VOCs.

Given that we don’t know much yet about the health harms of smoke- and ash-damaged homes, it is important to take care in how you clean so you can do the most to protect your health.

Colleen E. Reid, Associate Professor of Geography, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.
