
The Conversation

How AI deciphers neural signals to help a man with ALS speak

Published on theconversation.com – Nicholas Card, Postdoctoral Fellow of Neuroscience and Neuroengineering, University of California, Davis – 2024-08-22 07:17:14

Casey Harrell, who has ALS, works with a brain-computer interface to turn his thoughts into words.

Nicholas Card

Nicholas Card, University of California, Davis

Brain-computer interfaces are a groundbreaking technology that can help paralyzed people regain functions they’ve lost, like moving a hand. These devices record signals from the brain and decipher the user’s intended action, bypassing damaged or degraded nerves that would normally transmit those brain signals to control muscles.


Since 2006, demonstrations of brain-computer interfaces in humans have primarily focused on restoring arm and hand movements by enabling people to control computer cursors or robotic arms. Recently, researchers have begun developing speech brain-computer interfaces to restore communication for people who cannot speak.

As the user attempts to talk, these brain-computer interfaces record the person’s unique brain signals associated with attempted muscle movements for speaking and then translate them into words. These words can then be displayed as text on a screen or spoken aloud using text-to-speech software.

I’m a researcher in the Neuroprosthetics Lab at the University of California, Davis, which is part of the BrainGate2 clinical trial. My colleagues and I recently demonstrated a speech brain-computer interface that deciphers the attempted speech of a man with ALS, or amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease. The interface converts neural signals into text with over 97% accuracy. Key to our system is a set of artificial intelligence language models – artificial neural networks that help interpret natural ones.

Recording brain signals

The first step in our speech brain-computer interface is recording brain signals. There are several sources of brain signals, some of which require surgery to record. Surgically implanted recording devices can capture high-quality brain signals because they are placed closer to neurons, resulting in stronger signals with less interference. These neural recording devices include grids of electrodes placed on the brain’s surface or electrodes implanted directly into brain tissue.


In our study, we used electrode arrays surgically placed in the speech motor cortex, the part of the brain that controls muscles related to speech, of the participant, Casey Harrell. We recorded neural activity from 256 electrodes as Harrell attempted to speak.

A small square device with an array of spikes on the bottom and a bundle of wires on the top

An array of 64 electrodes that embed into brain tissue records neural signals.

UC Davis

Decoding brain signals

The next step is relating the complex brain signals to the words the user is trying to say.

One approach is to map neural activity patterns directly to spoken words. This method requires recording brain signals corresponding to each word multiple times to identify the average relationship between neural activity and specific words. While this strategy works well for small vocabularies, as demonstrated in a 2021 study with a 50-word vocabulary, it becomes impractical for larger ones. Imagine asking the brain-computer interface user to try to say every word in the dictionary multiple times – it could take months, and it still wouldn’t work for new words.


Instead, we use an alternative strategy: mapping brain signals to phonemes, the basic units of sound that make up words. In English, there are 39 phonemes, such as ch, er, oo, pl and sh, that can be combined to form any word. We can measure the neural activity associated with every phoneme multiple times just by asking the participant to read a few sentences aloud. By accurately mapping neural activity to phonemes, we can assemble them into any English word, even ones the system wasn’t explicitly trained with.
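As a sketch of this idea, decoded phonemes can be assembled into words with a pronunciation dictionary. The tiny lexicon and phoneme symbols below are invented stand-ins for a full English dictionary (such as CMUdict); the study’s actual decoder is far more sophisticated:

```python
# Sketch: assembling words from decoded phonemes with a pronunciation
# dictionary. The two-entry lexicon is a stand-in for a full lexicon,
# and the phoneme symbols are illustrative.
LEXICON = {
    ("hh", "ah", "l", "ow"): "hello",
    ("w", "er", "l", "d"): "world",
}

def phonemes_to_words(phonemes):
    """Greedily match runs of phonemes against lexicon entries."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):  # try longest match first
            word = LEXICON.get(tuple(phonemes[i:j]))
            if word:
                words.append(word)
                i = j
                break
        else:
            i += 1  # skip an unrecognized phoneme
    return words

print(phonemes_to_words(["hh", "ah", "l", "ow", "w", "er", "l", "d"]))
```

Because the mapping is at the phoneme level, any word whose pronunciation is in the lexicon can be produced, even if the user never spoke it during training.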

To map brain signals to phonemes, we use advanced machine learning models. These models are particularly well-suited for this task due to their ability to find patterns in large amounts of complex data that would be impossible for humans to discern. Think of these models as super-smart listeners that can pick out important information from noisy brain signals, much like you might focus on a conversation in a crowded room. Using these models, we were able to decipher phoneme sequences during attempted speech with over 90% accuracy.

The brain-computer interface uses a clone of Casey Harrell’s voice to read aloud the text it deciphers from his neural activity.

From phonemes to words

Once we have the deciphered phoneme sequences, we need to convert them into words and sentences. This is challenging, especially if the deciphered phoneme sequence isn’t perfectly accurate. To solve this puzzle, we use two complementary types of machine learning language models.

The first is n-gram language models, which predict which word is most likely to follow a set of n words. We trained a 5-gram, or five-word, language model on millions of sentences to predict the likelihood of a word based on the previous four words, capturing local context and common phrases. For example, after “I am very good,” it would rate a natural continuation as far more likely than “potato.” Using this model, we convert our phoneme sequences into the 100 most likely word sequences, each with an associated probability.
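A toy version of this ranking step, using a bigram model for brevity (the actual system used a 5-gram model) and made-up probabilities:

```python
import math

# Sketch: ranking candidate word sequences with an n-gram model.
# Log-probabilities below are invented for illustration.
BIGRAM_LOGPROB = {
    ("i", "am"): math.log(0.20),
    ("am", "very"): math.log(0.10),
    ("very", "good"): math.log(0.15),
    ("very", "potato"): math.log(1e-6),
}
FLOOR = math.log(1e-9)  # unseen bigrams get a tiny floor probability

def score(words):
    """Sum bigram log-probabilities over consecutive word pairs."""
    return sum(BIGRAM_LOGPROB.get(p, FLOOR) for p in zip(words, words[1:]))

candidates = [["i", "am", "very", "good"], ["i", "am", "very", "potato"]]
best = max(candidates, key=score)
print(best)  # the model prefers the common phrase
```

In the real system, the same kind of scoring is applied to phoneme-derived candidates, keeping the 100 highest-scoring word sequences.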


The second is large language models, which power AI chatbots and also predict which words most likely follow others. We use large language models to refine our choices. These models, trained on vast amounts of diverse text, have a broader understanding of language structure and meaning. They help us determine which of our 100 candidate sentences makes the most sense in a wider context.

By carefully balancing probabilities from the n-gram model, the large language model and our initial phoneme predictions, we can make a highly educated guess about what the brain-computer interface user is trying to say. This multistep process allows us to handle the uncertainties in phoneme decoding and produce coherent, contextually appropriate sentences.
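This balancing act can be sketched as a weighted sum of log-probabilities. The weights, score names and numbers below are illustrative placeholders, not the study’s values:

```python
# Sketch: log-linear rescoring of candidate sentences. In practice each
# score would come from the phoneme decoder, the n-gram model and a
# large language model; here they are hand-written for illustration.
def combined_score(cand, w_phoneme=1.0, w_ngram=0.5, w_llm=0.5):
    return (w_phoneme * cand["phoneme_logprob"]
            + w_ngram * cand["ngram_logprob"]
            + w_llm * cand["llm_logprob"])

candidates = [
    {"text": "I am very good", "phoneme_logprob": -2.1,
     "ngram_logprob": -3.0, "llm_logprob": -2.5},
    {"text": "I am very wood", "phoneme_logprob": -2.0,
     "ngram_logprob": -7.5, "llm_logprob": -8.0},
]
best = max(candidates, key=combined_score)
print(best["text"])
```

Note how the second candidate fits the raw phoneme evidence slightly better, but the language models overrule it because the sentence makes less sense.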

Diagram showing a man, his brain, wires and a computer screen

How the UC Davis speech brain-computer interface deciphers neural activity and turns it into words.

UC Davis Health

Real-world benefits

In practice, this speech decoding strategy has been remarkably successful. We’ve enabled Casey Harrell, a man with ALS, to “speak” with over 97% accuracy using just his thoughts. This breakthrough allows him to easily converse with his family and friends for the first time in years, all in the comfort of his own home.


Speech brain-computer interfaces represent a significant step forward in restoring communication. As we continue to refine these devices, they hold the promise of giving a voice to those who have lost the ability to speak, reconnecting them with their loved ones and the world around them.

However, challenges remain, such as making the technology more accessible, portable and durable over years of use. Despite these hurdles, speech brain-computer interfaces are a powerful example of how science and technology can come together to solve complex problems and dramatically improve people’s lives.

Nicholas Card, Postdoctoral Fellow of Neuroscience and Neuroengineering, University of California, Davis

This article is republished from The Conversation under a Creative Commons license. Read the original article.



How researchers measure wildfire smoke exposure doesn’t capture long-term health effects − and hides racial disparities

Published on theconversation.com – Joan Casey, Associate Professor of Environmental and Occupational Health Sciences, University of Washington – 2024-09-16 07:26:33

Fine particulate matter from wildfires can cause long-term health harms.
Gary Hershorn/Getty Images

Joan Casey, University of Washington and Rachel Morello-Frosch, University of California, Berkeley

Kids born in 2020 worldwide will experience twice the number of wildfires during their lifetimes compared with those born in 1960. In California and other western states, frequent wildfires have become as much a part of summer and fall as popsicles and Halloween candy.

Wildfires produce fine particulate matter, or PM₂.₅, that chokes the air and penetrates deep into the lungs. Researchers know that short-term exposure to wildfire PM₂.₅ increases acute care visits for cardiorespiratory problems such as asthma. However, the long-term effects of repeated exposure to wildfire PM₂.₅ on chronic health conditions are unclear.


One reason is that scientists have not decided how best to measure this type of intermittent yet ongoing exposure. Environmental epidemiologists and health scientists like us usually summarize long-term exposure to total PM₂.₅ – which comes from power plants, industry and transportation – as average exposure over a year. This might not make sense when measuring exposure to wildfire smoke. Unlike traffic-related air pollution, for example, levels of wildfire PM₂.₅ vary a lot throughout the year.

To improve health and equity research, our team has developed five metrics that better capture long-term exposure to wildfire PM₂.₅.

Measuring fluctuating wildfire PM₂.₅

To understand why current measurements of wildfire PM₂.₅ aren’t adequately capturing an individual’s long-term exposure, we need to delve into the concept of averages.

Say the mean level of PMโ‚‚.โ‚… over a year was 1 microgram per cubic meter. A person could experience that exposure as 1 microgram per cubic meter every day for 365 days, or as 365 micrograms per cubic meter on a single day.


While these two scenarios result in the same average exposure over a year, they might have very different biological effects. The body might be able to fend off damage from exposure to 1 microgram per cubic meter each day, but be overwhelmed by a huge, single dose of 365 micrograms per cubic meter.
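A few lines of arithmetic make the point: the two exposure patterns below have the same annual mean, yet one consists entirely of a single extreme day:

```python
# Two exposure patterns with identical annual means (micrograms/m^3).
steady = [1.0] * 365                  # 1 microgram/m^3 every day
single_spike = [0.0] * 364 + [365.0]  # nothing all year, one extreme day

mean_steady = sum(steady) / 365
mean_spike = sum(single_spike) / 365
print(mean_steady, mean_spike)  # both annual means equal 1.0
```

An annual average sees these two years as identical; a metric that records peak intensity does not.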

For perspective, in 2022, Americans experienced an average total PM₂.₅ exposure of 7.8 micrograms per cubic meter. Researchers estimated that in the 35 states that experience wildfires, these wildfires added on average just 0.69 micrograms per cubic meter to total PM₂.₅ each year from 2016 to 2020. This perspective misses the mark, however.

For example, a census tract close to the 2018 Camp Fire experienced an average wildfire PM₂.₅ concentration of 1.2 micrograms per cubic meter between 2006 and 2020. But the actual fire had a peak exposure of 310 micrograms per cubic meter – the world’s highest level that day.

Orange haze blanketing a city skyline, small silhouette of a person taking a photo by a streetlight
Classic estimates of average PM₂.₅ levels miss the peak exposure of wildfire smoke.
Angela Weiss/AFP via Getty Images

Scientists want to better understand what such extreme exposures mean for long-term human health. Prior studies on long-term wildfire PM₂.₅ exposure focused mostly on people living close to a large fire, following up years later to check on their health status. This misses any new exposures that took place between baseline and follow-up.

More recent studies have tracked long-term exposure to wildfire PM₂.₅ that changes over time. For example, researchers reported associations between wildfire PM₂.₅ exposure over two years and risk of death from cancer and any other cause in Brazil. This work again relied on long-term average exposure and did not directly capture extreme exposures from intermittent wildfire events. Because the study did not evaluate it, we do not know whether a specific pattern of long-term wildfire PM₂.₅ exposure was worse for health.


Most days, people experience no wildfire PM₂.₅ exposure. Some days, wildfire exposure is intense. As of now, we do not know whether a few very bad days or many slightly bad days are riskier for health.

A new framework

How can we get more realistic estimates that capture the huge peaks in PM₂.₅ levels that people are exposed to during wildfires?

When thinking about the wildfire PM₂.₅ that people experience, exposure scientists – researchers who study contact between humans and harmful agents in the environment – consider frequency, duration and intensity. These interlocking factors describe the body’s true exposure during a wildfire event.

In our recent study, our team proposed a framework for measuring long-term exposure to wildfire PM₂.₅ that incorporates the frequency, duration and intensity of wildfire events. We applied air quality models to California wildfire data from 2006 to 2020, deriving new metrics that capture a range of exposure types.

Five heat maps of California paired with bar graphs of exposures over time
The researchers proposed five ways to measure long-term wildfire PM₂.₅ exposure.
Casey et al. 2024/PNAS, CC BY-NC-ND

One metric we devised is the number of days with any wildfire PM₂.₅ exposure over a long-term period, which can identify even the smallest exposures. Another metric is the average concentration of wildfire PM₂.₅ during the peak of smoke levels over a long period, which highlights locations that experience the most extreme exposures. We also developed several other metrics that may be more useful, depending on what effects are being studied.
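As a rough sketch, both of these metrics can be computed from a year of daily readings. The daily values below are invented, and the fixed seven-day "peak" window is a simplification of the metric defined in the paper:

```python
# Sketch of two of the proposed metrics, from a year of hypothetical
# daily wildfire PM2.5 readings (micrograms/m^3): one week of smoke,
# zero exposure the rest of the year.
daily = [0.0] * 358 + [5.0, 40.0, 310.0, 120.0, 15.0, 2.0, 1.0]

# Metric 1: number of days with any wildfire PM2.5 exposure.
exposed_days = sum(1 for x in daily if x > 0)

# Metric 2: average concentration during the highest-smoke week.
peak_week = max(
    (daily[i:i + 7] for i in range(len(daily) - 6)),
    key=sum,
)
peak_week_mean = sum(peak_week) / 7

print(exposed_days, round(peak_week_mean, 1))
```

On this series an annual average would report well under 2 micrograms per cubic meter, while the peak-week metric surfaces the extreme event.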

Interestingly, these metrics were strongly correlated with one another, suggesting that places with many days of at least some wildfire PM₂.₅ also had the highest levels overall. Although this can make it difficult to decide between different exposure patterns, the suitability of each metric depends in part on what health effects we are investigating.

Environmental injustice

We also assessed whether certain racial and ethnic groups experienced higher-than-average wildfire PM₂.₅ exposure and found that different groups faced the most exposure depending on the year.

Consider 2018 and 2020, two major wildfire years in California. The most exposed census tracts, by all metrics, were composed primarily of non-Hispanic white individuals in 2018 and Hispanic individuals in 2020. This makes sense, since non-Hispanic white people constitute about 41.6% and Hispanic people 36.4% of California’s population.

To understand whether other groups faced excess wildfire PM₂.₅ exposure, we used relative comparisons. This means we compared the true wildfire PM₂.₅ exposure experienced by each racial and ethnic group with what we would have expected if they were exposed to the average.
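In code, this relative comparison reduces to a ratio of observed to expected exposure for each group. The group names and concentrations below are hypothetical, not the study’s data:

```python
# Sketch: each group's observed wildfire PM2.5 exposure divided by the
# exposure expected under the population-wide average. All values are
# invented for illustration.
average_exposure = 0.50  # micrograms/m^3, hypothetical statewide mean

observed = {"group_a": 0.84, "group_b": 0.45}  # hypothetical groups

ratios = {g: obs / average_exposure for g, obs in observed.items()}
print(ratios)  # > 1 means disproportionate exposure, < 1 means less
```

A ratio above 1 indicates a group is more exposed than the average would predict; the study found ratios of 1.68 for Indigenous communities, 1.13 for non-Hispanic white Californians and 1.09 for multiracial Californians.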


We found that Indigenous communities had the most disproportionate exposure, experiencing 1.68 times more PM₂.₅ than expected. In comparison, non-Hispanic white Californians were 1.13 times more exposed to PM₂.₅ than expected, and multiracial Californians 1.09 times more exposed than expected.

Person holding child, sitting by two other people; in the foreground, a child approaches the camera
Better metrics for long-term PM₂.₅ exposure can help researchers better understand who’s most vulnerable to wildfire smoke.
Eric Thayer/Stringer via Getty Images News

Rural tribal lands had the highest mean wildfire PM₂.₅ concentrations – 0.83 micrograms per cubic meter – of any census tract in our study. A large portion of Native American people in California live in rural areas, often with higher wildfire risk due to decades of poor forestry management, including legal suppression of cultural burning practices that studies have shown to aid in reducing catastrophic wildfires. Recent state legislation has removed liability risks of cultural burning on Indigenous lands in California.

Understanding the drivers and health effects of high long-term exposure to wildfire PM₂.₅ among Native American and Alaska Native people can help address substantial health disparities between these groups and other Americans.

Joan Casey, Associate Professor of Environmental and Occupational Health Sciences, University of Washington and Rachel Morello-Frosch, Professor of Environmental Science, Policy and Management and of Public Health, University of California, Berkeley

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Genetically modified varieties are coming out of the lab and into homes and gardens

Published on theconversation.com – James W. Satterlee, Postdoctoral Fellow in Plant Genetics, Cold Spring Harbor Laboratory – 2024-09-16 07:26:49

Not every rose has its thorn, thanks to gene editing.
James Satterlee, CC BY-SA

James W. Satterlee, Cold Spring Harbor Laboratory

As any avid gardener will tell you, plants with sharp thorns and prickles can leave you looking like you’ve had a run-in with an angry cat. Wouldn’t it be nice to rid plants of their prickles entirely but keep the tasty fruits and beautiful flowers?

I’m a geneticist who, along with my colleagues, recently discovered the gene that accounts for prickliness across a variety of plants, including roses, eggplants and even some species of grasses. Genetically tailored, smooth-stemmed plants may eventually arrive at a garden center near you.


Acceleration of nature

Plants and other organisms evolve naturally over time. When random changes to their DNA, called mutations, enhance survival, they get passed on to offspring. For thousands of years, plant breeders have taken advantage of these variations to create high-yielding crop varieties.

In 1983, the first genetically modified organisms, or GMOs, appeared in agriculture. Golden rice, engineered to combat vitamin A deficiency, and pest-resistant corn are just a couple of examples of how genetic modification has been used to enhance crop plants.

Two recent developments have changed the landscape further. The advent of gene editing using a technique known as CRISPR has made it possible to modify plant traits more easily and quickly. If the genome of an organism were a book, CRISPR-based gene editing is akin to adding or removing a sentence here or there.

This tool, combined with the increasing ease with which scientists can sequence an organism’s complete collection of DNA โ€“ or genome โ€“ is rapidly accelerating the ability to predictably engineer an organism’s traits.


By identifying a key gene that controls prickles in eggplants, our team was able to use gene editing to mutate the same gene in other prickly species, yielding smooth, prickle-free plants. In addition to eggplants, we got rid of prickles in a desert-adapted wild plant species with edible raisin-like fruits.

Two sets of two photos. First set shows a cluster of prickly fruits on a plant and the harvest of those prickly fruits. Second set shows the same plant with fruits but without prickles and the harvest of those prickle-free fruits.
The desert raisin (Solanum cleistogamum) gets a makeover.
Blaine Fitzgerald, CC BY-SA

We also used a virus to silence the expression of a closely related gene in roses, yielding a rose without thorns.

In natural settings, prickles defend plants against grazing herbivores. But under cultivation, edited plants would be easier to handle – and after harvest, fruit damage would be reduced. It’s worth noting that prickle-free plants still retain other defenses, such as their chemical-laden epidermal hairs, called trichomes, that deter insect pests.

From glowing petunias to purple tomatoes

Today, DNA modification technologies are no longer confined to large-scale agribusiness – they are becoming available directly to consumers.

One approach is to mutate certain genes, like we did with our prickle-free plants. For example, scientists have created a mild-tasting but nutrient-dense mustard green by inactivating the genes responsible for bitterness. Silencing the genes that delay flowering in tomatoes has resulted in compact plants well suited to urban agriculture.


Another modification approach is to permanently transfer genes from one species to another, using recombinant DNA technology to yield what scientists call a transgenic organism.

A photo taken in the dark shows a glowing petunia plant.
The firefly petunia is genetically engineered to glow in the dark.
Ceejayoz, CC BY-SA

At a recent party, I found myself crowded into a darkened bathroom to observe the faint glow of the host’s newly acquired firefly petunia, which contains the genes responsible for the ghost ear mushroom’s bioluminescent glow. Scientists have also modified a pothos houseplant with a gene from rabbits, which allows it to host air-filtering microbes that promote the removal of harmful volatile organic compounds, or VOCs.

A purple tomato is sliced open to reveal purple flesh inside.
The Norfolk purple tomato is colorful to the core.
Norfolk Healthy Produce, CC BY-SA

Consumers can also grow the purple tomato, genetically engineered to contain pigment-producing genes from the snapdragon plant, resulting in antioxidant-rich tomatoes with a dark purple hue.

Risks and rewards

The introduction of genetically modified plants into the consumer market brings with it both exciting opportunities and potential challenges.

With genetically edited plants in the hands of the public, there could be less oversight over what people do with them. For instance, there is a risk of environmental release, which could have unforeseen ecological consequences. Additionally, as the market for these plants expands, the quality of products may become more variable, necessitating new or more vigilant consumer protection laws. Companies could also apply patent rules limiting seed reuse, echoing some of the issues seen in the agricultural sector.

The future of plant genetic technology is bright – in some cases, quite literally. Bioluminescent golf courses, houseplants that emit tailored fragrances or flowers capable of changing their color in response to spray-based treatments are all theoretical possibilities. But as with any powerful technology, careful regulation and oversight will be crucial to ensuring these innovations benefit consumers while minimizing potential risks.

James W. Satterlee, Postdoctoral Fellow in Plant Genetics, Cold Spring Harbor Laboratory


This article is republished from The Conversation under a Creative Commons license. Read the original article.



Will your phone one day let you smell as well as see and hear what’s on the other end of a call?

Published on theconversation.com – Jian Liu, Assistant Professor of Electrical Engineering and Computer Science, University of Tennessee – 2024-09-16 07:27:05

Phones that transmit odors seem like a great idea, but careful what you wish for!

Teo Mahatmana/iStock via Getty Images

Jian Liu, University of Tennessee


Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


Is it possible to make a phone through which we can smell, like we can hear and see? – Muneeba K., age 10, Pakistan


Imagine this: You pick up your phone for a video call with a friend. Not only can you see their face and hear their voice, but you can also smell the cookies they just baked. It sounds like something straight out of science fiction, but could it actually happen?

I’m a computer scientist who studies how machines sense the world.

What phones do now

When you listen to music or talk to someone on your phone, you can hear the sound through the built-in speakers. These speakers convert digital signals into physical vibrations using a tiny component called a diaphragm. Your ears sense those vibrations as sound waves.


Your phone also has a screen that displays images and videos. The screen uses tiny dots known as pixels that consist of three primary colors: red, green and blue. By mixing these colors in different ways, your phone can show you everything from beautiful beach scenes to cute puppies.

Smelling with phones

Now how about the sense of smell? Smells are created by tiny particles called molecules that float through the air and reach your nose. Your nose then sends signals to your brain, which identifies the smell.

So, could your phone send these smell molecules to you? Scientists are working on it. Think about how your phone screen works. It doesn’t have every color in the world stored inside it. Instead, it uses just three colors to create millions of different hues and shades.

How your sense of smell works.

Now imagine something similar for smells. Scientists are developing digital scent technology that uses a small number of different cartridges, each containing a specific scent. Just like how pixels mix three colors to create images, these scent cartridges could mix to create different smells.


Just like images on your phone are made of digital codes that represent combinations of pixels, smells produced by a future phone could be created using digital codes. Each smell could have a specific recipe made up of different amounts of the ingredients in the cartridges.

When you receive a digital scent code, your phone could mix tiny amounts of the different scents from the cartridges to create the desired smell. This mix would then be released through a small vent on the phone, allowing you to smell it. With just a few cartridges, your phone could potentially create a huge variety of smells, much like how red, green and blue pixels can create countless colors.
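One way to picture such a scent code, by analogy with RGB pixel values: a list of intensity numbers, one per cartridge, that the phone would translate into mix fractions. Every cartridge name and number here is invented for illustration:

```python
# Sketch: a "scent code" as intensity values over a few cartridges,
# analogous to RGB values for a pixel. Names and recipes are made up.
CARTRIDGES = ["floral", "citrus", "smoky", "sweet"]

def decode_scent(code):
    """Map a scent code (one value per cartridge, 0-255) to the
    fraction each cartridge contributes to the released mix."""
    total = sum(code)
    return {name: amount / total
            for name, amount in zip(CARTRIDGES, code) if amount}

# A hypothetical "fresh-baked cookies" code: mostly sweet, a bit smoky.
print(decode_scent([0, 10, 40, 205]))
```

Just as (255, 0, 0) always means pure red on any screen, a standardized scent code would make the same smell reproducible on any phone with the same cartridges.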

Researchers and companies are already working on digital odor makers like this.

The challenges to making smell phones

Creating a phone that can produce smells involves several challenges. One is designing a system that can produce thousands of different smells using only a few cartridges. Another is how to control how strong a scent should be and how long a phone should emit it. And phones will also need to sense odors near them and convert those to digital codes so your friends’ phones can send smells to you.


The cartridges should also be easy to refill, and the chemicals in them must be safe to breathe. These hurdles make it a tricky but exciting area of research.

An odiferous future

Even though we’re not there yet, scientists and engineers are working hard to make smell phones a reality. Maybe one day you’ll be able to not only see and hear your friend’s birthday party over the phone, but also smell the candles they blew out!


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Jian Liu, Assistant Professor of Electrical Engineering and Computer Science, University of Tennessee


This article is republished from The Conversation under a Creative Commons license. Read the original article.
