The Conversation

African elephants address one another with name-like calls − similar to humans

Published on theconversation.com – Mickey Pardo, Postdoctoral Fellow in Fish, Wildlife and Conservation Biology, Colorado State University – 2024-06-11 12:47:10
Elephants have close social bonds, which may have led to the evolution of name-like calls.
Michael Pardo

Mickey Pardo, Colorado State University

What's in a name? People use unique names to address each other, but we're one of only a handful of animal species known to do so, along with bottlenose dolphins. Finding more animals with names and investigating how they use them can improve scientists' understanding of both other animals and ourselves.

As elephant researchers who have observed free-ranging elephants for years, my colleagues and I get to know wild elephants as individuals, and we make up names for them that help us remember who is who. The elephants in question live fully in the wild and are, of course, unaware of the epithets we apply to them.

But in a new study published in Nature Ecology and Evolution, we found evidence that elephants have their own names that they use to address each other. This research places elephants among the very small number of species known to address one another in this way, and it has implications for scientists' understanding of animal intelligence and the evolutionary origins of language.

Finding evidence for name-like calls

My colleagues and I had long suspected that elephants might be able to address one another with name-like calls, but no researchers had tested that idea. To explore this question, we followed elephants across the Kenyan savanna, recording their vocalizations and noting, whenever possible, who made each call and whom the call was addressed to.

When most people think of elephant calls, they imagine loud trumpets. But really, most elephant calls are deep, thrumming sounds known as rumbles that are partially below the range of human hearing. We thought that if elephants have names, they most likely say them in rumbles, so we focused on these calls in our analysis.

Elephant rumbles have a deep, sonorous sound.
Michael Pardo
We reasoned that if rumbles contain something like a name, then we should be able to identify whom a call is intended for based purely on the call's properties. To determine whether this was the case, we trained a machine learning model to identify the recipient of each call.

We fed the model a set of numbers describing the sound properties of each call and told it which elephant each call was addressed to. Based on this information, the model tried to learn patterns in the calls associated with the identity of the recipient. Then, we asked the model to predict the recipient for a separate sample of calls. We used a total of 437 calls from 99 individual callers to train the model.
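The pipeline described above — acoustic feature vectors in, predicted recipient out, judged against chance — can be sketched in miniature. This is not the authors' actual model: the data below are synthetic, and the nearest-centroid classifier is a deliberately simple stand-in for the machine learning method used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each call is a vector of acoustic features
# (pitch, duration, spectral measures, ...), labeled with the identity
# of the elephant the call was addressed to.
N_RECIPIENTS, CALLS_EACH, N_FEATURES = 5, 20, 8

# Simulate recipient-specific structure: calls to the same recipient
# cluster around a shared feature profile, plus noise.
profiles = rng.normal(0, 1, (N_RECIPIENTS, N_FEATURES))
X = np.vstack([p + rng.normal(0, 0.5, (CALLS_EACH, N_FEATURES)) for p in profiles])
y = np.repeat(np.arange(N_RECIPIENTS), CALLS_EACH)

def predict_recipient(train_X, train_y, call):
    """Nearest-centroid classifier: guess the recipient whose average
    call profile is closest to this call's features."""
    centroids = np.array([train_X[train_y == r].mean(axis=0)
                          for r in np.unique(train_y)])
    return int(np.argmin(np.linalg.norm(centroids - call, axis=1)))

# Leave-one-out evaluation: hold out each call, train on the rest.
correct = sum(
    predict_recipient(np.delete(X, i, axis=0), np.delete(y, i), X[i]) == y[i]
    for i in range(len(X))
)
accuracy = correct / len(X)
chance = 1 / N_RECIPIENTS
print(f"accuracy {accuracy:.2f} vs chance {chance:.2f}")
```

With a real dataset, the feature vectors would come from acoustic measurements of recorded rumbles, and the study's headline comparison is exactly the last line: model accuracy versus random guessing.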

Part of the reason we needed to use machine learning for this analysis is because rumbles convey multiple messages at once, including the identity, age and sex of the caller, emotional state and behavioral context. Names are likely only one small component within these calls. A computer algorithm is often better than the human ear at detecting such complex and subtle patterns.

We didn't expect elephants to use names in every call, but we had no way of knowing ahead of time which calls might contain a name. So, we included all the rumbles where we thought they might use names at least some of the time in this analysis.

The model successfully identified the recipient for 27.5% of these calls – significantly better than what it would have achieved by randomly guessing. This result indicated that some rumbles contained information that allowed the model to identify the intended recipient of the call.

But this result alone wasn't enough evidence to conclude that the rumbles contained names. For example, the model might have picked up on the unique voice patterns of the caller and guessed who the recipient was based on whom the caller tended to address the most.

In our next analysis, we found that calls from the same caller to the same recipient were significantly more similar, on average, than calls from the same caller to different recipients. This meant that the calls really were specific to individual recipients, like a name.
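That comparison — within-recipient call similarity versus between-recipient similarity for the same caller — can be illustrated with a toy calculation. The numbers are synthetic, and this average-distance comparison is a simplified stand-in for the study's actual statistical test.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data for one caller: feature vectors for calls addressed
# to recipient A and recipient B. If calls carry a name-like component,
# same-recipient calls should be more similar (smaller average pairwise
# distance) than calls to different recipients.
calls_to_a = rng.normal(0.0, 0.4, (12, 6))   # cluster near one profile
calls_to_b = rng.normal(1.5, 0.4, (12, 6))   # cluster near another

def mean_pairwise_dist(a, b):
    """Average Euclidean distance between every call in a and every call in b."""
    diffs = a[:, None, :] - b[None, :, :]
    return float(np.linalg.norm(diffs, axis=-1).mean())

within = (mean_pairwise_dist(calls_to_a, calls_to_a)
          + mean_pairwise_dist(calls_to_b, calls_to_b)) / 2
between = mean_pairwise_dist(calls_to_a, calls_to_b)

print(f"within-recipient {within:.2f} < between-recipient {between:.2f}")
```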

Next, we wanted to determine whether elephants could perceive and respond to their names. To figure that out, we played 17 elephants a recording of a call that was originally addressed to them that we assumed contained their name. Then, on a separate day, we played them a recording of the same caller addressing someone else.

We played calls to the elephants in our sample, and some elephants called back.

The elephants vocalized and approached the source of the sound more readily when the call was one originally addressed to them. On average, they approached the speaker 128 seconds sooner, vocalized 87 seconds sooner and produced 2.3 times more vocalizations in response to a call that was intended for them. That result told us that elephants can determine whether a call was meant for them just by hearing the call out of context.

Names without imitation

Elephants are not the only animals with name-like calls. Bottlenose dolphins and some parrots address other individuals by imitating the signature call of the addressee, which is a unique “call sign” that dolphins and parrots usually use to announce their own identity.

This system of naming via imitation is a little different from the way names and other words typically work in human language. While we do occasionally name things by imitating the sounds that they make, such as “cuckoo” and “zipper,” most of our words are arbitrary. They have no inherent acoustic connection to the thing they refer to.

Arbitrary words are part of what allows us to communicate about such a wide range of topics, including objects and ideas that don't make any sound.

Intriguingly, we found that elephant calls addressed to a particular recipient were no more similar to the recipient's calls than to the calls of other individuals. This finding suggested that like humans, but unlike other animals, elephants may address one another without just imitating the addressee's calls.

Two elephants, an adult and a juvenile, stand together in a desert landscape.
Elephants' use of name-like calls underscores their intelligence.
Michael Pardo

What's next

We're still not sure exactly where the elephant names are located within a call or how to tease them apart from all of the other information conveyed in a rumble.

Next, we want to figure out how to isolate the names for specific individuals. Achieving that will allow us to address a range of other questions, such as whether different callers use the same name to address the same recipient, how elephants acquire their names, and even whether they ever talk about others in their absence.

Name-like calls in elephants could potentially tell researchers something about how human language evolved.

Most mammals, including our closest primate relatives, produce only a fixed set of vocalizations that are essentially preprogrammed into their brain at birth. But language depends on being able to learn new words.

So, before our ancestors could develop a full-fledged language, they needed to evolve the ability to learn new vocalizations. Dolphins, parrots and elephants have all independently evolved this capacity, and they all use it to address one another by name.

Maybe our ancestors originally evolved the ability to learn new vocalizations in order to learn names for each other, and then later co-opted this ability to learn a wider range of words.

Our findings also underscore how incredibly complex elephants are. Using arbitrary sounds to name other individuals implies a capacity for abstract thought, as it involves using sound as a symbol to represent another elephant.

The fact that elephants need to name each other in the first place highlights the importance of their many, distinct social bonds.

Learning about the elephant mind and its similarities to ours may also increase humans' appreciation for elephants at a time when conflict with humans is one of the biggest threats to wild elephant survival.

Mickey Pardo, Postdoctoral Fellow in Fish, Wildlife and Conservation Biology, Colorado State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Federal funding for major science agencies is at a 25-year low

Published on theconversation.com – Chris Impey, Distinguished Professor of Astronomy, University of Arizona – 2024-06-28 07:19:14
Support for science has traditionally been bipartisan, but fights over spending have affected research funding.
AP Photo/J. Scott Applewhite

Chris Impey, University of Arizona

Federal funding for science is usually immune from political gridlock and polarization in Congress. But federal science funding is slated to drop for 2025.

Science research dollars are considered to be discretionary, which means the funding has to be approved by Congress every year. That puts them in a different budget category from larger entitlement programs like Medicare and Social Security, which are generally considered untouchable by politicians of both parties.

Federal investment in scientific research encompasses everything from large telescopes supported by the National Science Foundation to NASA satellites studying climate change, programs studying the use and governance of artificial intelligence at the National Institute of Standards and Technology, and research on Alzheimer's disease funded by the National Institutes of Health.

Studies show that increasing federal research spending boosts productivity and economic competitiveness.

I'm an astronomer and also a senior university administrator. As an administrator, I've been involved in lobbying for research funding as associate dean of the College of Science at the University of Arizona, and in encouraging government investment in astronomy as a vice president of the American Astronomical Society. I've seen the importance of this kind of funding as a researcher who has had federal grants for 30 years, and as a senior academic who helps my colleagues write grants to support their valuable work.

Bipartisan support

Federal funding for many programs is characterized by political polarization, meaning that partisanship and ideological divisions between the two main political parties can lead to gridlock. Science is usually a rare exception to this problem.

The public shows strong bipartisan support for federal investment in scientific research, and Congress has generally followed suit, passing bills in 2024 with bipartisan backing in April and June.

The House passed these bills, and after reconciliation with language from the Senate, they resulted in final bills to direct US$460 billion in government spending.

However, policy documents produced by Congress reveal a partisan split in how Democratic and Republican lawmakers reference scientific research.

Congressional committees for both sides are citing more scientific papers, but there is only a 5% overlap in the papers they cite. That means that the two parties are using different evidence to make their funding decisions, rather than working from a scientific consensus. Committees under Democratic control were almost twice as likely to cite technical papers as panels led by Republicans, and they were more likely to cite papers that other scientists considered important.

Ideally, all the best ideas for scientific research would receive federal funds. But limited support for scientific research in the United States means that for individual scientists, getting funding is a highly competitive process.

At the National Science Foundation, only 1 in 4 proposals are accepted. Success rates for funding through the National Institutes of Health are even lower, with 1 in 5 proposals getting accepted. This low success rate means that the agencies have to reject many proposals that are rated excellent by the merit review process.

Scientists are often reluctant to publicly advocate for their programs, in part because they feel disconnected from the policymaking and appropriations process. Their academic training doesn't equip them to communicate effectively with legislators and policy experts.

Budgets are down

Research received steady funding for the past few decades, but this year Congress reduced appropriations for science at many top government agencies.

The National Science Foundation budget is down 8%, which led agency leaders to warn Congress that the country may lose its ability to attract and train a scientific workforce.

The cut to the NSF is particularly disappointing since Congress promised it an extra $81 billion over five years when the CHIPS and Science Act passed in 2022. A deal to limit government spending in exchange for suspending the debt ceiling made the law's goals hard to achieve.

NASA's science budget is down 6%, and the budget for the National Institutes of Health, whose research aims to prevent disease and improve public health, is down 1%. Only the Department of Energy's Office of Science got a bump, a modest 2%.

As a result, the major science agencies are nearing a 25-year low for their funding levels, as a share of U.S. gross domestic product.

Feeling the squeeze

Investment in research and development by the business sector is strongly increasing. In 1990, it was slightly higher than federal investment, but by 2020 it was nearly four times higher.

The distinction is important because business investment tends to focus on later stage and applied research, while federal funding goes to pure and exploratory research that can have enormous downstream benefits, such as for quantum computing and fusion power.

There are several causes of the science funding squeeze. Congressional intentions to increase funding levels, as with the CHIPS and Science Act, and the earlier COMPETES Act in 2007, have been derailed by fights over the debt limit and threats of government shutdowns.

The CHIPS act aimed to spur investment and job creation in semiconductor manufacturing, while the COMPETES Act aimed to increase U.S. competitiveness in a wide range of high-tech industries.

The CHIPS and Science act aims to stimulate semiconductor production in the U.S. and fund research.

The budget caps for fiscal years 2024 and 2025 remove any possibility for growth. The budget caps were designed to rein in federal spending, but they are a very blunt tool. Also, nondefense discretionary spending is only 15% of all federal spending. Discretionary spending is up for a vote every year, while mandatory spending is dictated by prior laws.

Entitlement programs like Medicare, Medicaid and Social Security are mandatory forms of spending. Taken together, they are three times larger than the amount available for discretionary spending, so science has to fight over a small fraction of the overall budget pie.

Within that 15% slice, scientific research competes with K-12 education, public health, initiatives for small businesses, and more.

Global competition

While government science funding in the U.S. is stagnant, America's main scientific rivals are rising fast.

Federal R&D funding as a percentage of GDP has dropped from 1.2% in 1987 to 1% in 2010 to under 0.8% currently. The United States is still the world's biggest spender on research and development, but in terms of government R&D as a fraction of GDP, the United States ranked 12th in 2021, behind South Korea and a set of European countries. In terms of science researchers as a portion of the labor force, the United States ranks 10th.

Meanwhile, America's main geopolitical rival is rising fast. China has eclipsed the United States in high-impact papers published, and China now spends more than the United States on university and government research.

If the U.S. wants to keep its status as the world leader in scientific research, it'll need to redouble its commitment to science by appropriately funding research.

Chris Impey, University Distinguished Professor of Astronomy, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI companies train language models on YouTube’s archive − making family-and-friends videos a privacy risk

Published on theconversation.com – Ryan McGrady, Senior Researcher, Initiative for Digital Public Infrastructure, UMass Amherst – 2024-06-27 07:23:53
Your kid's silly videos could be fodder for ChatGPT.
Halfpoint/iStock via Getty Images

Ryan McGrady, UMass Amherst and Ethan Zuckerman, UMass Amherst

The promised artificial intelligence revolution requires data. Lots and lots of data. OpenAI and Google have begun using YouTube videos to train their text-based AI models. But what does the YouTube archive actually include?

Our team of digital media researchers at the University of Massachusetts Amherst collected and analyzed random samples of YouTube videos to learn more about that archive. We published an 85-page paper about that dataset and set up a website called TubeStats for researchers and journalists who need basic information about YouTube.

Now, we're taking a closer look at some of our more surprising findings to better understand how these obscure videos might become part of powerful AI systems. We've found that many YouTube videos are meant for personal use or for small groups of people, and a significant proportion were created by children who appear to be under 13.

Bulk of the YouTube iceberg

Most people's experience of YouTube is algorithmically curated: Up to 70% of the videos users watch are recommended by the site's algorithms. Recommended videos are typically popular content such as influencer stunts, clips, explainer videos, travel vlogs and video reviews, while content that is not recommended languishes in obscurity.

Some YouTube content emulates popular creators or fits into established genres, but much of it is personal: family celebrations, selfies set to music, homework assignments, video game clips without context and kids dancing. The obscure side of YouTube – the vast majority of the estimated 14.8 billion videos created and uploaded to the platform – is poorly understood.

Illuminating this aspect of YouTube – and social media generally – is difficult because big tech companies have become increasingly hostile to researchers.

We've found that many videos on YouTube were never meant to be shared widely. We documented thousands of short, personal videos that have few views but high engagement – likes and comments – implying a small but highly engaged audience. These were clearly meant for a small audience of friends and family. Such social uses of YouTube contrast with videos that try to maximize their audience, suggesting another way to use YouTube: as a video-centered social network for small groups.

Other videos seem intended for a different kind of small, fixed audience: recorded classes from pandemic-era virtual instruction, school board meetings and work meetings. While not what most people think of as social uses, they likewise imply that their creators have a different expectation about the audience for the videos than creators of the kind of content people see in their recommendations.

Fuel for the AI machine

It was with this broader understanding that we read The New York Times exposé on how OpenAI and Google turned to YouTube in a race to find new troves of data to train their large language models. An archive of YouTube transcripts makes an extraordinary dataset for text-based models.

There is also speculation, fueled in part by an evasive answer from OpenAI's chief technology officer Mira Murati, that the videos themselves could be used to train AI text-to-video models such as OpenAI's Sora.

The New York Times story raised concerns about YouTube's terms of service and, of course, the copyright issues that pervade much of the debate about AI. But there's another problem: How could anyone know what an archive of more than 14 billion videos, uploaded by people all over the world, actually contains? It's not entirely clear that Google knows or even could know if it wanted to.

Kids as content creators

We were surprised to find an unsettling number of videos featuring kids or apparently created by them. YouTube requires uploaders to be at least 13 years old, but we frequently saw children who appeared to be much younger than that, typically dancing, singing or playing video games.

In our preliminary research, our coders determined nearly a fifth of random videos with at least one person's face visible likely included someone under 13. We didn't take into account videos that were clearly shot with the consent of a parent or guardian.

Our current sample size of 250 is relatively small – we are working on coding a much larger sample – but the findings thus far are consistent with what we've seen in the past. We're not aiming to scold Google. Age validation on the internet is infamously difficult and fraught, and we have no way of determining whether these videos were uploaded with the consent of a parent or guardian. But we want to underscore what is being ingested by these large companies' AI models.
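The caveat about a sample of 250 being "relatively small" can be made concrete with a standard back-of-the-envelope confidence interval. The counts below are hypothetical, chosen only to match "nearly a fifth"; this is textbook arithmetic, not a calculation from the researchers' paper.

```python
import math

# Hypothetical counts: roughly a fifth of 250 sampled videos flagged
# as likely showing someone under 13.
n, flagged = 250, 50
p_hat = flagged / n

# Normal-approximation 95% confidence interval for a proportion:
# p_hat +/- 1.96 * sqrt(p_hat * (1 - p_hat) / n).
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"estimate {p_hat:.0%}, 95% CI roughly {lo:.0%} to {hi:.0%}")
```

At n = 250 the interval spans roughly ten percentage points, which is why a larger coded sample tightens the estimate considerably.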

Small reach, big influence

It's tempting to assume OpenAI is using highly produced influencer videos or TV newscasts posted to the platform to train its models, but previous research on large language model training data shows that the most popular content is not always the most influential in training AI models. A virtually unwatched conversation between three friends could have much more linguistic value in training a chatbot language model than a music video with millions of views.

Unfortunately, OpenAI and other AI companies are quite opaque about their training materials: They don't specify what goes in and what doesn't. Most of the time, researchers can infer problems with training data through biases in AI systems' output. But when we do get a glimpse at training data, there's often cause for concern. For example, Human Rights Watch released a report on June 10, 2024, that showed that a popular training dataset includes many photos of identifiable kids.

The history of big tech self-regulation is filled with moving goal posts. OpenAI in particular is notorious for asking for forgiveness rather than permission and has drawn increasing criticism for putting profit over safety.

Concerns over the use of user-generated content for training AI models typically center on intellectual property, but there are also privacy issues. YouTube is a vast, unwieldy archive, impossible to fully review.

A subset of professionally produced videos could conceivably serve as an AI company's first training corpus. But without strong policies in place, any company that ingests more than the popular tip of the iceberg is likely including content that violates the Federal Trade Commission's Children's Online Privacy Protection Rule, which prevents companies from collecting data from children under 13 without notice.

With last year's executive order on AI and at least one promising proposal on the table for comprehensive privacy legislation, there are signs that legal protections for user data in the U.S. might become more robust.

When the Wall Street Journal's Joanna Stern asked OpenAI CTO Mira Murati whether OpenAI trained its text-to-video generator Sora on YouTube videos, she said she wasn't sure.

Have you unwittingly helped train ChatGPT?

The intentions of a YouTube uploader simply aren't as consistent or predictable as those of someone publishing a book, writing an article for a magazine or displaying a painting in a gallery. But even if YouTube's algorithm ignores your upload and it never gets more than a handful of views, it may be used to train models like ChatGPT and Gemini.

As far as AI is concerned, your family reunion video may be just as important as those uploaded by influencer giant Mr. Beast or CNN.

Ryan McGrady, Senior Researcher, Initiative for Digital Public Infrastructure, UMass Amherst and Ethan Zuckerman, Associate Professor of Public Policy, Communication, and Information, UMass Amherst

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Lucy, discovered 50 years ago in Ethiopia, stood just 3.5 feet tall − but she still towers over our understanding of human origins

Published on theconversation.com – Denise Su, Associate Professor of Human Evolution and Social Change, Arizona State University – 2024-06-27 07:23:34
The reconstructed skeleton of Lucy, found in Hadar, Ethiopia, in 1974, and Grace Latimer, then age 4, daughter of a research team member.
James St. John/Flickr, CC BY

Denise Su, Arizona State University

In 1974, on a survey in Hadar in the remote badlands of Ethiopia, U.S. paleoanthropologist Donald Johanson and graduate student Tom Gray found a piece of an elbow joint jutting from the dirt in a gully. It proved to be the first of 47 bones of a single individual – an early human ancestor whom Johanson nicknamed “Lucy.” Her discovery would overturn what scientists thought they knew about the evolution of our own lineage.

Lucy was a member of the species Australopithecus afarensis, an extinct hominin – a group that includes humans and our fossil relatives. Australopithecus afarensis lived from 3.8 million years ago to 2.9 million years ago, in the region that is now Ethiopia, Kenya and Tanzania. Dated to 3.2 million years ago, Lucy was the oldest and most complete human ancestor ever found at the time of her discovery.

Two features set humans apart from all other primates: big brains and standing and walking on two legs instead of four. Prior to Lucy's discovery, scientists thought that our large brains must have evolved first, because all known human fossils at the time already had large brains. But Lucy stood on two feet and had a small brain, not much larger than that of a chimpanzee.

This was immediately clear when scientists reconstructed her skeleton in Cleveland, Ohio. A photographer took a picture of 4-year-old Grace Latimer – who was visiting her father, Bruce Latimer, a member of the research team – standing next to Lucy. The two were roughly the same size, providing a simple illustration of Lucy's small stature and brain. And Lucy was not a young child: Based on her teeth and bones, scientists estimated that she was fully adult when she died.

The reconstruction also demonstrated how human Lucy was – especially her posture. Along with the 1978 discovery in Tanzania of fossilized footprint trails 3.6 million years old, made by members of her species, Lucy proved unequivocally that standing and walking upright was the first step in becoming human. In fact, large brains did not show up in our lineage until well over 1 million years after Lucy lived.

A human spine and pelvis, with brown fossilized bones and modern white replacements.
Part of Lucy's reconstructed skeleton, on display at the Cleveland Museum of Natural History in 2006.
James St. John/Flickr, CC BY

Lucy's bones show adaptations that allow for upright posture and bipedal locomotion. In particular, her femur, or upper leg bone, is angled; her spine is S-curved; and her pelvis, or hip bone, is short and bowl-shaped.

These features can also be found in modern human skeletons. They allow us, as they enabled Lucy, to stand, walk and run on two legs without falling over – even when balanced on one leg in mid-stride.

In the 50 years since Lucy's discovery, her impact on scientists' understanding of human origins has been immeasurable. She has inspired paleoanthropologists to survey unexplored regions, pose new hypotheses and develop and use novel techniques and methodologies.

Even as new fossils are discovered, Lucy remains central to modern research on human origins. As an anthropologist and paleoecologist, I know that she is still the reference point for understanding the anatomy of early human ancestors and the evolution of our own bodies. Knowledge of the human fossil record and of the evolution of our lineage has increased exponentially, building on the foundation of Lucy's discovery.

Denise Su, Associate Professor of Human Evolution and Social Change, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

