
The Conversation

Female giraffes drove the evolution of long giraffe necks in order to feed on the most nutritious leaves, new research suggests

Published on theconversation.com – Douglas R. Cavener, Huck Distinguished Chair in Evolutionary Genetics and Professor of Biology, Penn State – 2024-06-05 07:43:12

A female giraffe browsing.

Douglas R. Cavener, Penn State

Everything in biology ultimately boils down to food and sex. To survive as an individual you need food. To survive as a species you need sex.

Not surprisingly then, the age-old question of why giraffes have long necks has centered around food and sex. After debating this question for the past 150 years, biologists still cannot agree on which of these two factors was the most important in the evolution of the giraffe's neck. In the past three years, my colleagues and I have been trying to get to the bottom of this question.

Necks for sex

In the 19th century, biologists Charles Darwin and Jean Baptiste Lamarck both speculated that giraffes' long necks helped them reach acacia leaves high up in the trees, though they likely weren't observing actual giraffe behavior when they came up with this theory. Several decades later, when scientists started observing giraffes in Africa, a group of biologists came up with an alternative theory based on sex and reproduction.

These pioneering giraffe biologists noticed how male giraffes, standing side by side, used their long necks to swing their heads and club each other. The researchers called this behavior “neck-fighting” and guessed that it helped the giraffes prove their dominance over each other and woo mates. Males with the longest necks would win these contests and, in turn, boost their reproductive success. That favorability, the scientists predicted, drove the evolution of long necks.

Since its inception, the necks-for-sex sexual selection hypothesis has overshadowed Darwin's and Lamarck's necks-for-food hypothesis.

The necks-for-sex hypothesis predicts that males should have longer necks than females, since only males use them to fight, and indeed they do. But adult male giraffes are also about 30% to 50% larger than female giraffes. All of their body components are bigger. So my team wanted to find out if males have proportionally longer necks when accounting for their overall stature, comprised of their head, neck and forelegs.
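
As a rough illustration of what it means for a neck to be proportionally longer, the sketch below compares invented measurements for one male and one female giraffe. It is not the study's analysis; only the definition of stature as head plus neck plus foreleg length is taken from the article.

```python
# Hypothetical sketch of comparing proportional body dimensions between the
# sexes. The measurements below are invented for illustration and are NOT
# data from the study; only the definition of stature (head + neck + foreleg)
# is taken from the article.

def proportional_lengths(head, neck, foreleg, trunk):
    """Express each body component as a fraction of overall stature."""
    stature = head + neck + foreleg
    return {
        "neck/stature": neck / stature,
        "foreleg/stature": foreleg / stature,
        "trunk/stature": trunk / stature,
    }

# Invented example values (cm): the male is absolutely larger in every
# dimension, yet the female's neck and trunk take up a larger share of
# her stature, while the male's forelegs take up a larger share of his.
male = proportional_lengths(head=75, neck=220, foreleg=250, trunk=230)
female = proportional_lengths(head=60, neck=190, foreleg=195, trunk=200)

for key in male:
    print(f"{key}: male {male[key]:.3f} vs female {female[key]:.3f}")
```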

Necks not for sex?

But it's not easy to measure giraffe body proportions. For one, their necks grow disproportionately faster during the first six to eight years of their life. And in the wild, you can't tell exactly how old an individual animal is. To get around these problems, we measured body proportions in captive Masai giraffes in North American zoos. Here, we knew the exact age of the giraffes and could then compare this data with the body proportions of wild giraffes that we knew confidently were older than 8 years.

To our surprise, we found that adult female giraffes have proportionally longer necks than males, which contradicts the necks-for-sex hypothesis. We also found that adult female giraffes have proportionally longer body trunks, while adult males have proportionally longer forelegs and thicker necks.

A diagram showing a male giraffe, which is taller with a shorter trunk, and a female giraffe, which has shorter legs and a longer trunk.

Sex-specific differences between male and female giraffes.

Douglas Cavener

Giraffe babies don't have any of these sex-specific body proportion differences. The differences appear only as giraffes approach adulthood.

Finding that female giraffes have proportionally both longer necks and longer body trunks led us to propose that females, and not males, drove the evolution of the giraffe's long neck, and not for sex but for food and reproduction. Our theory agrees with Darwin and Lamarck that food was the major driver of the evolution of the giraffe's neck, but with an emphasis on the nutritional demands of female reproduction.

A shape to die for

Giraffes are notoriously picky eaters and browse on fresh leaves, flowers and seed pods. Female giraffes especially need enough to eat because they spend most of their adult lives either pregnant or providing milk to their calves.

Females tend to use their long necks to probe deep into bushes and trees to find the most nutritious food. By contrast, males tend to feed high in trees by fully extending their necks vertically. Females need proportionally longer trunks to grow calves that can be well over 6 feet tall at birth.

For males, I'd guess that their proportionally longer forelegs are an adaptation that allows them to mount females more easily during sex. While we found that their necks might not be as proportionally long as females' necks are, they are thicker. That's probably an adaptation that helps them win neck fights.

A male giraffe feeding from a tree.

The male giraffe body, with long forelegs supporting the trunk and neck – a shape to die for.

Douglas Cavener

But giraffes' necks aren't their only long feature. They have very long legs, proportionally, which contribute to their height almost as much as their necks. Their long legs come at a considerable cost, though – particularly for male giraffes. A disproportionate fraction of their body mass is stacked on top of their spindly front legs, which can lead to injury and mobility issues in the long run.

Graham Mitchell, a prominent giraffe biologist, has called the giraffe body “a shape to die for.” In captivity, where staff can determine the cause of death, well over half of male giraffes die from foreleg problems, which shortens their lifespan by 25% compared with females. Very few female giraffes die from issues related to their legs.

Giraffes' height also means they can't climb up steep slopes very well. My team's research has shown that this limitation has likely stopped them from traveling across the escarpments of the Great Rift Valley in East Africa. But the mating advantage from being tall must outweigh these costs to their health and mobility.

This research isn't ruling out the necks-for-sex theory entirely. The long neck likely does play a critical role in male neck-fighting and winning a mate. But our research suggests that male neck-fighting was probably a side benefit that came along with females getting better access to food.

In the future, my team will look into the genetic factors that led to the giraffe's extraordinary stature and physique. We want to trace and reconstruct the evolutionary path they took to reach toward the skies.

Douglas R. Cavener, Huck Distinguished Chair in Evolutionary Genetics and Professor of Biology, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation

From diagnosing brain disorders to cognitive enhancement, 100 years of EEG have transformed neuroscience

Published on theconversation.com – Erika Nyhus, Associate Professor of Psychology and Neuroscience, Bowdoin College – 2024-07-02 07:28:40
The electroencephalogram allows scientists to record and read brain activity.
Kateryna Kon/Science Photo Library via Getty Images

Erika Nyhus, Bowdoin College

Electroencephalography, or EEG, was invented 100 years ago. In the years since the invention of this device to monitor brain electricity, it has had an incredible impact on how scientists study the human brain.

Since its first use, the EEG has shaped researchers' understanding of cognition, from perception to memory. It has also been important for diagnosing and guiding treatment of multiple brain disorders, including epilepsy.

I am a cognitive neuroscientist who uses EEG to study how people remember events from their past. The EEG's 100-year anniversary is an opportunity to reflect on this discovery's significance in neuroscience and medicine.

Discovery of EEG

On July 6, 1924, psychiatrist Hans Berger performed the first EEG recording on a human, a 17-year-old boy undergoing neurosurgery. At the time, Berger and other researchers were performing electrical recordings on the brains of animals.

What set Berger apart was his obsession with finding the physical basis of what he called psychic energy, or mental effort, in people. Through a series of experiments spanning his early career, Berger measured brain volume and temperature to study changes in mental processes such as intellectual work, attention and desire.

He then turned to recording electrical activity. Though he recorded the first traces of EEG in the human brain in 1924, he did not publish the results until 1929. Those five intervening years were a tortuous phase of self-doubt about the source of the EEG signal in the brain and of refining the experimental setup. Berger recorded hundreds of EEGs on multiple subjects, including his own son, with both experimental successes and setbacks.

This is among the first EEG readings published in Hans Berger's study. The top trace is the EEG while the bottom is a reference trace of 10 Hz.
Two EEG traces, the top more irregular in rhythm than the bottom.
Hans Berger/Über das Elektrenkephalogramm des Menschen. Archiv für Psychiatrie. 1929; 87:527-70 via Wikimedia Commons

Finally convinced of his results, he published a series of papers in the journal Archiv für Psychiatrie and had hopes of winning a Nobel Prize. Unfortunately, the research community doubted his results, and years passed before anyone else started using EEG in their own research.

Berger was eventually nominated for a Nobel Prize in 1940. But Nobels were not awarded that year in any category due to World War II and Germany's occupation of Norway.

Neural oscillations

When many neurons are active at the same time, they produce an electrical signal strong enough to spread instantaneously through the conductive tissue of the brain, skull and scalp. EEG electrodes placed on the head can record these electrical signals.

Since the discovery of EEG, researchers have shown that neural activity oscillates at specific frequencies. In his initial EEG recordings in 1924, Berger noted the predominance of oscillatory activity that cycled eight to 12 times per second, or 8 to 12 hertz, named alpha oscillations. Since the discovery of alpha rhythms, there have been many attempts to understand how and why neurons oscillate.

Neural oscillations are thought to be important for effective communication between specialized brain regions. For example, theta oscillations that cycle at 4 to 8 hertz are important for communication between brain regions involved in memory encoding and retrieval in animals and humans.
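
For readers who want to see what band-specific activity looks like in code, here is a minimal sketch that estimates theta (4 to 8 hertz) and alpha (8 to 12 hertz) power from a single channel. The signal is synthetic and the sampling rate is an assumption; this is not a clinical or research pipeline.

```python
# Minimal sketch (not a clinical or research pipeline): estimating power in
# the theta (4-8 Hz) and alpha (8-12 Hz) bands of a single EEG channel.
# The signal is synthetic and the 250 Hz sampling rate is an assumption.
import numpy as np
from scipy.signal import welch

fs = 250                      # assumed sampling rate, samples per second
t = np.arange(0, 10, 1 / fs)  # 10 seconds of data

# Synthetic "EEG": a 10 Hz alpha rhythm, a weaker 6 Hz theta rhythm, plus noise.
eeg = (1.0 * np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 6 * t)
       + 0.3 * np.random.randn(t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(freqs, psd, lo, hi):
    """Approximate the integral of the power spectral density over a band."""
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

print("theta (4-8 Hz) power:", band_power(freqs, psd, 4, 8))
print("alpha (8-12 Hz) power:", band_power(freqs, psd, 8, 12))
```

On real recordings, the same band-power idea is applied after filtering and artifact removal, and typically per electrode and per experimental condition.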

Finger pointing at EEG reading
Different frequencies of neural oscillations indicate different types of brain activity.
undefined undefined/iStock via Getty Images Plus

Researchers then examined whether they could alter neural oscillations and therefore affect how neurons communicate with each other. Studies have shown that many behavioral and noninvasive methods can alter neural oscillations and lead to changes in cognitive performance. Engaging in specific mental activities can induce neural oscillations in the frequencies those mental activities use. For example, my team's research found that mindfulness meditation can increase theta frequency oscillations and improve memory retrieval.

Noninvasive brain stimulation methods can target frequencies of interest. For example, my team's ongoing research found that brain stimulation at theta frequency can lead to improved memory retrieval.

EEG has also led to major discoveries about how the brain processes information in many other cognitive domains, including how people perceive the world around them, how they focus their attention, how they communicate through language and how they process emotions.

Diagnosing and treating brain disorders

EEG is commonly used today to diagnose sleep disorders and epilepsy and to guide brain disorder treatments.

Scientists are using EEG to see whether memory can be improved with noninvasive brain stimulation. Although the research is still in its infancy, there have been some promising results. For example, one study found that noninvasive brain stimulation at gamma frequency – 25 hertz – improved memory and neurotransmitter transmission in Alzheimer's disease.

Back of person's head enveloped by the many, small round electrodes of an EEG cap
Researchers and clinicians use EEG to diagnose conditions like epilepsy.
BSIP/Collection Mix: Subjects via Getty Images

A new type of noninvasive brain stimulation called temporal interference uses two high frequencies to cause neural activity equal to the difference between the stimulation frequencies. The high frequencies can better penetrate the brain and reach the targeted area. Researchers recently tested this method in people, using 2,000 hertz and 2,005 hertz to deliver a 5 hertz theta-frequency signal to a key brain region for memory, the hippocampus. This led to improvements in remembering the name associated with a face.
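
The arithmetic behind temporal interference can be sketched in a few lines: adding a 2,000 hertz and a 2,005 hertz wave gives a signal whose amplitude envelope beats at the 5 hertz difference. The numbers mirror the study above, but the code is purely illustrative and says nothing about real stimulation hardware.

```python
# Illustrative arithmetic only: summing two high-frequency sine waves whose
# frequencies differ by 5 Hz produces an envelope that beats at 5 Hz.
# The 2,000/2,005 Hz values mirror the study described above; nothing here
# models stimulation hardware, tissue, or safety.
import numpy as np

fs = 20_000                   # samples per second, comfortably above 2 kHz
t = np.arange(0, 2, 1 / fs)   # two seconds of signal

f1, f2 = 2000, 2005
field = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# By the identity sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2), the
# amplitude envelope of the summed fields is |2 cos(pi * (f2 - f1) * t)|.
envelope = np.abs(2 * np.cos(np.pi * (f2 - f1) * t))

# The dominant frequency of the envelope (ignoring its constant offset)
# should come out at the 5 Hz difference frequency.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freq_axis = np.fft.rfftfreq(envelope.size, d=1 / fs)
print("envelope peak frequency (Hz):", freq_axis[spectrum.argmax()])
```

The idea is that neurons respond to the slowly beating envelope rather than to the individual kilohertz fields, which pass through tissue more easily.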

Although these results are promising, more research is needed to understand the exact role neural oscillations play in cognition and whether altering them can lead to long-lasting cognitive enhancement.

The future of EEG

The 100-year anniversary of the EEG provides an opportunity to consider what it has taught us about brain function and what this technique can do in the future.

In a survey commissioned by the journal Nature Human Behaviour, over 500 researchers who use EEG in their work were asked to make predictions on the future of the technique. What will be possible in the next 100 years of EEG?

Some researchers, including myself, predict that we'll use EEG to diagnose and create targeted treatments for brain disorders. Others anticipate that an affordable, wearable EEG will be widely used to enhance cognitive function at home or will be seamlessly integrated into virtual reality applications. The possibilities are vast.

Erika Nyhus, Associate Professor of Psychology and Neuroscience, Bowdoin College

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Supreme Court kicks cases about tech companies’ First Amendment rights back to lower courts − but appears poised to block states from hampering online content moderation

Published on theconversation.com – Lynn Greenky, Professor Emeritus of Communication and Rhetorical Studies, Syracuse University – 2024-07-01 15:26:42
How much power do social media companies have over what users post?
Midnight Studio/iStock/Getty Images Plus

Lynn Greenky, Syracuse University

The Supreme Court has sent back to lower courts the cases about whether states can block social media companies such as Facebook and X, formerly Twitter, from regulating and controlling what users can post on their platforms.

Laws in Florida and Texas sought to impose restrictions on the internal policies and algorithms of social media platforms in ways that influence which posts will be promoted and spread widely and which will be made less visible or even removed.

In the unanimous decision, issued on July 1, 2024, the high court remanded the two cases, Moody v. NetChoice and NetChoice v. Paxton, to the 11th and 5th U.S. Circuit Courts of Appeals, respectively. The court admonished the lower courts for their failure to consider the full force of the laws' applications. It also warned the lower courts to consider the boundaries imposed by the Constitution against interference with private speech.

Contrasting views of social media sites

In their arguments before the court in February 2024, the two sides described competing visions of how social media fits into the often overwhelming flood of information that defines modern digital society.

The states said the platforms were mere conduits of communication, or “speech hosts,” similar to legacy telephone companies that were required to carry all calls and prohibited from discriminating against users. The states said that the platforms should have to carry all posts from users without discrimination among them based on what they were saying.

The states argued that the content moderation rules the social media companies imposed were not examples of the platforms themselves speaking – or choosing not to speak. Rather, the states said, the rules shaped the platforms' behavior and caused them to censor certain views by letting them determine who may speak on which topics, conduct that the states argued falls outside First Amendment protections.

By contrast, the social media platforms, represented by NetChoice, a tech industry trade group, argued that the platforms' guidelines about what is acceptable on their sites are protected by the First Amendment's guarantee of speech free from government interference. The companies say their platforms are not public forums that may be subject to government regulation but rather private services that can exercise their own editorial judgment about what does or does not appear on their sites.

They argued that their policies were aspects of their own speech and that they should be free to develop and implement guidelines about what is acceptable speech on their platforms based on their own First Amendment rights.

Here's what the First Amendment says and what it means.

A reframe by the Supreme Court

All the litigants – NetChoice, Texas and Florida – framed the issue around the effect of the laws on the content moderation policies of the platforms, specifically whether the platforms were engaged in protected speech. The 11th U.S. Circuit Court of Appeals upheld a lower court preliminary injunction against the Florida law, holding the content moderation policies of the platforms were speech and the law was unconstitutional.

The 5th U.S. Circuit Court of Appeals came to the opposite conclusion and held that the platforms were not engaged in speech, but rather the platform's algorithms controlled platform behavior unprotected by the First Amendment. The 5th Circuit determined the behavior was censorship and reversed a lower court injunction against the Texas law.

The Supreme Court, however, reframed the inquiry. The court noted that the lower courts failed to consider the full range of activities the laws covered. Thus, while a First Amendment inquiry was in order, the decisions of the lower courts and the arguments by the parties were incomplete. The court added that neither the parties nor the lower courts engaged in a thorough analysis of whether and how the states' laws affected other elements of the platforms' products, such as Facebook's direct messaging applications, or even whether the laws have any impact on email providers or online marketplaces.

The Supreme Court directed the lower courts to engage in a much more exacting analysis of the laws and their implications and provided some guidelines.

First Amendment principles

The court held that content moderation policies reflect the constitutionally protected editorial choices of the platforms, at least regarding what the court describes as “heartland applications” of the laws – such as Facebook's Feed and YouTube's homepage.

The Supreme Court required the lower courts to consider two core constitutional principles of the First Amendment. One is that the amendment protects speakers from being compelled to communicate messages they would prefer to exclude. Editorial discretion by entities, including social media companies, that compile and curate the speech of others is a protected First Amendment activity.

The other principle holds that the amendment precludes the government from controlling private speech, even for the purpose of balancing the marketplace of ideas. Neither state nor federal government may manipulate that marketplace for the purposes of presenting a more balanced array of viewpoints.

The court also affirmed that these principles apply to digital media in the same way they apply to traditional or legacy media.

In the 96-page opinion, Justice Elena Kagan wrote: “The First Amendment … does not go on leave when social media are involved.” For now, it appears the social media platforms will continue to control their content.

Lynn Greenky, Professor Emeritus of Communication and Rhetorical Studies, Syracuse University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Disability community has long wrestled with ‘helpful’ technologies – lessons for everyone in dealing with AI

Published on theconversation.com – Elaine Short, Assistant Professor of Computer Science, Tufts University – 2024-07-01 07:19:34

A robotic arm helps a disabled person paint a picture.

Jenna Schad /Tufts University

Elaine Short, Tufts University

You might have heard that artificial intelligence is going to revolutionize everything, save the world and give everyone superhuman powers. Alternatively, you might have heard that it will take your job, make you lazy and stupid, and make the world a cyberpunk dystopia.

Consider another way to look at AI: as an assistive technology – something that helps you function.

With that view, also consider a community of experts in giving and receiving assistance: the disability community. Many disabled people use technology extensively, both dedicated assistive technologies such as wheelchairs and general-use technologies such as smart home devices.

Equally, many disabled people receive professional and casual assistance from other people. And, despite stereotypes to the contrary, many disabled people regularly give assistance to the disabled and nondisabled people around them.

Disabled people are well experienced in receiving and giving social and technical assistance, which makes them a valuable source of insight into how everyone might relate to AI in the future. This potential is a key driver for my work as a disabled person and researcher in AI and robotics.

Actively learning to live with help

While virtually everyone values independence, no one is fully independent. Each of us depends on others to grow our food, care for us when we are ill, give us advice and emotional support, and help us in thousands of interconnected ways. Being disabled means having support needs that fall outside what is typical, and therefore those needs are much more visible. Because of this, the disability community has reckoned more explicitly than most nondisabled people with what it means to need help.

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can't substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone.

The curb-cut effect – how technologies built for disabled people help everyone – has become a principle of good design.

This is sometimes called the curb-cut effect after the ways that putting a ramp in a curb to help a wheelchair user access the sidewalk also helps people with strollers, rolling suitcases and bicycles.

Partnering in assistance

You have probably had the experience of someone trying to help you without listening to what you actually need. For example, a parent or friend might “help” you clean and instead end up hiding everything you need.

Disability advocates have long battled this type of well-meaning but intrusive assistance – for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked, or by advocating for services that keep the disabled person in control.

The disabled community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over.

A key goal of my lab's work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. For example, most people find it difficult to use a joystick to move a robot arm: The joystick can only move from front to back and side to side, but the arm can move in almost as many ways as a human arm.

The author discusses her work on robots that are designed to help people.

To help, AI can predict what someone is planning to do with the robot and then move the robot accordingly. Previous research assumed that people would ignore this help, but we found that people quickly figured out that the system is doing something, actively worked to understand what it was doing and tried to work with the system to get it to do what they wanted.
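
For readers curious how such intent prediction can work in principle, here is a toy sketch of one common shared-control recipe: infer which goal the user's joystick motion points toward, then blend the user's command with motion toward the most likely goal. The goals, weights and two-dimensional setup are invented for illustration; this is not the author's actual system.

```python
# Toy, hypothetical sketch of a generic shared-control recipe: guess which
# goal the user's joystick motion points toward, then blend the user's
# command with motion toward the most likely goal. The goals, weights and
# 2-D setup are invented; this is not the author's actual system.
import numpy as np

goals = {"cup": np.array([1.0, 0.0]), "plate": np.array([0.0, 1.0])}

def goal_beliefs(position, user_velocity):
    """Score each goal by how well the user's motion points toward it."""
    scores = {}
    for name, goal in goals.items():
        direction = goal - position
        direction /= np.linalg.norm(direction) + 1e-9
        scores[name] = np.exp(float(direction @ user_velocity))  # softmax-style weight
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

def shared_control(position, user_velocity, assist_weight=0.5):
    """Blend the user's command with motion toward the most likely goal."""
    beliefs = goal_beliefs(position, user_velocity)
    best = max(beliefs, key=beliefs.get)
    to_goal = goals[best] - position
    to_goal /= np.linalg.norm(to_goal) + 1e-9
    blended = (1 - assist_weight) * user_velocity + assist_weight * to_goal
    return blended, beliefs

position = np.array([0.2, 0.2])
user_velocity = np.array([0.8, 0.1])   # user nudging roughly toward the "cup"
command, beliefs = shared_control(position, user_velocity)
print("goal beliefs:", beliefs)
print("blended velocity command:", command)
```

Raising assist_weight gives the robot more authority over the motion, while lowering it keeps the person firmly in control, which is exactly the balance the collaborative framing above is concerned with.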

Most AI systems don't make this easy, but my lab's new approaches to AI empower people to influence robot behavior. We have shown that this results in better interactions in tasks that are creative, like painting. We also have begun to investigate how people can use this control to solve problems outside the ones the robots were designed for. For example, people can use a robot that is trained to carry a cup of water to instead pour the water out to water their plants.

Training AI on human variability

The disability-centered perspective also raises concerns about the huge datasets that power AI. The very nature of data-driven AI is to look for common patterns. In general, the better-represented something is in the data, the better the model works.

If disability means having a body or mind outside what is typical, then disability means not being well-represented in the data. Whether it's AI systems designed to detect cheating on exams that instead flag students' disabilities, or robots that fail to account for wheelchair users, disabled people's interactions with AI reveal how brittle those systems are.

One of my goals as an AI researcher is to make AI more responsive and adaptable to real human variation, especially in AI systems that learn directly from interacting with people. We have developed frameworks for testing how robust those AI systems are to real human teaching and explored how robots can learn better from human teachers even when those teachers change over time.

Thinking of AI as an assistive technology, and learning from the disability community, can help to ensure that the AI systems of the future serve people's needs – with people in the driver's seat.

Elaine Short, Assistant Professor of Computer Science, Tufts University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

