

‘Dancing’ raisins − a simple kitchen experiment reveals how objects can extract energy from their environment and come to life

Published on theconversation.com – Saverio Eric Spagnolie, Professor of Mathematics, University of Wisconsin-Madison – 2024-05-13 07:29:32

Surface bubble growth can lift objects upward against gravity.

Saverio Spagnolie

Saverio Eric Spagnolie, University of Wisconsin-Madison

Scientific discovery doesn't always require a high-tech laboratory or a hefty budget. Many people have a first-rate lab right in their own homes – their kitchen.


The kitchen offers plenty of opportunities to view and explore what physicists call soft matter and complex fluids. Everyday phenomena, such as Cheerios clustering in milk or rings left when drops of coffee evaporate, have led to discoveries at the intersection of physics and chemistry and other tasteful collaborations between food scientists and physicists.

Two students, Sam Christianson and Carsen Grote, and I published a new study in Nature Communications in May 2024 that dives into another kitchen observation. We studied how objects can levitate in carbonated fluids, a phenomenon that's whimsically referred to as dancing raisins.

The study explored how objects like raisins can rhythmically move up and down in carbonated fluids for several minutes, even up to an hour.

An accompanying Twitter thread about our research went viral, amassing over half a million views in just two days. Why did this particular experiment catch the imaginations of so many?


Bubbling physics

Sparkling and other carbonated beverages fizz with bubbles because they contain more gas than the fluid can hold – they're “supersaturated” with gas. When you open a bottle of champagne or a soft drink, the fluid pressure drops and CO₂ molecules begin to make their escape to the surrounding air.

Bubbles do not usually form spontaneously in a fluid. A fluid is composed of molecules that like to stick together, so molecules at the fluid boundary are a bit unhappy. This results in surface tension, a force which seeks to reduce the surface area. Since bubbles add surface area, surface tension and fluid pressure normally squeeze any forming bubbles right back out of existence.
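
One way to see why a newborn bubble needs shelter is the Laplace pressure, 2γ/r, that surface tension adds inside a bubble of radius r: the smaller the bubble, the harder it is squeezed. The short Python sketch below uses a textbook value for water's surface tension; the numbers are illustrative and are not taken from the study.

```python
# Back-of-envelope: the extra "Laplace" pressure that surface tension exerts
# on a small bubble in water. Illustrative values only, not from the study.

gamma = 0.072    # surface tension of water at room temperature, N/m (approx.)
atm = 101_325    # one atmosphere, Pa

for r in (1e-6, 1e-5, 1e-4, 1e-3):      # bubble radius in metres
    dp = 2 * gamma / r                   # Laplace pressure, Pa
    print(f"r = {r*1e6:8.1f} um  ->  extra pressure ~ {dp/atm:.3f} atm")
```

A micron-sized bubble feels well over an atmosphere of extra squeeze, which is why bubbles that small tend to redissolve unless a crevice or fiber shields them while they grow.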

But rough patches on a container's surface, like the etchings in some champagne glasses, can protect new bubbles from the crushing effects of surface tension, offering them a place to form and grow.


Bubbles also form inside the microscopic, tubelike cloth fibers left behind after wiping a glass with a towel. The bubbles grow steadily on these tubes and, once they're big enough, detach and float upward, carrying gas out of the container.

But as many champagne enthusiasts who put fruits in their glasses know, surface etchings and little cloth fibers aren't the only places where bubbles can form. Adding a small object like a raisin or a peanut to a sparkling drink also enables bubble growth. These immersed objects act as alluring new surfaces for opportunistic molecules like CO₂ to accumulate and form bubbles.

And once enough bubbles have grown on the object, a levitation act may be performed. Together, the bubbles can lift the object up to the surface of the liquid. Once at the surface, the bubbles pop, dropping the object back down. The process then begins again, in a periodic vertical dancing motion.
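
For a rough sense of scale, an Archimedes-style estimate asks how many attached bubbles must grow before their combined buoyancy overcomes the raisin's excess weight in water. This is my own sketch with assumed values for the raisin's mass, density and bubble size, not measurements from the paper.

```python
import math

# Rough estimate of how many attached bubbles can lift a sunken raisin.
# All numbers below are assumed typical values, not data from the study.

rho_water = 1000.0      # kg/m^3
m_raisin = 0.5e-3       # kg (about half a gram)
rho_raisin = 1400.0     # kg/m^3 (denser than water, so it sinks on its own)
r_bubble = 0.5e-3       # m (0.5 mm bubble radius)

v_raisin = m_raisin / rho_raisin
# Mass of extra water that must be displaced to cancel the raisin's net weight:
excess_mass = m_raisin - rho_water * v_raisin
v_bubble = (4.0 / 3.0) * math.pi * r_bubble**3
lift_per_bubble = rho_water * v_bubble   # water displaced by one attached bubble

n_bubbles = excess_mass / lift_per_bubble
print(f"Roughly {n_bubbles:.0f} bubbles of radius {r_bubble*1e3:.1f} mm for liftoff")
```

With these assumed numbers the answer is a few hundred half-millimetre bubbles, which is why the raisin needs several seconds of bubble growth between trips.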

Dancing raisins

Raisins are particularly good dancers. It takes only a few seconds for enough bubbles to form on a raisin's wrinkly surface before it starts to rise upward – bubbles have a harder time forming on smoother surfaces. When dropped into just-opened sparkling water, a raisin can dance a vigorous tango for 20 minutes, and then a slower waltz for another hour or so.


Anyone with a few kitchen staples can do their own dancing raisins experiment.

We found that rotation, or spinning, was critically important for coaxing large objects to dance. Bubbles that cling to the bottom of an object can keep it aloft even after the top bubbles pop. But if the object starts to spin even a little bit, the bubbles underneath make the body spin even faster, which results in even more bubbles popping at the surface. And the sooner those bubbles are gone, the sooner the object can get back to its vertical dancing.

Small objects like raisins do not rotate as much as larger objects, but instead they do the twist, rapidly wobbling back and forth.

Modeling the bubbly flamenco

In the paper, we developed a mathematical model to predict how many trips to the surface we would expect an object like a raisin to make. In one experiment, we placed a 3D-printed sphere that acted as a model raisin in a glass of just-opened sparkling water. The sphere traveled from the bottom of the container to the top over 750 times in one hour.

The model incorporated the rate of bubble growth as well as the object's shape, size and surface roughness. It also took into account how quickly the fluid loses carbonation based on the container's geometry, and especially the flow created by all that bubbly activity.
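
A cartoon version of such a model is easy to write down; the sketch below is only my own toy, not the model published in the paper. Attached bubbles supply lift at a rate proportional to the remaining carbonation, the object rises once that lift exceeds its excess weight, the bubbles pop, and the drink slowly goes flat so the dancing decelerates. The parameters are made up and tuned so the output lands in the same ballpark as the roughly 750 round trips per hour observed with the 3D-printed sphere.

```python
# Toy cartoon of the dance cycle, not the published model. Bubbles accumulate
# on the object at a rate set by the remaining carbonation; once their buoyancy
# exceeds the object's excess weight, it rises, the bubbles pop, and the cycle
# repeats. All parameters are made-up illustrative values.

def count_trips(excess_weight=1.0, growth_rate=0.34, decay_rate=3e-4,
                dt=0.1, t_max=3600.0):
    carbonation = 1.0   # dimensionless level of dissolved CO2
    lift = 0.0          # buoyancy from attached bubbles (same units as weight)
    trips = 0
    t = 0.0
    while t < t_max:
        lift += growth_rate * carbonation * dt   # bubbles grow faster in fizzier water
        carbonation *= (1.0 - decay_rate * dt)   # the drink slowly goes flat
        if lift >= excess_weight:                # enough buoyancy: the object rises...
            trips += 1
            lift = 0.0                           # ...bubbles pop at the surface, it sinks
        t += dt
    return trips

print("Trips to the surface in one hour:", count_trips())
```

In this cartoon the trip rate is set by how fast bubbles can supply lift relative to the object's excess weight, which echoes why surface roughness (which controls bubble growth) and the surface-area-to-volume ratio matter so much in the real model.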


Small objects covered in bubbles in carbonated water move upwards towards the surface and back down.

Bubble-coated raisins ‘dance' to the surface and plummet once their lifting agents have popped.

Saverio Spagnolie

The mathematical model helped us determine which forces influence the object's dancing the most. For example, the fluid drag on the object turned out to be relatively unimportant, but the ratio of the object's surface area to its volume was critical.

Looking to the future, the model also provides a way to determine some hard to measure quantities using more easily measured ones. For example, just by observing an object's dancing frequency, we can learn a lot about its surface at the microscopic level without having to see those details directly.

Different dances in different theaters

These results aren't just interesting for carbonated beverage lovers, though. Supersaturated fluids exist in nature, too – magma is one example.


As magma in a volcano rises closer to the Earth's surface, it rapidly depressurizes, and dissolved gases from inside the volcano make a dash for the exit, just like the CO₂ in carbonated water. These escaping gases can form into large, high-pressure bubbles and emerge with such force that a volcanic eruption ensues.

The particulate matter in magma may not dance in the same way raisins do in soda water, but tiny objects in the magma may affect how these explosive events play out.

The past decades have also seen an eruption of a different kind – thousands of scientific studies devoted to active matter in fluids. These studies look at things such as swimming microorganisms and the insides of our fluid-filled cells.

Most of these active systems do not exist in water but instead in more complicated biological fluids that contain the energy necessary to produce activity. Microorganisms absorb nutrients from the fluid around them to continue swimming. Molecular motors carry cargo along a superhighway in our cells by pulling nearby energy in the form of ATP from the environment.


Studying these systems can help scientists learn more about how the cells and bacteria in the human body function, and how life on this planet has evolved to its current state.

Meanwhile, a fluid itself can behave strangely because of a diverse molecular composition and bodies moving around inside it. Many new studies have addressed the behavior of microorganisms in such fluids as mucus, for instance, which behaves like both a viscous fluid and an elastic gel. Scientists still have much to learn about these highly complex systems.

While raisins in soda water seem fairly simple when compared with microorganisms swimming through biological fluids, they offer an accessible way to study generic features in those more challenging settings. In both cases, bodies extract energy from their complex fluid environment while also affecting it, and fascinating behaviors ensue.

New insights about the physical world, from geophysics to biology, will continue to emerge from tabletop-scale experiments – and perhaps from right in the kitchen.

Saverio Eric Spagnolie, Professor of Mathematics, University of Wisconsin-Madison


This article is republished from The Conversation under a Creative Commons license. Read the original article.


Federal funding for major science agencies is at a 25-year low

Published on theconversation.com – Chris Impey, University Distinguished Professor of Astronomy, University of Arizona – 2024-06-28 07:19:14
Support for science has traditionally been bipartisan, but fights over spending have affected research funding.
AP Photo/J. Scott Applewhite

Chris Impey, University of Arizona

Federal funding for science is usually immune from the political gridlock and polarization in Washington. But federal funding for science is slated to drop for 2025.

Science research dollars are considered to be discretionary, which means the funding has to be approved by Congress every year. That puts them in a different budget category from larger entitlement programs like Medicare and Social Security, which are generally considered untouchable by politicians of both parties.

Federal investment in scientific research encompasses everything from large telescopes supported by the National Science Foundation to NASA satellites studying climate change, programs studying the use and governance of artificial intelligence at the National Institute of Standards and Technology, and research on Alzheimer's disease funded by the National Institutes of Health.


Studies show that increasing federal research spending boosts productivity and economic competitiveness.

I'm an astronomer and also a senior university administrator. As an administrator, I've been involved in lobbying for research funding as associate dean of the College of Science at the University of Arizona, and in encouraging government investment in astronomy as a vice president of the American Astronomical Society. I've seen the importance of this kind of funding as a researcher who has had federal grants for 30 years, and as a senior academic who helps my colleagues write grants to support their valuable work.

Bipartisan support

Federal funding for many programs is characterized by political polarization, meaning that partisanship and ideological divisions between the two main political parties can lead to gridlock. Science is usually a rare exception to this problem.

The public shows strong bipartisan support for federal investment in scientific research, and Congress has generally followed suit, passing bills in 2024 with bipartisan backing in April and June.


The House passed these bills, and after reconciliation with language from the Senate, they resulted in final bills to direct US$460 billion in government spending.

However, policy documents produced by Congress reveal a partisan split in how Democratic and Republican lawmakers reference scientific research.

Congressional committees for both sides are citing more scientific papers, but there is only a 5% overlap in the papers they cite. That means that the two parties are using different evidence to make their funding decisions, rather than working from a scientific consensus. Committees under Democratic control were almost twice as likely to cite technical papers as panels led by Republicans, and they were more likely to cite papers that other scientists considered important.

Ideally, all the best ideas for scientific research would receive federal funds. But limited support for scientific research in the United States means that for individual scientists, getting funding is a highly competitive process.


At the National Science Foundation, only 1 in 4 proposals are accepted. Success rates for funding through the National Institutes of Health are even lower, with 1 in 5 proposals getting accepted. This low success rate means that the agencies have to reject many proposals that are rated excellent by the merit review process.

Scientists are often reluctant to publicly advocate for their programs, in part because they feel disconnected from the policymaking and appropriations process. Their academic training doesn't equip them to communicate effectively to legislators and policy experts.

Budgets are down

Research received steady funding for the past few decades, but this year Congress reduced appropriations for science at many top government agencies.


The National Science Foundation budget is down 8%, which led agency leaders to warn Congress that the country may lose its ability to attract and train a scientific workforce.

The cut to the NSF is particularly disappointing since Congress promised it an extra $81 billion over five years when the CHIPS and Science Act passed in 2022. A deal to limit government spending in exchange for suspending the debt ceiling made the law's goals hard to achieve.

NASA's science budget is down 6%, and the budget for the National Institutes of Health, whose research aims to prevent disease and improve public health, is down 1%. Only the Department of Energy's Office of Science got a bump, a modest 2%.

As a result, the major science agencies are nearing a 25-year low for their funding levels, as a share of U.S. gross domestic product.


Feeling the squeeze

Investment in research and development by the business sector is strongly increasing. In 1990, it was slightly higher than federal investment, but by 2020 it was nearly four times higher.

The distinction is important because business investment tends to focus on later stage and applied research, while federal funding goes to pure and exploratory research that can have enormous downstream benefits, such as for quantum computing and fusion power.

There are several causes of the science funding squeeze. Congressional intentions to increase funding levels, as with the CHIPS and Science Act, and the earlier COMPETES Act in 2007, have been derailed by fights over the debt limit and threats of government shutdowns.

The CHIPS Act aimed to spur investment and job creation in semiconductor manufacturing, while the COMPETES Act aimed to increase U.S. competitiveness in a wide range of high-tech industries such as space exploration.

The CHIPS and Science Act aims to stimulate semiconductor production in the U.S. and fund research.

The budget caps for fiscal years 2024 and 2025 remove any possibility for growth. The budget caps were designed to rein in federal spending, but they are a very blunt tool. Also, nondefense discretionary spending is only 15% of all federal spending. Discretionary spending is up for a vote every year, while mandatory spending is dictated by prior laws.

Entitlement programs like Medicare and Social Security are mandatory forms of spending. Taken together, they are three times larger than the amount available for discretionary spending, so science has to fight over a small fraction of the overall budget pie.

Within that 15% slice, scientific research competes with K-12 education, public health, initiatives for small businesses, and more.

Global competition

While government science funding in the U.S. is stagnant, America's main scientific rivals are rising fast.


Federal R&D funding as a percentage of GDP has dropped from 1.2% in 1987 to 1% in 2010 to under 0.8% currently. The United States is still the world's biggest spender on research and development, but in terms of government R&D as a fraction of GDP, the United States ranked 12th in 2021, behind South Korea and a set of European countries. In terms of science researchers as a portion of the labor force, the United States ranks 10th.

Meanwhile, America's main geopolitical rival is rising fast. China has eclipsed the United States in high-impact papers published, and China now spends more than the United States on university and government research.

If the U.S. wants to keep its status as the world leader in scientific research, it'll need to redouble its commitment to science by appropriately funding research.

Chris Impey, University Distinguished Professor of Astronomy, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI companies train language models on YouTube’s archive − making family-and-friends videos a privacy risk

Published on theconversation.com – Ryan McGrady, Senior Researcher, Initiative for Digital Public Infrastructure, UMass Amherst – 2024-06-27 07:23:53
Your kid's silly video could be fodder for ChatGPT.
Halfpoint/iStock via Getty Images

Ryan McGrady, UMass Amherst and Ethan Zuckerman, UMass Amherst

The promised artificial intelligence revolution requires data. Lots and lots of data. OpenAI and Google have begun using YouTube videos to train their text-based AI models. But what does the YouTube archive actually include?

Our team of digital media researchers at the University of Massachusetts Amherst collected and analyzed random samples of YouTube videos to learn more about that archive. We published an 85-page paper about that dataset and set up a website called TubeStats for researchers and journalists who need basic information about YouTube.

Now, we're taking a closer look at some of our more surprising findings to better understand how these obscure videos might become part of powerful AI systems. We've found that many YouTube videos are meant for personal use or for small groups of people, and a significant proportion were created by children who appear to be under 13.


Bulk of the YouTube iceberg

Most people's experience of YouTube is algorithmically curated: Up to 70% of the videos users watch are recommended by the site's algorithms. Recommended videos are typically popular content such as influencer stunts, news clips, explainer videos, travel vlogs and video reviews, while content that is not recommended languishes in obscurity.

Some YouTube content emulates popular creators or fits into established genres, but much of it is personal: family celebrations, selfies set to music, homework assignments, video game clips without context and kids dancing. The obscure side of YouTube – the vast majority of the estimated 14.8 billion videos created and uploaded to the platform – is poorly understood.

Illuminating this aspect of YouTube – and social media generally – is difficult because big tech companies have become increasingly hostile to researchers.

We've found that many videos on YouTube were never meant to be shared widely. We documented thousands of short, personal videos that have few views but high engagement – likes and comments – implying a small but highly engaged audience. These were clearly meant for a small audience of friends and family. Such social uses of YouTube contrast with videos that try to maximize their audience, suggesting another way to use YouTube: as a video-centered social network for small groups.


Other videos seem intended for a different kind of small, fixed audience: recorded classes from pandemic-era virtual instruction, school board meetings and work meetings. While not what most people think of as social uses, they likewise imply that their creators have a different expectation about the audience for the videos than creators of the kind of content people see in their recommendations.

Fuel for the AI machine

It was with this broader understanding that we read The New York Times exposé on how OpenAI and Google turned to YouTube in a race to find new troves of data to train their large language models. An archive of YouTube transcripts makes an extraordinary dataset for text-based models.

There is also speculation, fueled in part by an evasive answer from OpenAI's chief technology officer Mira Murati, that the videos themselves could be used to train AI text-to-video models such as OpenAI's Sora.


The New York Times story raised concerns about YouTube's terms of service and, of course, the copyright issues that pervade much of the debate about AI. But there's another problem: How could anyone know what an archive of more than 14 billion videos, uploaded by people all over the world, actually contains? It's not entirely clear that Google knows or even could know if it wanted to.

Kids as content creators

We were surprised to find an unsettling number of videos featuring kids or apparently created by them. YouTube requires uploaders to be at least 13 years old, but we frequently saw children who appeared to be much younger than that, typically dancing, singing or playing video games.

In our preliminary research, our coders determined nearly a fifth of random videos with at least one person's face visible likely included someone under 13. We didn't take into account videos that were clearly shot with the consent of a parent or guardian.

Our current sample size of 250 is relatively small – we are working on coding a much larger sample – but the findings thus far are consistent with what we've seen in the past. We're not aiming to scold Google. Age validation on the internet is infamously difficult and fraught, and we have no way of determining whether these videos were uploaded with the consent of a parent or guardian. But we want to underscore what is being ingested by these large companies' AI models.
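
For a sense of what a sample of this size can and cannot say, the sketch below computes a standard 95% confidence interval for a proportion near one in five. The counts are hypothetical, and this is my own illustration of sampling uncertainty, not the research team's actual data or methodology.

```python
import math

# Illustration only: how precise is a proportion estimated from a small sample?
# Hypothetical counts, not the researchers' figures or their coding method.

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: about 1 in 5 of 250 sampled videos flagged as likely featuring a child.
lo, hi = wilson_interval(successes=50, n=250)
print(f"Point estimate 20%, 95% CI roughly {lo:.1%} to {hi:.1%}")
```

Even with the uncertainty that a sample of a few hundred carries, an estimate near 20% stays well above zero, which is why a larger coded sample is mainly about sharpening the number rather than changing the basic picture.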


Small reach, big influence

It's tempting to assume OpenAI is using highly produced influencer videos or TV newscasts posted to the platform to train its models, but previous research on large language model data shows that the most popular content is not always the most influential in training AI models. A virtually unwatched conversation between three friends could have much more linguistic value in training a chatbot language model than a music video with millions of views.

Unfortunately, OpenAI and other AI companies are quite opaque about their training materials: They don't specify what goes in and what doesn't. Most of the time, researchers can infer problems with training data through biases in AI systems' output. But when we do get a glimpse at training data, there's often cause for concern. For example, Human Rights Watch released a report on June 10, 2024, that showed that a popular training dataset includes many photos of identifiable kids.

The history of big tech self-regulation is filled with moving goal posts. OpenAI in particular is notorious for asking for forgiveness rather than permission and has faced increasing criticism for putting profit over safety.

Concerns over the use of user-generated content for training AI models typically center on intellectual property, but there are also privacy issues. YouTube is a vast, unwieldy archive, impossible to fully review.


Models trained on a subset of professionally produced videos could conceivably be an AI company's first training corpus. But without strong policies in place, any company that ingests more than the popular tip of the iceberg is likely including content that violates the Federal Trade Commission's Children's Online Privacy Protection Rule, which prevents companies from collecting data from children under 13 without parental notice and consent.

With last year's executive order on AI and at least one promising proposal on the table for comprehensive privacy legislation, there are signs that legal protections for user data in the U.S. might become more robust.

When the Wall Street Journal's Joanna Stern asked OpenAI CTO Mira Murati whether OpenAI trained its text-to-video generator Sora on YouTube videos, she said she wasn't sure.

Have you unwittingly helped train ChatGPT?

The intentions of a YouTube uploader simply aren't as consistent or predictable as those of someone publishing a book, writing an article for a magazine or displaying a painting in a gallery. But even if YouTube's algorithm ignores your upload and it never gets more than a handful of views, it may be used to train models like ChatGPT and Gemini.

As far as AI is concerned, your family reunion video may be just as important as those uploaded by influencer giant Mr. Beast or CNN.

Ryan McGrady, Senior Researcher, Initiative for Digital Public Infrastructure, UMass Amherst and Ethan Zuckerman, Associate Professor of Public Policy, Communication, and Information, UMass Amherst


This article is republished from The Conversation under a Creative Commons license. Read the original article.


Lucy, discovered 50 years ago in Ethiopia, stood just 3.5 feet tall − but she still towers over our understanding of human origins

Published on theconversation.com – Denise Su, Associate Professor of Human Evolution and Social Change, Arizona State University – 2024-06-27 07:23:34
The reconstructed skeleton of Lucy, found in Hadar, Ethiopia, in 1974, and Grace Latimer, then age 4, daughter of a research team member.
James St. John/Flickr, CC BY

Denise Su, Arizona State University

In 1974, on a survey in Hadar in the remote badlands of Ethiopia, U.S. paleoanthropologist Donald Johanson and graduate student Tom Gray found a piece of an elbow joint jutting from the dirt in a gully. It proved to be the first of 47 bones of a single individual – an early human ancestor whom Johanson nicknamed “Lucy.” Her discovery would overturn what scientists thought they knew about the evolution of our own lineage.

Lucy was a member of the species Australopithecus afarensis, an extinct hominin – a group that includes humans and our fossil relatives. Australopithecus afarensis lived from 3.8 million years ago to 2.9 million years ago, in the region that is now Ethiopia, Kenya and Tanzania. Dated to 3.2 million years ago, Lucy was the oldest and most complete human ancestor ever found at the time of her discovery.

Two features set humans apart from all other primates: big brains and standing and walking on two legs instead of four. Prior to Lucy's discovery, scientists thought that our large brains must have evolved first, because all known human fossils at the time already had large brains. But Lucy stood on two feet and had a small brain, not much larger than that of a chimpanzee.


This was immediately clear when scientists reconstructed her skeleton in Cleveland, Ohio. A photographer took a picture of 4-year-old Grace Latimer – who was visiting her father, Bruce Latimer, a member of the research team – standing next to Lucy. The two were roughly the same size, providing a simple illustration of Lucy's small stature and brain. And Lucy was not a young child: Based on her teeth and bones, scientists estimated that she was fully adult when she died.

The reconstruction also demonstrated how human Lucy was – especially her posture. Along with the 1978 discovery in Tanzania of fossilized footprint trails 3.6 million years old, made by members of her species, Lucy proved unequivocally that standing and walking upright was the first step in becoming human. In fact, large brains did not show up in our lineage until well over 1 million years after Lucy lived.

A human spine and pelvis, with brown fossilized bones and modern white replacements.
Part of Lucy's reconstructed skeleton, on display at the Cleveland Museum of Natural History in 2006.
James St. John/Flickr, CC BY

Lucy's bones show adaptations that allow for upright posture and bipedal locomotion. In particular, her femur, or upper leg bone, is angled; her spine is S-curved; and her pelvis, or hip bone, is short and bowl-shaped.

These features can also be found in modern human skeletons. They allow us, as they enabled Lucy, to stand, walk and run on two legs without falling over – even when balanced on one leg in mid-stride.

In the 50 years since Lucy's discovery, her impact on scientists' understanding of human origins has been immeasurable. She has inspired paleoanthropologists to survey unexplored areas, pose new hypotheses and develop and use novel techniques and methodologies.


Even as new fossils are discovered, Lucy remains central to modern research on human origins. As an anthropologist and paleoecologist, I know that she is still the reference point for understanding the anatomy of early human ancestors and the evolution of our own bodies. Knowledge of the human fossil record and the evolution of our lineage has increased exponentially, building on the foundation of Lucy's discovery.

Denise Su, Associate Professor of Human Evolution and Social Change, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

