
Keeping astronauts healthy in space isn’t easy − new training programs will prepare students to perform medicine while thousands of miles away from Earth

theconversation.com – Arian Anderson, Emergency Medicine Physician, University of Colorado Anschutz Medical Campus – 2024-06-18 07:39:04

Space medicine professionals in training consult with each other during a simulation exercise.

Katya Arquilla

Arian Anderson, University of Colorado Anschutz Medical Campus

In the coming decade, more people will go to space than ever before as human spaceflight enters a new era. NASA, the European Space Agency and other governmental agencies are partnering to develop crewed missions beyond the Moon. At the same time, these agencies are collaborating with private companies using new technologies to drive down the price of space exploration.

Companies such as SpaceX, Blue Origin and Sierra Space have developed vehicles with reusable boosters, automated flight and lightweight materials to support these deep space missions. Some even have ambitions of their own to build private space stations, Moon bases or mining operations in the coming decades.

But as these technologies and partnerships rapidly make spaceflight more accessible, new challenges emerge. Chief among them is maintaining the health and performance of an astronaut crew. My team of researchers and educators at the University of Colorado, along with others around the world, is looking to address this issue.

A group of people in orange jumpsuits stand around a table, with a person lying on it.

With spaceflight set to expand, astronauts will need access to medical care over longer voyages and on commercial flights.

Katya Arquilla

Emerging medical challenges in space

NASA astronauts are some of the most accomplished people on the planet, and they're some of the healthiest. Astronauts undergo extensive medical and psychological testing that in one study disqualified 26% of final-round applicants. This rigorous screening and testing process effectively limits the risk of a medical emergency occurring during a mission.

But as spaceflight becomes more accessible, astronaut crews on commercial missions will likely make up the majority of space travelers in the coming years. Private missions will be short and stay in a close orbit around Earth in the near term, but private crews will likely have less training and more chronic medical conditions than the professional astronauts currently living and working in space.

While experiments aboard the International Space Station have extensively studied the normal physiological changes occurring to the human body in weightlessness, there is limited to no data about how common chronic diseases such as diabetes or high blood pressure behave in the space environment.

Mars, shown from space.

During Mars missions, astronauts will be away from Earth for long periods of time, with limited access to medical resources.

CU/LASP EMM/EXI ITF/Kevin M. Gill, CC BY

This industry boom is also creating opportunities for long-duration missions to the Moon and Mars. Because of the length of missions and the distance from Earth, professional astronauts on these missions will experience prolonged weightlessness, leading to bone and muscle loss, communication delays of a few seconds up to 40 minutes, and extreme isolation for months to years at a time.

Crews must function autonomously, while being exposed to new hazards such as lunar or Martian dust. Because of the fuel required for these missions, resources will be limited to the lowest mass and volume possible.

As a result, mission planners will need to make difficult decisions in advance about which supplies are truly necessary, with limited or no resupply opportunities for food, water and medicine. In space, for example, radiation and humidity inside a spacecraft can cause medications to deteriorate more quickly and become ineffective or even toxic to crew members.

Crews on the space station have access to a flight surgeon at Mission Control to manage medical care in the same way telehealth is used on Earth. Crews on distant planets, however, will need to perform medical care or procedures autonomously.

In the event of a medical emergency, crews may not be able to evacuate to Earth. Unlike the space station, where medical evacuations to Earth can occur in less than 24 hours, lunar evacuations may take weeks. Evacuations from Mars may not be possible for months or even years.

Put simply, the current approaches to medical care in spaceflight will not meet the needs of future commercial and professional astronauts. Researchers will need to develop new technologies and novel training approaches to prepare future providers to treat medical conditions in space.

The current leaders in space medicine are experts in either aerospace engineering or medicine, but they rarely have formal training in, or a complete understanding of, both fields. And experts in these disciplines often can't speak each other's language, both literally and figuratively.

Training the next generation

To meet the evolving demands of human spaceflight, educators and universities are looking to develop a way to train specialists who understand both the limitations of the human body and the constraints of engineering design.

Some schools and hospitals, such as the University of Texas Medical Branch, have residency training programs for medical school graduates in aerospace medicine. Others, such as UCLA and Massachusetts General Hospital, have specialty training programs in space medicine, but these currently target fully trained emergency medicine physicians.

My team at the University of Colorado has created a program that integrates human physiology and engineering principles to train medical students to think like engineers.

Two domed tents connected by long tubes, in the desert.

The University of Colorado brings students to the desert to simulate a lunar base. Students work together to solve simulated medical issues that might occur during a space mission.

Katya Arquilla

This program aims to help students understand human health and performance in the spaceflight environment. It approaches these topics from an engineering design and constraints perspective to find solutions to the challenges astronauts will face.

One of our most popular classes is called Mars in Simulated Surface Environments. This class puts students through engineering and medical scenarios in a simulated Mars environment in the Utah desert. Students deal with the challenges of working and providing care while wearing a spacesuit on a desolate, Mars-like landscape.

The stress of the simulations can feel real to the students, and they learn to apply their combined skill sets to care for their fellow crew members.

Educational programs like these aim to create cross-trained specialists who understand both patient care and the procedural nature of engineering design and can merge the two, whether for space tourists in orbit or pioneers on the surface of another planet.

A new period of spaceflight is here, and these programs are already training experts to make space accessible and safe.

Arian Anderson, Emergency Medicine Physician, University of Colorado Anschutz Medical Campus

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Federal funding for major science agencies is at a 25-year low

theconversation.com – Chris Impey, University Distinguished Professor of Astronomy, University of Arizona – 2024-06-28 07:19:14
Support for science has traditionally been bipartisan, but fights over spending have affected research funding.
AP Photo/J. Scott Applewhite

Chris Impey, University of Arizona

Government funding for science is usually immune from political gridlock and polarization in Congress. But federal funding for science is slated to drop for 2025.

Science research dollars are considered discretionary, which means the funding has to be approved by Congress every year. But it's not in a budget category with larger entitlement programs like Medicare and Social Security, which are generally considered untouchable by politicians of both parties.

Federal investment in scientific research encompasses everything from large telescopes supported by the National Science Foundation to NASA satellites studying climate change, programs studying the use and governance of artificial intelligence at the National Institute of Standards and Technology, and research on Alzheimer's disease funded by the National Institutes of Health.

Studies show that increasing federal research spending boosts productivity and economic competitiveness.

I'm an astronomer and also a senior university administrator. As an administrator, I've been involved in lobbying for research funding as associate dean of the College of Science at the University of Arizona, and in encouraging government investment in astronomy as a vice president of the American Astronomical Society. I've seen the importance of this kind of funding as a researcher who has had federal grants for 30 years, and as a senior academic who helps my colleagues write grants to support their valuable work.

Bipartisan support

Federal funding for many programs is characterized by political polarization, meaning that partisanship and ideological divisions between the two main political parties can lead to gridlock. Science is usually a rare exception to this problem.

The public shows strong bipartisan support for federal investment in scientific research, and Congress has generally followed suit, passing bills in 2024 with bipartisan backing in April and June.

The House passed these bills, and after reconciliation with language from the Senate, they resulted in final bills to direct US$460 billion in government spending.

However, policy documents produced by Congress reveal a partisan split in how Democratic and Republican lawmakers reference scientific research.

Congressional committees for both sides are citing more scientific papers, but there is only a 5% overlap in the papers they cite. That means that the two parties are using different evidence to make their funding decisions, rather than working from a scientific consensus. Committees under Democratic control were almost twice as likely to cite technical papers as panels led by Republicans, and they were more likely to cite papers that other scientists considered important.

Ideally, all the best ideas for scientific research would receive federal funds. But limited support for scientific research in the United States means that for individual scientists, getting funding is a highly competitive process.

At the National Science Foundation, only 1 in 4 proposals are accepted. Success rates for funding through the National Institutes of Health are even lower, with 1 in 5 proposals getting accepted. This low success rate means that the agencies have to reject many proposals that are rated excellent by the merit review process.

Scientists are often reluctant to publicly advocate for their programs, in part because they feel disconnected from the policymaking and appropriations process. Their academic training doesn't equip them to communicate effectively to legislators and policy experts.

Budgets are down

Research has received steady funding for the past few decades, but this year Congress reduced appropriations for science at many top government agencies.

The National Science Foundation budget is down 8%, which led agency leaders to warn Congress that the country may lose its ability to attract and train a scientific workforce.

The cut to the NSF is particularly disappointing since Congress promised it an extra $81 billion over five years when the CHIPS and Science Act passed in 2022. A deal to limit government spending in exchange for suspending the debt ceiling made the act's goals hard to achieve.

NASA's science budget is down 6%, and the budget for the National Institutes of Health, whose research aims to prevent disease and improve public health, is down 1%. Only the Department of Energy's Office of Science got a bump, a modest 2%.

As a result, the major science agencies are nearing a 25-year low for their funding levels, as a share of U.S. gross domestic product.

Feeling the squeeze

Investment in research and development by the business sector is strongly increasing. In 1990, it was slightly higher than federal investment, but by 2020 it was nearly four times higher.

The distinction is important because business investment tends to focus on later stage and applied research, while federal funding goes to pure and exploratory research that can have enormous downstream benefits, such as for quantum computing and fusion power.

There are several causes of the science funding squeeze. Congressional intentions to increase funding levels, as with the CHIPS and Science Act, and the earlier COMPETES Act in 2007, have been derailed by fights over the debt limit and threats of government shutdowns.

The CHIPS Act aimed to spur investment and job creation in semiconductor manufacturing, while the COMPETES Act aimed to increase U.S. competitiveness in a wide range of high-tech industries such as space exploration.

The CHIPS and Science Act aims to stimulate semiconductor production in the U.S. and fund research.

The budget caps for fiscal years 2024 and 2025 eliminated any possibility of growth. The budget caps were designed to rein in federal spending, but they are a very blunt tool. Also, nondefense discretionary spending is only 15% of all federal spending. Discretionary spending is up for a vote every year, while mandatory spending is dictated by prior laws.

Entitlement programs like Medicare and Social Security are mandatory forms of spending. Taken together, they are three times larger than the amount available for discretionary spending, so science has to fight over a small fraction of the overall budget pie.

Within that 15% slice, scientific research competes with K-12 education, veterans' health care, public health, initiatives for small businesses, and more.

Global competition

While government science funding in the U.S. is stagnant, America's main scientific rivals are rising fast.

Federal R&D funding as a percentage of GDP has dropped from 1.2% in 1987 to 1% in 2010 to under 0.8% currently. The United States is still the world's biggest spender on research and development, but in terms of government R&D as a fraction of GDP, the United States ranked 12th in 2021, behind South Korea and a set of European countries. In terms of science researchers as a portion of the labor force, the United States ranks 10th.

Meanwhile, America's main geopolitical rival is rising fast. China has eclipsed the United States in high-impact papers published, and China now spends more than the United States on university and government research.

If the U.S. wants to keep its status as the world leader in scientific research, it'll need to redouble its commitment to science by appropriately funding research.

Chris Impey, University Distinguished Professor of Astronomy, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI companies train language models on YouTube’s archive − making family-and-friends videos a privacy risk

theconversation.com – Ryan McGrady, Senior Researcher, Initiative for Digital Public Infrastructure, UMass Amherst – 2024-06-27 07:23:53
Your kid's silly videos could be fodder for ChatGPT.
Halfpoint/iStock via Getty Images

Ryan McGrady, UMass Amherst and Ethan Zuckerman, UMass Amherst

The promised artificial intelligence revolution requires data. Lots and lots of data. OpenAI and Google have begun using YouTube videos to train their text-based AI models. But what does the YouTube archive actually include?

Our team of digital media researchers at the University of Massachusetts Amherst collected and analyzed random samples of YouTube videos to learn more about that archive. We published an 85-page paper about that dataset and set up a website called TubeStats for researchers and journalists who need basic information about YouTube.

Now, we're taking a closer look at some of our more surprising findings to better understand how these obscure videos might become part of powerful AI systems. We've found that many YouTube videos are meant for personal use or for small groups of people, and a significant proportion were created by children who appear to be under 13.

Bulk of the YouTube iceberg

Most people's experience of YouTube is algorithmically curated: Up to 70% of the videos users watch are recommended by the site's algorithms. Recommended videos are typically popular content such as influencer stunts, clips, explainer videos, travel vlogs and video reviews, while content that is not recommended languishes in obscurity.

Some YouTube content emulates popular creators or fits into established genres, but much of it is personal: celebrations, selfies set to music, homework assignments, video game clips without context and kids dancing. The obscure side of YouTube – the vast majority of the estimated 14.8 billion videos created and uploaded to the platform – is poorly understood.

Illuminating this aspect of YouTube – and social media generally – is difficult because big tech companies have become increasingly hostile to researchers.

We've found that many videos on YouTube were never meant to be shared widely. We documented thousands of short, personal videos that have few views but high engagement – likes and comments – implying a small but highly engaged audience. These were clearly meant for a small audience of friends and family. Such social uses of YouTube contrast with videos that try to maximize their audience, suggesting another way to use YouTube: as a video-centered social network for small groups.

Other videos seem intended for a different kind of small, fixed audience: recorded classes from pandemic-era virtual instruction, school board meetings and work meetings. While not what most people think of as social uses, they likewise imply that their creators have a different expectation about the audience for the videos than creators of the kind of content people see in their recommendations.

Fuel for the AI machine

It was with this broader understanding that we read The New York Times exposé on how OpenAI and Google turned to YouTube in a race to find new troves of data to train their large language models. An archive of YouTube transcripts makes an extraordinary dataset for text-based models.

There is also speculation, fueled in part by an evasive answer from OpenAI's chief technology officer Mira Murati, that the videos themselves could be used to train AI text-to-video models such as OpenAI's Sora.

The New York Times story raised concerns about YouTube's terms of service and, of course, the copyright issues that pervade much of the debate about AI. But there's another problem: How could anyone know what an archive of more than 14 billion videos, uploaded by people all over the world, actually contains? It's not entirely clear that Google knows or even could know if it wanted to.

Kids as content creators

We were surprised to find an unsettling number of videos featuring kids or apparently created by them. YouTube requires uploaders to be at least 13 years old, but we frequently saw children who appeared to be much younger than that, typically dancing, singing or playing video games.

In our preliminary research, our coders determined that nearly a fifth of random videos with at least one person's face visible likely included someone under 13. We didn't take into account whether videos were clearly shot with the consent of a parent or guardian.

Our current sample size of 250 is relatively small – we are working on coding a much larger sample – but the findings thus far are consistent with what we've seen in the past. We're not aiming to scold Google. Age validation on the internet is infamously difficult and fraught, and we have no way of determining whether these videos were uploaded with the consent of a parent or guardian. But we want to underscore what is being ingested by these large companies' AI models.
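
To put the "relatively small" caveat in rough numbers, the sketch below is an illustrative Python calculation, not part of the UMass study's methods: it computes a 95% Wilson confidence interval for an observed proportion, using assumed round figures drawn from the article (roughly 50 of 250 coded videos). The exact denominators in the study may differ.

# Illustrative only: Wilson 95% confidence interval for a binomial proportion,
# using assumed round numbers from the article (about one-fifth of 250 videos).
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for an observed proportion successes / n."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

low, high = wilson_interval(successes=50, n=250)
print(f"estimate = {50 / 250:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
# With these assumed numbers the interval spans roughly 0.16 to 0.25 -- wide
# enough to show why a sample of 250 is described as relatively small.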

Small reach, big influence

It's tempting to assume OpenAI is using highly produced influencer videos or TV newscasts posted to the platform to train its models, but previous research on large language model training data shows that the most popular content is not always the most influential in training AI models. A virtually unwatched conversation between three friends could have much more linguistic value in training a chatbot language model than a music video with millions of views.

Unfortunately, OpenAI and other AI companies are quite opaque about their training materials: They don't specify what goes in and what doesn't. Most of the time, researchers can infer problems with training data through biases in AI systems' output. But when we do get a glimpse at training data, there's often cause for concern. For example, Human Rights Watch released a report on June 10, 2024, that showed that a popular training dataset includes many photos of identifiable kids.

The history of big tech self-regulation is filled with moving goal posts. OpenAI in particular is notorious for asking for forgiveness rather than permission and has faced increasing criticism for putting profit over safety.

Concerns over the use of user-generated content for training AI models typically center on intellectual property, but there are also privacy issues. YouTube is a vast, unwieldy archive that is impossible to fully understand.

Models trained on a subset of professionally produced videos could conceivably be an AI company's first training corpus. But without strong policies in place, any company that ingests more than the popular tip of the iceberg is likely including content that violates the Federal Trade Commission's Children's Online Privacy Protection Rule, which prevents companies from collecting data from children under 13 without parental notice and consent.

With last year's executive order on AI and at least one promising proposal on the table for comprehensive privacy legislation, there are signs that legal protections for user data in the U.S. might become more robust.

When the Wall Street Journal's Joanna Stern asked OpenAI CTO Mira Murati whether OpenAI trained its text-to-video generator Sora on YouTube videos, she said she wasn't sure.

Have you unwittingly helped train ChatGPT?

The intentions of a YouTube uploader simply aren't as consistent or predictable as those of someone publishing a book, writing an article for a magazine or displaying a painting in a gallery. But even if YouTube's algorithm ignores your upload and it never gets more than a handful of views, it may be used to train models like ChatGPT and Gemini.

As far as AI is concerned, your family reunion video may be just as important as those uploaded by influencer giant Mr. Beast or CNN.

Ryan McGrady, Senior Researcher, Initiative for Digital Public Infrastructure, UMass Amherst and Ethan Zuckerman, Associate Professor of Public Policy, Communication, and Information, UMass Amherst

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Lucy, discovered 50 years ago in Ethiopia, stood just 3.5 feet tall − but she still towers over our understanding of human origins

theconversation.com – Denise Su, Associate Professor of Human Evolution and Social Change, Arizona State University – 2024-06-27 07:23:34
The reconstructed skeleton of Lucy, found in Hadar, Ethiopia, in 1974, and Grace Latimer, then age 4, daughter of a research team member.
James St. John/Flickr, CC BY

Denise Su, Arizona State University

In 1974, on a survey in Hadar in the remote badlands of Ethiopia, U.S. paleoanthropologist Donald Johanson and graduate student Tom Gray found a piece of an elbow joint jutting from the dirt in a gully. It proved to be the first of 47 bones of a single individual – an early human ancestor whom Johanson nicknamed “Lucy.” Her discovery would overturn what scientists thought they knew about the evolution of our own lineage.

Lucy was a member of the species Australopithecus afarensis, an extinct hominin – a group that includes humans and our fossil relatives. Australopithecus afarensis lived from 3.8 million years ago to 2.9 million years ago, in the region that is now Ethiopia, Kenya and Tanzania. Dated to 3.2 million years ago, Lucy was the oldest and most complete human ancestor ever found at the time of her discovery.

Two features set humans apart from all other primates: big brains and standing and walking on two legs instead of four. Prior to Lucy's discovery, scientists thought that our large brains must have evolved first, because all known human fossils at the time already had large brains. But Lucy stood on two feet and had a small brain, not much larger than that of a chimpanzee.

This was immediately clear when scientists reconstructed her skeleton in Cleveland, Ohio. A photographer took a picture of 4-year-old Grace Latimer – who was visiting her father, Bruce Latimer, a member of the research team – standing next to Lucy. The two were roughly the same size, providing a simple illustration of Lucy's small stature and brain. And Lucy was not a young child: Based on her teeth and bones, scientists estimated that she was fully adult when she died.

The reconstruction also demonstrated how human Lucy was – especially her posture. Along with the 1978 discovery in Tanzania of fossilized footprint trails 3.6 million years old, made by members of her species, Lucy proved unequivocally that standing and walking upright was the first step in becoming human. In fact, large brains did not show up in our lineage until well over 1 million years after Lucy lived.

A human spine and pelvis, with brown fossilized bones and modern white replacements.
Part of Lucy's reconstructed skeleton, on display at the Cleveland Museum of Natural History in 2006.
James St. John/Flickr, CC BY

Lucy's bones show adaptations that allow for upright posture and bipedal locomotion. In particular, her femur, or upper leg bone, is angled; her spine is S-curved; and her pelvis, or hip bone, is short and bowl-shaped.

These features can also be found in modern human skeletons. They allow us, as they enabled Lucy, to stand, walk and run on two legs without falling over – even when balanced on one leg in mid-stride.

In the 50 years since Lucy's discovery, her impact on scientists' understanding of human origins has been immeasurable. She has inspired paleoanthropologists to survey unexplored regions, pose new hypotheses, and develop and use novel techniques and methodologies.

Even as new fossils are discovered, Lucy remains central to modern research on human origins. As an anthropologist and paleoecologist, I know that she is still the reference point for understanding the anatomy of early human ancestors and the evolution of our own bodies. Knowledge of the human fossil record and the evolution of our lineage has increased exponentially, building on the foundation of Lucy's discovery.

Denise Su, Associate Professor of Human Evolution and Social Change, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
