
Looking back toward cosmic dawn – astronomers confirm the faintest galaxy ever seen


A phenomenon called gravitational lensing can help astronomers observe faint, hard-to-see galaxies.
NASA/STScI

Guido Roberts-Borsani, University of California, Los Angeles

The universe we live in is a transparent one, where light from stars and galaxies shines bright against a clear, dark backdrop. But this wasn’t always the case – in its early years, the universe was filled with a fog of hydrogen atoms that obscured light from the earliest stars and galaxies.

The early universe was filled with a fog made up of hydrogen atoms until the first stars and galaxies burned it away.
NASA/JPL-Caltech, CC BY

The intense ultraviolet light from the first generations of stars and galaxies is thought to have burned through the hydrogen fog, transforming the universe into what we see today. While previous generations of telescopes lacked the ability to study those early cosmic objects, astronomers are now using the James Webb Space Telescope’s superior technology to study the stars and galaxies that formed in the immediate aftermath of the Big Bang.

I’m an astronomer who studies the farthest galaxies in the universe using the world’s foremost ground- and space-based telescopes. Using new observations from the Webb telescope and a phenomenon called gravitational lensing, my team confirmed the existence of the faintest galaxy currently known in the early universe. The galaxy, called JD1, is seen as it was when the universe was only 480 million years old, or 4% of its present age.

A brief history of the early universe

The first billion years of the universe’s life were a crucial period in its evolution. In the first moments after the Big Bang, matter and light were bound to each other in a hot, dense “soup” of fundamental particles.

However, a fraction of a second after the Big Bang, the universe expanded extremely rapidly. This expansion eventually allowed the universe to cool enough for light and matter to separate out of their “soup” and – some 380,000 years later – form hydrogen atoms. The hydrogen atoms appeared as an intergalactic fog, and with no light from stars and galaxies, the universe was dark. This period is known as the cosmic dark ages.

The arrival of the first generations of stars and galaxies several hundred million years after the Big Bang bathed the universe in extremely hot UV light, which burned – or ionized – the hydrogen fog. This process yielded the transparent, complex and beautiful universe we see today.

Astronomers like me call the first billion years of the universe – when this hydrogen fog was burning away – the epoch of reionization. To fully understand this time period, we study when the first stars and galaxies formed, what their main properties were and whether they were able to produce enough UV light to burn through all the hydrogen.

A visual model showing the burning of hydrogen fog by UV light in the ‘reionization’ era. Ionized, or burned, regions are blue and translucent. Ionization fronts are red and white, and neutral regions are dark and opaque. Via djxatlanta on YouTube.

The search for faint galaxies in the early universe

The first step toward understanding the epoch of reionization is finding and confirming the distances to galaxies that astronomers think might be responsible for this process. Since light travels at a finite speed, it takes time to arrive at our telescopes, so astronomers see objects as they were in the past.

For example, light from the center of our galaxy, the Milky Way, takes about 27,000 years to reach us on Earth, so we see it as it was 27,000 years in the past. That means that if we want to see back to the very first instants after the Big Bang (the universe is 13.8 billion years old), we have to look for objects at extreme distances.
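
Those two numbers are all it takes to see why JD1 counts as a glimpse of cosmic dawn. Here is a quick back-of-the-envelope check, a minimal sketch in Python that uses only the 13.8-billion-year age of the universe and the 480-million-year epoch quoted above:

# Back-of-the-envelope check on the numbers quoted above.
age_of_universe_gyr = 13.8   # present age of the universe, in billions of years
age_at_jd1_gyr = 0.48        # the universe's age when JD1 emitted the light we now see

fraction = age_at_jd1_gyr / age_of_universe_gyr
lookback_gyr = age_of_universe_gyr - age_at_jd1_gyr

print(f"JD1 is seen at about {fraction:.1%} of the universe's present age")   # ~3.5%, roughly 4%
print(f"Its light has been traveling for roughly {lookback_gyr:.1f} billion years")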

Because galaxies residing in this time period are so far away, they appear extremely faint and small to our telescopes and emit most of their light in the infrared. This means astronomers need powerful infrared telescopes like Webb to find them. Prior to Webb, virtually all of the distant galaxies found by astronomers were exceptionally bright and large, simply because our telescopes weren’t sensitive enough to see the fainter, smaller galaxies.

However, it is these fainter galaxies, not the bright ones, that are far more numerous, more representative and more likely to be the main drivers of the reionization process. So, these faint galaxies are the ones astronomers need to study in greater detail. It’s like trying to understand the evolution of humans by studying entire populations rather than a few very tall people. By allowing us to see faint galaxies, Webb is opening a new window into studying the early universe.

A typical early galaxy

JD1 is one such “typical” faint galaxy. It was discovered in 2014 with the Hubble Space Telescope and flagged as a suspected distant galaxy. But Hubble didn’t have the capabilities or sensitivity to confirm its distance – it could make only an educated guess.

Small and faint nearby galaxies can sometimes be mistaken for distant ones, so astronomers need to be sure of their distances before making claims about their properties. Distant galaxies therefore remain “candidates” until they are confirmed. The Webb telescope finally has the capabilities to confirm such candidates, and JD1 was one of the first major confirmations by Webb of an extremely distant galaxy candidate found by Hubble. This confirmation ranks it as the faintest galaxy yet seen in the early universe.

To confirm JD1, an international team of astronomers and I used Webb’s near-infrared spectrograph, NIRSpec, to obtain an infrared spectrum of the galaxy. The spectrum allowed us to pinpoint the galaxy’s distance from Earth and determine its age, the number of young stars it has formed and the amount of dust and heavy elements it has produced.

A sky full of galaxies and a few stars. JD1, pictured in a zoomed-in box, is the faintest galaxy yet found in the early universe.
Guido Roberts-Borsani/UCLA; original images: NASA, ESA, CSA, Swinburne University of Technology, University of Pittsburgh, STScI

Gravitational lensing, nature’s magnifying glass

Even for Webb, JD1 would be impossible to see without a helping hand from nature. JD1 is located behind a large cluster of nearby galaxies, called Abell 2744, whose combined gravitational strength bends and amplifies the light from JD1. This effect, known as gravitational lensing, makes JD1 appear larger and 13 times brighter than it ordinarily would.
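
To put that factor of 13 in the units astronomers usually work in, the magnitude scale is logarithmic: a brightness ratio corresponds to a magnitude difference of 2.5 times its base-10 logarithm. A minimal sketch, where the factor of 13 is the value quoted above and the rest is the textbook definition:

import math

magnification = 13.0  # lensing boost to JD1's apparent brightness, as quoted above

# Astronomical magnitudes are logarithmic: a flux ratio f corresponds to a
# magnitude difference of 2.5 * log10(f). Smaller magnitudes mean brighter objects.
delta_mag = 2.5 * math.log10(magnification)

print(f"A {magnification:.0f}x lensing boost makes JD1 about {delta_mag:.1f} magnitudes brighter")
# ~2.8 magnitudes of extra depth, the boost that lifted JD1 above Webb's detection limit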

Large galaxies can warp and distort light traveling around them. This video shows how this process, called gravitational lensing, works.

Without gravitational lensing, astronomers would not have seen JD1, even with Webb. The combination of JD1’s gravitational magnification and new images from another one of Webb’s near-infrared instruments, NIRCam, made it possible for our team to study the galaxy’s structure in unprecedented detail and resolution.

Not only does this mean we as astronomers can study the inner regions of early galaxies, it also means we can start determining whether such early galaxies were small, compact and isolated sources, or if they were merging and interacting with nearby galaxies. By studying these galaxies, we are tracing back to the building blocks that shaped the universe and gave rise to our cosmic home.

Guido Roberts-Borsani, Postdoctoral Researcher in Astrophysics, University of California, Los Angeles

This article is republished from The Conversation under a Creative Commons license. Read the original article.


An 83-year-old short story by Borges portends a bleak future for the internet


theconversation.com – Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis – 2024-11-19 07:22:00

Fifty years before the architecture for the web was created, Jorge Luis Borges had already imagined an analog equivalent.
Sophie Bassouls/Sygma via Getty Images

Roger J. Kreuz, University of Memphis

How will the internet evolve in the coming decades?

Fiction writers have explored some possibilities.

In his 2019 novel “Fall,” science fiction author Neal Stephenson imagined a near future in which the internet still exists. But it has become so polluted with misinformation, disinformation and advertising that it is largely unusable.

Characters in Stephenson’s novel deal with this problem by subscribing to “edit streams” – human-selected news and information that can be considered trustworthy.

The drawback is that only the wealthy can afford such bespoke services, leaving most of humanity to consume low-quality, noncurated online content.

To some extent, this has already happened: Many news organizations, such as The New York Times and The Wall Street Journal, have placed their curated content behind paywalls. Meanwhile, misinformation festers on social media platforms like X and TikTok.

Stephenson’s record as a prognosticator has been impressive – he anticipated the metaverse in his 1992 novel “Snow Crash,” and a key plot element of his “Diamond Age,” released in 1995, is an interactive primer that functions much like a chatbot.

On the surface, chatbots seem to provide a solution to the misinformation epidemic. By dispensing factual content, chatbots could supply alternative sources of high-quality information that aren’t cordoned off by paywalls.

Ironically, however, the output of these chatbots may represent the greatest danger to the future of the web – one that was hinted at decades earlier by Argentine writer Jorge Luis Borges.

The rise of the chatbots

Today, a significant fraction of the internet still consists of factual and ostensibly truthful content, such as articles and books that have been peer-reviewed, fact-checked or vetted in some way.

The developers of large language models, or LLMs – the engines that power bots like ChatGPT, Copilot and Gemini – have taken advantage of this resource.

To perform their magic, however, these models must ingest immense quantities of high-quality text for training purposes. A vast amount of verbiage has already been scraped from online sources and fed to the fledgling LLMs.

The problem is that the web, enormous as it is, is a finite resource. High-quality text that hasn’t already been strip-mined is becoming scarce, leading to what The New York Times called an “emerging crisis in content.”

This has forced companies like OpenAI to enter into agreements with publishers to obtain even more raw material for their ravenous bots. But according to one prediction, a shortage of additional high-quality training data may strike as early as 2026.

As the output of chatbots ends up online, these second-generation texts – complete with made-up information called “hallucinations,” as well as outright errors, such as suggestions to put glue on your pizza – will further pollute the web.

And if a chatbot hangs out with the wrong sort of people online, it can pick up their repellent views. Microsoft discovered this the hard way in 2016, when it had to pull the plug on Tay, a bot that started repeating racist and sexist content.

Over time, all of these issues could make online content even less trustworthy and less useful than it is today. In addition, LLMs that are fed a diet of low-calorie content may produce even more problematic output that also ends up on the web.

An infinite – and useless – library

It’s not hard to imagine a feedback loop that results in a continuous process of degradation as the bots feed on their own imperfect output.

A July 2024 paper published in Nature explored the consequences of training AI models on recursively generated data. It showed that “irreversible defects” can lead to “model collapse” for systems trained in this way – much as a copy of an image, and a copy of that copy, and a copy of that copy, will lose fidelity to the original.
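
One way to build intuition for that result is a toy simulation. The Python sketch below is not the paper's experiment, just a caricature under two stated assumptions: the “model” in each generation is simply the mean and spread of its training data, and each generation slightly over-samples typical outputs while losing the tails, one of the failure modes the paper describes. Even this crude setup shows how quickly diversity drains away.

import random, statistics

random.seed(42)

# Generation 0: the original, human-made data, with plenty of spread.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(1, 9):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    # Each new generation is trained only on the previous model's output.
    # As a caricature of models favoring "typical" outputs, we keep only
    # samples that fall well inside the core of the distribution.
    samples = [random.gauss(mu, sigma) for _ in range(1000)]
    data = [x for x in samples if abs(x - mu) < 1.5 * sigma]
    print(f"generation {generation}: spread = {statistics.stdev(data):.3f}")

# The spread shrinks every generation: the rare, distinctive material in the
# original data is the first thing to disappear.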

How bad might this get?

Consider Borges’ 1941 short story “The Library of Babel.” Fifty years before computer scientist Tim Berners-Lee created the architecture for the web, Borges had already imagined an analog equivalent.

In his 3,000-word story, the writer imagines a world consisting of an enormous and possibly infinite number of hexagonal rooms. The bookshelves in each room hold uniform volumes that must, its inhabitants intuit, contain every possible permutation of letters in their alphabet.

In Borges’ imaginary, endlessly expansive library of content, finding something meaningful is like finding a needle in a haystack.
aire images/Moment via Getty Images

Initially, this realization sparks joy: By definition, there must exist books that detail the future of humanity and the meaning of life.

The inhabitants search for such books, only to discover that the vast majority contain nothing but meaningless combinations of letters. The truth is out there – but so is every conceivable falsehood. And all of it is embedded in an inconceivably vast amount of gibberish.

Even after centuries of searching, only a few meaningful fragments are found. And even then, there is no way to determine whether these coherent texts are truths or lies. Hope turns into despair.
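
That despair is easy to quantify. In the story itself – these figures come from Borges’ text, not from the article above – every book has the same format: 410 pages, 40 lines per page, roughly 80 characters per line, drawn from an alphabet of 25 symbols. Treating those figures as exact, a few lines of Python show why centuries of searching turn up almost nothing:

import math

# Figures from Borges' story (not from the article above).
pages, lines_per_page, chars_per_line, symbols = 410, 40, 80, 25

chars_per_book = pages * lines_per_page * chars_per_line   # 1,312,000 characters
digits = chars_per_book * math.log10(symbols)              # size of 25**1,312,000 in decimal digits

print(f"characters per book: {chars_per_book:,}")
print(f"possible distinct books: 25**{chars_per_book:,}, a number roughly {round(digits):,} digits long")
# For comparison, the observable universe contains only about 10**80 atoms.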

Will the web become so polluted that only the wealthy can afford accurate and reliable information? Or will an infinite number of chatbots produce so much tainted verbiage that finding accurate information online becomes like searching for a needle in a haystack?

The internet is often described as one of humanity’s great achievements. But like any other resource, it’s important to give serious thought to how it is maintained and managed – lest we end up confronting the dystopian vision imagined by Borges.

Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

This article is republished from The Conversation under a Creative Commons license. Read the original article.


A better understanding of what people do on their devices is key to digital well-being


theconversation.com – Rinanda Shaleha, Doctoral student in the College of Health and Human Development, Penn State – 2024-11-19 07:21:00

What you do on your screens matters as much as how much time you spend on them.
Klaus Vedfelt/DigitalVision via Getty Images

Rinanda Shaleha, Penn State

In an era where digital devices are everywhere, the term “screen time” has become a buzz phrase in discussions about technology’s impact on people’s lives. Parents are concerned about their children’s screen habits. But what if this entire approach to screen time is fundamentally flawed?

While researchers have made advances in measuring screen use, a detailed critique of the research in 2020 revealed major issues in how screen time is conceptualized, measured and studied. I study how digital technology affects human cognition and emotions. My ongoing research with cognitive psychologist Nelson Roque builds on that critique’s findings.

We categorized existing screen-time measures by attributes such as whether they are duration-based or context-specific, and we are studying how they relate to health outcomes such as anxiety, stress, depression, loneliness, mood and sleep quality. The goal is a clearer framework for understanding screen time. We believe that grouping all digital activities together misses how different types of screen use affect people.

By applying this framework, researchers can better identify which digital activities are beneficial or potentially harmful, allowing people to adopt more intentional screen habits that support well-being and reduce negative mental and emotional health effects.

Screen time isn’t one thing

Screen time, at first glance, seems easy to understand: It’s simply the time spent on devices with screens such as smartphones, tablets, laptops and TVs. But this basic definition hides the variety within people’s digital activities. To truly understand screen time’s impact, you need to look closer at specific digital activities and how each affects cognitive function and mental health.

In our research, we divide screen time into four broad categories: educational use, work-related use, social interaction and entertainment.

For education, activities like online classes and reading articles can improve cognitive skills like problem-solving and critical thinking. Digital tools like mobile apps can support learning by boosting motivation, self-regulation and self-control.

But these tools also pose challenges, such as distracting learners and contributing to poorer recall compared with traditional learning methods. For young users, screen-based learning may even have negative impacts on development and their social environment.

Screen time for work, like writing reports or attending virtual meetings, is a central part of modern life. It can improve productivity and enable remote work. However, prolonged screen exposure and multitasking may also lead to stress, anxiety and cognitive fatigue.

Screen use for social connection helps people interact with others through video chats, social media or online communities. These interactions can promote social connectedness and even improve health outcomes such as decreased depressive symptoms and improved glycemic control for people with chronic conditions. But passive screen use, like endless social media scrolling, can lead to negative experiences such as cyberbullying, social comparison and loneliness, especially for teens.

Screen use for entertainment provides relaxation and stress relief. Mindfulness apps or meditation tools, for example, can reduce anxiety and improve emotional regulation. Creative digital activities, like graphic design and music production, can reduce stress and improve mental health. However, too much screen use may reduce well-being by limiting physical activity and time for other rewarding pursuits.
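
To make the framework concrete, here is one minimal way the four categories above could be encoded, using only the examples, benefits and risks mentioned in this article. It is an illustrative Python sketch, not the coding scheme the researchers actually use:

# Illustrative sketch only: the four screen-time categories described above,
# with example activities and outcomes drawn from this article. Not the
# researchers' actual measurement framework.
screen_time_categories = {
    "educational": {
        "examples": ["online classes", "reading articles", "learning apps"],
        "possible_benefits": ["problem-solving", "critical thinking", "self-regulation"],
        "possible_risks": ["distraction", "poorer recall"],
    },
    "work": {
        "examples": ["writing reports", "virtual meetings"],
        "possible_benefits": ["productivity", "remote work"],
        "possible_risks": ["stress", "anxiety", "cognitive fatigue"],
    },
    "social": {
        "examples": ["video chats", "social media", "online communities"],
        "possible_benefits": ["social connectedness", "fewer depressive symptoms"],
        "possible_risks": ["cyberbullying", "social comparison", "loneliness"],
    },
    "entertainment": {
        "examples": ["mindfulness apps", "graphic design", "music production"],
        "possible_benefits": ["relaxation", "stress relief"],
        "possible_risks": ["less physical activity", "crowded-out offline pursuits"],
    },
}

for category, details in screen_time_categories.items():
    print(f"{category}: e.g. {', '.join(details['examples'])}")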

Context matters

Screen time affects people differently based on factors like mood, social setting, age and family environment. Your emotions before and during screen use can shape your experience. Positive interactions can lift your mood, while loneliness might deepen with certain online activities. For example, we found that differences in age and stress levels affect how readily people become distracted on their devices. Alerts and other changes distract users, which makes it more challenging to focus on tasks.

The social context of screen use also matters. Watching a movie with family can strengthen bonds, while using screens alone can increase feelings of isolation, especially when it replaces face-to-face interactions.

Family influence plays a role, too. Parents’ screen habits shape their children’s screen behavior, and structured parental involvement – along with mindful social contexts – can help reduce excessive use and support healthier digital interactions.

Shared screen time with family and friends can boost well-being.
kate_sept2004/E+ via Getty Images

Consistency and nuance

Technology now lets researchers track screen use accurately, but simply counting hours doesn’t give us the full picture. Even when we measure specific activities, like social media or gaming, studies don’t often capture engagement level or intent. For example, someone might use social media to stay informed or to procrastinate.

Studies on screen time often vary in how they define and categorize it. Some focus on total screen exposure without differentiating between activities. Others examine specific types of use but may not account for the content or context. This lack of consistency in defining screen time makes it hard to compare studies or generalize findings.

Understanding screen use requires a more nuanced approach than tracking the amount of time people spend on their screens. Recognizing the different effects of specific digital activities and distinguishing between active and passive use are crucial steps. Using standardized definitions and combining quantitative data with personal insights would provide a fuller picture. Researchers can also study how screen use affects people over time.

For policymakers, this means developing guidelines that move beyond one-size-fits-all limits by focusing on recommendations suited to specific activities and individual needs. For the rest of us, this awareness encourages a balanced digital diet that blends enriching online and offline activities for better well-being.

Rinanda Shaleha, Doctoral student in the College of Health and Human Development, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.


From using plant rinds to high-tech materials, bike helmets have improved significantly over the past 2 centuries


theconversation.com – Jud Ready, Principal Research Engineer in Materials Science and Engineering, Georgia Institute of Technology – 2024-11-18 07:27:00

Modern bike helmets are made through complex materials engineering.
Johner Images via Getty Images

Jud Ready, Georgia Institute of Technology

Imagine – it’s the late 1800s, and you’re riding your high-wheeled penny-farthing bicycle down a dusty road. Sure, it may have some bumps, but if you lose your balance, you’re landing on a relatively soft dirt road. But as the years go by, these roads are replaced with pavement, cobblestones, bricks or wooden slats. All these materials are much harder and still quite bumpy.

As paved roads grew more common across the U.S. and Europe, bicyclists started to suffer gruesome skull fractures and other serious head injuries during falls.

As head injuries became more common, people started seeking out head protection. But the first bike helmets were very different from the helmets of today.

I’m a materials engineer who teaches a course at Georgia Tech about materials science and engineering in sports. The class covers many topics, but particularly helmets, as they’re used in many different sports, including cycling, and the materials they’re made of play an important role in how they work. Over the decades, people have used a wide variety of materials to protect their heads while biking, and companies continue to develop new and innovative materials.

In the beginning, there was the pith helmet.

Pith helmets

The first head protection concept introduced to the biking world was a hat made from pith, which is the spongy rind found in the stem of sola plants (Aeschynomene aspera). Pith helmet craftsmen would press the pith into sheets and laminate it across dome-shaped molds to form a helmet shape. Then, they’d cover the hats in canvas as a form of weatherproofing.

Hats made out of pith were used by militaries as well as for head protection while biking.
Auckland Museum, CC BY-SA

Pith helmets were far from what we would consider a helmet today, but they persisted until the early 20th century, when bicycle-racing clubs emerged. Since pith helmets offered little to no ventilation, the racers began to use halo-shaped leather helmets. These had better airflow and were more comfortable, although they weren’t much better at protecting the head.

Leather strip bike helmets were made in the 1930s.
Museums Victoria, CC BY-SA

Leather halo helmets

The initial concept for the halo helmet used a simple leather strip wrapped around the forehead. But these halo helmets quickly evolved, as riders arranged additional strips longitudinally from front to back. They wrapped the leather bands in wool.

For better head protection, the helmet makers then started adding more layers of leather strips to increase the helmet’s thickness. Eventually, they added different materials such as cotton, foam and other textiles into these leather layers for better protection.

While these had better airflow than the pith hats, the leather “hairnet” helmets continued to offer very little protection during a fall on a paved surface. And, like pith, the leather helmets degraded when exposed to sweat and rain.

Despite these drawbacks, leather strip helmets dominated the market for several decades as cycling continued to evolve throughout the 20th century.

Then, in the 1970s, a nonprofit dedicated to testing motorcycle helmets called the Snell Foundation released new standards for bike helmets. They set their standards so high that only lightweight motorcycle helmets could pass – helmets most bicyclists refused to wear.

New materials and new helmets

The motorcycle equipment manufacturing company Bell Motorsports responded to the new standards by releasing the Bell Biker in 1975. This helmet used expanded polystyrene, or EPS. EPS is the same foam used to manufacture styrofoam coolers. It’s lightweight and absorbs energy well.

Constructing the Bell Biker involved spraying EPS into a dome-shaped mold. The manufacturers used small pellets of a very hard plastic – polycarbonate, or PC – to mold an outer shell and then adhere it to the outside of the EPS.

Expanded polystyrene, or EPS, is a foam used in styrofoam coolers as well as the core of bike helmets.
Tiia Monto/Wikimedia Commons, CC BY-SA

Unlike the pith and leather helmets, this design was lightweight, load bearing, impact absorbing and well ventilated. The PC shell provided a smooth surface so that during a fall, the helmet would skid along the pavement instead of getting jerked around and caught, which could cause abrupt head rotation and lead to concussions and other head and neck injuries.

Over the next two decades, as cycling became more popular, helmet manufacturers tried to strike the perfect balance between lightweight and ventilated helmets, while simultaneously providing impact protection.

In order to decrease weight, a company called Giro Sport Design created an all-EPS helmet covered by a thin lycra fabric cover instead of a hard PC shell. This design eliminated the weight of the PC shell and improved ventilation.

In 1989, a company called Pro Tec introduced a helmet with a nylon mesh infused in the EPS foam core. The nylon mesh dramatically increased the helmet’s structural support without the added weight of the PC shell.

Many racing cyclists found teardrop-style helmets to be more aerodynamic.
Bongarts/Getty Images, CC BY-NC-ND

Meanwhile, as cycling became more competitive, many riders and manufacturers started designing more aerodynamic helmets using the existing materials. A revolutionary teardrop-style helmet debuted at the 1984 Olympics.

Now, even casual biking enthusiasts will don teardrop helmets.

Helmets on the market today

Helmet makers continue to innovate. Today, many commercial brands use a hard polyethylene terephthalate, or PET, shell around the EPS foam in place of a PC shell to increase the helmet’s protection and lifespan, while decreasing cost.

Meanwhile, some brands still use PC shells. Instead of gluing them to the EPS foam, the shell serves as the mold itself, with the EPS expanding to fit inside it. Manufacturing helmets this way eliminates several process steps, as well as any gaps between the foam and shell. This process makes the helmet both stronger and cheaper to manufacture.

As helmets evolve to provide more protection with still lighter weight, materials called copolymers, such as acrylonitrile-butadiene-styrene, are replacing PC and PET shell materials.

Materials that are easier and cheaper to manufacture, such as expanded polyurethane and expanded polypropylene, are also starting to replace the ubiquitous EPS core.

Just as the leather and pith helmets would look strange to a cyclist today, a century from now, bike helmets could be made with entirely new and innovative materials.

Jud Ready, Principal Research Engineer in Materials Science and Engineering, Georgia Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.
