
I’m an astrophysicist mapping the universe with data from the Chandra X-ray Observatory − clear, sharp photos help me study energetic black holes

theconversation.com – Giuseppina Fabbiano, Senior Astrophysicist, Smithsonian Institution – 2024-05-29 07:15:29

NASA's Chandra X-ray Observatory detects X-ray emissions from astronomical phenomena.

NASA/CXC & J. Vaughan

Giuseppina Fabbiano, Smithsonian Institution

When a star is born or dies, or when any other very energetic phenomenon occurs in the universe, it emits X-rays, which are high-energy light particles that aren't visible to the naked eye. These X-rays are the same kind that doctors use to take pictures of broken bones inside the body. But instead of looking at the shadows produced by bones stopping X-rays inside a person, astronomers detect X-rays flying through space to get images of events such as black holes and supernovae.
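
For a rough sense of what "high energy" means here, Planck's relation E = hc/λ lets you compare photons directly. The minimal Python sketch below is only a back-of-the-envelope illustration; the 500-nanometer and 1-nanometer wavelengths are assumed as typical values for visible light and soft X-rays:

```python
# Back-of-the-envelope photon energies using Planck's relation E = h*c/wavelength.
h = 6.626e-34      # Planck constant, joule-seconds
c = 3.0e8          # speed of light, meters per second
eV = 1.602e-19     # joules per electron volt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon, in electron volts."""
    return h * c / wavelength_m / eV

print(photon_energy_ev(500e-9))  # visible light, ~500 nm: about 2.5 eV
print(photon_energy_ev(1e-9))    # soft X-ray, ~1 nm: about 1,240 eV
```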

Images and spectra – charts showing the distribution of light across different wavelengths from an object – are the two main ways astronomers investigate the universe. Images tell them what things look like and where certain phenomena are occurring, while spectra tell them how much energy the photons, or light particles, they are collecting have. Spectra can clue them in to how the objects they came from formed. When studying complex objects, astronomers need both imaging and spectra.

Scientists and engineers designed the Chandra X-ray Observatory to detect these X-rays. Since 1999, Chandra's data has given astronomers incredibly detailed images of some of the universe's most dramatic events.

The Chandra craft, which looks like a long metal tube with six solar panels coming off it in two wings.

The Chandra spacecraft and its components.

NASA/CXC/SAO & J.Vaughan

Stars forming and dying create supernova explosions that send chemical elements out into space. Chandra watches as gas and stars fall into the deep gravitational pulls of black holes, and it bears witness as gas that's a thousand times hotter than the Sun escapes galaxies in explosive winds. It can see when the gravity of huge masses of dark matter traps that hot gas in gigantic pockets.

An explosion of light and color, and a cloud with points of bright light.

On the left is the Cassiopeia A supernova. The image is about 19 light years across, and different colors in the image identify different chemical elements (red indicates silicon, yellow indicates sulfur, cyan indicates calcium, purple indicates iron and blue indicates high energy). The point at the center could be the neutron star remnant of the exploded star. On the right are the colliding ‘Antennae' galaxies, which form a gigantic structure about 30,000 light years across.

Chandra X-ray Center

NASA designed Chandra to orbit around the Earth because it would not be able to see any of this activity from Earth's surface. Earth's atmosphere absorbs X-rays coming from space, which is great for life on Earth because these X-rays can harm biological organisms. But it also means that even if NASA placed Chandra on the highest mountaintop, it still wouldn't be able to detect any X-rays. NASA needed to send Chandra into space.

I am an astrophysicist at the Smithsonian Astrophysical Observatory, part of the Center for Astrophysics | Harvard and Smithsonian. I've been working on Chandra since before it launched 25 years ago, and it's been a pleasure to see what the observatory can teach astronomers about the universe.

Supermassive black holes and their host galaxies

Astronomers have found supermassive black holes, which have masses ten to 100 million times that of our Sun, in the centers of all galaxies. These supermassive black holes are mostly sitting there peacefully, and astronomers can detect them by looking at the gravitational pull they exert on nearby stars.

But sometimes, stars or gas clouds fall into these black holes, which activates them and makes the region close to the black hole emit lots of X-rays. Once activated, they are called active galactic nuclei, AGN, or quasars.

My colleagues and I wanted to better understand what happens to the host galaxy once its black hole turns into an AGN. We picked one galaxy, ESO 428-G014, to look at with Chandra.

An AGN can outshine its host galaxy, which means that more light comes from the AGN than from all the stars and other objects in the host galaxy. The AGN also deposits a lot of energy within the confines of its host galaxy. This effect, which astronomers call feedback, is an important ingredient for researchers who are building simulations that model how the universe evolves over time. But we still don't quite know how much of a role the energy from an AGN plays in the formation of stars in its host galaxy.

Luckily, images from Chandra can provide important insight. I use computational techniques to build and analyze images from the observatory that can tell me about these AGNs.

Three images of a black hole, from low to high resolution, with a bright spot above and right from the center surrounded by clouds.

Getting the ultimate Chandra resolution. From left to right, you see the raw image, the same image at a higher resolution and the image after applying a smoothing algorithm.

G. Fabbiano

The active supermassive black hole in ESO 428-G014 produces X-rays that illuminate a large area, extending as far as 15,000 light years away from the black hole. The basic image that I generated of ESO 428-G014 with Chandra data tells me that the region near the center is the brightest, and that there is a large, elongated region of X-ray emission.

The same data, at a slightly higher resolution, shows two distinct regions with high X-ray emissions. There's a “head,” which encompasses the center, and a slightly curved “tail,” extending down from this central region.

I can also process the data with an adaptive smoothing algorithm that brings the image into an even higher resolution and creates a clearer picture of what the galaxy looks like. This shows clouds of gas around the bright center.
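
The general idea behind adaptive smoothing can be sketched in a few lines: average faint regions over a progressively larger area until the signal is statistically solid, while leaving bright, well-measured pixels nearly untouched. The Python sketch below is only an illustration of that idea, not Chandra's actual processing pipeline; the kernel sizes, the signal-to-noise target and the simple Poisson noise estimate are all assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_smooth(counts, snr_target=3.0, sigmas=(0.5, 1, 2, 4, 8)):
    """Adaptively smooth a 2D X-ray counts image.

    Each pixel takes its value from the most lightly smoothed version of the
    image whose local signal-to-noise ratio (Poisson noise, so SNR grows like
    the square root of the counts gathered by the kernel) reaches snr_target.
    Bright regions stay sharp; faint regions are averaged over larger scales.
    """
    counts = np.asarray(counts, dtype=float)
    result = gaussian_filter(counts, sigmas[-1])  # fallback: heaviest smoothing
    done = np.zeros(counts.shape, dtype=bool)

    for sigma in sigmas:
        smoothed = gaussian_filter(counts, sigma)
        kernel_area = 2.0 * np.pi * sigma**2         # rough effective kernel area
        snr = np.sqrt(np.clip(smoothed * kernel_area, 0, None))
        ok = (snr >= snr_target) & ~done
        result[ok] = smoothed[ok]
        done |= ok
    return result

# Example: a faint diffuse cloud of photons plus one bright, point-like nucleus.
rng = np.random.default_rng(0)
image = rng.poisson(0.2, size=(128, 128)).astype(float)
image[64, 64] += 200.0                 # the bright center survives smoothing
smoothed = adaptive_smooth(image)      # faint emission now shows up as clouds
```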

My team has been able to see some of the ways the AGN interacts with the galaxy. The images show nuclear winds sweeping the galaxy, dense clouds and interstellar gas reflecting X-ray light, and jets shooting out radio waves that heat up clouds in the galaxy.

These images are teaching us how this feedback process operates in detail and how to measure how much energy an AGN deposits. These results will help researchers produce more realistic simulations of how the universe evolves.

The next 25 years of X-ray astronomy

The year 2024 marks the 25th year since Chandra started making observations of the sky. My colleagues and I continue to depend on Chandra to answer questions about the origin of the universe that no other telescope can.

By providing X-ray data, Chandra supplements information from the Hubble Space Telescope and the James Webb Space Telescope, giving astronomers unique answers to open questions in astrophysics, such as where the supermassive black holes found at the centers of all galaxies came from.

For this particular question, astronomers used Chandra to observe a faraway galaxy first observed by the James Webb Space Telescope. This galaxy emitted the light captured by Webb 13.4 billion years ago, when the universe was young. Chandra's X-ray data revealed a bright supermassive black hole in this galaxy and suggested that supermassive black holes may form from the collapse of massive gas clouds in the early universe.

Sharp imaging has been crucial for these discoveries. But Chandra is expected to last only another 10 years. To keep the search for answers going, astronomers will need to start designing a “super Chandra” X-ray observatory that could succeed Chandra in future decades, though NASA has not yet announced any firm plans to do so.

Giuseppina Fabbiano, Senior Astrophysicist, Smithsonian Institution

This article is republished from The Conversation under a Creative Commons license. Read the original article.

From diagnosing brain disorders to cognitive enhancement, 100 years of EEG have transformed neuroscience

theconversation.com – Erika Nyhus, Associate Professor of Psychology and Neuroscience, Bowdoin College – 2024-07-02 07:28:40
The electroencephalogram allows scientists to record and read brain activity.
Kateryna Kon/Science Photo Library via Getty Images

Erika Nyhus, Bowdoin College

Electroencephalography, or EEG, was invented 100 years ago. In the years since the invention of this device to monitor brain electricity, it has had an incredible impact on how scientists study the human brain.

Since its first use, the EEG has shaped researchers' understanding of cognition, from perception to memory. It has also been important for diagnosing and guiding treatment of multiple brain disorders, including epilepsy.

I am a cognitive neuroscientist who uses EEG to study how people remember events from their past. The EEG's 100-year anniversary is an opportunity to reflect on this discovery's significance in neuroscience and medicine.

Discovery of EEG

On July 6, 1924, psychiatrist Hans Berger performed the first EEG recording on a human, a 17-year-old boy undergoing neurosurgery. At the time, Berger and other researchers were performing electrical recordings on the brains of animals.

What set Berger apart was his obsession with finding the physical basis of what he called psychic energy, or mental effort, in people. Through a series of experiments spanning his early career, Berger measured brain volume and temperature to study changes in mental processes such as intellectual work, attention and desire.

He then turned to recording electrical activity. Though he recorded the first traces of EEG in the human brain in 1924, he did not publish the results until 1929. Those five intervening years were a tortuous phase of self-doubt about the source of the EEG signal in the brain and of refining the experimental setup. Berger recorded hundreds of EEGs on multiple subjects, including his own children, with both experimental successes and setbacks.

This is among the first EEG readings published in Hans Berger's study. The top trace is the EEG, while the bottom is a 10 Hz reference trace.
Two EEG traces, the top more irregular in rhythm than the bottom.
Hans Berger/Über das Elektrenkephalogramm des Menschen. Archiv für Psychiatrie. 1929; 87:527-70 via Wikimedia Commons

Finally convinced of his results, he published a series of papers in the journal Archiv für Psychiatrie and had hopes of winning a Nobel Prize. Unfortunately, the research community doubted his results, and years passed before anyone else started using EEG in their own research.

Berger was eventually nominated for a Nobel Prize in 1940. But Nobels were not awarded that year in any category due to World War II and Germany's occupation of Norway.

Neural oscillations

When many neurons are active at the same time, they produce an electrical signal strong enough to spread instantaneously through the conductive tissue of the brain, skull and scalp. EEG electrodes placed on the head can record these electrical signals.

Since the discovery of EEG, researchers have shown that neural activity oscillates at specific frequencies. In his initial EEG recordings in 1924, Berger noted the predominance of oscillatory activity that cycled eight to 12 times per second, or 8 to 12 hertz, named alpha oscillations. Since the discovery of alpha rhythms, there have been many attempts to understand how and why neurons oscillate.
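
Modern EEG analysis quantifies these rhythms numerically rather than by eye. The Python sketch below is a minimal illustration of how power in the 8 to 12 hertz alpha band can be estimated from a digitized trace; the 250 hertz sampling rate and the synthetic noise-plus-alpha signal are assumptions, not data from any study:

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                      # assumed sampling rate, in hertz
t = np.arange(0, 10, 1 / fs)    # ten seconds of samples

# Synthetic EEG-like trace: a 10 Hz alpha rhythm buried in random noise.
rng = np.random.default_rng(1)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * rng.standard_normal(t.size)

# Estimate the power spectrum, then sum the power between 8 and 12 Hz.
freqs, psd = welch(eeg, fs=fs, nperseg=1024)
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[alpha].sum() * (freqs[1] - freqs[0])

print(f"Alpha-band (8-12 Hz) power: {alpha_power:.2e} V^2")
```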

Neural oscillations are thought to be important for effective communication between specialized brain regions. For example, theta oscillations that cycle at 4 to 8 hertz are important for communication between brain regions involved in memory encoding and retrieval in animals and humans.

Finger pointing at EEG reading
Different frequencies of neural oscillations indicate different types of brain activity.
iStock via Getty Images Plus

Researchers then examined whether they could alter neural oscillations and therefore affect how neurons communicate with each other. Studies have shown that many behavioral and noninvasive methods can alter neural oscillations and lead to changes in cognitive performance. Engaging in specific mental activities can induce neural oscillations in the frequencies those mental activities use. For example, my team's research found that mindfulness meditation can increase theta frequency oscillations and improve memory retrieval.

Noninvasive brain stimulation methods can target frequencies of interest. For example, my team's ongoing research found that brain stimulation at theta frequency can lead to improved memory retrieval.

EEG has also led to major discoveries about how the brain processes information in many other cognitive domains, including how people perceive the world around them, how they focus their attention, how they communicate through language and how they process emotions.

Diagnosing and treating brain disorders

EEG is commonly used to diagnose sleep disorders and epilepsy and to guide brain disorder treatments.

Scientists are using EEG to see whether memory can be improved with noninvasive brain stimulation. Although the research is still in its infancy, there have been some promising results. For example, one study found that noninvasive brain stimulation at gamma frequency – 25 hertz – improved memory and neurotransmitter transmission in Alzheimer's disease.

Back of person's head enveloped by the many, small round electrodes of an EEG cap
Researchers and clinicians use EEG to diagnose conditions like epilepsy.
BSIP/Collection Mix: Subjects via Getty Images

A new type of noninvasive brain stimulation called temporal interference uses two high frequencies to cause neural activity equal to the difference between the stimulation frequencies. The high frequencies can better penetrate the brain and reach the targeted area. Researchers recently tested this method in people, using 2,000 hertz and 2,005 hertz signals to deliver 5 hertz theta-frequency stimulation to a key brain region for memory, the hippocampus. This led to improvements in remembering the name associated with a face.
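
The arithmetic behind temporal interference is the familiar beat pattern between two tones. The short Python sketch below is only an illustration, not the study's stimulation setup: it sums 2,000 hertz and 2,005 hertz carriers and confirms that the amplitude envelope of the combined signal rises and falls at their 5 hertz difference:

```python
import numpy as np
from scipy.signal import hilbert

fs = 20000.0                 # samples per second, enough to resolve 2 kHz carriers
t = np.arange(0, 2, 1 / fs)  # two seconds of time points

f1, f2 = 2000.0, 2005.0      # the two high-frequency carriers
field = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The summed field oscillates near 2 kHz, but its amplitude envelope
# beats at the difference frequency |f2 - f1| = 5 Hz, in the theta range.
envelope = np.abs(hilbert(field))
envelope -= envelope.mean()  # drop the constant offset before the FFT

spectrum = np.abs(np.fft.rfft(envelope))
freq_axis = np.fft.rfftfreq(envelope.size, d=1 / fs)
print(freq_axis[np.argmax(spectrum)])  # prints roughly 5.0
```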

Although these results are promising, more research is needed to understand the exact role neural oscillations play in cognition and whether altering them can lead to long-lasting cognitive enhancement.

The future of EEG

The 100-year anniversary of the EEG provides an opportunity to consider what it has taught us about brain function and what this technique can do in the future.

In a survey commissioned by the journal Nature Human Behaviour, over 500 researchers who use EEG in their work were asked to make predictions on the future of the technique. What will be possible in the next 100 years of EEG?

Some researchers, including myself, predict that we'll use EEG to diagnose and create targeted treatments for brain disorders. Others anticipate that an affordable, wearable EEG will be widely used to enhance cognitive function at home or will be seamlessly integrated into virtual reality applications. The possibilities are vast.

Erika Nyhus, Associate Professor of Psychology and Neuroscience, Bowdoin College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Supreme Court kicks cases about tech companies’ First Amendment rights back to lower courts − but appears poised to block states from hampering online content moderation

Published

on

theconversation.com – Lynn Greenky, Professor Emeritus of Communication and Rhetorical Studies, Syracuse University – 2024-07-01 15:26:42
How much power do social media companies have over what users post?
Midnight Studio/iStock/Getty Images Plus

Lynn Greenky, Syracuse University

The U.S. Supreme Court has sent back to lower courts the question of whether states can block social media companies such as Facebook and X, formerly Twitter, from regulating and controlling what users can post on their platforms.

Laws in Florida and Texas sought to impose restrictions on the internal policies and algorithms of social media platforms in ways that influence which posts will be promoted and spread widely and which will be made less visible or even removed.

In the unanimous decision, issued on July 1, 2024, the high court remanded the two cases, Moody v. NetChoice and NetChoice v. Paxton, to the 11th and 5th U.S. Circuit Courts of Appeals, respectively. The court admonished the lower courts for their failure to consider the full force of the laws' applications. It also warned the lower courts to consider the boundaries imposed by the Constitution against government interference with private speech.

Contrasting views of social media sites

In their arguments before the court in February 2024, the two sides described competing visions of how social media fits into the often overwhelming flood of information that defines modern digital society.

The states said the platforms were mere conduits of communication, or “speech hosts,” similar to legacy telephone companies that were required to carry all calls and prohibited from discriminating against users. The states said that the platforms should have to carry all posts from users without discrimination among them based on what they were saying.

The states argued that the content moderation rules the social media companies imposed were not examples of the platforms themselves speaking – or choosing not to speak. Rather, the states said, the rules affected the platforms' behavior and caused them to censor certain views by letting them determine who is allowed to speak on which topics, conduct that falls outside First Amendment protections.

By contrast, the social media platforms, represented by NetChoice, a tech industry trade group, argued that the platforms' guidelines about what is acceptable on their sites are protected by the First Amendment's guarantee of speech free from government interference. The companies say their platforms are not public forums that may be subject to government regulation but rather private services that can exercise their own editorial judgment about what does or does not appear on their sites.

They argued that their policies were aspects of their own speech and that they should be free to develop and implement guidelines about what is acceptable speech on their platforms based on their own First Amendment rights.

Here's what the First Amendment says and what it means.

A reframe by the Supreme Court

All the litigants – NetChoice, Texas and Florida – framed the issue around the effect of the laws on the content moderation policies of the platforms, specifically whether the platforms were engaged in protected speech. The 11th U.S. Circuit Court of Appeals upheld a lower court preliminary injunction against the Florida law, holding the content moderation policies of the platforms were speech and the law was unconstitutional.

The 5th U.S. Circuit Court of Appeals came to the opposite conclusion and held that the platforms were not engaged in speech, but rather the platform's algorithms controlled platform behavior unprotected by the First Amendment. The 5th Circuit determined the behavior was censorship and reversed a lower court injunction against the Texas law.

The Supreme Court, however, reframed the inquiry. The court noted that the lower courts failed to consider the full range of activities the laws covered. Thus, while a First Amendment inquiry was in order, the decisions of the lower courts and the arguments by the parties were incomplete. The court added that neither the parties nor the lower courts engaged in a thorough analysis of whether and how the states' laws affected other elements of the platforms' products, such as Facebook's direct messaging applications, or even whether the laws have any impact on email providers or online marketplaces.

The Supreme Court directed the lower courts to engage in a much more exacting analysis of the laws and their implications and provided some guidelines.

First Amendment principles

The court held that content moderation policies reflect the constitutionally protected editorial choices of the platforms, at least regarding what the court describes as “heartland applications” of the laws – such as Facebook's Feed and YouTube's homepage.

The Supreme Court required the lower courts to consider two core constitutional principles of the First Amendment. One is that the amendment protects speakers from being compelled to communicate messages they would prefer to exclude. Editorial discretion by entities, including social media companies, that compile and curate the speech of others is a protected First Amendment activity.

The other principle is that the amendment precludes the government from controlling private speech, even for the purpose of balancing the marketplace of ideas. Neither state nor federal government may manipulate that marketplace for the purposes of presenting a more balanced array of viewpoints.

The court also affirmed that these principles apply to digital media in the same way they apply to traditional or legacy media.

In the 96-page opinion, Justice Elena Kagan wrote: “The First Amendment … does not go on leave when social media are involved.” For now, it appears the social media platforms will continue to control their content.

Lynn Greenky, Professor Emeritus of Communication and Rhetorical Studies, Syracuse University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disability community has long wrestled with ‘helpful’ technologies – lessons for everyone in dealing with AI

theconversation.com – Elaine Short, Assistant Professor of Computer Science, Tufts University – 2024-07-01 07:19:34

A robotic arm helps a disabled person paint a picture.

Jenna Schad /Tufts University

Elaine Short, Tufts University

You might have heard that artificial intelligence is going to revolutionize everything, save the world and give everyone superhuman powers. Alternatively, you might have heard that it will take your job, make you lazy and stupid, and make the world a cyberpunk dystopia.

Consider another way to look at AI: as an assistive technology – something that helps you function.

With that view, also consider a community of experts in giving and receiving assistance: the disability community. Many disabled people use technology extensively, both dedicated assistive technologies such as wheelchairs and general-use technologies such as smart home devices.

Equally, many disabled people receive professional and casual assistance from other people. And, despite stereotypes to the contrary, many disabled people regularly give assistance to the disabled and nondisabled people around them.

Disabled people are well experienced in receiving and giving social and technical assistance, which makes them a valuable source of insight into how everyone might relate to AI in the future. This potential is a key driver for my work as a disabled person and researcher in AI and robotics.

Actively learning to live with help

While virtually everyone values independence, no one is fully independent. Each of us depends on others to grow our food, care for us when we are ill, give us advice and emotional support, and assist us in thousands of interconnected ways. Being disabled means having support needs that are outside what is typical, and therefore those needs are much more visible. Because of this, the disability community has reckoned more explicitly with what it means to need help than most nondisabled people.

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can't substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone.

The curb-cut effect – how technologies built for disabled people help everyone – has become a principle of good design.

This is sometimes called the curb-cut effect after the ways that putting a ramp in a curb to help a wheelchair user access the sidewalk also benefits people with strollers, rolling suitcases and bicycles.

Partnering in assistance

You have probably had the experience of someone trying to help you without listening to what you actually need. For example, a parent or friend might “help” you clean and instead end up hiding everything you need.

Disability advocates have long battled this type of well-meaning but intrusive assistance – for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked to or advocating for services that keep the disabled person in control.

The disabled community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over.

A key goal of my lab's work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. For example, most people find it difficult to use a joystick to move a robot arm: The joystick can only move from front to back and side to side, but the arm can move in almost as many ways as a human arm.

The author discusses her work on robots that are designed to help people.

To help, AI can predict what someone is planning to do with the robot and then move the robot accordingly. Previous research assumed that people would ignore this help, but we found that people quickly figured out that the system is doing something, actively worked to understand what it was doing and tried to work with the system to get it to do what they wanted.
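
The snippet below is a toy Python sketch of that general idea, not my lab's actual system: it guesses which of two candidate goals the user's joystick direction is pointing toward, then gently blends the user's command with motion toward that guess so the person stays in control. The goal positions, the scoring sharpness and the blending weight are all assumed values:

```python
import numpy as np

def infer_goal(position, user_direction, goals, sharpness=5.0):
    """Assign a probability to each candidate goal.

    A goal is judged more likely the better the user's joystick direction
    lines up with the direction from the current position to that goal
    (a simple stand-in for full Bayesian intent inference).
    """
    scores = []
    for goal in goals:
        to_goal = goal - position
        to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
        scores.append(np.exp(sharpness * float(np.dot(user_direction, to_goal))))
    scores = np.array(scores)
    return scores / scores.sum()

def blend_command(position, user_direction, goals, assist_weight=0.5):
    """Mix the user's command with motion toward the most likely goal."""
    probs = infer_goal(position, user_direction, goals)
    best = goals[int(np.argmax(probs))]
    to_goal = best - position
    to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
    # The robot only nudges toward its guess; the user's input still dominates.
    command = (1 - assist_weight) * user_direction + assist_weight * to_goal
    return command / (np.linalg.norm(command) + 1e-9), probs

# Example: two objects on a table; the user pushes the joystick mostly toward A.
goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
position = np.array([0.0, 0.0])
user_direction = np.array([0.9, 0.1]) / np.linalg.norm([0.9, 0.1])
command, probs = blend_command(position, user_direction, goals)
print(probs)    # high probability on the first goal
print(command)  # close to the user's direction, nudged toward that goal
```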

Most AI systems don't make this easy, but my lab's new approaches to AI empower people to influence robot behavior. We have shown that this results in better interactions in tasks that are creative, like painting. We also have begun to investigate how people can use this control to solve problems outside the ones the robots were designed for. For example, people can use a robot that is trained to carry a cup of water to instead pour the water out to water their plants.

Training AI on human variability

The disability-centered perspective also raises concerns about the huge datasets that power AI. The very nature of data-driven AI is to look for common patterns. In general, the better-represented something is in the data, the better the model works.

If disability means having a body or mind outside what is typical, then disability means not being well-represented in the data. Whether it's AI systems designed to detect cheating on exams instead detecting students' disabilities or robots that fail to account for wheelchair users, disabled people's interactions with AI reveal how those systems are brittle.

One of my goals as an AI researcher is to make AI more responsive and adaptable to real human variation, especially in AI systems that learn directly from interacting with people. We have developed frameworks for testing how robust those AI systems are to real human teaching and explored how robots can learn better from human teachers even when those teachers change over time.

Thinking of AI as an assistive technology, and learning from the disability community, can help to ensure that the AI systems of the future serve people's needs – with people in the driver's seat.

Elaine Short, Assistant Professor of Computer Science, Tufts University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
