
From anecdotes to AI tools, how doctors make medical decisions is evolving with technology

Published on theconversation.com – Aaron J. Masino, Associate Professor of Computing, Clemson University – 2025-01-10 07:25:00

AI tools can help doctors synthesize all the information that goes into a clinical decision.

Khanchit Khirisutchalual/iStock via Getty Images Plus

Aaron J. Masino, Clemson University

The practice of medicine has undergone an incredible, albeit incomplete, transformation over the past 50 years, moving steadily from a field informed primarily by expert opinion and the anecdotal experience of individual clinicians toward a formal scientific discipline.

The advent of evidence-based medicine meant clinicians identified the most effective treatment options for their patients based on quality evaluations of the latest research. Now, precision medicine is enabling providers to use a patient’s individual genetic, environmental and clinical information to further personalize their care.

The potential benefits of precision medicine also come with new challenges. Importantly, the amount and complexity of data available for each patient is rapidly increasing. How will clinicians figure out which data is useful for a particular patient? What is the most effective way to interpret the data in order to select the best treatment?

These are precisely the challenges that computer scientists like me are working to address. Collaborating with experts in genetics, medicine and environmental science, my colleagues and I develop computer-based systems, often using artificial intelligence, to help clinicians integrate a wide range of complex patient data to make the best care decisions.

The rise of evidence-based medicine

As recently as the 1970s, clinical decisions were primarily based on expert opinion, anecdotal experience and theories of disease mechanisms that were frequently unsupported by empirical research. Around that time, a few pioneering researchers argued that clinical decision-making should be grounded in the best available evidence. By the 1990s, the term evidence-based medicine was introduced to describe the discipline of integrating research with clinical expertise when making decisions about patient care.

The bedrock of evidence-based medicine is a hierarchy of evidence quality that determines what kinds of information clinicians should rely on most heavily to make treatment decisions.

Pyramid diagram with systematic reviews at the top, followed by critically-appraised topics, critically-appraised individual articles, randomized controlled trials, cohort studies, case-control studies and case series, and background information/expert opinion at the base

Certain types of evidence are stronger than others. While filtered information has been evaluated for rigor and quality, unfiltered information has not.

CFCF/Wikimedia Commons, CC BY-SA

Randomized controlled trials randomly place participants in different groups that receive either an experimental treatment or a placebo. These studies, also called clinical trials, are considered the best individual sources of evidence because they allow researchers to compare treatment effectiveness with minimal bias by ensuring the groups are similar.

Observational studies, such as cohort and case-control studies, focus on the health outcomes of a group of participants without any intervention from the researchers. While used in evidence-based medicine, these studies are considered weaker than clinical trials because they don’t control for potential confounding factors and biases.

Overall, systematic reviews that synthesize the findings of multiple research studies offer the highest quality evidence. In contrast, single-case reports detailing one individual’s experience are weak evidence because they may not apply to a wider population. Similarly, personal testimonials and expert opinions alone are not supported by empirical data.

In practice, clinicians can use the framework of evidence-based medicine to formulate a specific clinical question about their patient that can be clearly answered by reviewing the best available research. For example, a clinician might ask whether statins would be more effective than diet and exercise at lowering LDL cholesterol for a 50-year-old male with no other risk factors. Integrating evidence, patient preferences and their own expertise, they can develop diagnoses and treatment plans.

As may be expected, gathering and putting all the evidence together can be a laborious process. Consequently, clinicians and patients commonly rely on clinical guidelines developed by third parties such as the American Medical Association, the National Institutes of Health and the World Health Organization. These guidelines provide recommendations and standards of care based on systematic and thorough assessment of available research.

Dawn of precision medicine

Around the same time that evidence-based medicine was gaining traction, two other transformative developments in science and health care were underway. These advances would lead to the emergence of precision medicine, which uses patient-specific information to tailor health care decisions to each person.

The first was the Human Genome Project, which officially began in 1990 and was completed in 2003. It sought to create a reference map of human DNA, or the genetic information cells use to function and survive.

This map of the human genome enabled scientists to discover genes linked to thousands of rare diseases, understand why people respond differently to the same drug, and identify mutations in tumors that can be targeted with specific treatments. Increasingly, clinicians are analyzing a patient’s DNA to identify genetic variations that inform their care.

Columns of thin, luminescent bars against a black background

Output from the DNA sequencer used by the Human Genome Project.

National Human Genome Research Institute/Flickr

The second was the development of electronic medical records to store patient medical history. Although researchers had been conducting pilot studies of digital records for several years, the development of industry standards for electronic medical records began only in the late 1980s. Adoption did not become widespread until after the 2009 American Recovery and Reinvestment Act.

Electronic medical records enable scientists to conduct large-scale studies of the associations between genetic variants and observable traits that inform precision medicine. By storing data in an organized digital format, researchers can also use these patient records to train AI models for use in medical practice.

More data, more AI, more precision

Superficially, the idea of using patient health information to personalize care is not new. For example, the ongoing Framingham Heart Study, which began in 1948, yielded a mathematical model to estimate a patient’s coronary artery disease risk based on their individual health information, rather than the average population risk.

One fundamental difference between efforts to personalize medicine now and prior to the Human Genome Project and electronic medical records, however, is that the mental capacity required to analyze the scale and complexity of individual patient data available today far exceeds that of the human brain. Each person has hundreds of genetic variants, hundreds to thousands of environmental exposures and a clinical history that may include numerous physiological measurements, lab values and imaging results. In my team’s ongoing work, the AI models we’re developing to detect sepsis in infants use dozens of input variables, many of which are updated every hour.
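To give a concrete sense of what such a model consumes, here is a minimal sketch of an hourly risk update. The feature names, values, weights and bias are hypothetical illustrations, not the actual inputs or parameters of the sepsis model described here.

```python
import math

# Hypothetical hourly feature vector for one infant (illustrative only).
features = {
    "heart_rate": 172.0,         # beats per minute
    "respiratory_rate": 64.0,    # breaths per minute
    "temperature_c": 38.4,
    "white_cell_count": 3.1,     # 10^9 cells per liter
    "lactate": 2.8,              # mmol per liter
}

# Illustrative weights; a real system would learn these from historical records.
weights = {
    "heart_rate": 0.02,
    "respiratory_rate": 0.03,
    "temperature_c": 0.5,
    "white_cell_count": -0.3,
    "lactate": 0.6,
}
bias = -22.0

def sepsis_risk(x: dict[str, float]) -> float:
    """Logistic-regression-style risk score in [0, 1], recomputed as new hourly data arrive."""
    z = bias + sum(weights[name] * value for name, value in x.items())
    return 1.0 / (1.0 + math.exp(-z))

print(f"Current estimated sepsis risk: {sepsis_risk(features):.2f}")
```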

Researchers like me are using AI to develop tools that help clinicians analyze all this data to tailor diagnoses and treatment plans to each individual. For example, some genes can affect how well certain medications work for different patients. While genetic tests can reveal some of these traits, it is not yet feasible to screen every patient due to cost. Instead, AI systems can analyze a patient’s medical history to predict whether genetic testing will be beneficial based on how likely they are to be prescribed a medication known to be influenced by genetic factors.
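As a rough sketch of that screening logic, a system might flag a patient for testing when an upstream model predicts a high enough chance of a prescription for a drug whose effect is known to depend on genetic variants. The drug list, probabilities and threshold below are illustrative assumptions, not part of any specific system described in this article.

```python
# Drugs whose dosing or effectiveness is known to depend on genetic variants
# (for example, CYP2C19 variants affect clopidogrel response). The list and
# threshold here are illustrative assumptions.
PHARMACOGENOMIC_DRUGS = {"clopidogrel", "warfarin", "codeine"}

def recommend_genetic_test(predicted_prescriptions: dict[str, float],
                           threshold: float = 0.3) -> bool:
    """Suggest testing if any genetics-sensitive drug is likely to be prescribed."""
    return any(probability >= threshold
               for drug, probability in predicted_prescriptions.items()
               if drug in PHARMACOGENOMIC_DRUGS)

# In practice these probabilities would come from a model trained on the
# patient's medical history; the numbers here are made up.
example_patient = {"clopidogrel": 0.45, "metformin": 0.80, "codeine": 0.05}
print(recommend_genetic_test(example_patient))  # True: clopidogrel is likely enough
```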

Another example is diagnosing rare diseases, or conditions that affect fewer than 200,000 people in the U.S. Diagnosis is very difficult because many of the several thousand known rare diseases have overlapping symptoms, and the same disease can present differently among different people. AI tools can assist by examining a patient’s unique genetic traits and clinical characteristics to determine which ones likely cause disease. These AI systems may include components that predict whether the patient’s specific genetic variation negatively affects protein function and whether the patient’s symptoms are similar to specific rare diseases.
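A minimal sketch of how those two components could be combined into a ranking of candidate diseases appears below; the disease names, scores and weighting are hypothetical.

```python
# Two hypothetical component scores, each in [0, 1]: how damaging the patient's
# variant looks for each disease gene, and how closely the patient's symptoms
# match each disease. Names and numbers are made up.
variant_scores = {"disease_A": 0.9, "disease_B": 0.4}
symptom_scores = {"disease_A": 0.7, "disease_B": 0.8, "disease_C": 0.6}

def rank_candidate_diseases(variant_damage: dict[str, float],
                            symptom_match: dict[str, float],
                            variant_weight: float = 0.6) -> list[tuple[str, float]]:
    """Rank diseases by a weighted blend of variant and symptom evidence."""
    diseases = set(variant_damage) | set(symptom_match)
    combined = {
        d: variant_weight * variant_damage.get(d, 0.0)
           + (1.0 - variant_weight) * symptom_match.get(d, 0.0)
        for d in diseases
    }
    return sorted(combined.items(), key=lambda item: item[1], reverse=True)

print(rank_candidate_diseases(variant_scores, symptom_scores))
# disease_A ranks first (about 0.82), then disease_B (about 0.56), then disease_C (about 0.24)
```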

Future of clinical decision-making

New technologies will soon enable routine measurement of other types of biomolecular data beyond genetics. Wearable health devices can continuously monitor heart rate, blood pressure and other physiological features, producing data that AI tools can use to diagnose disease and personalize treatment.

Related studies are already producing promising results in precision oncology and personalized preventive health. For example, researchers are developing a wearable ultrasound scanner to detect breast cancer, and engineers are developing skinlike sensors to detect changes in tumor size.

Research will continue to expand our knowledge of genetics, the health effects of environmental exposures and how AI works. These developments will significantly alter how clinicians make decisions and provide care over the next 50 years.

Aaron J. Masino, Associate Professor of Computing, Clemson University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Colors are objective, according to two philosophers − even though the blue you see doesn’t match what I see

Published on theconversation.com – Elay Shech, Professor of Philosophy, Auburn University – 2025-04-25 07:55:00

What appear to be blue and green spirals are actually the same color.
Akiyoshi Kitaoka

Elay Shech, Auburn University and Michael Watkins, Auburn University

Is your green my green? Probably not. What appears as pure green to me will likely look a bit yellowish or blueish to you. This is because visual systems vary from person to person. Moreover, an object’s color may appear differently against different backgrounds or under different lighting.

These facts might naturally lead you to think that colors are subjective: that, unlike features such as length and temperature, colors are not objective features. Either nothing has a true color, or colors are relative to observers and their viewing conditions.

But perceptual variation has misled you. We are philosophers who study colors, objectivity and science, and we argue in our book “The Metaphysics of Colors” that colors are as objective as length and temperature.

Perceptual variation

There is a surprising amount of variation in how people perceive the world. If you offered a group of people a spectrum of color chips ranging from chartreuse to purple and asked them to pick the unique green chip – the chip with no yellow or blue in it – their choices would vary considerably. Indeed, there wouldn't be a single chip that most observers would agree is unique green.

Generally, an object’s background can result in dramatic changes in how you perceive its colors. If you place a gray object against a lighter background, it will appear darker than if you place it against a darker background. This variation in perception is perhaps most striking when viewing an object under different lighting, where a red apple could look green or blue.

Of course, that you experience something differently does not prove that what is experienced is not objective. Water that feels cold to one person may not feel cold to another. And although we do not know who is feeling the water “correctly,” or whether that question even makes sense, we can know the temperature of the water and presume that this temperature is independent of your experience.

Similarly, that you can change the appearance of something’s color is not the same as changing its color. You can make an apple look green or blue, but that is not evidence that the apple is not red.

Apple under a gradient of red to blue light
Under different lighting conditions, objects take on different colors.
Gyozo Vaczi/iStock via Getty Images Plus

For comparison, the Moon appears larger when it’s on the horizon than when it appears near its zenith. But the size of the Moon has not changed, only its appearance. Hence, that the appearance of an object’s color or size varies is, by itself, no reason to think that its color and size are not objective features of the object. In other words, the properties of an object are independent of how they appear to you.

That said, given that there is so much variation in how objects appear, how do you determine what color something actually is? Is there a way to determine the color of something despite the many different experiences you might have of it?

Matching colors

Perhaps determining the color of something just means determining whether it is red or blue. But we suggest a different approach. Notice that squares that appear to be the same shade of pink against different backgrounds look different against the same background.

Green, purple and orange squares with smaller squares in shades of pink placed at their centers and at the bottom of the image
The smaller squares may appear to be the same color, but if you compare them with the strip of squares at the bottom, they’re actually different shades.
Shobdohin/Wikimedia Commons, CC BY-SA

It’s easy to assume that to prove colors are objective would require knowing which observers, lighting conditions and backgrounds are the best, or “normal.” But determining the right observers and viewing conditions is not required for determining the very specific color of an object, regardless of its name. And it is not required to determine whether two objects have the same color.

To determine whether two objects have the same color, an observer would need to view the objects side by side against the same background and under various lighting conditions. If you paint part of a room and find that you don't have enough paint, for instance, finding a match might be very tricky. A color match requires that no observer under any lighting condition will see a difference between the new paint and the old.

Video: Is the dress yellow and white or black and blue?

That two people can determine whether two objects have the same color even if they don’t agree on exactly what that color is – just as a pool of water can have a particular temperature without feeling the same to me and you – seems like compelling evidence to us that colors are objective features of our world.

Colors, science and indispensability

Everyday interactions with colors – such as matching paint samples, determining whether your shirt and pants clash, and even your ability to interpret works of art – are hard to explain if colors are not objective features of objects. But if you turn to science and look at the many ways that researchers think about colors, it becomes harder still.

For example, in the field of color science, scientific laws are used to explain how objects and light affect perception and the colors of other objects. Such laws, for instance, predict what happens when you mix colored pigments, when you view contrasting colors simultaneously or successively, and when you look at colored objects in various lighting conditions.

The philosophers Hilary Putnam and Willard van Orman Quine made famous what is known as the indispensability argument. The basic idea is that if something is indispensable to science, then it must be real and objective – otherwise, science wouldn’t work as well as it does.

For example, you may wonder whether unobservable entities such as electrons and electromagnetic fields really exist. But, so the argument goes, the best scientific explanations assume the existence of such entities and so they must exist. Similarly, because mathematics is indispensable to contemporary science, some philosophers argue that this means mathematical objects are objective and exist independently of a person’s mind.

Blue damselfish, seeming iridescent against a black background
The color of an animal can exert evolutionary pressure.
Paul Starosta/Stone via Getty Images

Likewise, we suggest that color plays an indispensable role in evolutionary biology. For example, researchers have argued that aposematism – the use of colors to signal a warning for predators – also benefits an animal’s ability to gather resources. Here, an animal’s coloration works directly to expand its food-gathering niche insofar as it informs potential predators that the animal is poisonous or venomous.

In fact, animals can exploit the fact that the same color pattern can be perceived differently by different perceivers. For instance, some damselfish have ultraviolet face patterns that help them be recognized by other members of their species and communicate with potential mates while remaining largely hidden to predators unable to perceive ultraviolet colors.

In sum, our ability to determine whether objects are colored the same or differently and the indispensable roles colors play in science suggest that colors are as real and objective as length and temperature.

Elay Shech, Professor of Philosophy, Auburn University and Michael Watkins, Professor of Philosophy, Auburn University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


‘Extraordinary claims require extraordinary evidence’ − an astronomer explains how much evidence scientists need to claim discoveries like extraterrestrial life

Published on theconversation.com – Chris Impey, University Distinguished Professor of Astronomy, University of Arizona – 2025-04-25 07:54:00

The universe is filled with countless galaxies, stars and planets. Astronomers may find life one day, but they will need extraordinary proof.
ESA/Euclid/Euclid Consortium/NASA, image processing by J.-C. Cuillandre (CEA Paris-Saclay), G. Anselmi

Chris Impey, University of Arizona

The detection of life beyond Earth would be one of the most profound discoveries in the history of science. The Milky Way galaxy alone hosts hundreds of millions of potentially habitable planets. Astronomers are using powerful space telescopes to look for molecular indicators of biology in the atmospheres of the most Earth-like of these planets.

But so far, no solid evidence of life has ever been found beyond the Earth. A paper published in April 2025 claimed to detect a signature of life in the atmosphere of the planet K2-18b. And while this discovery is intriguing, most astronomers – including the paper’s authors – aren’t ready to claim that it means extraterrestrial life exists. A detection of life would be a remarkable development.

The astronomer Carl Sagan used the phrase, “Extraordinary claims require extraordinary evidence,” in regard to searching for alien life. It conveys the idea that there should be a high bar for evidence to support a remarkable claim.

I’m an astronomer who has written a book about astrobiology. Over my career, I’ve seen some compelling scientific discoveries. But to reach this threshold of finding life beyond Earth, a result needs to fit several important criteria.

When is a result important and reliable?

There are three criteria for a scientific result to represent a true discovery and not be subject to uncertainty and doubt. How does the claim of life on K2-18b measure up?

First, the experiment needs to measure a meaningful and important quantity. Researchers observed K2-18b’s atmosphere with the James Webb Space Telescope and saw a spectral feature that they identified as dimethyl sulfide.

On Earth, dimethyl sulfide is associated with biology, in particular bacteria and plankton in the oceans. However, it can also arise by other means, so this single molecule is not conclusive proof of life.

Second, the detection needs to be strong. Every detector has some noise from the random motion of electrons. The signal should be strong enough to have a low probability of arising by chance from this noise.

The K2-18b detection has a significance of 3-sigma, which means it has a 0.3% probability of arising by chance.

That sounds low, but most scientists would consider that a weak detection. There are many molecules that could create a feature in the same spectral range.

The “gold standard” for scientific detection is 5-sigma, which means the probability of the finding happening by chance is less than 0.00006%. For example, physicists at CERN gathered data patiently for two years until they had a 5-sigma detection of the Higgs boson particle, leading to a Nobel Prize one year later in 2013.
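Those probabilities follow directly from the normal distribution. Here is a short sketch of the conversion from a sigma level to a two-sided chance probability:

```python
import math

def chance_probability(sigma: float) -> float:
    """Two-sided probability that pure noise produces a deviation of at least `sigma` standard deviations."""
    return math.erfc(sigma / math.sqrt(2))

for sigma in (3, 5):
    print(f"{sigma}-sigma: {chance_probability(sigma):.7%}")
# 3-sigma: about 0.27%, the roughly 0.3% quoted above
# 5-sigma: about 0.00006%
```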

Video: The announcement of the discovery of the Higgs boson took decades from the time Peter Higgs first predicted the existence of the particle. Scientists, such as Joe Incandela shown here, waited until they’d reached that 5-sigma level to say, ‘I think we have it.’

Third, a result needs to be repeatable. Results are considered reliable when they’ve been repeated – ideally corroborated by other investigators or confirmed using a different instrument. For K2-18b, this might mean detecting other molecules that indicate biology, such as oxygen in the planet’s atmosphere. Without more and better data, most researchers are viewing the claim of life on K2-18b with skepticism.

Claims of life on Mars

In the past, some scientists have claimed to have found life much closer to home, on the planet Mars.

Over a century ago, retired Boston merchant turned astronomer Percival Lowell claimed that linear features he saw on the surface of Mars were canals, constructed by a dying civilization to transport water from the poles to the equator. Artificial waterways on Mars would certainly have been a major discovery, but this example failed the other two criteria: strong evidence and repeatability.

Lowell was misled by his visual observations, and he was engaging in wishful thinking. No other astronomers could confirm his findings.

An image of Mars in space
Mars, as taken by the OSIRIS instrument on the ESA Rosetta spacecraft during its February 2007 flyby of the planet and adjusted to show color.
ESA & MPS for OSIRIS Team MPS/UPD/LAM/IAA/RSSD/INTA/UPM/DASP/IDA, CC BY-SA

In 1996, NASA held a press conference where a team of scientists presented evidence for biology in the Martian meteorite ALH 84001. Their evidence included an evocative image that seemed to show microfossils in the meteorite.

However, scientists have come up with explanations for the meteorite’s unusual features that do not involve biology. That extraordinary claim has dissipated.

More recently, astronomers detected low levels of methane in the atmosphere of Mars. Like dimethyl sulfide and oxygen, methane on Earth is made primarily – but not exclusively – by life. Different spacecraft and rovers on the Martian surface have returned conflicting results, where a detection with one spacecraft was not confirmed by another.

The low level and variability of methane on Mars is still a mystery. And in the absence of definitive evidence that this very low level of methane has a biological origin, nobody is claiming definitive evidence of life on Mars.

Claims of advanced civilizations

Detecting microbial life on Mars or an exoplanet would be dramatic, but the discovery of extraterrestrial civilizations would be truly spectacular.

The search for extraterrestrial intelligence, or SETI, has been underway for 75 years. No messages have ever been received, but in 1977 a radio telescope in Ohio detected a strong signal that lasted only for a minute.

This signal was so unusual that an astronomer working at the telescope wrote “Wow!” on the printout, giving the signal its name. Unfortunately, nothing like it has since been detected from that region of the sky, so the Wow! signal fails the test of repeatability.

An illustration of a long, thin rock flying through space.
‘Oumuamua is the first object passing through the solar system that astronomers have identified as having interstellar origins.
European Southern Observatory/M. Kornmesser

In 2017, a rocky, cigar-shaped object called ‘Oumuamua was the first known interstellar object to visit the solar system. ‘Oumuamua’s strange shape and trajectory led Harvard astronomer Avi Loeb to argue that it was an alien artifact. However, the object has already left the solar system, so there’s no chance for astronomers to observe it again. And some researchers have gathered evidence suggesting that it’s just a comet.

While many scientists think we aren’t alone, given the enormous amount of habitable real estate beyond Earth, no detection has cleared the threshold enunciated by Carl Sagan.

Claims about the universe

These same criteria apply to research about the entire universe. One particular concern in cosmology is the fact that, unlike the case of planets, there is only one universe to study.

A cautionary tale comes from attempts to show that the universe went through a period of extremely rapid expansion a fraction of a second after the Big Bang. Cosmologists call this event inflation, and it is invoked to explain why the universe is now smooth and flat.

In 2014, astronomers claimed to have found evidence for inflation in a subtle signal from microwaves left over after the Big Bang. Within a year, however, the team retracted the result because the signal had a mundane explanation: They had confused dust in our galaxy with a signature of inflation.

On the other hand, the discovery of the universe’s acceleration shows the success of the scientific method. In 1929, astronomer Edwin Hubble found that the universe was expanding. Then, in 1998, evidence emerged that this cosmic expansion is accelerating. Physicists were startled by this result.

Two research groups used supernovae to separately trace the expansion. In a friendly rivalry, they used different sets of supernovae but got the same result. Independent corroboration increased their confidence that the universe was accelerating. They called the force behind this accelerating expansion dark energy and received a Nobel Prize in 2011 for its discovery.

On scales large and small, astronomers try to set a high bar of evidence before claiming a discovery.

Chris Impey, University Distinguished Professor of Astronomy, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Perfect brownies baked at high altitude are possible thanks to Colorado’s home economics pioneer Inga Allison

Published on theconversation.com – Tobi Jacobi, Professor of English, Colorado State University – 2025-04-22 07:47:00

Students work in the high-altitude baking laboratory.
Archives and Special Collections, Colorado State University

Tobi Jacobi, Colorado State University and Caitlin Clark, Colorado State University

Many bakers working at high altitudes have carefully followed a standard recipe only to reach into the oven to find a sunken cake, flat cookies or dry muffins.

Experienced mountain bakers know they need a few tricks to achieve the same results as their fellow artisans working at sea level.

These tricks are more than family lore, however. They originated in the early 20th century thanks to research on high-altitude baking done by Inga Allison, then a professor at Colorado State University. It was Allison’s scientific prowess and experimentation that brought us the possibility of perfect high-altitude brownies and other baked goods.

A recipe for brownies at high altitude.
Inga Allison’s high-altitude brownie recipe.
Archives and Special Collections, Colorado State University

We are two current academics at CSU whose work has been touched by Allison’s legacy.

One of us – Caitlin Clark – still relies on Allison’s lessons a century later in her work as a food scientist in Colorado. The other – Tobi Jacobi – is a scholar of women’s rhetoric and community writing, and an enthusiastic home baker in the Rocky Mountains, who learned about Allison while conducting archival research on women’s work and leadership at CSU.

That research developed into “Knowing Her,” an exhibition Jacobi developed with Suzanne Faris, a CSU sculpture professor. The exhibit highlights dozens of women across 100 years of women’s work and leadership at CSU and will be on display through mid-August 2025 in the CSU Fort Collins campus Morgan Library.

A pioneer in home economics

Inga Allison is one of the fascinating and accomplished women who is part of the exhibit.

Allison was born in 1876 in Illinois and attended the University of Chicago, where she completed the prestigious “science course” work that heavily influenced her career trajectory. Her studies and research also set the stage for her belief that women’s education was more than preparation for domestic life.

In 1908, Allison was hired as a faculty member in home economics at Colorado Agricultural College, which is now CSU. She joined a group of faculty who were beginning to study the effects of altitude on baking and crop growth. The department was located inside Guggenheim Hall, a building that was constructed for home economics education but lacked lab equipment or serious research materials.

A sepia-toned photograph of Inga Allison, a white woman in dark clothes with her hair pulled back.
Inga Allison was a professor of home economics at Colorado Agricultural College, where she developed recipes that worked in high altitudes.
Archives and Special Collections, Colorado State University

Allison took seriously both the university’s land grant mission, with its focus on teaching, research and extension, and her particular charge to prepare women for the future. She urged her students to move beyond simple conceptions of home economics as mere preparation for domestic life. She wanted them to engage with the physical, biological and social sciences to understand the larger context for home economics work.

Such thinking, according to CSU historian James E. Hansen, pushed women college students in the early 20th century to expand the reach of home economics to include “extension and welfare work, dietetics, institutional management, laboratory research work, child development and teaching.”

News articles from the early 1900s track Allison giving lectures like “The Economic Side of Natural Living” to the Colorado Health Club and talks on domestic science to ladies clubs and at schools across Colorado. One of her talks in 1910 focused on the art of dishwashing.

Allison became the home economics department chair in 1910 and eventually dean. In this leadership role, she urged then-CSU President Charles Lory to fund lab materials for the home economics department. It took 19 years for this dream to come to fruition.

In the meantime, Allison collaborated with Lory, who gave her access to lab equipment in the physics department. She pieced together equipment to conduct research on the relationship between cooking foods in water and atmospheric pressure, but systematic control of heat, temperature and pressure was difficult to achieve.

She sought other ways to conduct high-altitude experiments and traveled across Colorado where she worked with students to test baking recipes in varied conditions, including at 11,797 feet in a shelter house on Fall River Road near Estes Park.

Early 1900s car traveling in the Rocky Mountains.
Inga Allison tested her high-altitude baking recipes at 11,797 feet at the shelter house on Fall River Road, near Estes Park, Colorado.
Archives and Special Collections, Colorado State University

But Allison realized that recipes baked at 5,000 feet in Fort Collins and Denver simply didn’t work in higher altitudes. Little advancement in baking methods occurred until 1927, when the first altitude baking lab in the nation was constructed at CSU thanks to Allison’s research. The results were tangible — and tasty — as public dissemination of altitude-specific baking practices began.

A 1932 bulletin on baking at altitude offers hundreds of formulas for success at heights ranging from 4,000 feet to over 11,000 feet. Its author, Marjorie Peterson, a home economics staff person at the Colorado Experiment Station, credits Allison for her constructive suggestions and support in the development of the booklet.

Science of high-altitude baking

As a senior food scientist in a mountain state, one of us – Caitlin Clark – advises bakers on how to adjust their recipes to compensate for altitude. Thanks to Allison’s research, bakers at high altitude today can anticipate how the lower air pressure will affect their recipes and compensate by making small adjustments.

The first thing you have to understand before heading into the kitchen is that the higher the altitude, the lower the air pressure. This lower pressure has chemical and physical effects on baking.

Air pressure is a force that pushes back on all of the molecules in a system and prevents them from venturing off into the environment. Heat plays the opposite role – it adds energy and pushes molecules to escape.

When water is boiled, molecules escape by turning into steam. The less air pressure is pushing back, the less energy is required to make this happen. That’s why water boils at lower temperatures at higher altitudes – around 200 degrees Fahrenheit in Denver compared with 212 F at sea level.
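A common rule of thumb, which is an approximation rather than a figure from Allison's research, is that water's boiling point drops by roughly 1 degree Fahrenheit for every 500 feet of elevation gain. A short sketch reproduces the Denver figure above:

```python
def boiling_point_f(altitude_ft: float) -> float:
    """Approximate boiling point of water using the ~1 F drop per 500 ft rule of thumb."""
    return 212.0 - altitude_ft / 500.0

for place, altitude in [("sea level", 0), ("Denver", 5280), ("Fall River Road shelter", 11797)]:
    print(f"{place}: about {boiling_point_f(altitude):.0f} F")
# sea level: 212 F, Denver: about 201 F, the shelter house: roughly 188 F
```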

So, when baking is done at high altitude, steam is produced at a lower temperature and earlier in the baking time. Carbon dioxide produced by leavening agents also expands more rapidly in the thinner air. This causes high-altitude baked goods to rise too early, before their structure has fully set, leading to collapsed cakes and flat muffins. Finally, the rapid evaporation of water leads to over-concentration of sugars and fats in the recipe, which can cause pastries to have a gummy, undesirable texture.

Allison learned that high-altitude bakers could adjust to their environment by reducing the amount of sugar or increasing the liquid to prevent over-concentration, and by using smaller amounts of leavening agents like baking soda or baking powder to prevent dough from rising too quickly.

Allison was one of many groundbreaking women in the early 20th century who actively supported higher education for women and advanced research in science, politics, humanities and education in Colorado.

Others included Grace Espy-Patton, a professor of English and sociology at CSU from 1885 to 1896 who founded an early feminist journal and was the first woman to register to vote in Fort Collins. Miriam Palmer was an aphid specialist and master illustrator whose work crafting hyper-realistic wax apples in the early 1900s allowed farmers to confirm rediscovery of the lost Colorado Orange apple, a fruit that has been successfully propagated in recent years.

In 1945, Allison retired as both an emerita professor and emerita dean at CSU. She immediately stepped into the role of student and took classes in Russian and biochemistry.

In the fall of 1958, CSU opened a new dormitory for women that was named Allison Hall in her honor.

“I had supposed that such a thing happened only to the very rich or the very dead,” Allison told reporters at the dedication ceremony.

Read more of our stories about Colorado.

Tobi Jacobi, Professor of English, Colorado State University and Caitlin Clark, Senior Food Scientist at the CSU Spur Food Innovation Center, Colorado State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
