Preparing for a pandemic that never came ended up setting off another − how an accidental virus release triggered 1977’s ‘Russian flu’

theconversation.com – Donald S. Burke, Dean Emeritus and Distinguished University Professor Emeritus of Health Science and Policy, and of Epidemiology, at the School of Public Health, University of Pittsburgh – 2024-09-04 07:28:24

Vaccine research quickly picked up to try to prevent a possible flu pandemic in 1976.

AP Photo

Donald S. Burke, University of Pittsburgh

Nineteen-year-old U.S. Army Pvt. David Lewis set out from Fort Dix on a 50-mile hike with his unit on Feb. 5, 1976. On that bitter cold day, he collapsed and died. Autopsy specimens unexpectedly tested positive for an H1N1 swine influenza virus.

Virus disease surveillance at Fort Dix found another 13 cases among recruits who had been hospitalized for respiratory illness. Additional serum antibody testing revealed that over 200 recruits had been infected with the novel swine H1N1 strain but not hospitalized.

Officials worried about a repeat of something like the 1918 flu pandemic, which took hold in soldiers and spread globally.

PhotoQuest/Getty Images

Alarm bells instantly went off within the epidemiology community: Could Pvt. Lewis’ death from an H1N1 swine flu be a harbinger of another global pandemic like the terrible 1918 H1N1 swine flu pandemic that killed an estimated 50 million people worldwide?

The U.S. government acted quickly. On March 24, 1976, President Gerald Ford announced a plan to “inoculate every man, woman, and child in the United States.” On Oct. 1, 1976, the mass immunization campaign began.

Meanwhile, the initial small outbreak at Fort Dix had rapidly fizzled, with no new cases on the base after February. As Army Col. Frank Top, who headed the Fort Dix virus investigation, later told me, “We had shown pretty clearly that (the virus) didn’t go anywhere but Fort Dix … it disappeared.”

Nonetheless, concerned by that outbreak and witnessing the massive crash vaccine program in the U.S., biomedical scientists worldwide began H1N1 swine influenza vaccine research and development programs in their own countries. Going into the 1976-77 winter season, the world waited – and prepared – for an H1N1 swine influenza pandemic that never came.

By September 1976, New York State Health Department workers were unloading cartons of swine flu vaccine for distribution.

AP Photo/Jim McKnight

But that wasn’t the end of the story. As an experienced infectious disease epidemiologist, I make the case that there were unintended consequences of those seemingly prudent but ultimately unnecessary preparations.

What was odd about the H1N1 Russian flu pandemic

In an epidemiological twist, a new pandemic influenza virus did emerge, but it was not the anticipated H1N1 swine virus.

In November 1977, health officials in Russia reported that a human – not swine – H1N1 influenza strain had been detected in Moscow. By month’s end, it was reported across the entire USSR and soon throughout the world.

Compared with other influenzas, this pandemic was peculiar. First, the mortality rate was low, about a third that of most influenza strains. Second, only those younger than 26 were regularly attacked. And finally, unlike other newly emerged pandemic influenza viruses in the past, it failed to displace the existing prevalent H3N2 subtype that was that year’s seasonal flu. Instead, the two flu strains – the new H1N1 and the long-standing H3N2 – circulated side by side.

Here the story takes yet another turn. Microbiologist Peter Palese applied what was then a novel technique called RNA oligonucleotide mapping to study the genetic makeup of the new H1N1 Russian flu virus. He and his colleagues grew the virus in the lab, then used RNA-cutting enzymes to chop the viral genome into hundreds of pieces. By spreading the chopped RNA in two dimensions based on size and electrical charge, the RNA fragments created a unique fingerprint-like map of spots.

Researchers were surprised to see the ‘genetic fingerprint’ for the 1977 H1N1 Russian flu strain closely matched that of an extinct influenza virus.

Peter Palese

Much to Palese’s surprise, when they compared the spot pattern of the 1977 H1N1 Russian flu with a variety of other influenza viruses, this “new” virus was essentially identical to older human influenza H1N1 strains that had gone extinct in the early 1950s.

So, the 1977 Russian flu virus was actually a strain that had disappeared from the planet a quarter century earlier, then was somehow resurrected back into circulation. This explained why it attacked only younger people – older people had already been infected and become immune when the virus circulated decades before in its earlier incarnation.

But how did the older strain come back from extinction?

Though called the Russian flu, the virus appears to have been circulating elsewhere before being identified in the Soviet population.

Gilbert UZAN/Gamma-Rapho via Getty Images

Refining the timeline of a resurrected virus

Despite its name, the Russian flu probably didn’t really start in Russia. The first published reports of the virus were from Russia, but subsequent reports from China provided evidence that it had first been detected months earlier, in May and June of 1977, in the Chinese port city of Tientsin.

In 2010, scientists used detailed genetic studies of several samples of the 1977 virus to pinpoint the date of their most recent common ancestor. This “molecular clock” data suggested the virus initially infected people a full year earlier, in April or May of 1976.

So, the best evidence is that the 1977 Russian flu actually emerged – or more properly “re-emerged” – in or near Tientsin, China, in the spring of 1976.
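For readers curious about the arithmetic behind such dating, here is a minimal, hypothetical Python sketch of the molecular-clock idea. It is not the 2010 study’s actual model, which relied on far more sophisticated statistical methods; the genome length, substitution rate and sequence differences below are rough, illustrative values chosen only to show the logic.

```python
# A rough, illustrative sketch of molecular-clock logic -- not the 2010
# study's actual method. Two viral lineages accumulate substitutions
# independently after splitting from a common ancestor, so their pairwise
# genetic distance grows at roughly twice the per-lineage substitution rate.

GENOME_LENGTH = 13_500   # approximate influenza A genome size, in bases
SUB_RATE = 2.5e-3        # assumed substitutions per site per year (illustrative)

def years_to_common_ancestor(pairwise_differences: int) -> float:
    """Estimate years back to the most recent common ancestor of two isolates."""
    distance_per_site = pairwise_differences / GENOME_LENGTH
    return distance_per_site / (2 * SUB_RATE)

# Hypothetical example: two late-1977 isolates differing at ~100 positions.
print(f"~{years_to_common_ancestor(100):.1f} years before sampling")
```

With these assumed numbers, two isolates sampled in late 1977 that differ at about 100 positions would share a common ancestor roughly a year and a half earlier, which is the same kind of reasoning that points back to a spring 1976 re-emergence.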

A frozen lab virus

Was it simply a coincidence that within months of Pvt. Lewis’ death from H1N1 swine flu, a heretofore extinct H1N1 influenza strain suddenly reentered the human population?

Influenza virologists around the world had for years been using freezers to store influenza virus strains, including some that had gone extinct in the wild. Fears of a new H1N1 swine flu pandemic in 1976 in the United States had prompted a worldwide surge in research on H1N1 viruses and vaccines. An accidental release of one of these stored viruses was certainly possible in any of the countries where H1N1 research was taking place, including China, Russia, the U.S., the U.K. and probably others.

Years after the reemergence, Palese, the microbiologist, reflected on personal conversations he had at the time with Chi-Ming Chu, the leading Chinese expert on influenza. Palese wrote in 2004 that “the introduction of the 1977 H1N1 virus is now thought to be the result of vaccine trials in the Far East involving the challenge of several thousand military recruits with live H1N1 virus.”

Although exactly how such an accidental release may have occurred during a vaccine trial is unknown, there are two leading possibilities. First, scientists could have used the resurrected H1N1 virus as their starting material for development of a live, attenuated H1N1 vaccine. If the virus in the vaccine wasn’t adequately weakened, it could have become transmissible person to person. Another possibility is that researchers used the live, resurrected virus to test the immunity provided by conventional H1N1 vaccines, and it accidentally escaped from the research setting.

Whatever the specific mechanism of the release, the detailed location and timing of the pandemic’s origins, together with the stature of Chu and Palese as highly credible sources, make a strong case for an accidental release in China as the source of the Russian flu pandemic virus.

The H1N1 influenza virus identified at Fort Dix wasn’t the one that ended up causing a pandemic.

CDC/Dr. E. Palmer, R.E. Bates, 1976 via Getty Images

A sobering history lesson

The resurrection of an extinct but dangerous human-adapted H1N1 virus came about as the world was scrambling to prevent what was perceived to be the imminent emergence of a swine H1N1 influenza pandemic. People were so concerned about the possibility of a new pandemic that they inadvertently caused one. It was a self-fulfilling-prophecy pandemic.

I have no intent to lay blame here; indeed, my main point is that in the epidemiological fog of the moment in 1976, with anxiety mounting worldwide about a looming pandemic, a research unit in any country could have accidentally released the resurrected virus that came to be called the Russian flu. In the global rush to head off a possible new pandemic of H1N1 swine flu from Fort Dix through research and vaccination, accidents could have happened anywhere.

Of course, biocontainment facilities and policies have improved dramatically over the past half-century. But at the same time, there has been an equally dramatic proliferation of high-containment labs around the world.

Across the globe, researchers work on dangerous pathogens in labs that are part of biocontainment facilities.

AP Photo/Michael Probst

Overreaction. Unintended consequences. Making matters worse. Self-fulfilling prophecy. There is a rich variety of terms to describe how the best intentions can go awry. Still reeling from COVID-19, the world now faces new threats from cross-species jumps of avian flu viruses, mpox viruses and others. It’s critical that we be quick to respond to these emerging threats to prevent yet another global disease conflagration. Quick, but not too quick, history suggests.

Donald S. Burke, Dean Emeritus and Distinguished University Professor Emeritus of Health Science and Policy, and of Epidemiology, at the School of Public Health, University of Pittsburgh

This article is republished from The Conversation under a Creative Commons license. Read the original article.

An 83-year-old short story by Borges portends a bleak future for the internet

theconversation.com – Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis – 2024-11-19 07:22:00

Fifty years before the architecture for the web was created, Jorge Luis Borges had already imagined an analog equivalent.
Sophie Bassouls/Sygma via Getty Images

Roger J. Kreuz, University of Memphis

How will the internet evolve in the coming decades?

Fiction writers have explored some possibilities.

In his 2019 novel “Fall,” science fiction author Neal Stephenson imagined a near future in which the internet still exists. But it has become so polluted with misinformation, disinformation and advertising that it is largely unusable.

Characters in Stephenson’s novel deal with this problem by subscribing to “edit streams” – human-selected news and information that can be considered trustworthy.

The drawback is that only the wealthy can afford such bespoke services, leaving most of humanity to consume low-quality, noncurated online content.

To some extent, this has already happened: Many news organizations, such as The New York Times and The Wall Street Journal, have placed their curated content behind paywalls. Meanwhile, misinformation festers on social media platforms like X and TikTok.

Stephenson’s record as a prognosticator has been impressive – he anticipated the metaverse in his 1992 novel “Snow Crash,” and a key plot element of his “Diamond Age,” released in 1995, is an interactive primer that functions much like a chatbot.

On the surface, chatbots seem to provide a solution to the misinformation epidemic. By dispensing factual content, chatbots could supply alternative sources of high-quality information that aren’t cordoned off by paywalls.

Ironically, however, the output of these chatbots may represent the greatest danger to the future of the web – one that was hinted at decades earlier by Argentine writer Jorge Luis Borges.

The rise of the chatbots

Today, a significant fraction of the internet still consists of factual and ostensibly truthful content, such as articles and books that have been peer-reviewed, fact-checked or vetted in some way.

The developers of large language models, or LLMs – the engines that power bots like ChatGPT, Copilot and Gemini – have taken advantage of this resource.

To perform their magic, however, these models must ingest immense quantities of high-quality text for training purposes. A vast amount of verbiage has already been scraped from online sources and fed to the fledgling LLMs.

The problem is that the web, enormous as it is, is a finite resource. High-quality text that hasn’t already been strip-mined is becoming scarce, leading to what The New York Times called an “emerging crisis in content.”

This has forced companies like OpenAI to enter into agreements with publishers to obtain even more raw material for their ravenous bots. But according to one prediction, a shortage of additional high-quality training data may strike as early as 2026.

As the output of chatbots ends up online, these second-generation texts – complete with made-up information called “hallucinations,” as well as outright errors, such as suggestions to put glue on your pizza – will further pollute the web.

And if a chatbot hangs out with the wrong sort of people online, it can pick up their repellent views. Microsoft discovered this the hard way in 2016, when it had to pull the plug on Tay, a bot that started repeating racist and sexist content.

Over time, all of these issues could make online content even less trustworthy and less useful than it is today. In addition, LLMs that are fed a diet of low-calorie content may produce even more problematic output that also ends up on the web.

An infinite − and useless − library

It’s not hard to imagine a feedback loop that results in a continuous process of degradation as the bots feed on their own imperfect output.

A July 2024 paper published in Nature explored the consequences of training AI models on recursively generated data. It showed that “irreversible defects” can lead to “model collapse” for systems trained in this way – much like a copy of an image, and a copy of that copy, and a copy of that copy, will progressively lose fidelity to the original.
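To make the “copy of a copy” analogy concrete, here is a toy Python sketch, not the Nature paper’s experiment, in which a simple statistical model stands in for a language model and is repeatedly refit on its own output. Because each generation learns from a finite sample of the previous one, errors compound and the fitted distribution drifts away from the original.

```python
# A toy illustration of recursive training, standing in for the far more
# complex setup in the Nature paper: a Gaussian "model" is refit, generation
# after generation, on samples drawn from the previous generation's fit.
import numpy as np

rng = np.random.default_rng(seed=0)
mu, sigma = 0.0, 1.0      # generation 0: the original data distribution
n_samples = 20            # small, finite "training set" at each generation

for generation in range(1, 41):
    synthetic = rng.normal(mu, sigma, n_samples)   # data produced by the last model
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on that synthetic data
    if generation % 10 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

# Typical behavior: the mean wanders away from 0 and the spread tends to
# shrink -- each generation preserves less of the original distribution,
# a toy analogue of "model collapse."
```

The point of the toy model is only that finite sampling plus refitting loses information each round; real language models are vastly more complex, but the recursive structure is the same.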

How bad might this get?

Consider Borges’ 1941 short story “The Library of Babel.” Fifty years before computer scientist Tim Berners-Lee created the architecture for the web, Borges had already imagined an analog equivalent.

In his 3,000-word story, the writer imagines a world consisting of an enormous and possibly infinite number of hexagonal rooms. The bookshelves in each room hold uniform volumes that must, its inhabitants intuit, contain every possible permutation of letters in their alphabet.

In Borges’ imaginary, endlessly expansive library of content, finding something meaningful is like finding a needle in a haystack.
aire images/Moment via Getty Images

Initially, this realization sparks joy: By definition, there must exist books that detail the future of humanity and the meaning of life.

The inhabitants search for such books, only to discover that the vast majority contain nothing but meaningless combinations of letters. The truth is out there – but so is every conceivable falsehood. And all of it is embedded in an inconceivably vast amount of gibberish.

Even after centuries of searching, only a few meaningful fragments are found. And even then, there is no way to determine whether these coherent texts are truths or lies. Hope turns into despair.

Will the web become so polluted that only the wealthy can afford accurate and reliable information? Or will an infinite number of chatbots produce so much tainted verbiage that finding accurate information online becomes like searching for a needle in a haystack?

The internet is often described as one of humanity’s great achievements. But like any other resource, it’s important to give serious thought to how it is maintained and managed – lest we end up confronting the dystopian vision imagined by Borges.

Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A better understanding of what people do on their devices is key to digital well-being

theconversation.com – Rinanda Shaleha, Doctoral student in the College of Health and Human Development, Penn State – 2024-11-19 07:21:00

What you do on your screens matters as much as how much time you spend on them.
Klaus Vedfelt/DigitalVision via Getty Images

Rinanda Shaleha, Penn State

In an era where digital devices are everywhere, the term “screen time” has become a buzz phrase in discussions about technology’s impact on people’s lives. Parents are concerned about their children’s screen habits. But what if this entire approach to screen time is fundamentally flawed?

While researchers have made advances in measuring screen use, a detailed critique of the research in 2020 revealed major issues in how screen time is conceptualized, measured and studied. I study how digital technology affects human cognition and emotions. My ongoing research with cognitive psychologist Nelson Roque builds on that critique’s findings.

We categorized existing screen-time measures, mapping them to attributes such as whether they are duration-based or context-specific, and we are now studying how these measures relate to health outcomes such as anxiety, stress, depression, loneliness, mood and sleep quality. The goal is a clearer framework for understanding screen time. We believe that grouping all digital activities together misses how different types of screen use affect people.

By applying this framework, researchers can better identify which digital activities are beneficial or potentially harmful, allowing people to adopt more intentional screen habits that support well-being and reduce negative mental and emotional health effects.

Screen time isn’t one thing

Screen time, at first glance, seems easy to understand: It’s simply the time spent on devices with screens such as smartphones, tablets, laptops and TVs. But this basic definition hides the variety within people’s digital activities. To truly understand screen time’s impact, you need to look closer at specific digital activities and how each affects cognitive function and mental health.

In our research, we divide screen time into four broad categories: educational use, work-related use, social interaction and entertainment.

For education, activities like online classes and reading articles can improve cognitive skills like problem-solving and critical thinking. Digital tools like mobile apps can support learning by boosting motivation, self-regulation and self-control.

But these tools also pose challenges, such as distracting learners and contributing to poorer recall compared with traditional learning methods. For young users, screen-based learning may even have negative impacts on development and their social environment.

Screen time for work, like writing reports or attending virtual meetings, is a central part of modern life. It can improve productivity and enable remote work. However, prolonged screen exposure and multitasking may also lead to stress, anxiety and cognitive fatigue.

Screen use for social connection helps people interact with others through video chats, social media or online communities. These interactions can promote social connectedness and even improve health outcomes such as decreased depressive symptoms and improved glycemic control for people with chronic conditions. But passive screen use, like endless social media scrolling, can lead to negative experiences such as cyberbullying, social comparison and loneliness, especially for teens.

Screen use for entertainment provides relaxation and stress relief. Mindfulness apps or meditation tools, for example, can reduce anxiety and improve emotional regulation. Creative digital activities, like graphic design and music production, can reduce stress and improve mental health. However, too much screen use may reduce well-being by limiting physical activity and time for other rewarding pursuits.
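As a purely hypothetical illustration, and not the authors’ actual research instrument, a framework along these lines might record screen use as categorized episodes rather than a single hours-per-day total:

```python
# Hypothetical sketch: recording screen use as categorized episodes rather
# than one undifferentiated total, so analyses can distinguish activity
# types and active vs. passive use.
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    EDUCATIONAL = "educational"
    WORK = "work-related"
    SOCIAL = "social interaction"
    ENTERTAINMENT = "entertainment"

@dataclass
class ScreenEpisode:
    category: Category
    minutes: float
    active: bool   # active engagement vs. passive scrolling or viewing

def minutes_by_category(episodes):
    """Sum minutes within each category instead of reporting one grand total."""
    totals = {c: 0.0 for c in Category}
    for ep in episodes:
        totals[ep.category] += ep.minutes
    return totals

day = [
    ScreenEpisode(Category.WORK, 240, active=True),
    ScreenEpisode(Category.SOCIAL, 45, active=False),        # passive scrolling
    ScreenEpisode(Category.ENTERTAINMENT, 60, active=True),  # e.g., a meditation app
]
print(minutes_by_category(day))
```

Structuring the data this way is what lets researchers ask whether, say, 45 minutes of passive scrolling relates to well-being differently than 45 minutes of video chatting with family.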

Context matters

Screen time affects people differently based on factors like mood, social setting, age and family environment. Your emotions before and during screen use can shape your experience. Positive interactions can lift your mood, while loneliness might deepen with certain online activities. For example, we found that differences in age and stress levels affect how readily people become distracted on their devices. Alerts and other changes distract users, which makes it more challenging to focus on tasks.

The social context of screen use also matters. Watching a movie with family can strengthen bonds, while using screens alone can increase feelings of isolation, especially when it replaces face-to-face interactions.

Family influence plays a role, too. For example, parents’ screen habits affect their children’s screen behavior, and structured parental involvement can help reduce excessive use. This points to the value of such involvement, along with mindful social contexts, in managing screen time for healthier digital interactions.

Shared screen time with family and friends can boost well-being.
kate_sept2004/E+ via Getty Images

Consistency and nuance

Technology now lets researchers track screen use accurately, but simply counting hours doesn’t give us the full picture. Even when we measure specific activities, like social media or gaming, studies don’t often capture engagement level or intent. For example, someone might use social media to stay informed or to procrastinate.

Studies on screen time often vary in how they define and categorize it. Some focus on total screen exposure without differentiating between activities. Others examine specific types of use but may not account for the content or context. This lack of consistency in defining screen time makes it hard to compare studies or generalize findings.

Understanding screen use requires a more nuanced approach than tracking the amount of time people spend on their screens. Recognizing the different effects of specific digital activities and distinguishing between active and passive use are crucial steps. Using standardized definitions and combining quantitative data with personal insights would provide a fuller picture. Researchers can also study how screen use affects people over time.

For policymakers, this means developing guidelines that move beyond one-size-fits-all limits by focusing on recommendations suited to specific activities and individual needs. For the rest of us, this awareness encourages a balanced digital diet that blends enriching online and offline activities for better well-being.

Rinanda Shaleha, Doctoral student in the College of Health and Human Development, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.

From using plant rinds to high-tech materials, bike helmets have improved significantly over the past 2 centuries

theconversation.com – Jud Ready, Principal Research Engineer in Materials Science and Engineering, Georgia Institute of Technology – 2024-11-18 07:27:00

Modern bike helmets are made through complex materials engineering.
Johner Images via Getty Images

Jud Ready, Georgia Institute of Technology

Imagine – it’s the mid-1800s, and you’re riding your high-wheeled, penny-farthing bicycle down a dusty road. Sure, it may have some bumps, but if you lose your balance, you’re landing on a relatively soft dirt road. But as the years go by, these roads are replaced with pavement, cobblestones, bricks or wooden slats. All these materials are much harder and still quite bumpy.

As paved roads grew more common across the U.S. and Europe, bicyclists started to suffer gruesome skull fractures and other serious head injuries during falls.

As head injuries became more common, people started seeking out head protection. But the first bike helmets were very different from the helmets of today.

I’m a materials engineer who teaches a course at Georgia Tech about materials science and engineering in sports. The class covers many topics, but particularly helmets, as they’re used in many different sports, including cycling, and the materials they’re made of play an important role in how they work. Over the decades, people have used a wide variety of materials to protect their heads while biking, and companies continue to develop new and innovative materials.

In the beginning, there was the pith helmet.

Pith helmets

The first head protection concept introduced to the biking world was a hat made from pith, which is the spongy rind found in the stem of the sola plant, Aeschynomene aspera. Pith helmet craftsmen would press the pith into sheets and laminate it across dome-shaped molds to form a helmet shape. Then, they’d cover the hats in canvas as a form of weatherproofing.

Hats made out of pith were used by militaries as well as for head protection while biking.
Auckland Museum, CC BY-SA

Pith helmets were far from what we would consider a helmet today, but they persisted until the early 20th century, when bicycle-racing clubs emerged. Since pith helmets offered little to no ventilation, the racers began to use halo-shaped leather helmets. These had better airflow and were more comfortable, although they weren’t much better at protecting the head.

Leather strip bike helmets were made in the 1930s.
Museums Victoria, CC BY-SA

Leather halo helmets

The initial concept for the halo helmet used a simple leather strip wrapped around the forehead. But these halo helmets quickly evolved, as riders arranged additional strips longitudinally from front to back. They wrapped the leather bands in wool.

For better head protection, the helmet makers then started adding more layers of leather strips to increase the helmet’s thickness. Eventually, they added different materials such as cotton, foam and other textiles into these leather layers for better protection.

While these had better airflow than the pith hats, the leather “hairnet” helmets continued to offer very little protection during a fall on a paved surface. And, like pith, the leather helmets degraded when exposed to sweat and rain.

Despite these drawbacks, leather strip helmets dominated the market for several decades as cycling continued to evolve throughout the 20th century.

Then, in the 1970s, the Snell Foundation, a nonprofit dedicated to testing motorcycle helmets, released new standards for bike helmets. The foundation set its standards so high that only lightweight motorcycle helmets could pass – and most bicyclists refused to wear those.

New materials and new helmets

The motorcycle equipment manufacturing company Bell Motorsports responded to the new standards by releasing the Bell Biker in 1975. This helmet used expanded polystyrene, or EPS. EPS is the same foam used to manufacture styrofoam coolers. It’s lightweight and absorbs energy well.

Constructing the Bell Biker involved spraying EPS into a dome-shaped mold. The manufacturers used small pellets of a very hard plastic – polycarbonate, or PC – to mold an outer shell and then adhere it to the outside of the EPS.

Expanded polystyrene, or EPS, is a foam used in styrofoam coolers as well as the core of bike helmets.
Tiia Monto/Wikimedia Commons, CC BY-SA

Unlike the pith and leather helmets, this design was lightweight, load bearing, impact absorbing and well ventilated. The PC shell provided a smooth surface so that during a fall, the helmet would skid along the pavement instead of getting jerked around and caught, which could cause abrupt head rotation and lead to concussions and other head and neck injuries.

Over the next two decades, as cycling became more popular, helmet manufacturers tried to strike the perfect balance between lightweight and ventilated helmets, while simultaneously providing impact protection.

In order to decrease weight, a company called Giro Sport Design created an all-EPS helmet covered by a thin lycra fabric cover instead of a hard PC shell. This design eliminated the weight of the PC shell and improved ventilation.

In 1989, a company called Pro Tec introduced a helmet with a nylon mesh infused in the EPS foam core. The nylon mesh dramatically increased the helmet’s structural support without the added weight of the PC shell.

Many racing cyclists found teardrop-style helmets to be more aerodynamic.
Bongarts/Getty Images, CC BY-NC-ND

Meanwhile, as cycling became more competitive, many riders and manufacturers started designing more aerodynamic helmets using the existing materials. A revolutionary teardrop style helmet debuted in the 1984 Olympics.

Now, even casual biking enthusiasts will don teardrop helmets.

Helmets on the market today

Helmet makers continue to innovate. Today, many commercial brands use a hard polyethylene terephthalate, or PET, shell around the EPS foam in place of a PC shell to increase the helmet’s protection and lifespan, while decreasing cost.

Meanwhile, some brands still use PC shells. Instead of gluing them to the EPS foam, the shell serves as the mold itself, with the EPS expanding to fit inside it. Manufacturing helmets this way eliminates several process steps, as well as any gaps between the foam and shell. This process makes the helmet both stronger and cheaper to manufacture.

As helmets evolve to provide more protection with still lighter weight, materials called copolymers, such as acrylonitrile-butadiene-styrene, are replacing PC and PET shell materials.

Materials that are easier and cheaper to manufacture, such as expanded polyurethane and expanded polypropylene, are also starting to replace the ubiquitous EPS core.

Just as the leather and pith helmets would look strange to a cyclist today, a century from now, bike helmets could be made with entirely new and innovative materials.

Jud Ready, Principal Research Engineer in Materials Science and Engineering, Georgia Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.
