Catholic cardinals play a key role in secular politics as well as the Catholic Church – and the importance of Pope Francis’ choice to head the church in DC

theconversation.com – Joanne M. Pierce, Professor Emerita of Religious Studies, College of the Holy Cross – 2025-01-15 07:44:00

Cardinal Robert McElroy, who will head the Catholic Archdiocese of Washington, D.C.
AP Photo/Lenny Ignelzi, File

Joanne M. Pierce, College of the Holy Cross

Pope Francis recently appointed Cardinal Robert McElroy, a harsh critic of President-elect Donald Trump’s immigration policy, to head the Catholic Archdiocese of Washington, D.C.

The move has led to concerns among some Catholics about how he might interact with the new administration, especially since Trump has announced plans to appoint Brian Burch, the head of a conservative Catholic political group, as ambassador to the Vatican.

As a specialist on medieval Catholicism, I am aware of the important roles that cardinals have played over the centuries in church administration and secular politics.

In addition to pastoral ministry, cardinals serving as bishops in their own countries can play an important part in shaping public opinion. Others are bishops who have served or still serve as papal ambassadors in various countries around the world.

Ancient origins

After the legalization of Christianity in the early fourth century by the Emperor Constantine, Christianity spread rapidly in both the Western and Eastern parts of the Roman Empire. Bishops, who headed the central churches in cities and were supported by the emperors, met together in several general – ecumenical – councils to condemn heresies and assign authority more clearly.

By the end of the fifth century, bishops of five major cities, including Rome, were given wider authority over an expanded geographical territory. They were called patriarchs, from the Greek and Latin words meaning “father.”

By this time, Rome had survived numerous attacks from pagan European tribes and the Asian Huns before finally succumbing to Germanic barbarians in 476 C.E.

During this tumultuous century, the church had assumed more secular authority and had largely taken over Rome’s civil administration.

In fact, Justinian, emperor of the Eastern Roman Empire in the sixth century, referred to the pope as patriarch not just “of Rome,” but of “the West,” implicitly extending papal jurisdiction over all the churches of the former Western Empire; the popes themselves did not use the title until the seventh century.

And as Roman Christianity spread through Western Europe, so did this intertwining of political activity and religious authority.

First cardinals

In its earliest centuries, Christianity developed three classes of ordained clerics, each with different responsibilities: Bishops oversaw churches in a specific geographic area; priests ran individual local church communities – parishes; and deacons assisted the priests, especially in charitable outreach.

By the seventh century, deacons from seven of the oldest and most important churches of Rome served as special advisers to the popes. They were called cardinals, from Latin “cardo” – meaning hinge – and “cardinalis,” meaning key or principal.

Later, priests and bishops were also chosen for this honor. Over time, cardinals became powerful members of the church in Rome and Italian Catholicism.

After Christianity was legalized in the fourth century, the faith expanded rapidly beyond Rome’s old imperial boundaries. However, cardinals were not named from these newly converted lands until much later, in the 12th century.

Missionaries to Europe

Popes began to send missionaries to convert other pagan peoples in Europe. As early as the fourth and fifth centuries, some leaders of various Germanic tribes – like Clovis, king of the Franks – accepted baptism for themselves.
And thanks to another papal missionary, Augustine of Canterbury, the early Celtic church in England adopted Roman Christian practice in the seventh century.

However, the 10th and early 11th centuries were a dark time for the papacy. Politically powerful families in Rome competed to have relatives chosen as pope, and there was no set mechanism for electing one. Some of these popes led immoral lives; at one point, a 20-year-old was chosen as Pope Benedict IX, who then sold the office to another cleric.

The power struggle for the papacy, not missionary activity, had become the main focus for Romans. But by the end of the 11th century, with the help of powerful European leaders called the Holy Roman Emperors, a series of reform-minded clerics were named pope.

One of them, Pope Nicholas II, set new rules for the selection of a new pope: He was to be elected by an assembly of cardinals. Later, a two-thirds majority was specified for election.

Popes also refocused their efforts on missionary activity. One result was the creation of the first cardinals outside of Italy: in France, England and Germany. However, they were heavily outnumbered by Italians. In the later medieval period, cardinals from Austria, Hungary, Poland, Portugal and Spain would also join what came to be known as the College of Cardinals.

Political activity

Increasingly, cardinals were treated as important dignitaries and addressed as “Eminence,” even though many were not the sons of kings or nobles. Certainly, most of them became involved in European politics of the later medieval period, since secular and religious interests often intertwined. Many became wealthy patrons of the arts and architectural projects.

Not only were cardinals the primary papal advisers, but some also served in secular political positions. One of the best-known is Thomas Wolsey, who became Lord Chancellor of England in the 16th century under King Henry VIII, despite being a commoner.

Two cardinals also served as chief ministers to King Louis XIII of France in the 17th century: the Frenchman Armand-Jean du Plessis de Richelieu and Jules Mazarin, an Italian by birth.

Even into the modern period, naming a foreign cleric as cardinal was taken as a measure of the importance of their country in the Catholic world. For example, the first American cardinal, John McCloskey, was created cardinal in 1875, some 100 years after the birth of the United States. The first from strongly Catholic Latin America was named in 1906, when a Brazilian bishop, Joaquim Arcoverde de Albuquerque Cavalcanti, was created cardinal.

Sri Lankan Cardinal Malcolm Ranjith receives the red three-cornered biretta hat from Pope Benedict XVI during a ceremony inside St. Peter’s Basilica, at the Vatican, on Nov. 20, 2010.
AP Photo/Pier Paolo Cito

The Philippines, another strongly Catholic country, did not have a cardinal until Rufino J. Santos in 1960. The small Catholic community in pluralistic Sri Lanka was not represented by a cardinal until 2010, when Malcolm Ranjith was chosen.

Contemporary issues

Since 1962, only bishops can be created cardinals; priests must agree to be ordained as bishops before being made cardinal.

Some nominees have refused the honor because they were unwilling to be ordained bishops for various reasons: health, advanced age, or because they didn’t want to leave their religious communities. Occasional exceptions can be made to this rule. Cardinal Avery Dulles, for example, was a Jesuit and over 80 years old when named; most recently, Timothy Radcliffe, a 79-year-old Dominican priest and theologian, received the same exception. Both were allowed to remain priests.

Today, many cardinals are engaged in pastoral ministry, as bishops of a diocese or archbishops of a larger archdiocese. Other bishops and cardinals serve in one of several departments, called dicasteries, within the Vatican bureaucracy.

In addition, there are other offices within the College of Cardinals. For example, the leader or head of the college is called the dean; one of his duties is to coordinate the conclave that will be convoked in the event a pope dies or resigns.

Cardinals are appointed for life, although they can resign, voluntarily or under pressure. Resignation is rare; since 1900, only three have done so.

Since his election in 2013, Francis has held 10 consistories – special assemblies of the College of Cardinals – appointing a majority of the cardinals under 80 years old who will be eligible to elect his successor.

Not only has Francis chosen like-minded progressive candidates, but he has also included candidates from countries that are more marginalized or torn by violence. Most recently, cardinals have been selected from the Ivory Coast and Ukraine; another is a Chilean-born archbishop of Palestinian descent. These new cardinals contribute new and, perhaps, challenging perspectives to the once-heavily European College of Cardinals.

I expect that in the future, all these cardinals, including Cardinal McElroy in his key position, will play an important role in supporting or criticizing the politics of both church and state.

Joanne M. Pierce, Professor Emerita of Religious Studies, College of the Holy Cross

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Meta’s shift from fact-checking to crowdsourcing spotlights competing approaches in the fight against misinformation and hate speech

theconversation.com – Anjana Susarla, Professor of Information Systems, Michigan State University – 2025-01-15 07:46:00

Meta stirred up controversy when it ditched fact-checking.

Chesnot/Getty Images

Anjana Susarla, Michigan State University

Meta’s decision to change its content moderation policies by replacing centralized fact-checking teams with user-generated community labeling has stirred up a storm of reactions. But taken at face value, the changes raise questions about the effectiveness of both Meta’s old policy, fact-checking, and its new one, community notes.

With billions of people worldwide accessing their services, platforms such as Meta’s Facebook and Instagram have a responsibility to ensure that users are not harmed by consumer fraud, hate speech, misinformation or other online ills. Given the scale of this problem, combating online harms is a serious societal challenge. Content moderation plays a role in addressing these online harms.

Moderating content involves three steps. The first is scanning online content – typically, social media posts – to detect potentially harmful words or images. The second is assessing whether the flagged content violates the law or the platform’s terms of service. The third is intervening in some way. Interventions include removing posts, adding warning labels to posts, and diminishing how much a post can be seen or shared.
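
To make these three steps concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the keyword list, the severity thresholds, the function names – and merely stands in for the machine-learning classifiers and human review queues that real platforms use.

```python
# Purely illustrative three-step moderation pipeline. All names and
# rules are hypothetical; real platforms rely on trained classifiers
# and human reviewers, not keyword lists.

from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()
    LABEL = auto()     # add a warning label to the post
    DOWNRANK = auto()  # diminish how much the post is seen or shared
    REMOVE = auto()

@dataclass
class Post:
    post_id: str
    text: str

SUSPECT_TERMS = {"miracle cure", "wire me money", "stolen election"}

def scan(post: Post) -> bool:
    """Step 1: detect potentially harmful words or images."""
    text = post.text.lower()
    return any(term in text for term in SUSPECT_TERMS)

def assess(post: Post) -> float:
    """Step 2: judge whether flagged content violates the law or the
    platform's terms of service. Returns a severity score in [0, 1]."""
    text = post.text.lower()
    hits = sum(term in text for term in SUSPECT_TERMS)
    return min(1.0, hits / 2)

def intervene(severity: float) -> Action:
    """Step 3: act in proportion to the assessed severity."""
    if severity >= 1.0:
        return Action.REMOVE
    if severity >= 0.5:
        return Action.DOWNRANK
    if severity > 0.0:
        return Action.LABEL
    return Action.ALLOW

def moderate(post: Post) -> Action:
    return intervene(assess(post)) if scan(post) else Action.ALLOW

print(moderate(Post("1", "This miracle cure works, wire me money!")))  # Action.REMOVE
print(moderate(Post("2", "Lovely weather today.")))                    # Action.ALLOW
```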

Content moderation can range from user-driven moderation models on community-based platforms such as Wikipedia to centralized content moderation models such as those used by Instagram. Research shows that both approaches are a mixed bag.

Does fact-checking work?

Meta’s previous content moderation policy relied on third-party fact-checking organizations, which brought problematic content to the attention of Meta staff. Meta’s U.S. fact-checking organizations were AFP USA, Check Your Fact, Factcheck.org, Lead Stories, PolitiFact, Science Feedback, Reuters Fact Check, TelevisaUnivision, The Dispatch and USA TODAY.

Fact-checking relies on impartial expert review. Research shows that it can reduce the effects of misinformation but is not a cure-all. Also, fact-checking’s effectiveness depends on whether users perceive fact-checkers and fact-checking organizations as trustworthy.

Crowdsourced content moderation

In his announcement, Meta CEO Mark Zuckerberg highlighted that content moderation at Meta would shift to a community notes model similar to that of X, formerly Twitter. X’s community notes is a crowdsourced fact-checking approach that allows users to write notes to inform others about potentially misleading posts.
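
X has published the core idea behind its Community Notes scoring: a “bridging” matrix-factorization model in which a note surfaces as helpful only when raters who usually disagree both rate it highly, which shows up as a large note intercept once viewpoint is factored out. The sketch below is a heavily simplified rendering of that idea; the toy data and hyperparameters are invented, and the production system layers many safeguards on top.

```python
# Simplified "bridging" scorer in the spirit of X's published Community
# Notes model: predicted rating = mu + user_bias + note_bias +
# user_factor * note_factor. Camp-specific appeal is absorbed by the
# factor term, so a large note_bias (intercept) means the note is rated
# helpful across viewpoints. Illustrative only, not the real system.

import numpy as np

def score_notes(ratings, epochs=4000, lr=0.03, lam=0.03, seed=0):
    """ratings: (users x notes) array with entries in {0, 1}, or np.nan
    where a user did not rate a note. Returns per-note intercepts."""
    rng = np.random.default_rng(seed)
    n_users, n_notes = ratings.shape
    mask = ~np.isnan(ratings)
    r = np.nan_to_num(ratings)

    mu = 0.0
    bu, bn = np.zeros(n_users), np.zeros(n_notes)
    fu = rng.normal(0, 0.1, (n_users, 1))
    fn = rng.normal(0, 0.1, (n_notes, 1))

    for _ in range(epochs):  # full-batch gradient descent
        pred = mu + bu[:, None] + bn[None, :] + fu @ fn.T
        err = np.where(mask, r - pred, 0.0)
        mu += lr * err.sum() / mask.sum()
        bu += lr * (err.sum(axis=1) / mask.sum(axis=1) - lam * bu)
        bn += lr * (err.sum(axis=0) / mask.sum(axis=0) - lam * bn)
        fu += lr * (err @ fn - lam * fu)
        fn += lr * (err.T @ fu - lam * fn)
    return bn

# Toy data: users 0-2 and 3-5 are opposed camps. Note 0 appeals only to
# camp A, note 1 only to camp B, and note 2 to both camps.
ratings = np.array([
    [1, 0, 1],
    [1, 0, 1],
    [1, np.nan, 1],
    [0, 1, 1],
    [0, 1, np.nan],
    [0, 1, 1],
], dtype=float)

print(np.round(score_notes(ratings), 2))
# Note 2's intercept should come out highest: it alone bridges the camps.
```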

Studies are mixed on the effectiveness of X-style content moderation efforts. A large-scale study found little evidence that the introduction of community notes significantly reduced engagement with misleading tweets on X. Rather, it appears that such crowd-based efforts might be too slow to effectively reduce engagement with misinformation in the early and most viral stage of its spread.

There have been some successes from quality certifications and badges on platforms. However, community-provided labels might not be effective in reducing engagement with misinformation, especially when they’re not accompanied by appropriate training about labeling for a platform’s users. Research also shows that X’s Community Notes is subject to partisan bias.

Crowdsourced initiatives such as the community-edited online reference Wikipedia depend on peer feedback and rely on having a robust system of contributors. As I have written before, a Wikipedia-style model needs strong mechanisms of community governance to ensure that individual volunteers follow consistent guidelines when they authenticate and fact-check posts. People could game the system in a coordinated manner and up-vote interesting and compelling but unverified content.

Misinformation researcher Renée DiResta analyzes Meta’s change in content moderation policy.

Content moderation and consumer harms

A safe and trustworthy online space is akin to a public good, but without motivated people willing to invest effort for the greater common good, the overall user experience could suffer.

Algorithms on social media platforms aim to maximize engagement. However, given that policies that encourage engagement can also result in harm, content moderation also plays a role in consumer safety and product liability.

This aspect of content moderation has implications for businesses that use Meta either for advertising or to connect with their consumers. Content moderation is also a brand safety issue, because platforms have to balance their desire to keep the social media environment safe against their desire for greater engagement.

AI content everywhere

Content moderation is likely to be further strained by growing amounts of content generated by artificial intelligence tools. AI detection tools are flawed, and developments in generative AI are challenging people’s ability to differentiate between human-generated and AI-generated content.

In January 2023, for example, OpenAI launched a classifier that was supposed to differentiate between texts generated by humans and those generated by AI. However, the company discontinued the tool in July 2023 due to its low accuracy.

There is potential for a flood of inauthentic accounts – AI bots – that exploit algorithmic and human vulnerabilities to monetize false and harmful content. For example, they could commit fraud and manipulate opinions for economic or political gain.

Generative AI tools such as ChatGPT make it easier to create large volumes of realistic-looking social media profiles and content. AI-generated content primed for engagement can also exhibit significant racial and gender biases. In fact, Meta faced a backlash for its own AI-generated profiles, with commentators labeling them “AI-generated slop.”

More than moderation

Regardless of the type of content moderation, the practice alone is not effective at reducing belief in misinformation or at limiting its spread.

Ultimately, research shows that a combination of fact-checking approaches, in tandem with audits of platforms and partnerships with researchers and citizen activists, is important in ensuring safe and trustworthy community spaces on social media.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Insurance for natural disasters is failing homeowners − I don’t have the answers, but I do know the right questions to ask

theconversation.com – Jay Feinman, Distinguished Professor of Law Emeritus, Rutgers University – 2025-01-15 07:46:00

Jay Feinman, Rutgers University

The wildfires that have devastated large parts of Los Angeles County have drawn fresh attention to the struggles many Americans face insuring their homes.

Since 2022, seven of the 12 largest insurance companies have stopped issuing new policies to homeowners in California, citing increased risks due to climate change. California isn’t alone: The same thing has happened in other vulnerable states, including Louisiana and Florida. The proportion of Americans without home insurance has risen from 5% to 12% since 2019. Meanwhile, those fortunate enough to have insurance are paying more than ever: Premiums in California, like elsewhere, have increased dramatically over the past five years.

When the private insurance market fails to provide coverage, the government often comes in to fill the gap. For example, the National Flood Insurance Program was established back in the 1960s because almost all private insurers excluded flood coverage. Meanwhile, the California FAIR Plan, which serves more than 450,000 Californians, is a typical state-created insurer of last resort. Such programs, which are available in many states, offer limited coverage to people who can’t get private insurance.

But the sheer scale of need means it’s hard for public programs to stay afloat. It’s not inconceivable that the recent wildfires could exceed the reserves and reinsurance available to the California FAIR Plan. Because of the way the plan is set up, that would force other insurers – and ultimately homeowners – to make up the difference.

These are tricky problems, and – speaking as an expert in insurance – I can’t say I have answers. But I do know the right questions to ask. And that’s a crucial first step if you want to find solutions.

What is insurance for, anyway?

One of the most important questions is also the most basic: What are the goals of insurance?

Insurance is a financial product that allows people to share risk – meaning that if a catastrophe strikes any one person, they won’t have to bear the costs alone. But it’s not just about money. Even if most people don’t realize it, every form of insurance embodies values and serves public policy goals. This often requires making social, political and even moral trade-offs.

What is the problem we’re trying to solve?

The first step in solving a problem is to identify it. When it comes to insurance, this isn’t always easy. For example, “Homeowners need insurance coverage that they can’t afford in the private market” might seem like a good description of the problem. But it’s not. This is because some homes in disaster-prone areas are simply too risky to insure.

Imagine a home in a coastal area that floods over and over, for example. If you were an insurer, how much would you charge for that policy? When a house is subject to repeated losses, it makes more economic sense to buy and demolish it instead.

Defining the problem carefully also helps to clarify the values at stake. For example, one value is protecting the investments of current homeowners – particularly, say, long-time, elderly residents. But another value is pricing risk correctly, so people don’t move into dangerous developments.

Put more broadly, one value is recognizing society’s collective responsibility toward people who suffer financial distress, and another is promoting fair and efficient use of social resources. These values can be in conflict.

What does the government have to do with insurance?

Back in 1881, in his classic lectures on The Common Law, Supreme Court Justice Oliver Wendell Holmes Jr. said:

The state might conceivably make itself a mutual insurance company against accidents and distribute the burden of its citizens’ mishaps among all its members. There might be a pension for paralytics, and state aid for those who suffered in person or estate from tempest or wild beasts.

Holmes’ own position was clear: “The state does none of these things,” he wrote – and it should not. This strain of individualism has remained strong in U.S. politics: Individual liberty, personal responsibility and economic opportunity are the foundations of American life, individualists say, so each person should win or lose on their own.

Under this approach, the private insurance market bases its pooling, risk classification and pricing mostly on how much risk each policyholder presents, so that homes in wildfire-prone areas are charged higher premiums. In theory, this is both morally sound and economically efficient, since each policyholder bears the cost of their own risks. But when the private market fails – as happened with flood insurance – the government has a strong incentive to step in.

Today, as an empirical matter, Holmes’ statement couldn’t be more wrong. The state does, in fact, make itself “a mutual insurance company against accidents” and provides a “pension for paralytics,” through Medicaid, Social Security Disability Insurance and other programs. And in California, as elsewhere, the government does provide aid for those who “suffered in estate … from tempest,” through the Federal Emergency Management Agency and other entities.

Since at least the New Deal, there has been broad recognition that some level of collective responsibility is essential; the only questions are where and how much. In the health insurance realm, for example, the Affordable Care Act provides subsidized health insurance for many Americans, and changing Medicare is a political third rail.

Public policy on disaster losses is situated between the two extremes of letting losses lie and having the state assume all of the burdens of those losses. Often policymakers and researchers see insurance or insurance-like plans as solutions – whether provided by a public entity or involving a mixed public-private program. FEMA, for example, operates the National Flood Insurance Program in cooperation with private insurers and also gives direct grants for mitigation of flood damage.

What should a public insurance solution look like?

Sometimes one question leads to another, and that’s the case here. In my research, I’ve identified more than a dozen questions that policymakers must answer in order to design an effective public solution to disaster insurance. Three questions are most important:

• What are the goals of the insurance?

• Who is being insured?

• How are policyholders and their risks classified?

Let’s start with the first question: What are the goals of the insurance? As I mentioned earlier, any form of insurance faces trade-offs and limits.

When an insurance solution has been adopted rather than some other form of intervention, a primary goal is to compensate the policyholder for a loss. But that’s not the only goal. For example, insurance often aims to reduce losses in addition to paying if they occur. Insurers have many ways to shape behavior, such as charging lower premiums for homeowners who keep their property free of flammable brush. Because many of these behaviors affect other people as well, they generate a social benefit. And since insurance has social benefits, how those benefits are distributed – along race, gender, class and other lines – is also important.

A home in Altadena, Calif., is consumed by flames due to the Eaton Fire on Jan. 8, 2025.
Jon Putman/NurPhoto via Getty Images

That leads to the second key question: Who is being insured?

Insurance involves transferring risk from an individual to a larger group of people who can share the risk. Insurance experts call this “risk pooling.” Pools that are too small will struggle because there aren’t enough people to share the burden.
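
A quick simulation shows why pool size matters. The figures are invented for the example – a 2% annual chance of a $300,000 total loss per home – but the pattern is general: every member’s expected share of the losses is the same in any pool, while the year-to-year swings a member bears shrink roughly with the square root of the pool size.

```python
# Illustrative risk-pooling simulation with invented figures: each home
# faces a 2% annual chance of a $300,000 total loss, and each year's
# total losses are shared equally among the pool's members.

import numpy as np

rng = np.random.default_rng(42)
P_LOSS, LOSS_VALUE, YEARS = 0.02, 300_000, 10_000

for pool_size in (10, 100, 10_000):
    # Number of homes destroyed in each simulated year.
    n_events = rng.binomial(pool_size, P_LOSS, size=YEARS)
    per_member = n_events * LOSS_VALUE / pool_size
    print(f"pool of {pool_size:>6}: mean share ${per_member.mean():,.0f}, "
          f"std ${per_member.std():,.0f}")

# Every pool's mean share lands near the $6,000 expected loss
# (0.02 x $300,000), but the standard deviation falls from roughly
# $13,000 with 10 members to about $420 with 10,000: the law of large
# numbers doing the work of insurance.
```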

In public solutions to catastrophe problems, getting more people in the pool could be especially useful in expanding coverage. For example, the National Flood Insurance Program brings many homeowners across the country into a pool, but it also excludes some, such as those who suffer damage from wind during a hurricane. In contrast, the proposed INSURE Act, introduced in the last Congress, would effectively put the entire nation in a pool to cover a variety of catastrophic risks, including flood, wildfire, earthquake and others.

Still, just because you’re in the same pool as someone else doesn’t mean you’ll be treated the same – people with the same insurance can be charged different premiums and receive different amounts of coverage.

That leads to the third question: How are policyholders and their risks classified?

If insurers treated everybody exactly the same, they would quickly go out of business. That’s why they analyze huge amounts of information about past losses, current conditions and future predictions, trying to determine the risks posed by each member. This work is done by actuaries and underwriters, but it’s not just a matter of math: Insurers classify policyholders in ways that reflect the goals and values of the insurance, which typically include balancing widespread availability, broad coverage and affordable pricing, and the social benefits the insurance generates.

One view of this process is that more precise risk classification and pricing are good. Because insurance involves risk transfer, the more accurately risks can be calculated and priced, the better the process works.
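
A deliberately simple two-class example, with invented figures, shows what finer classification does to prices. An actuarially fair premium is the expected annual loss, usually grossed up by a loading factor for expenses; pooling blends the classes into one price, while classification pushes each group’s premium toward its own expected loss.

```python
# Toy premium pricing with invented figures: two risk classes, priced
# either as one blended pool or separately by class. The loading factor
# and every other number here are hypothetical, not market rates.

HOME_VALUE = 300_000
LOADING = 1.25  # premium = expected annual loss x loading

classes = {
    # name: (number of homes, annual probability of a total loss)
    "low-risk inland": (9_000, 0.002),
    "high-risk wildland": (1_000, 0.020),
}

# One pool: everyone pays the blended expected loss.
expected_total = sum(n * p * HOME_VALUE for n, p in classes.values())
homes = sum(n for n, _ in classes.values())
print(f"pooled premium for all: ${LOADING * expected_total / homes:,.0f}")

# Classified: each class pays its own expected loss.
for name, (n, p) in classes.items():
    print(f"{name}: ${LOADING * p * HOME_VALUE:,.0f}")

# Output:
#   pooled premium for all: $1,425
#   low-risk inland: $750
#   high-risk wildland: $7,500
```

In the pooled version, the low-risk majority quietly subsidizes the high-risk minority; classifying removes the subsidy but makes the riskiest homes very expensive, or impossible, to insure.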

But there’s a deeper problem, which has to do with values. Sometimes accuracy in underwriting can conflict with larger social goals. With catastrophes in particular, broad coverage may be a top priority, since many people believe the state has a responsibility to protect its people. Moreover, protecting people’s investments in their homes is important, and suddenly raising the premiums of homeowners at high risk would threaten their investments. Disasters also cause communal responses – many unaffected Americans donate to the Red Cross and other nonprofits to support victims – and a strict focus on accuracy in underwriting could undermine that sense of community.

As floods, storms, wildfires and other catastrophes become increasingly common, the availability and affordability of property insurance have become a high-profile political issue. Politics involves choices. Asking better questions will help politicians – and the rest of us – make better choices.

Jay Feinman, Distinguished Professor of Law Emeritus, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Investments and regulation for vaccines, broadband, microchips and AI

theconversation.com – Mark Zachary Taylor, Associate Professor of Public Policy, Georgia Institute of Technology – 2025-01-15 07:46:00

Massive support for U.S. computer chip manufacturing will be part of Joe Biden’s tech legacy.

AP Photo/Jacquelyn Martin

Mark Zachary Taylor, Georgia Institute of Technology

In evaluating the outgoing Biden administration, much news has focused on inflation, immigration or Hunter Biden’s laptop. But as an expert on national competitiveness in science and technology, I have a somewhat different emphasis. My research shows that U.S. prosperity and security depend heavily on the country’s ability to produce cutting-edge science and tech.

So, how did the Biden administration perform along these lines?

Advancing pandemic science and tech

President Joe Biden’s immediate challenge after inauguration was to end the COVID-19 pandemic and then shift the economy back to normal operations.

First, he threw the weight of his administration behind vaccine production and distribution. Thanks to President Donald Trump’s Operation Warp Speed, inoculations had begun in mid-December 2020. But there had been no national rollout, and no plans existed for one. When Biden took office, only about 5% of Americans had been vaccinated.

Biden set an example by getting his own COVID-19 vaccinations.
Joshua Roberts via Getty Images

The Biden administration collaborated with private retail chains to build up cold storage and distribution capacity. To ensure adequate vaccine supply, Biden worked to support the major pharmaceutical manufacturers. And throughout, Biden conducted a public relations campaign to inform, educate and motivate Americans to get vaccinated.

Within the first 10 weeks of Biden’s presidency, one-third of the U.S. population had received at least one dose, half by the end of May, and over 70% by year’s end. And as Americans got vaccinated, travel bans were lifted, schools came back into session, and business gradually returned to normal.

A later study found that Biden’s vaccination program prevented more than 3.2 million American deaths and 18.5 million hospitalizations, and saved US$1.15 trillion in medical costs and lost economic output.

In the wake of the economic distress caused by the COVID-19 pandemic, Biden signed two bills with direct and widespread impacts on science and technology. Previous administrations had promised infrastructure investments, but Biden delivered. The Infrastructure Investment and Jobs Act, passed with bipartisan support in late 2021, provided $1.2 trillion for infrastructure of all types.

Rather than just rebuilding, the act prioritized technological upgrades: clean water, clean energy, rural high-speed internet, modernization of public transit and airports, and electric grid reliability.

Clean energy technologies, including solar panels, got a boost from the Inflation Reduction Act.
David Becker/The Washington Post via Getty Images

In August 2022, Biden signed the Inflation Reduction Act, totaling $739 billion in tax credits and direct expenditures. This was the largest climate change legislation in U.S. history. It implemented a vast panoply of subsidies and incentives to develop and distribute the science and tech necessary for clean and renewable energy, environmental conservation and addressing climate change.

Science and tech marquees and sleepers

Some Biden administration science and technology achievements have been fairly obvious. For example, Biden successfully pushed for increased federal research and development funding. Federal R&D dollars jumped by 25% from 2021 to 2024. Recipients included the National Science Foundation, Department of Energy, NASA and the Department of Defense. In addition, Biden oversaw investment in emerging technologies, such as AI, and their responsible governance.

Biden also retained or raised Trump’s tariffs and continued his predecessor’s skepticism of new free-trade agreements, thereby cementing a protectionist turn in American trade policy. Biden’s own addition was protectionist industrial policy – subsidies for domestic manufacturing and innovation, as well as “buy-American” mandates.

Other accomplishments have been more under the radar. For example, within the National Science Foundation, Biden created a Directorate for Technology, Innovation and Partnerships to improve U.S. economic competitiveness. Its tasks are to speed the development of breakthrough technologies, to accelerate their transition into the marketplace, and to reskill and upskill American workers into high-quality jobs with better wages.

Biden encouraged companies to manufacture new inventions in the United States.
AP Photo/Susan Walsh

Biden implemented policies aimed at strengthening and improving federal scientific integrity to help citizens feel they can trust federally funded science and its use. He also advanced new measures to improve research security, aimed at keeping federally funded research from being improperly obtained by foreign entities.

The CHIPS & Science Act

The jewel in the crown of Biden’s science and tech agenda was the bipartisan Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act, meant to strengthen U.S. manufacturing capabilities in advanced semiconductor chips. It has awarded about $40 billion to American chip producers, prompting an additional $450 billion in private investment in over 90 new manufacturing projects across 28 states.

Directed at everything from advanced packaging to memory chips, the CHIPS Act’s subsidies have reduced the private costs of domestic semiconductor production. CHIPS also pushes for these new manufacturing jobs to go to American workers at good pay. Whereas the U.S. manufactured few of the most advanced chips just two years ago, the industry expects the United States to possess 28% of global capacity by 2032.

Less well known are the “science” parts of the CHIPS Act. For example, it invested half a billion dollars in dozens of regional innovation and technology hubs across the country. These hubs focus on a broad range of strategic sectors, including critical materials, sustainable polymers, precision medicine and medical devices. Over 30 tech hubs have already been designated, such as the Elevate Quantum Tech Hub in Denver and the Wisconsin Biohealth Tech Hub.

Biden tours a semiconductor manufacturer in North Carolina in 2023.
AP Photo/Carolyn Kaster

The CHIPS Act also aims to broaden participation in science. It does so by improving the tracking and funding of research and STEM education to hitherto underrepresented Americans – by district, occupation, ethnicity, gender, institution and socioeconomic background. It also attempts to extend the impact of federally funded research to tackle global challenges, such as supply chain disruptions, resource waste and energy security.

Missed opportunities and future possibilities

Despite these achievements, the Biden administration has faced criticism on the science and tech front. Some critics allege that U.S. research security still does not properly defend American science and technology against theft or counterfeiting by rivals.

Others insist that federal R&D spending remains too low. In particular, they call for more investment in U.S. research infrastructure – such as up-to-date laboratories and data systems – and emerging technologies.

The administration’s government-centered approach to AI has also drawn criticism as stifling and wrong-headed.

Personally, I am agnostic on these issues, but they are legitimate concerns. In my opinion, science and technology investments take considerable time to pan out, so early judgments of Biden’s success or failure are probably premature.

Nevertheless, the next administration has its work cut out for it. International cooperation will likely be key. The most vexing global problems require science and technology advances that are beyond the ability of any single country. The challenge is for the United States to collaborate in ways that complement American competitiveness.

National priorities will likely include the development of productive and ethical AI that helps the U.S. to be more competitive, as well as a new quantum computing industry. Neuroscience and “healthspan” research also hold considerable promise for improving U.S. competitiveness while transforming Americans’ life satisfaction.

Keeping the whole American science and technology enterprise rigorous will require two elements from the federal government: more resources and a competitive environment. American greatness will depend on President-elect Trump’s ability to deliver them.

Mark Zachary Taylor, Associate Professor of Public Policy, Georgia Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.
