'Profound Risk': Urgent Bipartisan Call for Intervention Into Growing AI Technology, Biden Remains Crickets

(AP Photo/Lee Jin-man)

The term “artificial intelligence” (AI), first coined in 1955, today refers to advanced analysis and logic-based techniques, including machine (computer) learning, used to interpret events, support and automate decisions, and take actions — as opposed to the stuff of science fiction movies in which humanoids destroy mankind.


At least not yet.

As the rapid development of AI technology continues, lawmakers on both sides of the aisle are calling for congressional intervention in the controversial technology. In addition, as reported by Fox News, a letter signed by Elon Musk, Apple co-founder Steve Wozniak, and other tech giants cited “profound risks to society and humanity,” and called for a six-month pause on advanced AI development.

The letter also said:

Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

And the response from the Biden administration? (Cue the crickets.)

Since its release last year, Microsoft-backed OpenAI’s ChatGPT has prompted rivals to accelerate development of similar large language models and companies to integrate generative AI models into their products — in large part leading to warnings both in Congress and from the tech-expert signatories to the previously mentioned letter.

The letter warned that at this stage, no one “can understand, predict, or reliably control” the powerful new tools developed in AI labs. Tech experts cited “risks of propaganda and lies spread through AI-generated articles that look real,” as well as the long-held belief — which we’re already seeing come to fruition from complex assembly lines to fast-food restaurants — in the ability of AI programs to outperform workers and make jobs obsolete.


Despite the chirping of crickets emanating from the Biden White House, congressional Republicans and Democrats appear to stand on common ground — a feat in and of itself, these days — in calling for oversight of the rapidly developing technology and its potential impact on the future.

Here’s the thing: Most proverbial everyday Americans don’t understand, much less fully grasp, AI and the threats it might very well pose — and therein lies the potential problem for society as a whole, to the benefit of those who might use it in nefarious ways.

Republican Sen. Mike Rounds (S.D.), leader of the Senate AI Caucus, told Fox News Digital on Wednesday:

I think what you have to do is, to identify what is not allowed in terms of ethics and illegal activities, whether it is AI or not — you impose on AI activities the same level of ethics and privacy that you do for other competencies today.

Across the aisle, Michigan Democrat Sen. Gary Peters said the Senate Homeland Security and Government Affairs Committee, which he chairs, recently held a hearing on the “pros and cons” of AI technology.

I intend to have a series of hearings in Homeland Security and Government Affairs taking up AI and what we should be thinking about.

On the House side, Rep. Ken Buck (R-Colo.), a leader in the efforts to crack down on Big Tech, also urged Congress to intervene, telling Fox:


With the emergence of AI comes both opportunity and challenges. We have seen the impact and consequences of a decade of inaction on Big Tech. Congress cannot afford to be caught sleeping at the wheel again.

AI has great promise but left unscrutinized could be used to spread propaganda, dangerously restructure our economy, and increase the size of current Big Tech monopolies.

Finally, Sen. Michael Bennet (D-Colo.) sent a letter to tech company leaders last week calling for them to consider the safety of children when rolling out AI systems such as chatbots, suggesting that an agency could be created to regulate the relatively restriction-free AI industry “in the long term.” For now, however, Bennet said these companies should police themselves. Uh-huh, I’m sure that’ll happen.

Bennet also urged Congress to step in:

I think we do have a role to play. In the long run, I think what we could do is set up, you know, an agency here. They can negotiate on behalf of the American people, so we can actually have a negotiation about privacy… In the near term, I think it’s going to be important for tech to police itself.

So What About that ‘Profound Risk’?

According to Cyber Insights 2023, one of the most visible areas of malicious AI usage likely to evolve in 2023 is the criminal use of deepfakes — “artificial media produced using deep learning techniques” that “replace features on one image with those of another.” The term itself is a portmanteau of “deep learning” and “fake.”


While deepfakes have been around since the 1990s, they have become increasingly realistic and thus more widely used.

Here’s more, via Cyber Insights:

The pace of artificial intelligence (AI) adoption is increasing throughout industry and society. This is because governments, civil organizations and industry all recognize greater efficiency and lower costs available from the use of AI-generated automation. The process is irreversible.

What is still unknown is the degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool for beneficial improvement. That day is coming and will begin to emerge from 2023.


As the use of AI grows, so the nature of its purpose changes. Originally, it was primarily used in business to detect changes; that is, things that had already happened. In the future, it will be used to predict what is likely to happen and these predictions will often be focused on people (staff and customers).

Solving the long-known weaknesses in AI will become more important. Bias in AI can lead to wrong decisions, while failures in learning can lead to no decisions. Since the targets of such AI will be people, the need for AI to be complete and unbiased becomes imperative.

That’s enough of “the weeds.”

The Bottom Line

It’s clear that the potential risk of AI in the wrong hands is growing. As a limited-government guy, I’m not fond of government over-regulation, much less intervention in the private sector, and I’m not yet sure if I’d support it in response to the development of AI technology.


Hell, given what we’ve seen from the U.S. Intelligence Community over the last several years, it’s anyone’s guess where the Deep State’s interests lie regarding artificial intelligence.

All of the above said, I’m still a fan of “Trust but verify,” and I always will be. So should you.

What are your thoughts on the issue, RedStaters?

The opinions expressed by contributors are their own and do not necessarily represent the views of RedState.com.

