Congress should think before it regulates AI

by Marco Rubio

“You need to do legislation and learn at the same time.”

This recent quip from Sen. Richard Blumenthal, Connecticut Democrat, neatly encapsulates how some in Congress are approaching artificial intelligence. To prevent next-generation computer programs from wreaking havoc on American society, this faction wants to enact comprehensive regulation at “lightspeed.”

This approach is understandable insofar as it recognizes the awe-inspiring power of AI and the duty of policymakers to mitigate abuse of that power. But it also bears the imprint of a classic error in left-liberal thinking: the idea that action is always better than inaction, even when we have no idea what we’re doing.

The political philosopher J. Budziszewski calls this the “fallacy of desperate gestures.” I call it a recipe for disaster.

Let’s be clear: The federal government has a legitimate and necessary role in regulating emerging technologies, and AI is no exception. If the last few years have taught us anything, it’s that the interests of private entities and the interest of the nation do not always align.

Technology companies can and will pursue profits through AI developments, but it’s up to Congress to ensure that those developments don’t come at America’s expense. No one else will do that job for us.

Let’s also be clear, however, that Congress, which Mr. Blumenthal rightly characterizes as a body that “operates at the speed of molasses in sub-freezing weather,” is in no position to comprehensively regulate AI.

“There are some senators that still can’t spell AI,” Sen. Mark Warner joked earlier this year. Indeed, few lawmakers had given the topic much thought before ChatGPT made headlines. Moreover, the technology is changing so rapidly that computer scientists can barely keep up with it, much less politicians.

If we ignore these factors and attempt comprehensive regulation anyway, one of three things is bound to happen. The first is that, by freak accident or divine providence, the regulation will be successful, and America will reap the benefits of the AI revolution without unduly suffering its downsides. Unfortunately, this is about as likely as Vladimir Putin acknowledging he was wrong to invade Ukraine.

The second, more likely possibility is that Congress will use the pretense of regulation to protect and empower special interests. With this in mind, Silicon Valley’s vocal support for AI regulation is cause for legitimate concern.

We should not be surprised that Mark Zuckerberg and Elon Musk want Congress to play referee for them, because Meta and X are among the companies best positioned to navigate and benefit from regulation. But we should be suspicious of any attempt by policymakers to delegate legislative decisions to “industry experts” with conflicting interests.

The third possible outcome of comprehensively regulating AI, not mutually exclusive with the second, is that Congress will be too heavy-handed, stifle domestic innovation, and hand the technological edge to our adversaries.

This would be the worst outcome of all, because AI has the capacity not just to energize the economy, but to revolutionize national security, and the race is on between the United States, China, Russia, and others to gain the upper hand.

Imagine a world in which the People’s Liberation Army is months or even years ahead of the Pentagon in the realm of AI. We would be at the mercy of next-generation influence campaigns, market manipulation, and election interference. And in the case of conflict, our men and women in uniform would be at a severe tactical disadvantage.

I don’t suggest this to frighten people, but to clarify how much is at stake and how much hasty legislation could cost us.

At the risk of repetition, this is not an argument against regulation wholesale. AI is like any technological innovation in history: The extent to which it does good rather than harm depends on the boundaries within which society allows it to develop.

But at this early stage, we would be foolish to think that we are anywhere close to having the last word on the issue, or that any such word would be to America’s benefit. Simply put, we have to know what game we’re playing before we can write its rules in our favor.

Some commonsense next steps present themselves, such as prohibiting technology transfers and joint ventures with foreign adversaries, like the Chinese Communist Party, or perhaps mandating disclosures of AI-generated content.

Beyond that, though, we should leave “lightspeed” to the computer scientists and work as the Founders intended — carefully and methodically. It may not be efficient, but it requires no pretense, and it keeps drastic unintended consequences at bay.

• Marco Rubio is an American politician and lawyer serving as the senior U.S. senator from Florida, a seat he has held since 2011.
