AI Regulation Has Plenty of Critics. Here’s One.


This article is from the free weekly Barron’s Tech email newsletter. Sign up here to get it delivered directly to your inbox.

Big Debate. Hi everyone. The regulation of artificial intelligence is once again sparking heated discussion among regulators, lawmakers and industry insiders. The future of AI could be at stake depending on the debate’s outcome.

On Monday, the Biden administration issued an executive order on artificial intelligence, invoking the Defense Production Act. It requires companies developing the most powerful AI systems to notify the government and share safety test results. It marks the government’s first concrete step toward mandating reporting requirements for AI development. Meanwhile, leaders from around the world are meeting in Europe this week, seeking some kind of consensus on global AI regulation.

It’s just the beginning. Earlier this year, OpenAI CEO Sam Altman testified before Congress and called for a new federal agency to mandate licensing and enforce safety standards for advanced AI. The start-up’s principal partner, Microsoft (ticker: MSFT), has also supported the idea of a licensing requirement.

But that plan worries some leading AI computer scientists. Google Brain co-founder Andrew Ng and Meta Platforms (META) chief AI scientist Yann LeCun, one of the so-called godfathers of AI, say that burdensome regulations could concentrate power among a handful of large technology companies and well-funded start-ups, while impeding the development of open-source AI models.

Critics of regulation often say that the government’s best intentions can ultimately preserve the status quo, helping incumbents while hurting upstarts. That’s a particular worry in the venture capital community.

Barron’s Tech asked Bill Gurley, a prominent venture capitalist, for his thoughts on the topic. Gurley is a general partner at Benchmark and was an early investor in Zillow and Uber. These days, he’s spending even more of his time thinking about big picture issues, after stepping back from active investing for the firm in 2020.

As a strong proponent of open-source AI, Gurley worries that large incumbent players could use regulation—and the power of lobbyists—to squash future innovation.

Here’s an edited version of our conversation:

Barron’s: From your decades of experience, what drives technology regulatory policy in government?

Gurley: First, I want to make it clear that I’m not anti-regulation. You need rules and enforcement; otherwise you have chaos. But what I’ve seen in all my years is that, many times, the incumbents that sought regulation had such a hand in creating it that they tilted the scales in their own favor.

There’s a Morgan Stanley report that studied five large pieces of regulatory work and the stock performance of the incumbents. It found that the moment people fear a regulation is going to hurt the incumbents is a wonderful buying opportunity.

That makes me especially skeptical when the incumbents are obviously active in pursuing regulation, as in AI today.

What do you think about OpenAI’s and Microsoft’s call for a federal licensing requirement for advanced AI?

Obviously, they stated out loud that they want regulation. The level of [lobbying] effort is unprecedented. The only thing that comes close is the effort by SBF [FTX’s Sam Bankman-Fried], who put together super PACs and tried to influence regulation.

Someone asked me on Twitter/X: What do you recommend? How about a process that doesn’t put companies that just raised billions of dollars in the front seat? It would be better if they weren’t involved at all.

There are plenty of experts, academics, whatever, who aren’t conflicted by economic interest. Go build your regulatory plan with those people. Don’t do it with the companies that are begging for regulation.

What’s your reaction to the Biden administration executive order on AI this week?

My first reaction is: Why are we using an executive order? I’m anti-executive order, on both sides of the aisle, for everything. Is it really an emergency? How many people have gone extinct from a large language model so far? Is something going to happen in the next 60 days before Congress could act? And what gives you confidence that these solutions are the ones that are going to solve the problem? We have almost no operating history.

What do you think about the doomsayers who talk about the existential risk to humanity from AI as a reason to regulate it?

An AI model is a vector map of text. It has emergent properties, but it’s not hyperintelligent. But if you think this is an existential threat, why are the people who are the biggest doomsayers the ones raising billions and scraping some of those billions into their pockets? If they’re that scared, they should shut their companies down. Stop doing it.

It’s like saying, “I’m playing with this chemical that could explode. Watch me do it. Please help me, regulate me.” If you’re so scared, stop. That would cause me to say, okay, maybe they got it right and I got it wrong.

I think one reason politicians are concerned is that large language models could become a super baby, birthed from a search engine and Wikipedia, that can answer questions, manipulate truth, and influence voting decisions. But if you believe that, wouldn’t you be more comfortable with an open-source model, where academics can see all the code, what’s happening, and what data sources are in the thing, than with a proprietary system where you have no visibility?

You’ve been an outspoken proponent of open-source AI, where individuals contribute and no single company owns the model. Why are you worried it will be regulated away?

The reason I worry about open source the most is that I think it’s one of the most amazing gifts to society the technology industry has ever created. If an idea that lifts prosperity for everybody can be propagated for free, that’s amazing. Open source enables hyper-competition. It’s great for consumers: you end up with lower and lower prices and more innovation.

What’s a bad outcome if the incumbents can get the government to write the regulations they want?

The biggest negative is if they succeed in making open-source AI models illegal. That would be horrific; it would profoundly damage innovation in AI and force everyone to buy from the big companies.

It would also set a horrible precedent, allowing other incumbents, in the name of security, to start targeting open source in other markets and making it illegal.

Thanks for your time, Bill.


Write to Tae Kim at [email protected] or follow him on X at @firstadopter.


