The Real Economic Problem Of AI Isn’t Tech But People

By News Room

With all the discussion and coverage of artificial intelligence, one might think the data, the analysis, and the concerns were settled and available to everyone. Instead, the conclusions contradict one another. AI will usher in an era of prosperity and freedom for all. Or it will destroy humanity — or at least make the wealthy even wealthier while putting hundreds of millions out of work. What the claims share is that they are absolute, like this opening to a Wired article about OpenAI, the company behind ChatGPT:

“What OpenAI Really Wants: The young company sent shock waves around the world when it released ChatGPT. But that was just the start. The ultimate goal: Change everything. Yes. Everything.”

Emphasis in the original. The good, the bad, the extremely stated. Last year, Ilya Sutskever, chief scientist of OpenAI, wrote on Twitter/X, “it may be that today’s large neural networks are slightly conscious.” And in a September interview with Time, he said, “The upshot is, eventually AI systems will become very, very, very capable and powerful. We will not be able to understand them. They’ll be much smarter than us. By that time it is absolutely critical that the imprinting is very strong, so they feel toward us the way we feel toward our babies.”

There is a lot going on under the surface. Nirit Weiss-Blatt, a communications researcher who focuses on discussions of technology, has referred to “‘AGI utopia vs. potential apocalypse’ ideology” and how it can be “traumatizing.”

Any set of choices framed as absolute and polar can be traumatizing. Fight? Flight? Emotional exhaustion, more like it, because the emergency never ends. Instead, it is constantly restated and emphasized, drummed into people’s heads.

But there is another disturbing aspect, one that feeds social problems like income and wealth inequality. The talk about AI, on the part of those who create it or expect to make money from it, proceeds in a manipulative and misdirecting way.

The danger is in the framing. Everything becomes a matter of what software will decide to do. It is “AI” (in reality, an incredibly complex combination of many kinds of programs) that will become conscious, or, according to Sutskever, may already be. AI that will take control. AI that will provide massive benefits for all humanity or wipe it away, like a real-life version of the film The Matrix.

That is the biggest misconception, or maybe lie, in the discussions taking place. If you thought your work could result in the demise of humankind, would you keep doing it? Unless you had a particularly perverse psychology, you wouldn’t. Could you restrict how a technology is used when it is built from components that have long been controlled? Yes, and I say that knowing something about the technology and how it differs from more familiar predecessors.

The single biggest sleight of hand is the degree to which the people responsible frame discussions as though they have no power or responsibility. No agency. The software will or won’t do things. “Stop us,” executives and researchers say to governments, which in my experience means, “Create regulations with a safe harbor clause, so that by following a few steps we can do what we want and avoid legal responsibility.”

This reaches such an odd extreme that OpenAI tries to be invisible to outsiders, including journalists like Matthew Kupfer of The San Francisco Standard, who wrote an amusing piece about how flustered and panicked people at the company became when he found their office and walked in for an interview.

But the people with the most ability and power to regulate what they do — to consider whether they should enable potential mass unemployment for the gross profit of a minority of wealthy entities and persons — are the ones unreasonably pushing away responsibility because they don’t want the trouble or restrictions.

For a reasonably fair society to be possible, everyone must insist that others take on the responsibilities they have. Even if it means they can’t do everything they’d like or make as much money as they could.


