AI is nothing new. Seriously. It’s just been getting a lot of attention lately because there’s money to be made.
Remember Deep Blue? It defeated a world champion chess player back in 1997, over a quarter-century ago.
Remember Sophia? Back in 2017, this bucket of bolts made the talk show rounds and was granted Saudi Arabian citizenship. One of her/its creators, David Hanson, told America, “She is basically alive,” which asks the word “basically” to do quite a bit of heavy lifting.
The year before, Google’s AlphaGo system had beaten Lee Sedol, one of the world’s top Go players, a defeat he would later cite when he retired from the game. Very impressive artificial intelligence, but still nothing close to actual intelligence. It’s like if Van Gogh had retired because he felt he couldn’t compete with photography.
Back in the nineties, I was writing my own AI routines for games like Go. It’s really not as big a deal as you might think to make decent AI for tasks like this. The tricky part is writing code that makes better decisions than the people who wrote it. This is where learning algorithms come in. It’s basically brute-force trial and error with a memory of what worked and what didn’t (though again “basically” does some heavy lifting).
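If that sounds abstract, here’s the flavor of it as a toy Python sketch. Nothing here comes from an actual Go engine; the moves and their hidden win rates are invented for illustration, but the learn-by-keeping-score loop is the real idea:

```python
# A toy "trial and error with memory" learner. It can't see the payoff
# table below; it discovers the best move by playing, remembering what
# worked, and mostly repeating whatever has worked best so far.
import random

PAYOFFS = {"corner": 0.6, "edge": 0.3, "center": 0.8}  # hidden win rates (invented)

def play(move):
    """Simulate one game: 1 for a win, 0 for a loss."""
    return 1 if random.random() < PAYOFFS[move] else 0

wins = {m: 0 for m in PAYOFFS}   # the memory of what worked...
tries = {m: 0 for m in PAYOFFS}  # ...and of what was attempted

for _ in range(5000):
    if random.random() < 0.1:    # 10% of the time, explore at random
        move = random.choice(list(PAYOFFS))
    else:                        # otherwise exploit the best-scoring move so far
        move = max(PAYOFFS, key=lambda m: wins[m] / tries[m] if tries[m] else 0.0)
    tries[move] += 1
    wins[move] += play(move)

for m in PAYOFFS:
    rate = wins[m] / tries[m] if tries[m] else 0.0
    print(f"{m}: {wins[m]}/{tries[m]} wins ({rate:.0%})")
```

Run it and the learner piles nearly all of its tries onto the move with the best hidden payoff, without ever being told which one that was. Keep score, prefer what has worked: that’s the whole trick.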
In 2015 (before Sophia, mind you!), I wrote my own simple chatbot that learns to talk based on user interaction. That’s generative AI, and it ain’t magic. If you strip the commentary out of it, my little Python script is well under 100 lines of code. It’s a hell of a lot of fun to play with, but I’m under no illusion that it’s “sentient” or “conscious” or even “intelligent” in any humanlike sense of the word. And I kind of doubt that anyone intelligent enough to code an AI system would honestly fall into that kind of thinking either.
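To give you the flavor, here’s a sketch of one classic way to build exactly this kind of learn-from-the-user babbler: a word-chain (Markov) toy. It’s not my original script, but it shows how little machinery “generative” actually requires:

```python
# A minimal generative chatbot: remember which word followed which in
# everything the user types, then "talk" by walking those chains.
import random
from collections import defaultdict

chains = defaultdict(list)  # word -> every word observed right after it

def learn(sentence):
    """Record each adjacent word pair from the user's input."""
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        chains[current].append(following)

def babble(max_words=20):
    """Generate a reply by hopping from word to remembered follower."""
    if not chains:
        return "(say something first)"
    word = random.choice(list(chains))  # start from any known word
    reply = [word]
    while len(reply) < max_words and chains.get(word):
        word = random.choice(chains[word])  # pick a follower seen before
        reply.append(word)
    return " ".join(reply)

while True:
    line = input("you: ").strip()
    if not line:
        break
    learn(line)
    print("bot:", babble())
```

Type a few sentences at it and it starts spitting recombinations of your own words back at you. Generation by statistical association, with no understanding anywhere in sight.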
When you build enough complexity and adaptive power into a program like ChatGPT, it reaches a point where it starts to trick your animal brain into feeling like it’s somehow “alive,” but that says much more about our brains than it does about these systems. (People fall in love with inanimate objects all the time. We’re a weird species.) The tricky thing is that I can’t say categorically that they’re not alive, because the word, and indeed the whole concept, of “alive” is a human invention. Words mean different things to different people, and context is essential. Big Tech influencers know this, and they’re using it against you.
Take a moment to give Naomi Klein’s excellent article in The Guardian a read:
“What we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon…) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.”
Exactly! It's not about whether AI is useful and indeed “the future” (I'd say yes to both) or whether AI can possibly become sentient or conscious in any morally meaningful sense (I'd say no); it's about how these corporations are pulling off massive theft right now by pushing the idea that an AI system is just “learning” in the same way a human would. They want you to see AI as a person to frame the narrative in a way that maximizes their own profits. Like an AI system, a corporation is a machine, an artificial construction with no mind or soul or morality. A corporation is a money-making machine that we (tragically) decided to consider a “person” for legal purposes. Let’s not make the same mistake with AI systems.
I think it’s mostly a bad-faith argument on the part of AI evangelists that a generative AI program is actually doing the same thing a person does, and maybe even is a person in some bizarre posthuman sense. They know what they’re doing is a sort of distributed microplagiarism (and therefore illegal), but it helps their case if they say, “No, the AI isn't copying anything; it's just looking and learning and taking creative inspiration from things.”
The human mind is special. Humans have the freedom to learn things and be inspired by things just so long as we’re not straight-up copying without giving due credit. We make laws for ourselves, and somewhere along the way, we decided to grant ourselves this freedom. Software is not special. We don't need to allow it to “learn” from our work for free without giving due credit to its sources. We don’t owe software anything like that. And we certainly don't owe software companies the ability to ramp up profits even faster than they’ve already been doing. I think the industry will survive even if it has to pay creators for their role in all this.
Now more than ever, we need to keep in mind the distinction Philip K. Dick made between the android (literally “false human” or “artificial person”) and the authentic human. Sophia is an android, obviously, but so is the company that built her/it: Hanson Robotics. So are OpenAI, Microsoft, Alphabet, Apple, Amazon, etc. These constructions are manifestly not people, and if we continue to pretend that they are, we reduce real human beings to the level of the android.
But don't take my word for it... Here's somebody who knows much more about AI than I do giving a detailed rundown of the current state of AI and explaining why people are overreacting to the linguistic smoke and mirrors of ChatGPT: https://spectrum.ieee.org/amp/gpt-4-calm-down-2660261157