Re: Are you mad at AI or at society?
This article is in response to this one I read today. I’ll be quoting the bits I’m responding to directly, but you should read it in its entirety for yourself first.
Models are trained on content without the creators’ consent or any compensation — that’s the biggest problem with AI.
This is what most people online make noise about, but I don’t think it’s the biggest problem, and I question whether it is even a big problem. I’ll say more below about what I think the biggest problem actually is, but first I need to address an issue of terminology:
machine learning technologies ARE useful. I can’t believe people are arguing with that as part of the anti-AI sentiment.
This is a crucial misunderstanding of what current-day AI critics are talking about. “Machine learning” generally refers to mathematical modeling used to make predictions about a system; it is essentially advanced statistics aided by computers. Applications range from weather forecasting to playing video games. I haven’t met anyone who doubts the utility of these technologies. What others and I doubt the utility of are so-called “generative AI” technologies.
Generative models like LLMs and image generators are in some sense also doing advanced statistics and prediction, but the specific things being predicted differ in a meaningful way. When a weather model yields a forecast, it is making a prediction about the real world based on input data from the real world. A language model is not modeling the real world; it is modeling the way humans (and bots) have written about the real world and everything else. It is predicting the next word (or part of a word) in a sequence. Once one understands this fundamental principle of operation, the sorts of mistakes these models make when applied to real-world problems are no longer surprising. They do not understand the words they are saying. If they can be said to understand anything, it is the rules of grammar, the typical ways writing is structured, and the associations between tokens that often appear near one another. In a meaningful sense, generative models do not live in the same world as older, more practical machine learning models, or indeed the same world as ourselves. It is for this reason that I and many others doubt their utility.
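Next-word prediction can be illustrated with a toy model: count which word follows which in a corpus, then always predict the most frequent successor. This is a deliberately crude sketch, nothing like the internals of an actual LLM, but the objective is the same, and it makes the point plain: the model knows only word adjacency, not skies or seas.

```python
from collections import Counter, defaultdict

# Toy "language model": tally which word follows which in a tiny corpus,
# then predict the most frequent successor of a given word.
corpus = "the sky is blue the sea is blue the grass is green".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word`, or None if unseen.
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("is"))  # → "blue" (follows "is" twice, vs. "green" once)
```

The prediction is about text, not about the world: the model would keep answering “blue” even if every sky in the corpus’s world turned grey.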
As for people losing their jobs — that’s nothing new as well, and of course, it’s sad. It’s even sadder that we don’t live in a world where people welcome everything that makes their work easier and faster.
There’s a lot to be said about AI’s threat to human employment. CGP Grey made a very good video about the topic many years ago that’s worth considering. Some of its examples are dated, but the core argument is still sound.
talking about artists losing livelihood is even harder. I wholeheartedly believe that art is necessary both for the author and for the consumer, it’s an important part of human experience. But art as a career was always something happening despite the odds.
Painting survived photography, theater survived movies, books survived everything. At the same time, AI opens up new possibilities. Soon it’s going to be possible to create full-fledged movies with it. And you might dislike the idea of watching AI-generated movies, but consider this: anyone could make their own movie. Until now, that was a medium closed off unless you had serious money and connections. How is that limiting creativity?
I too am not exactly worried about the future of human creativity as a result of these models, but we are definitely looking at this in different ways.
Whether computer-generated movies will be any fun to watch is a matter of opinion, so I’ll leave that aside. I do, however, take issue with calling their creation creativity. Whether the machine generating an artwork is itself being creative is arguable, and I’m not particularly interested in the answer, but consider this: how is prompting an AI model to make you an artwork any different from asking a real human artist to make you one? Quality of the work and sentimentality aside, these are the same action on your part as the one giving the prompt.
But think about that. All you have done is present an idea for an artwork. You have not actually made any art yourself by doing so. If you prompted a human artist to make something for you and took credit for it, you would be called a plagiarist, yet for some reason people think it appropriate to take credit for the output of a machine that stands in exactly the same relationship to you. So-called “AI artists” have no business laying claim to creativity. Maybe one day these models will be good enough that you can feed them a script and they will produce a faithful rendition for people to watch. In that case, the author of the script could claim credit for creative screenwriting, but until then, I don’t consider your example to be one of creative work using AI.
But what about disinformation? Sure, AI can lie. But I’d argue there’s a high chance that it will still be better than the sources the average person gets information from.
I’m not sure I understand this one. Disinformation is not the same as misinformation. The concern about AI-generated disinformation is the ability for people to cheaply run disinformation campaigns to intentionally deceive masses of people to suit their own ends, probably political ones. These sorts of bots are definitely not going to be better than average news sources, flawed as they might be.
Ultimately, there’s a fundamental incentive for technologies and products to be useful.
To whom?
The cynical among us are expressing concerns that the tech oligarchs are willing to hemorrhage money for these AI projects in order to force acceptance and integration such that it becomes difficult to function in society without them, ultimately enabling the creation of a new techno-feudalist system.
Are you against AI itself, or against socialized research being turned into privatized wealth? Is it about people losing jobs without any chance to retrain? About digital products getting worse? Or do you really believe that AI has no good use?
Now I shall finally talk about what I think is the biggest problem with these new generative models.
As was mentioned in the article and the CGP Grey video, the fear of automation has long been dismissed with the argument that machines take the mundane and terrible jobs from humans and free us up for more fulfilling work. I ask you: what jobs are the new bots designed to take? Generative models threaten to take from humans the last places we hoped to retreat to from the prior wave of automation.
There is intrinsic joy in creativity and problem solving, and that cannot be taken from us, but to me, the ultimate joy in these is in the sharing of them with others. When I make art, I am pleased with my work, but I am not truly fulfilled until someone else has also appreciated it. When I write software, I take satisfaction in the solution of a problem, but that satisfaction is multiplied when it becomes useful to other people, too. But who will care about my art when anyone can have a similar or better experience at the push of a button? Why would anyone care about my solution when a bot could create a better one in a fraction of the time? These models threaten to take our humanity from us not by oppression but by isolation, in a world where we are already ever more isolated by other technology.
As for the utility of the technology, I have already explained how generative models are fundamentally disconnected from reality, and though newer models continue to improve, the returns appear to be diminishing. I am not yet convinced that this technology is capable of achieving intelligence. But even setting that aside, I implore you to ask yourself again to whom these technologies might ever be useful. Should all their flaws be overcome, what do you and I stand to gain from their employment?
I submit to you that the only ones who stand to gain are the parasites who created them, who, lacking creativity of their own, seek to sell us the story that we can be creative by a simple prompt of the machine, that we need not read the words our friends and loved ones wrote us (how inconvenient) but rather a substanceless summary to which we can reply with a message that we couldn’t be bothered to write ourselves either.
To answer the titular question: I am not mad at AI. I am mad at society for considering this to be a worthy enterprise in the first place, but there is still time, I hope, for society to redeem itself.