Not Inevitable

The only thing inevitable about so-called AI is that it’s creating a huge investment bubble that is going to crash. I don’t know if that’s going to bring down everything or just cause the typical problems that follow when investment bubbles burst, but there will be a crash.

Nothing grows forever. Sometimes folks say that the only thing that grows forever is cancer, but of course cancer that continues unchecked kills its host.

Use of so-called AI in a multitude of fields is not inevitable. That’s just marketing hype. The whole field is built on hype, starting with the name, because it is in no way intelligent.

What they’re calling AI is software. It’s been developed by several different processes, but the one that most of us are familiar with comes from large language models. That’s what’s behind the various chatbots.

And what that software does is predict words or patterns or images based on all the stolen material that’s been packed into it. It is incapable of thinking or analyzing, which is why it frequently makes up citations or books or other references – it is programmed to recognize what those things look like but since it doesn’t know what they mean or why they’re important, it imitates them.

Which is to say, even when doing what it’s programmed to do, it’s unreliable at best.

There are forms of so-called AI that are useful in the hands of people who know how to use them as effective tools. But they are not useful for many, maybe most, of the purposes for which they are being sold.

It is particularly not useful for writing or creating art because it has no capacity to think or bring anything new to the process.

I am moved to write all this because of the recent SFWA kerfuffle over whether any fiction produced with even a modicum of “AI” should be considered for awards. SFWA issued one set of rules that indicated that some uses might be OK for purposes of the Nebula awards, and then swiftly backtracked after a number of writers expressed their outrage on social media.

As part of all of this, a writer named Erin Underwood put an essay up on File 770 defending many uses of AI. Many other people have effectively criticized her arguments – Jason Sanford’s overview is an excellent one – so I won’t bother here even though I found her arguments appalling.

Rather I want to focus on the argument that we have to use this error-prone software because it’s part of the progress of tech and inevitable.

While I am in no way a tech expert, I have read extensively about this stuff, including such books as The AI Con by Emily Bender and Alex Hanna, Empire of AI by Karen Hao, and one of my favorite books of the last year, More Everything Forever by Adam Becker. I also follow the work of Cory Doctorow and Ed Zitron on the subject and was deeply affected by Ted Chiang’s essays about it in The New Yorker.

And perhaps the thing that has hit me the strongest is that the criticism of so-called AI is coming from people who love the possibilities of computers and digital technology in general. These are not the same people who grumbled as the internet took off or even those who objected to smart phones. They aren’t resistant to change, but they do know hype when they see it.

So-called AI is not a change in some fundamental technology in the same way that the personal computer or the internet was. This is a new version of software that’s useful for some things, but that’s all it is. The money behind it is the only real argument for inevitability.

The kind of people who are fighting against so-called AI are what tells me that it’s not inevitable. And if it’s not inevitable, it can be stopped. That requires resistance.

Writers and artists, along with their organizations and – one hopes – publishers and other entities in creative fields have been leading that resistance and they need to not only hold firm, but keep pushing.

One thing that happened as part of the political crisis in the United States in 2025 was that some law firms and universities and major media companies conceded to the grifter occupying our White House (or the part of it he hasn’t torn down yet). I suspect all of those institutions would tell you that they did it because what he was threatening them with was “inevitable” and they felt they had no choice.

Except – surprise – any number of other institutions have successfully resisted, some of them from much weaker positions than internationally renowned universities and wealthy law firms. For example, a number of hospitals and clinics that provide gender-affirming care have refused to turn over data on their patients and succeeded in court.

Turns out it’s not inevitable that the grifter is going to win.

If we continue to resist the incursion of so-called AI into our work, we have a good chance of succeeding, because the truth is that it cannot do what we do. But that means that organizations like SFWA have to take a firm stand, and that leading writers and publishers need to do the same.

If we refuse to go along with the hype, we can win.
