This week I had the joyful and inspiring experience of going in person to hear an interview with Andrea Hairston, who was finishing up a tour in support of her new book, Archangels of Funk.
At one point, she mentioned that she had started working on that book about 20 years back, but realized that she had to know more about her character – and specifically about her character’s ancestors – before she could write it.
So she wrote two other books – Redwood and Wildfire and Will Do Magic for Small Change – so that she could write Archangels of Funk. I should point out that both of those books are incredible works in and of themselves. Which is to say that she created art while building the framework for more art.
But what also hit me was how antithetical her process was to what the so-called “AI” chatbots (that is, the large language models, or LLMs) promise: frictionless writing. I mean, she did a lot of research and wrote two complete novels to figure out what she needed to know before she could write the latest book.
Talk about friction!
But that’s the point. Writing is so much more than putting words on a page in some semblance of the right order. That’s all the LLMs can do, and that’s not even taking into consideration the incredible inaccuracy of what they produce.
They can’t react to the words they’ve written and realize that they need to do more research, or even write another book or two, before finishing the one in progress. All of that involves thinking, and they can’t think.
Now I’m a science fiction writer, so I’m not going to say we won’t someday have actual artificial intelligence that can think, and possibly think in ways very different from human beings. But what we have now isn’t that.
I’m not even saying LLMs aren’t useful for some things, because obviously there are some serious uses. They can go through data and catch all the references to something, while humans will miss some of them. Related machine-learning tools are apparently useful in catching anomalies on X-rays that radiologists might miss, though what that means is that the radiologist should take a second look, not that we should replace radiologists with software.
The chatbots might even be useful for writing the useless memos that circulate in most offices, though if chatbots can do that, what’s the purpose of managers?
But they can’t write good fiction.
I’m working on a novel right now, and the more I write my way into it, the more I discover about it and the more questions I have. This is what makes writing hard.
I mean, I can come up with words all day. That’s easy. I can even come up with ideas all day – I have way more than I know what to do with.
But putting those words together so that they make sense and – with any luck – transcend the original idea: now that takes work.
Now I know why students of all ages want to use chatbots to write papers for school. It’s because they see writing as a chore, not to mention that all too many people think the purpose of school is to get credentialed, not to learn things.
But as Harvard writing professor Jane Rosenzweig points out in a recent newsletter, which also includes her piece in the Boston Globe:
I don’t assign summary and analysis because I need more summaries or analyses; I assign these projects because I want to help my students think through complex ideas and grapple with them. And I don’t ask my students to write papers because the world needs more student papers; I assign papers because I want my students to go through the process of figuring out what they think. The friction is the point.
You do the writing to learn things.
Now if you don’t care about learning, I guess getting a device to do the work for you is great (or at least, it would be great if the chatbots weren’t wildly inaccurate).
But it’s not going to turn you into a writer. The only thing that can do that is doing the writing your ownself.
You gotta have friction to get good fiction.