A Meander from “AI” to People and Back Again

The latest scandal to hit the science fiction community is the revelation that the people putting on WorldCon in Seattle are using ChatGPT to vet proposed panelists. Given that a large number of people who want to be on panels are published authors who are part of the class actions against the companies making these over-hyped LLM products, the amount of outrage was completely predictable.

A number of us also pointed out that the information produced by these programs is very often wrong: they make things up because they are basically word-prediction devices. As a person who is not famous and who has a common Anglo name, I shudder to think what so-called “AI” would produce about me.

But the biggest problem I see with “AI” – outside of the environmental costs, the error rate, and the use of materials taken without permission to create it – is that companies keep trying to sell it for things it doesn’t do well, instead of reserving it for the few things it actually can do. Of course, there’s not a lot of money to be made from those few things, especially when you factor in the costs.

We’re at the point in tech where new things are not going to change the product that much, no matter what the hype says. Exponential growth cannot last forever. If you don’t believe me, look up the grains of rice on a chessboard story.

I’ve been thinking a lot about all the ways people are trying to use “AI” – or even older forms of tech – to get rid of workers, and the more I think about it, the more disastrous it looks.

We don’t need more tech doing stuff; we need more people doing stuff.

It’s not just “AI”. Just try to call your bank or your doctor, or, god help us all, Social Security or the IRS (now made worse by the Dodgy Minions). When we have issues, we need people – real people, who understand what we’re calling about and can solve our problems.

The “chat” feature on a website doesn’t cut it, and an “AI”-enabled chat feature is probably worse, in that it might well make up an answer instead of simply admitting it doesn’t know what to do.

I’ve been thinking about health care in particular. I’ve heard a lot of talk about how these systems can read radiology films better than radiologists, in that they don’t get bored or distracted, so they can flag anything that looks out of the ordinary relative to the kinds of films they were trained on.

But of course, what they’re really doing is flagging the problems for a radiologist to look at. They don’t replace the expert; they help them do their job. It’s probably useful, but it isn’t going to make radiology any cheaper, because you still need the person to look carefully at the films.

I don’t believe for a moment that LLMs will be better at diagnosing patients.