Becoming an “AI Vegan”

Arwa Mahdawi introduced me to a new term, or maybe a new concept: “AI vegans,” which is to say, people whose attitude and actions in relation to so-called “AI” parallel the way vegans deal with animal products.

Mahdawi cited an article by David Joyner, a professor who directs online education at Georgia Tech’s College of Computing – someone who’s clearly not a tech-phobe.

I like this concept quite a lot. While I am not vegan, I respect the vegan approach, and often think vegans are likely right on all points, especially with respect to the effect on the environment.

The criticisms by the “AI vegans” go like this:

  • “AI” is immoral and unethical – particularly because the materials used to develop it were stolen from people (including me and others here in the Treehouse).
  • Using “AI” is bad for your health – recent studies have shown harm to the critical thinking faculties of those who use a lot of chatbots to do their intellectual work for them.
  • The “AI” industry is very destructive to the environment, requiring massive amounts of water and electricity – which includes building new coal and other fossil-fuel power plants despite the fact that we’re at the tipping point for renewables.

Those ideas directly parallel the vegan attitude toward animal products. I’d add a fourth one: Most “AI” products being sold don’t work very well. This is particularly true of writing programs, but also true of many of the ones aimed at employers who want to fire their workers.

I do my damnedest to avoid any use of “AI.” I try to disable it in writing programs – it gets in my way – and I’ve reached the point where I assume any feel-good story on social media is “AI” generated. I suspect it’s in the spell-check programs now, because they don’t work as well as they used to. I’m sure it’s in the grammar programs, but since I don’t use that crap – my command of grammar is certainly better than any fucking programmer’s, much less “AI’s” – I don’t worry about those.

I hope the “AI vegan” movement catches on, because this slop is out of control.

We keep being told that some chatbot can pass the multiple guess part of the bar exam with flying colors. As someone who has taken that exam, I don’t find that difficult to believe. A bot that has incorporated previous tests and other prep materials for that exam or any similar exam can probably do a great job on it, precisely because the bots can’t think and are only making the statistical best guess in any situation.

I mean, the biggest problem with multiple guess exams – yeah, I know they’re technically called multiple choice and probably have some fancy new name these days that I’m not familiar with since thank all that’s holy I haven’t had to take one in years, but you know what I mean by multiple guess – is that they don’t reward thinking.

Despite the fact that every time I’m faced with four choices for an answer I always want to go with a fifth one, I used to be pretty good at those tests. I had a gut understanding of them. I do not think this is one of my best traits, though it was useful.

The thing is, the same chatbots that do great on those tests are the ones that make up citations for legal briefs. They know the form and the way people word arguments, but they don’t grasp the idea that the arguments have to be based on existing legal precedent, much less what that precedent actually says.

The best “AI” products are the ones that help people do the tedious parts of their job, so long as they can be double-checked. The ones intended to replace people are pretty useless. But of course, it’s the ones intended to replace people that are being sold everywhere, because that’s where the money would be.

Cory Doctorow distinguishes between centaurs (people assisted by machines) and reverse-centaurs (people forced to assist machines). Centaurs find using “AI” for tedious tasks useful; reverse-centaurs are miserable in their jobs, especially if their employers are firing people and piling more work on them based on the myth that the “AI” can do it. He’s writing a short book about that, out next year.

Like Doctorow, I don’t have any objections to “AI” as software used to help people do their work better, but it’s pretty clear to me that it can’t replace people.

Here’s something else that scared me about this tech recently: Paul Krugman says that the surge in investment in “AI” is why the markets haven’t fallen despite the disastrous economic “policy” from the current regime.

If you read Ed Zitron’s many rants about what’s wrong with the finances behind “AI”, you probably expect a crash at some point. Given what Krugman has to say, I can see that crash bleeding over into the lives of those of us who have nothing to do with “AI.”

So I’m an “AI vegan” despite being someone who has always thought digital tech was great. I love having the internet in my pocket, love being able to keep in touch with people, love being able to read reports from all over, love blogging (even if it’s now old school). I got my first computer in 1983 and I’ve been online since the mid-90s.

But I hate the enshittification and all the damage done to tech by capitalism, of which the overblown promotion of “AI” is just the most extreme example. I want to see it reined in.

I hope more people join the “AI vegan” movement.

3 thoughts on “Becoming an ‘AI Vegan’”

  1. Sign me up, too. AI might be a useful tool. It’s not a workforce, and it certainly is not a creative workforce.

    One thing I realized about multiple guess questions is that the format is great for people who grew up in a dysfunctional household where you had to sort out what might be the true answer from what might be the expected or preferred answer. When I was briefly applying to business schools (it didn’t take) I aced the GMATs, not least because I could tell from the structure of the questions what the test-makers thought I should answer… common sense and truth be damned.

    1. Maybe I picked up my multiple guess test skills from the disconnect between what I learned from my parents — journalists with a complex take on the world — and what I learned in school and the larger society. It’s certainly related to figuring out what the answer is supposed to be, rather than what the answer actually is. Probably a valuable survival skill, now that I think about it.
