Walking and “AI”

These days my morning book is Rebecca Solnit’s Wanderlust: A History of Walking. It’s a particularly appropriate book for me, since I do a lot of walking.

My neighbors frequently comment on my walking, though most of what I do is walk around the neighborhood or to some stores. It’s not exciting most of the time, though I do see little things in people’s yards – there’s someone on Emerald making miniature houses and putting them at the edge of their yard. They even have addresses.

My walking is a combination of exercise and mind-clearing and errand-running, but it is an important part of my life. There are days when getting my steps in is my biggest accomplishment.

Walking and reading about walking demonstrate one of the biggest flaws in the large language models and other machine learning software that’s being marketed as “AI”: it can’t walk. All it “knows” about walking comes from ingesting books like Solnit’s, which means it can probably associate walking with pilgrimages and Wordsworth and desert hikes.

But it has no idea what any of that actually means. I can read about Solnit joining a pilgrimage in northern New Mexico and think about that region – which I’ve visited – and what it feels like if you don’t have the right shoes for a hike.

And I can also follow her sidetrack about the man who has painted the stations of the cross on his old Cadillac and go off on a tangent in my mind about low riders and guys with well-kept old cars who play booming music and the boys I went to high school with who souped up ‘57 Chevys and cruised around the drive-in.

In one section discussing promenades in Mexico and other Spanish-influenced places, she connects the walking version with car cruising, because walking begets other things, even if people like me do a lot of walking because we are so damn tired of car culture.

“AI” gets none of that, because it can’t walk and it can’t smell and it can’t see and it can’t hear and it can’t touch and actually it can’t even read; it just sorts words and images.

It may be useful for some things – though not enough things to be worth all the money being thrown at it – but it is never going to be an intelligence.

I worry a lot about what’s going on with the “AI” industry – the massive energy use, the billions invested in companies that earn a fraction of that, the shoving of it into places where it screws things up, the use of it to replace people in jobs even though it can’t do what people do.

I’m very worried about it making our already bureaucracy-heavy society even less responsive. It has the potential to undermine effective government work, since apparently one purpose of the dodgy minions was to put “AI” in places where we used to have people who understood their jobs. And it’s certainly going to be worse with corporations, which weren’t trying to help us in the first place.

I mean, I got an email from Substack telling me about how many publishers on that platform are using some form of “AI.” Apparently some are using it for transcripts of videos, but others are using it for “research.” I commented that they should require their users to state their AI use so that we know to avoid those writers.

The only thing I can say in Substack’s favor – and I only use that platform because some people whose work I really need to read are on there, not because I’m a fan of the company or its model – is that they at least put together some data about it. I don’t have any illusion that they will act further.

But I don’t worry about this “AI” becoming Skynet, because the people who are building – and hyping – it do not understand intelligence. Some of them understand computer programming. Most of them understand money.

They seem to think that all it takes to create an intelligent being is computer programming. They know nothing about biology or neuroscience or how the human brain works.

To be fair, even the neuroscientists are still trying to figure out how the brain works, but they’re pretty sure it doesn’t work like a computer.

The doomer fear of “AI” taking over the world and building all those paperclips is just another way of boosting that nonsense. It further distracts everyone from the actual harm it is doing right now. Emily Bender and Alex Hanna explain that in depth in The AI Con.

Ed Zitron thinks the whole “AI” industry is going to crash, and that it might bring a lot of good tech stuff down with it, because keeping it going takes far more money than the companies are making from it. He may be right, but it may not crash soon enough to stop people from destroying many currently useful systems with it.

Me, I just hope that eventually this “AI” will be relegated to the few things it does well and that the crash of the nonsense won’t do too much harm.

This blog post was completely written by me.
2 thoughts on “Walking and “AI””

  1. Mostly I’m worried about the damage AI will do before we discover that it can’t do the things we want it to do. I grew up with a lot of SF involving AI, and I keep remembering the final paragraph or so from Frederik Pohl’s Gateway, in which Our Hero, having survived harrowing adventures which have left him wealthy but plagued with guilt, is talking to his therapist – which is an AI. And Rob (Our Hero) says something like “You call this living?” And the AI therapist responds, “Yes, I do. And I envy it very much.”

    Pohl’s AI understood its limitations. I wish the AI producers now did. I get the feeling that many of them just don’t like people very much, and think machines would be an improvement. I’d rather improve the people.

    Keep walking!

    1. Exactly! The people developing this stuff haven’t read enough good SF and haven’t understood what they have read.

      I particularly like your observation: “Pohl’s AI understood its limitations. I wish the AI producers now did.”

      I plan to keep walking as long as my legs let me.
