Maybe when it comes to the chat bots and art bots and other such creations, we’re asking the wrong questions.
I mean, if a chat bot can pass the bar exam, the question shouldn’t be “Can a chat bot practice law?” but rather “Does the bar exam do a good job of determining whether someone would make a good lawyer?”
Having taken a bar exam, I can assure you it’s primarily a hazing ritual. I’m sure the chat bots do very well on the multistate multiple guess portion of the exam, which requires you to memorize vast amounts of information, much of which is not relevant to actual practice.
My experience with the bar exam is out of date, but when I took it, we had to learn all the old common law definitions of criminal behavior (based on British law). These were no longer in use in Texas (where I took the bar) or in any other state that had adopted a modern penal code.
I bet a chat bot is hell on wheels at stuff like that, but I suspect a bot lawyer would not know what to do when its client was before a judge for revocation of probation (on a felony drug charge) and the judge, in the middle of ranting at the client, gave the bot a huge wink.
I’m not even sure how a chat bot would know about the wink, but assuming a bot could see it, I suspect it wouldn’t know it was a signal that the judge wasn’t going to revoke probation.
The time it happened to me, I knew what it meant. In fact, I knew from the moment the judge started ranting that he wasn’t going to send my client to prison. He winked at me because I was a young lawyer and he wasn’t sure I understood yet that he was yelling at my client in lieu of revoking probation.
The real practice of law is about subtleties. I suspect algorithm-driven software fueled by large language models is very useful for plowing through reams of documents, and it will get better, but it’s going to be crap at the negotiating table or in the courtroom, where you have to read people as well as put the right information before them at the right time.
A bar exam that a bot can pass is not going to show who’s good at that kind of interaction. So maybe we need to change the bar exam. It could even include questions that require test takers to use some of the law-focused software effectively, or to review an analysis of documents created by a chat bot to see whether the bot missed something crucial.
It might even include making an oral argument or some other presentation that only a person can do.
I am pretty sure that the chat bots can easily reach the point where they will blow the top off of every multiple guess standardized test in existence, assuming they haven’t done so already. Those tests and the bots “think” alike: the tests have certain patterns that can be learned, and the bots are excellent at that kind of pattern matching.
I happen to be very good at those tests, partly because I have, for a person, a large knowledge base and a good gut instinct about what the tests want. I also think they’re junk. All they test is how well you take those tests, which isn’t useful for anything except taking those tests.
Yeah, some very smart people do well on them because they know a lot and understand how the tests work.
But some very smart people do very badly because they question (with good reason) the questions. Other very smart people simply haven’t been lucky enough to be exposed to the knowledge base.
The chat bots have all the kinds of knowledge used in the tests, and they lack the ability to think about whether a question makes sense.
In fact, they don’t think at all. And that’s important, because the label “artificial intelligence,” combined with the way these bots sound like a person in some situations, makes people think the bots can think and reason.
But right now the term AI just means large language models and brute-force computing. It’s not even HAL in 2001: A Space Odyssey, much less Data in Star Trek.
I suspect we’ll get an AI that can think someday, but right now that’s not the problem we need to address. Some very smart people are looking at the real problems, which include the lack of transparency in how these programs are developed and the sheer amount of energy their computations require.
And companies will use them in ways that create even more misinformation than we’re dealing with now.
One of the best critiques out there is by Ted Chiang in The New Yorker. He describes the kind of work the bots put out as a “blurry JPEG” and points out that while they can write something that sounds like a reasonable student essay, they can’t do simple arithmetic.
I won’t try to describe his arguments any further. Go read the article for yourself. It’s worth your time because, unlike a chat bot, Ted Chiang can think in complex ways about a subject and write about it in a way that makes you think about it as well.
When we have AI that can do that as well as Ted Chiang, I’ll get worried.