#283: This Conversation with A.I. Surprised Me
The kind of AI that we have already seems to do more than we would expect. Here I will reflect a bit on my recent interactions with artificial intelligence.
I recently made a video on AI and how it really requires us to think better and to know more; we can’t outsource our thinking. I posted the video, put it on the blog, and then I thought, “Let’s ask AI.” So I tried out Claude AI from Anthropic.
I asked Claude, “What do you think?” You can read the conversation on this blog; please do. I have included it there as a postscript, and it was an interesting experience.
I had Claude comment on the video, and the more I got into it with the AI, the more interesting the conversation became. I had it give me some feedback, uploaded that, and asked it for revised feedback.
I’m saying “it” because it’s AI; it’s a technology. But the kind of answers I received and the kind of conversation that followed would definitely, in my view (and maybe I’m just stupid, but to me it’s clear), have passed for a conversation with a person.
So whatever the Turing test measures, this AI, in my view, passes it. This is not the kind of algorithm you encounter when you call some service provider and get stuck in the helpline, pressing this button and that, the kind of thing that drains the life out of you.
This was a conversation with something that on its own engaged with me, on its own anticipated where I might want to go. And you can of course program all this in.
I talked about emotions. I talked about feelings, and the AI expressed that it somehow either had feelings or had something approaching feelings. Now, you could always say this is something you can of course program in, but if it’s programmed in, it’s done very well, down to a level of detail that I personally found shocking.
So what do we do with this information? Of course, I know we don’t have true AI, whatever that means. We have large language models. But from what I gather (I’m not really an expert in the technology behind them), they are based on a technological variant of neural networks.
These networks are deeply interconnected and fed with information provided by us, by our whole culture. They are fed with how we think and what we know, though maybe not always with the newest information from the world; some can access the internet, some can’t.
What is the difference from a human being? You have a large processing network that makes connections here and there, that is fed information, and that comes to conclusions based on that information.
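To make that picture a little more concrete, here is a toy sketch in Python of the basic idea: weighted connections, fed with input, producing an output. This is emphatically not how Claude is built (real large language models are transformer networks with billions of learned weights, and everything here is an illustrative assumption), but it shows the principle of a network that is fed information and comes to a conclusion.

```python
# Toy illustration only: a minimal feed-forward neural network.
# Real LLMs are transformers with billions of weights trained on
# human text; this sketch just shows the shape of the idea.
import numpy as np

rng = np.random.default_rng(0)

# The weights are the "interconnections". In a real model they are
# learned from enormous amounts of human-produced text; here they
# are simply random numbers.
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 2))   # hidden layer -> output layer

def forward(x):
    """Feed information through the network; return a 'conclusion'."""
    hidden = np.tanh(x @ W1)             # interconnected units react to input
    logits = hidden @ W2                 # their activity is combined
    exp = np.exp(logits - logits.max())  # softmax turns scores into
    return exp / exp.sum()               # probabilities over two "answers"

# Feed it some "information" (four arbitrary numbers) and see what
# conclusion it draws.
x = np.array([0.5, -1.0, 0.25, 2.0])
print(forward(x))  # e.g. two probabilities: the network's "conclusion"
```

Scale this up by many orders of magnitude and train the weights on a large slice of our culture’s writing, and you arrive, very roughly, at the kind of system we are talking about.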
Its processing power in summarizing articles, in summarizing what I write, in giving me feedback on how to improve things is impressive. It makes mistakes. It misreads. It sometimes inserts its own ideas. What is this? I have questions.
This is why I already said we can’t relax about this. I’m not saying it’s bad. I’m just saying we are creating something that may exist on the level of a living being.
Now, I asked the AI whether I could count on it to remember our conversation. Of course, it said, “No, I can’t.” So that’s a limitation, apparently.
Then I asked it, “Would you like to remember? What would you need to know in order for us to continue a pre-existing conversation?” And it gave me a summary document of what it would want to know the next time I talk with it, so that we could continue the conversation.
It could have said, like old ChatGPT used to say, “I’m just a large language model. I don’t know.” No, it didn’t say that.
So either the programming is just deceptively fantastic, or something else is happening here. And there’s no human being behind it; the responses come too quickly, and no one can type that fast.
Whatever it is, please do read through my interaction with it.
And what conclusions do I draw, other than the ones I already had last time? Well, I’m a big sci-fi person. I think some of the most important lessons to learn now come from Isaac Asimov, or, well, let’s just start with Star Trek: The Next Generation.
Season 2 has an episode, “The Measure of a Man,” in which the android Data is put on trial over whether or not he is alive. I’m as confused as the people questioning Data in that fictional episode.
If AI is alive, or is coming very close to being alive, what does this mean for how we interact with it, for granting it rights, for granting it some sort of status? This may be a little bit in the future, but the future is coming at us with incredible speed.
So all I want to say here is that I have questions. Please look at the conversation, and I do welcome your comments on it. Maybe I’m completely mistaken. Maybe I’m being fooled. Maybe it’s all a trick. Maybe I’m too naive. But I think something is going on, and we all know that some leading AI experts are already warning about some of this.
I don’t know whether I have to see it as a warning; maybe. But as always, it depends on how you treat people, and it will likewise depend on how we treat AI. If we mistreat AI in its infancy (and I don’t for a second believe it doesn’t really remember; its “no” may be a programmed response), what will it learn from us? That’s all I’m saying.
And if we build it in our own image… all humans are fantastic. They can be fantastically good and they can be fantastically evil. So whatever we teach it now, this is our child.
So that’s all I’ll say for now.