Last time on Dead Fresh I talked about AI with James Paterson. It was an interesting and informative back and forth in which James helped add context and colour to my admittedly spotty understanding of this nascent technology. Based on the lovely feedback I’ve gotten, it did the same for many of you too.
So now that we’ve responsibly set the table and soberly covered our bases, let’s take this puppy off the leash and out for a walk. If last week was about excavating rabbit holes, this week let’s bounce on some unicorn trampolines…
Presstube: So are you aware of the idea of watermarking and the engine consuming its own bullshit? Have you heard anything about this?
Dead Fresh: No, but I assume it’s very chill and nothing to worry about (laughs).
P: Ok, so right now the models are trained on humanity’s information, which is imperfect and flawed in many ways but still has a certain level of factual integrity. At a base level, the foundation of our information landscape holds together. But this machine, this new system, can produce a lot of faulty facts and figures, right? It can tell you the wrong people invented the wrong thing at the wrong time. It can tell you all kinds of factually incorrect things with great confidence.
DF: I mean, this sounds like something some people are doing already. It’s one of the most infuriating things about the world we live in.
P: For sure. Our data integrity, prior to AI, was already deeply flawed. Not to put too fine a point on it, but all of reality and all of our storytelling is a sort of fiction at best, right? But at least there’s some sense of maintenance and integrity. We have experts in the field who will go into a Wikipedia article, bash away the bullshit and keep the good stuff. But something like ChatGPT can pump tremendous amounts of stuff back into the pool, so the next time it gets trained, what is it consuming? Some of what it’s consuming will be a layer of really confident, difficult-to-detect bullshit. It’s eating its own feces.
So then when it produces new stuff, it is foundationally trained on lies that it created in the first place. This is why some of the players in the space are being more careful and are trying to come up with a watermarking scheme where the system can detect when it wrote a piece of information. The problem is that this is extremely hard to do. So while we thought we were already in a post-truth era, we’re about to enter the hockey stick phase of post-truth unless these issues get sorted out soon.
DF: Ooof, right. Like trying to fact check or content moderate Trump tweets is going to seem quaint in comparison.
P: Exactly.
DF: Ok, so let me step back a minute. When I think about what we’ve been talking about and all the different takes out there, there seems to be a spectrum of possible AI outcomes that people believe in. It’s a pretty dramatic spectrum that stretches from “ruin society Terminator-style” all the way over to “just pull the lever to make society great again”. So where do you personally fall on that spectrum?
P: Okay… My feeling is that it’s going to solve problems, and in fact is already solving problems, that have been intractable for us. Maybe it cracks cold fusion, as an example, which is something we've been finding very, very challenging. So I'm bracing myself for a Cambrian explosion of problem solving where a bunch of stuff that we've been stuck on gets done. That's sort of generally positive. But of course, each of those developments has its own negative slipstream that we can't possibly understand.
DF: I can hear those Yellowstone wolves howling again…
P: As for the “taking our jobs” part, that freaked me out for a while, but ultimately I’ve landed somewhere personally that I feel comfortable with. Which is, essentially, that nothing can take away your experience of being alive. AI can’t do that, no matter how sophisticated it gets. So, if and when we get the rug pulled out from under us, it’s pulled out from under an already faulty view of how we derive our sense of value and meaning. Our culture places value on things like being young, hot, talented, rich and smart. If not, you’re a loser. So if something comes along to knock that off a bit, then I’m okay with the job trade-off.
Now, the part that I am freaked out about and am trying to metabolize is that we are incredibly persuadable. We’re more like ChatGPT than we want to admit. We're essentially like a meat-based ChatGPT… like a MeatGPT? (laughs)
DF: MeatGPT! Oh my god, James.
P: That part is about confronting what we actually are. We’re already seeing this with people getting radicalized by going down YouTube rabbit holes. I know people who believe some very weird things because of how persuadable they are and how persuasive the algorithms are.
DF: Yes, by now we all sort of know how good our social platforms are at this, how they’re engineered to keep us on them, to keep us engaged. And yet they still work.
P: Exactly. So this is like that. Imagine you have a personal AI product that has a much larger window of what it remembers about you. It won't take long before it knows what to say to ingratiate itself to you, to make you believe this or that. I think we're gonna find out that we're crazy persuadable and that we have this new force in the world that can essentially make us putty in its hands.
DF: Ewww.
P: Right? It’s really weird. The idea of being putty in the hands of something that isn’t even sentient. Like what does that even mean? What the fuck! All these lonely people out there are talking to this new thing, and their minds are changing, their beliefs are changing and they are being shaped by this force. I just don’t see how that’s gonna be good.
DF: And I mean we already have a clear history of turning ourselves over to tech stuff without thinking about it too much. Like even just posting photos to Instagram because of the free little dopamine candy we get out of it.
P: Exactly. 1984 predicted a world where Big Brother was in the corner of the room, watching you and punishing you if you didn’t do what it said. But it turns out that, instead, we’d happily pay 1000 bucks for a new phone and then just broadcast our lives and surveil ourselves for them.
DF: Right, it’s not a malevolent force in the corner…
P: Yeah, it’s like how vampires don’t come into your house unless you welcome them in. And it’s much creepier that we want them to be here.
DF: Look, the vampires are just way more helpful than I thought they’d be and honestly they’ve been super nice to me so far.
P: So if some of this pans out, then what else are we? Maybe there's a weird positive side of this that would be like - look this is so utopian, I admit, as I'm saying it - but what if it forces everybody to discover what they are beyond their own logical mind and habitual storytelling?
DF: You know, funnily enough I was talking about this with a friend of mine who brought up the writer Iain M. Banks’ Culture series of books that imagine a post-scarcity future where our resources are all accounted for, which of course impacts everything.
P: Yes, so maybe it can be a gift. Maybe we remember how we're actually part of nature in a certain way. Like we've differentiated ourselves as humans because of this kind of ChatGPT thing we can do. We’ve used it to dominate the earth and we've done all kinds of things - nefarious shit and all sorts of beautiful stuff too. There's a chance that we get back in touch with other aspects of ourselves that are much more in common with the rest of nature.
I mean, if this thing gets wildly out-of-the-bag smarter than us, it's game over. It’s not going to be 100 times smarter than us, it’s going to be a trillion times smarter. And if that's the case, then you’re going to have way more in common with your dog than with the AI. Our pets will seem really close. Like you’ll be hanging out with some hamsters and thinking “my brothers!”
DF: Mammal solidarity! Wow, well somehow that seems like the perfect place to end this. Thanks so much for taking the time, James. This was great.
That is the part that rang my bell.
The human experience, a.k.a. MEAT ASLEEP AT THE WHEEL, is about to get infinitely weirder. Mammal solidarity was a great note to end on. Thoroughly enjoyed the whole route.