It was another rough news week. We seem to be having a lot of these over the past, what, five and a half years? As you've probably noticed, I don't really delve into the doom and gloom on this blog. I'm not naive to it, it's just not what I want to write about, and being that this blog really serves no purpose other than giving me an outlet to write about what I want to write about, I'm not going to write about things I don't want to write about. I'm going to write about things I want to write about, and today that's a really interesting episode of the Lost Debate podcast I listened to about AI. Before I get to that, however, I want to link to an episode of a different podcast, The Gist, about renewable energy. It's good, and reasonably optimistic, and probably on a more important topic than the AI one, but I personally find AI stuff more fascinating than climate stuff, so that's what I'm going with today.
You should listen to the Lost Debate episode yourself, but if you don't, I'll give you the tl;dl (too long; didn't listen) version below. And if you want the tl;dr version of my tl;dl version, it's this: Over the past year or so, the advancements in AI have mostly stalled out, and there's no indication that they will leap forward again anytime soon. This means the promise of a benevolent super-intelligence solving all the world's problems will likely go unfulfilled. But I don't think too many people were banking on that anyway. On the contrary, I think most people were apprehensive about the prospect of machines attempting to destroy humanity Terminator style, or at least of machines taking all our jobs, replacing all our human relationships, and depriving us of any sense of purpose whatsoever. It seems that those things probably won't happen either.
Now, before I go any further, I should say that I am not an expert in AI, and I'm largely regurgitating what the guest of the podcast, a computer science professor named Cal Newport, said. But I do have a strong background in applied math and scientific computation, so I understand a lot of the concepts at a high level (and I feel confident I could learn the nitty-gritty details if I took the time to do so). I have just enough of a comprehension to put my own spin on things, so I'm not just mimicking what the expert says. Also, everything everybody says is just speculation, anyway. Nobody actually knows what will happen. Not all speculation is the same -- some people should be listened to over others -- but it's still speculation and prediction. There's no reason why I can't add my own opinions to the mix.
Okay, with all those caveats out of the way, here are my thoughts...
To understand why AI technology has seemingly stalled, it might help to understand how AIs like ChatGPT work. I once heard somebody describe these AIs as an "amazingly good auto-fill." Basically, given a prompt, they decide what the "best" first word is, and then, using that first word as additional input, they decide what the best second word is, and then using that, they decide the third word, and then the fourth word, and so on, until they get to a point where the best word is no word, and they stop. If they did a good job deciding what each word should be along the way, they will have produced an intelligent response.
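If it helps to see that loop spelled out, here's a minimal sketch in code. The next_word function is a stand-in I made up for illustration -- in a real chatbot that step is a gigantic neural network -- but the word-by-word feedback structure is the whole point:

```python
# A minimal sketch of the "auto-fill" loop described above. The
# next_word(text) function is hypothetical: it returns the model's
# "best" next word, or None when the best choice is to stop.

def generate(prompt, next_word, max_words=500):
    """Build a response one word at a time, feeding each pick back in."""
    text = prompt
    response = []
    for _ in range(max_words):
        word = next_word(text)   # the "best" word given everything so far
        if word is None:         # the best word is no word -- stop
            break
        response.append(word)
        text += " " + word       # the chosen word becomes additional input
    return " ".join(response)
```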
This raises the question: How do they decide the "best" word along the way? That's where things you've probably heard of, like "training" and "machine learning," come into play. Basically, before an AI is released to the public, it goes through a long computing period, where it scours a kajillion bytes of available data -- books, blogs, songs, etc. -- and then it remembers certain markers about these things. So, when the user prompts it, it goes, Oh, I've seen something like this before; I should respond as follows...
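Here's a toy version of that idea, just to make it concrete. To be clear, this is my own illustration, not how the real systems work under the hood -- they learn patterns with huge neural networks, not literal lookup tables -- but the spirit of "remember markers from the data, then reuse them" is the same:

```python
from collections import Counter, defaultdict

# Toy "training": tally which word tends to follow each pair of words
# in the training text. (Invented for illustration; real training is
# vastly more sophisticated.)

def train(words):
    counts = defaultdict(Counter)
    for a, b, c in zip(words, words[1:], words[2:]):
        counts[(a, b)][c] += 1   # after seeing (a, b), word c appeared
    return counts

def best_next_word(counts, a, b):
    followers = counts.get((a, b))
    if not followers:
        return None              # never seen this context -- stop
    return followers.most_common(1)[0][0]   # "I've seen this before..."

words = "the cat sat on the mat and the cat sat on the rug".split()
model = train(words)
print(best_next_word(model, "cat", "sat"))  # -> "on"
```

You could plug best_next_word straight into the generate() sketch from earlier and you'd have a (very, very dumb) chatbot.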
It might be easier to think about in terms of a game like chess. The best chess engines can now annihilate the best human players. In the past 30 years, we've gone from "machines can never beat humans" to "humans can never beat machines." The way computers are able to win so consistently is by making moves no human could ever think to make. For example, a chess bot might just give up a knight early in the game for seemingly no reason. It has a reason -- it's played against itself millions of times, and it knows from experience that a knight sacrifice in this given situation is a winning move -- but there is no way any human could possibly deduce this. Humans strategize and think a few moves ahead -- if I do this, they'll do this, then I can do this... Machines assess given situations, and then use what they've learned from their extensive training sessions to make the corresponding moves with the highest win percentages. Nobody, not even the machine itself, can explain exactly why a machine made a certain move other than that's just what the numbers developed through training say.
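In code, that "highest win percentage" step might look something like the sketch below. I'm inventing all the names and numbers here, and real engines also search ahead rather than just looking moves up in a table, but this is the gist of playing by the numbers:

```python
# A sketch of picking moves by win percentage, assuming we already have
# self-play statistics: for each (position, move) pair, how many games
# were played and how many were won. All data here is made up.

def pick_move(stats, position, legal_moves):
    """Pick the legal move with the best observed win rate."""
    def win_rate(move):
        wins, games = stats.get((position, move), (0, 0))
        return wins / games if games else 0.0
    return max(legal_moves, key=win_rate)

stats = {("opening", "Nf3"):  (62, 100),   # won 62 of 100 self-play games
         ("opening", "e4"):   (55, 100),
         ("opening", "Nxe5"): (71, 100)}   # the "inexplicable" knight sac
print(pick_move(stats, "opening", ["Nf3", "e4", "Nxe5"]))  # -> "Nxe5"
```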
Because games have well-defined rules and state spaces, it's not too surprising that computers can get very good -- far better than any human -- at games through this type of learning. It's much, much more surprising that AIs can learn this way for life in general. But they can. In fact, this is what jump-started the AI hype a few years ago in the first place. The major AI companies decided to ramp up the training of their chatbots, using more computing power for a longer period of time, and the results were off the charts. Just by increasing training, the chatbots got way better at things -- holding conversations, solving logic problems, writing songs, etc. So, they did it again, and the gains jumped up again. So, they did it again, and again the results jumped.
This is when we really started to hear about AI taking over, as the belief, understandably, was that AI was going to just keep getting better as the training got more intensive. Working under this assumption, the AI companies ramped up again, built massive computing warehouses, and subjected their chatbots to super-powered, months-long training sessions. And the needle barely moved. They got better, but only marginally so. Apparently, the improvement for Meta's commercial product was so minimal, it wasn't even worth releasing as a new version. It was just as weird to see things suddenly stall out as it was to see those incredible jumps in the first rounds of training.
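One simple way to picture the stall -- and I want to stress this is my own back-of-the-envelope illustration, with every number invented, not anything from the podcast -- is to suppose a model's error shrinks like a power law in training compute but can never drop below some irreducible floor. The first few 10x jumps in compute buy huge gains; the later ones barely move the needle:

```python
# Hypothetical diminishing-returns curve: error falls as a power law in
# compute, down toward a floor it can never cross. All constants invented.

def error(compute, floor=1.0, scale=10.0, alpha=0.5):
    return floor + scale * compute ** -alpha

prev = None
for c in [1, 10, 100, 1_000, 10_000]:
    e = error(c)
    note = "" if prev is None else f"  (improved by {prev - e:.2f})"
    print(f"compute x{c:>6}: error {e:.2f}{note}")
    prev = e
```

Run that and the improvement per 10x shrinks from about 6.8 to about 0.2, even though each jump costs vastly more compute than the last. Whether anything like this curve describes real chatbots is exactly the open question.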
So, that's where we are now, and according to Professor Newport, the upward scaling of the training was the promise behind AI. That was basically the whole shebang. Without that, AI is just a normal, impressive, maybe-good, maybe-bad new technology, not a humanity-altering singularity. And for most people, I think, this is a comforting thought.
Alright, I actually had a few more things to say on this, but I appear to have run out of time -- gotta go get my flag football coach on.
Until next time...
PS -- Like last time, I had to hustle off to the game before posting this, and like last week, Lil' S2's team came up victorious. It started out shaky, as Lil' S2 threw a pick-six on the game's opening play from scrimmage, but we persevered and pulled it out 18-13. I had a moment as coach I'm particularly proud of. Late in the game, with us losing, we faced a big 4th-and-long. Defenses are allowed one blitz per four downs (where a kid can just run straight for the QB), and they hadn't used it yet, so I knew they would. The entire drive I had this kid M playing QB, so I put Lil' S2 next to him, seemingly as a running back, and then I had M call "go!", but I had the snapper hike it to Lil' S2 instead. The blitzer predictably came for M, giving Lil' S2 enough time to get off a bomb to our star wideout Z, who made a brilliant catch in traffic, converting the first down. Then we scored the go-ahead touchdown a few plays later. Bam!