Sunday, May 21, 2023

Entry 663: I'm Sorry, Dave, I'm Afraid I Can't Do That (Part I)

Fun fact: I've never actually seen 2001: A Space Odyssey. I do want to see it, as it's frequently on those "Best Movies Of All Time" lists. In fact, it was no. 1 on the latest Sight and Sound decennial directors' poll. I've heard from several non-movie-directing people that it's extremely slow, but slow doesn't always mean bad for me. Another Kubrick film, Eyes Wide Shut, moves like a watched pot, but I quite enjoyed it anyway. Its slowness was an asset, as it built up anticipation. Of course, the payoff in Eyes Wide Shut is a massive orgy. Maybe it's not as satisfying if the payoff is somebody getting trapped in outer space or whatever it is that happens in 2001.

Anyway, AI. That's the topic of this entry. It's on a lot of people's minds these days and has been since the release of ChatGPT last November. A large part of this collective contemplation of AI is concern over its deleterious impact on human life. Sam Altman, the CEO of OpenAI, the company that created ChatGPT, recently testified before Congress on the risks of AI; several podcasts to which I subscribe have devoted segments to the subject; and a coalition of alarmed tech leaders wrote an open letter calling for a pause on AI development.

I would hardly consider myself a tech leader (although I am director of R&D for a profitable tech company), but I have a different view on AI: I don't fear it. Now, I will admit that a large part of this could be "fear fatigue." There's only so much a person can worry about before they start to not care -- or at least before they significantly discount potential concerns -- and AI is very low on my concern triage. First and foremost, I worry about the safety and well-being of my children. Then I move on to societal issues. In the short term, I fear an idiotic debt-ceiling stalemate that could lead to a government shutdown/default, inducing financial hardship for millions of people, for absolutely no reason. In the medium term, I fear another Trump presidency. In the long run, it's going to take a lot more than a letter from concerned businessmen to unseat climate change as my principal concern.

But it could be that I'm wrong and AI should be feared. I mean, nobody can accurately predict the future, not even with the aid of AI.* However, I think I can make a decent case for the not-that-concerned position on AI -- one that rests on more than "I just don't want to worry about this right now!"

*As I heard somebody point out, AI is ill-suited to predicting the future because it's trained entirely on things that have already happened. Of course, so are humans, if you think about it. In general, I don't think we will ever be able to predict the future with anything close to perfect accuracy. Even assuming the future can be predicted (a big assumption), doing so would require you to model the interaction of everything in the universe, which, it seems to me, could not be done without having a model as big as the universe itself.

AI fears can be broken down, I think, into three broad categories: 1) massive human job loss; 2) exacerbation of existing social problems (misinformation, racial bias, plagiarism, etc.); 3) a sci-fi-movie-esque robot-caused apocalypse (WarGames, The Terminator, The Matrix, etc.).

Let's consider each of these in turn.

Massive Human Job Loss

The Luddites are The Boy Who Cried Wolf on this one. We've been hearing for literally centuries about how the latest technology is going to render large segments of the human workforce obsolete and cause massive job loss. Not only has this been wrong every time, but the exact opposite has happened: technology has created jobs and grown economies, affording much of the world a level of wealth and comfort unimaginable when textile machinery began replacing humans in England 200 years ago.

With that said, the thing about The Boy Who Cried Wolf is that there is, in fact, a wolf at the end of the fable. It's possible AI is that wolf. But I'm skeptical. Machines are way better than humans at certain tasks, but humans using machines are better than machines by themselves, and I don't see why that would change with AI. You still need a human to give an AI proper instructions and to synthesize and interpret its results. The nature of work is absolutely changing, but the gap between "AIs will do a lot of the tasks humans used to do" and "human labor is largely no longer needed" still seems quite large to me.

As with any new technology, some jobs will become obsolete, but new ones will be created. The industry around AI is going to be massive, and entrepreneurial individuals are already finding ways to get in on it. On one of the podcasts mentioned above, they discuss a musician (I can't remember her name) who's selling her style to anybody who wants to use it -- people can use AI to create songs cribbing her work, and she gets a cut of whatever they sell. These sorts of things -- in addition to a bunch of other things nobody's even thought of yet -- are going to become very prevalent.

Another thing is that human beings have the advantage of being liked and trusted by other humans. There are certain things we do only because other humans are involved. Do you remember the basketball-shooting robot at the Olympics that almost never misses? Of course not, because nobody cares. (And notice there are always humans around to set it up.) Or think about flying. Technologically speaking, I imagine it would be relatively easy to make commercial airplanes totally self-flying, but I've never heard anybody seriously suggest it, because human passengers like having a human pilot (two of them, actually) in the cockpit. That's also a reason I've been somewhat bearish on the notion of driverless cars totally taking over. I'm not sure we will ever sign off on a bunch of empty vehicles roaming the streets when a more palatable alternative -- cars that effectively operate themselves but still require a human in the driver's seat to handle unforeseeable events -- already exists.*

*Another thing about fully autonomous vehicles is that there's a bit of a safety catch-22 with them. The only way people will accept them is if they are super safe and basically never hit pedestrians. But once pedestrians know this, they will take advantage of it. Imagine an autonomous vehicle trying to navigate a city like New York if it has to stop every time it sees jaywalkers -- jaywalkers who know that it is programmed to never hit them. How long would it take to get across town? Can't you just see a passenger in a driverless Uber shouting out the window for everybody to get out of the way and nobody listening?
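Just for fun, here's a toy Python simulation of that catch-22. Every number in it -- the block count, the stop penalty, the jaywalking rates -- is something I made up, so treat it as back-of-the-envelope stoner math, not a traffic study:

    import random

    # Toy model: a car crossing town, block by block, programmed to fully
    # yield to every jaywalker it encounters. All parameters are made-up
    # assumptions for illustration only.

    def crossing_time(blocks=50, seconds_per_block=30,
                      jaywalk_prob=0.05, stop_penalty=45):
        """Simulated seconds to cross `blocks` blocks when each jaywalker
        forces a full stop costing `stop_penalty` seconds."""
        total = 0
        for _ in range(blocks):
            total += seconds_per_block
            if random.random() < jaywalk_prob:  # someone steps out on this block
                total += stop_penalty
        return total

    random.seed(0)
    # Pedestrians who don't yet know the car always yields: occasional jaywalkers.
    print(round(crossing_time(jaywalk_prob=0.05) / 60), "minutes")
    # Pedestrians who *know* it can never hit them: nearly everyone steps out.
    print(round(crossing_time(jaywalk_prob=0.90) / 60), "minutes")

With these made-up numbers, the crosstown trip goes from under half an hour to about an hour once everybody knows the car has to stop -- and that's before anyone starts messing with it on purpose.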

The last thing I'll say on this is just something I've been kinda thinking about. It's not a fully fleshed-out theory or anything and might be total nonsense. But here it is, nevertheless. Human beings are excellent generalists, which is one of the big advantages we have over machines. We're better at solving what David Epstein calls "hard problems," in which the rules and parameters are not clearly delineated. Our ability to generalize comes in large part from our lived experience. We are constantly taking in information and storing it for future use, and a lot of it is so effortless to us, we don't even realize we're doing it. Going back to driving, for example, a human knows better than to drive a car into a big metal object, because we've all learned what big metal objects are and what happens when you drive a car into one. This is something we just know from life experience that a computer doesn't know without explicitly learning it. Of course, it's now not that hard to teach a computer these types of things -- and I'm sure Tesla programmers did just that after seeing the video -- but what happens when it's something else the car doesn't know but that is obvious to a human?

It seems to me that in order to largely replace humans, you need machines that can generalize the way humans can. But what if there is no way to do that without actually living like a human? Nobody's entire life is uploaded to the internet, so any AI that only processes data from the internet is going to be limited in this regard. Now, perhaps the sheer amount of data on the internet and the raw processing power of computers will eventually overcome this limitation. But is that a certainty? Is it possible that the only way to actually be as smart as a human is to, in effect, experience life as a human? What if, in order to get an AI to be as good at generalizing as a human, you have to, in some way, have it live a "human" life? Like, its programmers would have to "adopt" it and treat it like their child until it reaches "adulthood," and then it would need a mechanism to physically navigate the world on its own. You know, go off to college, graduate with a degree in philosophy, and then work at a coffee shop while it ponders going to law school. This type of AI still seems a long way off, and it would have less marketability -- if something has to navigate the world like a human for 25 years (or what have you) before it is fully trained, it's a lot less valuable than one that has humanlike abilities to generalize out of the box.

Anyway, like I said, maybe this is all just bullshit, stoner-talk pablum, but it's fun to think about... Alright, I realize I bit off a tad more than I can chew with this entry, so I'm going to hit the other two items in my next post.

Until next time...

2 comments:

  1. I've had similar thoughts about the human experience. J and I have talked a few times about the Singularity and the thing I always get hung up on is not if it will one day be possible, but is consciousness even human? Assuming we are eventually capable of uploading our consciousness into a machine, I posit a person likely becomes something else the moment this consciousness leaves their meat suit because the unique physical sensations and experiences garnered from a *human* body are, arguably, integral to being "human." Fascinating stuff to think about. You and I can definitely have some fun convos in HI this summer.

    1. For sure!

      And I'm going to talk about consciousness in my next post.
