Saturday, May 27, 2023

Entry 664: I'm Sorry, Dave, I'm Afraid I Can't Do That (Part II)

This is a continuation of my last entry, in which I offer some rebuttals to fears over AI. The intent of these posts is more to kick around some ideas I've been considering than it is to try to offer accurate predictions of the future. I fully stipulate I could be wrong about everything, with the counter-stipulation that so could anybody else offering takes on such an unpredictable technology.

In the first entry, I broke down AI concerns into three broad categories: 1) massive human job loss; 2) exacerbation of existing social problems (misinformation, racial bias, plagiarism, etc.); 3) sci-fi movie-esque robot-caused apocalypse (WarGames, The Terminator, The Matrix, etc.). I already considered the first concern, so I will address the other two in this post.

Exacerbation of Existing Social Problems

The reason I don't expect AI to exacerbate existing social problems is not one of optimism. It's quite the opposite: I think a lot of these problems are already so bad that the deleterious effects of AI can only be marginal, at best (or worst, as it were). Take disinformation as an example. AI is already at the point where you can mimic somebody's speech and make them say things they didn't actually say -- it is impossible for the human ear to recognize it's a fake -- and if this technology isn't there for video yet, it will be soon. The potential for abuse is obvious, and I think Congress should pass a law making it illegal to believably* impersonate somebody using AI without their permission. There might even be existing fraud or libel laws that already cover this -- I'm not a lawyer -- but even absent these laws, I don't think disinformation would get worse, because people simply would not believe things, no matter how realistic they seemed, unless they came from a trusted (to them) source. The reason I think this is that that's where we are with authentic news now.

*You need this qualifier to protect parody impersonations. Parody should still be allowed, but it should be stated explicitly that it's parody, and impersonations should be done in a way such that the average person could immediately recognize that they are impersonations. This is how humor works, anyway. If Eddie Murphy literally sounded like James Brown when he sang, it would just sound like a cover, and there wouldn't be anything funny about it.

Our disinformation problem isn't really one of disinformation; it's more one of siloed information and tribalism. It's not so much that people are believing straight-up lies; it's that they are only hearing and processing the parts of the story they want to hear and process and avoiding or ignoring the parts they don't. I have countless examples of news stories that I read about from a "liberal" source, and then later read about from a "centrist" source, and the two takeaways are very different, even though neither is saying anything explicitly untrue. I don't want to get too far into this now, because it's not really the point of this post, but I will give one very low-stakes example.

In 2016, HuffPost published an article about how four Paralympians ran a faster 1500m time than the competitors in the 1500 at the Olympics. This is factually accurate, but what the article doesn't mention is that different races have different paces that influence the winning time. Runners don't always run as fast as they can from the get-go, because their opponents will draft off of them from behind, conserving energy, and then make a kick at the end. There's a "you go; no, you go" strategy to mid-distance running that can cause times in any given race to be significantly slower than what the runners would post by themselves. The 2016 Olympic 1500m just happened to be an abnormally slow race because nobody set a fast pace on the first lap. (In fact, in the next Olympics, the times in the 1500 were quite a bit faster than the Paralympic records.)

Why would HuffPost omit this key bit of context about racing and make it sound like the Paralympians are actually faster than the Olympians? Probably because they want to demonstrate their anti-ableist bona fides* or what have you. That's their brand, and delivering news that's on-brand is more important to them than delivering news that tells the truth, the whole truth, and nothing but the truth. Most news companies are financially incentivized to operate this way.

*I think it does the opposite and condescends to the Paralympian athletes, who are extremely fast but not Olympian fast. 

Anyway, like I said, this is a very low-stakes example, but it's the type of thing I see with all types of news stories, from the most serious to the most frivolous. And my point is, if we already consume news in a way in which "real" is largely what the consumer wants to believe is real, and "not real" is largely what the consumer doesn't want to believe is real, then how could AI make things significantly worse?

Maybe it could even make things better. A big part of our siloed news problem is due to social media algorithms and how much traffic they direct to news companies. It's all about engagement, and nuanced takes and good-faith debate are much less engaging than "Here's why you're right and virtuous and your enemies are wrong and evil." But AI offers the promise of healthier algorithms. In effect, we could all use our own personalized AIs -- attuned to meet the goals we set -- to battle the toxic AIs social media companies push on us. Like, maybe you could put an AI filter in place, such that if you click on an article to read on your phone, it will provide missing context (a toy sketch of what I mean is below). For example, with the HuffPost article mentioned above, there would be an addendum at the end explaining why the Paralympians ran faster than the Olympians.
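
Just to make the idea concrete, here's a rough sketch in Python. To be clear, this is entirely hypothetical -- the generate() function below is a stand-in for whatever AI service the filter would actually call (no real product or API is implied) -- but it shows the shape of the thing: the filter sits between you and the article and bolts an addendum onto the end.

```python
# Toy sketch of the "context filter" idea. Entirely hypothetical:
# generate() is a placeholder for whatever AI service you'd plug in.

def generate(prompt: str) -> str:
    """Placeholder for an AI text-generation call."""
    return "[AI-written context would go here]"

def add_missing_context(article_text: str) -> str:
    """Return the article with an AI-written addendum on missing context."""
    prompt = (
        "Read the following article and write a short, neutral addendum "
        "noting any important context the article omits:\n\n" + article_text
    )
    return article_text + "\n\n--- Added context ---\n" + generate(prompt)

# Example: running a one-line summary of the HuffPost piece through the filter.
print(add_missing_context("Four Paralympians ran a faster 1500m than the Olympic winner."))
```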

Would people go for this? I dunno, maybe. I suspect few people like to think they're reading largely one-sided news. But that's what the algorithms give us, and few people have the time or the desire to seek out reasonable counterpoints. But if you could set a filter like this in advance, and then not even think about it, people might do it. It might make it at least a little easier to get out of our silos.

One possibly huge problem with this, however, is that people might not trust AI to provide accurate context to them. They will think the AI is the biased party, not the news articles. And we don't even have a consensus view of what biased means. For example, if you used AI to determine potential crime hot spots in a city, and then used that data to place police officers around the city in the most efficient way possible, and the results said to put almost all the officers in the predominantly Black neighborhoods, would that be biased -- even supposing that AI had no knowledge of the racial makeup of the city's residents whatsoever? I dunno.

But what I do know is that, as with disinformation, this is already a big point of contention in society (it's sometimes framed as fairness vs. equality), and so I don't think AI will make it worse. Also, in a callback to AI providing new jobs, I bet DEI in AI is going to become a massive industry.

The last thing I want to touch on in this part of the entry is plagiarism. Well, not so much plagiarism as cheating -- students using AI to write their papers for them. This is surely already happening, and it has a very obvious solution: put more emphasis on in-class testing. Structure the class such that, if a kid turns in sterling work when it's a take-home assignment and awful work when it's in-class, then they are still going to get a bad grade.

Also, there's probably some level of AI usage that would be deemed acceptable -- like, if a student was using AI for research or to come up with ideas for an essay that they will write in their own words, that doesn't sound like cheating to me. So, maybe you could have a lab on campus with computers that have only the allowable AI and don't allow you to read or write anything else to them, and then students could work on their essays on these computers.

This system would have the added benefits of requiring human proctors (jobs!) to make sure students weren't using other devices, and of nudging students to physically be in the presence of other young people, which is a nudge I hear a lot of students these days need. Not to be all "back in my day," but back in my day I didn't have a computer at my house, so I used the CS lab on campus, and I have many fond memories of interactions with the other nerds. Once, during a late-night session, when everybody was punchy and things were getting weird, somebody read me their poem in iambic pentameter about the Rats of NIMH. Well, not the entire thing -- after all, as they repeatedly pointed out, "it's much longer than anything Shakespeare ever wrote."

Sci-fi Movie-esque Robot-caused Apocalypse

I would be much more scared of robot takeover if anybody could give me a convincing reason why robots would ever want to take us over -- or even why they would want anything at all. AIs are pretty smart now, and I believe they will be amazingly smart relatively soon, but I don't believe they will ever be living. They are made from non-living materials, and we don't know how to make non-living things alive. We aren't even close to knowing how. We're no better at it now than we were 100,000 years ago. Maybe it's not even possible. And if AI is never alive, then I don't see how it does anything that we don't tell it to do. So, if we don't want to be conquered by it, we just don't tell it to conquer us. I think there's this idea we have -- largely from consuming science fiction -- that if something is sufficiently smart it will become conscious and come to life, but there's no evidence that that's how things work. In fact, we know the converse not to be true. Plant life, for example, has no intelligence whatsoever (nothing like humans or AI, anyway), but it's very much alive. That's why a scenario like The Last of Us* is much scarier to me than one like The Terminator. Fungi probably would take us over if they could. Thankfully, they can't -- not while we have tough actin' Tinactin to keep us safe.

*I watched the first three episodes -- through the Nick Offerman episode -- but haven't resumed it and probably won't. The stories are really well-written, and I like Pedro Pascal as much as the next guy, but at its heart it's zombie shit, and I can just never really get into zombie shit.

Now, it's certainly possible that we will use AI as weapons of war to destroy one another, but that's different from AI doing it volitionally. Plus, we already have nuclear bombs for that, so it's not like AI adds much to the threat level. It's even possible AIs entering the battlefield would be a net positive for humanity. If the wars of the future are our robots fighting the other guys' robots, well, it's better than soldiers, right?

One way I can see things getting weird, however, is if AI ever becomes AP -- artificial people -- cyborgs. We already have that to some degree -- artificial bones, VNS, cochlear implants, etc. -- but that's people augmenting themselves with technology. You have to start with a person first. Humans are a necessary part of the process. But what if we mounted an AI on an artificial human frame and then added "life" to it -- hormones and proteins and such -- to create an actual artificial person? It seems to me that conscious life might be possible in this scenario. After all, it's possible in us, so why wouldn't it be possible in something else sufficiently like us? If you're spiritual, you might say there is something supernatural conscious beings have -- a soul -- that can only be created by God. But, in the past, whenever we thought things were caused by gods, we later learned that that wasn't the case. So, why would this be the exception? It seems to me more likely than not that life and consciousness could be achieved artificially by a sufficient emulation of humanness.

And if that's the case, then humans really could become obsolete. But would that be a bad thing? Wouldn't it just be an evolution of human-like life? There used to be Neanderthals, but they don't exist anymore, probably because they were overtaken by humans, and this takeover was, in a sense, partially voluntary. They mated with humans, a superior species more adept at survival, until they effectively disappeared. But, of course, all those Neanderthals were going to die at some point anyway, and so, if the purpose of life is to create more of that life, they achieved that to some degree. You can find Neanderthal DNA in humans today.

Maybe we will someday reach an analogous situation with artificial people. More and more humans will make themselves into cyborgs (already happening), until these cyborgs learn how to create themselves, and plain old humans go extinct the same way Neanderthals did. But the imprint of human life will be in the cyborgs -- in effect, they will be super-humans, and some sort of super-human-ness will be needed to survive indefinitely. Assuming we manage to keep Earth habitable and avoid being wiped out by other things (e.g., nuclear war and fungi) for the next five billion years, the sun will eventually expire, swallowing our planet in the process. And right before that is when our cyborg descendants launch themselves off to a welcoming planet in a neighboring solar system and then go into sleep mode for the 100,000 years it takes to get there.

Alright, I gotta go get ready for dinner at some friends' house, and I think I've written enough for now, anyway.

Until next time...

Sunday, May 21, 2023

Entry 663: I'm Sorry, Dave, I'm Afraid I Can't Do That (Part I)

Fun fact: I've never actually seen 2001: A Space Odyssey. I do want to see it, as it's frequently on those "Best Movies Of All Time" lists. In fact, it was no. 1 on the latest Sight and Sound decennial directors' poll. I've heard from several non-movie-directing people that it's extremely slow, but slow doesn't always mean bad for me. Another Kubrick film, Eyes Wide Shut, moves like a watched pot, but I quite enjoyed it anyway. Its slowness was an asset, as it built up anticipation. Of course, the payoff in Eyes Wide Shut is a massive orgy. Maybe it's not as satisfying if it's somebody getting trapped in outer space or whatever it is that happens in 2001.

Anyway, AI. That's the topic of this entry. It's on a lot of people's minds these days and has been since the release of ChatGPT last November. A large part of this collective contemplation of AI is concern over its deleterious impact on human life. Sam Altman, the CEO of OpenAI, the company that created ChatGPT, testified to Congress recently on the risks of AI; several podcasts to which I subscribe have devoted segments to the subject; and a coalition of alarmed tech leaders wrote an open letter calling for a pause on AI development.

I would hardly consider myself a tech leader (although I am director of R&D for a profitable tech company), but I have a different view on AI: I don't fear it. Now, I will admit that a large part of this could be "fear fatigue." There's only so much a person can worry about before they start to not care -- or at least before they significantly discount potential concerns -- and AI is very low on my concern triage. First and foremost, I worry about the safety and well-being of my children. Then, I move on to societal issues. In the short term, I fear an idiotic debt-ceiling stalemate that could lead to a government shutdown/default, inducing financial hardship for millions of people, for absolutely no reason. In the medium term, I fear another Trump presidency. In the long run, it's going to take a lot more than a letter from concerned businessmen to unseat climate change as my principal concern.

But it could be I'm wrong and AI should be feared. I mean, nobody can accurately predict the future, not even with the aid of AI.* However, I think I can make a decent case for the not-that-concerned position on AI, other than I just don't want to worry about this right now!

*As I heard somebody point out, AI is ill-suited to predict the future because it's completely trained on things that have already happened. Of course, so are humans, if you think about it. In general, I don't think we will ever be able to predict the future with anything close to perfect accuracy. Even assuming the future can be predicted (a big assumption), doing so would require you to model the interaction of everything in the universe, which, it seems to me, could not be done without having a model as big as the universe itself.

AI fears can be broken down, I think, into three broad categories: 1) massive human job loss; 2) exacerbation of existing social problems (misinformation, racial bias, plagiarism, etc.); 3) sci-fi movie-esque robot-caused apocalypse (WarGames, The Terminator, The Matrix, etc.).

Let's consider each of these in turn.

Massive Human Job Loss

The Luddites are The Boy Who Cried Wolf on this one. We've been hearing for literally centuries about how the latest technology is going to render large segments of the human workforce obsolete and cause massive job loss. Not only has this been wrong every time, but the exact opposite has happened: technology has created jobs and grown economies, affording much of the world a level of wealth and comfort unimaginable when textile machinery began replacing humans in England 200 years ago.

With that said, the thing about The Boy Who Cried Wolf is that there is, in fact, a wolf at the end of the fable. It's possible AI is that wolf. But I'm skeptical. Machines are way better than humans at certain tasks, but humans using machines are better than machines by themselves, and I don't see why that would change with AI. You still need a human to give an AI proper instructions and to synthesize and interpret its results. The nature of work will absolutely change, but the gap between "AIs will do a lot of the tasks humans used to do" and "human labor is largely no longer needed" still seems quite large to me.

As with any new technology, some jobs will become obsolete, but new ones will be created. The industry around AI is going to be massive, and entrepreneurial individuals are already finding ways to get in on it. On one of the podcasts mentioned above, they discuss a musician (I can't remember her name) who's selling her style to anybody who wants to use it -- like, people can use AI to create songs cribbing her work, but she gets a cut of whatever they sell. These sorts of things -- in addition to a bunch of other things nobody's even thought of yet -- are going to become very prevalent.

Another thing is that human beings have the advantage that they are liked and trusted by other humans. There are certain things that we do only because other humans are involved. Do you remember the basketball-shooting robot at the Olympics that almost never misses? Of course not, because nobody cares. (And notice there are always humans around to set it up.) Or think about flying. Technologically speaking, it would be relatively easy, I imagine, to make commercial airplanes totally self-flying, but I've never heard anybody seriously suggest this, because human passengers like having a human pilot (two of them, actually) in the cockpit. That's also a reason why I've been somewhat bearish on the notion of driverless cars totally taking over. I'm not sure that we will sign off on a bunch of empty vehicles on the streets, considering a more palatable alternative -- cars that effectively operate themselves but still require a human in the driver's seat to handle unforeseeable events -- is already happening.*

*Another thing about fully autonomous vehicles is that there is a bit of a safety catch-22 with them. The only way people will accept them is if they are super safe and basically never hit pedestrians. But once pedestrians know this, they will take advantage of it. Imagine an autonomous vehicle trying to navigate a city like New York if it has to stop every time it sees jaywalkers -- jaywalkers who know that it is programmed to never hit them. How long would it take to get across town? Can't you just see a passenger in a driverless Uber shouting out the window for everybody to get out of the way and nobody listening?

The last thing I'll say on this is just something I've been kinda thinking about. It's not like a fully fleshed-out theory or anything and might be total nonsense. But here it is, nevertheless. Human beings are excellent generalists, which is one of the big advantages we have over machines. We're better at solving what David Epstein calls "hard problems," in which the rules and parameters are not clearly delineated. Our ability to generalize comes in large part from our lived experience. We are constantly taking in information and storing it for future use, and a lot of it is so effortless to us that we don't even realize we're doing it. Going back to driving, for example, a human knows better than to drive a car into a big metal object because we've all learned what big metal objects are and what happens when you drive a car into one. This is something we just know from life experience that a computer doesn't know without explicitly learning it. Of course, it's now not that hard to teach a computer these types of things -- and I'm sure Tesla programmers did just that after seeing the video -- but what happens when it's something else that a car doesn't know but is obvious to a human?

It seems to me that in order to largely replace humans you need machines that can generalize the way humans can. But what if there is no way to do that without actually living like a human? Nobody's entire life is uploaded to the internet, so any AI that only processes data from the internet is going to be limited in this regard. Now, perhaps the sheer amount of data on the internet and the raw processing power of a computer will eventually be able to overcome this limitation. But is that a certainty? Is it possible that maybe the only way to actually be as smart as a human is to, in effect, experience life as a human? What if, in order to get AI to be as good at generalizing as a human, you have to, in some way, have it live a "human" life? Like, its programmers would have to "adopt" it and treat it like their child until it reaches "adulthood," and then it would have to have a mechanism to physically navigate the world on its own. You know, go off to college, graduate with a degree in philosophy, and then work at a coffee shop while it ponders going to law school. This type of AI still seems a long ways away, and it would have less marketability -- if something has to navigate the world like a human for 25 years (or what have you) before it is fully trained, it's a lot less valuable than one that has humanlike abilities to generalize out of the box.

Anyway, like I said, maybe this is all just bullshit, stoner-talk pablum, but it's fun to think about... Alright, I realize I bit off a tad more than I can chew with this entry, so I'm going to hit the other two items in my next post.

Until next time...

Sunday, May 14, 2023

Entry 662: Mother's Day 2023

Happy Mother's Day, everybody. It's kinda a bullshit holiday,* but it's a nice bullshit holiday, and I find the older I get the more open I am to appreciating the nice bullshit holidays. It's hard to be negative about something whose purpose is to show gratitude to loved ones. When you're young you can afford to be cynical and snarky because you still have the promise of so many genuine moments ahead of you (and it's fun to be that way at that age). But as you get older, you start doing the mortality math, and it's like, I gotta start banking the nice moments now. I mean, it's not like death is right around the corner for me (at least I hope not!), but it's definitely time to start saving -- start saving those memories that mark a life fulfilled. 

*The bullshit holidays are Presidents' Day, Valentine's Day, Mother's Day, Father's Day, Flag Day, Columbus Day, and any holiday starting with "International" or "National" that you didn't even know was a holiday until somebody just told you: You know, today is International Faith Healer Appreciation Day.

Although, this year Mother's Day was a little weird because S just got back yesterday from a week abroad, and she was still a bit tired/jet-lagged and didn't really want to do anything. Plus, since I had just been with the kids all week alone, I think she felt too guilty to totally claim the day for herself. I wouldn't have minded, though, honestly. The way I see it, she does a lot of the day-to-day family work, so the six weeks or so of the year I do alone with the kids just put us at even. Well, maybe now it puts me ahead a bit because I do all of Lil' S2's soccer stuff. But I dunno. It's tough to say, and it doesn't matter, anyway. The workload split of a couple doesn't need to be divided evenly; it just needs to be divided in a way in which both people are happy. And I think S and I mostly got that down.

In other news, I flew through the final season of Better Call Saul, watching all 13 episodes while S was gone. It was really good, and I think they did an admirable job of tying everything up nicely. Aaron Paul and Bryan Cranston guest-starred in an episode about events from Breaking Bad, but it was kinda weird because Paul looks so much older now than he did at the time the original episodes aired. He's almost my age playing somebody in his early 20s. That's tough to pull off even with makeup and a beanie cap pulled down close to the eyes. Cranston, however, pretty much looked the same. I think it's just easier to make a 67-year-old look like a 50-year-old (especially one who's supposed to be sick) than it is to make somebody middle-aged look college-aged. That's a very minor nit to pick with what is otherwise a great show, though.

On the flip side of the coin, a show that isn't great: Ted Lasso. Although it might seem in direct conflict with what I wrote in the first paragraph, I'm finding it hard to hang with that show's saccharine quality this season. Also, I don't understand why they have so many story lines and so many prominent characters (some of whom appear out of and then vanish into thin air) going at once. It feels like they are trying to stretch out the show's success for as long as possible, and whenever you do that the product just gets thinner and thinner. Hey, here's an episode with Rebecca on a boat with a random Dutchman for no reason whatsoever. Also, it's not very funny anymore. (The strings-around-the-penises bit was just weird.) I might be done with that show. If S still wants to watch it with me, then I will, but I'm not going to suggest it anymore. Remember my ABE principle for TV shows: Always Bail Early.

Alright, short entry this week. I'm tired.

Until next time... 

Monday, May 8, 2023

Entry 661: Perfect Number

Coming at you a little later than usual with this entry. I meant to put something up this weekend, but I ran out of time. S is away, so I have the kids to myself, which usually means less free time, but that actually wasn't the reason I didn't post. I wasn't too busy with familial responsibilities; I was too busy with math. Every so often, in this online trivia league I do, they put up a just-for-fun "One-Day Special" called "Pen and Paper Math," in which there are 12 pretty challenging math questions that you must solve using only pen and paper (hence the name). I've never done one before, because the one time I started one, the first problem took me 15 minutes, and I decided that I didn't want to devote three hours to an online trivia quiz.

But recently I've been lamenting the fact that I don't really do math anymore (aside from checking the boys' homework). It's somehow been over 15 years since I last took a class and did pure math. I was a pretty good math student back in the day, but like with anything, your skills atrophy if you don't work them. So, I've been kinda looking for ways to work that math muscle (correcting long-division mistakes just doesn't cut it), and this quiz seemed like a good way to do that. It went up on Friday and wasn't due until Monday, so I thought to myself: S is gone. The boys have some activities out of the house this weekend, so you'll have at least a few hours alone. Just have a look. This is a good opportunity to see if you still got it.

I can report, I still got it -- kinda. I figured out eight of them, but I only correctly answered seven, because I didn't heed the first rule of trivia: RTFQ.* One of them I just got wrong (that's the rust), and three of them I didn't have time to dig into deeply (I didn't have that much free time). So, I didn't ace it or anything, but I didn't do too shabbily, all things considered.

*Read the Fucking Question! The problem was to find the sum of the reciprocals of the divisors of a perfect number N, including N itself. I missed that last part, so I put that the answer was (2N - 1) / N. Had I seen it, I would have added an additional 1 / N, which would have immediately given me the correct answer, 2.
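
(For anybody curious where the 2 comes from -- this is standard number theory, not anything specific to the quiz: a perfect number N is one whose divisors, including N itself, sum to 2N. And d is a divisor of N exactly when N / d is, so summing 1 / d over all the divisors is the same as summing d / N, which gives you 2N / N = 2. You can check it with N = 6: 1/1 + 1/2 + 1/3 + 1/6 = 2.)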

Anyway, I did some other things this weekend too. I'll hit some of them in bullet-point form.

  • The boys have gotten into Settlers of Catan, so we played that a few times. It's fun. It's something we can do together, and it's so much better than everybody staring at their own device. The boys aren't bad either. I mean, I won both games, but they understand how to play and make reasonable moves. A funny thing they do is they both announce their strategies: I'm gonna try to build a settlement on the wood port because I get double wood every time an 8 is rolled. At one point Lil' S1 was laying out his entire master plan, and I said to him, "If you tell us what you're doing, then we're gonna know how to stop you." And he replied, "Yeah, you're right," and then proceeded to tell us anyway.

  •  Lil' S2 went over to a friend's house on Saturday night, so Lil' S1 and I watched a movie together. We were trying unsuccessfully to find something we could both agree on, and then he said, "Hey, what about that movie you told me about once where these teenagers have to go back in time to pass their history class." So, we watched Bill & Ted's Excellent Adventure. Still holds up! Well, except for the requisite '80s-comedy gay slur. The thing is, they are kinda the butt of the joke there. They're dumb-asses -- that's their whole thing. Laughing at people for being ignorant is not the same thing as laughing at the people they are being ignorant toward. I feel like that's an important distinction that a lot of people don't make anymore. With that said, I probably would've fast-forwarded past that joke if I'd remembered it was there.

  • I ran nearly five miles on the treadmill in S's sister's building. Two of those miles I did in under 15 minutes. That's not, like, blazing fast, but it's pretty good for me. I forgot my earbuds, which made the experience so much more painful. Nobody else was in there for much of my run, so I just listened to music through my iPhone speaker. It was better than nothing, but not by much. Once you get used to that immersive sound, it's so hard to enjoy music without it.

  • Lil' S2's soccer team, of which I am the coach, lost again this weekend -- I mean, nobody is supposed to keep score, but it's often obvious who won. After crushing our opponent in the first game, we haven't won since then. It's all just for fun, of course -- they're seven, after all -- and I shouldn't even be talking about wins and losses, but some of the teams have been together already for a few years, and they are all really good now. And it's like Great, I got the kids who read comic books on the sideline and act like they're robots on the field. Our game this weekend is at 9am too. That's a double-whammy -- not only do I have to wake up early (for me), I have to help set up the goals since we're the first game on the field for the day.

Alright, that's all for this post.

Until next time...