Saturday, May 27, 2023

Entry 664: I'm Sorry, Dave, I'm Afraid I Can't Do That (Part II)

This is a continuation of my last entry, in which I offer some rebuttals to fears over AI. The intent of these posts is more to kick around some ideas I've been considering than to offer accurate predictions of the future. I fully stipulate I could be wrong about everything, with the counter-stipulation that so could anybody else offering takes on such an unpredictable technology.

In the first entry, I broke down AI concerns into three broad categories: 1) massive human job loss; 2) exacerbation of existing social problems (misinformation, racial bias, plagiarism, etc.); 3) sci-fi movie-esque robot-caused apocalypse (WarGames, The Terminator, The Matrix, etc.). I already addressed the first concern, so I will take up the other two in this post.

Exacerbation of Existing Social Problems

The reason I don't expect AI to exacerbate existing social problems is not one of optimism. It's quite the opposite: I think a lot of these problems are already so bad that the deleterious effects of AI can only be marginal, at best (or worst, as it were). Take disinformation as an example. AI is already at the point that it can mimic somebody's speech and make them say things they didn't actually say -- convincingly enough that the human ear can't reliably tell it's a fake -- and if this technology isn't there for video yet, it will be soon. The potential for abuse is obvious, and I think Congress should pass a law making it illegal to believably* impersonate somebody using AI without their permission. There might even be existing fraud or libel laws that already cover this -- I'm not a lawyer -- but even absent these laws I don't think disinformation would get worse, because people simply would not believe things, no matter how realistic they seemed, unless they came from a trusted (to them) source. I think this because that's already where we are with authentic news.

*You need this qualifier to protect parody impersonations. Parody should still be allowed, but it should be stated explicitly that it's parody, and impersonations should be done in such a way that the average person could immediately recognize them as impersonations. This is how humor works, anyway. If Eddie Murphy literally sounded like James Brown when he sang, it would just sound like a cover, and there wouldn't be anything funny about it.

Our disinformation problem isn't really one of disinformation; it's more one of siloed information and tribalism. It's not so much that people are believing straight-up lies; it's that they are only hearing and processing the parts of the story they want to hear and process and avoiding or ignoring the parts they don't. I have countless examples of news stories that I read about from a "liberal" source, and then later read about from a "centrist" source, and the two takeaways are very different, even though neither is saying anything explicitly untrue. I don't want to get too far into this now, because it's not really the point of this post, but I will give one very low-stakes example.

In 2016, HuffPost published an article about how four Paralympians ran a faster 1500m time than the competitors in the 1500 at the Olympics. This is factually accurate, but what the article doesn't mention is that different races have different paces that influence the winning time. Runners don't always run as fast as they can from the get-go, because their opponents will draft off of them from behind, conserving energy, and then make a kick at the end. There's a "you go; no, you go" strategy to mid-distance running that can cause times in any given race to be significantly slower than what the runners would post by themselves. The 2016 Olympic 1500m just happened to be an abnormally slow race because nobody set a fast pace on the first lap. (In fact, at the next Olympics the times in the 1500 were quite a bit faster than the Paralympic records.)

Why would HuffPost omit this key bit of context about racing and make it sound like the Paralympians are actually faster than the Olympians? Probably because they want to demonstrate their anti-ableist bona fides* or what have you. That's their brand, and delivering news that's on-brand is more important to them than delivering news that tells the truth, the whole truth, and nothing but the truth. Most news companies are financially incentivized to operate this way.

*I think it does the opposite and condescends to the Paralympian athletes, who are extremely fast but not Olympian fast. 

Anyway, like I said, this is a very low-stakes example, but it's the type of thing I see with all types of news stories, from the most serious to the most frivolous. And my point is, if we already consume news in such a way that "real" is largely what the consumer wants to believe is real, and "not real" is largely what the consumer doesn't want to believe is real, then how could AI make things significantly worse?

Maybe it could even make things better. A big part of our siloed news problem is due to social media algorithms, and how much they direct traffic to news companies. It's all about engagement, and nuanced takes and good-faith debate are much less engaging than "Here's why you're right and virtuous and your enemies are wrong and evil." But AI offers the promise of healthier algorithms. In effect, we could all use our own personalized AIs -- attuned to meet the goals we set -- to battle the toxic AIs social media companies push on us. Like maybe you could put an AI filter in place, such that if you click on an article to read on your phone, it will provide missing context. For example, in the article linked above, there would be an addendum at the end explaining why the Paralympians ran faster than the Olympians.
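
Just to make the idea concrete, here's a minimal sketch (in Python) of what such a filter might do behind the scenes. Everything in it is hypothetical: call_llm() is a stand-in for whatever AI service you'd actually plug in, and the prompt is just one guess at how you'd ask for missing context.

    # Hypothetical "context filter": take the text of the article the reader
    # tapped on, ask an AI model what important context is missing, and tack
    # the answer on at the end.

    def call_llm(prompt: str) -> str:
        """Placeholder for a real model call (whatever AI service you'd plug in)."""
        raise NotImplementedError("wire this up to an actual AI model")

    def add_missing_context(article_text: str) -> str:
        prompt = (
            "Here is a news article. List any important context the article "
            "omits that a careful reader would want, sticking to "
            "well-established facts. If nothing important is missing, say so.\n\n"
            + article_text
        )
        addendum = call_llm(prompt)
        # The reader sees the article first, the added context second.
        return article_text + "\n\n--- Missing context ---\n" + addendum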

Would people go for this? I dunno, maybe. I suspect few people like to think they're reading largely one-sided news. But that's what the algorithms give us, and few people have the time or the desire to seek out reasonable counterpoints. But if you could set a filter like this in advance, and then not even think about it, people might do it. It might make it at least a little easier to get out of our silos.

One possibly huge problem with this, however, is that people might not trust AI to provide accurate context to them. They will think the AI is the biased party, not the news articles. And we don't even have a consensus view of what "biased" means. For example, if you used AI to determine potential crime hot spots in a city, and then used that data to place police officers around the city in the most efficient way possible, and the results said to put almost all the officers in the predominantly Black neighborhoods, would that be biased -- even supposing the AI had no knowledge of the racial makeup of the city's residents whatsoever? I dunno.
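
To make that scenario a little more concrete, here's a toy sketch of the allocation step, with entirely invented neighborhood names and numbers. A real system would be far more sophisticated, but the basic move is the same: officers go wherever the predicted counts are highest. Note that nothing in the sketch itself references race -- the question above is whether the result would still count as biased.

    # Toy version of the hot-spot scenario (all numbers made up). "Predicted
    # incidents" here is just a count per neighborhood; a real system would get
    # these from some model trained on historical reports.

    predicted_incidents = {
        "Northside": 320,
        "Riverview": 85,
        "Oak Park": 40,
        "Downtown": 155,
    }

    def allocate_officers(predictions: dict[str, int], total_officers: int) -> dict[str, int]:
        total = sum(predictions.values())
        # Proportional allocation: each neighborhood gets a share of officers
        # equal to its share of predicted incidents (rounded down; leftovers
        # ignored for simplicity).
        return {name: total_officers * count // total for name, count in predictions.items()}

    print(allocate_officers(predicted_incidents, 50))
    # -> {'Northside': 26, 'Riverview': 7, 'Oak Park': 3, 'Downtown': 12}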

But what I do know is that, as with disinformation, this is already a big point of contention in society (it's sometimes framed as fairness vs. equality), and so I don't think AI will make it worse. Also, in a callback to AI providing new jobs, I bet DEI in AI is going to become a massive industry.

The last thing I want to touch on in this part of the entry is plagiarism. Well, not so much plagiarism as cheating -- students using AI to write their papers for them. This is surely already happening, and it has a very obvious solution: put more emphasis on in-class testing. Structure the class such that, if a kid turns in sterling work on take-home assignments and awful work in class, they are still going to get a bad grade.

Also, there's probably some level of AI usage that would be deemed acceptable -- like if a student were using AI for research or to come up with ideas for an essay that they will write in their own words, that doesn't sound like cheating to me. So, maybe you could have a lab on campus with computers that only have the allowable AI and don't allow you to read or write things to them, and then students could work on their essays on these computers.

This system would have the added benefits of requiring human proctors (jobs!) to make sure students weren't using other devices and of nudging students to be physically in the presence of other young people, which is a nudge I hear a lot of students these days need. Not to be all "back in my day," but back in my day I didn't have a computer at my house, so I used the CS lab on campus, and I have many fond memories of interactions with the other nerds. Once, during a late-night session, when everybody was punchy and things were getting weird, somebody read me their poem in iambic pentameter about the Rats of NIMH. Well, not the entire thing -- after all, as they repeatedly pointed out, "it's much longer than anything Shakespeare ever wrote."

Sci-fi Movie-esque Robot-caused Apocalypse

I would be much more scared of a robot takeover if anybody could give me a convincing reason why robots would ever want to take us over -- or even why they would want anything at all. AIs are pretty smart now, and I believe they will be amazingly smart relatively soon, but I don't believe they will ever be living. They are made from non-living materials, and we don't know how to make non-living things alive. We aren't even close to knowing how. We're no better at it now than we were 100,000 years ago. Maybe it's not even possible. And if AI is never alive, then I don't see how it does anything that we don't tell it to do. So, if we don't want to be conquered by it, we just don't tell it to conquer us. I think there's this idea we have -- largely from consuming science fiction -- that if something is sufficiently smart it will become conscious and come to life, but there's no evidence that that's how things work. In fact, we know the converse is not true: plant life, for example, has no intelligence to speak of (nothing like humans or AI), but it's very much alive. That's why a scenario like The Last of Us* is much scarier to me than one like The Terminator. Fungi probably would take us over if they could. Thankfully, they can't -- not while we have tough actin' Tinactin to keep us safe.

*I watched the first three episodes -- through the Nick Offerman episode -- but haven't resumed it and probably won't. The stories are really well-written, and I like Pedro Pascal as much as the next guy, but at its heart it's zombie shit, and I can just never really get into zombie shit.

Now, it's certainly possible that we will use AI as a weapon of war to destroy one another, but that's different from AI doing it volitionally. Plus, we already have nuclear bombs for that, so it's not like AI adds much to the threat level. It's even possible AIs entering the battlefield would be a net positive for humanity. If the wars of the future are our robots fighting the other guys' robots, well, that's better than sending soldiers, right?

One way I can see things getting weird, however, is if AI ever becomes AP -- artificial people -- cyborgs. We already have that to some degree -- artificial bones, VNS, cochlear implants, etc. -- but that's people augmenting themselves with technology. You have to start with a person first. Humans are a necessary part of the process. But what if we mounted an AI on an artificial human frame and then added "life" to it -- hormones and proteins and such -- to create an actual artificial person? It seems to me that conscious life might be possible in this scenario. After all, it's possible in us, so why wouldn't it be possible in something else sufficiently like us? If you're spiritual, you might say there is something supernatural conscious beings have -- a soul -- that can only be created by God. But, in the past, whenever we thought things were caused by gods, we later learned that that's not the case. So, why would this be the exception? It seems to me more likely than not that life and consciousness could be achieved artificially by a sufficient emulation of humanness.

And if that's the case, then humans really could become obsolete. But would that be a bad thing? Wouldn't it just be an evolution of human-like life? There used to be Neanderthals, but they don't exist anymore, probably because they were overtaken by humans, and this takeover was, in a sense, partially voluntary. They mated with humans, a superior species more adept at survival, until they effectively disappeared. But, of course, all those Neanderthals were going to die at some point anyway, and so, if the purpose of life is to create more of that life, they achieved that to some degree. You can find Neanderthal DNA in humans today.

Maybe we will someday reach an analogous situation with artificial people. More and more humans will make themselves into cyborgs (already happening), until these cyborgs learn how to create themselves, and plain old humans go extinct the same way Neanderthals did. But the imprint of human life will be in the cyborgs -- in effect they will be super-humans, and some sort of super-human-ness will be needed to survive indefinitely. Assuming we manage to keep Earth habitable and avoid being wiped out by other things (e.g., nuclear war and fungi) for the next five billion years, the sun will eventually expire, swallowing our planet in the process. And right before that is when our cyborg descendants launch themselves off to a welcoming planet in a neighboring solar system and then go into sleep mode for the 100,000 years it takes to get there.

Alright, I gotta go get ready for dinner at some friends' house, and I think I've written enough for now, anyway.

Until next time...

2 comments:

  1. FYI, at the point you're at in The Last of Us show (with the exception of one scene) you've seen 95% of the zombies. I would hate for you to miss the ending, because it's kind of the whole point.

    1. Good to know. I've heard it's quite good, so I'll probably come back to it. But I've gotta catch up on Succession first.
