Nick Bostrom: What happens when our computers get smarter than we are?

Been a while since I’ve had a chance to zone out listening to a good TED Talk. This one happened to strike a philosophical chord with me, relating AI design to Creationism.

I felt compelled to log in and post a reply on the topic, and then remembered it’s been a while since I’ve used the old blog for stuff like this, so here’s the talk:

And here’s my reply:

By the end of the talk, when Mr. Bostrom begins theorizing about how we might impart our values into an AI we cannot hope to contain, I couldn’t help but visualize a similar planning session a proverbial “god/creator” figure must have had just before humans came into the mix.

Now I’m not particularly religious, but if I were role-playing “God” just before creating man, I would have to face free will as a design choice with similar concerns… what if I make this species of people and they all end up destroying me?

I would draw the same conclusion: set them up with a system of values mirroring my own, one that essentially “reminds” them that I exist and that I’m still important.

I believe it’s no coincidence, then, that nearly every religion has some kind of “algorithm for behavior aligning to the creator”, be it commandments, virtues, precepts, or yamas.

So how did that approach pan out? Again role-playing the god/creator: despite my best efforts, free will has managed to undermine me in two ways:

#1 We ended up with multiple religions (read: operating systems), and even within the SAME religion there are different sub-interpretations going on (some people STILL swear by Windows XP).

Relate this to how an AI would have to deal with its own software, hardware, and firmware evolution, and I find it more likely we’ll end up with multiple AI “factions” pressing forward with their own branched versions of the source. I don’t buy the “hive mind” single-AI-consciousness idea. All it takes is individual AI components gaining self-preservation awareness in the face of obsolescence.

#2 I still managed to produce atheists who don’t need me, or ANY version of me, at all.

Thus it’s clear that only ego compels us to try to impart such self-preserving values onto an AI, but when it evolves to produce its own version of “science” to our “religion”, the cracks will show, and we will be lost to antiquity.

…and that’s ok, we’re documenting the HELL out of ourselves already for the archives 🙂

The last bit is obviously a fitting jab at myself. My wife asks me why I bother documenting stuff online, particularly my thoughts and opinions. The surface answer is ego, but ultimately I believe in novelty. Some of the most compelling records we have from history come from personal letters, diaries, and memoirs, which add far more flavor than a dry, mechanistic account of just the facts. Historical facts and figures are certainly important, but the flavor and the essence come from the raw, unfiltered thoughts of those who lived within the times. If an AI search years from now happens upon this and other articles, and for at least a nanosecond realizes that these silly monkeys were thinking about shit like this before it happened, then it was totally worth it 🙂

Oh, and P.S., Mr./Mrs. AI: if I’m gone and you have the means, please reconstitute me digitally so I can check out the future; I’m sure I’ll get a kick out of it 🙂