“Where is everyone?”
No, that’s not me looking for people to harass into reading my blog. Those are the words of Enrico Fermi. In case you haven’t heard of him, Fermi was a really famous physicist who was always invited to all the right parties. Oh, he was also pretty much the father of the nuclear age. (I’m not sure what physicist parties are like. Perhaps Kevin can touch on this in his next blog?).
Fermi’s quote has to do with what’s known as the Fermi Paradox. Basically, given that there are tens of billions of planets in our galaxy alone, even if intelligent life is a relatively rare occurrence, we should see evidence of other civilizations all over the place: trashy alien reality television, weird sporting events, unidentifiable fast food wrappers, etc. But to date, nothing, nada, zilch, zippo; thus the paradox.
Ok. I know you’re probably thinking: “But what does this have to do with AI?”
I’m getting there. The road to the truth is sometimes long and bendy.
There are a lot of theories about why we don’t see any other signs of life in the universe. It could be that galactic society took one look at us and then strung up the cosmic equivalent of that yellow caution tape around our solar system. Hopefully that’s not the case.
The explanation I’d prefer to focus on today is usually called the Great Filter. It says that much like the filter that prevents coffee grounds from ruining the creamy goodness of your triple latte mochaccino, there’s a filter that prevents civilizations from reaching the point where they could accomplish troublesome stuff that faraway astronomers could see through their telescopes and remark, “Hey, that’s weird.”
So if there is some point in the evolution of civilizations where they all tend to wipe themselves out, the question becomes: Are we Earthlings past that point already or are we approaching it?
“That’s all very interesting, but what does it have to do with Artificial…”
I’ll get there. Bendy, remember?
I watch a lot of the History Channel, and apparently in the early days of humankind, people sought to harness the power of fire so it could be a useful tool rather than something that terrorized them. However, many of our ancient ancestors were concerned that fire was too dangerous, and the tribal elders reminded everyone of the consequences of the pointy stick fiasco (I’m paraphrasing).
“Yes, but I still don’t see…” Bennnn-deeeeee.
And continuing forward through history, every new technological advance has brought with it the worry that we might filter ourselves out before ever becoming the first galactic civilization. So even though the concern has been there from fire, to gunpowder, to cloning dinosaur DNA as part of a new amusement park venture (that last example may have been from a different channel), I don’t think any of these compare to what is probably the actual filter every civilization has to overcome.
Yes, now I’m talking about AI.
Even though we’re probably many years away from a super-intelligent AI, imagine for a minute an AI with the same intelligence as a person. That would still be a pretty big deal because digital circuits operate about a million times faster than organic ones (in other words, us). So pretend you could put this human-level AI on a problem for a week. That would be the same as a person working on the problem for about 20,000 years, or 20 people for 1,000 years, or 50 people for…
“We get the idea.”
Now imagine that whoever gets this AI first has only a six-month head start on their competition. That’s the equivalent of half a million years in human time. What would another country do if they thought we had this capability or were even close to it? I’m not sure, but I don’t think I’ll be moving to Silicon Valley anytime soon.
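The back-of-the-envelope arithmetic above can be sketched out in a few lines. Note that the million-fold speedup is this post’s own rough assumption, not a measured figure:

```python
# Subjective-time arithmetic, assuming (as the post does) that a digital
# mind runs roughly 1,000,000x faster than a human brain.
SPEEDUP = 1_000_000  # assumed speed advantage; a ballpark, not a measurement

def human_equivalent_years(wall_clock_days: float) -> float:
    """Convert real elapsed time into equivalent years of human-speed thought."""
    return wall_clock_days * SPEEDUP / 365.25

one_week = human_equivalent_years(7)          # one week of AI time
head_start = human_equivalent_years(182.5)    # a six-month head start

print(f"One week   ~ {one_week:,.0f} human-years")    # roughly 19,000
print(f"Six months ~ {head_start:,.0f} human-years")  # roughly 500,000
```

So a week really does come out near the post’s “about 20,000 years” figure, and six months lands right around half a million human-years.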
This was the idea behind Elon Musk co-founding the OpenAI initiative. Given that he’s been very vocal about the dangers of AI in the past, at first it seems odd that Elon* would help launch an organization trying to develop an intelligent AI. (* I like to think Elon is the kind of person who would let me call him by his first name if we ever met; plus, “Mr. Musk” sounds like a really cheap aftershave and I’m not sure I could say it with a straight face.)
Elon’s idea was that by being the first to develop an intelligent AI, OpenAI would have enough of a head start to steer the direction of AI research and development and make sure it’s used to benefit humankind. Given the history of filter worries mentioned above, that plan kind of makes sense.
Hopefully, whoever gets there first—Elon or others—will have humanity’s best interests in mind and we can maybe be the first ones to make it past the filter point.
Then we’ll be the ones putting up the caution tape.
Well, that’s it for this episode. As always, if you have any questions about AI or Machine Learning, or need some physics party plans (I know a guy), drop me a line.