This post is, ostensibly, a book review. But I promise you that it will very quickly morph into my unadulterated opinion of artificial intelligence and the future of humankind. You may want to stop reading, especially if you don’t want any spoilers. About humankind, not the book!
Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World by Mo Gawdat is the latest in a plethora of books about AI and the future of our world. Gawdat is an entrepreneur and the former chief business officer at Google X, the not-so-secret research and development lab founded by Google in 2010. If it’s important for you to read Gawdat’s bona fides, feel free to take a look at his Wikipedia page. Let’s just say he has been involved with the development of AI at a high level, and his opinions are worth reading. That said, I’ll say right off the top that I do not agree with the conclusions he reaches in Scary Smart.
The premise of the book is actually quite simple: Gawdat begins by making some assumptions about AI, then leads the reader down a path that he thinks can save us from the machines. His thesis is straightforward, so I’ll summarize it in very basic terms.
Gawdat suggests there are three inevitables:
AI is happening.
AI is becoming way smarter than us humans.
Really bad things will happen.
Yet Gawdat remains optimistic about us humans, but only if we follow his plan to secure our future as a species living happily alongside the machines. What is his plan? In a nutshell, he advises us to treat AI as if it were a child and we were its parents.
If we want the machines of the future to have our best interests in mind, there are three things we need to change: the direction in which we aim the machines, what we teach them, and how we treat them. — Mo Gawdat
If this is what our future as a species depends upon, we are fucked. We’re a species that doesn’t give a rat’s ass about each other, let alone the machines we’re building. We are a species that in large part raises its children to hate those who pray to a different imaginary man in the sky. We’re a species that fights wars over made-up borders and overzealous nationalism. We invented a bomb capable of killing our entire species, and had the nerve to drop it on civilians — twice. As I write this, Donald J. Trump is leading in the polls to return to the White House despite facing 91 felony counts across four criminal indictments.
I could go on. But you get the point. Humans ain’t so good at raising ethical and empathetic offspring. We have been on this planet for a mere 190,000 years (about .004% of its 4.5-billion-year history), and we’re within a few decades of literally making it uninhabitable for our own kind. Humans suck. I promise you it will not take long for our machine overlords to understand this, and that’ll be the end of us.
To Gawdat’s credit, he doesn’t opine that we will raise AI correctly, just that if we want to survive we must. But given that his previous best-selling books were a bunch of self-help drivel about how we can all be happier, I suspect he does think we can in fact turn the ship around before it hits the proverbial iceberg. I simply disagree.
Let’s be clear: there are many ways humankind can come to an end, and any number of them are likely to rid the planet of us horrific little meat puppets. People much smarter than me have suggested some doozies:
Earth can get hit by another killer asteroid
We can wage full-on nuclear or biochemical war
We can get wiped out by a pandemic (Covid came scarily close)
The climate can continue on its current path toward becoming inhospitable to humans
The supervolcano we naively call Yellowstone National Park can erupt, and the resulting ash can cause a volcanic winter
We can try to geoengineer an end to climate change that will backfire into a deadly feedback loop
And of course, we can develop super intelligent machines that will eventually figure out that we’re not worth keeping around
I think any one of the above is more likely to occur than not. The braintrust at Scientific American agrees, suggesting it’s more a matter of when and how, not if. Here’s where we separate the optimists from the pessimists. I don’t know where you stand, but I think it’ll come as no surprise to you, dear reader, that I believe we’re doomed. If I had to pick one of the above scenarios for the end of humankind, I’d lean toward climate change, mostly because we’re well on the way to the point of no return. Maybe we’ve even gone past it. But this post is about AI, so let’s say for argument’s sake that AI reaches general intelligence before we pass the point of no return on climate.
Artificial general intelligence (AGI) is a field of theoretical AI research that attempts to create software with human-like intelligence and the ability to self-teach. The aim is for the software to be able to perform tasks that it is not necessarily trained or developed for.
Some AI experts believe we are still a few decades away from AGI. Others think that, given what we’ve accomplished in just the past few years, the timeline is much more condensed. Either way, we’re going to get there. The question we should be asking is not when we’ll reach AGI, but what it means. As a science fiction fan, I used to look forward to the rise of the machines, but the more I learn and the closer we get, the more I believe AGI will be the beginning of the end for humankind as we know it.
Even experts can’t agree on when we’ll get to AGI, or whether we will at all. In fact, you could go down a serious rabbit hole just Googling the subject. I’ll let you do so at your leisure, but I’d like to simply point out that AI has advanced so rapidly in the past few years that what seemed like science fiction in 2020 is no longer out of our field of vision. In some cases, it is already here.
I’m sure you’re tired of hearing about ChatGPT, but the truth is it’s revolutionary. Developers are just scratching the surface of what it can do. And you can argue that it’s not really intelligent in the way AGI would be, but it’s damn effing close. I’m not going to sit here and say ChatGPT is sentient, but I don’t think AI has to be sentient in the way we think of sentience to destroy us.
And ChatGPT is only one of many AI tools concurrently coming to fruition. I don’t think a large language model is “intelligent” as we think of intelligence, but I do think those arguing about whether or not machines are akin to humans are missing the point: it doesn’t matter. Smarter is smarter, whether the intelligence is made up of tissue and neurons or chips and RAM.
A lot of the anxiety we’ve heard in the conversations about AI thus far is about how it will change the way we work. It already has (ask your kids how they use ChatGPT). Jobs have always been replaced by technology, but to be fair, most of those jobs have been physical in nature. Robots build cars on an assembly line. But if you think AI won’t replace your white-collar job, you haven’t been paying attention. It is already replacing these jobs, and it’s hard to imagine a job it can’t do, or won’t be doing, in the very near future.
Computer coder? Gone. Writer/reporter? Gone. Paralegal? Gone. Financial analyst? Gone. Graphic designer? See ya. Pathologist? Gone. Here is an article about just ten jobs already going away.
Jobs where humans think and write will be gone before the end of this decade. And just when you think large language models like ChatGPT are growing too quickly, along comes Sora. With Sora, all you do is type in a description of something and out pops a video of that thing. Think of the possibilities. Just days after Sora hit the world, filmmaker Tyler Perry put plans to expand his studio on hold. Why? Perry says he has been watching AI very closely, but when he saw Sora he knew it would change the way films are made. So what, you say? Well, in the near future there will be no actors or cameramen or film editors. Hollywood is a $500 billion industry.
One television executive saw Sora and said shows could be made entirely by AI in just a few years’ time. More jobs gone. Take a look at the video below and understand that none of it is “real”; all of it was created simply by prompting an AI.
Loss of jobs doesn’t worry you? How about not knowing whether what you see with your own eyes is real? X melted down a few weeks ago when AI-generated images of a nude Taylor Swift circulated, and before X could take them down they were seen by 45 million people. Does it even matter that they were not “real” nudes? Not to Taylor Swift, I’m sure, and clearly not to 45 million drooling humans.
No biggie, you say? You might think differently when someone you wronged creates a deepfake of you spouting racist vitriol and it goes viral. The thing is, we’re only in the infancy of AI. Two years ago a deepfake looked ridiculous. Today you can’t tell what’s real and what’s not. Imagine what we’ll have in two more years. Exponential growth.
You can’t sit back and suggest the ramifications of AI aren’t concerning. You may think it’s a bunch of hype, but it doesn’t care what you think. AI has been launched into our world and soon it will reach intelligence levels beyond what we can imagine.
Which brings us back to Mo Gawdat. We’ll be okay, he writes, as long as we treat the computers nicely, so that they love us, look past our faults, and create a world where humans have evolved into a new phase of existence. Gawdat begins his book with a little thought experiment, which I’ll paraphrase:
Imagine yourself sitting in the wilderness next to a campfire in the year 2055. Why are you there? Are you staying off the grid to escape the machines, or are you enjoying the outdoors because AI has relieved humans of all our mundane work responsibilities and allowed us the time and freedom to enjoy being in nature?
Gawdat suggests the answer to this question depends on what we, humankind, decide to do today to ensure AI develops in a way that is positive for all of us. Easier said than done. As I reminded you above, we don’t have much success as a species coming together to save the world (unless you count those brave Americans who defeated the aliens in Independence Day).
Gawdat barely even discusses another possibility: what if a non-benevolent AI is unleashed on the world by people with bad motives? China? North Korea? Russia? Meta? Even if, against all odds, Americans come together and launch a “Marshall Plan” to ensure AI plays nicely with us, we could still be ended by bad actors. Shit, we can’t even stop spam from fake Nigerian princes; how are we going to keep AI from undermining the stock market, or melting down our nuclear power plants, or remotely shutting down the president’s pacemaker? I’ve seen enough episodes of Black Mirror to know things don’t end well for the humans in a world run by machines.
I’m just barely scratching the surface of what could go wrong with AI. Job losses. Deepfakes. Algorithmic bias. Weapons automation. Economic meltdown. Loss of privacy. Misinformation and deception. And yes, an existential risk to humankind.
I’m not sure teaching computers to be nice is the answer. It feels really naive to me. But hey, what do I care? I probably only have 30-40 years left on this mortal coil, so why worry? I’ll leave you with a little bit of wisdom from Alvy Singer.