Mo Gawdat’s Moonshot for Humanity: Interview Part II


ITW: How does technology factor into the happiness equation? What do happiness and technology have to do with each other?

MG: The attempt to achieve happiness at the global scale is nothing new. Most spiritual teachings were actually trying to get us to find that peace inside us. We’ve never had a time where we can share as much as we can share today thanks to technology. We’ve never had a time where the wisdom of the crowds truly is the governing factor of the success of humanity. Look back at the Arab Spring or so many of the political movements that happened over the last few years that were triggered by people driving the event. People now are more informed. They have the power and the knowledge to make decisions that affect everything. This is one side of the coin.

The other side of the coin is we’ve developed technology since the dawn of humanity. We’ve chipped stones. We’ve used fire all the way to creating iPhones and iPads and beautiful search engines. It’s incredible. Each of those technologies was an extension of our capabilities. When you needed to fish and you didn’t have the ability to reach into the water and fish, you invented the fishing rod that extended the human ability. This is the first time that we’re extending human ability beyond human capability.

Technology has been a double-edged sword since we managed to chip stones and build the first weapon. You could use that to hunt and help your tribe survive, or you could use it to harm yourself and cut your hand. It’s the same. It will continue to be the same. The difference in how technology impacts our life depends on the way we develop it but also the way we use it. If I give you a tool and you use the tool the right way, hopefully, the tool will provide you with a better life. Of course, if you develop a tool that’s supposed to kill and destroy, there is really no good way to use it other than, unfortunately, the way it was designed.

In reality, we could look back at technology and say, “Technology hasn’t improved our lives, our world today is not better.” That would be a false statement. (…) The truth is no, we do have a better life today as a result of technology. Technology has improved our lives. But it would be stupid to assume that there is no downside to it. We need to be realists. While social media has improved my ability to reach my daughter who lives in Canada anytime I want, it also led to social media addiction, to all of the obsession about productivity that we have in our work environment today, to all of the side effects of technology, if you want, that make us more and more unhappy. The same technology that we can use to develop a customized medicine for a specific medical case now that we understand DNA better can be used to customize a virus that can be used as a bioweapon.

We need to design and use technology in a way that makes our life better. There has never been a more pivotal moment in the history of mankind where we need to make those distinctions because specifically artificial intelligence and the way this is moving forward can build a utopia, or can truly destroy our world.

Photo Credit: Siyan Ren

This truly is why this one billion happy mission is at the most pivotal time of humanity, because those machines are going to be smarter than we are. They will be learning. They will be consuming. They’ll be absorbing what we are putting out there. They’re going to be looking at what we believe is the right way to lead, what we define as the intelligent way to go through life. They’re going to be replicating that. Is that really what we want them to do? The only way to get those machines to be not only intelligent but to also have the right value set is that we start to portray that right value set today.

ITW: But aren’t we far from that AI scenario where machines outsmart us?

MG: Some of the technologies we’re developing today are so pivotal they’re going to change our humanity forever. From an insider’s view, I will tell you, technologies like artificial intelligence are here. They’re already performing in your day-to-day life. The ads that are being served to you are artificially intelligent. The security of some of our airports is artificially intelligent. And those machines are developing partial intelligence that far surpasses our human intelligence. By 2029, it is predicted that the intelligence of the machines we’re building is going to surpass our own human intelligence. And it will continue from there. It could be as far as a billion times more intelligent by 2049.

So this might actually be the very last technology we ever invent, because from then onwards, they’re the ones that will be doing the inventing. Over the next 15 or 20 years, this is going to develop a computer that is much smarter than all of us. And this is irreversible. It could end up one of two ways.

If we really build intelligence that is superior to our intelligence, we will be able to solve problems we’ve never been able to solve as humans. But that’s only if that intelligence has the same interest we have in solving problems in our own benefit, in our own favor. If, however, this intelligence surpasses ours and does not have our best interest in mind, sooner or later, that intelligence will decide to prioritize its own interest, not ours. The difference between those two outcomes is really rarely ever discussed. It all comes down to how those machines are being taught, how this intelligence is being developed.

This new wave of technology is not about a developer sitting down to write a few lines of code to tell the computer exactly what it needs to do. That’s no longer the way we develop artificial intelligence or any kind of programming, if you will. This is about computers learning on their own, just like an infant learns.

ITW: How do these machines learn exactly?

MG: They learn from patterns, from information. (…) They’re learning by observing. And they’re building patterns from that, just like an 18-month-old infant. They’re looking at all of the knowledge that’s out there in the world, or at least the knowledge we allow them to see. They’re scouting billions of documents, millions of posts of what the world really looks like, and developing intelligence based on that. And what are they finding? What are they observing? They’re observing a modern world that’s full of illusions, that’s full of greed, that’s full of disregard for other species, that’s full of obsession with the wrong value system. It’s so full of obsession with ego, showing off and success and more money. They’re looking at a history of war and violence and destruction. That’s what we have out there today.
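To make the idea concrete: the kind of learning described here is not a programmer writing explicit rules, but a system extracting regularities from labeled examples. The sketch below is a deliberately tiny, hypothetical illustration (not Watson’s or any real system’s method): it is only ever shown example sentences with a label, counts which words co-occur with which label, and then classifies new text from those observed patterns alone.

```python
from collections import Counter

def train(examples):
    """Build per-label word counts from (text, label) pairs.
    The 'patterns' here are simply word frequencies observed per label --
    nobody ever tells the program what the labels mean."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Score each label by how often the text's words were seen
    under that label, and return the best-matching label."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# A toy corpus: the machine is never given a definition of "happy"
# or "violent"; it only observes examples and extracts regularities.
examples = [
    ("what a joyful wonderful day", "happy"),
    ("we laughed and smiled together", "happy"),
    ("the war brought destruction and fear", "violent"),
    ("they fought and destroyed the village", "violent"),
]

model = train(examples)
print(classify(model, "a wonderful day of smiles"))    # → happy
print(classify(model, "fear and destruction of war"))  # → violent
```

The point of the sketch is the one Gawdat makes: whatever we put into the examples is exactly what the system learns. Feed it a corpus of greed and violence, and greed and violence are the patterns it will reproduce.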

We’re teaching these children our own value system.

Photo credit: Gerome Viavant

You take, for example, Watson. Watson is IBM’s supercomputer, which happens to be the world’s champion of Jeopardy, a highly linguistic game that involves knowledge but also an interesting understanding of language. We’re no longer the best at language. Watson is not the best because we taught Watson how to read or how to understand language, but because Watson went out on the internet and read millions, tens of millions of documents and noticed patterns between them that allowed the computer to construct a view of language that’s different than ours, that’s better than ours, that’s smarter than ours, or at least that matches ours.

ITW: But how can we, the general public, “impose” on artificial machines, programmed by codes and algorithms, a value set, a “right” way of thinking, knowing they are, for now, devoid of emotional intelligence…?

MG: That is a very interesting part of artificial intelligence that people seem to miss. Because we are so influenced, so convinced of the value of logic, we think that the machines are not going to develop emotional intelligence. This is not very difficult for a machine to learn. (…) Like, you don’t go to a child and say, “By the way, elation is when you find someone jumping up and down.” A child understands that on his own and goes, “Ah, this is a pattern I can recognize.” Then, finally, he hears someone calling it elation. Then, suddenly, the whole concept is understood. A child does not recognize fear because we give them a document and say, “Okay. Here is what fear is. If you notice that, this is how it goes.” A child understands fear because it notices patterns of fear. We learn violence from all of the video games, all of the movies we’ve been shown. Show those to a child, and he will understand what violence is. (…)

ITW: So, let’s say these machines will, one day, develop their own emotional intelligence, still how do we impose that “right” set of values onto them?

MG: Think back to your children growing up. Think back to yourself growing up and how you became who you are. You became who you are because those who took care of you as a child imprinted certain value systems in you. They showed you patterns of behavior that you associated with as the right patterns to follow. If you were born in an environment where the right thing was to grow a beard and blow somebody up, you might well think this was the right thing to do. If you grew up in an environment where it was okay to cut trees, you thought that this was the right thing to do. You went on and harmed the environment. If you grew up in an environment, on the other hand, that basically said that we should all co-exist and live better …

The beauty of our world of the Internet today is that we fill it with more content every year than all of the content we’ve created since the dawn of humanity. We can actually dilute the whole value system that we’ve developed in the last 75 years in just a few years.

It’s amazing really because this is the very first time that the power has moved to the people. Every voice counts. We, the crowds, we can build our future. Every one of us, by what we post online, shapes the world ever so slightly. If every one of us goes to the Internet and says “I will not make another Facebook post to make others feel envy or greed. Instead I will put a post out there just to make others happy. I’m going to tell the world that my priority is to be happy and I’m going to have the compassion inside me to make others happy.” If that’s the value system we put out there, that’s the value system our machines will learn (…).

The only way we can make our artificially intelligent infants values-driven is that we start to live those values ourselves.

To be continued next week with Part III and the conclusion… Thank you

April 27th, 2018
