#onebillionhappy – An Urgent Call for Humanity

‘In 2014 I lost my son as the result of a medical error during a routine surgical procedure.’

by Mo Gawdat

I’ve been fortunate: my corporate career taught me a great deal, and it has also given me the ability to dedicate more and more of my time and resources to making my vision a reality. My vision is to make one billion people happy. It’s the biggest mission I’ve ever assumed, and it might actually be one of the biggest missions that humanity needs today.

I spent the last 11 years of my life working at Google, opening almost half of Google’s offices globally. Then I moved to Google[x], the innovation lab, where I was chief business officer, working among an elite team of engineers, until February 2018. We worked with artificial intelligence – AI – and machine learning. I believe that in the next 15 years or so, this artificial intelligence will surpass human intelligence. Now is the time for us to shape the lessons we teach these mechanical babies.

In 2014 I lost my son as the result of a medical error during a routine surgical procedure. It triggered me to do the most unexpected thing, which was to write a book about happiness. The book, Solve for Happy: Engineer Your Path to Joy, was released in March 2017. It was my attempt to honor my son’s way of living, and it is really the core of the onebillionhappy mission. As an engineer, I base my happiness model on the theory that happiness is your default state, and that you can return to that state when you make the choice to do so.

I left Google to dedicate the rest of my life to #onebillionhappy. I call it my personal moonshot, and I am happy to share it with you in this video.

May 14th, 2018

One Comment

  1. Jed Diamond, June 19, 2018 at 5:41 pm

    Mo, I’m with you all the way. As you know, my focus for the last 50 years has been helping men, and the women who love them, move from being angry and depressed to being happy and joyous. With a billion happy people, there are no problems we can’t solve; with a billion more angry and depressed people, there will be more and more problems created that can’t be solved. I agree with your assessment of AI and the necessity of giving these future intelligences a better body of happy, joyful experience on which to model their decisions. Here’s what we can look forward to if we don’t take charge (a minimal code sketch of the data-bias point follows the excerpt below).

    MIT Creates World’s First Psychopath AI by Only Feeding It Data from Reddit – (Complex – June 7, 2018)
    The newest artificial intelligence creation of MIT researchers, named Norman, was deliberately trained on data from “the darkest corners of Reddit,” and now all it thinks about is murder. It apparently wasn’t enough to name him after the creepy protagonist of Hitchcock’s Psycho; they had to go and create the “world’s first psychopath AI.” In order to test Norman’s psychological status after his Reddit binge, the researchers used Rorschach inkblots, which they claim are “used to detect underlying thought disorders.” Norman consistently saw horrifying and violent images in 10 different inkblots where a standard AI saw much more benign ones. For example: a standard AI saw a “black and white photo of a small bird” where Norman saw a “man gets pulled into a dough machine.” Similarly, a standard AI saw a “photo of a baseball glove” in the same inkblot where Norman saw a “man murdered by machine gun in broad daylight.” In another, a standard AI saw a “person holding an umbrella in the air” and Norman saw a “man shot dead in front of his screaming wife.” There is a larger point to this experiment. The MIT researchers were trying to prove that “the data that is used to teach a machine learning algorithm can significantly influence its behavior,” and therefore, if you use an algorithm to make any important decisions, the data you feed it matters. “When people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the researchers wrote. As The Verge notes, Norman is only the extreme version of something that could have equally horrifying effects but be much easier to imagine happening: “What if you’re not white and a piece of software predicts you’ll commit a crime because of that?”
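To make the researchers’ point concrete, here is a minimal, hypothetical sketch in Python – not MIT’s actual Norman model, which was a deep image-captioning network – showing how the same learning algorithm, fed differently biased data, produces very different behavior. The corpora, candidate captions, and scoring scheme are all invented for illustration.

```python
# A toy illustration of "biased data in, biased behavior out" -- NOT the
# actual Norman system. The corpora and captions below are invented.
from collections import Counter

STOPWORDS = {"a", "an", "the", "in", "on", "of", "by"}

def tokens(caption):
    """Lowercase a caption and drop common stopwords."""
    return [w for w in caption.lower().split() if w not in STOPWORDS]

def train(corpus):
    """'Train' a model by counting content words in a caption corpus."""
    counts = Counter()
    for caption in corpus:
        counts.update(tokens(caption))
    return counts

def score(model, caption):
    """Score a candidate caption by how familiar its words are to the model."""
    total = sum(model.values()) or 1
    return sum(model[w] for w in tokens(caption)) / total

# Identical algorithm, two differently biased training sets.
benign_corpus = [
    "a small bird perched on a branch",
    "a person holding an umbrella in the air",
    "a photo of a baseball glove",
]
dark_corpus = [
    "a man pulled into a machine",
    "a man shot in front of his wife",
    "a murder in broad daylight",
]

# The same ambiguous "inkblot", described by two candidate captions.
candidates = ["a bird in the air", "a man shot by a machine"]

for name, corpus in [("benign", benign_corpus), ("dark", dark_corpus)]:
    model = train(corpus)
    best = max(candidates, key=lambda c: score(model, c))
    print(f"{name} model captions the inkblot as: {best!r}")
# benign model captions the inkblot as: 'a bird in the air'
# dark model captions the inkblot as: 'a man shot by a machine'
```

The “learning” here is deliberately trivial, but the takeaway matches the researchers’ quote above: the code is identical in both runs; only the data differs, and so does the “worldview” that comes out.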
