An interview with Jaan Tallinn, original co-founder of Skype

Paul Kemp: Welcome to another episode of The App Guy Podcast. I am your host, it’s Paul Kemp. This is a very, very special episode. It’s episode #500. We actually made it to #500, can you believe it?


So, 500 episodes of The App Guy Podcast. Now, before I just start celebrating, I have a terrific episode lined up for you. It’s with the co-founder of Skype.

Skype is the reason why this show exists

Skype is the reason why I can get so many guests on the show: everything is done virtually.

Let me introduce Jaan Tallinn, one of the original co-founders of Skype.

We’re going to be talking about what he’s doing at the moment. It’s related to promoting the study of existential risk, specifically relating to AI. Imagine the possibility of the human race not existing in the future. It’s clearly a really important topic.

Jaan, welcome to The App Guy Podcast!

Jaan Tallinn: Thanks for having me.

Paul Kemp: Thanks for coming on this special episode #500. Let’s go straight into it then. How is the human race going to survive with artificial intelligence looming? What’s the big threat to us?

Jaan Tallinn: Well, first of all, AI is just one of many potential existential risks. The big idea is that as we create more and more powerful technologies, the effective radius of influence that new technologies have keeps increasing, while the planet does not. Clearly, you can do way more damage with nuclear weapons than you could with rifles, and nuclear weapons are almost a hundred-year-old technology. In some ways, we got lucky that it’s actually hard to construct nuclear weapons, whereas some of the new stuff that’s on the horizon might actually be fairly easy.

Paul Kemp: It’s interesting that we’re talking about the possibility of exterminating the human race with nuclear weapons at a time that actually seems quite dangerous (the threat of World War III between the U.S. and Russia). So, as it relates to AI, surely this threat is at least a few hundred years in the future? Have you got any guess as to how far into the future?

Jaan Tallinn: I think it’s important to be humble when projecting future timelines. Look at previous breakthrough technologies such as heavier-than-air flight. Heavier-than-air flight looked 500 years off when it actually was 500 years off, but it still looked 500 years off when it was merely a couple of decades away. Some people didn’t even believe that heavier-than-air flight was possible after it had already been done on this planet.

I think it was in the ’30s (last century) when Ernest Rutherford, the really prominent nuclear scientist, said that anyone who is thinking or talking about harnessing nuclear energy is talking moonshine. Then Leo Szilard invented the nuclear chain reaction the next day. When people say that some technology is in principle possible but hundreds of years off, they are being what you might call overconfidently pessimistic, in the sense that they don’t actually have sufficient evidence to make such a confident prediction.

Paul Kemp: This is where you’re testing my history now… I’m pretty sure that a lot of money went into trying to discover flight, and the Wright Brothers were incredibly underfunded. Nevertheless, they had a passionate group of people around them, and they were the ones who got to fly before some of the rich universities and other inventors with lots of money and resources. I’m guessing it’s not just money that will get people to where we are inevitably going. True?

Jaan Tallinn: Exactly. Actually, that’s been more like the rule than the exception. I think the one prominent exception to the rule that you really can’t buy scientific breakthroughs with money was the Manhattan Project: a deliberate, well-funded effort to construct nuclear weapons. Other than that, breakthroughs usually happen the way the Wright Brothers’ did, by stumbling on some new approach that opens up a whole new landscape of possibilities.

Paul Kemp: Where are we now with regards to artificial intelligence? Because I’m sure a lot of people are [reading] and thinking:

Well, I’ve got the Amazon Echo and it just doesn’t work very intelligently

You’ve also got Siri, which fails over and over again. Where are we with AI right now?

Jaan Tallinn: There are many ways to frame the current situation. One important piece of background context is that AI has been coming and going in these fashion cycles called “AI summers” and “AI winters.” Somebody makes a breakthrough, a lot of people pile in, and the topic of AI becomes fashionable. Then at some point the researchers don’t live up to the promise, the investment dies down, and you have what’s called an “AI winter.”

Right now we are in an AI summer.

There is an increasing amount of investment being poured into artificial intelligence and machine learning. One of the reasons is that people have now figured out how to turn marginal improvements in AI into actual profits and revenue. If you improve Google’s AI algorithms, that will show up in Google’s profits and eventually its share price; therefore, there is real economic pressure to improve AI algorithms.

The other way of framing AI is asking:

What kind of approach has been the dominant one?

When AI started, the dominant approach was so-called “symbolic AI”: trying to code up the rules of thinking in terms of symbols, symbol manipulation, and logical operations.

Currently, as almost everyone is aware, the dominant approach is deep learning, where, instead of having very clear-cut symbols, you have these almost intuition-like systems that look at a lot of data and try to develop patterns or intuitions about the data.
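To make that contrast concrete, here is a minimal, hypothetical sketch in Python (an editorial illustration, not something from the interview). The spam task, the messages, and the word-scoring scheme are all invented, and this is nowhere near real deep learning, but it shows the shift from a rule a human writes down to a rule induced from data.

```python
# --- Symbolic AI: a human codes the rule explicitly -------------------------
def symbolic_is_spam(message: str) -> bool:
    return "free money" in message.lower() or "winner" in message.lower()

# --- Learning-based AI: the rule is induced from labelled examples ----------
def train_word_scores(examples):
    """Learn per-word scores from (message, is_spam) pairs."""
    scores = {}
    for message, is_spam in examples:
        for word in message.lower().split():
            scores[word] = scores.get(word, 0) + (1 if is_spam else -1)
    return scores

def learned_is_spam(message: str, scores) -> bool:
    total = sum(scores.get(w, 0) for w in message.lower().split())
    return total > 0

# Invented training data, purely for illustration.
training_data = [
    ("claim your free money now", True),
    ("you are a lucky winner", True),
    ("meeting moved to tuesday", False),
    ("lunch tomorrow?", False),
]

scores = train_word_scores(training_data)
print(symbolic_is_spam("you are a winner"))          # True, by the hand-written rule
print(learned_is_spam("free money inside", scores))  # True, by the learned scores
```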

Paul Kemp: Let’s talk then about the thing that is perhaps the biggest breakthrough we could realize, but also perhaps the biggest danger:

Consciousness

When machines get some form of consciousness, will we be to them like ants are to us? Do you think a lot about consciousness within the framework of artificial intelligence?

Jaan Tallinn: Well, interestingly, almost everything you said is false.

Paul Kemp: Okay, that’s why we’ve got you on the show: to learn the truth. [laughs]

Jaan Tallinn: I think consciousness is a bit of a red herring.

If you think about it, when we ask whether a machine is conscious or not, we are asking what the machine is. However, I’m much more interested in what the machine does. It’s actually pretty plausible, and it seems even likely, that the machines that will dominate humanity will not be conscious; they will just be very competent.

Are we creating machines that will break free of the programs that we gave them?

NO.

This isn’t how machines and programs work. There is no ghost in the computer that could read the instructions and then decide whether to execute them or discard them. No, the software is the machine; the hardware simply implements it.

The thing is that we are going to create really, really competent systems that have very precise world models and are able to foresee the consequences of their own actions very well, much better than humans, even better than humanity in its entirety. Whatever they want will obviously be something that we actually programmed them to want, and they will be much more competent than the humans that created them, so they will get it. It will be almost like the King Midas or genie story: we think we want something, yet if we let the machine loose to do it, we quickly find out that we don’t actually want it after all. However, by then it might be too late.

Paul Kemp: It’s interesting, because this comes on the back of something I learned recently regarding the rise of the financial institution BlackRock. As you know, in our pre-chat I was saying that I was in finance, more specifically in asset management. As it happens, BlackRock gained its prominence and financial success through its use of AI and machines to predict potential risks and accurately assess the economic outlook. That was many years ago. Nevertheless, if we extrapolate this timeline, will we reach a point where we ask ourselves:

Do we really need humanity to run institutions?


Jaan Tallinn: Yes, I think the comparison with various institutions is pretty apt. It may be possible to have an organization do something without any humans actually working for that organization. The question is: do we actually want this to happen? It’s obvious that in a very centralized organization the CEO has ultimate control, but even then the CEO’s hands might be tied. Corporations might exhibit the kinds of characteristics that I describe.

Paul Kemp: You know, I’m getting a lot of my information from films, which I’m sure a lot of people [reading] this are thinking about. I can’t help but think about The Matrix. In that film, there was a discussion about control.

What is control?

What I’m learning from you is that we may have a future where we [humans] lose control because we are a low-level existence compared to these very competent machines. Have you got any views on who will control whom in the future?

Jaan Tallinn: Importantly, the technologies that we have developed so far and the institutions that we have created kind of assume that the controller is external to the system. That’s very starkly visible in the so-called “autonomous systems” discussions.

For example, when we talk about autonomous weapons,

Where is the human? Is he in the loop, on the loop or out of the loop?

So far, with whatever tools we have developed, we assume that a human is the one who uses the tool and controls it. However, once we start talking about autonomous systems, especially systems that are able to foresee the consequences of being turned off, that assumption breaks down: the control mechanisms assume that the system being controlled is not aware of the control mechanism itself.

So if there’s a button that we use to turn off the machine, yet the machine is aware of that button, can predict the consequences of being turned off, and can weigh them against its goal fulfillment, then it will try to prevent being turned off. That’s not really about anything human, like a survival instinct; it’s just a very natural consequence of goal-directed behavior.
For example, if there’s a robot whose goal is to fetch coffee, yet there’s a huge hole in the floor, of course it’s going to go around it. It’s not because it’s afraid of falling into that hole, but because it knows that if it falls into that hole, it will not be able to fetch that coffee.

It really frustrates me how people think that in order to go around that hole you have to have survival instinct.

NO, it’s just basic goal-directed behavior. You avoid obstacles when you want to get to the goal.

It’s a similar thing with an off switch: if your goal is to fetch coffee, and you can see that being switched off means you will not be able to fetch the coffee, then you will avoid being switched off.
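Here is a minimal sketch of that point (a hypothetical, editorial illustration, not something from the interview). The agent’s objective below only rewards fetching the coffee; there is no survival term at all, yet plain maximisation over its toy world model still routes around both the hole and the off switch, because either outcome ends with no coffee.

```python
# Hypothetical toy example: avoiding shutdown as a side effect of a goal.
ACTIONS = ["path_over_hole", "path_past_off_switch", "long_safe_path"]

def simulate(action):
    """Toy world model: returns (still_running, coffee_fetched)."""
    if action == "path_over_hole":
        return (False, False)   # falls in, never reaches the coffee
    if action == "path_past_off_switch":
        return (False, False)   # gets switched off, never reaches the coffee
    return (True, True)         # slower route, but the coffee is fetched

def utility(outcome):
    """Goal-directed objective: 1 if coffee is fetched, else 0. No survival bonus."""
    _still_running, coffee_fetched = outcome
    return 1.0 if coffee_fetched else 0.0

# Choosing the best predicted outcome avoids the hole and the off switch
# without any notion of self-preservation in the objective.
best_action = max(ACTIONS, key=lambda a: utility(simulate(a)))
print(best_action)  # -> "long_safe_path"
```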

Paul Kemp: This is really fascinating. What I’m learning from you is that it’s all about the goal of these machines, these new artificial intelligence robots. They’re going to show very different characteristics to humans.

However, I’m guessing that there’s good and bad that will come from artificial intelligence? You talked about nuclear technology and the good versus the bad that comes from it.

Is it just the same with artificial intelligence? Will there be good versus bad?

Jaan Tallinn: Yes, absolutely. One of the big reasons why I’m focusing on AI (and not the other risks such as new kinds of biological technologies) is that it has this huge upside. In fact,

if we get the AI right, then basically we’ll be able to address all the other existential risks

Whereas if we just fix the nanotechnology, nuclear, or synthetic biology risks, that will not help us directly with the AI risk.

As far as I know, Elon Musk became interested in AI as a direct consequence of realizing:

Wait a minute! Becoming a multi-planetary species will help us against all the existential risks — except the AI risk

…because of course, if we are able to travel interplanetary distances, so will AI.

Paul Kemp: It’s fascinating. You’re in Elon’s group. I know he’s put a pact together to try to prevent the extinction of the human species. Are you involved in this?

Jaan Tallinn: When it comes to AI, as far as I know, there are two initiatives that Elon is supporting.

One is supporting AI safety research through what’s known as the Future of Life Institute, an institute at MIT where I’m a co-founder. We’ve been kind of handing out Elon’s money: he donated 10 million dollars in research grants to different AI researchers.

The other initiative that Elon Musk started earlier this year is OpenAI, a group in San Francisco that is basically trying to do AI development, but they have also hired many excellent safety people. In fact, I think OpenAI’s safety team is right now the best in the world. They focus on developing AI outside of a commercial context. Whether that’s a good idea or not, I have some reservations. Nevertheless, I certainly acknowledge that the group of people they have over there is really excellent.

Paul Kemp: You’ve obviously seen the benefits of technology from building something that’s commercially focused — Skype. I’m wondering, for the appster tribe who follows me and for the readers who have quit corporate jobs — how is it best to get involved in startups, tech, and AI?

Jaan Tallinn: Regardless of whether you’re interested in AI research (increasing the competence of machines) or AI safety research (improving value alignment), I think a good place to start is just to get a better understanding of AI.

There are many excellent books and open courses out there. I think a new course shows up on Hacker News every week or couple of weeks, actually.

If you’re specifically interested in AI safety, there is a website called 80000hours.org that has a career guide for so-called “effective altruists”, including a guide for potential AI safety researchers. They have also published what they call an AI Safety Syllabus: books to read, videos to watch, and so forth.

Paul Kemp: Yes, because I’m also thinking of the workforce we may have in the future. In fact, who would have predicted 15 years ago that people would be going into careers such as Google AdWords, pay-per-click advertising, and SEO? Getting involved in AI is probably a good bet on where the future workforce may be heading — right?

Jaan Tallinn: That’s true… or, more generally, programming. It’s something that is harder and harder to avoid. Having some idea of how computers work and what software can do is a more and more important component of jobs, regardless of what domain you are in.

Paul Kemp: As we wrap this up, Jaan, I often think about the human race. We have to realize that technology is not the solution to everything; it could actually wipe us out instead of helping solve our problems in the world. In your opinion, how might the human species become extinct because of AI, and how can we prevent it?


Jaan Tallinn: Well, there are many ways the human species could go extinct. The most mundane is that every ten million years or so a large enough or, more importantly, a fast enough rock comes along to cause a wipeout of many species, potentially including humans. It has happened here before, and if you just wait long enough, it will happen again. But it’s more likely, indeed, that there could be catastrophic scenarios from 21st-century technologies, or perhaps even 20th-century technologies such as nuclear weapons.

The reason I’m focused on AI is, for one, that it has all this positive upside. The way I describe it is that you can frame AI research as humanity’s search for the best possible future, with an important caveat: we are going to irrevocably commit to the first result that we find from that search. The other reason for thinking about AI is that AI is sort of a meta-technology; it’s a technology that can develop its own technologies. Whatever concerns we have about biotechnology, nanotechnology, terraforming and so on and so forth: if we don’t make AI concerned about the same constraints that people would like to exercise on our own technology development, then once it starts developing its own technology, we’re just no longer going to have an environment that is able to sustain humanity.

Paul Kemp: It’s interesting. I think it definitely highlights how we need to think about this subject. Often it feels very science fiction, something pushed far into the future, but we don’t know when there could be a big breakthrough, so the safety work that’s going on is obviously very important.

Thank you very much for coming on this very special episode, episode #500 of The App Guy Podcast. I’ll be putting full show notes on the website — it’s theappguy.co.

Jaan, thanks very much for speaking with us today and leaving us with the possibility of us all potentially becoming extinct.

Jaan Tallinn: Alright, thank you.

Follow me [Paul Kemp] on Twitter or join my Slack community. Subscribe to The App Guy Podcast for 500 episodes with other successful founders.