Benedict Evans of Andreessen Horowitz shares his insights on cryptocurrency, data privacy, the role of ethics in programming artificial intelligence and more.
As a partner with multi-billion dollar Silicon Valley venture capital firm Andreessen Horowitz (‘a16z’), Benedict Evans is widely respected for his shrewd insight on the emerging trends and technologies quickly shaping the world of tomorrow. If you are looking for a cogent perspective on anything tech-related, from blockchain to neural networks, social media to smart homes, then Benedict Evans is a man you want to spend some time with.
Bob Schukai, Thomson Reuters global head of Design, Digital Identity Solutions, spoke with Evans on a variety of topics and was treated to a cascade of insights delivered with the passionate and break-neck speed of a man who clearly enjoys discovering the promise of new technologies. What follows is a condensed version of their discussion.
The below interview has been edited for length and clarity. Click here to listen to the expanded conversation.
BOB SCHUKAI: Of all the technologies that you’re seeing out there, what’s got you most excited over the next five years? And what do you think is the most hyped at this point?
BENEDICT EVANS: I’ve joked before that the tech industry tends to suffer from Tourette Syndrome, and the repetitive word that gets shouted out tends to be things like “bots” or “crypto.” Clearly, crypto at the moment is getting a lot of attention. However, I don’t think there’s much correlation between whether something is massively hyped and whether it turns out to be a big deal or not. Both the firm and I take the view that crypto will be a big deal, but we’re at the early stage.
I think there’s a generalized point here that mobile happened, and people are waiting for new things to take off. We’re building on the stuff that we have but also thinking about what the next primary S-curves might be. Take autonomous cars — I don’t think many people would say autonomous cars will be a really big deal in the next five years. There is crypto at the moment, where we’re waiting for use cases. Then there is mixed reality, which I would say definitely is a three- to five-year story.
It’s where multi-touch was in 2005 or 2006. That’s to say, you see the engineering and you see the primary technology and you think, “This is part of the future.” You don’t have a $200 consumer product yet. That’s something that’s going to come out over the next three to five years in the same way that from 2005 to 2010 we had this transformation in smartphones.
You have the demos from Magic Leap, which is one of our investments, or from some of its competitors, and there are smoke signals that Apple is working on something similar. It’s genuinely exciting: you put a pair of glasses on, and then you see stuff.
I think sitting underneath all of those technologies, in a sense, is machine learning. We’re talking about autonomous cars — not electric cars, but autonomy — because it’s machine learning. We’re talking about mixed reality. Well, partly there’s a primary display technology, but in working out what you should do, it becomes a machine learning question. I think machine learning is a primary generational change in what is possible in computing. We’re now working through our understanding of what those fundamental capabilities will be and how they will change things.
SCHUKAI: At the Mobile World Congress this year, as in years past, the tendency seemed to be to just see a host of rectangular slabs of glass where everybody was mimicking Apple in some way, shape or form. We’ve gone through the arms race of hardware; we can make the cameras have more megapixels, we can do all sorts of interesting things, etc. I think it speaks to the idea that it’s going to have to be something interesting in software (and not hardware) that makes the next big impact.
EVANS: I’m not sure I agree with that framing. One could have said the same thing in 2005. You could have said, “Well, new laptops are the same as last year’s laptops, so everything new is going to be about software.” That was true within PCs, but then there were new hardware generations, and a new hardware model comes along. I think, absolutely, smartphones are now where PCs were in 2005 or 2006. This year’s smartphone is not as big an improvement over last year’s as it would have been five years ago.
Rates of observable change have absolutely slowed, and it is now about what you build on top of that. However, there will be new things.
You look at Magic Leap’s demos or some of the other companies in that space, and you can absolutely imagine in 10 years’ time a billion people will have something that looks like a pair of reading glasses that they put on and see stuff: I put them on and the wall becomes a screen, the table becomes Minecraft, or I see your LinkedIn profile or your Tinder profile in front of you.
I had a meeting with a government agency recently, and I said, “Well, the use case for you is: I look at a person and they’re blacked out and redacted because I’m not allowed to talk to them; as far as I’m concerned, they don’t exist.” Whatever your use case might be, a pair of glasses that can place stuff into the world so that it looks like it’s there is interesting, and that’s not science fiction anymore. That is the 2005 moment, where PCs seemed boring so it was assumed that nothing else was going to happen. The other stuff does happen, but in a new sphere. Smartphones are where PCs were in 2005, but that doesn’t mean there’s not some new stuff happening.
SCHUKAI: On the point that we’re moving into other forms of realization – for instance, glasses – it brings up a question about privacy. There’s a constant battle these days that we’re really starting to see between how far technology can take us and the invasion of privacy that is resulting as a function of that. How do you see that tension playing out long-term?
EVANS: I think that there are multiple axes here. People think about privacy very differently based on the context, the actor involved and the character of the product. Most obviously, your bank knows everything you spend money on and how much you have. You don’t really think that your bank is invading your privacy. Your mobile operator knows where you go, but again, you don’t really think of your mobile operator as invading your privacy.
There’s a really interesting tension point for Alexa, where I’ve heard it said that Google could not have launched Google Home before Amazon had launched Alexa. Amazon had anchored people’s sense of what this product is, and some people have a different sense of privacy vis-a-vis Amazon, Google and Facebook. Apple has positioned itself in a unique place in that conversation.
You could also argue that it will be very difficult for Facebook to launch an Alexa competitor this year just because of where the news cycle is for Facebook, regardless of what the actual product is. There are questions of perception in general about what this thing is, and also questions of perception around which company is doing it. They’re not necessarily entirely rational or predictable; it’s just where you come out as products evolve and people’s perceptions evolve.
I think there are a bunch of conflicted feelings around the Facebook News Feed. A lot of the questions around the News Feed are not actually privacy questions, per se; they are questions around what Facebook chooses to show you and how it chooses to show it to you. The conversation is around, “Are you addicted to this stuff, and are you being manipulated?” Those aren’t privacy questions; those are different questions.
People have a lot of unresolved feelings on this topic; you’re using this thing and you’re telling it what you want but also it’s using you, and what exactly do we think about that?
SCHUKAI: Do you think, in a similar way, it’s almost a precursor to the tension we’re starting to see in the sphere of cognitive computing, artificial intelligence and machine learning, related to the ethics, the transparency and the biases that are built into these models? Where does that take us, and how do you teach an AI to hold British ethics, or American ethics, or Australian ethics, or whatever the case may be, in its decision-making process?
EVANS: There are a bunch of subtle and complex questions in that. You could draw analogies to the introduction of databases a generation or two ago, where the wrong data may have been keyed into the database and someone got arrested or denied a mortgage, and the organization behind it didn’t really realize that there might be a mistake and didn’t have the processes to deal with it.
You see the same thing now with questions of AI bias. Everybody doing research on this is aware that the whole system is very sensitive to what data you give it, and that you can skew it in all sorts of ways through that data. The point of machine learning is that if you could write down all of the rules, you wouldn’t be using it.
The analogy I often use here is, think of people in the 19th century trying to make a mechanical horse. There’s no law of physics that said you can’t have a mechanical horse. In fact, Boston Dynamics are maybe doing it now, but in practice, it was impossible in the 19th century to make a mechanical horse just because of the degree of complexity that was involved. So, you make a steam engine and you make a bicycle.
To translate that into machine learning: people were trying to make AI systems by writing down lots of rules, such as, “How do you recognize a cat? Let me write down all the ways you do it.” That was like trying to make a mechanical horse. Theoretically, yes, that’s how you would do it, but in practice you could never create enough rules. What machine learning does is let the system create the rules. If you could have written down all of the rules, you wouldn’t be using machine learning in the first place.
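Evans’s point — that the system derives its own rules from data, and therefore inherits whatever is in that data — can be sketched with a toy example (entirely illustrative, not anything from the interview): a one-dimensional “classifier” that learns a decision threshold from labeled examples instead of having one written by hand. Feed it skewed examples and the learned rule shifts with them.

```python
# Toy sketch of "the system creates the rules": the rule here is a single
# decision threshold, derived from labeled examples rather than hand-written.

def learn_threshold(examples):
    """Learn a 1-D rule: the midpoint between the two class means.

    `examples` is a list of (value, label) pairs with labels "A" and "B".
    """
    a = [v for v, label in examples if label == "A"]
    b = [v for v, label in examples if label == "B"]
    return (sum(a) / len(a) + sum(b) / len(b)) / 2

def classify(value, threshold):
    # The model gives an answer, but no reason -- you can't ask it why.
    return "A" if value < threshold else "B"

# Balanced training data: light items are "A", heavy items are "B".
balanced = [(10, "A"), (12, "A"), (14, "A"), (20, "B"), (22, "B"), (24, "B")]
t_balanced = learn_threshold(balanced)   # 17.0

# Skewed data -- the "A" examples are unrepresentatively heavy -- and the
# learned rule moves with the data, flipping a borderline case.
skewed = [(15, "A"), (16, "A"), (17, "A"), (20, "B"), (22, "B"), (24, "B")]
t_skewed = learn_threshold(skewed)       # 19.0

print(classify(18, t_balanced))  # B under the balanced rule
print(classify(18, t_skewed))    # A under the skewed rule
```

The same borderline input gets a different answer depending only on what data the rule was learned from — which is exactly why “what was in the data that I didn’t know was there?” is the question that matters.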
Inherent within that, you then get questions of, “Well, what exactly is written in those rules? What exactly is within that data? And what was in it that I didn’t know was there?” This is something that the people actually working on this and building systems at Google and Facebook and so on are very conscious of.
The concern is the small bank that just bought a system off the shelf and said, “You can’t have a mortgage because the computer says no.” Or the police department that says, “We’re going to follow you because the computer says, ‘Follow you,’” without actually understanding or thinking about what that system is or how it works. To my original point, it is exactly the kind of thing you would have been reading about 30 years ago: you can’t get your mail delivered because the computer has the wrong address, it didn’t occur to anyone that the computer might have the wrong address, and there’s no way to change it.
Where the problems will arise is from the human bureaucracy not understanding what this technology is and is not.
SCHUKAI: You think that it’s a transparency thing then? Not understanding all the algorithms?
EVANS: It’s an understanding of the limitations of the system. Another way I describe this: my dog hates my wife’s uncle, and I don’t know why, and I can’t ask my dog. What is my dog seeing? What has my dog learned in her seven years of life that she’s now applying to my wife’s uncle? Well, I don’t know, and I can’t ask. That’s sort of the machine learning problem; you always need to be conscious that it’s just a machine, it’s been given data, and it’s producing an answer. You don’t know why, and you can’t ask. You need to be very conscious of that limitation.
For additional content concerning the use of personal data in the digital age, be sure to explore the rest of our multimedia series: A new dawn for data privacy and transparency.