A personal perspective with Dr. Vivienne Ming
Dr. Vivienne Ming, named one of Inc. Magazine's 10 Women to Watch in Tech, is an entrepreneur and theoretical neuroscientist. She talks with Exchange about machine learning, human potential and how they will change our world.
Exchange: How did you come to this kind of work?
Vivienne Ming: When I got into the field of neuroscience generally, originally as an undergrad, I thought I’d be sort of a wet neuroscientist, the kind of person who sticks wires into brains and tries to figure out what all the cells do. And I do do that.
But very early on, I got introduced to a lab called the Machine Perception Lab, where we tried to recreate natural intelligence by studying artificial intelligence, and to come up with better artificial intelligence by studying natural intelligence. It was really about trying to develop actual, usable algorithms that could do what the brain does.
Exchange: What are some applications for this type of discipline?
Ming: The resurgent interest in neural networks and what they call deep learning today all comes right out of the same field of trying to understand the brain. The core researchers in that field (among them Geoff Hinton, now at Google) actually got their start at UC San Diego, where I was, and really laid the foundation for using machine learning to understand the brain, and then figuring out how it could be applied. So now Google® and Microsoft® speech recognition, and Facebook®, Google and Microsoft image recognition, are all based on those years of research that theoretical neuroscientists put in.
Exchange: In the past decade, machine learning has given us self-driving cars, effective web search and an understanding of the human genome. What applications interest you most or where do you see great potential?
Ming: I am interested in understanding how people think, to understand what people want and need and when. My own personal interest as an academic is neuroprosthetics. Can we make people smarter by connecting their brains directly with technology?
I got an early-release pair of Google Glass™ and I thought, what if we could process faces or emotions? Right? What if you could display the emotions of the person you're looking at up on the screen and then give it to autistic kids? They could learn how to read facial expressions in natural interactions. And so, we went and built that.
I am on the board of a company called Emozia, which is building passive emotional state estimation systems with mobile phones. You carry your phone around with you, without doing anything with it, and it estimates your emotional state. And part of the reason I joined the board was I wanted to get ahold of that data so I could build a system to predict manic and depressive episodes in bipolar subjects.
Or, say you have Alzheimer's. Before you ever experience not recognizing your child, the system cues your brain and gives you a signal. Not only are you more functional, but you never even experience that loss of self that comes from the progression of Alzheimer's. There are so many amazing things we can do with this in health, but also in day-to-day life.
Exchange: You have also used it to find people with particular sets of skills that are in demand in certain disciplines and commercial endeavors these days.
Ming: I was for some time the chief scientist of a company called Gild. And one of the cofounders had a very clever idea: if we crawled across all of GitHub with a simple little algorithm that ranked the code people wrote (answering questions like: How good is your Java? How good are your Python abilities?), then we could present this data as a product.
So today we're not just pulling in GitHub but LinkedIn® and Facebook and all of the websites that you hang out at. And we've learned how to predict the best people; our system learned to be the ultimate tech recruiter, recognizing the patterns that represent specific talent.
So, that was where I started working on this idea of how machine learning can revolutionize recruiting, and not just recruiting in the abstract but also combat some of the pernicious elements of bias and discrimination that are built into the system. We threw 55,000 different variables into the mix. What school you went to. What gender? What race? What age? What skills you know. Where you live. What you tweet out. What your social feed is like.
And what was fascinating is that school, grades and standardized test scores were not predictive at all. It turns out race and gender were not predictive … a Bachelor of Computer Science from Stanford was not a particularly good predictor of whether someone was a good developer.
It wasn't valueless, but if someone tweeted out "Celery is awesome" at 2 a.m. one morning, it turned out that a tweet like that (Celery is a Python task-queue framework) carried more value all by itself than the other variables. In fact, it doesn't even matter whether they like Celery or not, just that they were passionate enough to be tweeting about it. And from applying these machine learning techniques, we not only found that predictor of whether they were good at it, but the system could automatically generalize:
… Oh, if they know Celery, then they must know Lettuce or RabbitMQ, they are probably programming in Django or Flask and they certainly know Python …
A simple little tweet. What we found is that for every really talented developer with a high-end pedigree, there were 10 to 100 times as many equally talented developers without the same pedigree. And we could uncover them by designing machine-learning systems that could go out and find them in an unbiased way. And if we could figure that out for engineers, can we figure it out for everyone else?
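The skill-generalization idea Ming describes, inferring related tools from what developers mention together in public activity, can be sketched with a simple co-occurrence count. The profiles and the 50% threshold below are invented purely for illustration; Gild's actual system used far richer models and many more signals.

```python
from collections import Counter
from itertools import combinations

# Hypothetical "documents": the skills each developer mentions across
# tweets, repos and posts. Toy data for illustration only.
profiles = [
    {"celery", "rabbitmq", "django", "python"},
    {"celery", "flask", "python"},
    {"celery", "rabbitmq", "python"},
    {"java", "spring", "maven"},
]

# Count how often each skill, and each pair of skills, appears together.
pair_counts = Counter()
skill_counts = Counter()
for skills in profiles:
    skill_counts.update(skills)
    pair_counts.update(combinations(sorted(skills), 2))

def related(skill, min_ratio=0.5):
    """Skills that co-occur with `skill` in at least min_ratio of its profiles."""
    out = {}
    for (a, b), n in pair_counts.items():
        other = b if a == skill else a if b == skill else None
        if other is not None and n / skill_counts[skill] >= min_ratio:
            out[other] = n / skill_counts[skill]
    return out

# python and rabbitmq co-occur with celery often enough to be inferred;
# django and flask appear too rarely to clear the threshold.
print(related("celery"))
```

From a single strong signal ("knows Celery"), the co-occurrence table lets the system infer a cluster of likely skills, which is the generalization step the interview describes.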
Exchange: What do you think is the next big thing?
Ming: What's coming over the next few decades, and is already underway, is a cognitive/neuroprosthetic revolution. This is literally about making people smarter. Initially, it will be about things like preserving the identity and capability of Alzheimer's sufferers and repairing traumatic brain injury. All of that is amazing, but what's coming up next? The work is being done with animals today and will begin in humans in the near future: can we allow you to pay attention to more things, be more creative, self-regulate your emotions? It is the ultimate convergence of machine learning with actual human learning.
The second is the Internet of Things. Right now, all of these little devices that we're starting to wire together and communicate with are dumb and proprietary. And by that I mean they're doing very little local processing. Even your very sophisticated phone is largely just sending raw data back to some big servers at Apple® and Google and Microsoft and so forth, and so is your Fitbit® and other devices. What if they all did their own intelligent processing, and what if they talked to one another?
This is pretty profound. My son has Type 1 diabetes. He wears a wireless insulin pump. He wears what's called a continuous glucose monitor, which puts a little wire under his skin and estimates his blood glucose levels. We have him wearing a Fitbit and other devices. And from that, I built an algorithm that predicted whether his blood sugar would go high or low. But I had to hack his hardware to be able to get the data off of it for my algorithm. This is literally data coming out of my son's body, and I didn't own it or have access to it; it was going back to the company.
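A prediction of the kind Ming describes can be sketched in a few lines: fit a linear trend to recent continuous-glucose-monitor readings and flag whether the projected value will cross a threshold. Everything here, the readings, the 30-minute horizon and the 70/180 mg/dL bounds, is an illustrative assumption, not her actual algorithm and not medical guidance.

```python
def fit_trend(points):
    """Least-squares line through (minute, mg/dL) readings."""
    n = len(points)
    mean_t = sum(t for t, _ in points) / n
    mean_g = sum(g for _, g in points) / n
    num = sum((t - mean_t) * (g - mean_g) for t, g in points)
    den = sum((t - mean_t) ** 2 for t, _ in points)
    slope = num / den
    intercept = mean_g - slope * mean_t
    return slope, intercept

def glucose_alert(points, horizon=30, low=70, high=180):
    """Project the trend `horizon` minutes past the last reading."""
    slope, intercept = fit_trend(points)
    projected = slope * (points[-1][0] + horizon) + intercept
    if projected < low:
        return "LOW", projected
    if projected > high:
        return "HIGH", projected
    return "OK", projected

# Readings over the last 20 minutes, trending steadily downward.
cgm = [(0, 120), (5, 112), (10, 104), (15, 96), (20, 88)]
print(glucose_alert(cgm))  # projects 40 mg/dL in 30 minutes: ("LOW", 40.0)
```

The point of the anecdote stands either way: this logic is simple, but it only works if the owner of the body generating the data can actually get at that data.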
And I think that’s part of what needs to change. If information is shared by all these devices, what we’re really describing is a big distributed neural network. The devices themselves become little processing nodes in a giant neural network, and talk more like how cells in our brain talk – sharing information, coming up with conclusions and then leveraging that to make interesting observations. I think the next big thing in machine learning is “turning on” the world around us.
About the interviewee
Dr. Vivienne Ming is an expert in the interface between machine learning and human cognition, and advises companies on adding intelligence to their user experience. Cofounder and managing partner of Socos, a cutting-edge EdTech company which applies cognitive modeling to align education with life outcomes, Vivienne is also a vice-president and distinguished scientist at Shiftgig, prior to which she was chief scientist at Gild. She was named one of 10 Women to Watch in Tech by Inc. Magazine.
Get more from Exchange Magazine in the Know 360 App.
Visit Innovation @ ThomsonReuters.com for more on how our technology teams bring together smart data and human expertise to find trusted answers.