This is part of our work on Artificial Intelligence. One of our most important principles is that systems are accountable to humans. There are benign use cases you can delegate to computers because the outcome is not consequential, like automatically correcting the spelling of something you type. But other things have a much greater impact on people's lives, and you have to design those systems in a way that humans ultimately make the decision.
The ethical debate is especially intense in autonomous driving. How should a car decide whom it kills if an accident is unavoidable?
In any case, we want accountability, explainability and interpretability. Ultimately, we as a society should decide how certain things should work and make sure our systems follow those rules. In an area like that, thoughtful regulation may make sense. Our responsibility is to make our design choices transparent. Today, state-of-the-art machine learning models tend to work like a black box. One of our areas of research is how to improve explainability, and for some of the more important use cases, we don't deploy machine learning models until we have that explainability. This is the kind of trade-off you need to make.
You don’t use any machine learning model that is not fully explainable?
For decisions in areas such as health, it is important to have a certain level of explainability and accountability. Even if there is no regulation requiring explainability, we would strive to do the right thing. This is why we are trying to articulate a set of principles for how we approach our work.
This might delay your work considerably. There will be plenty of other companies that don't care as much about explainability and just go ahead with their products.
I'm just saying: if there is an important use case that we feel needs to be explainable and we are not able to make it explainable, we will hold back. As a company, we want to work from a set of principles and stay long-term focused. This is why, in areas like facial recognition, we do not even make a general-purpose API available today. We could do it from a technological standpoint, but we don't because of concerns that somebody could misuse the information.
Microsoft is pushing for regulation. Would you go down the same road, or would you prefer to follow your own rules?
There is good regulation that improves the state of affairs, and we have always constructively welcomed it. To the extent that better regulation emerges, we would welcome that too. At the same time, we want to make sure innovation can still move forward. Tech is increasingly impacting our lives, and we have already seen what can go wrong when you don't think about the consequences ahead of time …
... for instance?
For example, making sure that you provide accurate information rather than spreading misinformation. It has always been important to be responsible about innovation, but the stakes have gotten higher, and with them comes a greater responsibility to weigh the consequences carefully.
Googlers are a special group: more rational than average, and, some might say, more left-leaning. How can you make sure that the algorithms your employees create reflect the values of society?