Interview with the Google CEO: Do we have to be afraid of Google, Mr. Pichai?

Google CEO Sundar Pichai speaks during the opening day of a new Google office in Berlin. Image: AFP

Google CEO Sundar Pichai talks about the dangers of Artificial Intelligence, the prospects of data minimization, and the beauty of his simple life in India.


Mr. Pichai, Google collects huge amounts of data and analyzes the information in ways users don’t understand. Do we have to be afraid of Google?

Patrick Bernau
Editor responsible for business and “Wert” at the Frankfurter Allgemeine Sonntagszeitung.
Corinna Budras
Business correspondent in Berlin.

We take our role in handling information seriously. We think that the data belongs to our users and that we are stewards of it. You are right that users might find it hard to understand, because, honestly, there is a lot of complexity. People are increasingly living a digital life. So one of the challenges for us is: how can we make it even simpler for users to make the choices they want? We always try to evolve to stay ahead of user expectations. For example, we ask ourselves: can we minimize the data being used? Most of the data we collect today is used to improve the products for users. We make most of our money from search ads, but the data we need for advertising is actually much, much smaller than the data we need to make your experience better.

          What do you need the data for?

There are many examples. One of the most popular features is when we tell people: you need to leave 20 minutes early because there is more traffic on the way to your next meeting. People want us to be more helpful, and we can be more helpful if we understand the context of your query better. We are trying to make sure that data belongs to the user. If you ever want to delete your account, we should be able to delete it. We want to make it easy for you to take your data and go to another service, and we want to be transparent about it. But we need to work hard to simplify it even more. For example, we are storing your Gmail – just for you. We’re storing your photos – just for you.

Google is not only storing our Gmail content; you also read the mails.

We do not use the information contained in your emails. Our automated systems scan it for spam, but we no longer take data from your Gmail account and use it anywhere for advertising. We use Gmail data to remind you about an upcoming flight or trip, for instance.

There are many demands to minimize data collection right from the start.

This is the direction in which you will see us go. For example, as some of our machine-learning chips on the phone get better, we can run more on the device and need to send less information to the cloud. When we warn you in Chrome about bad sites, we do it locally: we send a list of bad sites to your device instead of sending the sites that you visit to the cloud.
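The local-check idea Pichai describes can be sketched roughly as follows. This is a minimal illustration, not the actual protocol: the hostnames and the plain-text list here are invented for the example, and the real Safe Browsing mechanism matches hashed URL prefixes rather than readable hostnames. The point it demonstrates is the privacy property: the URLs you visit are checked against a list stored on the device, so they never leave it.

```python
from urllib.parse import urlparse

# Blocklist of known bad sites, downloaded to the device periodically.
# (Hypothetical entries; the real list uses hashed URL prefixes.)
LOCAL_BLOCKLIST = {"malware.example.com", "phishing.example.net"}

def is_unsafe(url: str) -> bool:
    """Check a URL against the on-device blocklist; no network call is made."""
    host = urlparse(url).hostname
    return host in LOCAL_BLOCKLIST

print(is_unsafe("https://malware.example.com/login"))  # True
print(is_unsafe("https://example.org/"))               # False
```

The design trade-off is the one described in the answer: the client must keep its copy of the list fresh, but in exchange the browsing history stays local.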

Still, the data protection agency in France just imposed a record fine of 50 million euros on you for data protection infringements.

We have been very supportive of the European General Data Protection Regulation. In fact, we had hundreds of people working for many months prior to its implementation to get us ready. In this case, we think we've built a consent process for personalized ads that is as transparent and straightforward as possible. We built it based on the guidance given by regulators and on a lot of user-experience testing we did ourselves. So we're appealing the decision.

          Google has decided to put AI first in all of its inventions. How can you make sure that computers decide in a way that humans want them to decide?

This is part of our work on Artificial Intelligence. One of our most important principles is that systems are accountable to humans. There are benign use cases that you want to delegate to computers because they are not consequential, like automatically correcting your spelling as you type. But there are things that have a bigger impact on people’s lives. You have to design those in a way that humans ultimately make the decision.

          The ethical debate is especially crucial in autonomous driving. How should a car decide who it kills if an accident is unavoidable?

In any case we want accountability, explainability and interpretability. Ultimately, we as a society should decide how some things should work and make sure our systems follow these rules. In an area like that, thoughtful regulation may make sense. Our responsibility is to make our design choices transparent. Today, state-of-the-art machine learning models tend to work like a black box. But one of the areas of research we are working on is how to improve explainability for some of the more important use cases; until we have more explainability there, we don’t deploy machine learning models. This is the kind of trade-off you need to make.

          You don’t use any machine learning model that is not fully explainable?

          For decisions such as health decisions, it would be important to have a certain level of explainability or accountability. Even if there is no regulation requiring explainability, we would strive to do the right thing. This is why we are trying to articulate a set of principles on how we approach our work.

This might delay your work considerably. There will be enough other companies that don’t care as much about explainability and will just go ahead with their products.

          I’m just saying: If there is an important use case in which we feel like it needs to be explainable and we are not able to make it explainable, we will hold back. As a company, we want to work on a set of principles and be long-term focused. In areas like facial recognition, this is why we do not even make a general-purpose API available today. We could do it from a technological standpoint but we don’t because of concerns that somebody could misuse the information.

          Microsoft is pushing towards regulation. Would you go the same road, or would you prefer to follow your own rules?

There is always good regulation that will improve the state of affairs, and we have always constructively welcomed it. To the extent there is better regulation, it is a good thing and we would welcome it. We also want to make sure you are able to drive innovation forward. Tech is increasingly impacting our lives. We have already seen some of the effects of what can go wrong when you don’t think about the consequences ahead of time …

          ... for instance?

For example, making sure that you provide information correctly versus dealing with misinformation. It is always important to be responsible about innovation, but I think the stakes have gotten higher. So with them comes an increased sense of responsibility in weighing these decisions at a high level.

Googlers are very special: more rational than the average, some might say more left-leaning. How can you make sure that the algorithms your employees create reflect the values of society?

Our core product, the search engine we built 20 years ago, works on principles: we are focused on providing accurate, relevant, high-quality information. We do it globally. When we build products, we don’t infuse our personal biases. We have goals, we validate the results, and we test them. In the case of search, we use search raters, we have guidelines, and we test against them. That is how we make sure that it works as intended.

          Do you only rely on this structure or is it also necessary to make sure that your employees are as diverse as society is?

All of these aspects matter over time. But in our case, the users are actually shaping our products: they are the ones who give us feedback and they are the ones who rate the searches. Our products reflect that. We define high quality by what the users are telling us. It almost doesn’t matter who develops the system: it is designed to reflect the values of society, not the values of Google employees.

That doesn’t stop biases from infiltrating AI. Take, as an example, a job ad for a software engineer that an AI system might show to more men than women, because that is what the system learned from data on the internet. That might be accurate, but it is still controversial.

Advertising is different from organic search. The way we approach such questions is that we are transparent about our policies. This way society can look at what we’re doing and give us feedback. This is why there are many areas in which we don’t allow sensitive information to be used to target advertising. This is the case for political ads, for example.

Your employees have been very outspoken. 20,000 of them took part in a walkout last November to protest management decisions they saw as sexist. What have you done to address the problem?

At Google, we've always given our employees a voice, for as long as I've been there. We've always welcomed employees' input. Our employees are our most valuable resource, and we view that as a strength. Anytime they have feedback, it is an important input for us and we take it very seriously. There are a lot of factors we balance, but we cherish the input.

          Is Google a sexist company?

We work very hard to make sure we have a culture that is inclusive for everyone. We care about it, and we hold ourselves to a very high degree of accountability. If we ever fail, we feel it and we acknowledge it. As a company, I think we have led the industry on many of our practices in terms of making sure that our company is a welcoming place for everyone, and I know we are committed to keeping it that way.

          You yourself have a very interesting background. You grew up in India in a very small apartment. What was it like?

          I grew up in a middle class family in India and I had a happy childhood. But for sure it was very different from how I live today. The setting was much simpler, I would say.

          Do you miss the simplicity of those days?

Oh yes, for sure. I never felt a lack of anything. I had lots of friends and family in my life; I had the relationships that mattered. The context was much simpler. The world has become more complex, and I am not sure that we always want this complexity, but it is part of the modern world. I miss the simplicity at times. But this is probably true for most people.

          Technology has contributed to the complexity of life. How many devices are you carrying around?

Right now, I am carrying only one. But I test a lot of devices. We build phones and we provide software for phones, so at any given time I am testing a lot of products, including from our partners.

          You didn’t even have a phone until you were 12 years old. How did you get interested in computers?

          Yes, that’s true. Imagination is very powerful. As a kid I was always fascinated by technology. I read a lot about computers and understood what they were before I could actually use one.

          What was your favorite book?

Wow, that is a tough choice with all the books that I read. The books that influenced me most were from authors like Charles Dickens. But I also read a lot in computer magazines about how semiconductors were built in Silicon Valley and the story behind it. I tried to learn about Hewlett-Packard, how they built something so important but never forgot the right values. These were the kinds of stories that influenced me. I always had a love for computers and for technology, and I sensed very early that technology could dramatically improve life. When I finally had access to computing, I realized how dramatic the impact was. I was very inspired by the One Laptop per Child project, because I always wanted to play a part in making computing accessible to as many people as possible. That is the reason behind projects like Android, where we are making high-quality smartphones cheaper and more affordable for everyone. AI will play the same role of leveling access to knowledge and information for more people than ever before in human history.

But Android is getting more expensive now, after you decided to switch to a licensing model for some of your Google apps on smartphones. This came as a response to the antitrust fine the EU Commission imposed on Google last summer.

We disagree with the decision and think that the benefits the Android system has brought are very clear. But we take our responsibilities seriously, and part of our responsibility is to comply with laws and rulings. Obviously, it is a high-scale investment today to make a state-of-the-art mobile operating system. We need to invest heavily in the platform to make sure it is reliable and provides users with the security they need. We want it to do well. But we need to have a sustainable business model. There are many ways in which we could do this. As part of our compliance with the European Commission ruling, we've changed the licensing model for our services on top of Android. That means we are now offering different business models around the world.

          What does this mean for the user?

We have balanced the need of providing an open platform with a sustainable business model, and I think we have done it in a way that works for the industry and for our users. Actually, we still don't charge for Android. Now, in Europe, partner phone makers can, if they want, buy a license to Google Play and a suite of our other apps. We also have new, optional commercial arrangements to distribute Google Search and Google Chrome on partner phones. The phone industry plans its phones well in advance, so it takes time to roll out the changes. We will only see the effect in months or even years.
