Thursday, April 5, 2018
Hey everyone. Thanks for joining today. Before we get started, I just want to take a moment to talk about what happened at YouTube yesterday.
Silicon Valley is a tight-knit community, and we all have a lot of friends over there at Google and YouTube.
We’re thinking of everyone there and everyone who was affected by the shooting.
Now I know we face a lot of important questions. So I just want to take a few minutes to talk about that upfront, and then we’ll take about 45 minutes of your questions.
Two of the most basic questions that I think people are asking about Facebook are: first, can we get our systems under control and can we keep people safe, and second, can we make sure that our systems aren’t used to undermine democracy?
And I’ll talk about both of those for a moment and the actions that we’re taking to make sure the answers are yes. But I want to back up for a moment first.
We’re an idealistic and optimistic company. For the first decade, we really focused on all the good that connecting people brings. And as we rolled Facebook out across the world, people everywhere got a powerful new tool for staying connected, for sharing their opinions, for building businesses. Families have been reconnected, people have gotten married because of these tools. Social movements and marches have been organized, including just in the last couple of weeks. And tens of millions of small businesses now have better tools to grow that previously only big companies would have had access to.
But it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well. That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy. We didn’t take a broad enough view of what our responsibility is, and that was a huge mistake. It was my mistake.
So now we have to go through every part of our relationship with people and make sure that we’re taking a broad enough view of our responsibility. It’s not enough to just connect people, we have to make sure that those connections are positive and that they’re bringing people closer together. It’s not enough to just give people a voice, we have to make sure that people are not using that voice to hurt people or spread disinformation. And it’s not enough to give people tools to sign into apps, we have to ensure that all of those developers protect people’s information too. It’s not enough to have rules requiring they protect information, it’s not enough to believe them when they tell us they’re protecting information — we actually have to ensure that everyone in our ecosystem protects people’s information.
So across every part of our relationship with people, we’re broadening our view of our responsibility, from just giving people tools to recognizing that it’s on us to make sure those tools are used well.
Now let me get into more specifics for a moment.
With respect to getting our systems under control, a couple of weeks ago I announced that we were going to do a full investigation of every app that had a large amount of people’s data before we locked down the platform, and that we’d make further changes to restrict the data access that developers could get.
[VP, Product Partnerships] Ime Archibong and [Chief Technology Officer] Mike Schroepfer followed up with a number of changes we’re making, including requiring apps you haven’t used in a while to get your authorization again before querying for more of your data. And today we’re following up further and restricting more APIs like Groups and Events. The basic idea here is that you should be able to sign into apps and share your public information easily, but anything that might also share other people’s information — like other posts in groups you’re in or other people going to events that you’re going to — those should be more restricted. I’m going to be happy to take questions about everything we’re doing there in a minute.
I also want to take a moment to talk about elections specifically.
Yesterday we took a big action by taking down Russian IRA pages targeting their home country.
Since we became aware of this activity after the 2016 US elections, we’ve been working to root out the IRA and protect the integrity of elections around the world. And since then there have been a number of important elections that we’ve focused on. A few months after the 2016 elections there was the French presidential election, and leading up to that we deployed some new AI tools that took down more than 30,000 fake accounts. After that there was the German election, where we developed a new playbook for working with the local election commission to share information on the threats we were each seeing. And in the US Senate Alabama special election last year, we successfully deployed some new AI tools that removed Macedonian trolls who were trying to spread misinformation during the election.
So all in, we now have about 15,000 people working on security and content review, and we’ll have more than 20,000 by the end of this year.
This is going to be a big year of elections ahead, with the US midterms and presidential elections in India, Brazil, Mexico, Pakistan, Hungary and others — so this is going to be a major focus for us.
But while we’ve been doing this, we’ve also been tracing back and identifying this network of fake accounts the IRA has been using so we can work to remove them from Facebook entirely. This was the first action we’ve taken against the IRA in Russia itself, and it included identifying and taking down Russian news organizations that we determined were controlled and operated by the IRA. So we have more work to do here, and we’re going to continue working very hard to defend against them.
All right. So that’s my update for now. We expect to make more changes over the coming months, and we’ll keep you updated, and now let’s take some questions.