For many years, when Meta introduced new features for Facebook, Instagram, and WhatsApp, review teams assessed potential risks, including whether they could infringe on users’ privacy, endanger children, or exacerbate the spread of harmful or misleading content.
Until recently, the majority of what are referred to inside Meta as privacy and integrity reviews were carried out by human evaluators.
However, up to 90% of all risk assessments will soon be automated, per internal company documents that NPR was able to obtain.
This effectively means that key updates to Meta’s algorithms, new security features, and changes to the way content can be shared across the company’s platforms will be largely approved by an artificial intelligence-powered system, rather than scrutinized by employees responsible for debating how a platform change might be abused or have unanticipated consequences.
Because they can now release app updates and features more quickly, product developers at Meta are seeing the change as a win. However, current and former Meta employees worry that the new automation push will allow AI to make difficult decisions about how Meta’s apps could cause harm in the real world.
“Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you’re creating higher risks,” said a former Meta executive who asked to remain anonymous for fear of company retaliation. “It becomes more difficult to stop negative externalities of product changes before they start to cause issues in the real world.”
“Billions of dollars have been invested to support user privacy,” Meta said in a statement.
The Federal Trade Commission has been monitoring Meta since 2012, when the two parties reached an agreement regarding the company’s handling of user data. That agreement is why privacy reviews for products have been required, according to current and former Meta employees.
Meta said in its statement that the changes to the product risk review are meant to make decision-making more efficient, but that only “low-risk decisions” are being automated and that “human expertise” is still being used for “novel and complex issues.”
However, according to internal documents obtained by NPR, Meta is considering automating reviews for sensitive areas such as youth risk, AI safety, and integrity, a category that includes violent content and the spread of false information.
“Engineers are not privacy experts,” said a former Meta employee.
According to a slide outlining the new procedure, product teams will now typically receive an “instant decision” following the completion of a project questionnaire. Risk areas and the necessary steps to mitigate them will be identified by that AI-driven decision. The product team must confirm that it satisfies those requirements prior to launch.
The previous system required the approval of risk assessors before product and feature updates could be distributed to billions of users. Engineers creating Meta products now have the authority to assess risks on their own.
The slide states that projects will be manually reviewed by humans in certain situations, such as those involving new risks or when a product team requests more input, but such reviews will no longer be the default, as they once were. That call will now be made by the product development teams.
The majority of engineers and product managers are not privacy specialists, and their work does not center on that. “It is not what they are primarily evaluated on and it is not what they are incentivized to prioritize,” said Zvika Krieger, who served as Meta’s director of responsible innovation until 2022. Meta evaluates its product teams based on a number of metrics, including how quickly they launch products.
He went on to say, “Some of these self-assessments have historically turned into box-checking exercises that overlook important risks.”
Krieger acknowledged that there is potential for improvement in automating Meta’s review process, but warned, “If you push that too far, inevitably the quality of review and the outcomes are going to suffer.”
Meta played down worries that the new system would cause issues by pointing out that it is auditing the automated systems’ decisions for projects that aren’t evaluated by humans.
Users in the European Union may be protected from these changes to some extent, according to the Meta documents. According to a company announcement, Meta’s European headquarters in Ireland will continue to make decisions and oversee user data and products in the EU. The Digital Services Act, one of the EU’s laws governing online platforms, mandates that businesses like Meta more closely monitor their platforms and shield users from offensive material.
The tech news website The Information was the first to report on some of the modifications made to the product review procedure. Employees were informed about the redesign shortly after the company terminated its fact-checking program and relaxed its hate speech policies, according to internal documents obtained by NPR.
Collectively, the modifications show a shift at Meta toward more free expression and faster app updates, tearing down many of the barriers the company has put in place over the years to prevent abuse of its platforms. These sweeping changes follow CEO Mark Zuckerberg’s efforts to win over President Trump since his election victory, which Zuckerberg has referred to as a “cultural tipping point.”
Could evaluating risks more quickly be “self-defeating”?
Changes to product reviews are also a result of a larger, multi-year push to use AI to help the business move more quickly in the face of increasing competition from tech firms like TikTok, OpenAI, and Snap.
Meta announced earlier this week that it is depending more on AI to better enforce its content moderation guidelines.
As stated in the company’s most recent quarterly integrity report, “We are starting to see [large language models] operating beyond that of human performance for select policy areas.” Additionally, it stated that it uses those AI models to filter posts that it is “highly confident” do not violate its policies.
“Our reviewers are able to focus their expertise on content that is more likely to violate because this frees up capacity,” Meta stated.
Using automated systems to identify possible risks could reduce duplication of effort, according to Katie Harbath, founder and CEO of the tech policy firm Anchor Change, who worked on public policy at Facebook for ten years.
She stated, “You’re going to need to incorporate more AI if you want to move quickly and have high quality, because humans can only do so much in a period of time.” However, she also mentioned that human checks and balances are necessary for those systems.
Speaking on condition of anonymity due to fear of retaliation from the company, another former Meta employee questioned whether it is wise for Meta to move more quickly on risk assessments.
“This practically seems counterproductive,” the former employee said. “Every time a new product is introduced, it is subjected to intense scrutiny, which frequently reveals problems the business ought to have taken more seriously.”
In March, Meta’s chief privacy officer for product, Michel Protti, wrote on Workplace, the company’s internal communications tool, that the company is “empowering product teams” in order to “evolve Meta’s risk management processes.”
According to a current Meta employee who is knowledgeable about product risk assessments but is not permitted to publicly discuss internal operations, the automation roll-out has been accelerating throughout April and May.
According to Protti, the goal is to “simplify decision-making” by automating risk reviews and granting product teams greater control over the possible risks associated with product updates in 90% of cases. However, according to some insiders, the optimistic portrayal of eliminating humans from the risk assessment process significantly minimizes the potential issues that the changes may bring about.
“Considering the purpose of our existence, I believe it’s pretty careless,” said the Meta employee who was involved in the risk assessment process. “We offer the human viewpoint on how things can go wrong.”