Meta to Use AI for Privacy Risk Assessment in App Updates

May 31, 2025

Meta, formerly known as Facebook, is planning to implement an AI-driven system to evaluate up to 90% of updates to its apps, including Instagram and WhatsApp, for potential harm and privacy risks. These assessments stem from a 2012 agreement with the U.S. Federal Trade Commission that requires Meta to conduct privacy reviews of its products. Traditionally, these evaluations were performed by human assessors.

Under the new system, product teams will complete a questionnaire about their work, after which AI will deliver an "instant decision" identifying risks and outlining requirements that must be met before an update or feature is released. While this AI-centric approach enables faster product updates, it also increases the chance that problems slip through. A former executive expressed concern that negative external effects of product changes might not be identified before they start affecting users globally.
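
To make the reported workflow concrete, here is a minimal, purely hypothetical sketch in Python of what a questionnaire-driven triage step might look like. Meta has not published how its system works, so the questionnaire fields, decision rules, and escalation criteria below are assumptions for illustration only.

```python
# Hypothetical illustration only: Meta has not disclosed its review system's design.
# This sketch assumes a questionnaire is reduced to structured answers, simple rules
# attach required mitigations, and sensitive cases are escalated to human reviewers.
from dataclasses import dataclass, field


@dataclass
class Decision:
    approved: bool
    requirements: list[str] = field(default_factory=list)
    needs_human_review: bool = False


def assess_update(answers: dict) -> Decision:
    # Assumed questionnaire fields; the real form's contents are not public.
    requirements = []
    if answers.get("collects_new_user_data"):
        requirements.append("Document data retention and deletion policy")
    if answers.get("shares_data_with_third_parties"):
        requirements.append("Complete third-party data-sharing review")
    if answers.get("affects_minors"):
        # Escalate sensitive categories to human experts rather than auto-deciding.
        return Decision(approved=False, needs_human_review=True)
    # Low-risk path: an "instant decision" with any required mitigations attached.
    return Decision(approved=True, requirements=requirements)


if __name__ == "__main__":
    decision = assess_update({
        "collects_new_user_data": True,
        "shares_data_with_third_parties": False,
        "affects_minors": False,
    })
    print(decision)
```

In practice, the reported split between automated and human review would map to the escalation branch above: routine, low-risk changes get an immediate decision, while complex or sensitive cases go to people.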

A Meta spokesperson stated that the company has invested over $8 billion in privacy programs and is committed to delivering innovative products while meeting its regulatory obligations. The spokesperson added that as risks evolve, Meta will refine its processes to better identify risks, streamline decision-making, and improve the user experience. Technology will be used to bring consistency and predictability to low-risk decisions, while human expertise will be reserved for complex and novel issues.

Disclosures

I/We may personally own shares in some of the companies mentioned above. However, those positions are not material to either the company or my/our portfolios.