For years, Meta has been training its AI systems on billions of publicly uploaded images from Facebook and Instagram. Now the company appears to be eyeing an even larger pool of data: images users have never shared publicly. While Meta says it is not currently training AI models on unpublished photos, questions remain about its future plans and how your personal images might be used.
Recently, reports surfaced that Facebook users attempting to post on Stories are being prompted with a pop-up about “cloud processing.” This feature offers the option to allow Facebook to access media from your camera roll on a regular basis, helping generate ideas like collages, recaps, AI restyling, or themed collections for birthdays or graduations. By opting in, users agree to Meta’s AI terms, which include analyzing media and facial features, as well as details like when photos were taken and who appears in them. Users also grant Meta the right to retain and use this personal information.
Meta has acknowledged scraping all content shared publicly on Facebook and Instagram since 2007 to train its generative AI models. The company maintains that only public posts from adult users are used, though it has been vague about what counts as "public" and who qualified as an "adult user" back in 2007. Importantly, Meta states that, for now, it is not using unpublished photos accessed through the new cloud processing feature to train AI models.
Meta’s public relations team emphasizes that this feature is in an early testing phase, is entirely opt-in, and is designed to make content sharing easier. “These suggestions are only shown to you and can be turned off at any time,” a spokesperson explained. They also clarify that camera roll media may be used to improve the suggestions, but not for AI training during this test.
This approach might sound similar to Google Photos, which offers AI-driven suggestions after users opt into Google Gemini. However, unlike Google, which explicitly states it does not train generative AI with personal data from Photos, Meta’s terms—updated as of June 23, 2024—lack clarity on whether unpublished images accessed via cloud processing could be used for training. Meta declined to clarify whether future use of these images might include AI training.
While Meta says that, during the test, it only retrieves 30 days' worth of unpublished photos, some evidence suggests it may draw on media older than that. For instance, suggestions based on themes like pets or weddings could include older photos. Users can turn off cloud processing in their settings; doing so also deletes their unpublished photos from the cloud after 30 days.
This new feature raises privacy concerns because it reaches into previously private data, bypassing the deliberate step of choosing to share a photo publicly. Additionally, reports on platforms like Reddit indicate that Meta is already offering AI restyling suggestions on previously uploaded photos without users' awareness: one user shared that Facebook had transformed her wedding photos into a Studio Ghibli style without her knowledge.
In summary, while Meta emphasizes that the current test does not involve training AI on unpublished images, the potential for future use remains unclear. As this technology evolves, users should stay informed about how social media platforms access and use their personal photos.