Meta stated that it will continue to train its artificial intelligence systems using public Facebook and Instagram postings from adult users in the United Kingdom.
In a blog post, the company says it has “incorporated regulatory feedback” into a redesigned “opt-out” process that it claims is “even more transparent.”
The move comes after Meta decided in mid-June to pause the rollout of its AI models in Europe, responding to a directive from the Irish privacy regulator to delay its plan to collect data from social media posts. The company later said the pause would give it time to address enquiries from Britain’s Information Commissioner’s Office (ICO).
It also wants to portray the move as allowing its generative AI models to “reflect British culture, history, and idiom.”
However, it is unclear what distinguishes its most recent data collection.
Meta said it has “engaged positively” with the ICO and welcomes its guidance on implementing the new program. Navigating the rollout has been difficult for the company, given the UK’s stringent data privacy laws and heightened privacy awareness across the UK and Europe more broadly.
According to Stephen Almond, executive director of regulatory risk at the ICO, Meta has agreed to make it easier for users to object to data processing and to give them more time to do so.

“Organizations should put effective safeguards in place before they start using personal data for model training, including providing a clear and simple route for users to object to the processing,” Almond said. “The ICO has not provided regulatory approval for the processing and it is for Meta to ensure and demonstrate ongoing compliance.”
We earlier reported that Meta issued an update on how it intends to comply with the Digital Markets Act (DMA), a European law aimed at promoting competition in digital marketplaces that affects the company’s messaging apps, Messenger and WhatsApp.