How to prevent LinkedIn from using your data to train Artificial Intelligence

LinkedIn imposes restrictions on opt-outs for future training and warns that AI models could expose personal data.

On Wednesday, LinkedIn revealed that it has been using user data to train its own artificial intelligence models without obtaining prior consent. Users cannot opt out of training that has already taken place; LinkedIn limits opt-outs to future AI training only.

In a blog post detailing the updates scheduled for November 20, LinkedIn’s general counsel, Blake Lawit, announced that the platform’s user agreement and privacy policy will be revised to clarify how personal data contributes to the development of AI on LinkedIn.

According to the new privacy policy, LinkedIn informs users that “we may use your personal data… [to] develop and train artificial intelligence (AI) models, personalize our services, and gain insights with the help of AI, automated systems, and inference, to make our services more relevant and useful to you and other users.”

An FAQ section clarified that personal data may be collected whenever a user interacts with generative AI features, composes a post, changes their preferences, provides feedback or uses the platform, regardless of the time spent.

This data is stored until the user deletes the AI-generated content. LinkedIn recommends that users use its data access tool to delete or request the removal of information collected during previous activity on the platform.

The AI models that underpin LinkedIn’s generative AI capabilities “can be trained by LinkedIn or by third parties”, such as Microsoft, which makes some AI models available through its Azure OpenAI service.

A significant privacy risk, as highlighted in the FAQ, is that users who “provide personal data as input to a generative AI feature” may inadvertently see their personal information disclosed as output.

LinkedIn states that it “seeks to minimize personal data in the datasets used to train the models” by employing “privacy enhancing technologies to anonymize or remove personal data from the training sets”.

Although Lawit’s blog post doesn’t clarify whether data already collected can be removed from AI training datasets, the FAQ states that users who were automatically opted into sharing personal data for AI training can only opt out of future data collection.

The refusal “does not affect the training already carried out”, as clarified in the FAQ.

A LinkedIn spokesperson told Ars that “participating in AI training by default is beneficial for all members”.

“People connect to LinkedIn to be discovered for job opportunities and networking, and generative AI is a key part of how we’re supporting professionals in this transition,” the spokesperson added.

The spokesperson stressed that by allowing users to opt out of future AI training, the platform is offering “more choice and control for those who use LinkedIn over how their data is used to train our generative AI technology”.

How to disable AI training on LinkedIn

Users can deactivate AI training by accessing the “Data privacy” section in their account settings and unchecking the option that allows the collection of “data for generative AI improvement”, which LinkedIn automatically activates for most users.

The only exception applies to users in the European Economic Area or Switzerland, who are protected by stricter privacy laws that require either consent or a legitimate-interest justification before personal data can be collected. These users will not see an opt-out setting because, as LinkedIn has repeatedly confirmed, their data was never included in this collection.

In addition, users can “object to the use of their personal data for the training” of generative AI models that are not used to generate LinkedIn content – such as those employed for personalization or content moderation. This objection can be made via LinkedIn’s Data Processing Objection Form.

Last year, LinkedIn shared AI principles, promising to implement “meaningful measures to mitigate the potential risks associated with AI”.

Among the risks highlighted in the updated user agreement is the possibility that using generative features to fill out a profile or generate suggested posts will result in content that “may be inaccurate, incomplete, outdated, misleading, or otherwise unsuitable for your purposes”.

Users are warned about their responsibility to avoid sharing misleading information and disseminating AI-generated content that may violate LinkedIn’s community guidelines. They should also exercise caution when relying on any information available on the platform.

“As with all content and other information on our services, regardless of whether it is labeled as created by ‘AI’, it is essential to review it carefully before relying on it,” states LinkedIn’s user agreement.

In 2023, LinkedIn stated that it would always “seek to clearly and simply explain how our use of AI impacts people”, as “users’ understanding of AI starts with transparency”.

If legislation with strong privacy protections, such as the European Union’s AI Act and the GDPR, were adopted elsewhere, unsuspecting users would face fewer surprises. It would level the playing field between companies and their customers over the training of AI models, leaving fewer customers frustrated and dissatisfied.
