New Guide Explains Where Creators Can—and Can’t—Opt Out of AI Training

The AI Rights Project has released its Guide to Opting Out of AI Training, a practical resource for creators navigating platform training controls.
The Guide gives creators step-by-step opt-out tools as The AI Rights Project issues its first platform grade, evaluating LinkedIn’s AI training controls.
The AIRights Guide to Opting Out of AI Training: Protecting Creator-Uploaded Content on Major Platforms provides step-by-step, platform-specific instructions explaining what controls exist, how they function, and what limits apply in practice. It covers a range of creative media, including text, images, audio, video, and code, and focuses specifically on training uses of creator-uploaded content, rather than generalized “data use” policies. The release comes amid rapid expansion of generative AI tools and increasing reliance on creator-uploaded content to train and refine those systems.
To support broad public understanding, the full Guide is being made publicly available for a limited launch period. Following this window, ongoing access to the complete Guide and future updates will be available to members, with free email membership providing continued access to core materials.
Alongside the Guide, The AI Rights Project has published its first platform report card, evaluating how effectively a major platform’s stated opt-out mechanisms translate into meaningful control for creators. The inaugural report card assesses LinkedIn’s approach to AI training on uploaded creator content, using a standardized grading framework focused on consent architecture, transparency, and the practical effect of opt-out choices.
The report card methodology evaluates platforms based on what they represent and operationalize today, rather than on corporate intent or legal compliance. Criteria include whether AI training occurs by default, how clearly training uses are disclosed, whether an opt-out exists and is accessible, and whether exercised choices bind downstream uses, affiliates, and partners.
The Guide and the report cards are designed to function together. The Guide helps creators navigate existing platform controls, while the report cards assess whether those controls plausibly deliver the outcomes they promise. Future report cards will be released on a rolling basis, using the same methodology, to allow for comparison across platforms over time.
The Guide will be updated periodically as platform policies evolve, with material changes documented to maintain transparency over time.
The AI Rights Project emphasizes that the grading framework does not assess the legality of AI training practices or the outputs of AI systems. Instead, it focuses narrowly on the degree of meaningful choice and control platforms provide creators over the use of their uploaded works for generative AI training.
Following the launch period, new report cards will be released as part of a structured publication cadence. Free email members of The AI Rights Project receive the complete monthly set of report cards, while paid members have access to the full archive, comparison tables, and historical evaluations.
The Guide to Opting Out of AI Training: Protecting Creator-Uploaded Content on Major Platforms marks the first installment of The AIRights Guide Series on AI Training & Creator Consent, a planned sequence of practical resources for creators navigating AI’s impact on copyright and content control. Future guides scheduled for publication include:
• The AIRights Guide to Protecting Self-Hosted Content from AI Scraping, and
• The AIRights Guide™ to Asserting Rights Against the Big AI Developers.
The AI Rights Project is an independent initiative working to make AI training practices intelligible to creators and the public. It focuses on transparency, procedural clarity, and practical tools for understanding consent, control, and content use in generative AI systems. For more information, to access the Guide, or to view the first report card, visit The AI Rights Project website at https://airightsproject.org.
Additional Press Resources: For more press materials, organizational background, media contacts, and downloadable assets, visit the press and media page.
Jim W. Ko
The AI Rights Project