New Study Raises Questions About OpenAI’s Data Use
AI Disclosures Project Investigates Large Language Model Training Data
A new study from the AI Disclosures Project has raised questions about the data OpenAI uses to train its large language models (LLMs). The research indicates that the GPT-4o model from OpenAI demonstrates a "strong recognition" of paywalled and copyrighted data from O’Reilly Media books.
The AI Disclosures Project
The AI Disclosures Project, led by technologist Tim O’Reilly and economist Ilan Strauss, aims to address the potentially harmful societal impacts of AI’s commercialization by advocating for improved corporate and technological transparency. The project’s working paper highlights the lack of disclosure in AI, drawing parallels with financial disclosure standards and their role in fostering robust securities markets.
Methodology and Key Findings
The study used a legally obtained dataset of 34 copyrighted O’Reilly Media books to investigate whether LLMs from OpenAI were trained on copyrighted data without consent. The researchers applied the DE-COP membership inference attack, which presents a model with a verbatim passage alongside paraphrased LLM-generated versions and checks whether it can reliably pick out the human-authored original; consistent success above chance suggests the model encountered the text during training.
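The multiple-choice core of the attack can be illustrated with a minimal sketch. The function and variable names here are illustrative, not from the paper, and the model callback is a stand-in for what would be an LLM API call in the real attack:

```python
import random

def build_quiz(original, paraphrases):
    """Shuffle the verbatim passage in among paraphrased versions.

    Returns the list of options and the index of the original.
    A DE-COP-style quiz asks the model which option is the verbatim
    text; a model that saw the book in training picks the original
    more often than chance (1 / number of options).
    """
    options = [original] + list(paraphrases)
    random.shuffle(options)
    return options, options.index(original)

def guess_rate(model_pick, quizzes):
    """Fraction of quizzes where the model picks the verbatim passage.

    `model_pick` is any callable mapping a list of options to a chosen
    index -- in the real attack this would wrap a call to the LLM
    under investigation.
    """
    hits = sum(1 for options, answer in quizzes
               if model_pick(options) == answer)
    return hits / len(quizzes)
```

A guess rate near chance indicates no recognition; rates well above chance across many excerpts are what the study aggregates into its AUROC scores.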
The key findings from the report include:
- GPT-4o shows "strong recognition" of paywalled O’Reilly book content, with an AUROC score of 82%. In contrast, OpenAI’s earlier model, GPT-3.5 Turbo, does not show the same level of recognition (AUROC score just above 50%).
- GPT-4o exhibits stronger recognition of non-public O’Reilly book content compared to publicly accessible samples (82% vs 64% AUROC scores respectively).
- GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples than non-public ones (64% vs 54% AUROC scores).
- GPT-4o Mini, a smaller model, shows no recognition of public or non-public O’Reilly Media content when tested (AUROC approximately 50%, i.e. chance level).
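The AUROC figures above can be read as a pairwise probability: how often a passage that was in the training data receives a higher recognition score than one that was not, with 50% meaning chance and 100% meaning perfect separation. A minimal pure-Python sketch of that computation (the Mann-Whitney formulation; names are illustrative, not from the paper):

```python
def auroc(member_scores, nonmember_scores):
    """AUROC via pairwise comparison (Mann-Whitney U formulation).

    For every (member, non-member) pair, count a win when the member
    passage scores higher, half a win on ties, then normalize.
    0.5 means the scores are indistinguishable from chance; 1.0 means
    members always outscore non-members.
    """
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))
```

On this reading, GPT-4o’s 82% on non-public content means a paywalled passage outscored a control passage in roughly four of five pairings, while GPT-3.5 Turbo’s 54% is barely better than a coin flip.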
Implications and Conclusion
The researchers suggest that access violations may have occurred via the LibGen database, as all of the O’Reilly books tested were found there. They also acknowledge that newer LLMs are better at distinguishing human-authored from machine-generated language, but note that this does not reduce the method’s ability to classify data.
The study highlights the potential for "temporal bias" in the results, due to language changes over time. To account for this, the researchers tested two models (GPT-4o and GPT-4o Mini) trained on data from the same period.
Drawing on its corpus of 34 proprietary O’Reilly Media books, the report concludes that the study provides empirical evidence that OpenAI likely trained GPT-4o on non-public, copyrighted data.
FAQs
Q: What is the AI Disclosures Project?
A: The AI Disclosures Project is an initiative led by technologist Tim O’Reilly and economist Ilan Strauss that aims to address the potentially harmful societal impacts of AI’s commercialization by advocating for improved corporate and technological transparency.
Q: What is the purpose of the study?
A: The study aims to investigate whether LLMs from OpenAI were trained on copyrighted data without consent, and to provide empirical evidence on the use of non-public, copyrighted data in AI model training.
Q: What are the implications of the study’s findings?
A: The study’s findings suggest that access violations may have occurred via the LibGen database. The researchers also note that newer LLMs are better at distinguishing human-authored from machine-generated language, which does not reduce the method’s ability to classify data, and they flag the potential for "temporal bias" in the results due to language changes over time.

