Endor Labs: AI Transparency vs “Open-Washing”

The Ongoing Debate: What Does it Mean for an AI Model to be "Open"?

As the AI industry focuses on transparency and security, the debate around the true meaning of "openness" has intensified. Experts from open-source security firm Endor Labs weighed in on these pressing topics.

Applying Lessons from Software Security to AI Systems

Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasized the importance of applying lessons learned from software security to AI systems. He pointed out that the US government’s 2021 Executive Order on Improving the Nation’s Cybersecurity includes a provision requiring organizations to produce a software bill of materials (SBOM) for each product sold to federal government agencies. An SBOM is an inventory of the open-source components within a product, which helps organizations detect vulnerabilities in what they ship. Stiefel argued that applying these same principles to AI systems is the logical next step.
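To make the SBOM idea concrete, here is a minimal sketch in Python. Real SBOMs use standardized formats such as SPDX or CycloneDX; the product and component names below are invented, and the structure is deliberately simplified to show the core idea: an inventory of components and versions that can be checked against vulnerability reports.

```python
import json

# Illustrative SBOM-style inventory (not a real SPDX/CycloneDX document).
sbom = {
    "product": "example-service",
    "components": [
        {"name": "openssl", "version": "3.0.13", "type": "library"},
        {"name": "log4j-core", "version": "2.17.1", "type": "library"},
    ],
}

def find_component(bom, name):
    """Return the inventory entry for a named component, or None."""
    for comp in bom["components"]:
        if comp["name"] == name:
            return comp
    return None

# When a vulnerability is announced for a component, the SBOM tells you
# immediately whether you ship it, and which version.
hit = find_component(sbom, "log4j-core")
print(json.dumps(hit))
```

The value is in the lookup: without an inventory, answering "are we affected?" means auditing every product by hand.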

What does it mean for an AI model to be "open"?

Julien Sobrier, Senior Product Manager at Endor Labs, added crucial context to the ongoing discussion about AI transparency and "openness." Sobrier broke down the complexity inherent in categorizing AI systems as truly open. "An AI model is made of many components: the training set, the weights, and programs to train and test the model, etc. It is important to make the whole chain available as open source to call the model ‘open.’ It is a broad definition for now."
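Sobrier's point can be sketched as an SBOM-style inventory extended to the components he lists: training data, weights, and the training and evaluation code. The field names and model details below are hypothetical, not a standard schema; the sketch only illustrates that under this broad definition, one closed component is enough to disqualify a model from being "open."

```python
# Hypothetical "AI bill of materials" for an invented model.
ai_bom = {
    "model": "example-llm",
    "components": {
        "training_data": {"source": "example-corpus", "public": True},
        "weights": {"license": "apache-2.0", "public": True},
        "training_code": {"repo": "github.com/example/train", "public": True},
        "eval_code": {"repo": "github.com/example/eval", "public": False},
    },
}

def fully_open(bom):
    """The whole chain must be available for the model to count as open."""
    return all(part["public"] for part in bom["components"].values())

print(fully_open(ai_bom))  # eval code is closed, so this prints False
```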

Open-Source AI is Hot Right Now

DeepSeek, one of the rising players in the AI industry, has taken steps to address some of these concerns by making portions of its models and code open-source. The move has been praised for advancing transparency while providing security insights.

Building a Systematic Approach to AI Model Risk

As open-source AI adoption accelerates, managing risk becomes ever more critical. Stiefel outlined a systematic approach centered around three key steps:

  1. Discovery: Detect the AI models your organization currently uses.
  2. Evaluation: Review these models for potential risks, including security and operational concerns.
  3. Response: Set and enforce guardrails to ensure safe and secure model adoption.
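The three steps above can be sketched as a small pipeline. Everything here is illustrative: the model names, the risk rules, and the block-on-any-issue guardrail are invented for the example, and a real discovery step would scan code, configurations, and traffic rather than start from a hand-written list.

```python
from dataclasses import dataclass

@dataclass
class ModelInUse:
    name: str
    source: str       # e.g. "huggingface", "internal"
    license: str
    known_cves: int = 0

# 1. Discovery: detect models in use (hand-written inventory for this sketch).
discovered = [
    ModelInUse("example-llm-7b", "huggingface", "apache-2.0"),
    ModelInUse("legacy-classifier", "internal", "unknown", known_cves=2),
]

# 2. Evaluation: flag models that violate simple risk rules.
def evaluate(model):
    issues = []
    if model.license == "unknown":
        issues.append("unclear license")
    if model.known_cves > 0:
        issues.append(f"{model.known_cves} known CVEs")
    return issues

# 3. Response: enforce a guardrail -- block any model with open issues.
for m in discovered:
    issues = evaluate(m)
    status = "BLOCKED" if issues else "allowed"
    print(f"{m.name}: {status} {issues}")
```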

Beyond Transparency: Measures for a Responsible AI Future

To ensure the responsible growth of AI, the industry must adopt controls that operate across several vectors, including SaaS models, API integrations, and open-source models.

Conclusion

The debate around the true meaning of "openness" in AI models is ongoing. As the industry focuses on transparency and security, it is crucial to develop best practices for safely building and adopting AI models. By applying lessons learned from software security to AI systems and implementing a systematic approach to AI model risk, we can move toward a more responsible AI future.

FAQs

Q: What is the importance of applying lessons learned from software security to AI systems?
A: Software security practices, such as maintaining component inventories and managing vulnerabilities, are already mature. Applying them to AI systems gives organizations a proven starting point for ensuring the security and transparency of the models they build and adopt.

Q: What is an SBOM, and how does it relate to AI systems?
A: An SBOM is an inventory detailing the open-source components within a product, helping to detect vulnerabilities. Applying this principle to AI systems can help ensure transparency and security.

Q: What is open-source AI, and how is it related to the debate around "openness" in AI models?
A: Open-source AI refers to the practice of making AI models and code available for free and open collaboration, which can advance transparency and security.

Q: What are the key steps to building a systematic approach to AI model risk?
A: The key steps include discovery, evaluation, and response, which involve detecting AI models, reviewing them for potential risks, and setting and enforcing guardrails for safe and secure model adoption.
