Microsoft Takes Action Against Cybercriminals Exploiting Generative AI Systems
Microsoft has taken legal action against a group of foreign-based cybercriminals who allegedly used sophisticated software to bypass the company’s guardrails and generate harmful and illicit content using its generative AI services.
Banned Content
Microsoft and other technology companies have banned the use of their generative AI systems to create certain types of content. This includes materials that feature or promote sexual exploitation or abuse, are erotic or pornographic, or attack, denigrate, or exclude people based on their race, ethnicity, national origin, gender, gender identity, sexual orientation, religion, age, disability status, or similar traits. Using AI systems to create content containing threats, intimidation, promotion of physical harm, or other abusive behavior is also prohibited.
Code-Based Restrictions Bypassed
Microsoft has developed guardrails that inspect both the prompts users submit and the resulting output for signs that the requested content violates these terms. However, these code-based restrictions have been repeatedly bypassed in recent years, sometimes benignly by researchers and sometimes by malicious threat actors.
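To illustrate the two-stage screening described above, here is a minimal sketch of a guardrail that checks a prompt before generation and the model's output afterward. This is not Microsoft's implementation: the keyword check and the category labels are placeholders, and production systems rely on trained classifiers layered at the model, platform, and application levels.

```python
# Illustrative policy labels only; real systems use classifier scores
# across many abuse categories rather than a keyword list.
BLOCKED_TERMS = {"exploit-child", "hate-attack"}

def violates_policy(text: str) -> bool:
    """Return True if any blocked term appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_generate(prompt: str, generate) -> str:
    """Screen the prompt, call the model, then screen its output."""
    if violates_policy(prompt):
        return "[request blocked by input filter]"
    output = generate(prompt)
    if violates_policy(output):
        return "[response blocked by output filter]"
    return output

# Usage with a stub model standing in for the real generative service:
fake_model = lambda p: "Here is a summary of " + p
print(guarded_generate("the weather", fake_model))
print(guarded_generate("hate-attack plan", fake_model))
```

The key design point is that both sides of the exchange are inspected: a bypass must defeat the input filter without producing output that trips the second check, which is exactly the kind of restriction the lawsuit alleges the defendants' software was built to evade.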
Lawsuit Allegations
Steven Masada, assistant general counsel of Microsoft's Digital Crimes Unit, wrote:
Microsoft’s AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. As alleged in our court filings unsealed today, Microsoft has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future.
Legal Action
The lawsuit alleges that the defendants’ service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act, and constitutes wire fraud, access device fraud, common law trespass, and tortious interference. The complaint seeks an injunction enjoining the defendants from engaging in “any activity herein.”
Conclusion
Microsoft’s actions demonstrate its commitment to ensuring that its generative AI services are not used for illegal or harmful activities. The company will continue to develop and improve its safety measures to prevent similar incidents from occurring in the future.
Frequently Asked Questions
Q: What type of content is prohibited from being created using generative AI systems?
A: Prohibited content includes materials that feature or promote sexual exploitation or abuse, are erotic or pornographic, or attack, denigrate, or exclude people based on their race, ethnicity, national origin, gender, gender identity, sexual orientation, religion, age, disability status, or similar traits.
Q: How do guardrails inspect the content requested?
A: Microsoft’s guardrails inspect both the prompts users submit and the resulting output for signs that the requested content violates the prohibited terms.
Q: Have other technology companies also banned the use of generative AI systems?
A: Yes, other technology companies have also banned the use of their generative AI systems to create certain types of content.