Who Faces Liability Exposure for Faulty AI-Generated Code?

Liability and Exposure: The Dark Side of AI-Generated Code

To frame this discussion, I turn to attorney and long-time Internet Press Guild member Richard Santalesa. With his tech journalism background, Santalesa understands this stuff from both a legal and a tech perspective. (He’s a founding member of the SmartEdgeLaw Group.)

Functional Liability

“Until cases grind through the courts to definitively answer this question, the legal implications of AI-generated code are the same as with human-created code,” he advises.

Keep in mind, he continues, that code generated by humans is far from error-free. There will never be a service level agreement warranting that code is perfect or that users will have uninterrupted use of the services.

Send in the Trolls

Sean O’Brien, a lecturer in cybersecurity at Yale Law School and founder of the Yale Privacy Lab, points out a risk for developers that’s undeniably worrisome:

The chances that AI prompts might output proprietary code are very high, if we’re talking about tools such as ChatGPT and Copilot, which have been trained on a massive trove of code of both the open source and proprietary variety.

Who is at Fault?

None of the lawyers, though, discussed who is at fault if the code generated by an AI results in some catastrophic outcome.

For example, a company delivering a product shares some responsibility for, say, choosing a library with known deficiencies. If a product ships using a library with known exploits, and that product causes an incident resulting in tangible harm, who owns the failure: the product maker, the library's author, or the customer that chose the product?
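One practical way to reduce that exposure is to check pinned dependencies against a vulnerability list before shipping. The sketch below is purely illustrative: the package name, version, and advisory entry are invented, and in practice you would pull advisories from a real vulnerability database or a scanner rather than a hard-coded dict.

```python
# Minimal sketch: flag pinned dependencies that appear in a known-advisory
# list before shipping. The advisory data here is invented for illustration;
# a real pipeline would query an actual vulnerability database.
KNOWN_ADVISORIES = {
    ("leftpadlib", "1.0.2"): "CVE-XXXX-0001 (hypothetical)",
}

def audit(requirements: list[str]) -> list[str]:
    """Return a warning for every name==version pin with a known advisory."""
    warnings = []
    for line in requirements:
        name, _, version = line.partition("==")
        advisory = KNOWN_ADVISORIES.get((name.strip(), version.strip()))
        if advisory:
            warnings.append(f"{line}: {advisory}")
    return warnings

print(audit(["leftpadlib==1.0.2", "safe-lib==2.3.0"]))
```

Even a simple gate like this creates a record that the shipping company checked for known exploits, which may matter when responsibility is later apportioned.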

Conclusion

As every attorney has told me, there is very little case law thus far. We won’t really know the answers until something goes wrong, parties wind up in court, and it’s adjudicated thoroughly.

We’re in uncharted waters here. My best advice, for now, is to test your code thoroughly. Test, test, and then test some more.
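That advice can be made concrete with automated tests. The sketch below assumes a hypothetical AI-generated helper, `parse_amount`, that converts a price string to cents; the function and its behavior are illustrative, not from the article. The point is to exercise normal cases, edge cases, and malformed input, giving machine-written code the same scrutiny you would give code from an unknown contractor.

```python
# Hypothetical AI-generated helper: convert a price string like "$1,234.56"
# to an integer number of cents. Treat it as untrusted until tested.
def parse_amount(text: str) -> int:
    cleaned = text.strip().lstrip("$").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

def test_parse_amount():
    assert parse_amount("$1,234.56") == 123456
    assert parse_amount("0.05") == 5
    assert parse_amount("7") == 700  # no decimal part
    try:
        parse_amount("not a price")
    except ValueError:
        pass  # malformed input should fail loudly, not silently
    else:
        raise AssertionError("expected ValueError")

test_parse_amount()
```

A passing suite is no guarantee against liability, but documented testing is the clearest evidence of due diligence a developer can produce today.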

FAQs

Q: Who is responsible if AI-generated code results in a catastrophic outcome?
A: The company delivering the product, the library coder, or the company that chose the product may share responsibility.

Q: Can AI-generated code be used in proprietary projects?
A: Yes, but the risk of proprietary code being generated is high, and the legal implications are unclear.

Q: Will AI-generated code be subject to cease-and-desist claims by enterprising firms?
A: Yes, it’s possible that AI-generated code may be subject to cease-and-desist claims, creating a new type of “troll” industry.

Q: What is the best course of action for developers using AI-generated code?
A: Test your code thoroughly. Test, test, and then test some more.

Q: Will there be a regulatory agency to oversee AI-generated code?
A: There is currently no regulatory agency specifically focused on AI-generated code, but it’s possible that one may be established in the future.
