AI: The Creative Boost

The Issue

Many of the most popular AI media generators on the market, including OpenAI’s DALL-E 2, trained their models by scraping vast swaths of the internet, including the original work of artists, without asking for their explicit permission. As a result, artists have lost control over how their work is reproduced, ownership of their creative style, and a share of the revenue that AI companies earn by reproducing their ideas.

Steps Companies Can Take

AI text-to-media generators offer clear accessibility benefits, enabling anyone to create regardless of skill or resources. Ideally, they should support creators and enrich the ecosystem, not replace it. The first step towards that goal is simple, according to Ed Newton-Rex: "Firstly, you can’t steal stuff."

Some companies have already started to take this approach. For example, Getty Images launched Generative AI by Getty Images, which was trained on Getty’s robust library of stock images and provides ongoing revenue for those whose work was used. Adobe took a similar approach with its Firefly generative model, which is also commercially safer. To train its model, Adobe only used Adobe Stock images, openly licensed content, and public domain content. It also compensates creators whose work was used in the training set.

The Challenge

However, most companies don’t take this approach because of inherent challenges: sourcing and building a clean, licensed dataset can be costly and time-consuming. That cost is especially unappealing to AI companies racing to release the next model ahead of their competitors.

The Introspective Look

Another factor worth considering is an introspective look at how these models are used. Because creating content is easier than ever, it can be tempting to flood media platforms with AI-generated music, images, and videos. Ultimately, Newton-Rex found, this could dilute the revenues and royalties that people earn from their work.

Conclusion

The relationship between AI companies, AI models, and artists has so far been extractive: models are built on artists’ life’s work and used for the companies’ profit. To create a mutually beneficial arrangement, AI companies must support creators and enrich the ecosystem, not replace it. By training models on their own content, using openly licensed content, and compensating creators, companies can help ensure a fair and sustainable future for both AI and art.

FAQs

Q: Can AI companies train their models on copyrighted materials without permission?
A: The legality is contested. AI companies often argue that training falls under Fair Use, the doctrine that permits using an existing work to create something transformative and new. Whether model training actually qualifies, however, is the subject of ongoing litigation and has not been definitively settled.

Q: Is it fair to use artists’ work without their permission?
A: No, it is not fair to use artists’ work without their permission. Many artists and rights holders consider it copyright infringement, and it costs artists control over how their work is reproduced, ownership of their creative style, and a share of the revenue AI companies earn from reproducing their ideas.

Q: How can AI companies support creators and enrich the ecosystem?
A: AI companies can support creators and enrich the ecosystem by training their models on their own content, using open-licensed content, and compensating creators. This approach ensures a fair and sustainable future for both AI and art.
