AI in 2024: A New Creative Frontier

Theft or Remix?

Perhaps the biggest complaint creatives have had about generative AI since the likes of Midjourney burst onto the scene is the use of original, human-made artwork as the raw material to train AI systems to produce their own images. Whether you view scraping (harvesting online content to use as training data) as outright copyright infringement or as a 21st-century spin on the famous maxim (usually attributed to Picasso) that “good artists copy, great artists steal”, the issue becomes a lot more real when you see AI-made work that’s stylistically eerily similar to your own, as many prominent illustrators have.

Fairly Trained, an organisation advocating for the ethical use of data in training, has issued a straightforward declaration: “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.” Among the 38,000 signatories to date are illustrator Claire Wendling and comic book artist Duncan Fegredo.

Where Do People Fit In?

Many creatives are worried or angry when they see generative AI producing material they might have expected themselves or their industry colleagues to have made previously. Artists and designers who’ve spent years honing their technical skills and their personal styles now find themselves pitted against people making material through text prompts.

It’s a hornets’ nest that Coca-Cola, arguably the world’s biggest brand, has poked on more than one occasion in 2024. In July, Ian reported from Siggraph on how agency WPP was collaborating with Nvidia to produce AI-generated Coca-Cola assets by the truckload, placing AI-made 3D assets into configurable virtual photoshoots. More contentious was the recent remake of a classic Coca-Cola Christmas TV spot, made entirely with AI and challenging the company’s own ‘Real Magic’ tagline: it felt neither real nor magical.

Against Slop

Beyond the economic concerns, the strong response from both the creative community and the broader consumer audience to AI-generated content is that so much of it is – how do I put this diplomatically? – awful. AI-shaming, in which everyone competes to find the most egregious examples of bad AI art, has become one of social media’s biggest blood sports.

AI Done Right?

If large-scale generative AI brings with it the risk of producing generic, lowest-common-denominator material, perhaps the most promising direction forward lies with private AI models – engines trained on smaller, privately owned data sets, with a tighter end goal than conjuring whatever a text prompt might describe.

The agency Rehab is a compelling example that Joe looked at for Creative Bloq’s AI Week in June. Rehab is using AI not to produce artwork, but to streamline the research stage of its projects, leaving responsibility for generating ideas and creative work from that research to people. Data from social media and commissioned research helps Rehab’s AI to build profiles of potential customers: what they’re thinking, what they’re buying, and so on.

A Return to Discernment (Please)

If 2024 has taught us anything about AI in the creative fields, it’s that human taste is going to be more important than ever. The novelty of having access to a toolset that makes creating images and video easier than ever before is wearing off fast, and expecting people to respond to your content simply because of the tool you used to make it isn’t, and never has been, enough.

We’ve been here before, with digital art that felt too airbrushed and 3D graphics that hurled us into the Uncanny Valley, where something looks right and wrong at the same time. Seen in this broader context, generative AI becomes the newest inflexion point in a challenge artists and creatives have faced ever since Photoshop first shipped in 1990. At what stage in your use of technology do you lose the sense of human connection that lies at the heart of any worthwhile creative endeavour? When does making art become so easy that you miss the part where you put something of yourself into the work?

AI image-making is succeeding massively at democratising the creative process, and failing wholeheartedly at fostering art with a genuine human connection. Can it ever find a balance? 2025 will start to answer that question.

Conclusion

The future of AI in the creative fields is uncertain, but one thing is clear: the stakes are high. As AI becomes more integrated into our workflows, we must be mindful of the potential consequences and strive to strike a balance between technology and human connection. The onus is on us to ensure that AI is used responsibly and ethically, and that it enhances our creativity rather than replacing it.

FAQs

Q: What is generative AI?
A: Generative AI refers to artificial intelligence that can create new content, such as images, videos, or music, based on patterns and structures it has learned from existing data.

Q: Is AI going to replace human artists?
A: While AI can generate impressive content, it is unlikely to replace human artists entirely. AI can augment human creativity, but it lacks the emotional depth and nuance that human artists bring to their work.

Q: How can I use AI in my creative work?
A: AI can be used in a variety of ways, from generating ideas and inspiration to automating repetitive tasks and speeding up your workflow. The key is to treat it as a tool that supports your creativity rather than a substitute for it.

Q: What are the ethical implications of using AI in the creative fields?
A: The ethical implications are significant. AI can produce content that is biased, inaccurate, or misleading, and it can be used to exploit or manipulate people. Anyone working with these tools needs to weigh those risks before, not after, putting AI-made work into the world.
