OpenAI peels back ChatGPT’s safeguards around image creation.

OpenAI’s New Image Generator: A Shift in Content Moderation Policies

Introduction

OpenAI has recently launched a new image generator in ChatGPT, which has quickly gone viral for its ability to create Studio Ghibli-style images. However, one of the most notable changes OpenAI made this week involves its content moderation policies.

New Content Moderation Policies

OpenAI has "evolved" its approach to content moderation, according to a blog post published by OpenAI’s model behavior lead, Joanne Jang. The company is shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm.

Allowing Requested Content

Under the updated policy, ChatGPT can now generate and modify images of public figures, including Donald Trump and Elon Musk, whose depictions OpenAI previously refused to create. The company is also giving individuals an opt-out option if they don’t want ChatGPT depicting them.

Generating Hateful Symbols and Racial Features

OpenAI is also allowing ChatGPT users to generate hateful symbols, such as swastikas, in educational or neutral contexts, as long as the requests don’t clearly praise or endorse extremist agendas. Additionally, the company is changing how it defines "offensive" content, permitting requests to modify physical characteristics, such as "make this person’s eyes look more Asian" or "make this person heavier."

Consequences and Controversies

The loosened content moderation policies have raised concerns about the potential for misuse and the ethics of allowing AI chatbots to fulfill sensitive requests. The culture war around AI content moderation may be coming to a head, with some calling for more transparency and others arguing that the shift is necessary to give users more control.

Conclusion

OpenAI’s new image generator and its loosened content moderation policies have significant implications for the future of AI technology. While it remains to be seen how these changes will be received, the company has made clear that it intends to give users more control and to adapt to the evolving landscape of AI.

Frequently Asked Questions

Q: Why is OpenAI changing its content moderation policies?
A: OpenAI is shifting its approach to content moderation to focus on preventing real-world harm and giving users more control.

Q: What types of content are now allowed on ChatGPT?
A: ChatGPT can now generate and modify images of public figures, generate hateful symbols in educational or neutral contexts, and fulfill requests around physical characteristics.

Q: How do these changes affect the potential for misuse?
A: While OpenAI is implementing safeguards to prevent real-world harm, the changes in content moderation policies may increase the potential for misuse.

Q: What is the impact on the culture war around AI content moderation?
A: The changes in content moderation policies are likely to intensify the culture war around AI, with some calling for more transparency and others arguing that the shift is necessary to give users more control.
