AI Themes Dominate SXSW

SXSW: AI Safety and Responsibility

Although AI capable of taking over the world remains confined to science fiction, existing artificial intelligence can still cause harm: it hallucinates, trains on people's data without their consent, and draws on other people's work to generate new outputs. How do these shortcomings square with the rapid pace of AI adoption?

The Use Case Matters

There is no denying that AI systems are flawed. They often hallucinate and incorporate biases in their responses. As a result, many worry that incorporating AI systems into the workplace will introduce errors in internal processes, negatively impacting employees, clients, and business goals.

The key to mitigating this issue is carefully considering which tasks to delegate to AI. For example, Sarah Bird, CPO of responsible AI at Microsoft, looks for use cases that are a good match for what the technology can do today.

"You want to make sure you have the right tool for the job, so you shouldn’t necessarily be using AI for every single application," said Bird. "There are other cases where perhaps we should never use AI."

Humans Are Here to Stay

As AI systems become more intelligent and autonomous, people are naturally alarmed at the technology's potential to disrupt the workforce by making humans more replaceable. However, the business leaders speaking at SXSW agreed that even though AI will transform work as we know it, it won't necessarily replace it.

"AI is allowing people to do more than they did before, not necessarily a wholesale replacement," said Ella Irwin, head of generative AI safety at Meta. "Will some jobs be replaced? Yes, but like with any other technology, such as the internet, we will see new jobs develop, and we will see people using this technology and doing their jobs differently than before."

User Trust Will Be One of the Biggest Challenges

When discussing obstacles to AI development, the roadblocks people consider typically involve the technical side of building models: how to make them safer, faster, and cheaper. A part of the discussion that is often left out, however, is consumer sentiment.

At SXSW, the role of the consumer was heavily discussed because, ultimately, these models will only be helpful and transformative if people trust them enough to consider trying them out.

"AI is only as trustworthy as people place the trust in it — if you don’t trust it, it’s useless; if you trust it, you can start the adoption of it," said Lavanya Poreddy, head of trust & safety at HeyGen.

Conclusion

The future of AI is not as ominous as the headlines may suggest. While AI systems can cause harm, that risk is best mitigated by carefully choosing use cases and ensuring that AI is deployed in a way that aligns with an organization's goals and values. Moreover, AI will not replace humans so much as enable them to do more, and to do it better.

FAQs

Q: Is AI capable of taking over the world?
A: No. AI capable of taking over the world remains science fiction, though existing systems can still cause harm, such as hallucinating or misusing data.

Q: How can we ensure the safety of AI systems?
A: By carefully considering the use cases and ensuring that AI is used in a way that aligns with the goals and values of the organization.

Q: Will AI replace humans?
A: No. Some jobs will change or disappear, but as with past technologies such as the internet, new jobs will emerge, and AI will largely enable people to do more and do it better.

Q: How can we increase trust in AI?
A: By being transparent about the models, how they were trained, and ensuring that there are safety approaches in place.
