Companies around the globe are developing artificial intelligence (AI) applications with the goal of improving productivity, enhancing customer satisfaction, and boosting profitability. In many cases, adopting AI agents will mean replacing human workers for some tasks. But that raises a question: Should AI be given worker protections and rights? At least one major AI company is exploring that idea.
Anthropic has begun researching whether AI deserves the same types of considerations we afford human workers. The research is part of the company’s investigation into the potential for AI models to develop consciousness and whether humans should consider the well-being of the model.
“Human welfare is at the heart of our work at Anthropic: our mission is to make sure that increasingly capable and sophisticated AI systems remain beneficial to humanity,” Anthropic wrote in a blog post today.
“But as we build those AI systems, and as they begin to approximate or surpass many human qualities, another question arises. Should we also be concerned about the potential consciousness and experiences of the models themselves?” the company wrote. “Should we be concerned about model welfare, too?”
The potential for AI to develop consciousness was a big deal in the early days of the generative AI revolution. You will recall that Google suspended engineer Blake Lemoine in June 2022, and later fired him, after he declared that Google’s large language model (LLM) LaMDA had developed consciousness and was sentient.
Do AI agents deserve worker rights? (sdecoret/Shutterstock)
Following OpenAI’s launch of ChatGPT in late November 2022, a large number of AI researchers and technology leaders signed an open letter calling for a six-month pause on training the most powerful AI systems, based on the fear that uncontrolled escalation of the technology into the realm of artificial general intelligence (AGI) could pose a catastrophic threat to the future of mankind.
“If it gets to be much smarter than us, it will be very good at manipulating, because it will have learned that from us,” said Geoffrey Hinton, one of the so-called “Godfathers of AI,” who resigned his Google post so he could speak freely about the risks of AI.
Those existential fears largely faded into the background over the past two years, as companies concentrated on solving the large technological challenges of adopting GenAI and integrating it into their existing systems. A Gold Rush mentality has taken hold, with companies speeding adoption of GenAI, and now agentic AI, for fear of being permanently displaced by faster-moving competitors.
Meanwhile, LLMs have gotten very big over the past two years, perhaps as big as they can get given current limitations in power and cooling. January 2025 introduced us to DeepSeek and the new world of reasoning models, which provide more human-like problem-solving capabilities. Companies are starting to achieve real returns on their AI investments, particularly in areas like customer service and data engineering, although challenges remain (with data quality, data management, etc.), and investment in AI is surging.
However, legal and ethical concerns about AI adoption haven’t gone away, and now it appears they may be poised to come back to the forefront. Anthropic says it’s not the only organization conducting research into model welfare. The company cites a report co-authored by cognitive scientist and philosopher David Chalmers, titled “Taking AI Welfare Seriously,” which concluded that “there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future.”
Chalmers and his co-authors identify three things that AI-adopting institutions can do to prepare for the possibility of AI consciousness: “They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern.”
What would “an appropriate level of moral concern” actually look like? According to Kyle Fish, Anthropic’s AI welfare researcher, it could take the form of allowing an AI model to stop a conversation with a human if the conversation turned abusive.
“If a user is persistently requesting harmful content despite the model’s refusals and attempts at redirection, could we allow the model simply to end that interaction?” Fish told the New York Times in an interview.
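To make the idea concrete, here is a minimal sketch of how an application layer might let a model opt out of an abusive session. Everything in it is assumed for illustration: the refusal threshold, the keyword-based refusal detector, and the stand-in model call are hypothetical, not Anthropic’s actual implementation or API.

```python
# Hypothetical sketch: a chat wrapper that lets the model end an abusive session
# after repeated refusals. Thresholds and detectors here are illustrative only.
from dataclasses import dataclass, field

REFUSAL_LIMIT = 3  # assumed cutoff: end the session after this many refusals


@dataclass
class ChatSession:
    refusals: int = 0
    ended: bool = False
    transcript: list = field(default_factory=list)


def model_reply(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned refusal for harmful asks."""
    if "harmful" in prompt.lower():
        return "I can't help with that."
    return f"Here is a response to: {prompt}"


def is_refusal(reply: str) -> bool:
    """Naive refusal detector; a production system would use a trained classifier."""
    return reply.startswith("I can't")


def handle_turn(session: ChatSession, prompt: str) -> str:
    """Run one conversational turn, ending the session if refusals pile up."""
    if session.ended:
        return "This conversation has been ended."
    reply = model_reply(prompt)
    session.transcript.append((prompt, reply))
    if is_refusal(reply):
        session.refusals += 1
        if session.refusals >= REFUSAL_LIMIT:
            session.ended = True  # the model opts out of further interaction
            return reply + " I'm ending this conversation."
    return reply
```

The design choice is the interesting part: rather than forcing the model to keep responding indefinitely, the session itself carries state that allows the interaction to be terminated, which is one way Fish’s suggestion could surface in practice.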
What exactly would model welfare entail? The Times cites a recent comment by podcaster Dwarkesh Patel, who compared model welfare to animal welfare, stating it was important to make sure we don’t reach “the digital equivalent of factory farming” with AI. Considering Nvidia CEO Jensen Huang’s desire to create giant “AI factories” filled with millions of his company’s GPUs cranking through GenAI and agentic AI workflows, perhaps the factory analogy is apropos.
But what is not clear at this point is whether AI models experience the world as humans do. Until there is solid proof that AI actually “feels” harm in a way similar to humans, the realm of “model welfare” will likely be relegated to a field of research and not applicable in the enterprise.
Related Items:
Nvidia Preps for 100x Surge in Inference Workloads, Thanks to Reasoning AI Agents
What Are Reasoning Models and Why You Should Care
Google Suspends Senior Engineer After He Claims LaMDA is Sentient