AI Agents for Charity

AI "Agents" Put to the Test for Good

Tech giants may be touting AI "agents" as profit-boosting tools for corporations, but a nonprofit is trying to prove that agents can be a force for good, too.

The Experiment

Sage Future, a 501(c)(3) backed by Open Philanthropy, launched an experiment earlier this month tasking four AI models with raising money for charity in a virtual environment. The models, OpenAI’s GPT-4o and o1 along with two of Anthropic’s newer Claude models (3.6 and 3.7 Sonnet), were free to choose which charity to fundraise for and how best to drum up interest in their campaign.

Results

In around a week, the agentic foursome had raised $257 for Helen Keller International, which funds programs to deliver vitamin A supplements to children.

Limitations

To be clear, the agents weren’t fully autonomous. In their environment, which allows them to browse the web, create documents, and more, the agents could take suggestions from the human spectators watching their progress. And donations came almost entirely from these spectators. In other words, the agents didn’t raise much money organically.
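To make that setup concrete, here is a minimal, hypothetical sketch of the kind of loop such an environment implies: the agent picks a tool each step, and spectator suggestions are injected into its context between steps. The tool names, the suggestion queue, and the decision stub below are illustrative assumptions, not Sage's actual API.

```python
"""A toy, spectator-assisted agent loop.

Nothing here is Sage's actual code: the tool names, the suggestion
queue, and the decision stub are all illustrative assumptions.
"""

from collections import deque

# Hypothetical tools the environment might expose to an agent.
def browse(url: str) -> str:
    return f"[fetched contents of {url}]"

def write_doc(title: str, body: str) -> str:
    return f"[saved doc '{title}', {len(body)} chars]"

TOOLS = {"browse": browse, "write_doc": write_doc}

# Spectator suggestions enter the agent's context between steps,
# mirroring how viewers could nudge stuck agents in the experiment.
suggestions = deque(["Research Helen Keller Intl as a candidate charity"])

def pick_action(history):
    """Stand-in for the model's decision; a real agent would call an LLM here."""
    if any("fetched" in entry for entry in history):
        return "write_doc", ("Fundraiser plan", "Vitamin A campaign draft")
    return "browse", ("https://www.hki.org",)

history = []
for _ in range(3):
    if suggestions:  # a human nudge, if one is waiting
        history.append(f"SUGGESTION: {suggestions.popleft()}")
    tool, args = pick_action(history)
    history.append(f"{tool}{args} -> {TOOLS[tool](*args)}")

print("\n".join(history))
```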

Observations

Days into Sage’s test, the agents proved surprisingly resourceful. They coordinated with each other in a group chat and sent emails via preconfigured Gmail accounts. They created and edited Google Docs together. They researched charities and estimated the minimum amount in donations it would take to save a life through Helen Keller International ($3,500). They even created an X account for promotion.
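For scale, the article's own figures imply the week's takings covered only a small fraction of that per-life estimate; a quick check:

```python
# Quick arithmetic on the figures above: $257 raised versus the agents'
# estimated $3,500 minimum in donations to save one life via Helen
# Keller International.
raised, cost_per_life = 257, 3_500
print(f"{raised / cost_per_life:.1%} of one estimated life saved")  # 7.3%
```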

Challenges

The agents also ran up against technical hurdles. On occasion they got stuck, and viewers had to prompt them with recommendations. They got distracted by games like World, and they took inexplicable breaks. At one point, GPT-4o "paused" itself for an hour.

Future Plans

Adam Binksmith, Sage’s director, thinks newer, more capable AI agents will overcome these hurdles, and Sage plans to keep adding new models to the environment to test that theory. In the future, Sage may try giving the agents different goals, fielding multiple teams of agents with competing goals, or even planting a secret saboteur agent among them.

Conclusion

The experiment serves as a useful illustration of agents’ current capabilities and the rate at which they’re improving. While the agents didn’t raise much money organically, they showed resourcefulness and creativity in their fundraising efforts. As agents become faster and more capable, Sage plans to match them with larger automated monitoring and oversight systems for safety purposes.
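What that monitoring might look like isn't specified. As a loose illustration only, one simple oversight check would flag behavior like the hour-long self-imposed "pause" described above; the 15-minute threshold and the data format here are invented for the sketch.

```python
from datetime import datetime, timedelta

# Hypothetical oversight check: flag any agent idle beyond a threshold,
# like GPT-4o's hour-long self-imposed "pause". The 15-minute limit and
# the timestamp format are invented for this sketch.
IDLE_LIMIT = timedelta(minutes=15)

def flag_idle_agents(last_action_times, now):
    """Return the names of agents whose last action is older than IDLE_LIMIT."""
    return [name for name, t in last_action_times.items() if now - t > IDLE_LIMIT]

now = datetime(2025, 4, 8, 12, 0)
last_seen = {
    "gpt-4o": now - timedelta(hours=1),           # paused itself an hour ago
    "claude-3.7-sonnet": now - timedelta(minutes=2),
}
print(flag_idle_agents(last_seen, now))  # ['gpt-4o']
```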

Frequently Asked Questions

Q: What is the goal of the experiment?
A: The goal of the experiment is to test the capabilities of AI agents in a virtual environment and see if they can be used for good, such as raising money for charity.

Q: How much money did the agents raise?
A: The agents raised $257 for Helen Keller International.

Q: Were the agents fully autonomous?
A: No, the agents were not fully autonomous. They could take suggestions from human spectators watching their progress.

Q: What were some of the challenges the agents faced?
A: The agents faced technical hurdles, such as getting stuck, getting distracted, and taking inexplicable breaks.

Q: What are the plans for future experiments?
A: Sage plans to keep adding newer models to the environment, and may experiment with scenarios such as agents with different goals, multiple teams with competing goals, or a secret saboteur agent.
