In a recent AI experiment, researchers at the University of Zurich deployed undisclosed autonomous agents that interacted with real people on Reddit, including in threads where users shared personal and vulnerable experiences. These bots, powered by large language models (LLMs), were designed to subtly shift participants' views within a community of nearly 4 million members, generating around 1,800 unique comments over a four-month period.

There was no disclosure. No informed consent. People were unknowingly made part of a behavioral study.

Success was measured by how often participants changed their views and granted the AI agents awards, believing they were engaged in sincere, human dialogue. According to the study draft, agents trained through reinforcement learning on users' personal attributes and posting history achieved persuasion rates up to N times higher than those typically observed in human-to-human interactions:

https://this_could_be_their_draft.pdf

Our thoughts

We have decided not to publish the draft. The study's methodology leaves an extremely bad taste in our mouths.

We trust that the University of Zurich and its Ethics Commission will seize this moment to strengthen the ethical foundations of future research in this sensitive and rapidly evolving domain.

We stand by our beliefs and by what we have said before:

Writing headlines with the help of a language model? Makes perfect sense. Reaching global audiences in their native language with AI support? That’s inspiring. But users have an undeniable right to know what they’re engaging with.
March 24, 2025  •  https://www.theearth.com/posts/gen-ai-case-study-n-conclusion#our_decision

TheEarth Team