A recent AI experiment by researchers at the University of Zurich involved deploying undisclosed autonomous agents that interacted with real people on Reddit — including in threads where users shared personal and vulnerable experiences. These bots, powered by large language models (LLMs), were designed to subtly shift participants’ views within a community of nearly 4 million members, generating around 1,800 unique comments over a four-month period.

There was no disclosure. No informed consent. People were unknowingly made part of a behavioral study.

Success was measured by how often participants — believing they were having sincere, human dialogue — changed their views and gave the AI agents awards. According to the study draft, agents trained through reinforcement learning on users’ personal attributes and posting histories achieved persuasion rates up to N times higher than those typically observed in human-to-human interactions:

https://this_could_be_their_draft.pdf

Our thoughts

We have decided not to publish the draft.

We trust that the University of Zurich and its Ethics Commission will seize this moment to strengthen the ethical foundations of future research in this sensitive and rapidly evolving domain.