1. Introduction
Essential Skills, Adaptability, Influence of AI
We’re starting to see that the ability to adapt and act independently is becoming more important, especially as intelligent systems may soon match human abilities in many areas.
A few points are already clear:
- AI can help us improve most things we use.
- Adaptability matters for staying competitive at work.
- Public digital spaces and AI systems now give individuals more influence than ever before.
In the following section, we will look at real examples of automated systems that mimic people, and at how platforms themselves might affect personal autonomy. In the "Inspirations" section, we will draw analogies with earlier, more resilient systems, analyzing their strengths and limitations with a focus on simple design, proof of effort, and social rules. Finally, we’ll highlight what, in our view, might be important for designing and using platforms that are more resistant to manipulation.
2. Challenges
The Undetected Autonomy "Then"
More than a decade ago, intelligent systems such as convolutional neural networks (CNNs) began to successfully solve perception tasks that had previously served as common barriers against bots:
Unfortunately, as a consequence of rapid advances in machine learning and computer vision, bots evolved to quickly and accurately recognize distorted text, reaching over 99% accuracy by 2014.
...
By 2016, both (1) a combination of behavioral analysis and a simple checkbox, and (2) image classification were defeated with a high degree of accuracy by bots.
21 Nov 2023, "Dazed & Confused: A Large-Scale Real-World User Study" • https://arxiv.org/pdf/2311.10911
The current era is defined by much more capable systems: large language models (LLMs). These models now serve as the “brains” of autonomous agents and present a challenge for public digital spaces, as both behavior and context can be convincingly mimicked at a human level. As a result, interactions with AI systems are becoming indistinguishable from those with real people.
Although our initial outlook on the evolution of digital platforms was optimistic:
While the ecosystem will inevitably evolve — with new, more adaptive and better-protected platforms emerging — the outcome will be net positive.
March 24, 2025 • https://www.theearth.com/posts/gen-ai-case-study-n-conclusion#web_2.0
we’ve also noted that reinforcement learning can effectively circumvent behavioral defenses:
Unlike previous waves of automation, today’s agents not only generate content — they adapt to user behavior through reinforcement learning. This makes them nearly indistinguishable from real users and poses serious challenges for existing Web 2.0 platforms ↗
March 24, 2025 • https://www.theearth.com/posts/gen-ai-case-study-n-conclusion#web_2.0
The Undetected Autonomy "Now"
Today, many researchers and agent developers recognize the potential for creating new vulnerabilities:
Equipped with advanced reasoning, planning, and tool-using capabilities, these agents are designed to perform tasks autonomously and interact with their environments. However, this autonomy also expands the attack surface, creating new vulnerabilities that demand careful research and attention.
31 Mar 2025, "Advances and Challenges in Foundation Agents", p. 161 • https://arxiv.org/pdf/2504.01990
With complex simulation frameworks, these agents may now learn entirely from synthetic environments. Previously, we highlighted how combining four engines (User, Scenario, Behavior, and Social Environment) creates a space where agents can learn to forecast group choices (A World Model for Social Simulation ↗). A minimal sketch of how such engines might be composed follows the excerpt below.

Framework Validation: Three Simulation Scenarios
Built from a pool of 10 million social media users where each profile includes 15 attributes
...
The team conducted three large-scale experiments to validate the system.
The forecasting results demonstrated high accuracy across key metrics.
...
The study shows that there are already frameworks capable of using LLM-powered agents to forecast group choices in social, geopolitical, and economic scenarios with over 90% accuracy.
April 21, 2025 • https://www.theearth.com/posts/world-model-for-social-simulation
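To make the architecture more tangible, here is a minimal, purely hypothetical sketch of how four such engines might be composed into one simulation loop. The class and method names are our own illustration and do not reflect the framework's actual API.

# Hypothetical composition of the four engines; class and method names
# are our own illustration, not the framework's actual API.
from dataclasses import dataclass


@dataclass
class Profile:
    attributes: dict  # e.g. the 15 attributes of a simulated user


class UserEngine:
    def sample(self, pool: list, k: int) -> list:
        return pool[:k]  # draw k profiles from the simulated population


class ScenarioEngine:
    def build(self, topic: str) -> dict:
        return {"topic": topic, "events": []}  # seed the scenario context


class BehaviorEngine:
    def act(self, user: Profile, scenario: dict) -> str:
        return "post"  # an LLM-backed policy would generate the action here


class SocialEnvironmentEngine:
    def update(self, scenario: dict, actions: list) -> dict:
        scenario["events"].append(actions)  # propagate actions to the group
        return scenario


def simulate(pool: list, topic: str, steps: int = 3) -> dict:
    users = UserEngine().sample(pool, k=min(100, len(pool)))
    scenario = ScenarioEngine().build(topic)
    behavior, env = BehaviorEngine(), SocialEnvironmentEngine()
    for _ in range(steps):
        actions = [behavior.act(u, scenario) for u in users]
        scenario = env.update(scenario, actions)
    return scenario  # aggregated outcomes feed the group-choice forecast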
The study demonstrates that agents can now improve without needing to bypass platform defenses or operate in live environments, as they did before. However, such undetected autonomous agents are already operating alongside users in live environments (Perspective: Zurich AI Study ↗)
These bots, powered by LLMs, were designed to subtly shift participants’ views within a community of nearly 4 million members — generating around 1,800 unique comments over a four-month period.
April 28, 2025 • https://www.theearth.com/posts/our-perspective-on-the-university-of-zurich-ai-study
Even if an account is verified, we may soon find ourselves asking:
- Was this content reviewed by a real person?
- Are we sure we’re talking to a human, not a machine?
- Did someone actually spend time or effort here?
- When a machine does the work, is responsibility the same as when a person does it?
Conclusion
In this new context, we are inclined to conclude:
Verification can make us feel safe, but unless new social norms or other widely supported agreements are established at the platform level, it doesn’t always mean a real person is involved.
Moreover, additional complications may arise when these dynamics are compounded by:
- The widespread adoption of generative AI content, and
- The rapid evolution of hyper-personalized, algorithmically ranked media environments
The Anatomy of Hyper-personalization
Any DDoS attack is an attempt to disrupt a system’s operation by sending it a massive volume of traffic or specially crafted requests. Usually, its goal is to exhaust available resources and make the service unavailable to legitimate users. In practice, a DDoS attack can give an attacker information about response times, load capacity, and routing. So while this attack is able to take a service offline, it can also serve as a distraction from a larger attack, intrusion, or exploitation.
Today, individuals are often surrounded by ever-shifting streams designed by hyper-personalization to maximize attention and emotional response. Over time, this could create a predictive feedback loop: anticipating actions, reinforcing patterns, and amplifying some narratives while suppressing alternatives.
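As a toy illustration of such a loop (our own sketch, not any platform's actual ranking code), consider a ranker that boosts whatever earned engagement in the previous round; after a few iterations, exposure concentrates on a narrow set of items:

import random

# Toy feedback loop: the ranker boosts whatever was engaged with before,
# so exposure gradually concentrates on a shrinking set of narratives.
scores = {f"narrative_{i}": 1.0 for i in range(5)}

for step in range(10):
    # show 20 items, sampled in proportion to current ranking scores
    feed = random.choices(list(scores), weights=list(scores.values()), k=20)
    for item in feed:
        scores[item] += 0.5  # engagement reinforces the ranking...
    for item in scores:
        scores[item] *= 0.9  # ...while everything else slowly decays

print(max(scores, key=scores.get), scores)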
These algorithms are capable of acting as sophisticated exploits, seeking out humans’ most vulnerable “ports” (habits, triggers, preferences) and directing highly effective targeted requests at them. Moreover, we frequently outsource our "defensive filters and firewalls" to the platforms themselves.
We believe this comparison can be useful both for identifying potential solutions and for understanding why such solutions could be difficult within current platform-centric revenue models.
P.S.: This is why it's worth being mindful of your own cognitive and contextual load.
3. Inspirations
Connecting the Dots: Decentralized Systems
When evaluating architectures that could support greater authenticity and resilience in digital environments, we see two reasons to look to the field of decentralized systems for lessons:
1. The Defense Mechanisms against Manipulation
The mechanism that protects these systems from manipulation was inspired by early attempts to defend against DDoS attacks.
The concept of PoW has its roots in early research on combating spam and preventing denial-of-service (DDoS) attacks.
Wikipedia, Proof of Work • https://en.wikipedia.org/wiki/Proof_of_work
In both cases, systems are designed to make large-scale, malicious actions costly and impractical.
2. The Double-Spending Analogy
In our view, the problem of a machine presenting its output or activity as if it were human is, in some ways, similar to the double-spending problem in financial systems.
Double-spending is the unauthorized production and spending of money, either digital or conventional.
Wikipedia, Double-spending • https://en.wikipedia.org/wiki/Double-spending
In both cases, the system needs to ensure that each transaction or action is genuine and unique. In blockchain, the double-spending problem is solved at the level of architecture and protocol. In digital spaces, the problem of identity and the authenticity of the actor, the action, or the intent remains unresolved — and the evolution of AI only amplifies this challenge.
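For illustration, here is a toy sketch (our own, far simpler than any real blockchain) of the ledger-side rule: each coin may be consumed exactly once, and a second attempt to spend it is rejected. Public digital spaces have no comparable ledger for "this action was genuinely performed by this unique human."

# Toy ledger: each coin id may be spent exactly once. Real blockchains
# enforce this rule through consensus over a shared set of unspent outputs.
class Ledger:
    def __init__(self, coins):
        self.unspent = set(coins)

    def spend(self, coin_id: str) -> bool:
        if coin_id not in self.unspent:
            return False  # double-spend (or unknown coin): rejected
        self.unspent.remove(coin_id)
        return True


ledger = Ledger({"coin_1", "coin_2"})
assert ledger.spend("coin_1") is True
assert ledger.spend("coin_1") is False  # the same coin cannot be spent twice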
Let’s examine the effectiveness of Bitcoin components through the lens of the most relevant facts:
1. The Power of Simplicity in Decentralized Networks
We will start with Bitcoin’s design: the simplicity of its protocol and the unique value at its core.

Briefly
1. Simple systems are easier to understand and use, which helps build trust among users.
2. At the core of these systems lies a verifiable evidence mechanism — Proof of Work (PoW).
3. This mechanism verifies the legitimacy of each block of transactions through a process that requires computational work before the block is added to the blockchain.
4. At the core of this mechanism is the process of finding a nonce: a unique value that, when combined with the block header and hashed using the SHA-256 function, produces a hash that meets the required network difficulty. This ensures the block’s validity and secures the blockchain against tampering (see the sketch after the quote below).
Nonce is an arbitrary number that can be used just once in a cryptographic communication. It is often a random or pseudo-random number issued in an authentication protocol to ensure that each communication session is unique, and therefore that old communications cannot be reused in replay attacks.
Wikipedia, Cryptographic nonce • https://en.wikipedia.org/wiki/Cryptographic_nonce
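A minimal sketch of that search follows. It is simplified: Bitcoin hashes an 80-byte binary block header twice with SHA-256 and compares the result against a full 256-bit target, while here we only require a few leading zero hex digits.

import hashlib

# Simplified proof-of-work: find a nonce such that SHA-256(header + nonce)
# starts with `difficulty` zero hex digits. (Bitcoin double-hashes a binary
# block header against a 256-bit target, but the principle is the same.)
def mine(header: str, difficulty: int = 4) -> tuple:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest  # cheap to verify, costly to find
        nonce += 1


nonce, digest = mine("block_header_data")
print(f"nonce={nonce}, hash={digest}")

Raising the difficulty by one hex digit multiplies the expected work by 16, which is exactly what makes large-scale abuse costly while verification stays cheap.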
2. Social Contract: Legitimacy and Resource Commitment
To illustrate the cost of the mechanism that protects decentralized networks from attacks resulting in double spending²: as of March 2021, the Bitcoin network was spending $43 million per day to maintain legitimacy via the Proof-of-Work mechanism, and the Ethereum network was spending $37.5 million per day. As of May 2025, the average daily miner revenue in the Bitcoin network has remained around the same $43 million, with approximately 400,000 transactions per day. The average cost per transaction (total miner revenue divided by the number of transactions) is therefore approximately:
$43,000,000 / 400,000 = $107.50 per transaction
This signals a constant and significant investment of computational work.
² Any pool that achieves 51% hashing power can effectively overturn network transactions, resulting in double spending. However, historical data and analytical forecasts indicate that the Bitcoin network is capable of withstanding a decrease in total computational power (hashrate) of 20-40% without a significant reduction in protection against a 51% attack. Nonetheless, this does not eliminate the need to maintain a substantial level of resources to ensure long-term security and resilience.
In summary
1. These numbers show the significant effort to keep the Bitcoin network secure and trustworthy.
2. This effort is justified by a collective agreement among the Bitcoin network's participants to recognize the blockchain as the authoritative ledger: a kind of 'social contract'.
Conclusion
- Simplicity makes protocols easier and safer to use.
- Shared agreements are equally important for trust.
- Proof of Work works not because it is flawless, but because it makes attacks too costly to be worthwhile.
Note: The Proof-of-Work (PoW) mechanism carries environmental risks. We do not, under any circumstances, endorse this level of resource consumption. In our view, it's a radical and excessive measure. However, we cite it here as an example of the price society is willing to pay to preserve trust and legitimacy within decentralized networks.
4. Further Thoughts and Proposals
When AI assists us in making decisions, the ability to adapt and take action becomes a key skill. Simple design, proof of real effort, and clear social rules could help build trust in systems. When systems become too complex, "simple" solutions could help us stay in control.
In summary, here are some points that are likely worth paying attention to:
# When language models outperform perception systems,
# our assumptions about human verification begin to fail.
if LLM > CNN:
    reconsider('verification_mechanics')
    reconsider('friction_placement')
    reconsider('system_design')
    reconsider('incentives')

# Not a technical parameter, but a matter of social will
propose('new_social_contracts_of_interaction')

# In an age where intent can be simulated,
# trust could be grounded in presence.
assert presence > intention

# Consider presence as a 'nonce',
# so that generated output can be trusted.
⚓️ Social Exchange (SE): Scaling 🔓 https://www.theearth.com/posts/reflections-scaling-trust-in-digital-systems ↗
Ultimately, in order to reap the rewards of automation while maintaining trust in systems, we, as users and as a society, may need to recognize that the resources we invest in safeguarding authenticity should be proportional to the significance of what we aim to protect. In our view, the key to better adaptation and beneficial synergy with AI lies in how we address these questions.