
Learning to Critique AI

AI · Philosophy · Future of Work

This post was originally written in Chinese and published on my WeChat Moments (朋友圈). It has been translated by Claude and adapted for this blog.

Long post. I watched the Anthropic launch event this morning, and I'd like to share some reflections I've gathered over the past few years about intelligent agents.

Let's start with a question. Do you believe your current personal value — whether measured by salary, investments, or assets — still derives from what I call "Old Knowledge": the knowledge accumulated through 12 years of compulsory education, university coursework, and work experience? By Old Knowledge, I specifically mean anything that anyone could learn on their own given enough time. I want to argue that the value of Old Knowledge will asymptotically approach zero, and what will replace it is the "New Knowledge" demanded by a new era.


The Collapse of Old Knowledge

Any value built upon a massive accumulation of Old Knowledge will become worthless. When a system large enough to contain millions of libraries, aggregating millennia of human knowledge, picks up tools forged by the world's most elite engineers, your Old Knowledge looks vanishingly small by comparison.

Deep down, we all understand this. Every time we have a freewheeling conversation with AI, we realize that the Old Knowledge this entity commands is probably — no, already is — more than any single person could master in a lifetime. Any question with a definitive answer becomes, for this super-brain, merely a matter of time and compute. By elementary economics: when this super-entity costs less than your labor and its capabilities exceed yours, you will be replaced. This isn't future tense — it's a structural shift happening right now.

Yet most people remain in denial. Some raise doubts, others push back, but the majority of AI criticism is still stuck at the level of "AI can't." I'll break down three types of low-level criticism that are still rooted in Old Knowledge thinking.


Three Low-Level Criticisms of AI

1. The Skill Layer

Critics fixate on temporary shortcomings at the specific-skill level: "AI can't do my dishes." This type of criticism is already dissolving in the face of technological progress. The claims everyone made in 2024 that AI could never produce convincing art and animation have long since been overturned. Anyone criticizing AI from the skill layer is ignoring the explosive growth of large models over just a few years, and the inevitability that comes with technological progress. As robotics continues to advance and spread, the manual labor you think is irreplaceable will be displaced just as swiftly as the textile workers who met the spinning jenny. This skill-based criticism is fundamentally no different from, and just as shallow as, the earlier criticism based on breadth of knowledge.

2. The Thinking Layer

Critics argue that "AI's mode of thinking is inferior": that it cannot engage in deep thought or draw connections to underlying causes. This critique is a step above the first, attempting to attack the lack of foundational logic in language models. First, I'd recommend most of these critics try the premium tier of advanced models. But if you're still unconvinced, let me invoke the famous infinite monkey theorem: given enough monkeys, enough typewriters, and infinite time, one of them will eventually type out the complete works of Shakespeare. The analogy maps neatly onto AI. A mode of thinking can be expressed in text. If you drive massive compute so that enough large models generate output, and what they produce is indistinguishable from what a thinking entity (a human) would create, then even if the model itself doesn't possess thought, this criticism becomes meaningless.
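The arithmetic behind the monkey theorem is worth making concrete, because it also shows why the argument hinges on massive scale: the probability of randomly producing a target text shrinks exponentially with its length. Here is a toy Monte Carlo sketch in Python (the function name and targets are purely illustrative, not from any real benchmark):

```python
import random
import string

def monkey_hit_rate(target: str, trials: int, seed: int = 0) -> float:
    """Estimate how often uniformly random typing reproduces `target` exactly."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        attempt = "".join(rng.choice(string.ascii_lowercase) for _ in target)
        if attempt == target:
            hits += 1
    return hits / trials

# For a 2-letter target over a 26-letter keyboard, theory predicts
# one exact hit per 26**2 = 676 attempts on average.
rate = monkey_hit_rate("ai", trials=200_000)
print(rate, 1 / 26**2)
```

Each extra character multiplies the required attempts by 26, which is exactly why the argument only becomes interesting once compute is effectively unlimited.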

3. The Species Layer

This is, in my opinion, the most valuable criticism. Critics argue that AI cannot bear responsibility or perform the other social functions unique to humans. I can't offer a highly constructive rebuttal here, because any counterargument would be based on a social model that doesn't yet exist. But that is also the most resounding response: every major technological revolution is accompanied by a reshuffling of social structures. The assumption that human modes of interaction will remain unchanged is, without question, unsustainable. Whether the future evolves AI into legally recognized entities capable of bearing social responsibility, or whether accountability is assigned to algorithms at the corporate or national level, existing models will inevitably be affected and transformed.


What's Actually Worth Learning

I've spent a lot of time on what I consider fairly misguided criticisms. Now I want to propose what I believe is genuinely valuable: three skills for the short, medium, and long term. The New Knowledge behind them is what I believe algorithms lacking true intelligence cannot replace.

1. Short-Term: Learn Meta-Architecture

Architecture as a skill isn't limited to engineers. At its core, it means designing an efficient, maintainable system with room for growth. A mature e-commerce platform, the web of employee relationships within a company, even a nation's constitution — all are forms of architecture. Meta-architecture, then, is about how to build systems that produce architecture.

In the software world, architects are essentially the first wave of designers who introduced systems thinking and chose containers and runtime frameworks. This function is built on Old Knowledge and is already being rapidly replaced. But the meta-architectural question — how to construct a powerful architect — still needs to be solved.

A mature meta-architect is someone who can highly abstract a vast body of existing architectures, not fixating on the elegance or shortcomings of any single one, but instead developing their own universal framework generator. I admit my description of this concept is incomplete, but think of the difference between thinking and thinking about how to think.
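The thinking versus thinking-about-how-to-think distinction has a loose analogue in code, which may make it easier to grasp. In this hypothetical sketch (all names are illustrative), the first function is one concrete architecture; the second is a generator whose output is architectures, so the design decisions themselves become inputs:

```python
# First-order "architecture": one concrete, hand-built pipeline.
def pipeline_v1(data: list[int]) -> list[int]:
    cleaned = [x for x in data if x >= 0]  # a fixed design decision
    return sorted(cleaned)

# "Meta-architecture": a generator that turns a spec into a pipeline,
# so the design decisions are parameters rather than hard-coded choices.
def make_pipeline(steps):
    def pipeline(data):
        for step in steps:
            data = step(data)
        return data
    return pipeline

# The same behavior as pipeline_v1, but produced from a spec.
pipeline_v2 = make_pipeline([
    lambda xs: [x for x in xs if x >= 0],
    sorted,
])

print(pipeline_v2([3, -1, 2]))  # → [2, 3]
```

The analogy is deliberately shallow: a real meta-architect abstracts over whole systems, not list transforms. But the shape of the move is the same, stepping up one level so that the thing you design is a producer of designs.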

Why do I believe meta-architecture won't be replaced in the short term? Because the logic of meta-architecture is not as readily accessible as Old Knowledge. Most people spend their entire lives living within architectures; we rarely get the opportunity to change one, let alone engage in the deep thinking meta-architecture demands. As a result, AI's Old Knowledge has not yet widely absorbed the mental models and hard-won experience distilled from a meta-architect's countless trials and failures.

2. Medium-Term: Rethink What It Means to Be Human

Years ago, we used to say that the greatest difference between humans and animals lies in the knowledge systems we proudly build and pass down (not social structures — animal social diversity, including cooperation and resource distribution, rivals human complexity). But AI has already shaken our sense of human value.

When the robotics revolution — which will inevitably arrive within years — gives intelligent agents physical bodies; when massive learning, simulation, and computation begin producing Shakespeare written by monkeys, with emotional discourse surpassing what generations of writers understood about the full spectrum of human feeling — can we still proudly claim emotion as our baseline for being human?

I can't give an answer. But I believe that if you're still using "I have a body, I have feelings" as proof of your humanity, your future will be filled with confusion. Perhaps someday the ability to defecate will be the greatest difference between humans and machines — because surely no designer, human or AI, would bother giving a machine such a peculiar system.

Regardless, just as social structures continually evolve, our definition of what it means to belong to the human species must also evolve. This process belongs to all of humanity. I believe we neither should nor will delegate such a crucial undertaking to AI. Every moment of redefining humanity will bring enormous opportunity. Learning what is human and what is not — grasping the first principles of being human — will be an essential skill for the future.

3. Long-Term: Bias Toward Action

My first two points lean toward thinking — thinking about meta-architecture, thinking about humanity. But I've long overlooked the importance of action. I often dismiss an idea with "this lacks a practical landing point," rejecting it on technical grounds before even attempting execution. But this bias toward thinking over action, based on one's own past experience, is itself a form of Old Knowledge.

The cost of taking an idea from zero to one is also trending toward zero. If we keep evaluating the feasibility of ideas based on our prior experience, we've already fallen into the Old Knowledge trap. Whenever a wild idea strikes, find a way to validate it with minimal resources. Try it. Even something as absurd as the uniqueness of human defecation might carry a grain of truth — and a bizarre idea that flickers through our minds in an instant might take AI hours to generate.

Wild ideas paired with cheap agent-driven action will be the irrefutable model of human creation, now and far into the future.


Closing Thoughts

I've written a lot, mostly because I'm tired of discussing AI with people who offer what I see as shortsighted and shallow critiques. I wanted to put forward my thoughts on what's worth learning and applying in the short, medium, and long term. If you are engaging in any thinking about AI, no matter how simple — you are still early. Don't let an exponentially developing technology open up a knowledge gap between you and it.

This was my first long-form post on WeChat Moments. If there are any shortcomings, I welcome suggestions, discussion, and collaborative thinking. Let's build a super-human collective brain together.