Lately, while using various AI tools, I’ve gradually noticed a shift: many systems include a lightweight instruction file, something like agents.md or a similar configuration.

These files share an interesting consensus: don’t write too much.

Don’t write down what the model can find on its own. Don’t write down general knowledge. Don’t even write down many of the things that “seem important.” What actually gets preserved is often just a small fraction: the things the model doesn’t know but that influence its behavior.
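To make that concrete, here is a minimal sketch of what such a file often looks like. The commands, version constraints, and rules below are invented for illustration; real files differ, but the shape is typical: short, specific, and limited to what the model couldn’t infer on its own.

```markdown
# agents.md (hypothetical example)

## Build and test
- Run `make test-fast`, not the full suite; the full suite needs a local database.

## Conventions the model can't guess
- We target Python 3.9, so avoid `match` statements.
- Error messages are user-facing; keep internal jargon out of them.

## Hard rules
- Never modify files under `legacy/`; that code is scheduled for deletion.
```

Notice what’s absent: no explanation of what Python is, no style advice the model already knows. Only the deltas.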

This, in turn, hints at a much bigger change: our understanding of “recording” may already be outdated.

In the past, we took notes to fight forgetfulness.

There was too much information to remember, so we stored it. Our understanding wasn’t deep enough, so we wrote things down multiple times to reinforce them. Notes were essentially an external hard drive for the brain—an extension of “storage capacity.”

But that premise is now being eroded.

Large language models have absorbed the vast majority of general knowledge. A model can now generate, in seconds, a decent version of the content you once spent time organizing, excerpting, and summarizing. If you’re still doing that work by hand, you’re essentially using human effort to replicate what machines do best.

So the question is no longer “to take notes or not,” but “what should we still take notes on?”

If we shift perspective, humans and AI are no longer in a simple tool-user relationship. It’s more like a collaborative system. The model handles public knowledge and general reasoning, while humans need to fill in “the part it will always miss.”

Your notes, then, essentially become that gap.

This is why the agents.md analogy holds up.

Truly valuable notes are never comprehensive—they are “intentionally incomplete.” They aren’t meant to cover the world, but to define you.

Specifically, there are three types of things you must record yourself.

First, your judgments.

AI can give you ten explanations, but it won’t make a choice for you. Why you believe a certain conclusion, under what conditions that conclusion holds—these are highly personal. If they aren’t recorded, they’ll disappear quickly, and the next time you face the same problem, you’ll have to go through the whole process again.

Second, your context.

Your industry, your team’s stage, your resource constraints—these factors determine why the same method yields completely different results in different scenarios. The model can offer a “general optimal solution,” but it can’t automatically adapt to your real-world environment.

What’s truly reusable isn’t the method itself, but “the conditions under which this method works.”

Third, your action paths.

Many people learn a lot but can’t apply it. The root cause isn’t a lack of understanding, but a failure to translate cognition into action. You know an idea, but you don’t know the next step to execute it. There’s a missing layer of structure in between.

And this structure is exactly where AI is most likely to “seem to understand, but get it wrong.”

If we apply the logic of agents.md, notes are no longer records of knowledge—they are behavioral constraints.

It’s not “what I know,” but “under what circumstances, what should I do.”
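Written that way, a note stops looking like a summary and starts looking like a rule. Here is one hypothetical entry in that shape (the specifics are invented): a trigger condition, an action, and the reason the rule exists.

```markdown
## When asked to estimate a new project
- Condition: the requester has no written success metric yet.
- Action: don't quote a price; ask for the metric first.
- Reason: every time I skipped this step, the scope crept and the estimate failed.
```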

Once you adopt this perspective, many traditional note-taking methods can be abandoned outright.

For example, extensive copying and pasting—the model already does it better. Or pursuing a complete, systematic framework—few people actually act according to a system in real life. Or meticulous formatting—to AI, that’s just noise.

Truly valuable notes, on the other hand, are concise, incomplete, and even biased. They only record the key variables that influence your decisions, not an attempt to reconstruct all the information.

At a higher level, this is essentially redefining “learning.”

In the past, the emphasis was on “what to remember.” Now, it’s more about “how to retrieve.” You don’t need to remember everything, but you need to know when to rely on AI and when to rely on yourself.

And notes are the dividing line between the two.

They tell you: what can be outsourced, and what must be internalized.

From a strategic perspective, this ability compounds over time.

When everyone has access to the same models and information asymmetry is nearly gone, the gap no longer comes from “how much you know,” but from “how you use what you know.” Whoever converts information into judgment, and judgment into action, fastest will pull ahead.

This is why, even with the same AI tools, the output gap between people keeps widening.

The difference isn’t the tool. It’s whether they have their own “constraint system.”

And notes are the most direct carrier of that system.

Of course, there’s a common pitfall here.

When people realize they need to upgrade their note-taking, they often swing to the other extreme—obsessively building systems, workflows, and tools, only to end up with “more complex recording.”

This is just a variation of the same problem: still solving for “how to store,” not “what to store.”

The real principle is simple.

If AI already knows it, you don’t need to write it down. If you’re not going to use it, you don’t need to write it down. Only what will influence your actions is worth keeping.
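Applied to a few invented candidates, the filter looks like this:

```markdown
- "Definition of OKR": drop, the model already knows it.
- "History of the Pomodoro Technique": drop, I'll never act on it.
- "Below five people, our team skips OKRs and sets one weekly goal": keep, it changes what I do.
```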

Back to the original question.

Should we still take notes in the AI era?

Yes. But it’s no longer about remembering the world. It’s about calibrating yourself.

When information is infinitely available, what’s truly scarce isn’t answers, but the ability to choose which answers matter. Notes are no longer copies of knowledge; they are the externalization of your decision-making process.

In a way, it really is like an agents.md file.

Not written for AI to read, but for “future you.”