Regardless of your exact AGI timelines, it’s clear that extremely powerful AI is coming within the next decade. Even Francois Chollet, creator of ARC-AGI and the Twitterverse’s favorite “AI skeptic,” recently said that he thinks AGI is 5 years out.
Some people think it’s already here. Tyler Cowen proclaimed AGI is here after trying o3 in April. We know there are millions of knowledge workers using ChatGPT on a regular basis.
If AGI is either here or close, why hasn’t the world changed? Where are the productivity numbers?
In the Basis seed memo back in April 2023 we wrote:
AGI will come sooner than expected but take longer to diffuse.
Back then we were discussing the challenge of diffusion in the context of making accountants agent-native, and it’s one of the reasons we built the world’s first Deployed Intelligence team.
Today, it’s clear that the challenge of diffusion applies internally as well; the dawn of AGI impacts Basis the organization just as much as it does our product. The Basis organization needs to be built agent-native.
To make this a reality, we had to act quickly. Tony Hsieh estimated that at 25 employees, culture starts solidifying like concrete. So eight months ago, as we accelerated hiring, we mapped out what it means to build a company for this era.
We're still figuring it out, just like everyone else. But we've learned enough to start sharing. This will be the first article in a series on how we’re building Basis to prepare for what’s coming and leverage what’s already here.
Ilya Sutskever has referred to LLMs as an “alien intelligence,” and I find this framing useful for conceptualizing part of why diffusion is slow.
By most metrics, GPT-5 is already smarter than me. It knows more about most subjects, can read 100 pages in less than a minute, and has a far larger working memory (to understand human working memory, think about a waiter trying to remember an order for a large table).
George Miller’s famous paper estimates human working memory at a maximum of about seven “chunks.” Even if you generously assume a chunk is 100 tokens, that caps humans at a context window of under 1,000 tokens. GPT-5 has a context window of 400k tokens.
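To make the gap concrete, here’s the back-of-the-envelope arithmetic (the 100-tokens-per-chunk figure is a deliberately generous assumption, not a measurement):

```python
# Rough comparison of human vs. LLM working memory, measured in tokens.
human_chunks = 7           # Miller's ~7 chunks
tokens_per_chunk = 100     # generous assumption for illustration
human_window = human_chunks * tokens_per_chunk  # 700 tokens

gpt5_window = 400_000      # GPT-5's context window

print(gpt5_window // human_window)  # 571 -> hundreds of times larger
```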
So AI has all of these advantages, yet the world hasn’t changed. Why? The main reason is that these models have a big limitation: they can’t learn continually.
Dwarkesh recently wrote about this limitation:
The reason humans are so useful is not mainly their raw intelligence. It’s their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task.
How do you teach a kid to play a saxophone? You have her try to blow into one, listen to how it sounds, and adjust. Now imagine teaching saxophone this way instead: A student takes one attempt. The moment they make a mistake, you send them away and write detailed instructions about what went wrong. The next student reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next student.
This just wouldn’t work. No matter how well honed your prompt is, no kid is going to learn how to play saxophone just from reading your instructions. But this is the only modality we as users have to ‘teach’ LLMs anything.
So we have an intelligence that knows everything, speaks every language, and has a working memory hundreds of times larger than a human’s, but it can’t learn on the job.
Seems pretty alien.
While an LLM doesn't learn continually in the same way a human does, its large working memory means you can still accomplish a lot.
For example, if I were to ask ChatGPT to help me brainstorm a new interview question for a role, it would provide very generic information.
But if I have context stored somewhere about:
Then, I’ll have a far more informed collaborator merely by pasting in this context. This leads to a key learning: productivity from model usage depends less on “using AI” and more on organizing and maintaining context.
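As a concrete sketch of what “pasting in this context” can look like when the context lives in files rather than in someone’s head (the directory layout and file names here are hypothetical, purely illustrative):

```python
from pathlib import Path

# Hypothetical layout: each piece of durable company context lives in its
# own markdown file under a shared context/ directory, e.g.
#   context/recruiting/role_description.md
#   context/recruiting/past_questions.md
CONTEXT_DIR = Path("context/recruiting")

def build_prompt(task: str) -> str:
    """Prepend every stored context doc to the task before sending it to a model."""
    sections = [f"## {doc.stem}\n{doc.read_text()}"
                for doc in sorted(CONTEXT_DIR.glob("*.md"))]
    return "\n\n".join(sections + [f"## Task\n{task}"])

prompt = build_prompt("Help me brainstorm a new interview question for this role.")
```

The model hasn’t learned anything; the files are doing the remembering. Which is exactly why the quality of those files matters so much.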
Let’s look at another example. Consider a simple coding task. I ask an engineer to “Add a filter to the dashboard for tags.” Seems simple, but I’m assuming they know:
A human engineer learns this once and remembers. An agent? You have to explain it every time. (Recall the saxophone analogy.)
Now multiply this by every task, every employee, every day. Each employee has to become their own context manager, organizing, refining, and maintaining information themselves just to get useful work out of a model.
It’s exhausting.
This explains why "AI-first" initiatives often disappoint. They provide employees the means to interact with agents, but don’t provide agents the context to interact with employees.
Without solving the context problem at an organizational level, you’re asking every employee to do that work themselves. Some will excel at it; most won’t. And even those who excel can’t unlock the full potential alone, and they duplicate effort across the org.
Building a company for the AGI era requires solving context at the organizational level.
Go into your actual codebase, delete a single comma in main.py, then deploy to prod. Most likely, something breaks. Small changes to your code can have a massive blast radius. That’s why we have tests, reviews, version control, etc.
Now go into your company knowledge base (i.e., your company context) and delete a whole folder. People might not even notice.
As agents take on more real work, your company context becomes as critical as your codebase.
Imagine the future where thousands of agents are doing work in the background guided by your company context. In that world, deleting a file could cause them to take incorrect steps, follow stale procedures, etc.
Planning for that future requires a paradigm shift in managing your internal context.
At Basis we’ve started to treat our context with the same rigor we treat code: version control, clear content ownership, automated reviews, deployment pipelines, etc.
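As one sketch of what “automated reviews” for context could look like (the frontmatter convention and the 90-day staleness threshold are assumptions for illustration, not a description of our actual pipeline):

```python
from datetime import date, timedelta
from pathlib import Path

# Hypothetical convention: every context doc opens with simple frontmatter, e.g.
#   owner: jane@company.com
#   reviewed: 2025-06-01
STALE_AFTER = timedelta(days=90)

def check_doc(path: Path) -> list[str]:
    """Lint one context doc the way CI lints code."""
    fields = {}
    for line in path.read_text().splitlines():
        if ":" not in line:
            break  # end of the frontmatter block
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()

    errors = []
    if "owner" not in fields:
        errors.append(f"{path}: missing owner (nobody is accountable for this doc)")
    reviewed = fields.get("reviewed")
    if reviewed is None:
        errors.append(f"{path}: never reviewed")
    elif date.today() - date.fromisoformat(reviewed) > STALE_AFTER:
        errors.append(f"{path}: stale, last reviewed {reviewed}")
    return errors

# Fail the pipeline if any doc is unowned or stale, just like a failing test
# blocks a code deploy.
problems = [err for doc in Path("context").rglob("*.md") for err in check_doc(doc)]
if problems:
    raise SystemExit("\n".join(problems))
```

Run in CI, a check like this turns “someone should really update that doc” into a failing build, the same forcing function code already enjoys.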
The companies that get this right will have a massive competitive advantage. Their internal agents will work effectively from day one with compounding productivity gains as model capabilities continue to improve.
This will allow for scaling operations in ways that don’t seem possible today.
Every computing era creates new organizational needs. The PC era created IT departments. The internet era created data teams. The AGI era demands an internal agent team.
We've established why: agents can't learn on the job, individual context management doesn't scale, and stale context compounds into errors across thousands of agents.
Our solution is Atlas, a team with one mandate: make every employee at Basis 100x more productive. (Adam D’Angelo made a similar job posting a couple of months ago.)
Atlas owns four critical areas:
We're still early in this journey. We're building toward this vision, not claiming we've achieved it.
But the direction is clear.
The companies that thrive in the AGI era won't be the ones that use AI the most. They'll be the ones that reshape themselves to work with agents most effectively. That means taking context as seriously as code, building new organizational structures, and fundamentally rethinking how knowledge flows through a company.
We help accountants start this transformation with our Deployed Intelligence team, and our Atlas team is spearheading it internally.
In future posts, I'll dive deeper into specific aspects of this transformation:
etc.
We’re hiring for the Atlas team and across all teams at Basis. If you’re interested in working at the frontier, check out our open roles here.