
Finding a Moat When Reverse Engineering Becomes Free

A manifesto for founders and investors navigating the age of zero-cost software.

Andreas Dahlström

Founder, Thalius.ai 2026

I’ve been programming for 43 years. I started at ten, sold my first software at twelve, and spent my adult life building software companies. Now, the era of human programming is over.

Not because AI writes perfect code. It doesn’t. But because code is no longer the bottleneck. For decades, the scarce resource in building software was the ability to write it and invent algorithms. That scarcity is collapsing. I’m not grieving hard - programming was a powerful tool for me, often a boring means to an end. But I want to understand what comes next.

What triggered this manifesto was a specific moment. I watched an AI reverse-engineer a complex feature in one of our products - a feature involving substantial mathematics that human competitors had failed to replicate. The AI didn’t have our source code. It studied the UI, reasoned about the constraints, and reconstructed the approach. It wasn’t perfect. But it was close enough to make me rethink everything I believed about defensibility. My entrepreneur heart was in mild shock.

This document is for people like me: founders, builders, investors and anyone who needs to make real decisions about what to build and where to invest.

Note: Full AGI - the point at which AI does all things better than humans - is likely still years or decades away. When it arrives, all competitive logic changes for everyone simultaneously. This document is for the window before that, where human judgment still matters and the physics of knowledge work still holds. It may be the most consequential window in the history of building companies.

1. What actually changed

When manufacturing was industrialized, the value didn’t disappear. It migrated. From the craftsman’s hands to the factory designer’s mind. Then to the brand. Then to the distribution network. Each transition made the previous layer cheaper and the next layer up more valuable. Software is doing the same thing now, compressed into years instead of decades.

What AI actually does is not conceptually new. It’s the same work, done at a speed where the category changes. A car at 10 km/h is a pedestrian aid. At 100 km/h it’s transportation. At 1,000 km/h it’s aviation. Same physics, different world. Software generation went from weeks of human effort to minutes of AI effort. That’s not a new capability. It’s a speed change so large that it becomes a category change - and suddenly, things that were too slow to be economical become trivial.

But not all things become trivial equally, and this is where most people’s thinking goes wrong. Consider space travel. Many can reach orbit - it’s hard but solved. Going to the moon is qualitatively harder. Mars is harder still. And our closest star? That’s not Mars times a thousand. It’s a different category entirely. The critical insight is how the required fuel grows: exponentially. You need fuel to carry the fuel to carry the fuel. Each increment of distance demands disproportionately more resources.
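The fuel intuition can be made exact. The Tsiolkovsky rocket equation (standard physics, not something the text states) relates the velocity change $\Delta v$ a rocket can achieve to the ratio of its fueled mass $m_0$ to its dry mass $m_f$, given exhaust velocity $v_e$:

```latex
\Delta v = v_e \ln\frac{m_0}{m_f}
\quad\Longleftrightarrow\quad
\frac{m_0}{m_f} = e^{\Delta v / v_e}
```

The required mass ratio grows exponentially in the $\Delta v$ you need: each increment of distance multiplies, rather than adds to, the fuel you must carry.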

Knowledge work has the same structure. Some problems are orbit - AI solves them now. Building a complex CRM or a great design tool: this is commodity software that a good AI can produce in days. Products built entirely on this layer - your Figma, your basic Salesforce - are exposed. Their architecture is their product. Once visible, replicable. Including the underlying algorithms you thought were secrets.

But ranking a billion web pages so the best result appears first? Or building a fraud detection system that decides in milliseconds whether a transaction is legitimate, trained on billions of payments across millions of merchants? Those are moon and Mars problems. The complexity compounds. Each layer of quality requires exponentially more data, more learning, more accumulated judgment. Google Search and Stripe’s fraud engine are broadly understood architecturally - but their quality lives in the accumulated learning. Years of sequential signal on private data. That’s the fuel a competitor can’t buy.

The question for every founder is: are you building an orbit product or a moon product? Because the economics just diverged completely.

2. The deepest moat is sequential learning

Distribution, brand, and switching costs still matter. But there is a category of moat that becomes more important as execution gets cheaper:

Sequential learning - things that require step N before step N+1, where each step builds on the last, and you cannot skip ahead.

Think of making wine. You can buy the best grapes, the best equipment, hire the best winemaker - but you cannot fast-forward the fermentation. Each day’s chemistry depends on the day before. A competitor with infinite money still can’t buy your 2019 vintage in 2020.

Knowledge works the same way. A system that has spent twelve months learning which questions matter, which patterns recur, which decisions pay off - that accumulated judgment cannot be replicated by throwing more money at the problem. You have to live through the sequence.

The moat is the integral of your learning rate over time. Nobody can buy that. Nobody can shortcut it.
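Stated as a formula (my notation, not the author’s): if $r(t)$ is your learning rate at time $t$, the moat after time $T$ is

```latex
M(T) = \int_{0}^{T} r(t)\,dt
```

A competitor who starts at time $t_0 > 0$ with an identical learning rate trails by $\int_0^{t_0} r(t)\,dt$ indefinitely: capital can raise $r(t)$ from now on, but it cannot backfill the interval that has already elapsed.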

But here’s the harder truth: as AI gets better at reasoning, it gets better at inference. A competitor doesn’t need to live through your learning sequence if they can infer your destination from the trail you’ve left. Every public artifact leaks signal - your API behavior, your product decisions, how your system handles edge cases. A sufficiently capable model treats all of that as training data and works backward. The question isn’t just “can they replicate our journey?” It’s “can they infer where we’re going from watching where we’ve been?”

This means the real moat isn’t just running faster. It’s running in directions that are hard to infer even if someone can see where you’ve been. Guard every signal you leak. The race isn’t just sequential anymore. It’s inferential.

Two things prevent this from collapsing into despair. First, information theory sets a hard floor: no amount of reasoning power can extract signal that isn’t in the input. Complex knowledge work depends on private data that was never externalized - customer behavior in your logs, operational ground truth from running the actual process, corrections that exist only in your feedback loop. Even a perfect reasoner can’t infer what was never there to find.

Second, even when a problem is solvable from its statement, heavy knowledge work is still heavy. The rocket fuel problem applies here: processing 50,000 contracts, integrating two years of port data, building taste models from millions of browsing sessions - this work takes real compute, real data, and real time. A well-tuned knowledge machine does it faster and cheaper. When the work itself is genuinely large, the speed of your machine is the moat. Not because others can’t build one, but because yours is already running.

3. How to stay private in a public world

You want your product visible, your architecture understood, your methods known - but if everything is visible, everything is copyable. How do you build a moat in a glass house?

Cryptography solved this problem long ago. The field is built on Kerckhoffs’s principle, stated in 1883: a system’s security must depend only on the key, not on the secrecy of the design.

Think about your online banking. The encryption algorithm is published. The software is often open source. What keeps your account safe is not that the method is secret, but that your specific key is secret. The design is public. The key is private. That’s enough.
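A toy illustration of the principle, using only Python’s standard library (the specific scheme, HMAC-SHA256, is my choice for the example, not something the text prescribes): the algorithm below is completely public, and security rests entirely on the secrecy of `key`.

```python
import hashlib
import hmac
import secrets

# The algorithm (HMAC-SHA256) is public knowledge.
# Only the key is secret - Kerckhoffs's principle in miniature.

def sign(key: bytes, message: bytes) -> bytes:
    """Produce an authentication tag for a message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Check a tag in constant time to avoid timing leaks."""
    return hmac.compare_digest(sign(key, message), tag)

key = secrets.token_bytes(32)        # the private parameter
msg = b"transfer 100 to account 42"
tag = sign(key, msg)

assert verify(key, msg, tag)                          # right key: accepted
assert not verify(secrets.token_bytes(32), msg, tag)  # wrong key: rejected
```

Anyone can read, audit, and even reimplement `sign` and `verify`; without the key, the tags remain unforgeable. The design is public, the key is private, and that is enough.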

The same principle now applies to business. Your defensibility must depend on your parameters - not on hiding how your system works. But what does that mean concretely?

4. Parameters: the settings of the machine

Every machine has settings that determine how it behaves. A thermostat has a target temperature. A car engine has fuel injection timing. An AI has parameters - millions of numerical values that collectively determine what the system knows, how it reasons, and what it produces. These parameters are not designed by hand. They are learned, gradually tuned through exposure to data and feedback, each adjustment building on the last.
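A minimal sketch of what “learned, gradually tuned through exposure to data and feedback” means mechanically (the one-parameter model and the data-generating slope of 3.0 are invented for illustration): each update nudges the parameter against its current error, and each adjustment starts from where the last one left off.

```python
# Toy learning loop: tune a single parameter w so that
# predictions w * x match observed outcomes y.
# The true slope (3.0) is an illustrative assumption.

data = [(x, 3.0 * x) for x in range(1, 11)]  # feedback pairs (input, outcome)

w = 0.0      # the parameter: starts uninformed
lr = 0.005   # learning rate: how hard each piece of feedback pulls

for epoch in range(200):       # sequential passes over the feedback
    for x, y in data:
        error = w * x - y      # how wrong the current setting is
        w -= lr * error * x    # nudge w against the error gradient

# After enough sequential updates, w approximates the true slope.
assert abs(w - 3.0) < 1e-3
```

Real systems tune millions of such values at once, but the structure is the same: the final value of `w` is meaningless without the sequence of updates that produced it.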

A nuance that matters: if someone obtains your parameters - the actual learned values - they can copy your machine. If you get the weights, you get the system. The AI industry knows this, which is why leading companies guard their parameters fiercely.

But here’s why the moat holds: the defensible part is not the parameters at any given moment. It’s controlling the loop that keeps improving them. The proprietary data. The human feedback pipeline. The customer workflows that generate new signal. The ability to keep tuning faster than anyone else. By the time someone steals today’s snapshot, you’ve already moved on. The parameters are the output. The learning loop is the asset.

The AI industry has already proven this at scale. The architecture of a transformer is fully public - published, open-sourced, buildable by anyone. But Anthropic and OpenAI have something the open models don’t: continuously improving parameters, refined through millions of human feedback interactions, embedded in customer workflows that generate new training signal every day. The architecture is the published algorithm. The learning loop is the secret key.

This isn’t limited to AI companies. In AI, the learned state is called “parameters.” In a law firm, it’s the accumulated judgment about which contract clauses predict disputes. In a logistics company, it’s operational intelligence about which routes actually work. The vocabulary differs. The structure is identical: a learning loop that continuously improves how your system makes decisions. The question is whether you’re building one deliberately.

5. Build knowledge machines, not products

A product is a snapshot - what your system outputs today. A knowledge machine is the process that decides what to produce next, and gets better at that decision with every cycle.

Think of it as the difference between a library and a librarian. A library is a collection - valuable, but static. The librarian knows which book you need before you ask, built from years of observing what people read, what they came back for, what they abandoned. You can copy the library. You cannot copy the librarian’s personal recommendations.

The flywheel is self-reinforcing. Capital funds parameter updates. Better parameters produce better selection. Better selection generates more value. More value attracts more capital. Each state depends on all previous states. Nobody can jump in mid-stream.
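A back-of-envelope simulation of the claim (the growth rates are invented for illustration): a later entrant who learns faster still trails, because each cycle compounds on the state left by the previous cycle, not on spend.

```python
def run_flywheel(start_cycle: int, end_cycle: int, learning_rate: float) -> float:
    """Knowledge compounds multiplicatively: each cycle
    builds on the state left by the cycle before it."""
    state = 1.0
    for _ in range(start_cycle, end_cycle):
        state *= 1.0 + learning_rate
    return state

incumbent = run_flywheel(0, 36, 0.10)   # 36 cycles at 10% improvement per cycle
entrant = run_flywheel(12, 36, 0.15)    # starts 12 cycles late, learns 50% faster

# Even with a much higher learning rate, the entrant has not caught up:
# the incumbent's early cycles keep compounding underneath every later one.
assert incumbent > entrant
```

The gap narrows but persists for a long time; to close it the entrant must out-learn the incumbent every single cycle, which is exactly the race the text describes.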

This leads to a counterintuitive business model: make your architecture open on purpose. Let people build on your format. Openness becomes your distribution strategy. The more people use your system, the more data flows back about what the parameters should be next. Then license the continuously updated parameters - priced reasonably, because months or years of sequential learning cannot be skipped.

6. What this looks like in practice

In every case below, the architecture is generic and copyable. The learning loop is not.

Legal discovery. A firm builds a system that reviews contracts for specific risks. Anyone could build the same thing in a week. But after processing 50,000 pharmaceutical liability contracts, the system has learned which clause patterns actually predict disputes and which combinations of terms create hidden exposure. A competitor can copy the software overnight. They cannot copy the judgment that came from 50,000 sequential reviews.

Supply chain optimization. A logistics company builds a routing system. Standard architecture. But after two years operating across Southeast Asian ports, it has learned the actual delay patterns versus the official schedules, which customs brokers speed things up, how monsoon season really affects different routes. A competitor can copy the algorithm in a weekend. They cannot copy two years of operational intelligence.

E-commerce curation. A retailer builds a recommendation engine. But after a year of tracking not just what people buy but how they browse - which attributes actually drive decisions, which combinations lead to returns versus satisfaction - the system learns a taste model that predicts preference better than any survey. The recommendation algorithm is commodity. The taste is not.

7. The 95/5 rule

The scenario that should scare people isn’t mass unemployment. It’s a rapid hollowing out of the middle. The people who can see structure and direct AI become extraordinarily productive. Everyone else becomes cheaper. And cheaper fast.

But AI systems are structurally broken in specific ways: they hallucinate, they have no judgment on ambiguous cases, and they can’t know what they don’t know. The systems that actually work in production need humans - not as a concession but as an architectural requirement.

I call this the 95/5 rule. AI does 95% of the work. Humans do the remaining 5%. But that 5% is the steering wheel and pedals, and the 95% is the engine. Without direction, power is just noise.

The 95% is volume: generating, processing, drafting, computing. The 5% is judgment: choosing the right question, catching the wrong answer, knowing when the confident output is confidently wrong. Scale without judgment produces garbage at scale. Judgment without scale produces insight too slowly to matter. The combination is what works.

This matters for three reasons. It’s good for results - human oversight catches compounding errors. It’s good for humanity - people keep meaningful roles. And it’s good for adoption - organizations move faster when the system makes their people more powerful, not more replaceable.

The machine does the heavy lifting. The human gives critical guidance. Together, they are the knowledge machine.

8. The playbook

If I had to compress everything above into operating principles:

Old software moats are dead. Assume everything that emits enough signal about its inner mechanics will be reverse-engineered and copied.

Know your altitude. Are you building an orbit product or a moon product? If your value lives in the visible layer, speed of replication will eat you. Build where complexity compounds.

Stop protecting your architecture. Open it. Let it become a standard. Your defensibility comes from your learning loop (your banking encryption key), not your design (the encryption algorithm).

Build knowledge machines, not products. A product is what your system outputs today. The value is the selection function that chose it.

Optimize for sequential learning. Everything that requires step N before step N+1 is defensible. Everything that can be skipped is not.

Guard your signal. Every public output leaks information about your internal state. Build where the gap between what’s visible and what’s valuable is widest.

Apply the 95/5 rule. Let AI do the volume work. Keep humans on the judgment work. The combination is structurally better than either alone.

9. The real question

Forty-three years ago, I sat down at a keyboard and started writing code because that was how you built things. Producing software is now free. That’s both scary and wonderful.

The founders who will define the next era are not the ones who write the best code or ship the cleverest features. They’re the ones who build knowledge machines - systems that learn what to create, improve that judgment with every cycle, and compound faster than anyone can copy.

The architecture is open. The design is known. Our job is to know how to tune it.

Andreas Dahlström

Founder, andreas@thalius.ai · March 2026

About Thalius

Thalius is a deep tech AI lab building a core component of a knowledge factory: a long-term, self-tuning AI memory with built-in quality control. A place to build, refine, and navigate your knowledge capital.

This manifesto was written the way it describes: one human steering two frontier AI models, one helping to write, one criticizing. The 95/5 rule in practice in a small knowledge factory.