
Predictable performance, low complexity, small as a side effect

A short essay on why kerf exists.


There is a moment in the life of every dashboard where you stop building features and start protecting performance. The data feeds multiply. The state graph thickens. A re-render that was free at three widgets is no longer free at thirty. So you start memoizing, profiling, conditionally rendering, hoisting state, splitting components, fighting StrictMode’s double-invocation, adding useCallback and useMemo until they’re noise — and then you do it again next quarter when someone wires up a new feed and the flame graph gets a new tall spike.

It’s a slow, expensive process. It’s easy to regress. And it’s almost impossible to automate a test against — “this view should render twice, not seventeen times, when this one signal updates” is something you can mostly only verify by hand, on your laptop, against the data shape you happened to have that day.

I’ve spent a lot of years inside that loop. React is a genuinely convenient tool. It’s also the reason “performance is hard to predict, manage, and debug in a large app” reads as a truism rather than a complaint.

I wanted out of the loop for my own projects. So I started writing very plain, very small reactive UIs in roughly the same shape, project after project — vanilla JavaScript, an obvious render path, no abstractions I couldn’t trace through in my head. After about the fourth project I noticed I was hand-porting the same patterns and quietly improving them in only one of them. So I extracted the patterns into a package. That’s kerf. It’s about 6 KB minified + gzipped. The size isn’t the point. The fact that it’s small is a consequence of the design rule that actually matters: every part of it should be predictable.

The case for a tiny reactive framework with a flat learning curve isn’t new. The reason I think it lands differently in 2026 is that the population of “things writing UI code” got noticeably larger and noticeably different. A meaningful fraction of the code in my last few projects was generated by an LLM — sometimes by me prompting one, sometimes by an agent running on its own for several minutes. And the constraints those collaborators put on a framework are not the constraints a human puts on a framework.

For a human, the React ecosystem’s real value is its breadth. One mental model spans web + native + SSR + a thousand libraries; a senior engineer carries it from job to job. That cross-platform write-once-run-anywhere proposition is hard to compete with, and I’m not interested in competing with it.

For an LLM, breadth is a liability. The context window can hold the framework, or the framework’s hidden rules, or your app — pick one. The rules-of-hooks page, the rules-of-keys page, the rules-of-concurrent-features page, the “what counts as a stable identity for useEffect” page — all real, all important, all elsewhere. So the LLM does what LLMs do under context pressure: it averages. It writes plausible code that almost works, with a subtle hook-rule violation buried two render passes deep, and you find it Tuesday.

The cross-platform argument also weakens. The whole reason “write once, run anywhere” was a strong value proposition was that writing it was the expensive part. If a coding agent can produce a polished native iOS app from the same requirements doc that produced the web app, the abstraction tax you pay for the unified codebase stops being a bargain. You’d rather pay the cheap-to-generate cost twice and have each codebase be obvious in its own platform’s idiom than pay the abstraction tax everywhere forever.

So my private case for kerf, the one I actually believe, is some mix of:

  • React is mature, but it’s mature for humans. The same maturity becomes a liability when the collaborator can’t hold the rules and your code at the same time.
  • The cross-platform argument is mostly an argument about who writes the code. When that’s an agent that can re-derive a UI per platform from the underlying requirements, “one codebase” stops being a competitive advantage.
  • Predictability is the feature you build for, when you can’t watch every render personally. A flame graph someone else has to read is a flame graph that needs to be obvious.

If predictability is the feature, the rest of the choices follow:

  • One render path. JSX renders to an HTML string. A small diff walks the live tree and patches what changed. No virtual DOM, no scheduler, no concurrent mode, no priority queue. When something re-renders, there is exactly one place you look and exactly one thing it can do.
  • A surface you can hold in your head. Sixteen exports at the top of the package: signal, computed, effect, batch, defineStore, resetAllStores, mount, morph, each, delegate, delegateCapture, toElement, SafeHtml, isSafeHtml, raw, Fragment. One more — arraySignal — lives on the opt-in kerfjs/array-signal subpath for granular collection updates, so apps that don’t need it pay nothing for it. The full surface is in the API reference. An LLM holds it in context with room left for your app.
  • No compiler, no hooks, no lifecycle. Components are plain functions returning JSX. There are no rules you can break without the call stack telling you exactly which line broke them.
  • Reactivity is one library. @preact/signals-core. Reading signal.value inside a render subscribes. Writing it triggers a re-render. There’s no hidden dependency tracking, no useEffect dep array, no stale closure tax.
  • Lists are keyed and reconciled directly. each(items, render, key?) ships rows through a keyed reconciler that operates on live DOM children, not a virtual one. Partial updates on a thousand-row table are O(changes), not O(rows). You can read the algorithm in an afternoon.
  • The framework ships its own docs for the agent. llms.txt at the repo root is the entry point. Below it, docs/ai/usage-guide.md has the four core patterns and the hard rules in a form an agent can fit and an agent can verify against. There’s a copy-paste system prompt on the AI page too. For the side-by-side count — minimum docs, hidden-rule count, public-API surface, render-path steps — see AI evidence: structural. The full set of evidence layers (structural / diagnostic / operational / empirical) is indexed at AI evidence.
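The reactivity rule above — reading `signal.value` inside a render subscribes, writing it triggers a re-render — can be made concrete with a toy implementation. This is a deliberately minimal sketch of the model, not kerf’s code and not @preact/signals-core (the real library also handles computed values, batching, and unsubscribing stale dependencies):

```javascript
// Minimal sketch of read-to-subscribe, write-to-rerun reactivity.
// Illustrative only — not the actual @preact/signals-core implementation.
let activeEffect = null;

function signal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get value() {
      // Reading inside a running effect subscribes that effect.
      if (activeEffect) subscribers.add(activeEffect);
      return value;
    },
    set value(next) {
      if (next === value) return; // equal writes are no-ops
      value = next;
      // Writing re-runs every subscribed effect.
      for (const run of [...subscribers]) run();
    },
  };
}

function effect(fn) {
  const run = () => {
    activeEffect = run;
    try { fn(); } finally { activeEffect = null; }
  };
  run(); // run once immediately to collect dependencies
}

// A "render" is just an effect that happens to read signals.
const count = signal(0);
let renders = 0;
effect(() => { renders++; void count.value; });
count.value = 1; // exactly one re-run, from exactly one place
```

The point of the sketch is that the whole dependency-tracking story fits in a screenful: there is no dep array to get wrong and no stale closure to chase, because subscription happens at the moment of the read.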

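The O(changes) claim for keyed lists can likewise be illustrated with a simplified reconciler that works on a plain array standing in for live DOM children. `reconcile` here is a hypothetical helper for illustration only — kerf’s actual `each` reconciler also handles reordering and patches real nodes:

```javascript
// Simplified keyed reconciliation over a plain array.
// Hypothetical sketch — kerf's real `each` operates on live DOM children
// and also handles moves; this only shows why work scales with changes.
function reconcile(current, next, key) {
  const ops = [];
  const byKey = new Map(current.map((item) => [key(item), item]));
  const nextKeys = new Set(next.map(key));

  // Rows whose key disappeared are removed.
  for (const item of current) {
    const k = key(item);
    if (!nextKeys.has(k)) ops.push({ type: "remove", key: k });
  }
  // Rows whose key survived are reused untouched; only new keys create work.
  const result = next.map((item) => {
    const k = key(item);
    if (byKey.has(k)) return byKey.get(k);
    ops.push({ type: "create", key: k });
    return item;
  });
  return { result, ops };
}

// Appending one row to a 1000-row table produces one op, not 1001.
const rows = Array.from({ length: 1000 }, (_, i) => ({ id: i }));
const { ops } = reconcile(rows, [...rows, { id: 1000 }], (r) => r.id);
```

Because reused rows fall out of a Map lookup, the cost of an update is proportional to the rows that actually changed, which is the whole algorithm you can read in an afternoon.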
None of those rules is a moonshot. Taken individually they’re each pretty old. What’s actually new is that they cluster into something coherent right now because of who’s reading them.

If you ask me “what should I use to ship?” my answer is, for almost any team larger than one, the existing framework you already know. React, Vue, Svelte, Solid — all mature, all extensively documented, all backed by ecosystems an LLM has been trained on for years. The single-engineer-with-an-AI-pair-programmer case where kerf wins is real, but it’s narrower than the population of teams shipping UI on Monday morning.

I think the right way to read this essay is: there’s a specific shape of project — an interactive island over server-rendered HTML, a focus-critical admin UI, a third-party widget you don’t want to bring React into, a one-engineer-plus-an-agent prototype that should ship as-is — where the tradeoff genuinely flips. For those shapes, predictability over breadth, simplicity over conventions, an obvious render path over a sophisticated reconciler. For the rest, stay where you are.

What I’m more confident about is the direction of travel. The fraction of UI code generated by an agent isn’t going to shrink in 2026. Whatever you pick to build with, it’s worth asking what your framework looks like from the agent’s seat — whether its rules fit in context, whether its render path is something the agent can simulate in its head, whether the failure modes are easy to test against. If “very large, very mature, very conventional” was the right answer when the limiting resource was human attention, it’s not obviously the right answer when the limiting resource is context.

  • The framework page is the landing — what it is, who it’s for, where to start.
  • The live demo runs seven small reactive primitives side by side on one page.
  • Mini Kanban and the streaming chat are the headline complete-app demos.
  • Built by an AI is a Pomodoro timer one-shotted by Claude from kerf’s usage guide plus a single paragraph of intent. The whole point of that example is that it works without me intervening.
  • The AI usage guide has the copy-paste system prompt, the four core patterns, the rules, and the common-errors table.

If you build something with it, I’d love to see it. If you fight it, I’d love to know where.

— Brian