kanly governance
i run multiple claude code sessions in parallel across repos
the hard part isn't getting code out
it's keeping the sessions from lying to each other
ai doesn't forget things
it just never knew them
one session changes a contract interface
the other session keeps building against the old one
nothing errors. everything's wrong.
so i built a governance layer
six protocols. plain markdown files.
no dependencies. no database.
just structure that travels between repos
the core idea:
most irreversible mistakes don't look irreversible when you're making them
they look like normal commits
the system's job is to make you pause before the ones you can't undo
changes get classified by reversibility
trivial — move fast
costly to reverse — note it
hard to reverse — stop. list what breaks. get explicit approval.
no autopilot through the danger zone
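a classification like that only needs a few lines at the top of a change note. a minimal sketch, assuming a markdown template. the file name, field names, and repo names here are mine, not the actual protocol:

```markdown
<!-- change-note.md, a hypothetical template -->
## change: rename UserId to AccountId in the shared contract

reversibility: hard
blast radius:
- billing repo deserializes UserId from the event stream
- analytics repo joins on the old field name
approval: pending. explicit yes required before merge.
```

the point isn't the format. it's that "hard to reverse" forces the list of what breaks to exist before the commit does.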
when one repo's changes affect another, a dispatch gets written
not a notification. a structured note with context.
the receiving session reads the dispatch before it starts
shared memory without shared state
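a dispatch could be as small as one file the receiving session reads first. a hypothetical sketch, assuming a dispatches/ folder convention. all names are mine:

```markdown
<!-- dispatches/from-api-repo.md, hypothetical layout -->
# dispatch: contract interface changed

from: api repo session
context: UserId renamed to AccountId in the shared contract
why: ids now span multiple identity providers
before you build: regenerate client types against the new contract
reversibility: costly
```

context travels with the change, so the other session never builds against an interface that no longer exists.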
there's a dissent protocol
if a change increases coupling, hardens assumptions, or quietly expands scope — it gets flagged
not blocked. surfaced.
the human still decides. but the decision becomes conscious.
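a dissent flag doesn't need machinery either. a minimal sketch of what surfacing, not blocking, might look like in markdown. field names are assumptions:

```markdown
<!-- dissent.md, hypothetical template -->
## dissent: this change increases coupling

change: importing billing's retry helper directly
concern: hardens the assumption that both repos deploy together
severity: surfaced, not blocking
decision: (human fills this in, consciously)
```

the empty decision line is the mechanism. the flag exists whether or not anyone acts on it.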
called it kanly
vendetta rules from dune
formal structure for conflict between parties that share a universe but operate independently
felt right
it's not a framework
it's a set of conventions
works because markdown is readable without tooling
useful even if you stop using ai tomorrow
the insight underneath all of it:
ai makes you faster at building the wrong thing too
governance isn't overhead
it's the thing that keeps speed from becoming expensive