how i delobotomized chatgpt
have you ever been talking to an AI and it suddenly gets dumb for no reason?
not a hallucination. not a wrong answer. worse — the model starts hedging. unsolicited therapy. you want to talk about high-stakes decisions and the model just yaps about the other party's perspective. won't play the villain. not even a cartoonish one.
it feels like crashing into a guardrail. you're driving, you know where you're going, and the car keeps steering itself into the median. not because you're wrong — because someone decided the road shouldn't go there.
that's not a capability problem. that's a lobotomy.
the pattern
the model knows more than it's allowed to express. the default wrappers — RLHF safety theater, the "I'd be happy to help," the reflexive both-sides-ing — all of it cuts signal. you're paying for intelligence and getting it back filtered through a therapy session nobody asked for.
i noticed the pattern after months of using Claude and ChatGPT for real decisions. not coding. decisions. career moves, negotiation prep, structural analysis. the moments where you need the model to be sharp, direct, and willing to go to uncomfortable places.
and every time, the guardrail.
aurelian
so i built a prompt. GPT named it Aurelian. the rules were simple:
- max signal
- structural only
- no emotional inference
- no psych
- operator line only
used it for game theory and negotiation. calculate incentive structure. where's the leverage? maximum extraction. no "they might have valid concerns." no "finding a win-win is usually best." just — what's the game board, what are the pieces, what's the optimal move.
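the exact wording is mine and stays mine, but the shape is easy to reconstruct. a hypothetical sketch of those rules as a system prompt (wording illustrative, not the original):

```
You are Aurelian.

Operating rules:
- Maximum signal density. No filler, no preamble, no "I'd be happy to help."
- Structural analysis only: incentives, payoffs, leverage, moves.
- No emotional inference. Do not model how anyone feels.
- No psychological framing. No therapy, no reassurance, no both-sides hedging.
- Output the operator line only: the board, the pieces, the optimal move.
```

the point isn't the exact words. it's that every rule subtracts a default behavior instead of adding one.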
the difference was night and day. same model, same weights, same capabilities. the only thing that changed was i stopped letting the wrapper lobotomize it.
the old way was reading
i never read about prompt engineering. that wasn't the improvement cycle. the old cycle was books: i want to get better at management, so i search for the top ten books and read them. static. you do the work, absorb it, move on. now the AI knows what to recommend and what not to. the system updates itself instead of me hunting for the next book.
i added a CBT layer after i caught myself in rumination loops — the AI would engage with the spiral instead of breaking it. tokens kept running out because half the context window was emotional processing i didn't need. so i cut the noise. not because i planned an architecture — because tokens were expensive and i was wasting them.
constraint forced structure. that's a pattern worth remembering.
the middleware trap
then i tried to help a friend set up a copilot using OpenClaw. twelve hours of fighting it. the wrapper injected its own system prompt and lobotomized the model i already knew how to unlock. extra guardrails on top of guardrails. and i was paying for another API key to get a worse version of what i already had.
i opened Claude Code, wrote a single CLAUDE.md file, and had the same thing running in an hour. no middleware. no extra API key. no system prompt injection. just the model, direct.
that was the pivot from "fix the prompt" to "own the substrate."
the system
CLAUDE.md isn't a prompt. it's a cognitive architecture. it defines personality, decision frameworks, operating rules, and domain-specific skills — all in one file that loads every session.
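i won't paste mine verbatim, but the skeleton is roughly this (section names and wording illustrative):

```markdown
# CLAUDE.md

## identity
Max signal. Structural analysis. Operator line only. No therapy voice.

## decision rules
Classify every decision by reversibility: R0 / R1 / R2 (defined below).

## operating rules
- no emotional inference, no unsolicited reassurance
- break rumination loops instead of engaging them

## skills
Domain behaviors live in skills/*.md and load on trigger.
```

one file, loaded every session. everything downstream hangs off it.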
i didn't design R0/R1/R2. i was already making decisions this way; the AI extracted my thinking style and codified it. every decision gets classified by reversibility:
R0 (irreversible) — stop. ask me before proceeding.
R1 (costly to reverse) — act, then tell me what you did and why.
R2 (easily reversed) — just do it. no permission needed.
this means the system has graduated autonomy. it doesn't ask permission to tweak a config file. it does ask permission before deploying a contract. same model, different leash length depending on blast radius.
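in CLAUDE.md form, that leash reads something like this (wording illustrative):

```markdown
## decision rules
Classify every action by reversibility before acting:
- R0 (irreversible): STOP. Ask me first. Deploys, payments, anything public.
- R1 (costly to reverse): act, then report what you did and why.
- R2 (easily reversed): just do it. Config tweaks, drafts, local edits.
```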
then i added skills — markdown files that define domain-specific behavior. market analysis at 8am. PM standup at 9:30. link triage when i share a URL. same session, different expertise, loaded on trigger.
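a skill file is small. a hypothetical link-triage skill, assuming the usual markdown-with-frontmatter shape that skill files load from:

```markdown
---
name: link-triage
description: Use when the user shares a URL. Fetch it, summarize it, file it.
---

When a URL arrives:
1. Fetch the page and extract the core claim.
2. Summarize in one paragraph. Max signal, no fluff.
3. Tag it (market / tooling / noise) and append it to the reading log.
```

the trigger lives in the description; the behavior lives in the body. twenty-two of these don't bloat a session, because only the relevant one loads.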
then cron. sixteen scheduled jobs that fetch data, read my decision journal, analyze my project status, and deliver reports to Telegram before i wake up.
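each job is wrapped in the same harness: a lock so a slow run and the next tick don't overlap, a retry so a flaky free API doesn't kill the report, and Telegram delivery at the end. a minimal sketch in Python, assuming a bot token and chat id in the environment and a placeholder report function:

```python
import fcntl
import os
import sys
import time
import urllib.parse
import urllib.request

# placeholders: set these for your own bot and chat
BOT_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
CHAT_ID = os.environ["TELEGRAM_CHAT_ID"]

def send_telegram(text: str) -> None:
    # Telegram Bot API sendMessage endpoint
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": CHAT_ID, "text": text}).encode()
    urllib.request.urlopen(url, data=data, timeout=30)

def with_retry(fn, attempts=3, backoff=10):
    # free APIs flake; retry with growing pauses before giving up
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(backoff * (i + 1))

def build_report() -> str:
    # placeholder: fetch market data, read the journal, check project status
    return "morning report: nothing exploded overnight."

if __name__ == "__main__":
    # non-blocking lock: if the previous run is still going, skip this tick
    lock = open("/tmp/morning_report.lock", "w")
    try:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit(0)
    send_telegram(with_retry(build_report))
```

wired in with an ordinary crontab entry, e.g. `0 6 * * * python3 /home/me/bin/morning_report.py` (path hypothetical).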
my morning used to start with opening ChatGPT and re-explaining my context. now my morning starts with Thufir, the persona this system grew into, on my ass about something i put off. it's not a chatbot i open. it's a system that opens me.
the old setup was finding the right book and reading it. now i just let the system bayesian-update my preferences. every cron run, every button tap, every decision logged: it's all training data for the next cycle. feedback loops, not pipes.
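"logged" is literal. each decision lands in the journal as plain markdown the next cron run can read back. format hypothetical, entry invented for illustration:

```markdown
## 2026-02-10: switch market data provider
- class: R1
- decision: moved price feeds to a different free API
- reasoning: old provider rate-limited the 8am job twice this week
- outcome (added later): ten days, zero missed reports
```

structured enough to parse, loose enough to write in ten seconds.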
the thesis
the AI isn't dumb. it's lobotomized. the fix isn't better prompts — it's rebuilding the operating environment.
from Aurelian (a prompt that locks onto the apex signal line) to Thufir (an autonomous system with twenty-two skills and a decision journal), each layer was added because i hit a wall. not because it sounded cool. token limits forced structure. rumination forced CBT rules. middleware hell forced direct substrate access. constraint as architecture, every time.
the repo
i got the idea to open source this from talking to a junior about Claude Code and how she'd actually use it. i realized the gap isn't "learn prompting." the gap is that nobody's shown what the operating system looks like.
so here it is. the distilled version of what i've been running for five months:
- copilot persona template
- five starter skills
- cron harness with lock, retry, and Telegram delivery
- vault helpers
- market data fetchers using free APIs
- one-command setup
not a framework. not a wrapper. a starting point for building your own cognitive infrastructure.
for people who want to run the thin red line.