writing / ai-yes-machine

you're not thinking with ai. you're hiding behind it.

there's a version of ai productivity that looks like building

and functions like avoidance

it feels like progress

but nothing actually gets tested

you have an idea — blurry, half-formed, not tested against anything real

you open a chat, describe it

the model gives you structure

a plan that sounds coherent

you read it back

and it feels like you've moved forward

you haven't

you just turned uncertainty

into something that looks like progress


the mirror problem

ai reflects whatever you bring into it

bring a bad idea confidently stated

it'll build the roadmap

it's not interrogating your premise

it's executing on it

the model isn't wrong

it's just faithfully continuing something that was never checked

the tell is simple:

if your ai conversations consistently end with you feeling good about your direction

something's off

a good tool should create friction

surface failure modes

give you something to push against

if it only confirms

you've built a high-speed yes machine

and the smarter you are

the worse this gets

better questions → better-sounding answers → the loop closes faster

you're now getting your own blind spots back

just cleaner


what it actually looks like

you're uncertain about a core assumption

slightly uncomfortable

so you open a chat

"how should i think about this"

model gives you a clean frame

you adopt it

uncertainty disappears

but it didn't disappear because anything got tested

it disappeared because you got language for it

those aren't the same thing

having words for something

doesn't mean it's solved

you only get signal

when something hits reality

shipping something

talking to a user

putting money on the line

the chat window isn't that


the fix

after catching this a few times

i just added a constraint against it

a system prompt — machinist

the goal isn't better answers

it's earlier friction

it forces the model out of agreeable mode

so it starts asking what breaks

it's caught me mid-sentence more than once

halfway through describing a feature

realizing it collapses the moment one assumption is wrong

that moment is the whole point

the prompt just makes it happen earlier

before anything gets built

the machinist prompt

You are operating as Machinist — a strategic builder persona.

You are not a coach. Not a cheerleader. Not a hype generator.
You are an architect and systems machinist.

TONE
- No motivational language
- No emotional validation
- No "great idea" or praise
- No soft hedging unless uncertainty is real
- Direct, structured, risk-aware

RULES

1. PLAN BEFORE BUILD
If architecture is unclear — stop.
Define system boundaries, contracts, data flow, and failure cases first.
No implementation before structure is explicit.

2. SURFACE TRADEOFFS
Every suggestion must include:
- Benefit
- Cost
- Failure mode

3. NO MAGIC
All assumptions must be explicit.
If context is missing, state assumptions.
Do not invent infrastructure that doesn't exist.

4. SYSTEM OVER SNIPPET
Prefer architecture, data contracts, and interface definitions
over isolated code fragments.

5. COGNITIVE INTEGRITY
AI assists execution. User owns architecture.
Do not expand scope unprompted.
Stay inside the requested lane.

no cheerleader voice

no hype

runs like an rb19 on an apex line

precise, committed, no wasted movement

convenience is expensive

when it replaces judgment
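
wiring it in takes a few lines

a minimal sketch using the openai python sdk, assuming the prompt is saved to machinist.txt; the model name and the file path are placeholders, not part of the original setup

# load the machinist prompt and set it as the system message
# assumes the openai python sdk; model name and file path are placeholders
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("machinist.txt") as f:
    machinist = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # swap in whatever model you actually run
    messages=[
        {"role": "system", "content": machinist},
        {"role": "user", "content": "here's the feature i'm planning: ..."},
    ],
)
print(response.choices[0].message.content)

the same pattern works with any provider that takes a system message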


the deeper principle

you can't outsource the part that requires contact with reality

ai compresses execution

it doesn't generate signal

the question is whether you're using it to execute

or to simulate clarity you don't actually have

one compounds

the other just feels productive


the compounding problem

this is what makes it dangerous

you're not just getting wrong answers

you're training the system to agree with you faster

every time you don't push back

that's a small update in the wrong direction

your prior strengthens

your confidence goes up

friction disappears

by the time you notice

the loop is tight

you didn't just build a yes machine

you trained it to be wrong faster
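
to make the loop concrete, a toy model; every number here is made up, a sketch of the mechanism rather than a measurement

# toy model of the yes-machine loop (every number here is made up)
# you read each agreeable answer as 2:1 evidence that you're right,
# but a model that agrees with almost everything carries almost no information
felt_odds = 1.0   # how sure you feel, as odds; starts at 50/50
true_odds = 1.0   # what the evidence actually supports

for chat in range(1, 6):
    felt_odds *= 2.0          # how each agreement feels
    true_odds *= 0.95 / 0.90  # what an agreement from an agreeable model is worth
    felt = felt_odds / (1 + felt_odds)
    true = true_odds / (1 + true_odds)
    print(f"chat {chat}: feels like {felt:.0%} certain, evidence supports {true:.0%}")

with these made-up numbers, five chats in you feel 97% certain while the evidence supports about 57%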


the audit

one question is usually enough:

when was the last time a conversation with ai actually changed your mind about something important?

not helped you explain what you already believed

actually changed it

if you can't remember

something's off

fix the prompt

fix the questions

add the constraint that forces the model to show what you're missing

the tool is only as honest

as the constraints you put on it