Scope Logic V1 — with Audit Bar
Clean, fast, explainable.
Each button is a different kind of leverage. Some increase effort. Some increase correctness. The goal is simple: get better outputs with fewer retries.
Swing

What it is: A high-stakes wrapper that tells the model to stop playing safe and do its best work.

Why it exists: Models tend to default to generic output unless pushed. Swing forces a more complete pass and a quick self-check.

When to use it:
Big decisions, important writing, complex systems, anything where a mediocre answer costs you time.
How to use it:
Put it at the top of your request. Require the ending Checks list.
Watch-outs:
It can overbuild small tasks. If you only need a simple output, it may try to build a whole product.
One example:
You ask for a landing page rewrite. With Swing, you get structure, proof points, objections handled, and a better plan to validate it.
Checks it should end with
assumptions
What must be true for the answer to work.
edge cases handled
Where it breaks, bends, or needs a fallback.
validation method
How you will test it quickly in the real world.
Upgrade

What it is: Three tiny questions that force a stronger version of whatever you are doing.

Why it exists: They push the model from “works” to “undeniable” by forcing certainty, inevitability, and step change.

When to use it:
After you have a first pass. When something feels decent but not lethal.
How to use it:
Run them right after an initial output, then fold the best upgrades back in.
Watch-outs:
It can explode scope. A simple tool can turn into an enterprise platform in one answer. Keep a scope boundary in mind.
One example:
You have a basic app concept. These questions surface what proof, constraints, and mechanisms would make it feel inevitable.
The three questions
Undeniable:
What has to happen to make this undeniable?
Inevitable:
What has to happen to make this inevitable?
Next level:
What has to happen to make this a next level upgrade?
Cartesian

What it is: A four-quadrant decision engine. It forces a 360 view instead of one storyline.

Why it exists: Most people only ask “what happens if we do it.” That creates blind spots. Cartesian forces the opposite worlds too.

Why it’s “god mode”

Because it forces the model to walk around the idea like it is an object, not a vibe.

Kills one-sided thinking:
It makes you compare action vs. inaction, not just the optimistic story.
Lie detector for hype + panic:
The “wouldn’t happen” quadrants catch fake promises and fake fears.
Surfaces assumptions:
Each quadrant forces “what must be true,” which flushes hidden premises into daylight.
Produces a decision rule:
Instead of “it depends,” you get thresholds: do it when X is true, don’t when Y is true.
Reduces hallucinations:
Contradictions across quadrants become obvious. Counterfactual consistency exposes weak reasoning fast.
High leverage per token:
Short prompt, deep structure. You get a 360 scan without writing a full spec.
When to use it:
Any time you are making a decision, picking a strategy, choosing scope, or arguing with yourself.
How to use it:
Answer the four questions. Then pull out assumptions, second-order effects, and a decision rule.
Watch-outs:
Overkill for tiny choices. Use it where the consequences matter.
One example:
You are deciding whether to rebuild a site in 2 weeks or patch it. Cartesian exposes what breaks in both paths.
Mini map of the value of each quadrant
Q1: If we do
Shows upside, real gains, and the new problems you inherit.
Q2: If we don’t
Shows cost of inaction, decay curve, and what stays broken.
Q3: Wouldn’t happen if we did
Lie detector for overpromising. Kills magical thinking and fake guarantees.
Q4: Wouldn’t happen if we didn’t
Lie detector for panic. Kills doom narratives and fake urgency.
Under Cartesian: the new buttons
Audit this Response:
Run the Cartesian scan on the single answer you just got, then output a tighter, corrected version plus a decision rule.
Audit this Session:
Run Cartesian across the whole thread: spot drift, contradictions, missing constraints, and produce the consolidated best version.
One clean mental model
Swing
Increases output intensity.
Cartesian
Increases decision correctness.
Spec

What it is: A spec generator. Turns messy intent into requirements, constraints, tests, and guardrails.

Why it exists: Spec is greater than code. Code without a spec is drift waiting to happen.

When to use it:
Before building anything reusable, expensive, or easy to derail. Especially workflows, apps, and systems.
How to use it:
Run it when you understand the idea well enough to commit constraints. Pick the profile: micro, standard, build.
Watch-outs:
Run it too early and you lock bad assumptions into the project. Don't spec fog.
One example:
Before you touch code, you generate acceptance checks like “purchase flow completes in under 60 seconds.”
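A minimal sketch of what those acceptance checks can become once Spec pins down "done." Only the under-60-seconds threshold comes from the example above; the check names and the measure_purchase_flow_seconds helper are hypothetical placeholders, not Spec's actual output format.

```python
# Hypothetical acceptance checks; only the under-60-seconds threshold comes from the example.
def measure_purchase_flow_seconds() -> float:
    """Placeholder: time a real purchase flow end to end."""
    raise NotImplementedError

ACCEPTANCE_CHECKS = {
    "purchase_flow_under_60s": lambda: measure_purchase_flow_seconds() < 60,
    # A real spec would add one entry per constraint it commits to.
}

def run_acceptance_checks() -> dict[str, bool]:
    # "Done" means every check returns True; anything else is not done.
    return {name: check() for name, check in ACCEPTANCE_CHECKS.items()}
```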
Equation

What it is: A deterministic control panel that turns intent into measurable drivers, frictions, and a bounded equation.

Why it exists: Language is slippery. Equations force clarity: what matters, how much, and how you will measure it.

When to use it:
Optimization work, funnels, reliability, QA, any project where “better” must be defined, tracked, and tuned.
How to use it:
Feed it your goal and context. It outputs levers, a bounded equation, monotonicity, and a measurement plan.
Watch-outs:
Bad variables create systems that are rigid and wrong. If the equation fails, revise it; don't argue with the outputs.
One example:
You define conversion lift as the outcome, drivers as clarity + proof, frictions as latency + confusion, then tune weekly.
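A minimal sketch, based on the conversion example above, of what a bounded equation with drivers and frictions can look like. The variable names come from the example; the weights, 0-to-1 normalization, and clamping are illustrative assumptions, not the tool's actual output.

```python
# Hypothetical bounded equation for the conversion-lift example above.
# Drivers push the score up, frictions pull it down; clamping keeps it bounded.
WEIGHTS = {"clarity": 0.4, "proof": 0.3, "latency": 0.2, "confusion": 0.1}  # assumed weights

def conversion_score(clarity: float, proof: float, latency: float, confusion: float) -> float:
    """Inputs normalized to 0..1. Monotonic: up in drivers, down in frictions."""
    raw = (WEIGHTS["clarity"] * clarity
           + WEIGHTS["proof"] * proof
           - WEIGHTS["latency"] * latency
           - WEIGHTS["confusion"] * confusion)
    return max(0.0, min(1.0, raw))  # bounded to [0, 1]

# Weekly tuning: re-measure the inputs, recompute the score, compare against last week's baseline.
```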
Stakes

What it is: A structured way to raise stakes without hype using 1×, 3×, 10× (3×3), plus one catalyst.

Why it exists: It forces real mechanisms and failure modes, not “this is huge” vibes.

When to use it:
When you need the model to treat the work as consequential and stop giving lazy, shallow answers.
How to use it:
Write stakes in four lines: 1× local failure, 3× system failure under repetition, 10× scale break, 10× +1 catalyst mechanism.
Watch-outs:
Can inflate simple tasks. Keep the mechanisms real.
One example:
A one-off mistake loses an hour. Repeated mistakes kill a workflow. At scale it becomes operational collapse. The +1 is automation that removes the bottleneck.
Under Stakes: the new buttons
CoVe
What it does:
Separates answering from checking to reduce confident mistakes.
When:
Factual work, plans with real costs, anything you might ship, publish, or rely on.
How:
A0 (first answer) → Q-set (verification questions) → V-set (independent answers to those questions) → A1 (revised answer) → Change log.
Stakes
What it does:
Forces consequence, failure modes, and scale breaks without hype.
When:
When you need the model to treat the work as consequential and stop giving lazy answers.
How:
1× → 3× → 10× (3×3) → 10× (+1).
Why they pair well
Stakes
Raises seriousness and mechanism depth.
CoVe
Prevents serious-sounding wrong answers from slipping through.
Content-Free

What it is: A structure extractor that strips topic content and leaves the control skeleton: sections, logic, conditionals, counts, and contracts.

Why it exists: It makes prompts repeatable. You can separate “front-end content” from “back-end control logic,” then reuse the same structure across projects without dragging old topic baggage.

When to use it:
When a prompt or doc works once and you want it to work forever. When you want templates without accidental brand or story bleed.
How to use it:
Paste the artifact. It outputs a token dictionary and a fill-in skeleton that preserves order, thresholds, optionality, and conditional logic.
Watch-outs:
If the source is vague, the skeleton will preserve vagueness. Tighten the original with Spec or Equation first if needed.
One example:
You extract the structure of a “client onboarding SOP” once, then rehydrate it for every new client by swapping tokens.
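A hedged sketch of what a token dictionary plus fill-in skeleton might look like for the onboarding example. The token names, counts, and conditional are made up for illustration; the real extractor defines its own dictionary and output contract.

```python
# Hypothetical token dictionary and skeleton for the "client onboarding SOP" example.
TOKENS = {
    "CLIENT_NAME": "Acme Co",        # swapped per client
    "KICKOFF_DAYS": "5",             # numeric threshold preserved from the original
    "DELIVERABLE_COUNT": "3",        # count preserved, topic content stripped
}

SKELETON = """\
1. Intake: collect {CLIENT_NAME} requirements within {KICKOFF_DAYS} days.
2. Scope: list exactly {DELIVERABLE_COUNT} deliverables, each with an owner.
3. If any deliverable has no owner, stop and escalate (conditional preserved).
"""

def rehydrate(skeleton: str, tokens: dict) -> str:
    # Fill the structure with a new token set; ordering, thresholds, and conditions stay fixed.
    return skeleton.format(**tokens)

print(rehydrate(SKELETON, TOKENS))
```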
Under Content-Free: optional companion buttons
Rehydrate:
Fill the extracted skeleton with a new token set, then validate the output contract against the structure signature.
Diff Skeletons:
Compare two extracted skeletons to find structural deltas and decide which template is stronger.
Lock Contract:
Convert the output contract into a short acceptance checklist you can paste under any run to enforce format.
Best use pattern
Spec →
Make the intent durable.
Content-Free →
Extract the reusable control skeleton.
Rehydrate →
Reuse across contexts without drift.
Choose

How to choose fast:

Higher effort:
Swing
Stronger output:
Upgrade
Better decisions:
Cartesian
Guardrails:
Spec
Measurable tuning:
Equation
Real consequence framing:
Stakes
Correctness:
CoVe
Repeatability:
Content-Free
Pairing

Pair for best results: clean pairing logic based on what each prompt is best at.

Power output:
Swing + Upgrade — strongest version of an idea, fast.
Decision correctness:
Cartesian — prevents wrong direction and blind spots.
Build reliability:
Spec — reduces drift, ambiguity, and rework.
Performance tuning:
Equation — measurability and iteration, not vibes.
Consequence + accuracy:
Stakes + CoVe — seriousness plus verification.
Template engine:
Content-Free + Rehydrate — extract the back-end skeleton, then reuse it safely.
Best overall stack for “high stakes work that must be right”
Cartesian →
Decision clarity.
Spec →
Constraints and acceptance checks.
Swing →
High-quality output pass.
CoVe →
Correctness and change log.
Checks
Assumptions:
You want the same visual grammar preserved, while adding the new functional buttons and a sixth category for the content-free extractor.
Edge cases handled:
Long labels wrap cleanly; lists don’t collide; mobile stacks to one column; new “Audit” and “Rehydrate” items remain readable on narrow screens.
Validation method:
Paste into your page, test at 1440px / 860px / 420px widths, confirm no horizontal scroll, confirm section grid aligns label-to-copy, and confirm the new callouts render with consistent rule lines.
Go Get It!
Fast, lossless, decision-tight.
This layer is not “documentation about the thing.” It is documentation about what each audit does to your outcome: tighter decisions, fewer retries, less drift, more proof, and safer shipping.
Stakes (HP)
Upgrade
Cartesian
Spec
Eq
Stakes (Sec)
UCFE
Scope Audit
All
What did you learn?
Wildcard
Stakes (HP)

What the audit does: Forces a “best effort” pass by raising consequence, pushing specificity, and surfacing the tradeoffs you inherit at peak performance.

Primary effect:
De-averages the output. Less generic, more decisive, more complete.
Secondary effect:
Reveals the risk you buy when you push hard, so you are not surprised later.
Best moment to run:
First, when you need a breakthrough line and want the model to stop hedging.
Key distinction

Stakes (HP) amplifies output intensity. Stakes (Sec) amplifies safety and verification discipline.

Upgrade

What the audit does: Converts “acceptable” into “inevitable” by forcing proof, mechanism, and a step-change improvement path.

Primary effect:
Finds what would make the output feel obvious to a skeptic.
Secondary effect:
Converts vibes into requirements: what must be true, what must be built, what must be shown.
Best moment to run:
Immediately after a first pass that is good but not lethal.
The triad is a forcing function
Undeniable:
Forces proof, not persuasion.
Inevitable:
Forces structure, not hope.
Next level:
Forces step change, not incremental polish.
Cartesian

What the audit does: Performs a counterfactual scan that exposes blind spots, kills one-sided thinking, and yields a decision rule instead of a narrative.

Primary effect:
Surfaces what breaks in both action and inaction paths.
Secondary effect:
Flushes hidden assumptions by forcing “would not happen” worlds.
Output upgrade:
Turns “it depends” into thresholds you can actually run.
Built-in audit modes
Audit this Response:
Corrects one output, then tightens it with a decision rule.
Audit this Session:
Detects drift across the thread and consolidates a single coherent best version.
Spec

What the audit does: Converts intent into a testable contract by identifying ambiguity, forcing acceptance checks, and collapsing “interpretation space.”

Primary effect:
Defines “done” so output cannot hide behind language.
Secondary effect:
Prevents drift and rework by locking constraints early enough to matter.
Best moment to run:
After you know what you want, before you build anything reusable.
Eq

What the audit does: Creates a measurable control panel that separates drivers from frictions, defines proxies, and makes improvement tunable over time.

Primary effect:
Stops “better” from being a vibe by forcing variables and bounds.
Secondary effect:
Reveals the true ceiling when you push a lever.
Best moment to run:
Any time you will iterate, optimize, or report progress.
Stakes (Sec)

What the audit does: Identifies failure modes that look correct at a glance, defines stop-the-line conditions, and demands minimum verification before you ship.

Primary effect:
Prevents confident wrongness from leaving the chat.
Secondary effect:
Forces logging and reconstruction so you can debug without guessing.
Best moment to run:
Before publishing, automating, or relying on outputs operationally.
UCFE

What the audit does: Separates structure from content so you can reuse the control skeleton without topic leakage, brand bleed, or accidental private detail carryover.

Primary effect:
Makes prompts portable by preserving ordering, counts, conditionals, and contracts.
Secondary effect:
Reduces contamination when you reuse a template across clients or projects.
Best moment to run:
When an artifact works and you want the “engine” without the “story.”
Scope Audit

What the audit does: Forces a clean scope lock decision by pricing the tradeoffs: what you gain in stability versus what you lose in exploration and future adaptability.

Primary effect:
Prevents “scope drift by ambiguity” by making the lock decision explicit.
Secondary effect:
Exposes the hidden cost of locking too early or too late, so you can pick the right timing.
Best moment to run:
Right before build commitment, or when the thread keeps expanding without a baseline.
What it outputs when used well
Lock decision:
Lock now, lock later, or lock a minimum slice.
Baseline:
What is stable enough to compare future changes against.
Proof plan:
What gets tested now versus deferred safely.
All

What the audit does: Copies the full Pro audit bundle in the exact pill order so your runs stay consistent and repeatable.

Primary effect:
Enforces sequence discipline across sessions.
Secondary effect:
Reduces “random walk” auditing, where you change prompts and lose comparability.
Suggested flow:
Stakes (HP) → Upgrade → Cartesian → Spec → Eq → Stakes (Sec) → UCFE → Scope Audit
Learn

What the audit does: Produces a clean synthesis after you run audits, turning scattered outputs into corrected beliefs, decision rules, and next actions.

Primary effect:
Consolidates what changed and why, instead of leaving insights distributed across prompts.
Secondary effect:
Extracts “before → after” belief updates and a single operating rule to prevent relapses.
Best moment to run:
After All, or after any heavy audit chain where you need a final integrated read.
What you should get
Belief updates:
Top corrected assumptions with evidence.
Risk list:
Early warning signals, not just abstract fears.
Next actions:
Minimum-scope tests that move the decision forward.
Wildcard

What the audit does: Generates a fresh Cartesian set tailored to the session’s true focus, creating new questions you would not think to ask using the standard templates.

Primary effect:
Breaks template lock by producing novel, context-specific counterfactuals.
Secondary effect:
Surfaces the hidden constraint and real decision beneath the session’s surface topic.
Best moment to run:
When the thread is weird, stuck, or unusually specific, and the standard questions feel too generic.
Guardrail

Wildcard should be novel, not random. If it is not clearly tied to the session focus, rerun with a tighter “true focus” sentence.

Run It Clean

How is Cartesian logic god mode?
Because it forces the model to walk around the idea like it is an object, not a vibe.
Most prompts only ask for the “do it” path. Cartesian makes you cover four different causal angles, and that changes the quality of thinking fast.

1) It kills one-sided thinking

If you only ask “what happens if we do X,” you get the optimistic story.
Cartesian forces “what happens if we do not,” which surfaces the cost of inaction, opportunity cost, and status quo risks.

2) The two “wouldn’t happen” questions catch fake fears and fake promises
These are the ones most people never ask.

What wouldn’t happen if we did?
This exposes overclaiming. It strips away magical thinking like “if we ship this, support load disappears” or “if we hire, chaos ends.”

What wouldn’t happen if we didn’t?
This exposes doom narratives. It often reveals “we are not actually going to die if we wait,” or “this risk is not as immediate as it feels.”

Those two questions are basically a lie detector for both hype and panic.

3) It surfaces hidden assumptions automatically
Every quadrant forces the model to state what must be true for that outcome to occur.
When assumptions are wrong, hallucinations are more likely.
Cartesian flushes those assumptions into daylight.

4) It produces a decision rule, not just analysis
Most thinking tools end with “it depends.”
Cartesian naturally leads to thresholds like:
Do it when X condition is true, do not when Y condition is true.
That is operational.

5) It reduces hallucinations by requiring counterfactual consistency
A hallucination often shows up as a contradiction across quadrants.
If the model says “we must do it or we lose everything” but also says “if we don’t, nothing really changes,” the inconsistency becomes obvious.

6) It is short, but forces deep structure
It is “god mode” because it gives you a lot of leverage per token.

You get a 360 scan without writing a long spec or doing a full verification pass.

One clean way to think of it:

Swing increases output intensity.

Cartesian increases decision correctness.

The prompt systems, not the UI.
Each tile is a reusable prompt system with a job. Some generate runnable plans. Some lock constraints. Some define contracts. Some build automation architecture. Use the right system for the failure you are trying to prevent.
Choose

How to choose fast:

Need a plan you can run:
D.O.E.
Need it to be undeniable:
Inevitability Triad
Need drift-proof guardrails:
Constraint Forge
Need clean data handoffs:
Interface Contract
Need stepwise execution:
Execution Ladder
Need full system architecture:
B.L.A.S.T.
D.O.E.

What it is: A runnable-package generator. It turns a goal into an executable plan with artifacts and tests.

Why it exists: Most “plans” are vibes. DOE produces concrete steps, templates, acceptance checks, and early failure detection.

When to use it:
Any time you want execution without guessing: building tools, workflows, deliverables, SOPs, agents.
What you feed it:
Goal, context, environment, success definition, non-negotiables.
What you get:
Directive → Orchestration plan → Execution artifacts (cap 8) → Tests (cap 12) → Handoff.
Watch-outs:
If you lie about “done,” DOE will faithfully build the wrong machine. Tighten success and constraints first.
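As an illustration only, here is one possible shape for a DOE package. The artifact and test caps come from the description above; the field names and the DOEPackage class are assumptions, not DOE's actual output format.

```python
from dataclasses import dataclass, field

@dataclass
class DOEPackage:
    directive: str                      # the goal restated as a single directive
    orchestration_plan: list[str]       # ordered steps
    execution_artifacts: list[str] = field(default_factory=list)  # cap 8
    tests: list[str] = field(default_factory=list)                # cap 12
    handoff: str = ""                   # who picks it up and how

    def validate(self) -> None:
        assert len(self.execution_artifacts) <= 8, "artifact cap exceeded (max 8)"
        assert len(self.tests) <= 12, "test cap exceeded (max 12)"
```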
Two modes
DOE:
Full package with artifacts and tests, optimized for repeatability.
DOE MVP:
Minimal version for one session: small plan, 4 artifacts, 4 tests, first 10 minutes checklist.
Triad

What it is: Three questions that harden an idea: undeniable, inevitable, next level upgrade.

Why it exists: It forces proof, default behaviors, and architectural leverage, so the plan survives skepticism and low motivation.

When to use it:
After a first pass, when the idea “works” but still feels dismissible or fragile.
What you get:
Ranked leveraged changes, minimum viable version (1 session), hardened version (1 week), fleet version, risks, acceptance checks.
What it optimizes:
Certainty (proof), persistence (defaults), class-change leverage (architecture).
Watch-outs:
It can expand scope fast. Keep a boundary: what is the smallest “inevitable” version?
Triad + Stakes variant
What it adds:
For each top change: 1× failure, 3× repetition failure, 10× scale break, +1 catalyst.
When:
When you need urgency and failure-mode realism, not just “better ideas.”
Forge

What it is: A constraint engineer. It converts messy intent into testable invariants, forbidden outcomes, and quality gates.

Why it exists: Drift happens when “must” is implicit. Forge makes “must” explicit and testable.

When to use it:
Before anything reusable, client-facing, or automation-driven. Also when outputs keep wandering.
What you get:
Invariants (5–10) → Forbidden outcomes (3–8) → Interface contract → Quality gates → Edge cases → Minimum tests (T01–T06).
What it prevents:
Scope creep, format drift, “almost right” outputs, and silent failure patterns.
Watch-outs:
If constraints are vague, the system becomes a rubber stamp. Constraints must be pass/fail.
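A small sketch of what "pass/fail" constraints can look like in practice. The specific invariants, forbidden outcome, and ID scheme are hypothetical; the point is that every constraint reduces to a check that returns true or false.

```python
def has_required_sections(output: str) -> bool:
    # Invariant I01 (hypothetical): output always contains these three sections.
    return all(h in output for h in ("## Summary", "## Risks", "## Next steps"))

def no_absolute_guarantees(output: str) -> bool:
    # Forbidden outcome F01 (hypothetical): never promise guaranteed results.
    return "guaranteed" not in output.lower()

INVARIANTS = {
    "I01_required_sections": has_required_sections,
    "F01_no_guarantees": no_absolute_guarantees,
}

def quality_gate(output: str) -> dict[str, bool]:
    # Either everything passes, or the gate names exactly which check failed.
    return {name: check(output) for name, check in INVARIANTS.items()}
```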
Strict mode
Adds:
How-to-test line for every invariant and forbidden outcome, failure signatures for gates, and drift tripwires.
Use when:
You are shipping automation or anything that will be reused by other people.
Contract

What it is: A schema and error model for one pipeline step. It defines input shape, output shape, and what failure looks like.

Why it exists: Pipelines fail at handoffs. Contracts make the handoff deterministic and debuggable.

When to use it:
Any time data moves between steps, tools, agents, or files. Especially when outputs feed automation.
What you get:
Contract header → Input spec → Output spec → Error model → Compatibility promise → Acceptance checks.
What it prevents:
Ambiguous fields, breaking changes, silent truncation, inconsistent ordering, brittle parsing.
Watch-outs:
Overfitting to one example. Contracts should represent the real variability you expect.
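A minimal sketch of a contract for one step, assuming a summarization step purely for illustration. The field names, length limit, and error codes are invented placeholders; a real contract would define its own closed sets.

```python
from typing import TypedDict, Literal

class StepInput(TypedDict):
    record_id: str
    text: str                  # raw text to summarize; empty string is invalid

class StepOutput(TypedDict):
    record_id: str             # must echo the input id (compatibility promise)
    summary: str               # max 400 chars, plain text
    status: Literal["ok"]

class StepError(TypedDict):
    record_id: str
    status: Literal["error"]
    code: Literal["E_EMPTY_INPUT", "E_TOO_LONG", "E_TIMEOUT"]  # closed error set
    message: str

# Acceptance check: every output is either a valid StepOutput or a StepError,
# never a silently truncated or reshaped record.
```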
Contract + Examples variant
Adds:
2 pass examples and 2 fail examples with error codes, kept short and realistic.
Use when:
You want instant clarity for implementers or future-you debugging at 2am.
Ladder

What it is: A stepwise execution ladder where every rung produces an artifact and a stop check.

Why it exists: Big steps hide failures. Small rungs surface failures early and keep progress shippable.

When to use it:
When you keep biting off too much, when scope is risky, or when you need repeatable progress under constraints.
What you get:
R0 setup → R1 first artifact → R2 second → R3 integration → R4 verification → R5 ship + rollback, plus failure signals and fixes.
What it optimizes:
Irreversibility control, early error detection, and continuous artifact production.
Watch-outs:
If rungs are not truly testable, you just renamed “big step” into five fake steps.
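A hedged sketch of a rung as a data structure: every rung names an artifact and a stop check, and a failed check halts the climb. The field names and climb helper are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rung:
    name: str                       # e.g. "R1 first artifact"
    artifact: str                   # the concrete thing this rung must produce
    stop_check: Callable[[], bool]  # pass/fail gate; failing stops the ladder here

def climb(rungs: list[Rung]) -> list[str]:
    produced = []
    for rung in rungs:
        produced.append(rung.artifact)
        if not rung.stop_check():
            # Fail early, keep what already shipped, and fix before continuing.
            break
    return produced
```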
Ladder Audit variant
Adds:
Audit pass for missing inputs, hidden dependencies, steps too large, and non-testable rungs, then outputs a revised ladder.
Use when:
Failure cost is high or you are delegating execution to someone else or an agent.
B.L.A.S.T.

What it is: A system-building protocol for deterministic automation with memory files, schemas, repair loops, and deployment triggers.

Why it exists: Most automation fails from guessing business logic, skipping schemas, and not logging. BLAST forces data-first design and self-repair.

When to use it:
When you are building tools that must keep working: agentic systems, ETL-like pipelines, automations with external services, multi-step workflows.
Core rule:
Schema before tools. Planning files are memory. If logic changes, update SOP before code.
What you get:
Blueprint questions → Connectivity verification → 3-layer architecture → Stylized payload → Triggered deployment, plus repair loop.
Watch-outs:
Overhead for small tasks. Use BLAST when reliability matters more than speed.
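A minimal sketch of the "schema before tools" rule, assuming a hypothetical order-processing automation. The file names, schema fields, and repair behavior are illustrative, not BLAST's prescribed layout.

```python
import json
from pathlib import Path

# Schema first: the data shape is written down before any tool code exists.
ORDER_SCHEMA = {
    "order_id": "str",
    "amount_cents": "int",
    "status": "one of: new | paid | failed",
}

SOP_FILE = Path("sop_orders.md")   # planning file acting as memory; update it before code changes
LOG_FILE = Path("run_log.jsonl")   # every run is logged so the repair loop has evidence

def log_event(event: dict) -> None:
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(event) + "\n")

def repair(record: dict) -> dict:
    # If a record violates the schema, log it and backfill explicit nulls instead of guessing.
    missing = [key for key in ORDER_SCHEMA if key not in record]
    if missing:
        log_event({"type": "schema_violation", "missing": missing, "record": record})
        record = {**{key: None for key in ORDER_SCHEMA}, **record}
    return record
```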
BLAST
Best for:
Full projects with external services, schemas, logs, and deployable artifacts.
Strength:
Prevents guessing and enforces self-healing.
BLAST Lite
Best for:
Small systems where you still need data-first design and a repair loop, but with less ceremony.
Strength:
Speed with guardrails.
One clean mental model
DOE
Turns a goal into an executable package.
Forge
Locks constraints so outputs stop drifting.
Contract
Makes pipeline handoffs deterministic.
Ladder
Executes safely in artifact rungs.
BLAST
Builds durable automation with memory and repair.
Triad
Hardens the idea so it wins under reality.
Pairing

Best pairings: use systems together based on the kind of failure you are fighting.

Idea hardening:
Triad → DOE (lock what “wins,” then generate the runnable package).
Drift prevention:
Forge → DOE (constraints first, then runnable plan).
Pipeline reliability:
Contract → Ladder (clean data handoffs + stepwise artifacts).
Full automation:
BLAST → Contract → Tools (schema + SOP + atomic tools, with repair loop).
High risk delivery:
Forge Strict → Ladder Audit (tight constraints and audited rungs).
Checks
Assumptions:
You want documentation that describes each prompt system (DOE, Triad, Forge, Contract, Ladder, BLAST), including when to use it, what it produces, and the variants.
Edge cases handled:
Overbuilding small tasks is called out per system; vague constraints and “fake steps” failure modes are flagged; variants are explained so users don’t choose the wrong level of ceremony.
Validation method:
Pick one real task and run: (1) Triad to harden, (2) Forge to lock invariants, (3) DOE to generate artifacts and tests, (4) Ladder to execute, (5) Contract to formalize one handoff. Confirm each stage produces a concrete, testable output.
Go Get It!