Required: name, tag, prompt.
Optional: prompt2, copyLabel, copyLabel2, icon (an SVG string).
Save persists in this browser via localStorage.
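Saving boils down to appending the entry to a JSON array in localStorage. A minimal sketch, assuming a storage key of "customPrompts" (the app's actual key and schema may differ); `store` is any localStorage-like object with getItem/setItem, so the sketch also runs outside a browser:

```javascript
// Hypothetical save routine — key name and validation are assumptions,
// not the tool's confirmed internals.
function savePrompt(store, entry) {
  // name, tag, and prompt are required; everything else is optional.
  for (const field of ["name", "tag", "prompt"]) {
    if (!entry[field]) throw new Error("Missing required field: " + field);
  }
  const saved = JSON.parse(store.getItem("customPrompts") || "[]");
  saved.push(entry); // optional fields (prompt2, copyLabel, copyLabel2, icon) ride along as-is
  store.setItem("customPrompts", JSON.stringify(saved));
  return saved.length;
}
```

In the browser you would pass `window.localStorage` as `store`; the indirection just keeps the sketch testable anywhere.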
What it is: A high-stakes wrapper that tells the model to stop playing safe and do its best work.
Why it exists: Models tend to default to generic output unless pushed. Swing forces a more complete pass and a quick self-check.
What it is: Three tiny questions that force a stronger version of whatever you are doing.
Why it exists: They push the model from “works” to “undeniable” by forcing certainty, inevitability, and step change.
What it is: A four-quadrant decision engine. It forces a 360 view instead of one storyline.
Why it exists: Most people only ask “what happens if we do it.” That creates blind spots. Cartesian forces the opposite worlds too.
What it is: A spec generator. Turns messy intent into requirements, constraints, tests, and guardrails.
Why it exists: Spec is greater than code. Code without a spec is drift waiting to happen.
What it is: A deterministic control panel that turns intent into measurable drivers, frictions, and a bounded equation.
Why it exists: Language is slippery. Equations force clarity: what matters, how much, and how you will measure it.
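One illustrative way to read "bounded equation": weighted drivers minus weighted frictions, clamped to a fixed range. This sketch is an assumption about the shape of such an equation, not the tool's canonical formula — your session defines the real variables, weights, and bounds:

```javascript
// Illustrative only: drivers push the score up, frictions pull it down,
// and the clamp keeps the result inside a fixed 0–100 range.
function boundedScore(drivers, frictions) {
  const sum = terms => terms.reduce((acc, t) => acc + t.weight * t.value, 0);
  return Math.min(100, Math.max(0, sum(drivers) - sum(frictions)));
}

// e.g. one driver (weight 2, value 30) against one friction (weight 1, value 10):
// 2*30 - 1*10 = 50, already inside the bounds.
```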
What it is: A structured way to raise stakes without hype using 1×, 3×, 10× (3×3), plus one catalyst.
Why it exists: It forces real mechanisms and failure modes, not “this is huge” vibes.
What it is: A structure extractor that strips topic content and leaves the control skeleton: sections, logic, conditionals, counts, and contracts.
Why it exists: It makes prompts repeatable. You can separate “front-end content” from “back-end control logic,” then reuse the same structure across projects without dragging old topic baggage.
How to choose fast:
Comparison for best results: pair prompts based on what each one does best.
What the audit does: Forces a “best effort” pass by raising consequence, pushing specificity, and surfacing the tradeoffs you inherit at peak performance.
Stakes (HP) amplifies output intensity. Stakes (Sec) amplifies safety and verification discipline.
What the audit does: Converts “acceptable” into “inevitable” by forcing proof, mechanism, and a step-change improvement path.
What the audit does: Performs a counterfactual scan that exposes blind spots, kills one-sided thinking, and yields a decision rule instead of a narrative.
What the audit does: Converts intent into a testable contract by identifying ambiguity, forcing acceptance checks, and collapsing “interpretation space.”
What the audit does: Creates a measurable control panel that separates drivers from frictions, defines proxies, and makes improvement tunable over time.
What the audit does: Identifies failure modes that look correct at a glance, defines stop-the-line conditions, and demands minimum verification before you ship.
What the audit does: Separates structure from content so you can reuse the control skeleton without topic leakage, brand bleed, or accidental private detail carryover.
What the audit does: Forces a clean scope lock decision by pricing the tradeoffs: what you gain in stability versus what you lose in exploration and future adaptability.
What the audit does: Copies the full Pro audit bundle in the exact pill order so your runs stay consistent and repeatable.
What the audit does: Produces a clean synthesis after you run audits, turning scattered outputs into corrected beliefs, decision rules, and next actions.
What the audit does: Generates a fresh Cartesian set tailored to the session’s true focus, creating new questions you would not think to ask using the standard templates.
Wildcard should be novel, not random. If it is not clearly tied to the session focus, rerun with a tighter “true focus” sentence.
How is Cartesian logic god mode?
Because it forces the model to walk around the idea like it is an object, not a vibe.
Most prompts only ask for the “do it” path. Cartesian makes you cover four different causal angles, and that changes the quality of thinking fast.
1) It kills one-sided thinking
If you only ask “what happens if we do X,” you get the optimistic story.
Cartesian forces “what happens if we do not,” which surfaces the cost of inaction, opportunity cost, and status quo risks.
2) The two “wouldn’t happen” questions catch fake fears and fake promises
These are the ones most people never ask.
What wouldn’t happen if we did?
This exposes overclaiming. It strips away magical thinking like “if we ship this, support load disappears” or “if we hire, chaos ends.”
What wouldn’t happen if we didn’t?
This exposes doom narratives. It often reveals “we are not actually going to die if we wait,” or “this risk is not as immediate as it feels.”
Those two questions are basically a lie detector for both hype and panic.
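The two pairs above can be captured in a tiny template. The phrasing here is illustrative, not the tool's canonical wording:

```javascript
// The four Cartesian quadrants for any action: do / don't, crossed with
// "what happens" / "what wouldn't happen".
function cartesianQuestions(action) {
  return [
    `What happens if we do ${action}?`,
    `What happens if we do not ${action}?`,
    `What wouldn't happen if we did ${action}?`,
    `What wouldn't happen if we didn't ${action}?`,
  ];
}
```

Feeding the model all four at once is what forces the 360 view instead of the single optimistic storyline.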
3) It surfaces hidden assumptions automatically
Every quadrant forces the model to state what must be true for that outcome to occur.
When assumptions are wrong, hallucinations are more likely.
Cartesian flushes those assumptions into daylight.
4) It produces a decision rule, not just analysis
Most thinking tools end with “it depends.”
Cartesian naturally leads to thresholds like:
Do it when X condition is true, do not when Y condition is true.
That is operational.
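"Operational" means the rule can literally be written as a predicate. The condition names and thresholds below are placeholders for whatever your own quadrant analysis yields:

```javascript
// Hypothetical decision rule distilled from a Cartesian pass.
function decide({ costOfInaction, downsideRisk }) {
  if (costOfInaction >= 7) return "do it";
  if (downsideRisk >= 7) return "do not";
  return "wait and re-check";
}
```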
5) It reduces hallucinations by requiring counterfactual consistency
A hallucination often shows up as a contradiction across quadrants.
If the model says “we must do it or we lose everything” but also says “if we don’t, nothing really changes,” the inconsistency becomes obvious.
6) It is short, but forces deep structure
It is “god mode” because it gives you a lot of leverage per token.
You get a 360 scan without writing a long spec or doing a full verification pass.
One clean way to think of it:
Swing increases output intensity.
Cartesian increases decision correctness.
How to choose fast:
What it is: A runnable-package generator. It turns a goal into an executable plan with artifacts and tests.
Why it exists: Most “plans” are vibes. DOE produces concrete steps, templates, acceptance checks, and early failure detection.
What it is: Three questions that harden an idea: undeniable, inevitable, next level upgrade.
Why it exists: It forces proof, default behaviors, and architectural leverage, so the plan survives skepticism and low motivation.
What it is: A constraint engineer. It converts messy intent into testable invariants, forbidden outcomes, and quality gates.
Why it exists: Drift happens when “must” is implicit. Forge makes “must” explicit and testable.
What it is: A schema and error model for one pipeline step. It defines input shape, output shape, and what failure looks like.
Why it exists: Pipelines fail at handoffs. Contracts make the handoff deterministic and debuggable.
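A minimal sketch of what such a contract can look like — every field and error name here is illustrative, not a real schema from this tool:

```javascript
// One pipeline step: declared input shape, output shape, and an explicit
// error model. Failure is part of the contract, not an afterthought.
const step = {
  input:  { required: ["userId", "rawText"] },
  output: { required: ["userId", "summary"] },
  errors: ["MISSING_FIELD"],
};

// Validate at the handoff instead of letting bad data flow downstream.
function validate(shape, record) {
  const missing = shape.required.filter(f => !(f in record));
  return missing.length === 0
    ? { ok: true }
    : { ok: false, error: "MISSING_FIELD", missing };
}
```

Because the failure shape is declared up front, the step after this one can branch on `error` instead of guessing why the data looks wrong.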
What it is: A stepwise execution ladder where every rung produces an artifact and a stop check.
Why it exists: Big steps hide failures. Small rungs surface failures early and keep progress shippable.
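The ladder mechanic can be sketched in a few lines — rung names, artifacts, and checks below are placeholders, not this tool's actual protocol:

```javascript
// Each rung must emit an artifact and pass its stop check before the
// next rung runs; the first failed check halts the ladder with the
// artifacts produced so far (progress stays shippable).
function runLadder(rungs) {
  const artifacts = [];
  for (const rung of rungs) {
    const artifact = rung.run();
    if (!rung.check(artifact)) return { ok: false, failedAt: rung.name, artifacts };
    artifacts.push({ name: rung.name, artifact });
  }
  return { ok: true, artifacts };
}
```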
What it is: A system-building protocol for deterministic automation with memory files, schemas, repair loops, and deployment triggers.
Why it exists: Most automation fails from guessing business logic, skipping schemas, and not logging. BLAST forces data-first design and self-repair.
Best pairings: use systems together based on the kind of failure you are fighting.