Parker Rex · June 4, 2025

CRACKED AI Debug Workflow Cursor & VSCode Users Are Missing

A cracked AI debugging workflow for developers: cut debugging time, plan with PRDs, bind prompts to hotkeys, and boost productivity in Cursor and VS Code.

Show Notes

Parker lays out a practical, multi-level AI-assisted debugging workflow tailored for Cursor and VS Code users, showing how to move from prompt-driven RCA to self-healing agents and offering concrete tooling guidance.

Level 1: Prompt-driven incident response workflow

  • You bind prompts to keys (e.g., B1) and work through a role-based chain that mimics real-world debugging roles.
  • Roles and flow:
    • Incident Response Engineer: takes logs, traces, and observability data and produces a root cause analysis (RCA).
    • Solutions Architect: takes the RCA and generates a sequence diagram and a markdown design doc; key responsibilities include designing the solution, tech specs, and the integration plan.
    • Expert Incident Remediation Engineer: uses the prior outputs to craft the final remediation plan.
  • How it looks in the IDE:
    • Open Cursor or VS Code with Augment, press B1, paste the error context, and let the pipeline generate the outputs.
  • Outputs you get:
    • RCA, sequence diagram, a step-by-step remediation guide, and a formal design/document package.
  • Why this matters:
    • Frontloads planning, requirements, UI/UX, and architecture so the actual coding/debugging phase is lighter.
  • Next steps for you:
    • Check the prompts in the VI AI-SLC repository (prompts section) and try applying them to a real bug.
  • Quick action tip:
    • Use the three-step prompt chain to turn a bug report into a concrete remediation plan.

Actionable takeaways:

  • Bind a basic incident-response prompt to a hotkey (like B1) for quick invocation.
  • Treat RCA as the input to subsequent design docs, not the final word—iterate with the solution architect prompt.
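The Level 1 chain is easy to sketch in code: each role's output becomes the next role's input. This is an illustrative TypeScript sketch, not the repo's actual prompts; `callModel` is a stand-in for whatever completion API you use (Cursor, Augment, or a model SDK).

```typescript
// Sketch of the three-role prompt chain from Level 1 (assumed prompt
// wording; the real prompts live in the ai-slc repo).
type Role = "incident-response" | "solutions-architect" | "remediation";

const PROMPTS: Record<Role, string> = {
  "incident-response":
    "You are an Incident Response Engineer. From the logs and traces below, produce a root cause analysis (RCA).",
  "solutions-architect":
    "You are a Solutions Architect. From the RCA below, produce a sequence diagram and a markdown design doc.",
  "remediation":
    "You are an Expert Incident Remediation Engineer. From the design doc below, produce a step-by-step remediation plan.",
};

// Placeholder model call; swap in your real completion API here.
async function callModel(prompt: string, context: string): Promise<string> {
  return `[output for: ${prompt.slice(0, 40)}...]\n${context}`;
}

// Run the chain: each role's output becomes the next role's context,
// so the remediation plan is grounded in the RCA and the design doc.
async function runChain(errorContext: string): Promise<string> {
  let context = errorContext;
  const order: Role[] = ["incident-response", "solutions-architect", "remediation"];
  for (const role of order) {
    context = await callModel(PROMPTS[role], context);
  }
  return context; // the final remediation plan
}
```

Binding this behind a hotkey like B1 then reduces an incident to: paste the error context, invoke `runChain`, read the plan.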

Level 2: CLI/agent workflow with persistent memory

  • Moving beyond prompts, Level 2 introduces a CLI/agent flow that progresses through the SDLC with memory persistence.
  • How it works:
    • You can iterate with the chat, then switch to an agent mode.
    • The workflow uses its own memory (local) via a lock file to track progress with tags and steps (e.g., next step, prestep).
    • The flow moves from the previous step to the next using a clear SDLC progression cue like “SDLC next.”
  • Tool-agnostic approach:
    • You can mix tools you love (Cursor, VS Code Augment) with other lightweight agents; the setup isn’t tied to a single environment.
  • What I like about Level 2:
    • It keeps state locally, so you can pause, resume, and maintain context across steps without losing progress.
  • What you feed it:
    • The PRD, the previous step’s outputs, and a prestep that primes the next action.
  • End result:
    • A smoother transition from planning to implementation with an auditable, stepwise history.
  • Note on two workflows:
    • There are two separate workflows in the repo: one for new feature generation and another dedicated to debugging workflows. Both can be extended or PR’d.

Actionable takeaways:

  • Adopt a local memory mechanism (lock file) to persist step context across agent turns.
  • Use explicit prestep and next-step prompts to guide the agent through the SDLC.
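The lock-file memory can be as simple as a JSON file the agent reads and writes between turns. This is a minimal sketch of the idea; the file name, step names, and field names here are assumptions, not the repo's actual format.

```typescript
import * as fs from "fs";

// Sketch of the Level 2 local-memory idea: a JSON "lock file" that
// persists SDLC progress across agent turns (assumed schema).
const LOCK_FILE = ".sdlc-lock.json";
const STEPS = ["prd", "design", "implement", "test", "review"] as const;
type Step = (typeof STEPS)[number];

interface LockState {
  currentStep: Step;
  history: Step[]; // the auditable, stepwise history
}

function loadState(): LockState {
  if (!fs.existsSync(LOCK_FILE)) {
    return { currentStep: STEPS[0], history: [] };
  }
  return JSON.parse(fs.readFileSync(LOCK_FILE, "utf8")) as LockState;
}

// The "SDLC next" cue: record the finished step, advance, and persist,
// so you can pause and resume without losing context.
function sdlcNext(): LockState {
  const state = loadState();
  const idx = STEPS.indexOf(state.currentStep);
  const next = STEPS[Math.min(idx + 1, STEPS.length - 1)];
  const updated: LockState = {
    currentStep: next,
    history: [...state.history, state.currentStep],
  };
  fs.writeFileSync(LOCK_FILE, JSON.stringify(updated, null, 2));
  return updated;
}
```

Because the state lives in a plain file next to the code, any tool in the mix (Cursor, Augment, a CLI agent) can read the same progress marker.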

Level 3: Self-healing code with AI debug agents

  • The frontier Parker mentions is self-healing code with AI-driven debug agents.
  • Core idea:
    • Pipe logs and traces from your observability stack (OpenTelemetry, Prometheus, Loki, Grafana) into AI-driven agents that can diagnose and even remediate.
    • The Deno angle: use Deno’s native capabilities to wire pipelines for cross-stack debugging (Next.js, FastAPI, etc.), reducing boilerplate in multi-framework apps.
  • Why this matters:
    • Proactive QA and debugging, not just reactive fixes. Agents can handle routine issues and provide repeatable remediation patterns.
  • Real-world context:
    • Mark Zuckerberg has referenced a heavy role for agents in code production; Parker notes VI’s ecosystem and potential product directions (marketplace, discounts, workshops).
  • What this enables:
    • A more automated, self-healing feedback loop: detect issue, trace to root cause, apply fixes, verify, and close the loop.
  • Practical setup guidance:
    • Leverage OpenTelemetry to feed data into your AI agents.
    • Connect your observability stack to agents via Prometheus/Loki/Grafana and an integration layer (Deno-based or equivalent).
  • What to expect next:
    • VI is building toward products and community activities (workshops, product roasts) that will help teams adopt these patterns.

Actionable takeaways:

  • Start with a robust observability baseline (OpenTelemetry + Prometheus/Loki/Grafana) to feed AI agents.
  • Experiment with a Deno-based pipeline to reduce friction when wiring multiple frameworks together.
  • Treat self-healing as a longer-term target: prototype a minimal agent that can suggest fixes, then expand to automated remediation.
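The detect-diagnose half of that loop can be prototyped before any automated remediation exists. The sketch below shows the shape only: the record type loosely follows an OpenTelemetry log record, and `diagnose` is a placeholder for a real agent call, not an actual API.

```typescript
// Minimal sketch of the self-healing loop's front half: classify
// incoming log records and route errors to a (stubbed) debug agent.
interface LogRecord {
  severity: "INFO" | "WARN" | "ERROR";
  body: string;
  traceId?: string;
}

interface Diagnosis {
  traceId?: string;
  rootCause: string;
  suggestedFix: string;
}

// Placeholder for the AI debug agent; in practice this would call a
// model with the log body plus the correlated trace pulled from your
// Prometheus/Loki/Grafana backend.
function diagnose(record: LogRecord): Diagnosis {
  return {
    traceId: record.traceId,
    rootCause: `error observed: ${record.body}`,
    suggestedFix: "apply remediation pattern and re-run verification",
  };
}

// Detect -> diagnose; "apply fix -> verify -> close the loop" would
// come later, once you trust the agent's suggestions.
function processLogs(records: LogRecord[]): Diagnosis[] {
  return records.filter((r) => r.severity === "ERROR").map(diagnose);
}
```

Starting with suggestions rather than automated fixes keeps a human in the loop while you build confidence in the remediation patterns.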

Practical debugging best practices (bonus)

  • Proactive formatting and linting:
    • Use strict formatting and linting to catch issues early (the video mentions tools like Ruff and Biome for this purpose).
  • End-to-end type safety:
    • Favor strict TypeScript and tRPC to reduce runtime surprises and improve AI grounding.
  • Stick to established technologies:
    • Don’t chase new stacks just for novelty—the agents will be stronger with a stable base.

Actionable takeaways:

  • Implement strict linting/formatting as part of the CI to reduce token waste and improve accuracy.
  • Prioritize type safety and a proven tech stack to keep agent outputs reliable.
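The end-to-end type safety point boils down to one source of truth for your API shape, shared by server and client, so the compiler (and the AI agent reading the code) catches drift before runtime. tRPC automates this; the hand-rolled version of the idea, with hypothetical names, looks like:

```typescript
// One shared type describes the API response; server and client both
// reference it, so a field rename is a compile error everywhere.
// (All names here are illustrative, not from the video's codebase.)
interface GetUserResponse {
  id: string;
  name: string;
}

// Server-side handler returns the shared type.
function handleGetUser(id: string): GetUserResponse {
  return { id, name: "Ada" };
}

// Client-side consumer accepts the shared type; a typo like
// `res.fullName` fails at compile time, not in production.
function renderUser(res: GetUserResponse): string {
  return `${res.id}: ${res.name}`;
}
```

This is also what "improve AI grounding" means in practice: precise types give the agent a machine-checked contract to reason against.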

Repos and how to contribute

  • Primary repo: vibewai/ai-slc
    • Contains prompts and the multi-level workflow concept.
  • Prompts section:
    • The prompts are organized for new feature generation but will include debugging workflows soon.
  • How to participate:
    • Open PRs to contribute debugging workflow improvements, prompts, and integrations.
    • The project is tool-agnostic (Cursor, VS Code Augment, or other editors), so contributions can target prompts, memory management, or integration patterns.

Actionable takeaways:

  • Explore vibewai/ai-slc and try a PR that adds or refines a debugging flow prompt.
  • Experiment with both a Cursor/VS Code setup and a CLI/agent setup to see what level works best for you.

If you found at least one useful idea, consider liking the video and subscribing for more practical guides on building AI-assisted development workflows.