Show Notes
Parker walks through the practical side of Cursor context, then grounds it in a real-world refactor of his Echo project. You’ll get a concrete workflow for prompts, code structure, and staying productive without tool-hopping.
Core ideas: context, intent, and state
- Context is a blend of your intent (what you’re trying to do) and the current state (what exists in your codebase).
- Don’t dump your whole repo into context. Be surgical and focused.
- Use explicit signals (Cursor’s @ symbols) and write reusable rules to keep the team aligned (see the rule sketch below).
- Patterns and rules help ensure consistency across data fetching, mutations, tests, etc.
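To make rules concrete: Cursor supports project rule files under .cursor/rules/ (format per Cursor’s project-rules feature; the rule content here is illustrative, not from the episode):

  .cursor/rules/data-fetching.mdc
  ---
  description: Conventions for data fetching and mutations
  globs: src/**/*.ts
  alwaysApply: false
  ---
  - Fetch data through the domain service layer, never directly from UI code.
  - Route all mutations through the shared mutation helper so errors are handled uniformly.
  - New endpoints get a test beside them in tests/.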
Patterns, rules, and tooling
- Write rules that capture reusable knowledge so the team can stay consistent over time.
- Centralize common operations (fetching data, mutating data, tests) so there’s a single source of truth.
- Extend Cursor with MCP (Model Context Protocol) servers to surface live data from external systems for better agent awareness (example config below).
- Avoid relying on auto-gathered context; curated signals beat automatic collection.
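As a sketch, an MCP server can be registered in a project-level .cursor/mcp.json; the server name, package, and env var below are hypothetical:

  {
    "mcpServers": {
      "analytics": {
        "command": "npx",
        "args": ["-y", "my-analytics-mcp-server"],
        "env": { "ANALYTICS_API_KEY": "<your-key>" }
      }
    }
  }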
Short-lived tools and human-in-the-loop (HITL) workflows
- A powerful pattern: let the agent write short-lived tools it can run to gather more context (sketched after this list).
- HITL approach: run the code, inspect outputs, and let the model review results to refine the next steps.
- Debug statements and live outputs can guide the model’s reasoning without overburdening the prompt.
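As a sketch of the pattern (the file path and data shape are assumptions, not from the episode), here is the kind of throwaway TypeScript script an agent might write, run, and then delete:

  // scripts/inspect-metadata.ts -- short-lived: run once, read the output, discard.
  import { readFileSync } from "node:fs";

  type Video = { id: string; title: string; chapters?: unknown[] };

  const videos: Video[] = JSON.parse(readFileSync("data/videos.json", "utf8"));
  const missingChapters = videos.filter((v) => !v.chapters?.length);

  // The printed summary becomes fresh context for the model's next step.
  console.log(`videos=${videos.length} missingChapters=${missingChapters.length}`);
  console.log(missingChapters.slice(0, 5).map((v) => v.id).join("\n"));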
Echo case study: a practical workflow from idea to refactor
- Project goal: automate video metadata workflows (thumbnails, chapter markers, metadata) to reduce manual boilerplate.
- Prompt discipline: keep the work anchored to the prompt. Start with a PRD (product requirements document) to shape the problem and solution.
- Research and opinions: ask questions about the codebase, gather multiple perspectives, and compare outputs to decide the direction.
- Back-end structure (TypeScript focus):
- A singleton client to manage external services (see the sketch after the layout below).
- Domain-first layout: domains (e.g., video processor), services, interfaces, DTOs, adapters.
- Infrastructure, API endpoints, utilities, tests, and scripts.
- Example architecture layout:
- main package for the video processor
- domains/model
- services
- interfaces
- dtos (data transfer objects)
- adapters (one per service)
- infrastructure, api, utils, tests, scripts
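To ground the singleton-client idea, a minimal sketch (the episode only says a singleton manages external services; the GCS choice and file path are assumptions):

  // src/video_processor/infrastructure/clients.ts -- illustrative singleton
  import { Storage } from "@google-cloud/storage";

  class Clients {
    private static instance?: Clients;
    readonly storage: Storage;

    private constructor() {
      // Reads credentials from the environment (GOOGLE_APPLICATION_CREDENTIALS).
      this.storage = new Storage();
    }

    // Lazily create one shared instance for the whole process.
    static get(): Clients {
      return (Clients.instance ??= new Clients());
    }
  }

  export const clients = Clients.get();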
- File-tree and prompts:
- Include the current file tree and a before/after snapshot in prompts.
- Use relative paths to anchor the model to your real project structure.
- Task extraction is critical: write tasks as atomic steps, not mega-goals (skeleton below).
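A hypothetical skeleton for such a prompt (the paths and tasks are invented for illustration):

  Current tree (paths relative to repo root):
    src/index.ts
    src/gcs.ts

  Target tree:
    src/video_processor/adapters/gcs-adapter.ts
    src/video_processor/services/thumbnail-service.ts

  Tasks (atomic, one change each):
    1. Create src/video_processor/adapters/ and move src/gcs.ts there as gcs-adapter.ts.
    2. Update imports in src/index.ts to the new path.
    3. Run the test suite and fix any imports the move broke.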
- Taskmaster vs deeper prompts:
- Taskmaster is useful, but for deeper Python work the richer, prompt-driven approach can yield more detailed, production-ready tasks.
- If you want to push toward a refactor, generate a long, detailed task list (hundreds of lines) that covers concrete steps.
- Example-driven prompts:
- Provide examples (e.g., a GCS adapter; sketched below) so the model can adapt to your style and expectations.
- Review the examples for correctness and realism; don’t assume the model wrote perfect code.
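A minimal sketch of such an example, assuming the @google-cloud/storage client; the StoragePort interface and method names are invented for illustration:

  // src/video_processor/adapters/gcs-adapter.ts -- illustrative example to hand the model
  import { Storage } from "@google-cloud/storage";

  export interface StoragePort {
    upload(key: string, data: Buffer): Promise<void>;
    download(key: string): Promise<Buffer>;
  }

  export class GcsAdapter implements StoragePort {
    constructor(
      private readonly storage: Storage,
      private readonly bucket: string,
    ) {}

    async upload(key: string, data: Buffer): Promise<void> {
      await this.storage.bucket(this.bucket).file(key).save(data);
    }

    async download(key: string): Promise<Buffer> {
      // download() resolves to a one-element tuple holding the contents.
      const [contents] = await this.storage.bucket(this.bucket).file(key).download();
      return contents;
    }
  }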
- Architecture shift: monolith to hexagonal (wiring sketched below)
- Use a detailed implementation plan to guide the refactor.
- Incorporate an Anthropic-style coding-assistant prompt to maintain momentum and keep the plan aligned with reality.
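To make the hexagonal direction concrete (all names invented; the port is a trimmed version of the one in the adapter sketch above): domain code depends only on a port, so the real adapter and a test fake are interchangeable at the edge:

  // Domain service sees only the port, never GCS directly.
  interface StoragePort {
    upload(key: string, data: Buffer): Promise<void>;
  }

  class ThumbnailService {
    constructor(private readonly storage: StoragePort) {}

    async publish(videoId: string, png: Buffer): Promise<string> {
      const key = `thumbnails/${videoId}.png`;
      await this.storage.upload(key, png);
      return key;
    }
  }

  // In tests, swap in an in-memory fake instead of the real adapter.
  class InMemoryStorage implements StoragePort {
    readonly files = new Map<string, Buffer>();
    async upload(key: string, data: Buffer): Promise<void> {
      this.files.set(key, data);
    }
  }

This is what makes the refactor incremental: each external dependency moves behind its own port, one at a time.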
- Memory bank and outputs:
- Maintain an overview, task list, file tree, comparisons, PRD, and examples as a cohesive prompt memory.
- This helps keep context across interactions without losing prior decisions.
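One way to lay that memory bank out on disk (file names are illustrative, derived from the artifacts listed above):

  memory-bank/
    overview.md
    tasks.md
    file-tree.md
    comparisons.md
    prd.md
    examples/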
- Practical takeaway: stay in a workable toolset
- For TypeScript, you may prefer Taskmaster, but don’t rely on one tool for every problem.
- For Python, you may opt for a different approach if it better fits the gap you’re addressing.
- Community and learning
- The Vibe with AI Discord and weekly workshops are great for real-world feedback and accountability.
- The project Echo and its workflow illustrate how to balance learning with production progress.
Quick takeaways and actionable steps
- Be surgical with context: feed just the intent and the relevant state; avoid overloading the model.
- Use @ symbols and reusable rules to codify team conventions.
- Let the agent generate short-lived, testable tools to expand context when needed (HITL-friendly).
- Start with a PRD and a concrete file-tree snapshot to ground prompts in reality.
- Break tasks into atomic steps; avoid “one giant task” prompts.
- Build competency by writing or iterating on critical paths yourself; use prompts to guide and accelerate, not replace learning.
- When refactoring, propose architecture incrementally (monolith -> hexagonal) with a clear implementation plan and real-world examples.
- Pick a stable toolset per language and use-case; don’t tool-hop just for the sake of novelty.
- Leverage community resources (Discord, workshops) to sanity-check approaches and get feedback.
Code example: proposed TS project structure (illustrative)
src/
video_processor/
domains/
model/
services/
interfaces/
dtos/
adapters/
infrastructure/
api/
utils/
tests/
scripts/
This mirrors Parker’s described layout: a domain-driven layout with adapters for each service, plus infrastructure, API, tests, and scripts.
Links
- Cursor AI (context and patterns for AI coding)
- Taskmaster AI (task extraction tool for prompts to tasks)
- FastAPI (Python web framework)
- Vibe with AI Discord (community and weekly workshops)
- Raycast (prompt management and quick context cues)
- Raycast Prompt Explorer (browse and add prompts)