Parker Rex Daily, January 8, 2026

3 Years of AI Coding Lessons in 30 Seconds (or 5 if you want)

3 years of AI coding lessons in 30 seconds: why test multiple AI models, ship real product value, and avoid model lock-in in 2025.

Show Notes

Parker distills 3 years of AI-coding lessons into punchy, actionable takeaways for daily work: rotate models, embrace refactoring with AI, and keep your tooling resilient with a two-model workflow.

Key takeaways at a glance

  • Don’t get married to a single model. Rotate and test during promos; model quality varies over time.
  • Expect heavy refactoring. Use AI to generate a big backlog, then prioritize and split work across multiple agents.
  • Know the primitives across languages. The easier you map concepts (not just syntax), the faster you adapt and refactor.
  • Don’t fight the model’s preferred workflow. Let tooling do what it does best (e.g., Bun, npm conventions) and adapt.
  • Compare OpenAI, Claude, and Gemini on coding tasks. OpenAI often leads today; Gemini has strengths in other areas (like Flow for video). Test to decide what fits.
  • Get out of the harness. Two-model workflows beat a single CLI wrapper. Use two tools in parallel for closer-to-the-metal control.

Model strategy: rotate, don’t lock in

  • Don’t stick to one model. Quality shifts with updates; a "best" label is time-bound.
  • Use promo windows to push limits:
    • Sign up for one-month trials during promotions.
    • Run your typical tasks to compare results across models.
  • Expect variability even with the same product name across versions.
  • Practical approach:
    • Keep OpenAI and Claude (and Gemini where useful) in rotation.
    • Periodically re-evaluate which model handles your core tasks best.

Refactor with AI: a practical playbook

  • After a sprint of output-heavy work, ask the model for big refactor opportunities:
    • “Give me an extended list of 30+ refactors that would improve this codebase.”
  • Use the list to create tracks and assign ownership to multiple agents.
  • Visualizing workflow:
    • Left pane: a running, prioritized refactor backlog.
    • Right pane: multiple agents working in parallel on the tracks.
  • Follow-up: have the model prioritize by impact/ROI and split into separate, trackable items.
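The "prioritize by impact/ROI" step above can be sketched in code. This is a minimal illustration, not anything Parker prescribes: the `Refactor` shape and the 1–5 scoring scale are assumptions, standing in for whatever backlog format your agents actually consume.

```typescript
// Sketch: scoring a model-generated refactor list by impact vs. effort.
// The Refactor shape and scoring scale are illustrative assumptions.

interface Refactor {
  title: string;
  impact: number; // 1-5, estimated payoff
  effort: number; // 1-5, estimated cost
}

// Higher score = better ROI: big impact, small effort.
const score = (r: Refactor): number => r.impact / r.effort;

// Sort the backlog so agents can pull from the top and work in parallel.
function prioritize(backlog: Refactor[]): Refactor[] {
  return [...backlog].sort((a, b) => score(b) - score(a));
}

const backlog: Refactor[] = [
  { title: "Extract shared API client", impact: 5, effort: 2 },
  { title: "Rename legacy utils", impact: 2, effort: 1 },
  { title: "Rewrite auth module", impact: 4, effort: 5 },
];

console.log(prioritize(backlog).map((r) => r.title));
// → ["Extract shared API client", "Rename legacy utils", "Rewrite auth module"]
```

Feed the model's 30+ suggestions into a structure like this, let it fill in the impact/effort estimates, and the left-pane backlog sorts itself.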

Know your primitives

  • Language primitives matter more than exact syntax.
  • If you know the core concepts (e.g., how a feature maps across languages), you can navigate refactors quickly.
  • Quick language transitions:
    • C# to TypeScript is usually smoother because TypeScript is syntactically friendly to C-style concepts.
    • Other stacks (Rails, Haskell) require more upfront bridging work.
  • The takeaway: aim to understand the fundamentals so you can reason about changes regardless of the language.
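A concrete example of a primitive that maps cleanly across languages: "filter, then project, then aggregate a sequence." In C# that's LINQ (`Where` / `Select` / `Sum`); in TypeScript the same concept lands on `filter` / `map` / `reduce`. The `Order` type here is invented purely for illustration.

```typescript
// Concept: filter, project, aggregate a sequence.
// C# equivalent (LINQ): orders.Where(o => o.Paid).Select(o => o.Total).Sum();

interface Order {
  paid: boolean;
  total: number;
}

function paidRevenue(orders: Order[]): number {
  return orders
    .filter((o) => o.paid)           // Where
    .map((o) => o.total)             // Select
    .reduce((sum, t) => sum + t, 0); // Sum
}

const orders: Order[] = [
  { paid: true, total: 40 },
  { paid: false, total: 99 },
  { paid: true, total: 10 },
];

console.log(paidRevenue(orders)); // → 50
```

If you recognize the primitive, the refactor is mechanical in either language; the syntax is the easy part.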

Don’t fight upstream from the model

  • If the model leans toward a tool or pattern, don’t fight it—adapt.
  • Example: Bun vs traditional npm workflows. If the model pushes Bun, consider leaning into its strengths rather than forcing old patterns.
  • Let the model guide tool choice to reduce friction and errors.

Claude vs OpenAI vs Gemini: where the value is

  • OpenAI remains very strong for coding tasks today; test across the big players to know what’s best for your use case.
  • Gemini:
    • Has advantages in areas like video tooling (Flow) and certain UI experiences.
    • The Gemini Ultra bundle offers one path for media-related needs; pricing and depth can be compelling for specific workflows.
  • Claude:
    • Strong in general reasoning and certain tasks; coding tooling can lag behind OpenAI for now.
    • Useful to pair with OpenAI for a broader toolset, depending on your needs.
  • Actionable approach:
    • Run side-by-side tests for your typical coding tasks.
    • Use Flow or similar video tooling when you have video-generation needs.
    • Keep an eye on pricing windows and feature updates; switch when a promo or product improvement makes sense.
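The "run side-by-side tests" step can be as simple as a small harness that sends one prompt to several model runners and collects the answers for comparison. A minimal sketch: the runners below are stubs; in a real setup they would wrap the OpenAI and Anthropic SDKs or CLI tools, and those calls are deliberately left out here.

```typescript
// Sketch: send the same coding prompt to multiple model "runners" and
// gather all answers for manual side-by-side comparison.
// The stub runners are placeholders, not real API clients.

type Runner = (prompt: string) => Promise<string>;

async function sideBySide(
  prompt: string,
  runners: Record<string, Runner>
): Promise<Record<string, string>> {
  // Fan the prompt out to every runner in parallel.
  const entries = await Promise.all(
    Object.entries(runners).map(
      async ([name, run]) => [name, await run(prompt)] as [string, string]
    )
  );
  return Object.fromEntries(entries);
}

// Stub runners standing in for real model clients.
const runners: Record<string, Runner> = {
  openai: async (p) => `openai answer to: ${p}`,
  claude: async (p) => `claude answer to: ${p}`,
};

sideBySide("Refactor this function to be pure.", runners).then((results) => {
  for (const [model, answer] of Object.entries(results)) {
    console.log(`[${model}] ${answer}`);
  }
});
```

Swap the stubs for your actual clients, run your typical sprint tasks through it during a promo window, and the "best model right now" question answers itself.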

Get out of the harness: two-model workflow

  • Move toward a two-model setup for coding tasks:
    • Open two interfaces in parallel (e.g., Codex/OpenAI and Claude or Gemini variants).
    • Keep them in separate windows; don’t funnel everything through one wrapper.
  • Benefits:
    • Closer to the metal; fewer hidden abstractions.
    • Lower risk of “surprises” from a single model’s update.
    • Faster detection of edge cases and different strengths.
  • Practical tip: in your UI, have a simple left-side backlog and run the two models side by side, auto-iterating unless you explicitly override it.

Quick actionable steps

  • Start a monthly promo experiment: sign up for a model you don’t regularly use and push a full sprint through it.
  • After every completed sprint, run a 15–30 minute refactor-walkthrough with AI to generate a prioritized list of refactors.
  • Build a two-model studio: keep two distinct model sessions open (e.g., OpenAI and Claude/Gemini) and compare outputs on critical tasks.
  • Map primitives when refactoring across languages to keep changes safe and understandable.
  • For non-coding tasks (like video), test Gemini’s Flow and related tools to see if they fit your workflow.

If you have questions, drop them below—great for Q&A videos. Like, subscribe, and I’ll see you in the next quick update. Peace.