Show Notes
Parker lays out a hands-on plan for building an AI-powered growth engine: a lean content system that runs from long-form pillars down to social snippets, built with a live, test-first approach.
Tools and approaches I'm evaluating
- Promptmetheus: a centralized prompt management and testing hub to save, test, and compare prompts across models (a prompt-store sketch follows this list).
- MCP (Model Context Protocol) from Anthropic: a gateway between LLMs and external tools, enabling real-time tool integration (e.g., Blender) and cross-application workflows.
- RSS-to-LLM pipeline: AI news feeds filtered by keyword and turned into LLM-friendly outputs for automation.
- Content system backbone: a repeatable workflow tying long-form content to short-form assets across platforms, tracked in sheets and project tools.
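To make the prompt-hub idea concrete, here is a minimal sketch of what a save/test/compare layer could look like in Python. The PromptStore class, its JSON file, and the compare() helper are illustrative assumptions, not Promptmetheus' actual API.

```python
# Hedged sketch of a prompt-management layer: save versioned prompts, then
# compare the same prompt across several model callables. Names are hypothetical.
import json
from dataclasses import dataclass, field, asdict
from pathlib import Path
from typing import Callable

@dataclass
class PromptVersion:
    name: str          # e.g. "article-summarizer"
    version: int
    role: str          # "user" vs. "system"/"developer" framing
    text: str
    tags: list[str] = field(default_factory=list)

class PromptStore:
    def __init__(self, path: str = "prompts.json"):
        self.path = Path(path)
        self.prompts = []
        if self.path.exists():
            self.prompts = [PromptVersion(**p) for p in json.loads(self.path.read_text())]

    def save(self, prompt: PromptVersion) -> None:
        """Append a new prompt version and persist the whole store to disk."""
        self.prompts.append(prompt)
        self.path.write_text(json.dumps([asdict(p) for p in self.prompts], indent=2))

    def latest(self, name: str) -> PromptVersion:
        """Return the highest-numbered version of a named prompt."""
        return max((p for p in self.prompts if p.name == name), key=lambda p: p.version)

def compare(prompt: PromptVersion, models: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Run the same prompt text through several model callables and collect outputs side by side."""
    return {model_name: call(prompt.text) for model_name, call in models.items()}
```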
Content funnel blueprint (pillar to social)
- Pillar content: long-form piece (video/article) as the basis.
- Transcription and enrichment: AI transcription, captions, summaries.
- Distribution stack: publish on YouTube, then create short-form clips and posts for X/LinkedIn/Blog.
- Creative assets: use Canva bulk creation for social visuals; generate image prompts via a meta-prompt system.
- Audio/voice: ElevenLabs-style voice output for podcast-like formats.
- Project management: track status in Google Sheets/ClickUp; define “ready to post” vs. “human in the loop” (a tracking-row sketch follows this list).
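As a rough illustration of the “ready to post” vs. “human in the loop” tracking, here is the kind of row a Sheets/ClickUp sync might carry per asset. The column names, statuses, and FunnelAsset type are assumptions for illustration, not a schema from the episode.

```python
# Hedged sketch of per-asset tracking rows for the content funnel.
from dataclasses import dataclass

PLATFORMS = ["YouTube", "X", "LinkedIn", "Blog"]
STATUSES = ["draft", "human_in_the_loop", "ready_to_post", "published"]

@dataclass
class FunnelAsset:
    pillar_id: str      # which long-form piece this derives from
    platform: str       # one of PLATFORMS
    asset_type: str     # "clip", "post", "visual", "audio"
    status: str         # one of STATUSES
    needs_human: bool   # True => a person approves before posting

def to_sheet_row(asset: FunnelAsset) -> list[str]:
    """Flatten one asset into the row format a Sheets/ClickUp sync could append."""
    return [asset.pillar_id, asset.platform, asset.asset_type,
            asset.status, "yes" if asset.needs_human else "no"]

# Example: one YouTube clip derived from a pillar video, still awaiting approval.
row = to_sheet_row(FunnelAsset("pillar-video-01", "YouTube", "clip",
                               "human_in_the_loop", True))
```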
The RSS-to-LLM pipeline (key steps)
- Source selection: pull five items from RSS feeds using keywords (AI, prompting, GPT, etc.); see the pipeline sketch after this list.
- Data extraction: Firecrawl scrapes pages and returns LLM-friendly data (cleaned, structured).
- Iteration and summarization: an article summarizer processes each item; run iterators over the feed items so each article gets its own output.
- Prompt engineering on outputs: adjust prompts to improve tone, style, and usefulness; move examples into a developer/system framing for consistency.
- Output handling: store summaries and caption text in a structured format (e.g., GSheets), with image text separate from caption text.
- Filtering and quality control: prototype a classifier (via ChatGPT) to blacklist topics (e.g., celebrity content) and prioritize high-impact stories.
- Next-gen flow: route outputs through image prompts, then into image generation (Flux/AI image gen), then to Canva bulk creation and finally to posting.
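Here is a hedged sketch of the core flow in Python. feedparser is a real RSS library; scrape_page() and summarize() stand in for the Firecrawl and LLM calls, since the exact APIs, keys, and prompts are assumptions.

```python
# Sketch of RSS -> scraper -> summarizer -> structured rows for Sheets.
import feedparser

KEYWORDS = ["ai", "prompting", "gpt"]

def pick_items(feed_url: str, limit: int = 5) -> list[dict]:
    """Pull entries from an RSS feed and keep the first `limit` keyword matches."""
    feed = feedparser.parse(feed_url)
    matches = [e for e in feed.entries
               if any(k in (e.title + e.get("summary", "")).lower() for k in KEYWORDS)]
    return matches[:limit]

def scrape_page(url: str) -> str:
    """Placeholder for a Firecrawl-style call returning cleaned, LLM-friendly text."""
    raise NotImplementedError("swap in your scraper of choice")

def summarize(article_text: str) -> dict:
    """Placeholder for the article-summarizer prompt; returns structured fields."""
    raise NotImplementedError("swap in your LLM call")

def run(feed_url: str) -> list[list[str]]:
    """Produce one structured row per article, with image text kept separate from caption text."""
    rows = []
    for entry in pick_items(feed_url):
        text = scrape_page(entry.link)
        result = summarize(text)  # expected keys: title, summary, image_text, caption_text
        rows.append([result["title"], result["summary"],
                     result["image_text"], result["caption_text"]])
    return rows
```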
Live experimentation notes (what Parker is tweaking)
- Real-time prompt optimization: switching prompts between user vs. developer/system roles to get clearer outputs; iterating on the article summarizer prompts (a role-framing example follows this list).
- Data organization: splitting image text from caption text, ensuring headers are correctly positioned in Sheets, and separating different asset types.
- Content governance: building a “staff editor” style classifier to pick top stories (e.g., top 3 of 15 every 30 minutes).
- Automation vs. human in the loop: identifying where a human should approve vs. where the system can auto-publish.
- Visuals and branding: experimenting with Canva bulk-create pipelines and Flux-based image prompts to keep visuals on-brand.
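For the role-framing experiment in the first bullet, here is a small sketch of the same summarizer instructions sent once as a user message and once as a system/developer message. The message shape follows the common chat-completions format; the instruction text is an invented example, not the prompt from the episode.

```python
# Sketch of user-role vs. system/developer-role framing for the same instructions.
SUMMARIZER_INSTRUCTIONS = (
    "Summarize the article in 3 sentences, then write one caption line "
    "and one short image-text line. Keep the tone practical, no hype."
)

def as_user_prompt(article_text: str) -> list[dict]:
    # Everything in a single user turn: the instructions compete with the article text.
    return [{"role": "user", "content": f"{SUMMARIZER_INSTRUCTIONS}\n\n{article_text}"}]

def as_system_prompt(article_text: str) -> list[dict]:
    # Instructions pinned as system/developer context; the article stays in the user turn.
    return [{"role": "system", "content": SUMMARIZER_INSTRUCTIONS},
            {"role": "user", "content": article_text}]
```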
Actionable takeaways you can apply now
- Start with Promptmetheus to manage prompts and maintain a reusable prompt data set.
- Build a lean RSS-to-LLM pipeline: RSS -> Firecrawl -> article summarizer -> structured outputs (title, summary, image text, caption text) in Sheets.
- Use a simple classifier to blacklist low-value content and prioritize high-signal topics before you route to automation (a classifier sketch follows this list).
- Separate assets early: keep image text and caption text in distinct fields so you can reuse for multiple platforms without confusion.
- Test one pillar piece across platforms first, then scale to bulk social creation (Canva bulk, Flux image generation) to save time.
- Treat long-form output as your memory; generate meta prompts for image generation and store those prompts for reuse.
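To illustrate the classifier takeaway, here is a minimal sketch of a blacklist-then-rank step. The blacklist terms are examples only, and rank_with_llm() is a placeholder for the actual LLM prompt that scores and picks stories.

```python
# Hedged sketch of the "staff editor" classifier: drop blacklisted topics, then rank the rest.
BLACKLIST = ["celebrity", "gossip", "rumor"]

def passes_blacklist(item: dict) -> bool:
    """Drop items whose title or summary mentions a blacklisted topic."""
    text = (item["title"] + " " + item["summary"]).lower()
    return not any(term in text for term in BLACKLIST)

def rank_with_llm(items: list[dict], top_n: int = 3) -> list[dict]:
    """Placeholder for the LLM prompt that picks the top_n highest-impact stories."""
    raise NotImplementedError("prompt an LLM to score each item, then sort")

def pick_top_stories(items: list[dict], top_n: int = 3) -> list[dict]:
    """Filter, then rank: e.g. top 3 of 15 candidates every 30 minutes."""
    candidates = [i for i in items if passes_blacklist(i)]
    return rank_with_llm(candidates, top_n)
```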
Next steps
- Continue refining the end-to-end content system in small bets, validating each step before scaling.
- Push toward a repeatable MVP: one pillar video, five RSS items, five LLM-friendly outputs, one batch of social visuals, and a closed-loop posting workflow.