Parker Rex Daily · June 1, 2025

I Created an AI Clone of Myself and the Results Were CREEPY

Parker Rex tests an AI clone to automate daily YouTube uploads—can four videos a day work, and would the results be creepy or convincing?

Show Notes

Parker tests the limits of automation by trying to fully automate the Daily Upload channel with AI clones, from intro hooks to voice avatars, and ends up highlighting what actually works and what’s still creepy or unfinished.

Can you fully automate this channel?

  • Four videos a day vs. one: doable in theory, but quality hinges on a robust data pipeline and human-in-the-loop checks.
  • The core question: would the AI content feel “like Parker” enough to publish without friction? Realistically, not yet; it’s a work in progress.

Template ideas to boost production quality

  • Strong opening: a short, punchy 4-second hook with a memorable visual or sound cue.
  • Storyboard approach: scene 1 (hook), scene 2 (core content), scene 3 (transition), scene 4 (outro/CTA).
  • Audio branding: a simple jingle or tone for transitions to improve recall.
  • Multi-tone pacing: use DAW techniques (Ableton-style sound design) to craft distinct audio cues for different scene changes.
  • Voice/avatar tests: compare AI-generated voice and avatar against actual footage to identify gaps.

The data pipeline and tech stack (conceptual)

  • Data sources:
    • GitHub generative AI marketing repository for news-like prompts and automation ideas.
    • Bright Data proxies for controlled scraping of social feeds.
  • Content flow:
    • News/topics are scraped and indexed, then summarized and scripted.
    • A cron/timer schedules daily releases (e.g., 8:00 a.m. publish).
  • Platform and tooling:
    • GCP for hosting and orchestration.
    • A search/vector layer to surface relevant topics for each video.
  • Goal: automate topic curation and scripting while keeping production quality high enough to publish.
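The content flow above can be sketched as a simple pipeline. This is a minimal illustration only; the function names, data shapes, and stages are assumptions for clarity, not Parker's actual implementation.

```python
"""Minimal sketch of the daily content pipeline: scrape -> summarize ->
script -> schedule. All names here are illustrative assumptions."""

from dataclasses import dataclass


@dataclass
class Topic:
    title: str
    source: str


def fetch_topics() -> list[Topic]:
    # Placeholder for scraping/indexing (e.g., social feeds via proxies).
    return [Topic("AI avatar tools compared", "scraped-feed")]


def summarize(topic: Topic) -> str:
    # Placeholder for an LLM summarization step.
    return f"Summary of: {topic.title}"


def write_script(summary: str) -> str:
    # Turn a summary into a storyboard-shaped script:
    # hook -> core content -> transition -> outro/CTA.
    return f"HOOK\n{summary}\nTRANSITION\nOUTRO/CTA"


def build_daily_script() -> str:
    topics = fetch_topics()
    return write_script(summarize(topics[0]))


if __name__ == "__main__":
    # A scheduler (e.g., cron entry `0 8 * * *` for the 8:00 a.m. publish)
    # would run this daily and pass the result to video assembly.
    print(build_daily_script())
```

In a real build, each placeholder becomes an integration point (scraper, LLM API, video assembler), with a human-in-the-loop review before the scheduled publish.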

Descript AI video test: avatars, voices, and results

  • What was tested:
    • Descript’s AI video maker to generate a video from a script.
    • Creating an AI avatar from a photo and training a synthetic voice.
    • Narration and scene transitions driven by AI-generated visuals.
  • The outcome:
    • The AI avatar and voice can produce a video, but the result feels uncanny or “creepy” and not ready for prime time.
    • Real-time editing and iterative improvements are still needed to reach Parker’s standard.
  • Key takeaway: avatar/voice cloning tech is advancing, but quality and naturalness still require substantial tuning and data.

Practical takeaways and caveats

  • Tooling quality vs. speed:
    • Building your own blend of tooling (data ingestion, scripting, video assembly) gives you more control, but it's a lot of work.
  • Data and training needs:
    • To capture nuanced inflections and pacing, you’ll need a lot of video data and careful polishing.
  • Privacy and ethics:
    • Voice cloning has privacy implications; use your own data and be transparent about AI-generated content.
  • When to push forward:
    • Use a human-in-the-loop for QA, especially for the avatar/voice outputs.
    • Start with a skeleton video and iterate on the intro, tone, and visuals before aiming for a full automation pipeline.

Actionable takeaways

  • Start with a solid hook template and a small audio branding cue to make each video identifiable.
  • Build a minimal viable data pipeline: sources → summarize → script → storyboard → single-video prototype.
  • Test AI avatars and voices on short clips, compare to real footage, and document improvements needed.
  • Evaluate privacy and ethical considerations early; avoid over-reliance on cloning until the quality and safeguards are solid.