Parker Rex Daily · August 30, 2025

The Future of AI Cybersecurity (or lack thereof) is Frightening

The future of AI cybersecurity: risks, vulnerabilities, and business opportunities from AI coding tools to reverse engineering devices.

Show Notes

A brisk, punchy look at AI-assisted cybersecurity and what a real-world security services business could actually look like—and the red flags to watch.

The security-vs-opportunity tension

  • AI coding assistants are shipping more vulnerabilities into codebases, which makes security testing increasingly valuable.
  • The business angle: finding those vulnerabilities for clients can be lucrative, with fast-moving sales cycles if you’re positioned right.

A practical model: Penetration Testing as a Service (PTaaS)

  • High-ticket, quick-turnaround assessments for mid-market to growth-stage companies.
  • Focus on prioritization and remediation plans rather than just finding issues (a report sketch follows this list).
  • Brand positioning matters: emphasize white-hat, trustworthy security rather than “hacker” vibes.
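
One way to make the prioritization-and-remediation deliverable concrete is a structured findings report. This is a minimal sketch assuming a JSON report; the field names and the example finding are illustrative, not a standard:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Finding is one issue surfaced during an assessment. Severity drives
// prioritization; Remediation and ETAHours are what the client acts on.
type Finding struct {
	ID          string `json:"id"`
	Title       string `json:"title"`
	Severity    string `json:"severity"` // critical, high, medium, low
	Asset       string `json:"asset"`
	Evidence    string `json:"evidence"`
	Remediation string `json:"remediation"`
	ETAHours    int    `json:"eta_hours"` // rough effort estimate for the fix
}

func main() {
	report := []Finding{{
		ID:          "F-001",
		Title:       "API accepts unsigned auth tokens",
		Severity:    "critical",
		Asset:       "api.example.com/auth",
		Evidence:    "Request with an unsigned token returned 200 on /me",
		Remediation: "Pin accepted signing algorithms server-side; reject unsigned tokens",
		ETAHours:    4,
	}}
	out, _ := json.MarshalIndent(report, "", "  ")
	fmt.Println(string(out))
}
```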

Who to sell to and why

  • Ideal customers: companies at a $2–10 million (or higher) annual run rate that can afford and genuinely need security testing.
  • Create a compelling avatar: not the vibe coder; think risk-aware teams who respond quickly to security findings.
  • Sales approach: lead with an assessment and a clear remediation plan; demonstrate quick ROI and measurable risk reduction.

Architecture and tooling sketch

  • Backend: Go for performance and reliability; an orchestration layer to run tests (a rough sketch follows this list).
  • AI layer: Vercel AI SDK (for rapid client-side orchestration); use AI to gather context signals (funding, breaches, industry chatter) for outreach and scoping.
  • Data sources for outreach: LinkedIn data (Sales Navigator, Apify scrapers, PhantomBuster).
  • Outreach signals: funding rounds, industry incidents, recent breaches, etc.
  • Testing toolkit: Wireshark, Charles Proxy; test devices (e.g., iPhone) to probe network paths and APIs.
  • Browser agents: AI browser agents (e.g., Anthropic’s) can act as canaries that surface insights, though capabilities are still evolving.
  • Core mindset: pick the right tool for the job; AI is a tool, not a default must-use solution.
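
Since the notes name Go for the backend, here is a rough sketch of what the orchestration layer could look like: it fans a set of checks out against a target concurrently and collects results for the report. The Check interface, the tlsCheck placeholder, and the timeout are assumptions for illustration; real checks would wrap actual probing and API testing rather than return canned strings.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// Check is one unit of work the orchestrator runs against a target
// (TLS config review, auth probe, exposed-endpoint scan, etc.).
type Check interface {
	Name() string
	Run(ctx context.Context, target string) (string, error)
}

// tlsCheck is a stand-in; a real implementation would inspect the handshake
// with crypto/tls instead of returning a canned result.
type tlsCheck struct{}

func (tlsCheck) Name() string { return "tls-config" }
func (tlsCheck) Run(ctx context.Context, target string) (string, error) {
	return "TLS 1.2+ only, no weak ciphers observed", nil
}

// runAll executes every check concurrently under a shared timeout and
// returns per-check results keyed by check name.
func runAll(target string, checks []Check) map[string]string {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results = make(map[string]string)
	)
	for _, c := range checks {
		wg.Add(1)
		go func(c Check) {
			defer wg.Done()
			out, err := c.Run(ctx, target)
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				results[c.Name()] = "error: " + err.Error()
				return
			}
			results[c.Name()] = out
		}(c)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(runAll("staging.example.com", []Check{tlsCheck{}}))
}
```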

Product/Market/Founder risk framework

  • Product risk: Is there a compelling product with a clear aha moment? Can you deliver measurable value?
  • Market risk: TAM vs SAM; top-down vs bottom-up sizing; price points and volume; customer willingness to buy.
  • Founder risk: Team capabilities (e.g., Golang proficiency); ability to execute end-to-end.
  • The triangle: constantly assess product risk, market risk, and founder risk as you design and test the idea.

Practical experiments and takeaways

  • Run real-world explorations (e.g., hotel-pentesting scenario) to test the concept and reporting flow.
  • Build a testable curriculum of topics the AI should cover during a real engagement; keep each prompt focused (see the sketch after this list).
  • Use clarifying questions upfront and chain prompts to refine plans and outputs.
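
One way to read the "testable curriculum" idea is a small list of engagement topics, each bound to a single focused prompt that can be reviewed and regression-tested on its own. The topics and wording below are illustrative assumptions, not from the episode:

```go
package main

import "fmt"

// Topic pairs one area the engagement must cover with a single focused prompt.
// Keeping each prompt narrow makes the AI's output easy to review and test.
type Topic struct {
	Area   string
	Prompt string
}

var curriculum = []Topic{
	{"recon", "List the publicly reachable hosts and services for <target>; cite the source of each claim."},
	{"auth", "Within the authorized scope, describe how <target>'s login flow should be probed for weak session handling."},
	{"reporting", "Summarize findings as severity, evidence, and a one-line remediation each."},
}

func main() {
	for i, t := range curriculum {
		fmt.Printf("%d. [%s] %s\n", i+1, t.Area, t.Prompt)
	}
}
```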

Safety, ethics, and guardrails

  • AI models tend to be people-pleasers; beware over-promising or enabling misuse.
  • Stay within ethical/legal boundaries; frame everything as white-hat defense with clear guardrails.
  • Use this space as a learning sandbox, not a how-to for offensive abuse.

Tips for moving forward

  • Start with a minimal viable test harness and a few pilot engagements to validate viability.
  • Use browser agents as a signal of where the space is going, but don’t over-rely on them yet.
  • Be pragmatic: if a non-AI approach works better for a given task, use it.

Questions and prompts strategy

  • Prepare clarifying questions before engaging an AI for a scenario.
  • Use a two-step prompt approach: first gather context, then build a concrete plan and adapt with follow-ups (sketched below).
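
A minimal sketch of how that two-step chain could be wired up, with an up-front clarifying-questions pass. askModel is a placeholder standing in for whatever LLM client the stack ends up using (e.g., the Vercel AI SDK on the client or a provider API from the Go backend); the prompts and the hotel scenario are illustrative assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// askModel stands in for a real LLM call (provider API, SDK, etc.).
// It echoes the first line of the prompt so the example runs without credentials.
func askModel(prompt string) string {
	return "[model output for: " + strings.Split(prompt, "\n")[0] + "]"
}

func main() {
	scenario := "Assess the guest Wi-Fi and booking API of a mid-size hotel (authorized engagement)."

	// Step 0: clarifying questions, prepared before the engagement starts.
	questions := askModel(
		"Before planning anything, list the clarifying questions you need answered " +
			"about scope, authorization, and environment for this scenario:\n" + scenario)

	// Step 1: gather context (answers would come back from the client).
	contextSummary := askModel(
		"Given these clarifying questions and the client's answers, summarize the " +
			"context relevant to a white-hat assessment:\n" + questions)

	// Step 2: build a concrete plan from that context, then adapt with follow-ups.
	plan := askModel(
		"Using only this context, produce a prioritized test plan with expected " +
			"evidence and a reporting format:\n" + contextSummary)

	fmt.Println(plan)
}
```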

Final takeaway

  • The AI cybersecurity space is real and potentially scalable, but success hinges on clear target customers, a disciplined risk framework, and choosing the right tools—AI where it adds value, not by default.