Hari Rajashekar
(00)
Case study · 01

Channel Fusion

How I sourced $587K in pipeline for a channel marketing agency in 90 days.

Role: Solo operator · Timeline: 90 days · Stack: Instantly, n8n, Apify, Claude API, Google Sheets
$587K
Pipeline sourced
14
Enterprise demos booked
90d
Start to first demo
2,387
Cold emails sent
(01)
TL;DR

I ran cold outbound for a channel marketing agency called Channel Fusion. In 90 days, we booked 14 enterprise demos and sourced $587K in pipeline. The interesting part: the thing that actually worked was the opposite of what most AI outbound playbooks tell you to do.

(02)

The problem.

Channel Fusion is a channel marketing agency. Their sales motion was inbound and referrals, which kept the lights on but capped growth. They wanted to add outbound without hiring an SDR team.

Cold outbound has a bad reputation for a reason. Most of it is terrible. We had 90 days to prove it could work without being spammy.

(03)

Constraints.

No existing outbound infrastructure. Small budget. I was building this solo while also building SpartanFlow. The client's reputation mattered more than short-term volume, so deliverability had to stay clean.

Three hard limits:

  • 01  Budget cap on tooling, roughly $400 / month.
  • 02  Emails couldn't sound like templates.
  • 03  Couldn't end up on blacklists.
(04)

What I tried first.

The obvious move was to let GPT write the emails. Give it a lead, give it a persona, give it some product context, and let it generate a personalized email. I built that in n8n in a weekend. It worked — until we actually sent them.

Reply rates were bad. Open rates were fine. Bounces were fine. But the replies we did get were mostly "please remove me." When I read the emails back, I could tell. They were technically personalized, but they sounded like AI. Everyone's inbox gets 50 of these a day now. Recipients can spot them immediately.

(05)

What actually worked.

I rebuilt the whole thing on a boring premise: humans still write better cold emails than LLMs. The LLM's job got downgraded to filling in 2 or 3 specific variables per lead: a casual icebreaker line, the company name stripped of any formal suffixes, and sometimes the city.

Everything else was hand-written templates with deliberate imperfections. Lowercase subject lines. Short sentences. Comma splices where a human would actually pause.

Before · LLM-written · <1% reply
"Hi Sarah, I hope this email finds you well. I came across Channel Fusion and was impressed by your innovative approach to managed IT services. I'd love to explore how we might..."

After · human template + vars · 4–5% reply
"sarah — saw {{company}} expanded into {{city}} last quarter. quick question: are you handling helpdesk in-house or contracting it out? have a weird idea that might save you a person."

The variation engine picked which human-written template to use. It didn't generate new ones. That alone changed reply rates. We went from under 1% to 4–5% on good weeks.

variation-engine.ts · simplified
// pick a template, not a message. LLM only fills variables.
const template = pickTemplate(lead.vertical, lead.seniority);

const vars = await claude.fill({
  template: template.body,
  need: ['icebreaker', 'company_short', 'city'],
  lead,
});

return render(template, vars); // no free-form generation
Fig. 01 · Outbound pipeline diagram: Lead → Enrichment → Template pick → Variable fill → Send. Each lead picks 1 of 12 human-written templates; the Claude API only fills 2–3 variables. No free-form generation.
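Under the hood, template selection can be as simple as a lookup keyed on vertical and seniority. Here's a minimal sketch of the two helpers the snippet above calls — `pickTemplate` and `render` are hypothetical reconstructions, and the template bank and keys are illustrative, not the actual twelve templates:

```typescript
// Hypothetical sketch of pickTemplate/render. Template bodies and the
// vertical:seniority keys are illustrative, not the production set.

interface Template {
  id: string;
  body: string; // human-written, with {{var}} placeholders
}

// A small bank of human-written templates, keyed by vertical + seniority.
const templates: Record<string, Template> = {
  "msp:director": {
    id: "msp-director-01",
    body: "{{icebreaker}} quick question: are you handling helpdesk in-house?",
  },
  default: {
    id: "generic-01",
    body: "{{icebreaker}} curious how {{company_short}} thinks about outbound.",
  },
};

// Pick one of the human-written templates; never generate a new one.
export function pickTemplate(vertical: string, seniority: string): Template {
  return templates[`${vertical}:${seniority}`] ?? templates["default"];
}

// Fill {{var}} placeholders. Unknown placeholders are left intact, so a
// bad LLM response can't silently produce a broken email.
export function render(
  template: Template,
  vars: Record<string, string>,
): string {
  return template.body.replace(/\{\{(\w+)\}\}/g, (m, key) => vars[key] ?? m);
}
```

The design point is the fallback behavior: because `render` never invents text and leaves unfilled placeholders visible, a failed variable fill is caught in review instead of landing in a prospect's inbox.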

The second thing that worked was the sequence structure. I ran three parallel sequences with different angles:

  • A  Referral ceiling framing. You've hit the top of what referrals can do.
  • B  Performance guarantee. Pay only when it works.
  • C  Give-first. I sent a useful report before asking for anything.

Sequence C outperformed the others on open rate by a wide margin. Turns out leading with something useful works better than leading with an offer. Obvious in hindsight.

(06)

The numbers.

In 90 days, we sent 2,387 cold emails across the three sequences. Booked 14 demos, most of them enterprise. Pipeline from those demos: $587K. The best-performing sequence had a 4.2% reply rate, well above industry average for cold outbound in the agency space.

Reply rate by sequence (industry avg ~1%):
SEQ A · Referral ceiling · 3.1%
SEQ B · Guarantee · 2.6%
SEQ C · Give-first · 4.2%
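A quick back-of-envelope check on the funnel, using only the numbers reported above (these are derived figures, not separately tracked metrics):

```typescript
// Funnel math from the case-study numbers: 2,387 emails sent,
// 14 demos booked, $587K pipeline sourced.
const emailsSent = 2_387;
const demosBooked = 14;
const pipelineUsd = 587_000;

// Email → demo conversion: 14 / 2,387 ≈ 0.59%
const demoRate = demosBooked / emailsSent;

// Average pipeline per demo: $587K / 14 ≈ $41.9K
const pipelinePerDemo = pipelineUsd / demosBooked;

console.log(`demo rate: ${(demoRate * 100).toFixed(2)}%`); // demo rate: 0.59%
console.log(`pipeline/demo: $${Math.round(pipelinePerDemo)}`); // pipeline/demo: $41929
```

Roughly one demo per 170 emails, at about $42K of pipeline each — which is why keeping deliverability clean mattered more than raw volume.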
(07)

What I'd do differently.

  • 01  Start with Sequence C (give-first) earlier. I spent the first 3 weeks on the offer-led sequences.
  • 02  Build the variation engine before the first send. I wrote it during week 5 and wished I had it from day one.
  • 03  Talk to the client's existing customers first. The language in the emails improved massively once I interviewed two of their current accounts about why they chose Channel Fusion.