When to write the message yourself, and when to let the AI draft it

A practical decision framework for when AI-personalized outreach beats a hand-written template, and when it absolutely does not.

Every workflow step in Funkel ships with a switch: AI-personalized or manual. The AI mode drafts a unique message per lead from the signal that triggered them. The manual mode sends one template that fills in {FirstName}, {LastName}, and {Company}, and otherwise stays identical.
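
Mechanically, manual mode is a merge-field fill. Here is a minimal sketch, assuming a hypothetical Lead shape and renderTemplate helper rather than Funkel's actual internals:

```ts
// Minimal sketch of manual mode: one template, three merge fields,
// otherwise identical for every lead. Names here are illustrative.
interface Lead {
  firstName: string;
  lastName: string;
  company: string;
}

function renderTemplate(template: string, lead: Lead): string {
  return template
    .replace("{FirstName}", lead.firstName)
    .replace("{LastName}", lead.lastName)
    .replace("{Company}", lead.company);
}

// Every send is this template with three substitutions. Nothing else varies.
const message = renderTemplate(
  "Hi {FirstName}, noticed {Company} is hiring. Worth a quick chat?",
  { firstName: "Dana", lastName: "Reyes", company: "Acme" }
);
```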

Both modes work. They work in different cases, for different reasons, and most teams pick the wrong one because they are reasoning about effort instead of homogeneity.

The real question

It is not “do I trust the AI” or “am I too busy to write.” The right question is: are the people in this campaign being contacted for the same reason?

If the answer is yes, manual wins. You write one message once, you control the voice exactly, you can A/B test it, and you can read every send before it goes. The AI’s per-lead variation buys you nothing because the leads are not really varied.

If the answer is no, AI wins. A campaign that sweeps in leads from recent_job_changes, competitor_pain, and company_followers simultaneously cannot be served by one template. The AI mode writes each message from the actual signal that fired, so the lead reads a sentence that makes sense for them.

How Funkel’s AI mode actually works

We pass Claude six things: the lead’s profile (name, company, headline, and URL); your product description; your ICP; your tone; the signal that triggered them; and the step position in the sequence. The system prompt tells the model to reference something specific and to sound conversational. For invite notes, it asks for exactly one sentence. For follow-ups, two to four.
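
As a concrete sketch of what that request could look like with the Anthropic SDK — the input shape, model choice, and prompt wording below are assumptions, not Funkel's production code:

```ts
import Anthropic from "@anthropic-ai/sdk";

// Hypothetical input shape covering the six things listed above.
interface DraftInput {
  lead: { name: string; company: string; headline: string; profileUrl: string };
  productDescription: string;
  icp: string;
  tone: string;
  signal: string;       // e.g. "recent_job_changes"
  stepPosition: number; // 1-based position in the sequence
}

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function draftMessage(input: DraftInput): Promise<string> {
  const isInviteNote = input.stepPosition === 1;

  // System prompt: reference something specific, sound conversational,
  // one sentence for invite notes, two to four for follow-ups.
  const system = [
    "You write LinkedIn outreach. Reference something specific to this lead.",
    "Sound conversational, not salesy.",
    isInviteNote
      ? "This is a connection invite note: exactly one sentence."
      : "This is a follow-up: two to four sentences.",
  ].join(" ");

  const response = await client.messages.create({
    model: "claude-sonnet-4-5", // model choice is an assumption
    max_tokens: 300,
    system,
    messages: [
      {
        role: "user",
        content:
          `Lead: ${input.lead.name}, ${input.lead.headline} at ${input.lead.company} ` +
          `(${input.lead.profileUrl})\n` +
          `Product: ${input.productDescription}\n` +
          `ICP: ${input.icp}\n` +
          `Tone: ${input.tone}\n` +
          `Trigger signal: ${input.signal}\n` +
          `Sequence step: ${input.stepPosition}`,
      },
    ],
  });

  const first = response.content[0];
  return first.type === "text" ? first.text : "";
}
```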

The prompts are sequence-aware. Step 1 gets prompted to introduce; step 2 references the prior message; step 3 is a soft check-in. We do not generate each step blind, which is the failure mode of most other tools that bolted AI onto a sequence builder.
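
The per-step framing might look like this. The wording is invented for illustration, not Funkel's actual prompts:

```ts
// Sketch of sequence-aware instructions, one framing per step position.
function stepInstruction(step: number, priorMessage?: string): string {
  if (step === 1) {
    return "Introduce yourself in one sentence, anchored on the trigger signal.";
  }
  if (step === 2 && priorMessage) {
    return `Reference the prior message ("${priorMessage}") and give one concrete reason to reply.`;
  }
  return "Write a soft check-in: brief, low pressure, no new pitch.";
}
```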

When manual still wins

  • You are running a campaign against a single tight segment, like “Heads of Demand Gen at Series-B dev-tools companies who changed jobs in the last 30 days,” and you want to A/B test two specific value props.
  • You have an unusual offer (a paid trial, a custom workshop, a co-marketing pitch) that the AI is not going to land cleanly because it has not seen one before.
  • Compliance, legal, or brand reasons require every send to use exact wording. AI mode will get it close, never identical.

When AI wins

  • The campaign covers leads from multiple signals; your opener cannot be the same for every signal.
  • You are running follow-ups across a large mixed cohort and do not want to write ten templates for ten cases.
  • You are about to copy-paste the same two-sentence opener with one variable changed; the AI does this better than you, faster.

The mixed pattern

The pattern that works for most of our customers: AI mode for step 1 (the connection note, where the signal-specific opener earns the connection), and manual mode for step 2 (where the buyer is already connected and you want to drop a clean offer in your own voice). Step 3, if you run one, AI again for the same reason as step 1.

Step modes are independent. You can mix freely. The cost of switching one step from manual to AI is one toggle and a save.
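
In config terms, a mixed sequence could look something like this (the shape and field names are hypothetical):

```ts
// Hypothetical shape for a sequence with independent per-step modes.
type StepMode = "ai" | "manual";

interface SequenceStep {
  position: number;
  mode: StepMode;
  template?: string; // only read when mode === "manual"
}

const mixedSequence: SequenceStep[] = [
  { position: 1, mode: "ai" }, // signal-specific connection note
  {
    position: 2,
    mode: "manual", // your offer, in your exact words
    template: "Thanks for connecting, {FirstName}. Quick one: ...",
  },
  { position: 3, mode: "ai" }, // soft check-in, varied per lead
];
```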

Review mode is the safety valve

If you are not sure the AI is going to nail your tone, flip on review mode in the campaign settings. Drafts queue up; you skim and approve before they send. Six approvals in, you will know whether the model has your voice. Most teams turn review mode off after a couple of days.
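
Conceptually, review mode is a gate between drafting and sending. A sketch, with invented names:

```ts
// Sketch of review mode: drafts queue up instead of sending directly.
interface QueuedDraft {
  lead: string;
  text: string;
  approved: boolean;
}

const reviewQueue: QueuedDraft[] = [];

function queueDraft(lead: string, text: string): void {
  reviewQueue.push({ lead, text, approved: false });
}

// Nothing leaves the queue until you flip approved to true.
function flushApproved(send: (draft: QueuedDraft) => void): void {
  for (const draft of reviewQueue.filter((d) => d.approved)) {
    send(draft);
  }
}
```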

The decision is not AI versus manual. The decision is homogeneity versus variety. Pick the mode that matches your campaign, not your gut about robots.
