
How We Codified Bilal's Writing Voice and Made 31 Agents Speak It



Key Takeaway

We used the Brand Voice Extractor to analyze six months of my content, produce a quantified voice profile in JSON, and feed it to all 31 agents. The result: 80% of readers couldn't distinguish agent-written content from mine.

The Problem

I have a voice. Short sentences. Data first. Provocative framing. Cut the fluff.

When I write, people engage. When my agents wrote, people scrolled past. Same topics. Same platforms. Different energy. The content was technically correct but emotionally flat. It read like a smart intern summarizing my thoughts instead of me actually thinking.

The standard fix: write a brand voice guide. "Be conversational but professional. Use active voice. Avoid jargon." Useless. Every brand voice guide says the same nothing. LLMs trained on those guides produce the same nothing.

I needed something quantified. Not "be punchy." How punchy? What's my average sentence length? What's my adjective-to-noun ratio? Which transition words do I actually use vs. the ones I think I use?

The Solution

Mr.Chief's Brand Voice Extractor. It reads actual content β€” not guidelines about content β€” and produces a machine-readable voice profile with quantified style markers.

We fed it 6 months of my LinkedIn posts, published articles, and Telegram messages to my team. It extracted patterns I didn't even know I had. Then we injected that profile into every content-producing agent's system prompt.

The Process (with code/config snippets)

Step 1: Corpus assembly. We pulled content from three sources:

```yaml
voice_extraction:
  sources:
    - type: linkedin_posts
      count: 87
      period: "2024-06 to 2024-12"
    - type: published_articles
      count: 12
      period: "2024-06 to 2024-12"
    - type: telegram_messages
      count: 340
      period: "2024-09 to 2024-12"
      filter: "messages > 50 words"  # Skip one-liners
  total_words: "~62,000"
```
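The article doesn't show the actual loader, but corpus assembly with the word-count filter above can be sketched like this (paths, field names, and the `load_texts` helper are hypothetical):

```python
# Minimal corpus-assembly sketch. All file formats and field names here
# are assumptions, not Mr.Chief's actual pipeline.
import json

MIN_WORDS = 50  # skip one-liner Telegram messages, per the config above

def load_texts(path):
    """Load a JSON export shaped like [{"text": "..."}, ...]."""
    with open(path, encoding="utf-8") as f:
        return [rec["text"] for rec in json.load(f)]

def assemble_corpus(sources):
    """sources: list of (path, apply_word_filter) pairs."""
    corpus = []
    for path, apply_filter in sources:
        for text in load_texts(path):
            if apply_filter and len(text.split()) <= MIN_WORDS:
                continue  # drop short messages
            corpus.append(text)
    return corpus
```

The filter only applies to Telegram messages; posts and articles go in whole.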

Step 2: The extractor analyzes across multiple dimensions. Here's a simplified version of the output:

```json
{
  "voice_profile": "bilal_pyratz_v3",
  "tone": {
    "primary": "provocative-analytical",
    "secondary": "direct-impatient",
    "avoids": ["corporate-safe", "hedge-heavy", "qualifier-laden"]
  },
  "sentence_structure": {
    "avg_length": 11.3,
    "short_ratio": 0.42,
    "fragment_usage": "frequent",
    "opener_patterns": ["statement-of-fact", "contrarian-claim", "number-first"]
  },
  "vocabulary": {
    "level": "technical-accessible",
    "jargon_density": 0.08,
    "power_words": ["ruthless", "broken", "actually", "zero", "real"],
    "banned_words": ["synergy", "leverage", "holistic", "robust", "scalable"],
    "contraction_rate": 0.73
  },
  "formatting": {
    "uses_lists": true,
    "uses_bold": "heavily",
    "uses_numbers": "in-every-post",
    "paragraph_length": "1-3 sentences",
    "uses_headers": "sparingly"
  },
  "rhetoric": {
    "comparison_frequency": "high",
    "uses_before_after": true,
    "ends_with": ["provocative-question", "one-liner-summary", "call-to-action"],
    "opens_with": ["contrarian-statement", "data-point", "problem-statement"]
  }
}
```
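The extractor's internals aren't public, but a few of these markers are straightforward counting. Here's my own rough sketch of how numbers like average sentence length or banned-word occurrences could be computed (the heuristics are approximations, not the actual extractor):

```python
# Rough style-marker counting. These heuristics are my approximation of
# what a voice extractor measures; the real tool's rules are not shown.
import re

BANNED = {"synergy", "leverage", "holistic", "robust", "scalable"}

def style_markers(text):
    # Split on terminal punctuation; fragments count as "sentences" too.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_length": sum(lengths) / len(lengths),
        # True fragment detection needs a parser; very short sentences
        # are a crude stand-in here.
        "short_ratio": sum(1 for n in lengths if n <= 4) / len(lengths),
        "banned_word_count": sum(1 for w in words if w in BANNED),
        "contractions": sum(1 for w in words if "'" in w),
    }
```

Run it over the whole corpus and average, and you get exactly the kind of numbers the profile above records.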

Step 3: This profile gets injected into agent system prompts:

```yaml
# Agent content config
agents_using_voice:
  - peiy    # LinkedIn content
  - bill    # Blog writing
  - jack    # Design copy
  - warren  # Investor comms (modified: more formal)
voice_profile: "bilal_pyratz_v3"
voice_enforcement: strict  # Reject outputs that score < 0.7 on voice match
```
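The injection plus strict enforcement can be sketched as a simple reject-and-retry loop. `generate` and `score_voice_match` are stand-ins for Mr.Chief internals the article doesn't show:

```python
# Sketch of profile injection and strict voice enforcement.
# generate() and score_voice_match() are hypothetical stand-ins.
import json

def build_system_prompt(base_prompt, voice_profile):
    """Append the quantified voice profile to an agent's system prompt."""
    return (
        base_prompt
        + "\n\nWrite strictly in this voice profile (JSON):\n"
        + json.dumps(voice_profile, indent=2)
    )

def enforce_voice(generate, score_voice_match, threshold=0.7, max_retries=3):
    """Regenerate until a draft clears the voice-match threshold."""
    for _ in range(max_retries):
        draft = generate()
        if score_voice_match(draft) >= threshold:
            return draft
    return None  # fell through: escalate to a human editor
```

The 0.7 threshold mirrors the `voice_enforcement: strict` setting above.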

Step 4: Validation. We ran a blind test with 15 people who follow my LinkedIn:

Test: 10 posts: 5 written by me, 5 by agents with the voice profile
Judges: 15 LinkedIn followers (mix of investors, founders, operators)
Method: "Which were written by Bilal vs. a ghostwriter?"
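Scoring the blind test is just counting correct guesses (the judge data below is illustrative, not the actual 15-judge dataset):

```python
# Blind-test scoring sketch with made-up labels, not the real results.
def blind_test_accuracy(truth, guesses):
    """truth/guesses: parallel lists of 'human' or 'agent' labels."""
    correct = sum(t == g for t, g in zip(truth, guesses))
    return correct / len(truth)
```

Accuracy near 0.5 means judges are guessing, i.e. the agent content is indistinguishable.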

The Results

| Metric | Before Voice Profile | After Voice Profile |
|---|---|---|
| Avg sentence length (agent) | 18.7 words | 12.1 words |
| Fragment usage | 0.08 | 0.38 |
| Banned word occurrences | 3.2/post | 0.1/post |
| Blind test accuracy | 90% could tell | 20% could tell |
| LinkedIn engagement (agent posts) | 45 avg likes | 127 avg likes |
| Content production speed | 3 posts/week | 15 posts/week |
| Time Bilal spends editing | 20 min/post | 3 min/post |
| Voice match score | 0.41 | 0.83 |

The biggest discovery: I use sentence fragments 42% of the time. "Not this." "Zero." "Every single time." No brand voice guide would have captured that. The extractor did because it counted.

Try It Yourself

  1. Gather 50+ pieces of your own content (social posts, articles, emails)
  2. Run the Brand Voice Extractor. The more content, the more accurate the profile.
  3. Review the output. You'll learn things about your own writing you didn't know.
  4. Inject the JSON profile into your content agents' system prompts
  5. Run a blind test after 2 weeks. Measure, don't assume.

Your voice isn't what you think it is. It's what you actually write. Measure the difference.


Want results like these?

Start free with your own AI team. No credit card required.
