Lead Assessment & Rotator

AI-powered lead qualification and intelligent assignment system that ensures every high-quality lead reaches the right team member quickly and fairly.

Overview

This system automatically evaluates, qualifies, and assigns inbound leads from Zendesk to the appropriate team members. It uses AI to assess lead quality, confirm industry or academic usage, and route qualified leads using a rotator-based system that respects agent availability.

Full Zapier Workflow

Full Zapier Workflow diagram

Impact & Metrics

  • Response Time: reduced by ~60%, from hours to minutes for lead response and assignment
  • Assignment Coverage: 100% since launch; zero missed leads, with no leads left unassigned thanks to OOO logic and fallback handling
  • Time Reclaimed: 120-150 hours/year, at roughly 8-10 minutes of automated processing saved per lead
  • Fair Distribution: tracked and verified, with improved fairness and transparency in lead distribution

Lessons Learned

The Lead Gen Assistant tested my patience and persistence more than almost any other automation I've built. The hardest part wasn't the logic; it was getting the AI to reliably interpret messy Zendesk ticket text and match it to the right institution in a massive dataset, all while following strict business rules without deviation.

The Challenge of Inconsistent Interpretation

In the early stages, I was frustrated because the same prompt would work perfectly one run and fail completely the next. The model would sometimes latch onto the wrong part of a ticket comment, for example, treating a user's email signature ("Department of Biology, McGill University") as the requester's organization, even when the actual request came from a Gmail address. I tried adding disclaimers like "ignore signatures and unrelated names," but the model started overcorrecting and skipping valid organizations. It became clear that I couldn't just "prompt my way" out of inconsistent inputs.

Structural Solutions Over Perfect Prompts

Another roadblock was output formatting. I needed the model to return results that Zapier could parse predictably, but GPT would occasionally add bullets, headings, or change label names. I spent hours rewriting the prompt trying to force Markdown consistency, until I realized that switching to numbered key-value pairs (1. Organization:, 2. License Type:) gave the structure enough rigidity for parsing while still keeping it readable. That small change eliminated nearly all format-related misfires.
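
To make that concrete, here is one way those numbered labels could be pulled out in a downstream code step; the actual workflow may rely on Zapier's built-in parsing instead, and the field and variable names here are illustrative rather than the exact production labels.

// Illustrative parser for numbered key-value output, e.g. "1. Organization: McGill University"
const raw = inputData.aiResponse; // the model's reply passed in from the previous step

const fields = {};
raw.split("\n").forEach(line => {
  // Match lines like "2. License Type: Lab License"
  const match = line.match(/^\s*\d+\.\s*([^:]+):\s*(.+?)\s*$/);
  if (match) {
    fields[match[1].trim()] = match[2].trim();
  }
});

// Downstream steps can then reference stable keys even if the model adds extra prose
return {
  organization: fields["Organization"] || "",
  licenseType: fields["License Type"] || ""
};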

Then came the data-handling crisis. I initially passed in Google Sheet rows as comma-delimited text, assuming it would be simple to parse. It wasn't. Institutions like "University of Toronto, Mississauga" broke the logic every time. The model would split those names into multiple entries or drop them entirely. After a long debugging session, I discovered the fix: instead of using delimiters, I could pass the raw JSON row objects directly into the code step. That change made the dataset stable, searchable, and completely eliminated comma-related edge cases.
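
A rough before-and-after sketch of that change; the column names and input fields are illustrative, not the production schema, but the point is that iterating over row objects sidesteps splitting on commas inside institution names.

// BEFORE (fragile): commas inside names break the split, so
// "University of Toronto, Mississauga" turns into two bogus entries
// const institutions = inputData.institutionList.split(",").map(s => s.trim());

// AFTER (stable): pass the sheet rows in as JSON and work with row objects directly
const rows = JSON.parse(inputData.rowsJson);
const detected = (inputData.detectedOrg || "").trim().toLowerCase();
const match = rows.find(row => (row["Institution"] || "").trim().toLowerCase() === detected);

return {
  matched: match ? "true" : "false",
  institution: match ? match["Institution"] : "",
  licenseType: match ? match["LicenseType"] : ""
};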

Choosing the Right Model

I also learned how much model choice matters. I started with GPT-3.5-turbo, but it was too unpredictable for domain parsing. It would confuse "biorender.com" with "biomed.com" or hallucinate fake matches. Upgrading to GPT-4 instantly improved reasoning, but it also required more controlled prompts to prevent verbosity. I ultimately switched to Gemini 2.0 Flash because its larger context window allowed it to follow instructions more consistently and hallucinate less frequently.

Programming an LLM with Absolute Logic

The most critical lesson came when I stopped treating the LLM as a helpful assistant and started treating it as a system that needed to be programmed with absolute, unambiguous logic.

The "Helpful" Inference Problem: When a lead requested 11 seats, the model qualified it, reasoning it was "close enough" to the 13-seat minimum. This demonstrated that the LLM's desire to be helpful can override subtle rules. The solution was replacing suggestions with absolute commands. Phrasing changed from "the minimum is 13" to "ZERO TOLERANCE: This is an ABSOLUTE, NON-NEGOTIABLE THRESHOLD. Any request for 12 seats or fewer is an immediate and irreversible disqualification."

Rule Shopping and Hierarchy Violations: The most frustrating issue was when the model ignored a clear disqualification rule in Step 1 and jumped ahead to Step 3 to misapply a different rule. It was actively looking for a reason to qualify the lead, even if it meant breaking the logical hierarchy. The fix was adding explicit hierarchy enforcement: "If any disqualification condition is met...STOP ALL FURTHER EVALUATION." This built a logical firewall that prevented the model from reading ahead and misapplying rules out of order.

Defining Business Context Explicitly: The model couldn't distinguish between generic terms ("licenses") and specific product terms ("Lab Licenses"). It didn't know my team's internal assumption that "5 licenses" means 5 individual seats, not a 5-seat Lab License product. The breakthrough was creating a "DEFAULT SEAT COUNT RULE" that explicitly stated: "a request for 'X licenses' MUST be interpreted as a request for 'X seats'." By hard-coding these business definitions directly into the prompt, I removed the model's need to guess and prevented critical misinterpretations.
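
The rule itself lives in the prompt, but the same threshold can also be double-checked deterministically once the model's output has been parsed. A minimal sketch, assuming the seat count has already been extracted into a field; the 13-seat minimum comes from the rule above, while the field names are illustrative.

// Back-stop the zero-tolerance rule outside the model: 12 seats or fewer never qualifies
const MIN_SEATS = 13;
const requestedSeats = parseInt(inputData.requestedSeats, 10);

if (Number.isNaN(requestedSeats) || requestedSeats < MIN_SEATS) {
  return {
    qualified: "false",
    reason: `Requested ${inputData.requestedSeats} seats; the minimum is ${MIN_SEATS}.`
  };
}

return {
  qualified: "true",
  reason: `Requested ${requestedSeats} seats, meeting the ${MIN_SEATS}-seat minimum.`
};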

Key Takeaways

In the end, I learned that building reliable AI automations isn't just about good prompts. It's about building guardrails around the model. The system became both consistent and scalable only after I:

  • Prioritized structure over cleverness: rigid output formats and JSON data handling eliminated edge cases
  • Used absolute, non-negotiable language: replaced suggestions with commands to prevent helpful but incorrect inferences
  • Enforced logical hierarchy explicitly: added hard stops to prevent rule shopping
  • Hard-coded business context: defined every assumption and default to eliminate ambiguity
  • Chose the right model for the task: balanced reasoning capability with context window and consistency

Once I stopped trying to make the model perfect and instead focused on containing its variability through structure, validation, and better data formatting, the Lead Gen Assistant became a reliable, production-ready system.

Technical Deep Dive

Round-Robin Assignment Algorithm

The system uses a sophisticated round-robin assignment algorithm that respects agent availability while maintaining fair distribution. It starts from the current rotator position and cycles through available agents, with fallback logic for when the preferred agent is unavailable.

// Round-Robin Assignment with Availability Check
const names = inputData.names.split(",").map(n => n.trim());
const available = inputData.available.split(",").map(a => a.trim().toLowerCase() === "true");
const rotatorNum = parseInt(inputData.next, 10); // 1-based index
const numAgents = names.length;

// Normalize rotator number to 0-based index
const startIndex = (rotatorNum - 1 + numAgents) % numAgents;
let assigned = null;
let assignedIndex = null;
let fallbackUsed = false;

// Attempt to assign starting from rotator index
for (let i = 0; i < numAgents; i++) {
  const currentIndex = (startIndex + i) % numAgents;

  if (available[currentIndex]) {
    assigned = names[currentIndex];
    assignedIndex = currentIndex;
    fallbackUsed = i !== 0; // If not the first choice, it's a fallback
    break;
  }
}

// Build result with detailed reasoning
if (!assigned) {
  return {
    assignee: "NO_ONE_AVAILABLE",
    reason: `No agents available. Tried to assign starting at ${names[startIndex]} (rotator #${rotatorNum}).`,
    newRotatorNumber: rotatorNum // Keep the same for next attempt
  };
}

// Calculate next rotator position (1-based)
const nextRotator = ((assignedIndex + 1) % numAgents) + 1;

return {
  assignee: assigned,
  reason: fallbackUsed 
    ? `Assigned to ${assigned} (fallback from ${names[startIndex]})`
    : `Assigned to ${assigned} (rotator #${rotatorNum})`,
  newRotatorNumber: nextRotator,
  fallbackUsed: fallbackUsed
};
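
As a quick sanity check of the behavior: with names "Ana, Ben, Cal" (placeholder agents), availability "true, false, true", and a rotator value of 2, the step skips Ben, assigns Cal as a fallback, and returns a newRotatorNumber of 1 so the next lead starts with Ana.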

Data Processing & Edge Case Handling

The system handles complex data parsing challenges and prevents AI model inconsistencies through structural solutions rather than relying on perfect prompts. This includes robust institution matching and strict business rule enforcement.

  • JSON Data Handling: raw JSON objects prevent comma-delimited parsing errors with complex institution names
  • Structured Output Format: numbered key-value pairs ensure consistent parsing by Zapier
  • Logical Hierarchy Enforcement: hard stops prevent rule shopping and out-of-order evaluation
  • Business Context Hard-Coding: explicit definitions eliminate model guesswork and misinterpretations