AI Signals detect specific conditions in a call and deliver a formatted notification to a Slack channel, DM, or email. They are conditional and targeted — unlike summaries, they only fire when something specific happens.
Smart Clips use the same prompt-based detection logic as AI Signals to automatically extract specific moments from calls as short video clips. If you know how to write a Signal trigger prompt, you already know how to configure a Smart Clip — the prompting approach is identical. The difference is the output: instead of a Slack notification, Momentum surfaces a clipped moment from the call.
Smart Tags extract structured values from calls and persist them on the call record — enabling filtering, automation, and trend analysis. Covered at the end of this page.
How Signals work: two-stage architecture
| Stage | What it does |
|---|---|
| Stage 1 — Trigger Prompt | Reads the transcript. Outputs: TRUE or FALSE. Nothing else. If TRUE: signal fires and Stage 2 runs. If FALSE: nothing happens. |
| Stage 2 — Follow-Up Prompt(s) | Reads the transcript (signal already confirmed TRUE). Extracts and formats the relevant content. Multiple follow-ups can chain — each creates a block in the notification. Common pattern: (1) summary, (2) verbatim quote, (3) context or rep response. |
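The two-stage flow above can be sketched in code. This is a hypothetical illustration, not Momentum's actual implementation: `run_llm` is a stand-in for whatever model call evaluates a prompt against the transcript, and `evaluate_signal` and `fake_llm` are names invented for this sketch.

```python
def evaluate_signal(transcript, trigger_prompt, followup_prompts, run_llm):
    """Two-stage evaluation: Stage 1 gates Stage 2.

    `run_llm(prompt, transcript)` is a placeholder for the model call.
    """
    # Stage 1: binary detection. Anything other than TRUE means no signal.
    verdict = run_llm(trigger_prompt, transcript).strip().upper()
    if verdict != "TRUE":
        return None  # signal did not fire; Stage 2 never runs

    # Stage 2: each follow-up becomes one block in the notification.
    return [
        {"label": label, "text": run_llm(prompt, transcript).strip()}
        for label, prompt in followup_prompts
    ]

# Canned stand-in for the model, for demonstration only:
def fake_llm(prompt, transcript):
    if "TRUE or FALSE" in prompt:
        return "TRUE" if "competitor" in transcript else "FALSE"
    return "Acme mentioned as an alternative."

blocks = evaluate_signal(
    "…the customer named a competitor…",
    "Did the customer mention a competitor? Output TRUE or FALSE only.",
    [("Which competitor", "Which competitor was named?")],
    fake_llm,
)
```

The key property the sketch shows: follow-up prompts never run at all unless the trigger returns exactly TRUE.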
Stage 1: Writing the trigger prompt
The #1 mistake with trigger prompts: writing the trigger as a content-extraction prompt instead of a binary detection prompt.

Wrong: “What competitors did the customer mention during this call?” — This is a follow-up prompt, not a trigger. It will always return something.

Correct: “Did the customer explicitly mention a competing vendor by name or refer to evaluating an alternative solution? Output TRUE or FALSE only.”
Trigger prompt template
ROLE
You are a signal detection system monitoring a B2B sales call transcript.
CONTENT
Full call transcript provided below.
GOAL
Detect whether [specific condition] occurred during this call.
RULES
- Output TRUE if: [precise definition of what counts]
- Output FALSE if: [what does NOT count — be specific]
- Do not explain your reasoning
- Do not qualify or hedge
OUTPUT FORMAT
Output TRUE or FALSE only. No other text.
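If you maintain several signals, the five-part template can be filled in programmatically. A minimal sketch, assuming a plain format string; the helper and field names here are invented for illustration and are not part of Momentum:

```python
# Illustrative assembly of the five-part trigger template from its pieces.
TEMPLATE = """ROLE
You are a signal detection system monitoring a B2B sales call transcript.
CONTENT
Full call transcript provided below.
GOAL
Detect whether {condition} occurred during this call.
RULES
- Output TRUE if: {true_rule}
- Output FALSE if: {false_rule}
- Do not explain your reasoning
- Do not qualify or hedge
OUTPUT FORMAT
Output TRUE or FALSE only. No other text."""

prompt = TEMPLATE.format(
    condition="an explicit competitor mention",
    true_rule="the customer names a competing vendor",
    false_rule="only the rep names a competitor, or the mention is hypothetical",
)
```

Keeping the RULES placeholders separate makes it harder to forget the FALSE definition, which is the part most often omitted.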
When to simplify: natural language trigger prompts. The five-part template above is the gold standard for complex detection logic. But for straightforward conditions, a well-written natural language prompt works just as well — and is often easier to maintain.

Use natural language when the condition is clear-cut and doesn’t require multi-step reasoning. Use the full template when you need tight control over edge cases, silence behavior, or ambiguous scenarios.

Example — a production-tested natural language trigger:

“Did the rep fail to clearly propose specific next steps before the end of the meeting, or did the customer not explicitly agree to those next steps if clearly proposed by the rep? Answer True if either: (1) the rep did not propose clear next steps (e.g., scheduling another call or sending follow-up materials), OR (2) the rep proposed next steps but the customer did not explicitly agree (e.g., by confirming, expressing alignment, or committing to a follow-up).”

Notice: no ROLE, no OUTPUT FORMAT header — just a clear question with precise TRUE conditions defined inline. That’s enough for Momentum to evaluate it correctly.
Four techniques for more accurate triggers
- Define “Return false when” explicitly. Defining what does NOT count is just as important as defining what does. Without it, the AI will fire on weak or tangential signals. Include a clear “Return false when” block for any trigger where the boundary between true and false isn’t obvious.
- Use markdown structure for complex multi-condition triggers. When a signal has multiple independent conditions that can each independently fire TRUE, use headers and grouped examples to organize them. This makes the logic clearer to the AI and easier to maintain.
- Scope the source explicitly. Specify whose statements count — customer/prospect only, not the rep. Unprompted buyer mentions are not the same as rep-led mentions. If it matters, say so.
- Add a multi-instance rule when needed. For signals that could appear multiple times in a conversation (e.g., competitor comparisons), tell the AI not to stop evaluating after the first match: “Continue analyzing the entire conversation to identify all instances.”
Stage 2: Writing follow-up prompts
Follow-up prompts run only when the trigger fires TRUE. Write them in plain natural language — a simple, direct question is all that’s needed. Each follow-up is its own prompt block and appears as a separate line in the Slack notification.
In practice, follow-up prompts look like this:
| Label | Prompt |
|---|---|
| Customer took the lead | Did the customer take the lead in proposing the next step or a follow-up action? |
| Rep did attempt next steps | Did the rep attempt to propose next steps, but the customer declined or deflected? |
| Time constraint | Was there a time constraint or external reason (e.g., “I have to jump to another meeting”) that may have prevented next steps from being discussed? |
| No-commitment pre-agreed | Was the purpose of the meeting clearly established as an exploratory or discovery-only call with no commitment expected at this stage? |
Follow-up prompts: what works. Write them as plain questions — no ROLE, no OUTPUT FORMAT, no structure needed. Each prompt should ask one thing.

Unlike Autopilot CRM fields where silence is the right default, follow-up prompts benefit from an explicit response when nothing is found — it confirms the prompt ran and adds context to the Slack notification. Silence looks like the follow-up didn’t fire.

Example follow-up: “Extract the exact quote where the customer mentioned a competitor. Do not paraphrase. If no clear quote exists, output: No direct quote found.”
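The reason an explicit "not found" response matters can be seen in a sketch of how follow-up outputs might become notification lines. This is purely hypothetical (the `format_notification` helper is invented here, not Momentum's code): each follow-up block maps to one line, so an empty answer is indistinguishable from a prompt that never ran.

```python
def format_notification(signal_name, blocks):
    """Render one line per follow-up block, with a visible fallback for silence."""
    lines = [f"🔔 {signal_name}"]
    for label, text in blocks:
        # Hedge against silence: show a placeholder instead of a blank line.
        lines.append(f"*{label}*: {text.strip() or '(no output returned)'}")
    return "\n".join(lines)

msg = format_notification("Competitor Comparison", [
    ("Which competitor", "Gong"),
    ("Context", "No competitive context found."),
])
```

A follow-up that outputs "No competitive context found." still produces a readable line, where a silent one would need the placeholder.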
Signal examples
The examples below show both prompt styles. Examples 1 and 2 use structured format — appropriate when false positives matter and edge cases need to be spelled out. Example 3 uses plain natural language — appropriate when the condition is clear enough that a single sentence is sufficient.
Example 1 — Deal Risk Signal
A structured trigger that explicitly defines what does and doesn’t qualify — reducing false positives from rep-raised hypotheticals.
[TRIGGER]
Did the customer express concern about something that could prevent or delay
a deal from closing? Output TRUE or FALSE only.
[Qualifying concerns include]
- Budget constraints or approval requirements
- Internal alignment or stakeholder issues
- Procurement, legal, or security requirements
- Active evaluation of competing solutions
- Technical limitations or integration concerns
- Timeline mismatches
[Return FALSE when]
- The concern was raised by the rep, not the customer
- The language is generic and not tied to this deal
- The statement is future-state planning, not a current concern
[FOLLOW-UP 1: What's the risk]
What specific concern did the customer raise? Summarize in one sentence.
If no risk was found, output: No deal risk identified.
[FOLLOW-UP 2: Verbatim quote]
Extract the exact quote from the customer that best captures the concern.
Do not paraphrase. If no clear quote exists, output: No direct quote found.
Example 2 — Competitor Comparison Signal
A structured trigger using ROLE and RULES — with an explicit buyer-only scope and a multi-instance evaluation rule to prevent early exits.
[ROLE]
You are a competitive intelligence analyst reviewing a sales call transcript.
Your job is to determine whether the customer is actively comparing [Company]
to a named alternative.
[GOAL]
Return TRUE if the customer explicitly names a competitor and frames it as an
alternative they are evaluating or have used. Return FALSE in all other cases.
[RULES]
- Return TRUE only if the competitor is named by a buyer — not the rep
- Return TRUE only if the competitor is framed as an active alternative,
not a passing reference
- Return FALSE if the rep names the competitor and the customer does not confirm
- Return FALSE for general market commentary ("we looked at a few options")
- Evaluate ALL competitor mentions before deciding — do not stop at the first
Output TRUE or FALSE only.
[FOLLOW-UP 1: Which competitor]
Which competitor was named? If multiple, list all.
If none found, output: No competitor identified.
[FOLLOW-UP 2: Context]
What did the customer say about the competitor? Summarize in one sentence.
If no clear context, output: No competitive context found.
Example 3 — Simple Natural Language Signal
Not every signal needs a structured prompt. When the condition is clear and unambiguous, a single plain-language question is enough.
Did the customer mention how many people are on their sales or revenue team?
Output TRUE or FALSE only.
Tips for better signals. Keep triggers specific — a trigger that’s too broad will fire on tangential mentions and create noise. The more precisely you define what counts as TRUE, the more useful the signal.

Follow-up prompts are optional — but they add a lot of context. A trigger alone tells you something happened; follow-ups tell you what, how, and who said it.
Smart Tags automatically extract meaningful signals from conversations and turn them into structured, reusable data. Where AI Signals fire a Slack notification when a condition is met, Smart Tags persist the extracted value directly on the call record — making that data available for filtering in the Call Library, triggering downstream Autopilot workflows, and analyzing trends across your full conversation history.
A common example is a Competitors tag: if a prospect says “we’re currently evaluating Gong,” the tag is populated with Gong. That value can then be used to filter calls, attach battlecards automatically, or track competitor mention trends — without any manual tagging by reps.
Smart Tags use simple prompts. Unlike AI Signals, Smart Tags don’t return TRUE/FALSE — they extract and return a value (or list of values). Prompts are short and direct. No ROLE, no elaborate rules structure needed in most cases.
Smart Tag examples
Identify any competitors to our product that were mentioned in the call.
A meaningful competitor is one that offers a solution which might prevent
[Company]'s product from being purchased.
Do not identify tangentially related companies — identify only the names of
companies you can justify as posing a competitive threat to the purchase
or renewal of [Company]'s platform.
Analyze the conversation and decide from the following categories:
What type of call did we just have with this customer?
The first example returns a list of competitor names as structured tag values. The second drives a call-type classification tag — the categories are defined in the tag configuration, and the prompt just asks the AI to choose. In both cases, the output is persistent, structured, and immediately usable across the platform.
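Because Smart Tags return a value or list rather than TRUE/FALSE, downstream use depends on splitting the model's answer into individual tag values. A minimal sketch of that post-processing, assuming (hypothetically) the model returns one value per line or a comma-separated list; Momentum's real parsing may differ, and `parse_tag_values` is a name invented here:

```python
def parse_tag_values(raw: str) -> list[str]:
    """Split a list-style model answer into individual tag values."""
    # Treat empty or "none"-style answers as no tags.
    if raw.strip().lower() in {"", "none", "no competitors identified"}:
        return []
    # Normalize newlines to commas, then split and trim.
    parts = raw.replace("\n", ",").split(",")
    return [p.strip() for p in parts if p.strip()]

parse_tag_values("Gong, Chorus")  # → ["Gong", "Chorus"]
parse_tag_values("None")          # → []
```

Structured values like these are what make the tag filterable in the Call Library and usable as an Autopilot trigger, rather than free text buried in a summary.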