Momentum runs all AI prompts post-call — there is no live analysis. Every prompt runs against a completed transcript.

The five-part prompt architecture below is a useful starting point across features — but each feature has its own prompt interface, output rules, and constraints. Think of this page as a shared foundation, not a universal formula.

The biggest mistake new customers make is treating all features identically. Each feature has different output rules. This guide explains exactly where they diverge.
What’s in this guide
Signals, Smart Clips & Smart Tags
Two-stage trigger architecture, follow-up prompts, and structured value extraction.
Autopilot
Field-type rules, silence behavior, and examples for picklist, textarea, and more.
Call Summaries
The multi-lens framework — configure separate summaries for Sales and CS.
Coaching Agent
Writing competency rubrics with observable positive and negative indicators.
Quick Reference
Cheat sheets — structured vs. natural language, silence patterns, and common mistakes.
The Universal Prompt Architecture
Before you write your first prompt
When an AI produces bad output, the instinct is to blame the AI. In practice, the problem is almost always the prompt — not enough context, an ambiguous instruction, or a missing rule for an edge case the author didn’t anticipate. AI forces clarity: the more precisely you can describe what you want, the closer the output will be to what you actually need. If something isn’t working, read your prompt as if you were seeing it for the first time with no context about your company, your Salesforce schema, or what “good” looks like. That’s exactly how the AI reads it.

Reliable Momentum prompts tend to address five things. You don’t need all five every time — a single plain-language question can outperform a structured prompt when the task is simple. But when a prompt is underperforming or producing inconsistent output, the fix almost always comes down to one of these.

| Component | What it defines |
|---|---|
| ROLE | The AI’s function for this specific task. Be precise — not “you are an AI assistant” but “you are a sales intelligence analyst extracting qualification data from a B2B sales call transcript.” |
| CONTENT | What input the prompt receives. Transcript? A filtered set of historical calls? Signal outputs? Making this explicit prevents the AI from hallucinating context that isn’t there. |
| GOAL | The extraction or detection objective, stated precisely. One goal per prompt — if you need two things, write two prompts. |
| RULES | All constraints: what counts, what doesn’t, how to handle missing data, verbatim quote requirements, silence rules, edge cases. This is where most prompts fail — not enough rules. |
| OUTPUT FORMAT | Exact format specification. Field type for Autopilot. TRUE/FALSE for signal triggers. Section structure for summaries. No trailing statements. No preamble. |
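For illustration, here is a minimal prompt sketch that touches all five components. The budget-extraction task and field wording are hypothetical, not taken from any specific Salesforce schema:

```
ROLE: You are a sales intelligence analyst extracting qualification data
from a B2B sales call transcript.
CONTENT: You will receive the full transcript of a completed call.
GOAL: Extract the prospect's stated budget for this purchase.
RULES:
- Only count amounts the prospect explicitly ties to budget for this purchase.
- Do not infer a budget from company size or deal stage.
- If no budget is discussed, output N/A.
OUTPUT FORMAT: A single currency amount (e.g., "$50,000") or N/A.
No preamble. No trailing statements.
```

Note that the RULES block carries most of the weight: the edge cases (inferred budgets, missing data) are spelled out rather than left to the AI's judgment.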
Two prompt styles
In practice, prompts fall into two styles. Both are valid — the right choice depends on task complexity.

| Style | When to use it | Examples |
|---|---|---|
| Structured | Complex extraction, multi-condition logic, picklist classification, any output where edge cases matter | Full ROLE / GOAL / RULES / OUTPUT FORMAT blocks |
| Natural language | Simple, unambiguous detection or extraction where the task is self-evident | Signal: “Did the customer mention how many people are on their sales team? Output TRUE or FALSE only.” Smart Tag: “Identify any competitors mentioned in the call.” Autopilot: “Extract any next steps agreed on this call. Format as a bullet list. If nothing was agreed, return nothing.” |
Handling absent data
Every prompt should define what happens when the expected content isn’t found in the transcript. The only wrong answer is leaving it unspecified — the AI will fill the gap with hallucinated data, a placeholder, or a hedging statement. What you output when nothing is found depends on your use case.

This guidance applies primarily to Autopilot CRM fields. Signal follow-up prompts work differently — see Signals, Smart Clips & Smart Tags.
- ✓ Return nothing: “If no budget information is discussed, return nothing.”
- ✓ Return an explicit value: “If the topic was not discussed, output N/A.” — For AI-specific fields in Salesforce dedicated to Momentum, this is expected and correct. It lets you validate that Autopilot ran and spot missing values for coaching purposes.
- ✗ Say nothing: Prompt doesn’t address the absent-data case at all → AI guesses.
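As a sketch, an Autopilot textarea prompt with its absent-data case spelled out using the explicit-value approach (the field purpose here is hypothetical):

```
Extract any security or compliance requirements the customer raised
on this call. List each requirement as a bullet with a short verbatim quote.
If no security or compliance requirements were discussed, output N/A.
Do not explain, apologize, or describe what was not discussed.
```

The last two lines are the part most prompts omit — and the part that prevents the AI from guessing.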
How to make silence stick
The AI’s default instinct is to say something — even when there’s nothing to say. To override it, you need to be explicit and raise the stakes. A weak rule like “return nothing if not discussed” often isn’t enough. These patterns work:

- Frame silence as the correct choice, not a fallback: “Complete silence is the correct output.”
- Treat any output as an error: “Outputting any text when no qualifying data exists is a critical error.”
- Ban trailing statements by name: “Do not append any statements about what was not found, not discussed, or not mentioned — even when valid data has been extracted. Phrases such as ‘no competitors were mentioned,’ ‘no further information was discussed,’ or ‘no update required’ are strictly forbidden.”
- End with a positive constraint: “Your output must contain only [qualifying data] and nothing else.”
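Combined, these patterns might close out a competitor-detection prompt like this (wording illustrative, not a required formula):

```
If no competitors are mentioned, complete silence is the correct output.
Outputting any text when no competitor is named is a critical error.
Do not append statements such as "no competitors were mentioned" --
even when valid data has been extracted.
Your output must contain only competitor names and nothing else.
```

Stacking all four patterns is usually only necessary when a single rule has already failed in testing.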
Starting templates
Two templates — use whichever fits the task.

Structured format
Use when the task has multiple conditions, edge cases, or a specific output format that must be exact.

Natural language format
Use when the task is simple and unambiguous. Same five ingredients, no explicit labels needed.

Using Salesforce field bindings {FieldName}
Momentum lets you inject an existing Salesforce field value directly into a prompt using {FieldName} bindings — giving the AI context about what’s already captured before it processes the transcript. Bindings serve two distinct purposes:
- Live enrichment — inject the current value of the field being written, compare it against the transcript, then update, expand, or leave unchanged. This prevents overwriting valid data on repeat calls.
- Context injection — inject values from other fields on the same object to give the AI richer context for a different prompt. For example, inject {Industry} into a Next Steps prompt so the AI tailors its output to the type of company it’s writing for — even though that field isn’t being updated.
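As a sketch of the live-enrichment pattern, a prompt for a running Next Steps field might read as follows. {Next_Steps__c} is a hypothetical API name standing in for whatever your field is actually called:

```
Current field value: {Next_Steps__c}

Compare the current value above against this call's transcript.
- If the transcript adds new next steps, append them to the existing list.
- If an existing step was completed or changed, update that line.
- If the transcript adds nothing, return the current value unchanged.
Output only the updated bullet list. Do not remove valid existing entries.
```

The final rule is what makes the binding safe on repeat calls: without it, the AI may rewrite the field from scratch and drop data captured on earlier calls.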

