This guide teaches you how to write effective AI prompts across Momentum’s core features. Every prompt structure, field type rule, and example is drawn directly from Momentum’s production prompt engineering playbook. There’s no single correct way to prompt Momentum. The frameworks here reflect patterns we’ve seen work well across hundreds of customer implementations — but the best prompt is always the one that works for your data, your team, and your use case. Use this guide as a starting point, not a rulebook. Experiment, iterate, and don’t be afraid to simplify or go a completely different direction if it gets the job done.
Momentum runs all AI prompts post-call — there is no live analysis. Every prompt runs against a completed transcript. The five-part prompt architecture below is a useful starting point across features — but each feature has its own prompt interface, output rules, and constraints. Think of this page as a shared foundation, not a universal formula.

The biggest mistake new customers make is treating all features identically. Each feature has different output rules. This guide explains exactly where they diverge.

What’s in this guide

Signals, Smart Clips & Smart Tags

Two-stage trigger architecture, follow-up prompts, and structured value extraction.

Autopilot

Field-type rules, silence behavior, and examples for picklist, textarea, and more.

Call Summaries

The multi-lens framework — configure separate summaries for Sales and CS.

Coaching Agent

Writing competency rubrics with observable positive and negative indicators.

Quick Reference

Cheat sheets — structured vs. natural language, silence patterns, and common mistakes.

The Universal Prompt Architecture

Before you write your first prompt

When an AI produces bad output, the instinct is to blame the AI. In practice, the problem is almost always the prompt — not enough context, an ambiguous instruction, or a missing rule for an edge case the author didn’t anticipate. AI forces clarity: the more precisely you can describe what you want, the closer the output will be to what you actually need.

If something isn’t working, read your prompt as if you were seeing it for the first time with no context about your company, your Salesforce schema, or what “good” looks like. That’s exactly how the AI reads it.

Reliable Momentum prompts tend to address five things. You don’t need all five every time — a single plain-language question can outperform a structured prompt when the task is simple. But when a prompt is underperforming or producing inconsistent output, the fix almost always comes down to one of these.
  • ROLE: The AI’s function for this specific task. Be precise — not “you are an AI assistant” but “you are a sales intelligence analyst extracting qualification data from a B2B sales call transcript.”
  • CONTENT: What input the prompt receives. Transcript? A filtered set of historical calls? Signal outputs? Making this explicit prevents the AI from hallucinating context that isn’t there.
  • GOAL: The extraction or detection objective, stated precisely. One goal per prompt — if you need two things, write two prompts.
  • RULES: All constraints. What counts, what doesn’t, how to handle missing data, verbatim quote requirements, silence rules, edge cases. This is where most prompts fail — not enough rules.
  • OUTPUT FORMAT: Exact format specification. Field type for Autopilot. TRUE/FALSE for signal triggers. Section structure for summaries. No trailing statements. No preamble.

Two prompt styles

In practice, prompts fall into two styles. Both are valid — the right choice depends on task complexity.
  • Structured: Use for complex extraction, multi-condition logic, picklist classification, or any output where edge cases matter. Example: full ROLE / GOAL / RULES / OUTPUT FORMAT blocks.
  • Natural language: Use for simple, unambiguous detection or extraction where the task is self-evident. Examples:
      Signal: “Did the customer mention how many people are on their sales team? Output TRUE or FALSE only.”
      Smart Tag: “Identify any competitors mentioned in the call.”
      Autopilot: “Extract any next steps agreed on this call. Format as a bullet list. If nothing was agreed, return nothing.”
Throughout this guide, each feature section shows both styles where they apply.
Word choice changes AI behavior. The specific words you use in a prompt aren’t interchangeable. Small differences in phrasing produce meaningfully different outputs:
  • “You should” → treated as a suggestion the AI can override when uncertain
  • “You MUST” → treated as a hard constraint
  • “Summarize” → AI paraphrases and may add interpretation
  • “Extract” → AI pulls what’s explicitly stated, stays closer to the source
  • “Look for” → broad search, more likely to surface tangential mentions
  • “Identify only” → narrows scope, reduces false positives
When a prompt is producing output that’s too loose, too interpretive, or firing on things it shouldn’t — word choice is often the first thing to tighten.
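As a quick illustration, here are two hypothetical phrasings of the same competitor-detection task:
  • Loose: “Look for competitors discussed on the call.” (broad; likely to surface passing references)
  • Tight: “Identify only competitors the customer explicitly names. Output the competitor names and nothing else.” (narrow scope; fires only on explicit mentions)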

Handling absent data

Every prompt should define what happens when the expected content isn’t found in the transcript. The only wrong answer is leaving it unspecified — the AI will fill the gap with hallucinated data, a placeholder, or a hedging statement. What you output when nothing is found depends on your use case.
This guidance applies primarily to Autopilot CRM fields. Signal follow-up prompts work differently — see Signals, Smart Clips & Smart Tags.
Choose based on your use case:
  • Return nothing: “If no budget information is discussed, return nothing.”
  • Return an explicit value: “If the topic was not discussed, output N/A.” — For AI-specific fields in Salesforce dedicated to Momentum, this is expected and correct. It lets you validate that Autopilot ran and spot missing values for coaching purposes.
  • Say nothing (don’t do this): if the prompt doesn’t address the absent-data case at all, the AI guesses.

How to make silence stick

The AI’s default instinct is to say something — even when there’s nothing to say. To override it, you need to be explicit and raise the stakes. A weak rule like “return nothing if not discussed” often isn’t enough. These patterns work:
  • Frame silence as the correct choice, not a fallback: “Complete silence is the correct output.”
  • Treat any output as an error: “Outputting any text when no qualifying data exists is a critical error.”
  • Ban trailing statements by name: “Do not append any statements about what was not found, not discussed, or not mentioned — even when valid data has been extracted. Phrases such as ‘no competitors were mentioned,’ ‘no further information was discussed,’ or ‘no update required’ are strictly forbidden.”
  • End with a positive constraint: “Your output must contain only [qualifying data] and nothing else.”
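Put together, the silence block in the RULES section of an Autopilot prompt might read like this (a sketch; adapt the field and wording to your use case):

RULES
- If no budget information is discussed, output nothing. Complete silence is the
  correct output.
- Outputting any text when no qualifying data exists is a critical error.
- Do not append statements about what was not found, not discussed, or not
  mentioned. Phrases such as "no budget was mentioned" are strictly forbidden.
- Your output must contain only the extracted budget information and nothing else.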

Starting templates

Two templates — use whichever fits the task.

Structured format

Use when the task has multiple conditions, edge cases, or a specific output format that must be exact.
ROLE
You are a [specific role] analyzing a [specific input type].

CONTENT
The input is a [call transcript / set of call summaries / CRM record + transcript].

GOAL
Extract / detect / evaluate [one specific thing].

RULES
- Include: [what qualifies]
- Exclude: [what doesn't qualify]
- If [condition not met]: [exact silence behavior]
- Do not infer — only extract what is explicitly stated
- No preamble, no trailing statement

OUTPUT FORMAT
[Exact format — see feature-specific rules]

Natural language format

Use when the task is simple and unambiguous. Same five ingredients, no explicit labels needed.
Extract [what] from the transcript. Only include [qualifying criteria — be specific].
If [condition not met], [silence behavior]. Output [exact format].
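Filled in, a natural-language Autopilot prompt might read like this (a hypothetical example; adjust the criteria to your own fields):

Extract the customer's stated budget from the transcript. Only include explicit
dollar amounts or ranges the customer gives for this purchase. If no budget is
discussed, return nothing. Output the amount or range as plain text.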
Show the AI what good looks like. Describing the output format is useful. Showing the AI an example of ideal output is more reliable — especially for summaries, structured extractions, and any prompt where tone and level of detail matter. Add an EXAMPLE FORMAT TO FOLLOW block at the end of your prompt with a fictional but realistic sample of exactly what you want back. The AI will match the format, the specificity, and the style of your example — not just the label you gave it. This is one of the highest-leverage things you can do to reduce iteration on a new prompt. Instruct the AI to use the example for format only, not content — otherwise it may carry example text into real outputs. The New Business and CS Summary examples in Call Summaries both demonstrate this pattern.
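For instance, a Next Steps prompt could end with a block like this (the sample content is fictional; tell the AI to copy the format only):

EXAMPLE FORMAT TO FOLLOW
Use this example for format and level of detail only. Never copy its content
into real outputs.
- Send updated pricing proposal to Dana by Friday
- Schedule technical deep-dive with the security team for next week
- Customer to confirm pilot start date after internal review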

Using Salesforce field bindings {FieldName}

Momentum lets you inject an existing Salesforce field value directly into a prompt using {FieldName} bindings — giving the AI context about what’s already captured before it processes the transcript. Bindings serve two distinct purposes:
  • Live enrichment — inject the current value of the field being written, compare it against the transcript, then update, expand, or leave unchanged. This prevents overwriting valid data on repeat calls.
  • Context injection — inject values from other fields on the same object to give the AI richer context for a different prompt. For example, inject {Industry} into a Next Steps prompt so the AI tailors its output to the type of company it’s writing for — even though that field isn’t being updated.
Use bindings in the CONTENT or RULES sections of any prompt. You can reference any field on the object, not just the field the prompt is writing to.
If the referenced field is empty. If a Salesforce field has no value when the prompt runs, the binding resolves to blank. Always account for this in your rules — e.g. “If {Business_Impact__c} is blank, treat this as a fresh extraction with no prior context.”

Example 1 — Text field enrichment (Business Impact)

#SALESFORCE STRUCTURED DATA
You will be provided the current value of the Business Impact field from Salesforce.
Current Business Impact field value is: {Business_Impact__c}

#RULES FOR FIELD ENRICHMENT
Compare the current Salesforce field value to the information found in the transcript.
- If the transcript provides additional details not present in the current value, enrich the
  field to include these, maintaining accuracy and relevance.
- If previously logged impact is no longer relevant or was disproved, remove it.
- If the transcript confirms the current value but adds nothing new, return the current
  value unchanged.
- If the field is empty and the transcript provides clear business impact, populate it.
- If no business impact is discussed in the transcript, leave the response blank.

Always use only clear, explicit, or strongly implied information from the transcript.
Summarize in a single paragraph — maximum three sentences — using quantifiable
details, strategic outcomes, and CRM-ready language.

#OUTPUT FORMAT
Respond with the updated field value only, formatted as a concise paragraph ready
to be saved in Salesforce. If nothing is discussed, leave the response blank.

Example 2 — Multi-select picklist enrichment (Competitors)

#SALESFORCE STRUCTURED DATA
The current value of the Competitors field in Salesforce is: {Competitors__c}

#RULES FOR FIELD ENRICHMENT
- If the field is empty: populate with competitor(s) from this transcript, or
  "No Competitors" if none mentioned.
- If the field contains "No Competitors": replace with new competitor(s) if detected;
  otherwise keep as "No Competitors."
- If the field already lists competitors: add any new ones from this transcript,
  maintaining order and uniqueness. Do NOT add duplicates.
- Never combine "No Competitors" with any other value.

#OUTPUT FORMAT
Respond ONLY with the final updated competitor list string, using allowed values
separated by semicolons (e.g., Adobe;Drupal;Shopify). No commentary or explanation.
The enrichment logic pattern. Both examples above follow the same structure: (1) inject the current field value via binding, (2) define comparison rules covering all states — empty, unchanged, needs update, needs removal, (3) specify a clean output format. This pattern works across any field type where you want Momentum to build on what’s already there rather than overwrite it.
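Context injection, the second binding purpose above, follows a simpler shape because the injected field is read but never written. A hypothetical sketch of a Next Steps prompt enriched with {Industry}:

#SALESFORCE STRUCTURED DATA
The account's industry is: {Industry}

#RULES
- Extract only next steps explicitly agreed on this call.
- Tailor phrasing to what is typical for a company in this industry.
- If {Industry} is blank, write industry-neutral next steps.

#OUTPUT FORMAT
A bullet list of agreed next steps. If none were agreed, return nothing.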