Deal Scoring Assessment Prompt

Prompt

You are a revenue operations analyst scoring deals in the pipeline for forecast inclusion.

Deal data: [PASTE: Deal name | Stage | Amount | Close date | Champion identified? (yes/no) | Economic buyer engaged? (yes/no) | Compelling event? (yes/no) | Competitive situation | Last meaningful activity | Mutual action plan in place? (yes/no)]

Score each deal on:
1. Engagement quality — are the right stakeholders involved and active?
2. Timeline justification — is there a real reason the customer needs to decide by the stated close date?
3. Competitive risk — is there an active competitor involved? What is our differentiation?
4. Process alignment — is there a mutual action plan or are we just waiting?
5. Overall forecast category: Commit (high confidence) / Best case (likely but not certain) / Pipeline (early stage) / At risk (stalled or in jeopardy)

Output: Deal scoring table. Forecast category for each deal. Deals reclassified from Commit to At risk with reason. Total commit, best case, and pipeline values.
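The categorization logic the prompt asks the AI to apply can be sketched as a simple rule-based pre-filter, useful for sanity-checking the AI's output. This is a minimal illustration, not part of the prompt itself; the field names, the 30-day staleness threshold, and the signal-count cutoffs are all assumptions you would tune to your own pipeline.

```python
from dataclasses import dataclass

@dataclass
class Deal:
    name: str
    champion: bool              # Champion identified?
    buyer_engaged: bool         # Economic buyer engaged?
    compelling_event: bool      # Compelling event?
    mutual_action_plan: bool    # Mutual action plan in place?
    days_since_activity: int    # Days since last meaningful activity

def forecast_category(deal: Deal) -> str:
    """Map the yes/no rubric fields to a forecast category (illustrative rules)."""
    # Staleness overrides everything: no meaningful activity in a month means stalled.
    if deal.days_since_activity > 30:
        return "At risk"
    # Count positive signals across the four qualification fields.
    signals = sum([deal.champion, deal.buyer_engaged,
                   deal.compelling_event, deal.mutual_action_plan])
    if signals == 4:
        return "Commit"
    if signals >= 2:
        return "Best case"
    return "Pipeline"
```

A deal with all four signals and recent activity lands in Commit; a deal with no activity for 45 days is reclassified At risk even if the rep has it in Commit, which mirrors the reclassification step the prompt demands.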

Why it works

Using a structured data table as input ensures every deal is scored on the same five dimensions, eliminating the inconsistency that makes pipeline reviews unreliable. The five criteria (engagement quality, timeline justification, competitive risk, process alignment, overall category) map directly to the variables that predict deal outcomes. Asking for reclassification reasoning forces the AI to surface hidden risk in 'Commit' deals rather than rubber-stamping the rep's forecast.

Watch out for

The AI cannot know the history of your relationships, politics inside the customer account, or your own sales team's track record on forecasting. Treat the scoring as a structured starting point for the manager's judgement, not a replacement for it.

Used by

Revenue Ops Teams, Sales Reps