Forecast Model Accuracy Review Prompt
Prompt
You are a revenue operations manager reviewing forecast accuracy.

Forecast vs. actual data (last 6 periods):
[PASTE: Period | Forecast submitted (commit/best case) | Actual revenue | Variance $ | Variance % | Notes on large variances]

Analyze:
1. Forecast accuracy %: calculate 1 − |Variance| ÷ Actual for each period
2. Bias direction: are we consistently over-forecasting or under-forecasting?
3. Variance by source: are large misses coming from specific reps, regions, or deal types?
4. Best case conversion: what % of best case deals typically close? Is this predictable?
5. Improvement recommendations: process changes that would improve forecast accuracy

Output: Forecast accuracy analysis. Bias assessment. Variance attribution. Recommended changes to the forecasting process or methodology.
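The accuracy and bias calculations in steps 1 and 2 are simple enough to verify outside the model. The Python sketch below shows one way to do that, assuming the pasted table has been transcribed into a list of dicts; the field names and figures are hypothetical.

```python
# Minimal sketch of steps 1 and 2, assuming the pasted table has been
# transcribed into a list of dicts. Field names and figures are hypothetical.

periods = [
    {"period": "P1", "commit": 1_100_000, "actual": 1_000_000},
    {"period": "P2", "commit": 1_250_000, "actual": 1_150_000},
    {"period": "P3", "commit": 980_000, "actual": 1_020_000},
]

for p in periods:
    variance = p["commit"] - p["actual"]
    # Forecast accuracy % = 1 - |variance| / actual
    accuracy = 1 - abs(variance) / p["actual"]
    print(f"{p['period']}: accuracy {accuracy:.1%}, variance {variance:+,}")

# Bias direction: mean signed variance as a share of actuals.
# Positive => consistent over-forecasting; negative => under-forecasting (sandbagging).
bias = sum((p["commit"] - p["actual"]) / p["actual"] for p in periods) / len(periods)
print(f"Mean signed bias across periods: {bias:+.1%}")
```

A mean bias near zero alongside large per-period variances points to noise rather than systematic over- or under-forecasting, which changes which of the step 5 recommendations applies.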
Why it works
Bias analysis, whether forecasts run consistently too high (over-optimism) or consistently too low (sandbagging), is the most actionable output of a forecast accuracy review because it identifies a systemic behavioural pattern that can be corrected. Separating commit accuracy from best-case accuracy recognises that the two are used differently: commit is a promise, best case is a scenario, and they should be held to different standards. A calibration recommendation (e.g., adjust commit by x%) converts the historical accuracy analysis into a practical forecasting improvement.
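To make the calibration idea concrete with hypothetical numbers: if commit forecasts over the last six periods averaged 12% above actuals, the implied calibration factor is 1 ÷ 1.12 ≈ 0.89, so a $1.0M commit would be read as roughly $890K until the underlying bias is coached out (see the caution below).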
Watch out for
Forecast accuracy reviews that end in mechanical adjustment factors applied to rep forecasts (e.g., always multiplying commit by 85%) may temporarily improve accuracy, but reps will learn to game the adjustment by inflating their commits to compensate. Address forecast accuracy through coaching on qualification discipline and stage definitions rather than through mathematical correction of the forecast output.