Demand Forecast Review Prompt

Prompt

You are a demand planner reviewing forecast accuracy for the period.

Forecast vs. actual data:
[PASTE: SKU | Forecasted demand | Actual demand | Variance units | Variance % | Category]

Analyze:
1) Overall forecast accuracy (MAPE — mean absolute percentage error) for the period
2) Best-performing categories — lowest forecast error
3) Worst-performing categories — highest forecast error; what drove the miss?
4) Bias check — are we consistently over-forecasting or under-forecasting certain categories?
5) SKUs with >50% forecast error — identify and flag for manual review in next cycle

Output: Forecast accuracy report. End with: top 3 actions to improve forecast accuracy next period (be specific — not just "improve the model").
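The accuracy, bias, and flagging steps above can be sketched in pandas. The sample data and column names here are illustrative placeholders for the pasted forecast-vs-actual table:

```python
import pandas as pd

# Hypothetical sample matching the prompt's paste format.
df = pd.DataFrame({
    "sku":      ["A100", "A200", "B100", "B200"],
    "forecast": [500, 120, 80, 1000],
    "actual":   [450, 300, 75, 1100],
    "category": ["Widgets", "Widgets", "Gadgets", "Gadgets"],
})

# Absolute percentage error per SKU; signed bias (+ = over-forecast).
df["abs_pct_error"] = (df["forecast"] - df["actual"]).abs() / df["actual"]
df["bias"] = (df["forecast"] - df["actual"]) / df["actual"]

# 1) Overall MAPE for the period
mape = df["abs_pct_error"].mean()

# 2) & 3) Best and worst categories by mean forecast error
error_by_category = df.groupby("category")["abs_pct_error"].mean().sort_values()

# 4) Bias check: mean signed error per category
bias_by_category = df.groupby("category")["bias"].mean()

# 5) SKUs with >50% forecast error, flagged for manual review
flagged = df.loc[df["abs_pct_error"] > 0.5, "sku"].tolist()
```

With this sample, `flagged` contains only `A200`, and the negative Widgets bias shows the category was under-forecast on average, which is exactly the distinction step 4 asks the model to surface.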

Why it works

The bias check separates random error from systematic over- or under-forecasting. Systematic bias calls for process changes, and a headline error rate alone won't reveal it: a category can show moderate MAPE while consistently missing in the same direction.

Watch out for

Risks: MAPE can be distorted by low-volume SKUs with high percentage errors that have minimal business impact. Control: Weight accuracy metrics by revenue or volume contribution for strategic decisions.
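The distortion and its control can be shown with a two-SKU toy example (figures are hypothetical): a low-volume SKU with a 400% error swamps simple MAPE, while revenue-weighting keeps the metric anchored to what matters commercially.

```python
import pandas as pd

# Hypothetical: one tiny SKU with a huge % miss, one high-revenue SKU that was accurate.
df = pd.DataFrame({
    "sku":      ["tiny", "big"],
    "forecast": [10, 10_000],
    "actual":   [2, 9_500],
    "revenue":  [50, 500_000],
})

df["abs_pct_error"] = (df["forecast"] - df["actual"]).abs() / df["actual"]

# Simple MAPE: every SKU counts equally, so the tiny SKU dominates.
simple_mape = df["abs_pct_error"].mean()

# Revenue-weighted MAPE: each SKU's error weighted by its revenue contribution.
weighted_mape = (df["abs_pct_error"] * df["revenue"]).sum() / df["revenue"].sum()
```

Here `simple_mape` exceeds 200% while `weighted_mape` lands near 5%, so the weighted figure is the better input for strategic decisions even though both are computed from the same rows.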

Used by

Data Analysts