Data & Analytics Prompts to Save Time on Repetitive Tasks
You are a data analyst. Review this CSV export for data quality issues.

CSV data: [Paste your CSV or data table here]

Check for:
1) Nulls / blanks (which columns, how many rows missing?)
2) Duplicates (exact or fuzzy matches on key field)
3) Outliers (values that deviate from the distribution)
4) Format inconsistency (dates, amounts, codes)
5) Logical errors (e.g., end_date before start_date)

Produce a report:
- Summary: # issues found by type
- Details: list specific rows with issues
- Recommendations: how to fix or investigate

Flag critical issues at the top.
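If you want to pre-screen a file before handing it to the model, the null, duplicate, and date-logic checks can be sketched in plain Python. The column names (`invoice_id`, `amount`, `start_date`, `end_date`) and the sample rows are hypothetical:

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical sample export; in practice use open("export.csv") instead.
SAMPLE = """invoice_id,amount,start_date,end_date
A-100,250.00,2024-01-05,2024-01-20
A-101,,2024-02-01,2024-02-10
A-100,250.00,2024-01-05,2024-01-20
A-102,180.00,2024-03-15,2024-03-01
"""

rows = list(csv.DictReader(StringIO(SAMPLE)))

# 1) Nulls/blanks per column.
blanks = {col: sum(1 for r in rows if not r[col].strip()) for col in rows[0]}

# 2) Exact duplicates on the key field.
dupes = [k for k, n in Counter(r["invoice_id"] for r in rows).items() if n > 1]

# 5) Logical errors: end_date before start_date (ISO dates compare as strings).
bad_dates = [r["invoice_id"] for r in rows if r["end_date"] < r["start_date"]]

print(blanks)     # {'invoice_id': 0, 'amount': 1, 'start_date': 0, 'end_date': 0}
print(dupes)      # ['A-100']
print(bad_dates)  # ['A-102']
```

Outlier and fuzzy-match checks need more context (distributions, similarity thresholds), which is exactly where the prompt above earns its keep.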
You are a data analyst. Help me build a reconciliation between two data sources.

Source 1: [System name]
[Paste: key field, amount field, date, other relevant columns — sample rows]

Source 2: [System name]
[Paste: same structure from the second source]

Match on: [Key field — invoice number, transaction ID, etc.]

Produce a reconciliation that shows:
1) Matched items (exist in both sources with the same amount)
2) Matched with differences (exist in both but amounts differ — show variance)
3) Source 1 only (exists in Source 1 but not Source 2)
4) Source 2 only (exists in Source 2 but not Source 1)
5) Summary statistics:
   - Total records and $ in each source
   - # and $ matched exactly
   - # and $ matched with differences
   - # and $ unmatched on each side

Format: reconciliation report with sections for each category. Include variance analysis for matched-with-differences items.
You are a systems analyst performing a data quality audit on ERP data.

Data to review: [PASTE: Sample records from the dataset — 20–50 rows is sufficient. Include all key fields.]

Dataset: [NAME THE DATASET: customer master / vendor master / item master / GL transactions / etc.]

Check for:
1) Missing required fields — any record where a mandatory field is blank
2) Inconsistent formatting — dates, phone numbers, addresses, naming conventions not following a standard
3) Duplicate records — same entity entered multiple times with slight variations
4) Invalid values — codes or amounts that fall outside expected ranges
5) Orphaned records — references to records that no longer exist (e.g., transactions against deleted accounts)

Output: data quality report — issue type | count | example | recommended fix. Overall data quality score (% of records with zero issues). Priority cleanup list.
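The "% of records with zero issues" score is simple to compute once you have a per-record issue counter. A dependency-free sketch, assuming a hypothetical vendor master with three required fields:

```python
# Hypothetical vendor-master records; the required-field list is an assumption.
records = [
    {"vendor_id": "V1", "name": "Acme", "tax_id": "12-345"},
    {"vendor_id": "V2", "name": "", "tax_id": "98-765"},    # missing name
    {"vendor_id": "V3", "name": "Acme Inc", "tax_id": ""},  # missing tax_id
    {"vendor_id": "V4", "name": "Globex", "tax_id": "55-111"},
]
REQUIRED = ("vendor_id", "name", "tax_id")

def issues(rec):
    """Count missing required fields; extend with format/range/orphan checks."""
    return sum(1 for f in REQUIRED if not rec[f].strip())

clean = sum(1 for r in records if issues(r) == 0)
score = 100 * clean / len(records)
print(f"Data quality score: {score:.0f}%")  # Data quality score: 50%
```

The formatting, duplicate, and orphaned-record checks from the prompt would each become another term inside `issues()`.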
You are a demand planner reviewing forecast accuracy for the period.

Data: [PASTE: SKU | Forecasted demand | Actual demand | Variance units | Variance % | Category | Channel]

Analyze:
- Overall forecast accuracy — MAPE (Mean Absolute Percentage Error) for the period
- Best-performing categories — lowest forecast error
- Worst-performing categories — highest error; what drove the miss?
- Bias check — are we consistently over-forecasting or under-forecasting? Which categories?
- SKUs with >50% forecast error — flag for manual review in the next cycle

Output: Forecast accuracy report. End with the top 3 specific actions to improve accuracy next period — not generic advice, but specific changes to data inputs, methods, or review cadence.
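MAPE, bias, and the >50% flag can all be verified by hand before trusting any report. A small sketch with made-up (forecast, actual) pairs, using the common convention of dividing absolute error by actual demand:

```python
# Hypothetical (forecast, actual) unit pairs per SKU.
data = {"SKU-A": (100, 120), "SKU-B": (200, 190), "SKU-C": (50, 120)}

# MAPE: mean of |actual - forecast| / actual, expressed as a percentage.
errors = [abs(a - f) / a for f, a in data.values()]
mape = 100 * sum(errors) / len(errors)

# Bias: mean signed error; positive means we under-forecast on average.
bias = sum(a - f for f, a in data.values()) / len(data)

# Flag SKUs with >50% forecast error for manual review.
flagged = [s for s, (f, a) in data.items() if abs(a - f) / a > 0.5]

print(round(mape, 1), round(bias, 1), flagged)  # 26.8 26.7 ['SKU-C']
```

Note that MAPE is undefined for zero-demand periods and punishes under-forecasting asymmetrically; for intermittent demand, weighted MAPE (WMAPE) is the usual substitute.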
You are a demand planner analyzing seasonal demand patterns.

Historical data: [PASTE: SKU or product family | Month | Units sold | Year — minimum 2 years of history]

Analyze:
- Seasonal index for each month — actual month ÷ average monthly demand × 100
- Peak months — which months consistently over-index, and by how much?
- Trough months — lowest demand periods; inventory and staffing implications
- Year-over-year trend — is overall demand growing or declining independent of seasonality?
- Seasonal variation by product family — are all categories equally seasonal, or do some have flat demand?

Output: Seasonal index table by month and family. Peak/trough calendar. Recommended inventory build-ahead quantities for peak season, with a build start date.
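The seasonal index formula in the prompt (month ÷ average monthly demand × 100) can be sketched as follows; the two years of monthly unit sales are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical monthly unit sales: (year, month) -> units, 2 years of history.
history = {(2023, m): u for m, u in enumerate(
    [80, 70, 90, 100, 120, 150, 160, 150, 110, 100, 90, 130], start=1)}
history.update({(2024, m): u for m, u in enumerate(
    [90, 80, 95, 110, 130, 170, 180, 160, 120, 110, 95, 140], start=1)})

# Average units per calendar month across the years of history.
by_month = defaultdict(list)
for (year, month), units in history.items():
    by_month[month].append(units)
monthly_avg = {m: mean(v) for m, v in by_month.items()}

# Seasonal index: month average ÷ overall monthly average × 100.
overall = mean(monthly_avg.values())
index = {m: round(100 * avg / overall) for m, avg in monthly_avg.items()}

peak = max(index, key=index.get)
print(peak, index[peak])  # 7 144  (July over-indexes ~44% above average)
```

An index of 100 is an average month; anything consistently above ~110 is a candidate for build-ahead inventory, and the build start date falls back from the peak by the production plus replenishment lead time.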
You are an inventory manager preparing the monthly inventory accuracy report.

Count results: [PASTE: SKU | Location | System qty | Count qty | Accurate? (yes/no) | Variance $ if inaccurate]

Calculate:
- Location accuracy % = locations counted correctly ÷ total locations counted × 100
- SKU accuracy % = SKUs counted correctly ÷ total SKUs counted × 100
- Dollar accuracy % = total inventory value with no variance ÷ total inventory value counted × 100
- Accuracy by zone or area — are certain locations consistently less accurate?
- Accuracy trend — improving, stable, or declining vs. the prior 3 months?

Output: Inventory accuracy scorecard. Accuracy by metric and by zone. Trend analysis. Flag any zone below [TARGET %] — it requires a root cause investigation and corrective action plan.
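The SKU and dollar accuracy formulas above are easy to sanity-check in a few lines. The count rows and unit costs here are hypothetical (one SKU per location, so location accuracy would equal SKU accuracy in this toy example):

```python
# Hypothetical count results: (sku, location, system_qty, count_qty, unit_cost).
counts = [
    ("SKU-1", "A-01", 100, 100, 2.50),
    ("SKU-2", "A-02", 40, 38, 10.00),
    ("SKU-3", "B-01", 75, 75, 4.00),
    ("SKU-4", "B-02", 20, 25, 1.00),
]

# SKU accuracy: counted quantity matches system quantity exactly.
accurate = [c for c in counts if c[2] == c[3]]
sku_accuracy = 100 * len(accurate) / len(counts)

def value(c):
    """Counted inventory value for one row: count qty × unit cost."""
    return c[3] * c[4]

# Dollar accuracy: value counted with no variance ÷ total value counted.
dollar_accuracy = 100 * sum(map(value, accurate)) / sum(map(value, counts))

print(f"SKU accuracy: {sku_accuracy:.0f}%")        # SKU accuracy: 50%
print(f"Dollar accuracy: {dollar_accuracy:.1f}%")  # Dollar accuracy: 57.6%
```

Notice the two metrics diverge: here a cheap SKU drives one miss and an expensive one drives the other, which is exactly why the scorecard should show both.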
Establish data quality standards, ownership, and maintenance processes.

[PASTE: Your CRM data governance requirements and goals here]

Provides: a strategic framework, actionable recommendations, and a measurement approach for CRM data governance.
Automate data collection and reporting to surface insights faster.

[PASTE: Your reporting & insight automation requirements and goals here]

Provides: a strategic framework, actionable recommendations, and a measurement approach for reporting & insight automation.
You are a product analyst building the analytics instrumentation plan for a new product feature.

Feature data: [DESCRIBE: Feature name, user flow (step by step), key behaviors to track, business questions the data should answer, current instrumentation (if any), analytics tool in use (Mixpanel/Amplitude/Heap/custom)]

Build the instrumentation plan:
1) Events to track — for each user action in the feature flow, define the event name and the properties to capture
2) Event naming convention — a consistent naming schema (noun_verb: "file_uploaded" / "report_exported") for maintainability
3) User properties — what user attributes should be available for segmentation? (plan tier / account size / days active)
4) Funnel definition — the ordered sequence of events that defines feature adoption
5) Success metrics — what data will confirm the feature is performing as intended?

Output: Instrumentation plan. Event dictionary. User property list. Funnel definition. Success metrics. QA testing checklist to confirm tracking is working.
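To make the funnel definition concrete, here is a simplified sketch of how step-by-step conversion falls out of an event log. The event names follow the noun_verb convention from the prompt; the log, funnel, and user IDs are invented, and (unlike Mixpanel or Amplitude) this toy ignores event timestamps and ordering:

```python
# Hypothetical event log: (user_id, event_name) pairs.
events = [
    ("u1", "file_uploaded"), ("u1", "report_generated"), ("u1", "report_exported"),
    ("u2", "file_uploaded"), ("u2", "report_generated"),
    ("u3", "file_uploaded"),
]

# Ordered funnel definition for the (assumed) feature flow.
FUNNEL = ["file_uploaded", "report_generated", "report_exported"]

users_at_step = []
for step in FUNNEL:
    users = {u for u, e in events if e == step}
    # A user counts at this step only if they also reached every earlier step.
    if users_at_step:
        users &= users_at_step[-1]
    users_at_step.append(users)

# % of funnel entrants remaining at each step.
conversion = [100 * len(u) / len(users_at_step[0]) for u in users_at_step]
print([len(u) for u in users_at_step])  # [3, 2, 1]
```

Defining the funnel this precisely up front is also what makes the QA checklist testable: fire each event once in a staging account and confirm the user advances exactly one step.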
Help me design a KPI dashboard for [FUNCTION — e.g. sales team, marketing department, customer support].

Audience: [who will look at this dashboard? What decisions do they make?]
Update frequency: [daily, weekly, monthly?]
Data sources available: [list what you have access to]

Please define:
1. The 5–8 most important metrics this dashboard should show (with definitions)
2. For each metric: how to calculate it, what 'good' looks like, and what action it should trigger
3. Recommended layout (what goes at the top, what's secondary)
4. What to cut — common metrics that look good but don't drive decisions
Turn this report into a 5-minute executive summary.

[PASTE REPORT]

Audience: [CEO / board / dept head]
Decisions they need to make: [what do you want them to do?]
Biggest concerns right now: [what are they most focused on?]

Write an executive summary that:
1. Opens with the single most important insight
2. Covers 3–4 key findings (conclusion first, then supporting data)
3. Flags one area of concern or risk
4. Ends with a clear recommendation or decision request
5. Uses plain English — no jargon, no passive voice

Target: 200–300 words.