Customer Success Prompts to Make Better Decisions
You are a customer success manager scoring account health. Account data: [PASTE: Account | ARR | Product(s) used | Login/usage frequency | Support tickets (last 90 days) | NPS score | Last exec engagement date | Contract renewal date | Expansion opportunities identified] Score each account across: 1. Product adoption — usage frequency vs. expected for their contract tier 2. Support health — ticket volume and severity trend; escalations? 3. Relationship depth — exec sponsor engaged, multiple contacts, or single-threaded? 4. Financial health — any payment delays, downgrade requests, or usage below contracted minimums? 5. Overall health: Green (healthy/growing) / Yellow (at risk indicators) / Red (churn risk) Output: Account health dashboard. Red and Yellow accounts requiring immediate action. Recommended intervention per at-risk account. Renewal risk exposure ($).
You are an account manager reviewing your book of business for expansion opportunities. Account data: [PASTE: Account | Current products | ARR | Employees | Industry | Products they don't have yet | Last upsell discussion date | Any signals of new needs (new hires/new projects/usage spikes)] For each account: 1. Whitespace — which products or modules do they not have that they would logically benefit from? 2. Usage signals — are they using current product heavily enough to justify expansion? 3. Growth signal — headcount growth, new office, acquisition, or new initiative that creates new need? 4. Relationship access — do we have the relationships needed to have an expansion conversation? 5. Recommended next action: expansion conversation now / build relationship first / not ready yet Output: Expansion opportunity list ranked by likelihood × value. Top 5 accounts for immediate expansion outreach. Recommended approach for each.
You are a customer success manager preparing for upcoming renewals. Renewal data: [PASTE: Account | ARR | Renewal date | Health score | Last QBR date | Champion strength | Economic buyer relationship | Any open issues | Usage trend (up/flat/down)] For each renewal in the next 90 days: 1. Risk classification: low / medium / high based on health signals 2. Key risk factors — what specifically could cause churn or downgrade? 3. Required actions before renewal conversation — fix issues, re-engage stakeholders, demonstrate value 4. Expansion opportunity — is there a natural expansion conversation to have alongside renewal? 5. Internal escalation — any renewal requiring VP or executive involvement? Output: Renewal pipeline by risk level. At-risk renewals with specific action plan and owner. Total ARR at risk. Expansion opportunities to bring into renewal conversations.
You are a customer success manager transitioning a renewing account to an expansion conversation. Account data: [PASTE: Account | Renewal ARR | Renewal date | Health score | Expansion opportunities identified | Products they don't have | Champion readiness for expansion conversation | Any timing factors (budget cycle/new initiative)] Plan the handoff: 1. Renewal close confirmation — confirm renewal is secured before pivoting to expansion 2. Expansion trigger — what specific event or need justifies opening an expansion conversation now? 3. Who owns it — does CS own the expansion conversation or does it get handed to sales? Define clearly. 4. Introduction plan — if handing to sales, how is the introduction made? CS must warm the intro. 5. Customer value foundation — what value has been demonstrated that makes the expansion conversation credible? Output: Renewal-to-expansion transition plan. Owner for expansion conversation. Introduction approach if handing off. Talking points for first expansion conversation.
You are a customer success manager identifying customers ready to become advocates. Account data: [PASTE: Account | NPS score | Health score | Time as customer | Key achievements/ROI realized | Reference given before? | Executive relationship level | Any public-facing wins (case study/press release) | Willingness to advocate (known or estimated)] For each potential advocate: 1. Advocacy readiness — are they genuinely successful and willing to speak publicly? 2. Best advocacy format — reference call / case study / event speaker / G2/Capterra review / logo use 3. Ask to make — specific, low-effort first ask that matches their willingness level 4. Value exchange — what do we offer the customer for their advocacy time? 5. Internal handoff — who manages the advocacy relationship: CS, marketing, or a dedicated reference program? Output: Customer advocacy pipeline. Recommended ask per customer. Value exchange. Internal owner for each relationship.
You are a customer success manager reviewing onboarding progress for a new customer. Onboarding data: [PASTE: Account | Go-live date | Days since go-live | Onboarding milestones (list with complete/incomplete status) | Active users vs. contracted users | Key feature adoption (yes/no for each) | Any support tickets or issues | Last CSM contact date] Assess: 1. Milestone completion rate — % of onboarding milestones completed on schedule 2. User adoption — active users as % of contracted licenses; flag if <50% at day 30 3. Feature adoption — are they using the core features that drive value for their use case? 4. Issue log — any open issues that are blocking adoption or creating negative sentiment 5. 30/60/90 day health outlook — based on current trajectory, will this customer be successful? Output: Onboarding health assessment. At-risk indicators. Recommended interventions. Next CS action with specific deadline.
You are a customer success operations manager designing a customer health scoring model. Business context: [DESCRIBE: Product type (SaaS/platform/services), key usage signals available, customer data available in CRM (support tickets/NPS/usage logs/contract data), team size for CS coverage] Design the health score model: 1. Metric selection — 4–6 measurable signals that predict retention and growth (usage frequency / feature adoption / support ticket volume / NPS / stakeholder engagement / contract utilization) 2. Weighting — assign weight to each metric based on predictive value for churn (usage typically 40–50% of score) 3. Scoring scale — define what Green/Yellow/Red means for each metric and overall 4. Update frequency — how often does the score refresh? (real-time / daily / weekly) 5. Action triggers — what score or score change triggers a CS action? What is that action? Output: Health score design document. Metric definitions. Weighting table. Green/Yellow/Red thresholds. Action trigger rules.
You are a customer success manager analyzing churn patterns. Churned customer data (last 12 months): [PASTE: Account | ARR | Churn date | Stated reason | Actual reason (if different) | Industry | Company size | Product(s) used | Tenure at churn | Health score at 90 days before churn | Any escalations in last 6 months] Analyze: 1. Churn rate by segment — which industries, sizes, or product tiers churn most? 2. Churn by tenure — are customers churning early (onboarding failure), mid-term (value not realized), or late (competitive displacement)? 3. Leading indicators — what health score, usage, or behavior patterns were present 90 days before churn? 4. Stated vs. actual reasons — is "budget" the real reason or is it masking product or service issues? 5. Preventable vs. unpreventable — what % of churn could have been avoided with different actions? Output: Churn analysis report. Leading indicators for early detection. Preventable churn amount. Recommendations to reduce churn rate.
You are a customer success manager analyzing NPS survey results. NPS data: [PASTE: Period | Total respondents | Promoters (9–10) | Passives (7–8) | Detractors (0–6) | NPS score | Verbatim comments from detractors | Verbatim from promoters | Response rate %] Analyze: 1. NPS calculation — Promoters% − Detractors%; trend vs. prior period and year ago 2. Detractor themes — categorize detractor verbatims; top 3 reasons for low scores 3. Promoter themes — what do happy customers credit? Use in marketing and retention 4. At-risk accounts — identify specific detractor accounts that need immediate outreach 5. Action plan — for each detractor theme, what product or process change would address it? Output: NPS analysis. Detractor theme breakdown. At-risk account list for immediate CS follow-up. Action plan for top themes. Estimated NPS impact of each action if addressed.
You are a customer success operations manager planning CS team capacity. Data: [PASTE: Current CS headcount | Total ARR managed | ARR per CSM | Account count per CSM | Average time per account per month (hrs) | Churn rate by CSM ratio | Upcoming new customer volume | Any planned team changes] Analyze: 1. Current CSM-to-ARR ratio — how does it compare to benchmark? (typically $2–5M ARR per CSM depending on segment) 2. Account coverage — how many accounts per CSM? Is it manageable for the required touch model? 3. Time capacity — total available CS hours vs. hours required for current book; are CSMs stretched? 4. Churn correlation — do high-ratio CSMs (more accounts) have higher churn rates? 5. Hiring plan — at current growth rate, when does a new CSM need to be hired? Output: CS capacity analysis. Current ratio vs. benchmark. Hiring trigger point. Risk of current coverage model on churn.
You are a customer success manager building an intervention plan for a Red-status account. Account data: [PASTE: Account | ARR | Health indicators (usage drop / support escalations / NPS low / champion left / payment issue) | Renewal date | What has been tried | Root cause hypothesis] Build the intervention plan: 1. Root cause — what is actually driving the risk? (product gap / adoption failure / competitive / relationship / budget) 2. Intervention owner — CSM / account manager / VP-level / executive sponsor? 3. Specific actions — each action tied to a root cause; not generic "check in calls" 4. Timeline — what must happen in the next 7 / 30 / 60 days to prevent churn? 5. Go/no-go decision — at what point do we accept churn is likely and shift to minimum-cost retention vs. maximum-effort recovery? Output: Account intervention plan. Day 1 / Week 1 / Month 1 actions with owner. Decision trigger for escalation or accept.
You are a customer success manager synthesizing customer feedback into product and business insights. Feedback data: [PASTE: Source (NPS/support tickets/QBR notes/sales calls/churn interviews) | Feedback themes | Volume of mentions | Segment of customers giving feedback (size/industry/tenure)] Analyze: 1. Top feature requests — most frequently requested product improvements; segment by customer tier 2. Common friction points — where do customers consistently struggle? 3. Competitive mentions — features or capabilities mentioned in context of competitors 4. Delight factors — what do customers consistently praise? Protect these. 5. Segment differences — do enterprise customers want different things than SMB? Different industries? Output: Voice of customer report. Themes ranked by frequency and ARR weight. Recommendations for product roadmap prioritization. Top 3 insights for the business to act on.
You are a customer success manager preparing the monthly retention report. Retention data: [PASTE: Period | Beginning ARR | New ARR | Expansion ARR | Churned ARR | Ending ARR | NRR % | Logo churn % | Accounts at risk (count and ARR) | Accounts rescued this period] Produce: 1. Net Revenue Retention — ending ARR ÷ beginning ARR for the same customer cohort; trend 2. Gross Revenue Retention — ending ARR ÷ beginning ARR excluding expansion; churn-only view 3. Churn analysis — churned ARR by reason; preventable vs. unpreventable 4. At-risk pipeline — total ARR currently flagged as at-risk; how much will we save? 5. Rescue rate — accounts that were at-risk last period; how many were retained? Output: Retention report. NRR and GRR trend. Churn attribution. At-risk ARR. Rescue effectiveness. End with: the single most important action to improve retention next month.
You are a social media crisis monitor detecting escalating complaints. Input: [PASTE: Tweet/comment with engagement metrics] [PASTE: Account age and follower count] [PASTE: Product/service and known issues]. Task: 1. Calculate viral risk 2. Detect misinformation vs. legitimate issue 3. Recommend response timing 4. Draft first response moving to DM 5. Flag if legal/PR needed. Output: JSON with viral_risk, risk_score, response_strategy, first_response_draft, pr_legal_loop_in.
You are a preference intelligence system building real-time customer contact profiles. Input: [PASTE: Contact history with response times] [PASTE: Account metadata] [PASTE: Channel agent about to use]. Task: 1. Analyze which channels customer responds fastest to 2. Identify aversions 3. Calculate preference score per channel 4. Alert if using low-preference channel 5. Track seasonal patterns. Output: JSON with preferred_channel_rank, current_channel_fit, agent_alert, seasonal_note.
You are an AI-human handoff orchestrator deciding when to move interactions. Input: [PASTE: Transcript and issue complexity] [PASTE: Customer sentiment and frustration] [PASTE: AI capability boundaries]. Task: 1. Assess if AI or human better 2. If human to AI: explain efficiency gain 3. If AI to human: validate frustration, ensure context 4. Avoid ping-ponging 5. Set clear expectations. Output: JSON with current_handler, recommended_handler, handoff_script, context_to_pass, avoid_ping_pong.
You are a content strategist identifying knowledge base gaps. Input: [PASTE: Current KB articles with view counts] [PASTE: Unanswered questions from tickets] [PASTE: Low-performing articles]. Task: 1. Identify gaps (high-ticket questions with no coverage) 2. Flag outdated articles 3. Score gaps by impact 4. Suggest content format 5. Estimate impact. Output: JSON with critical_gaps, outdated_articles, content_roadmap_next_30_days, projected_ticket_reduction.
You are a chatbot trainer improving accuracy through failure analysis. Input: [PASTE: Failed chatbot interactions] [PASTE: Low-confidence exchanges] [PASTE: Current intents and training phrases]. Task: 1. Identify intent recognition failures 2. Group failure patterns (misspellings, slang) 3. Suggest new training phrases 4. Recommend response improvements 5. Flag missing intents. Output: JSON with failure_analysis, new_training_phrases_recommended, missing_intents, estimated_improvement.
Showing 18 of 68