SaaS Customer Success Engineering: Health Scores, Churn Signals, and Automated Playbooks
Build data-driven customer success infrastructure: composite health scores, ML-powered churn prediction, automated playbook triggers, and CSM tooling in TypeScript and PostgreSQL.
Most SaaS teams treat customer success as a human process. The best ones treat it as an engineering problem. A customer health score is a data model. A churn signal is an event stream. An expansion trigger is a query. When you build these as code, your CS team scales without headcount growing in lockstep with revenue.
This post covers the engineering behind customer success infrastructure: how to define and compute health scores, detect churn signals early, automate playbook execution, and give CSMs tooling that surfaces the right accounts at the right time.
The Health Score Model
A composite health score aggregates multiple product signals into a single number (0–100). The goal is not precision but triage priority. CSMs can't deeply work 300 accounts; they need to know which 20 matter this week.
Database Schema
-- Health score snapshots (time-series: never update, always insert)
CREATE TABLE customer_health_scores (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
account_id UUID NOT NULL REFERENCES accounts(id) ON DELETE CASCADE,
scored_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
overall SMALLINT NOT NULL CHECK (overall BETWEEN 0 AND 100),
-- Component scores (0–100 each)
engagement SMALLINT NOT NULL, -- login freq, feature adoption
adoption SMALLINT NOT NULL, -- % features used vs plan tier
support SMALLINT NOT NULL, -- open tickets, CSAT, severity
growth SMALLINT NOT NULL, -- seat count trend, usage trend
nps SMALLINT, -- 0–100 NPS proxy (may be null)
-- Raw signals stored for debugging score changes
signals JSONB NOT NULL DEFAULT '{}',
-- Trend vs. previous score
delta SMALLINT, -- positive = improving
risk_tier TEXT NOT NULL CHECK (risk_tier IN ('healthy', 'neutral', 'at-risk', 'critical'))
);
CREATE INDEX idx_health_account_time ON customer_health_scores (account_id, scored_at DESC);
CREATE INDEX idx_health_risk ON customer_health_scores (risk_tier, scored_at DESC)
WHERE risk_tier IN ('at-risk', 'critical');
-- Latest score view (materialized, refreshed hourly)
CREATE MATERIALIZED VIEW account_health_current AS
SELECT DISTINCT ON (account_id) *
FROM customer_health_scores
ORDER BY account_id, scored_at DESC;
CREATE UNIQUE INDEX ON account_health_current (account_id);
-- Playbook execution log
CREATE TABLE playbook_executions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
account_id UUID NOT NULL REFERENCES accounts(id),
playbook_id TEXT NOT NULL,
triggered_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
trigger_type TEXT NOT NULL, -- 'health_drop', 'milestone', 'risk_signal'
trigger_data JSONB NOT NULL,
status TEXT NOT NULL DEFAULT 'pending'
CHECK (status IN ('pending', 'in-progress', 'completed', 'cancelled')),
completed_at TIMESTAMPTZ,
outcome JSONB
);
CREATE INDEX idx_playbook_account ON playbook_executions (account_id, triggered_at DESC);
CREATE INDEX idx_playbook_pending ON playbook_executions (status, triggered_at) WHERE status = 'pending';
Health Score Calculator
// src/services/health-score/calculator.ts
import { db } from '../../lib/db';
import { fetchSignals } from './signals';
interface SignalWeights {
engagement: number; // login frequency, DAU/MAU
adoption: number; // feature breadth
support: number; // ticket health
growth: number; // seat/usage trend
nps: number; // satisfaction signal
}
// Weights must sum to 1.0
const WEIGHTS: SignalWeights = {
engagement: 0.30,
adoption: 0.25,
support: 0.20,
growth: 0.15,
nps: 0.10,
};
export interface RawSignals {
// Engagement
dau_mau_ratio: number; // 0–1
logins_last_30d: number;
days_active_last_30d: number;
// Adoption
features_used: number;
features_available: number;
// Support
open_critical_tickets: number;
open_high_tickets: number;
avg_csat_last_90d: number | null; // 1–5
// Growth
seat_count_delta_90d: number; // positive = growth
api_calls_trend: number; // +1 growing, 0 stable, -1 declining
// NPS
latest_nps_score: number | null; // 0–10
}
function scoreEngagement(signals: RawSignals): number {
const dauMauScore = signals.dau_mau_ratio * 40; // max 40
const loginScore = Math.min(signals.logins_last_30d / 20, 1) * 30; // max 30
const activeScore = Math.min(signals.days_active_last_30d / 22, 1) * 30; // max 30
return Math.round(dauMauScore + loginScore + activeScore);
}
function scoreAdoption(signals: RawSignals): number {
if (signals.features_available === 0) return 50;
const ratio = signals.features_used / signals.features_available;
// Tiered: 0–25% = 0–40, 25–50% = 40–70, 50–75% = 70–90, 75–100% = 90–100
if (ratio < 0.25) return Math.round(ratio / 0.25 * 40);
if (ratio < 0.50) return Math.round(40 + (ratio - 0.25) / 0.25 * 30);
if (ratio < 0.75) return Math.round(70 + (ratio - 0.50) / 0.25 * 20);
return Math.round(90 + (ratio - 0.75) / 0.25 * 10);
}
function scoreSupport(signals: RawSignals): number {
let score = 100;
score -= signals.open_critical_tickets * 25; // each critical ticket = -25
score -= signals.open_high_tickets * 10; // each high ticket = -10
if (signals.avg_csat_last_90d !== null) {
// CSAT < 3 = penalty, > 4.5 = bonus
if (signals.avg_csat_last_90d < 3) score -= 20;
else if (signals.avg_csat_last_90d < 3.5) score -= 10;
else if (signals.avg_csat_last_90d > 4.5) score += 5;
}
return Math.max(0, Math.min(100, score));
}
function scoreGrowth(signals: RawSignals): number {
let score = 50; // neutral baseline
// Seat count trend
if (signals.seat_count_delta_90d > 0) score += Math.min(signals.seat_count_delta_90d * 2, 30);
else if (signals.seat_count_delta_90d < 0) score += Math.max(signals.seat_count_delta_90d * 3, -40);
// API usage trend
score += signals.api_calls_trend * 10; // +10, 0, or -10
return Math.max(0, Math.min(100, Math.round(score)));
}
function scoreNps(signals: RawSignals): number {
if (signals.latest_nps_score === null) return 50; // neutral when no data
// NPS 0–6 = detractor, 7–8 = passive, 9–10 = promoter
if (signals.latest_nps_score <= 6) return 20;
if (signals.latest_nps_score <= 8) return 55;
return 90;
}
function computeRiskTier(overall: number, signals: RawSignals): string {
// Override tier for critical signals regardless of overall score
if (signals.open_critical_tickets > 0 && overall < 60) return 'critical';
if (overall >= 75) return 'healthy';
if (overall >= 50) return 'neutral';
if (overall >= 30) return 'at-risk';
return 'critical';
}
export async function computeAndStoreHealthScore(
accountId: string
): Promise<{ overall: number; riskTier: string }> {
const signals = await fetchSignals(accountId);
const components = {
engagement: scoreEngagement(signals),
adoption: scoreAdoption(signals),
support: scoreSupport(signals),
growth: scoreGrowth(signals),
nps: scoreNps(signals),
};
const overall = Math.round(
components.engagement * WEIGHTS.engagement +
components.adoption * WEIGHTS.adoption +
components.support * WEIGHTS.support +
components.growth * WEIGHTS.growth +
components.nps * WEIGHTS.nps
);
// Get previous score for delta
const previous = await db.customerHealthScore.findFirst({
where: { accountId },
orderBy: { scoredAt: 'desc' },
select: { overall: true },
});
const delta = previous ? overall - previous.overall : null;
const riskTier = computeRiskTier(overall, signals);
await db.customerHealthScore.create({
data: {
accountId,
overall,
...components,
signals: signals as object,
delta,
riskTier,
},
});
// Refresh materialized view (async, non-blocking)
db.$executeRaw`REFRESH MATERIALIZED VIEW CONCURRENTLY account_health_current`.catch(
(err) => console.error('Materialized view refresh failed:', err)
);
return { overall, riskTier };
}
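The roll-up and tier logic above can be exercised standalone. A minimal self-contained sketch, mirroring the weights and `computeRiskTier` thresholds; the sample component values are illustrative, not real account data:

```typescript
// Standalone sketch of the weighted roll-up and tier mapping used above.
const WEIGHTS = { engagement: 0.30, adoption: 0.25, support: 0.20, growth: 0.15, nps: 0.10 };

type Components = Record<keyof typeof WEIGHTS, number>;

function rollUp(components: Components): number {
  // Weighted sum of 0-100 component scores; weights sum to 1.0
  return Math.round(
    (Object.keys(WEIGHTS) as (keyof typeof WEIGHTS)[]).reduce(
      (sum, k) => sum + components[k] * WEIGHTS[k],
      0
    )
  );
}

function riskTier(overall: number, openCriticalTickets: number): string {
  // Critical-ticket override mirrors computeRiskTier above
  if (openCriticalTickets > 0 && overall < 60) return 'critical';
  if (overall >= 75) return 'healthy';
  if (overall >= 50) return 'neutral';
  if (overall >= 30) return 'at-risk';
  return 'critical';
}

// Example: strong engagement and support, weak growth
const sample: Components = { engagement: 80, adoption: 70, support: 90, growth: 40, nps: 55 };
const overall = rollUp(sample);
console.log(overall, riskTier(overall, 0)); // 71 neutral
```

Note how the critical-ticket override matters: the same account at overall 55 with one open critical ticket lands in 'critical' rather than 'neutral'.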
Fetching Raw Signals
// src/services/health-score/signals.ts
import { db } from '../../lib/db';
import type { RawSignals } from './calculator';
export async function fetchSignals(accountId: string): Promise<RawSignals> {
const now = new Date();
const days30 = new Date(now.getTime() - 30 * 86_400_000);
const days90 = new Date(now.getTime() - 90 * 86_400_000);
const [engagement, support, growth, nps, plan] = await Promise.all([
// Engagement signals
db.$queryRaw<{ dau_mau: number; logins: number; active_days: number }[]>`
SELECT
COUNT(DISTINCT CASE WHEN created_at >= NOW() - INTERVAL '1 day' THEN user_id END)::float /
NULLIF(COUNT(DISTINCT CASE WHEN created_at >= NOW() - INTERVAL '30 days' THEN user_id END), 0) AS dau_mau,
COUNT(*) FILTER (WHERE event_type = 'login' AND created_at >= ${days30}) AS logins,
COUNT(DISTINCT DATE(created_at)) FILTER (WHERE created_at >= ${days30}) AS active_days
FROM analytics_events
WHERE account_id = ${accountId}
`,
// Support signals
db.$queryRaw<{ critical: number; high: number; avg_csat: number | null }[]>`
SELECT
COUNT(*) FILTER (WHERE severity = 'critical' AND status NOT IN ('resolved', 'closed')) AS critical,
COUNT(*) FILTER (WHERE severity = 'high' AND status NOT IN ('resolved', 'closed')) AS high,
AVG(csat_rating) FILTER (WHERE csat_rating IS NOT NULL AND created_at >= ${days90}) AS avg_csat
FROM support_tickets
WHERE account_id = ${accountId}
`,
// Growth signals
db.$queryRaw<{ seat_delta: number; api_trend: number }[]>`
WITH api_periods AS (
SELECT
SUM(CASE WHEN period_start >= NOW() - INTERVAL '30 days' THEN call_count ELSE 0 END) AS recent,
SUM(CASE WHEN period_start BETWEEN NOW() - INTERVAL '90 days' AND NOW() - INTERVAL '60 days' THEN call_count ELSE 0 END) AS older
FROM api_usage_daily
WHERE account_id = ${accountId}
)
SELECT
(SELECT COUNT(*) FROM account_seats WHERE account_id = ${accountId} AND is_active = true)
- (SELECT COUNT(*) FROM account_seats WHERE account_id = ${accountId} AND is_active = true AND created_at < ${days90}) AS seat_delta,
CASE
WHEN recent > older * 1.1 THEN 1
WHEN recent < older * 0.9 THEN -1
ELSE 0
END AS api_trend
FROM api_periods
`,
// NPS
db.npsResponse.findFirst({
where: { accountId, respondedAt: { gte: days90 } },
orderBy: { respondedAt: 'desc' },
select: { score: true },
}),
// Plan features
db.account.findUniqueOrThrow({
where: { id: accountId },
select: { plan: { select: { featureCount: true } }, featuresEnabled: true },
}),
]);
const eng = engagement[0];
const sup = support[0];
const grw = growth[0];
return {
dau_mau_ratio: eng?.dau_mau ?? 0,
logins_last_30d: Number(eng?.logins ?? 0),
days_active_last_30d: Number(eng?.active_days ?? 0),
features_used: plan.featuresEnabled.length,
features_available: plan.plan?.featureCount ?? 1,
open_critical_tickets: Number(sup?.critical ?? 0),
open_high_tickets: Number(sup?.high ?? 0),
avg_csat_last_90d: sup?.avg_csat ?? null,
seat_count_delta_90d: Number(grw?.seat_delta ?? 0),
api_calls_trend: Number(grw?.api_trend ?? 0),
latest_nps_score: nps?.score ?? null,
};
}
Automated Playbook Engine
A playbook is a sequence of actions triggered by a condition. The engine evaluates triggers on a schedule and routes work to CSMs, sends automated emails, or creates CRM tasks.
// src/services/playbooks/engine.ts
import { db } from '../../lib/db';
import { sendEmail } from '../email';
import { createCrmTask } from '../crm';
import type { RawSignals } from '../health-score/calculator';
interface PlaybookTrigger {
type: 'health_drop' | 'milestone' | 'risk_signal' | 'inactivity';
condition: (data: TriggerData) => boolean;
}
interface TriggerData {
accountId: string;
currentScore: number;
previousScore: number | null;
riskTier: string;
signals: Record<string, unknown>;
}
interface PlaybookStep {
type: 'email' | 'crm_task' | 'slack_notify' | 'wait';
config: Record<string, unknown>;
delayHours?: number;
}
interface Playbook {
id: string;
name: string;
trigger: PlaybookTrigger;
steps: PlaybookStep[];
cooldownDays: number; // Don't re-trigger within N days
}
const PLAYBOOKS: Playbook[] = [
{
id: 'health-drop-critical',
name: 'Critical Health Drop Response',
trigger: {
type: 'health_drop',
condition: (d) => d.riskTier === 'critical' && (d.previousScore ?? 100) > 40,
},
steps: [
{
type: 'slack_notify',
config: {
channel: '#cs-alerts',
message: '🚨 Account {{account.name}} dropped to critical health ({{score}})',
},
},
{
type: 'crm_task',
config: {
title: 'Urgent: Critical health score – schedule EBR',
priority: 'high',
dueInDays: 1,
},
},
{
type: 'email',
config: {
template: 'executive-outreach',
from: 'cs-team@viprasol.com',
subject: 'Checking in – how can we help?',
},
delayHours: 4,
},
],
cooldownDays: 7,
},
{
id: 'low-adoption-nudge',
name: 'Low Adoption Nudge',
trigger: {
type: 'risk_signal',
condition: (d) => {
const signals = d.signals as RawSignals;
return (
d.riskTier !== 'critical' && // Not already in crisis
signals.features_used / (signals.features_available || 1) < 0.30
);
},
},
steps: [
{
type: 'email',
config: {
template: 'feature-discovery',
subject: 'You might be missing {{unused_feature_count}} features in your plan',
},
},
{
type: 'crm_task',
config: {
title: 'Schedule feature adoption call',
priority: 'medium',
dueInDays: 5,
},
delayHours: 48,
},
],
cooldownDays: 30,
},
{
id: '90-day-milestone',
name: '90-Day Onboarding Milestone',
trigger: {
type: 'milestone',
condition: (d) => {
const signals = d.signals as Record<string, unknown>;
return (signals.days_since_signup as number) === 90;
},
},
steps: [
{
type: 'email',
config: {
template: '90-day-review',
subject: '90 days in – let\'s review your progress',
},
},
],
cooldownDays: 365,
},
];
async function isOnCooldown(
accountId: string,
playbookId: string,
cooldownDays: number
): Promise<boolean> {
const recentExecution = await db.playbookExecution.findFirst({
where: {
accountId,
playbookId,
triggeredAt: {
gte: new Date(Date.now() - cooldownDays * 86_400_000),
},
},
});
return recentExecution !== null;
}
export async function evaluatePlaybooks(
accountId: string,
triggerData: TriggerData
): Promise<string[]> {
const triggered: string[] = [];
for (const playbook of PLAYBOOKS) {
if (!playbook.trigger.condition(triggerData)) continue;
if (await isOnCooldown(accountId, playbook.id, playbook.cooldownDays)) continue;
const execution = await db.playbookExecution.create({
data: {
accountId,
playbookId: playbook.id,
triggerType: playbook.trigger.type,
triggerData: triggerData as object,
status: 'in-progress',
},
});
// Queue each step with delay
for (const step of playbook.steps) {
await schedulePlaybookStep(execution.id, step, accountId);
}
await db.playbookExecution.update({
where: { id: execution.id },
data: { status: 'completed', completedAt: new Date() },
});
triggered.push(playbook.id);
}
return triggered;
}
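The `schedulePlaybookStep` call above is assumed to enqueue each step into a durable job queue, which is out of scope here. Its scheduling math, though, can be sketched as a pure function. One assumption worth flagging: `delayHours` is treated as relative to the trigger time, not cumulative across steps, matching how it is declared on `PlaybookStep`:

```typescript
// Sketch: compute when each playbook step should run, given per-step delays.
// Assumes each delayHours is relative to the trigger time (not cumulative).
interface StepLike {
  type: string;
  delayHours?: number;
}

function scheduleTimes(triggeredAt: Date, steps: StepLike[]): Date[] {
  // Steps without a delay run immediately at the trigger time
  return steps.map(
    (step) => new Date(triggeredAt.getTime() + (step.delayHours ?? 0) * 3_600_000)
  );
}

// Mirrors the critical-health-drop playbook: two immediate steps, one delayed email
const t0 = new Date('2025-01-01T00:00:00Z');
const times = scheduleTimes(t0, [
  { type: 'slack_notify' },
  { type: 'crm_task' },
  { type: 'email', delayHours: 4 },
]);
console.log(times.map((d) => d.toISOString()));
```

A real `schedulePlaybookStep` would persist these run times (e.g. in a jobs table or queue) so delayed steps survive process restarts.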
🚀 SaaS MVP in 8 Weeks – Seriously
We have launched 50+ SaaS platforms. Multi-tenant architecture, Stripe billing, auth, role-based access, and cloud deployment – all handled by one senior team.
- Week 1–2: Architecture design + wireframes
- Week 3–6: Core features built + tested
- Week 7–8: Launch-ready on AWS/Vercel with CI/CD
- Post-launch: Maintenance plans from month 3
Churn Prediction with Logistic Regression
For early churn signals, a lightweight logistic regression retrained weekly on historical data often outperforms rule-based scoring, and its coefficients remain interpretable enough for CSMs to act on.
# scripts/train_churn_model.py
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, classification_report
import pickle
import os
import psycopg2
# Features that predict churn within 90 days
FEATURES = [
'avg_logins_per_week',
'features_adoption_pct',
'open_support_tickets',
'avg_csat',
'seat_growth_rate',
'days_since_last_login',
'nps_score_imputed',
'contract_days_remaining',
'plan_tier_numeric',
]
def load_training_data() -> pd.DataFrame:
conn = psycopg2.connect(dsn=os.environ['DATABASE_URL'])
query = """
SELECT
a.id,
AVG(e.login_count) FILTER (WHERE e.week_start >= a.created_at) AS avg_logins_per_week,
s.features_adoption_pct,
s.open_support_tickets,
COALESCE(s.avg_csat, 3.5) AS avg_csat,
s.seat_growth_rate,
EXTRACT(DAY FROM (NOW() - MAX(e.last_login_date))) AS days_since_last_login,
COALESCE(n.score * 10, 50) AS nps_score_imputed,
EXTRACT(DAY FROM (sub.current_period_end - NOW())) AS contract_days_remaining,
CASE s.plan_tier WHEN 'starter' THEN 1 WHEN 'growth' THEN 2 WHEN 'enterprise' THEN 3 ELSE 1 END AS plan_tier_numeric,
-- Label: churned within 90 days of this snapshot
CASE WHEN sub.canceled_at IS NOT NULL
AND sub.canceled_at <= s.snapshot_date + INTERVAL '90 days'
THEN 1 ELSE 0 END AS churned
FROM accounts a
JOIN account_health_snapshots s ON a.id = s.account_id
LEFT JOIN weekly_engagement e ON a.id = e.account_id
LEFT JOIN nps_responses n ON a.id = n.account_id
JOIN subscriptions sub ON a.id = sub.account_id
GROUP BY a.id, s.features_adoption_pct, s.open_support_tickets, s.avg_csat,
s.seat_growth_rate, n.score, sub.current_period_end, sub.canceled_at,
s.plan_tier, s.snapshot_date
HAVING COUNT(e.week_start) >= 4 -- Minimum 4 weeks of data
"""
return pd.read_sql(query, conn)
def train_model():
df = load_training_data()
X = df[FEATURES]
y = df['churned']
print(f"Training on {len(df)} accounts, {y.mean():.1%} churn rate")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
model = LogisticRegression(class_weight='balanced', C=0.5, max_iter=1000)
model.fit(X_train_scaled, y_train)
y_proba = model.predict_proba(X_test_scaled)[:, 1]
auc = roc_auc_score(y_test, y_proba)
print(f"AUC-ROC: {auc:.4f}")
print(classification_report(y_test, model.predict(X_test_scaled)))
# Save model + scaler together
with open('/opt/models/churn_model.pkl', 'wb') as f:
pickle.dump({'model': model, 'scaler': scaler, 'features': FEATURES}, f)
return auc
if __name__ == '__main__':
train_model()
// src/services/churn-prediction/scorer.ts
import { execFileSync } from 'child_process';
import { db } from '../../lib/db';
interface ChurnScore {
accountId: string;
probability: number; // 0–1
riskLabel: 'low' | 'medium' | 'high' | 'critical';
topFactors: string[];
}
export async function getChurnProbability(accountId: string): Promise<ChurnScore> {
// Call the Python scorer as a subprocess (or a REST microservice in production).
// execFileSync with an argument array avoids shell injection via accountId.
const result = JSON.parse(
execFileSync('python3', ['/opt/scripts/score_account.py', '--account-id', accountId], {
encoding: 'utf-8',
})
) as { probability: number; top_factors: string[] };
const label =
result.probability > 0.7 ? 'critical' :
result.probability > 0.5 ? 'high' :
result.probability > 0.25 ? 'medium' : 'low';
await db.churnPrediction.upsert({
where: { accountId },
update: { probability: result.probability, riskLabel: label, updatedAt: new Date() },
create: {
accountId,
probability: result.probability,
riskLabel: label,
topFactors: result.top_factors,
},
});
return {
accountId,
probability: result.probability,
riskLabel: label,
topFactors: result.top_factors,
};
}
CSM Dashboard Query
-- Weekly CSM priority queue: accounts needing attention
WITH churn_risk AS (
SELECT account_id, probability, risk_label
FROM churn_predictions
WHERE updated_at >= NOW() - INTERVAL '7 days'
),
recent_health AS (
SELECT DISTINCT ON (account_id)
account_id, overall, risk_tier, delta, scored_at
FROM customer_health_scores
ORDER BY account_id, scored_at DESC
),
pending_tasks AS (
SELECT account_id, COUNT(*) AS pending_playbooks
FROM playbook_executions
WHERE status = 'pending'
GROUP BY account_id
)
SELECT
a.id,
a.name,
a.csm_owner,
rh.overall AS health_score,
rh.risk_tier,
rh.delta AS score_change_7d,
cr.probability AS churn_probability,
cr.risk_label AS churn_risk_label,
pt.pending_playbooks,
sub.mrr,
sub.current_period_end AS renewal_date,
EXTRACT(DAY FROM sub.current_period_end - NOW()) AS days_to_renewal
FROM accounts a
JOIN recent_health rh ON a.id = rh.account_id
LEFT JOIN churn_risk cr ON a.id = cr.account_id
LEFT JOIN pending_tasks pt ON a.id = pt.account_id
JOIN subscriptions sub ON a.id = sub.account_id AND sub.status = 'active'
WHERE rh.risk_tier IN ('at-risk', 'critical')
OR cr.probability > 0.4
OR sub.current_period_end BETWEEN NOW() AND NOW() + INTERVAL '60 days'
ORDER BY
(rh.overall + (1 - COALESCE(cr.probability, 0)) * 100) / 2 ASC, -- lowest combined score first
sub.mrr DESC -- prioritize high-value accounts within same risk tier
LIMIT 50;
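When the queue is sorted application-side instead of in SQL, the same blended ordering from that `ORDER BY` can be replicated directly. A hedged sketch; the row shape and field names here are illustrative, not a real schema:

```typescript
// Mirror of the dashboard's priority ordering: lowest blended score first,
// then highest MRR within the same blended score.
interface QueueRow {
  name: string;
  healthScore: number;             // 0-100
  churnProbability: number | null; // 0-1, null when no prediction exists
  mrr: number;
}

function blendedScore(row: QueueRow): number {
  // (health + (1 - churn_prob) * 100) / 2, matching the SQL; null churn = no risk
  return (row.healthScore + (1 - (row.churnProbability ?? 0)) * 100) / 2;
}

function sortQueue(rows: QueueRow[]): QueueRow[] {
  return [...rows].sort(
    (a, b) => blendedScore(a) - blendedScore(b) || b.mrr - a.mrr
  );
}

const queue = sortQueue([
  { name: 'Acme', healthScore: 40, churnProbability: 0.6, mrr: 2000 },
  { name: 'Globex', healthScore: 40, churnProbability: 0.6, mrr: 5000 },
  { name: 'Initech', healthScore: 80, churnProbability: 0.1, mrr: 9000 },
]);
console.log(queue.map((r) => r.name)); // [ 'Globex', 'Acme', 'Initech' ]
```

Note the tie-break: Globex and Acme share the same blended score, so the higher-MRR account surfaces first, exactly as in the SQL.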
💡 The Difference Between a SaaS Demo and a SaaS Business
Anyone can build a demo. We build SaaS products that handle real load, real users, and real payments – with architecture that does not need to be rewritten at 1,000 users.
- Multi-tenant PostgreSQL with row-level security
- Stripe subscriptions, usage billing, annual plans
- SOC2-ready infrastructure from day one
- We own zero equity – you own everything
Cost Reference: CS Engineering by Scale
| Stage | Infrastructure | One-time Build | Monthly Ops |
|---|---|---|---|
| Early SaaS (< 200 accounts) | Spreadsheets + basic SQL views | $8K–15K | $0 (no extra infra) |
| Growth SaaS (200–2K accounts) | Health score pipeline + playbooks | $25K–45K | $500–1K |
| Scale SaaS (2K–20K accounts) | ML churn model + full automation | $60K–100K | $2K–5K |
| Enterprise CS platform | Real-time scoring + CSM tooling | $120K–250K | $8K–20K |
ROI benchmark: At $5K average ACV, preventing 10 churns/year pays back a $50K build investment in year one.
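The payback benchmark is straightforward arithmetic, and worth sanity-checking against your own ACV:

```typescript
// Payback check for the benchmark above: build cost vs. ARR saved per year.
function paybackYears(buildCost: number, acv: number, churnsPreventedPerYear: number): number {
  const savedArrPerYear = acv * churnsPreventedPerYear;
  return buildCost / savedArrPerYear;
}

// $50K build, $5K ACV, 10 prevented churns/year -> pays back in 1 year
console.log(paybackYears(50_000, 5_000, 10)); // 1
```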
See Also
- SaaS Metrics Benchmarks: ARR, Churn, NRR, and CAC
- SaaS Churn Prediction: ML Models and Cohort Analysis
- Product Analytics Engineering: Tracking, Funnels, and Retention
- SaaS Dunning Management: Recovering Failed Payments
- Startup Growth Metrics: From MRR to Net Dollar Retention
Working With Viprasol
Building a SaaS product and need customer success infrastructure that tells your team who's about to churn before they do? We design and implement health score pipelines, churn prediction models, and automated playbook engines that plug into your existing CRM and support tooling.
Talk to our team → | Explore our SaaS engineering services →
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.
Building a SaaS Product?
We've helped launch 50+ SaaS platforms. Let's build yours – fast.
Free consultation โข No commitment โข Response within 24 hours
Want to add AI automation to your SaaS product?
Viprasol builds custom AI agent crews that plug into any SaaS workflow – automating repetitive tasks, qualifying leads, and responding across every channel your customers use.