Lab – Churn Prediction#

Welcome to your final showdown in this classification chapter:

Predicting which customers are about to ghost your business 👻

You’ll be the data-driven fortune teller of customer loyalty — except instead of a crystal ball, you’ve got pandas, sklearn, and a strong cup of coffee ☕.


🎯 Objective#

You’re working for SuperTel, a telecom company. Management suspects customers are silently slipping away to competitors. Your mission:

  • Predict who’s likely to churn

  • Identify key drivers

  • Save the company a fortune in customer retention 🍾


📦 Step 1: Load the Data#

Let’s start by importing a mock dataset — or you can load your own!

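Here's a minimal sketch that fabricates a mock SuperTel dataset with NumPy and pandas. The column names (`tenure`, `monthly_charges`, `contract_type`, `churn`) are simply the ones assumed throughout this lab; if you have real data, swap in a `pd.read_csv(...)` instead.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1_000  # number of mock customers

# Assumed column names, matching the ones used throughout this lab
df = pd.DataFrame({
    "tenure": rng.integers(1, 72, size=n),                     # months as a customer
    "monthly_charges": rng.uniform(20, 120, size=n).round(2),  # how much they pay
    "contract_type": rng.choice(
        ["Month-to-Month", "Yearly", "Two-Year"], size=n, p=[0.5, 0.3, 0.2]
    ),
})

# Make churn loosely depend on tenure and charges so the models have a signal to learn
logit = -0.04 * df["tenure"] + 0.02 * df["monthly_charges"] - 0.5
df["churn"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Or load your own data instead:
# df = pd.read_csv("your_customers.csv")

print(df.head())
```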

Typical columns might look like:

  • tenure → months as a customer

  • monthly_charges → how much they pay

  • contract_type → “Month-to-Month”, “Yearly”, etc.

  • churn → 1 if they left, 0 if they stayed


🧹 Step 2: Clean & Prepare Data#

Let’s tidy things up a bit — because no model deserves messy data. 😅

Encode any categorical columns, then split your data into training and test sets.
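
A minimal sketch, assuming the mock columns from Step 1: one-hot encode `contract_type`, then hold out a stratified test set.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# df is the DataFrame from Step 1
# One-hot encode the categorical column so sklearn can work with it
X = pd.get_dummies(df.drop(columns="churn"), columns=["contract_type"], drop_first=True)
y = df["churn"]

# Stratify on churn so train and test keep the same class balance
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
print(X_train.shape, X_test.shape)
```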


⚙️ Step 3: Train Your Classifiers#

Let’s train two contenders:

  1. Logistic Regression – the confident statistician

  2. Naive Bayes – the probabilistic wizard 🧙‍♂️
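
A minimal sketch, assuming the train/test split from Step 2. `GaussianNB` stands in for the Naive Bayes contender here, since the features are numeric after encoding.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Contender 1: Logistic Regression
log_reg = LogisticRegression(max_iter=1000)
log_reg.fit(X_train, y_train)

# Contender 2: Gaussian Naive Bayes (treats features as independent and roughly normal)
nb = GaussianNB()
nb.fit(X_train, y_train)
```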


📈 Step 4: Evaluate the Models#

We’ll use precision, recall, F1, and accuracy — because “95% accuracy” alone means nothing when all your churners are ignored. 😅
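
One way to line them up, assuming the two models fitted in Step 3 — `classification_report` prints precision, recall, F1, and accuracy in one go:

```python
from sklearn.metrics import classification_report

for name, model in [("Logistic Regression", log_reg), ("Naive Bayes", nb)]:
    y_pred = model.predict(X_test)
    print(f"=== {name} ===")
    print(classification_report(y_test, y_pred, digits=3))
```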

Which one wins? Logistic Regression usually produces better-calibrated probabilities, while Naive Bayes tends to overpredict churn — but a false alarm is usually cheaper than a missed churner.


📊 Step 5: Predict Probabilities (and Get Business Insights)#

Let’s peek under the hood and interpret the probabilities.

Want to feel fancy? Plot a calibration curve:
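
A sketch using sklearn's `calibration_curve` and matplotlib, assuming the models fitted above:

```python
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve

plt.figure()
for name, model in [("Logistic Regression", log_reg), ("Naive Bayes", nb)]:
    proba = model.predict_proba(X_test)[:, 1]  # predicted probability of churn
    frac_pos, mean_pred = calibration_curve(y_test, proba, n_bins=10)
    plt.plot(mean_pred, frac_pos, marker="o", label=name)

plt.plot([0, 1], [0, 1], linestyle="--", color="grey", label="Perfectly calibrated")
plt.xlabel("Predicted churn probability")
plt.ylabel("Observed churn rate")
plt.legend()
plt.show()
```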

A well-calibrated model = a humble model that knows what it doesn’t know. 🤓


💼 Step 6: Business Interpretation#

Once you’ve got a model you trust, share insights like:

| Feature | Effect | Action |
|---|---|---|
| Low tenure | Higher churn | Create loyalty discounts 💳 |
| High monthly charge | Higher churn | Offer flexible plans 📉 |
| Annual contract | Lower churn | Promote long-term deals 📅 |
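
One way to surface drivers like these is to inspect the Logistic Regression coefficients (a sketch, assuming the model and features from Steps 2–3):

```python
import pandas as pd

# Positive coefficient → pushes toward churn; negative → protective.
# With unscaled features, compare signs and relative ordering rather than raw magnitudes.
coefs = pd.Series(log_reg.coef_[0], index=X_train.columns).sort_values()
print(coefs)
```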

Remember: ML ≠ Magic. The real power comes from turning these insights into business strategy.


🧩 Challenge Extensions#

Try one (or all):

  1. Add new features — like “number of complaints” or “payment method.”

  2. Compare models with and without class_weight='balanced' (see the starter sketch after this list).

  3. Use SMOTE to handle class imbalance.

  4. Build a simple dashboard with churn probabilities per customer.
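
For extension 2, a starter sketch (assuming the same split as above):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

for weights in [None, "balanced"]:
    model = LogisticRegression(max_iter=1000, class_weight=weights)
    model.fit(X_train, y_train)
    score = f1_score(y_test, model.predict(X_test))
    print(f"class_weight={weights!r}: F1 on churners = {score:.3f}")
```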

Feeling ambitious? Deploy your model as a churn alert system — ping the sales team whenever a customer looks more than 80% likely to quit. 🚨


🎓 TL;DR#

| Step | What You Did |
|---|---|
| Load & clean data | Got the dataset ready |
| Train models | Logistic & Naive Bayes |
| Evaluate | Precision, recall, F1 |
| Calibrate | Checked probability accuracy |
| Interpret | Derived business insights |


💬 “Remember: predicting churn isn’t about math — it’s about keeping humans happy.” ❤️📊


🔗 Next Chapter: Optimization & Training Practicalities. Because your model’s training process could use some optimization too — unlike your caffeine intake. ☕⚙️
