
Dynamic Programming and Greedy Algorithms

(a.k.a. The “Trust Issues” Chapter of Computer Science)

Welcome to the land where past mistakes are stored, future decisions depend on memory, and “optimal” doesn’t always mean happy.

Dynamic Programming (DP) and Greedy Algorithms are both about making smart choices — but like humans, they disagree on when to think before acting.


🪞 Meet the Duo

🧩 Dynamic Programming (DP)

DP is that one friend who:

  • Writes everything down,

  • Plans ahead,

  • And refuses to repeat themselves.

DP’s motto:

“Why suffer twice for the same problem?”


💰 Greedy Algorithm

Greedy is that other friend who:

  • Sees a donut, takes it. 🍩

  • Doesn’t think about the next donut.

  • Sometimes gets lucky anyway.

Greedy’s motto:

“Take what looks good now. Future-me can deal with it.”


⚖️ The Core Difference

| Trait | Dynamic Programming | Greedy Algorithm |
| --- | --- | --- |
| Planning style | Thinks long-term | YOLO (You Only Live Once) |
| Memory | Remembers everything | Forgets immediately |
| Optimality | Always correct (if done right) | Sometimes right, sometimes a disaster |
| Analogy | Chess player | Hungry toddler |
| Example | Knapsack Problem | Coin Change with normal coins |

🧮 Step 1: The Philosophy of DP

DP is based on two magical ideas:

  1. Overlapping Subproblems – You keep solving the same smaller problem again and again.

  2. Optimal Substructure – Big problems can be built from small optimal answers.

Think of it like Netflix caching episodes — it remembers what you watched so it doesn’t have to buffer again.


🎬 Classic Example: Fibonacci Sequence

Without DP (a.k.a. recursion hell 🔥):

def fib(n):
    # Naive recursion: recomputes the same subproblems over and over
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

This version is pure chaos: exponential time, roughly O(2^n). Your computer will age before it finishes fib(50).


With DP (a.k.a. wisdom 🧘):

def fib_dp(n):
    # Bottom-up DP: compute each value once and store it
    dp = [0, 1]
    for i in range(2, n+1):
        dp.append(dp[i-1] + dp[i-2])
    return dp[n]

Now it’s lightning fast ⚡ — because you finally learned to remember your past mistakes.


💼 Real-World Analogy

Dynamic Programming is like:

  • Remembering all your exes’ mistakes so you don’t date the same type again.

  • Or caching emotional pain for faster decision-making next time. 💔💾


🧩 Step 2: The Greedy Way

Greedy doesn’t care about the past. It just grabs the best option right now.

Example: Coin Change (Greedy version)

coins = [25, 10, 5, 1]  # must be in descending order
def greedy_change(amount):
    # Take as many of the largest coin as possible, then move on
    count = 0
    for coin in coins:
        count += amount // coin
        amount %= coin
    return count

For normal coins (U.S. currency), it works perfectly. For weird coins (like [9, 6, 1]), it fails miserably.

Moral: Greedy works only when local choices lead to global happiness. (So basically… never in relationships.) 😬
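To see that failure concretely, here is a small sketch comparing greedy against a minimum-coin DP table. The coin set [9, 6, 1] and amount 12 are made-up illustration values; this variant takes the coins as a parameter rather than using a global list:

```python
def greedy_change(coins, amount):
    # Greedy: always grab the biggest coin that fits
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count

def dp_change(coins, amount):
    # dp[a] = fewest coins needed to make amount a
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a:
                dp[a] = min(dp[a], dp[a - coin] + 1)
    return dp[amount]

print(greedy_change([9, 6, 1], 12))  # 4 coins: 9 + 1 + 1 + 1
print(dp_change([9, 6, 1], 12))      # 2 coins: 6 + 6
```

Greedy grabs the 9 and gets stuck making change with pennies; DP calmly finds 6 + 6.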


💡 DP vs Greedy in Action

| Problem | DP Solution | Greedy Solution |
| --- | --- | --- |
| Fibonacci | Perfect | Laughably bad |
| Shortest Path | Bellman-Ford (DP) | Dijkstra (greedy, backed by a proof) |
| 0/1 Knapsack | DP = exact | Greedy = approximate |
| Job Scheduling | DP = optimal | Greedy = good enough |
| Love life | Overthinks | Acts on impulse |

🧠 How to Build a DP Solution

  1. Define the State → What does dp[i] represent?

  2. Find the Recurrence → How does it relate to previous states?

  3. Base Case → Where does it all begin?

  4. Order of Solving → Bottom-up or top-down (with memoization).

  5. Return Answer → Usually dp[n], unless plot twist.
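The recipe above also covers the top-down route from step 4. As a tiny illustration, here is a memoized Fibonacci sketch using `functools.lru_cache`, which stores each state the first time it is computed:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Base case (step 3): fib(0) = 0, fib(1) = 1
    if n <= 1:
        return n
    # Recurrence (step 2): each state depends on the two before it
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025
```

Same recurrence as the naive version, but the cache turns exponential time into linear.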


🏋️‍♂️ Example: 0/1 Knapsack Problem

“You’re robbing a store… but politely. You can only carry W weight.”

def knapsack(weights, values, W):
    n = len(weights)
    # dp[i][w] = best value using the first i items with capacity w
    dp = [[0]*(W+1) for _ in range(n+1)]

    for i in range(1, n+1):
        for w in range(W+1):
            if weights[i-1] <= w:
                # Either skip item i, or take it and add its value
                dp[i][w] = max(
                    dp[i-1][w],
                    values[i-1] + dp[i-1][w - weights[i-1]]
                )
            else:
                dp[i][w] = dp[i-1][w]
    return dp[n][W]

Time complexity: O(nW)
Brain complexity: O(∞) 🤯
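As a side note, the 2-D table can be squeezed into a single row: iterate capacities downward so each item is counted at most once. A minimal sketch of that space optimization (same recurrence, O(W) memory; the weights, values, and capacity are made-up test data):

```python
def knapsack_1d(weights, values, W):
    # dp[w] = best value achievable with capacity w
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        # Go downward so dp[w - wt] still refers to the previous item row,
        # enforcing the 0/1 (take-at-most-once) constraint
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]

print(knapsack_1d([1, 3, 4], [15, 20, 30], 5))  # 45 (items of weight 1 and 4)
```

Iterating upward instead would silently turn this into the unbounded knapsack, where each item can be taken repeatedly.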


💰 Example: Greedy Algorithm — Activity Selection

“You have meetings to attend. You want to attend as many as possible without overlap.”

def activity_selection(activities):
    activities.sort(key=lambda x: x[1])  # sort by finish time
    last_end = 0  # assumes start times are non-negative
    count = 0
    for start, end in activities:
        if start >= last_end:  # no overlap with the last chosen meeting
            count += 1
            last_end = end
    return count

Greedy wins here because every small choice (earliest finish time) leads to a global optimum.

DP would also work — but Greedy finishes in half the time and still makes it to happy hour. 🍻
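A quick sanity check with made-up meeting times as (start, end) pairs; the function is repeated so the snippet runs on its own:

```python
def activity_selection(activities):
    activities.sort(key=lambda x: x[1])  # sort by finish time
    last_end = 0
    count = 0
    for start, end in activities:
        if start >= last_end:
            count += 1
            last_end = end
    return count

meetings = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
            (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
print(activity_selection(meetings))  # 4, e.g. (1,4), (5,7), (8,11), (12,16)
```

Finishing early leaves the most room for everything that comes after, which is exactly why this particular greedy choice is safe.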


🧘 DP vs Greedy: The Life Lesson

| Trait | DP Thinker | Greedy Thinker |
| --- | --- | --- |
| Personality | Careful, analytical | Impulsive, confident |
| Memory use | High | Low |
| Accuracy | Always right | Often right, sometimes chaotic |
| Analogy | A chess grandmaster | A golden retriever with snacks |
| Best used for | Optimization with dependencies | Optimization with independence |

🧩 When to Use Which

Use DP if:

  • Future depends on previous decisions.

  • You feel déjà vu while coding.

  • You’re okay with slightly more RAM crying.

Use Greedy if:

  • Each step can be made without regret.

  • Local decisions lead to global success.

  • You’re in a rush. (Or the interview is almost over.)


💬 Final Thoughts

Dynamic Programming and Greedy Algorithms aren’t enemies — they’re just two different kinds of geniuses:

  • DP: “I’ve thought about this 10 moves ahead.”

  • Greedy: “I made a decision. It felt right.”

Both have their place in coding… and in life. 😎

So next time you face a problem, ask yourself:

“Do I plan for the future or just grab the best donut now?” 🍩💭

Either way, as long as it runs in O(n log n), you’re winning. 🏁



Exercises

Exercise