recently, i taught my students about greedy algorithms: specifically, how they differ from dynamic programming algorithms.
a lot of names and definitions in algorithms feel weirdly pretentious. for example, the “path relaxation lemma” basically says that if you relax, in order, every edge that makes up the shortest path, you end up with the shortest path. my calculus teacher in high school said he wished he had been born earlier because he thought that mathematicians got to stick their names on the most obvious of theorems; he said he could’ve probably come up with one of those too. i never really felt that way in college because everything i learned seemed so intensely complicated, but i think the path relaxation lemma came close. with greedy algorithms, the name so perfectly suits the strategy: i feel greedy when i come up with a greedy algorithm.
the way i described it to my students was that in a dynamic programming algorithm, at every step along the way, you make your next choice by evaluating how every option plays out to its end and choosing the best one based on that. in a greedy algorithm, you have only tunnel vision: you don’t get the benefit of seeing how every option plays out, so you have to choose what’s best for you right now. you are, quite literally, making the greedy choice. dynamic programming algorithms are a lot more computationally expensive, but they offer strong guarantees: you have to see far into the future, and in exchange you always make the optimal choice. greedy algorithms are much faster. you have fewer comparisons to make and can’t see into the future, so you might as well make the best choice based on the information you have. sometimes a greedy algorithm produces the overall optimal solution too; this is the best-case scenario, where you are computationally efficient and still achieve the best result. the takeaway i gave my students was that we always look for optimal greedy solutions because they are easier and faster; that’s why we choose to study greedy algorithms.
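to make the contrast concrete, here’s a small sketch (my illustration, not part of the lecture) using the classic coin change problem: fewest coins summing to a target. with some coin systems the greedy choice happens to be optimal; with others it quietly isn’t.

```python
def greedy_coins(coins, amount):
    """tunnel vision: always grab the largest coin that still fits."""
    count = 0
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            count += 1
    return count if amount == 0 else None  # greedy can dead-end

def dp_coins(coins, amount):
    """play every option out to its end: a table of best answers for
    every sub-amount, built up from 0 to the target."""
    best = [0] + [float("inf")] * amount
    for sub in range(1, amount + 1):
        for coin in coins:
            if coin <= sub:
                best[sub] = min(best[sub], best[sub - coin] + 1)
    return best[amount] if best[amount] != float("inf") else None

# with us-style coins, the greedy choice happens to be optimal:
print(greedy_coins([1, 5, 10, 25], 63), dp_coins([1, 5, 10, 25], 63))  # 6 6

# with coins {1, 3, 4} and amount 6, greedy takes 4+1+1 (3 coins)
# while dp finds 3+3 (2 coins):
print(greedy_coins([1, 3, 4], 6), dp_coins([1, 3, 4], 6))  # 3 2
```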
i love dogs. however, whenever people ask me if i’d ever get a pet dog, my answer is always an immediate no. the lifespan of a dog is factually much shorter than a human’s, and i can’t fathom having to deal with that loss. this is the answer and reasoning i’ve given people for as long as i can remember, but my greedy algorithm recitation made me start thinking about this a little more. somehow i feel like i’m trying to dynamically program my way out of answering the question of whether i should own a pet, as if my happiness were some kind of function that can be optimized. on second thought, i feel kind of stupid for being so sure about forgoing something that would bring me so much joy just because i can see its end. because, if i’ve really accepted the fact that my life is filled with uncertainty, then isn’t making a choice based on calculating how it will play out to its end pointless?
we prove that greedy algorithms are optimal by first proving the greedy choice property, which states that out of all the optimal ways to solve a problem, at least one solution includes the greedy choice. if we can show that the greedy choice property holds (and that what remains after making the choice is a smaller instance of the same problem), then we can deduce that continuously taking the greedy choice eventually leads to the best overall solution: a series of locally optimal choices aggregates to the global optimum. obviously, the greedy choice property is not always true. i’ve made greedy choices in algorithms and in my life that did not pan out in the slightest. but if the only information i have at a given moment is the now, if all i have is my tunnel vision, shouldn’t i just try to maximize local happiness and hope that it leads to the global optimum?
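for what it’s worth, here’s a sketch (my example, not one from the essay’s classroom) of a problem where the greedy choice property provably holds: interval scheduling. always grabbing the interval that ends earliest can be shown, by an exchange argument, to lead to a maximum-size schedule, since any optimal schedule can be rewritten to start with the earliest-ending interval. the intervals below are made up for illustration.

```python
def max_intervals(intervals):
    """pick as many non-overlapping (start, end) intervals as possible
    by always taking the compatible one that ends earliest."""
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:  # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end
    return chosen

print(max_intervals([(1, 4), (3, 5), (0, 6), (5, 7),
                     (3, 9), (5, 9), (6, 10), (8, 11)]))
# -> [(1, 4), (5, 7), (8, 11)]
```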
coming up with the best algorithms would be a lot easier if the greedy choice property were always true, but solving them wouldn’t be as fun. i have less fun teaching or coming up with greedy algorithms because it all just seems too easy; what’s the fun in solving a puzzle if it wasn’t a puzzle in the first place? but in my obsession with trying to fit the puzzle pieces of my life perfectly together, maybe i look past the obvious choice. i often tell myself that my metric for self-growth is that i should always work to ensure my future self is proud of me; in a way, i’m optimizing for future happiness. the truth is that i don’t know how i will change in the future, so my dynamic programming algorithm is inherently flawed: the computations i’m doing to see how it ends are just wrong. i’m trying to make my future self proud but forgetting about my present self.
this is not to say that i’ll change my answer to the “will you ever get a dog?” question anytime soon. i’m not sure why i find it so hard to stop trying to make the perfect choice. it never is perfect, it’s computationally inefficient, and i’m trying to prove something to myself based on a false premise. i’m even doing it right now: i’m scared that if i adopt a greedy approach to making decisions, i’ll lose my desire to do the right thing (whatever that means), because, ultimately, my drive to compute so far ahead into the future comes from wanting to make sure everything and everyone will fall flawlessly into place. but maybe the bigger issue is that i don’t know how to accept that my life isn’t some large logic puzzle that can be reasoned through with an algorithm, and that there’s no greedy choice property to prove or dynamic programming table to build, because the constraints of the problem, the pieces of the puzzle, and even the objectives are ever-changing.