Here I list some compact real problems that I was forced to solve, either by circumstance or by necessity, during the course of my PhD. Their relation to my PhD is totally
incidental, so in the rare case that you are impressed by these problems and go "wow! PhD sounds cool!", think again. Most or all of these problems can be solved
using integer programming (IP), so the trick is to avoid IP if possible. Also, I have not bothered to prove whether they are NP-hard/complete or ridiculously in P.

#### The DVD burning problem

You pride yourself on your collection of movies, songs, software, whatever, that coolly reside on your PC. But you are also paranoid and wish to back up these
fine pieces of art because you have spent hours and dollars collecting them. Naturally, being a grad student, you are also a cheapskate and would like to minimize
the number of DVDs used (no, another hard disk will not do; you are a cheapskate, remember!). The hitch is that you are also a connoisseur, and would like each DVD
to contain at least one of each kind. Say one horror movie, one chick flick to calm you down after the horror movie, one western to make you realize that you really
do not like chick flicks, and one Pink Floyd album to make you realize that, screw the movies, all this while you just wanted to listen to some good music.
So given a class label
and file size for each object, how do you burn everything using the minimum number of DVDs while satisfying the connoisseur constraints?
Sure you can use IP, but can you use constrained clustering to compute this cheaply yet accurately?
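One cheap (non-IP) baseline is a first-fit-decreasing variant of bin packing: start from the packing lower bound on the number of DVDs, seed every DVD with one item of each class to satisfy the connoisseur constraint, then first-fit the rest. This is a minimal sketch, not the clustering approach the question asks for; the `4.7` GB capacity, the tuple format, and the assumption that item names are unique are all mine.

```python
import math
from collections import defaultdict

DVD_GB = 4.7  # assumed single-layer DVD capacity

def burn(items):
    """Heuristic for the DVD problem. items: list of (name, label, size_gb)
    tuples with unique names. Tries n = ceil(total/capacity) DVDs first
    (the bin-packing lower bound), adding a DVD whenever packing fails."""
    total = sum(size for _, _, size in items)
    by_class = defaultdict(list)
    for it in items:
        by_class[it[1]].append(it)
    # Each DVD needs one item per class, so the class with the fewest
    # items caps how many DVDs we can open.
    max_n = min(len(group) for group in by_class.values())
    for n in range(max(1, math.ceil(total / DVD_GB)), max_n + 1):
        plan = try_pack(items, by_class, n)
        if plan is not None:
            return plan
    raise ValueError("greedy failed; fall back to IP")

def try_pack(items, by_class, n):
    free = [DVD_GB] * n
    discs = [[] for _ in range(n)]
    placed = set()
    # Seed: the i-th smallest item of each class goes on DVD i.
    for label, group in by_class.items():
        group = sorted(group, key=lambda t: t[2])
        for i in range(n):
            name, _, size = group[i]
            if free[i] < size:
                return None
            free[i] -= size
            discs[i].append(group[i])
            placed.add((name, label))
    # First-fit-decreasing for the remaining items.
    rest = sorted((it for it in items if (it[0], it[1]) not in placed),
                  key=lambda t: -t[2])
    for it in rest:
        for i in range(n):
            if free[i] >= it[2]:
                free[i] -= it[2]
                discs[i].append(it)
                break
        else:
            return None  # this n is too tight; caller tries n + 1
    return discs
```

The constrained-clustering angle would replace the seeding step with a clustering of items whose cluster-level constraints encode "one of each class", but the greedy above already gives a feasibility check and an upper bound to compare against.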

#### The experimentation conundrum

The paper submission deadline is within sniffing distance, and understandably you have developed a stuffy nose. You realize that your workload is huge. It has many tasks,
and each task takes its own sweet time (different from the others) to complete. Enter your advisor, who offers you a cluster for parallelizing your workload. You know
that ideally you would split the workload into *k* groups, where *k* is the number of **free** CPUs in the cluster, such that the finish time of the worst partition
(also called the 'makespan') is minimized, enabling you to make it in time for the deadline. The problem is that the cluster has other users, more hardworking than you,
who have run away to play cricket after starting their own tasks, so you do not know *k* (and it will vary over time anyway). So you cry over this conundrum for 10 minutes and
console yourself that you will minimize the expected makespan: instead of a fixed known *k*, you will use a probability distribution over the possible values of *k*.
Can you do this without solving an IP, and without solving it for each *k* separately and combining?
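For reference, here is the naive baseline the question wants to beat: run the classic Longest-Processing-Time-first (LPT) heuristic (a 4/3-approximation for makespan on identical machines) once per candidate *k*, then take the expectation under the distribution. The function names and the `k_dist` dict interface are my own invention for the sketch.

```python
import heapq

def lpt_makespan(tasks, k):
    """Longest-Processing-Time-first: assign each task, in decreasing
    order of duration, to the currently least-loaded of k machines.
    Returns the resulting makespan (load of the busiest machine)."""
    loads = [0.0] * k
    heapq.heapify(loads)
    for t in sorted(tasks, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + t)
    return max(loads)

def expected_makespan(tasks, k_dist):
    """Baseline: solve for each k separately and combine by expectation.
    k_dist: dict mapping a possible number of free CPUs k -> probability."""
    return sum(p * lpt_makespan(tasks, k) for k, p in k_dist.items())
```

A single partition that hedges across all likely *k*'s simultaneously, rather than one schedule per *k*, is exactly what this baseline does not give you.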

#### Food for thought

You are in a group of k friends, all residing in possibly different dorms, which have arbitrarily different menus for lunch. The cost of eating in one's own dorm is
\epsilon, the cost of eating in another dorm is \delta > \epsilon, and the cost of going out of campus is \gamma > \delta (all these costs are per person).
Each person x has a 'food-hatred function' f(x,i), measuring how much x hates the food item served by dorm i. Assume f(.,out-of-campus) = 0, and that the rest of f is given as input.
We are also given a 'group-enjoyment function' g(S) that encodes the maxim 'the more the merrier' (S \subseteq {1,...,k}).
Assume g is submodular and depends only on |S|. Given the dorm residence of each friend, f, g, and the costs,
the goal is to compute a partition of the k friends and assign a dorm (or out-of-campus) to each part
such that the overall value (the sum over parts of group enjoyment g, minus food hatred f, minus eating costs) is maximized.
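For small k the objective can be brute-forced directly. Since g only grows with |S|, merging two parts assigned the same venue never hurts, so it suffices to assign each friend a venue and let same-venue friends form one group. The sketch below assumes a hypothetical interface: `residence[x]` is friend x's dorm, `f[x][dorm]` is the food hatred, and `g` is a function of group size.

```python
from itertools import product

def best_lunch_plan(residence, f, g, eps, delta, gamma):
    """Brute force over all venue assignments (exponential in k, so
    only for small groups). Returns (best objective, best assignment),
    where assignment[x] is the venue of friend x ('out' = off campus)."""
    k = len(residence)
    venues = sorted(set(residence)) + ["out"]

    def cost(x, v):
        if v == "out":
            return gamma
        return eps if residence[x] == v else delta

    def hatred(x, v):
        return 0.0 if v == "out" else f[x][v]

    best_val, best = float("-inf"), None
    for assign in product(venues, repeat=k):
        groups = {}
        for x, v in enumerate(assign):
            groups.setdefault(v, []).append(x)
        # Objective: sum over groups of enjoyment minus hatred and cost.
        val = sum(g(len(members))
                  - sum(hatred(x, v) + cost(x, v) for x in members)
                  for v, members in groups.items())
        if val > best_val:
            best_val, best = val, assign
    return best_val, best
```

The interesting question is whether submodularity of g lets you beat this exhaustive search (or an IP) with a greedy or local-search scheme carrying an approximation guarantee.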

Will add more here when I find time...