Practice on maximum coverage problem

Recently I realized that the analysis of approximation algorithms for covering-type problems is unintuitive to me.  I don’t know whether it’s the analysis-heavy nature that always gets me, or simply that I’m not familiar enough with the proofs.  In any case, this is my attempt at decoding them.  Let me take the maximum coverage problem as an example: given a set system and a number k, we ask for the largest number of elements we can cover using k sets.

Theorem.  Greedy covering gives a (1-1/e)-approximation to the maximum coverage problem.
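
Before the proof, here is what greedy covering does, as a minimal Python sketch (the function name greedy_cover and the toy instance are mine, purely illustrative): at each of the k steps, pick the set that covers the most elements not yet covered.

def greedy_cover(sets, k):
    """Pick k sets greedily: at each step take the set covering the most new elements."""
    covered = set()
    chosen = []
    for _ in range(k):
        # the set whose contribution of newly covered elements is largest
        best = max(sets, key=lambda s: len(s - covered))
        chosen.append(best)
        covered |= best
    return chosen, covered

# Toy instance (illustrative only): an optimal pair of sets covers all 6 elements,
# while greedy covers 5, still above the (1 - 1/e) * 6 ≈ 3.79 guarantee.
sets = [{1, 2, 3, 4}, {1, 2, 5}, {3, 4, 6}]
chosen, covered = greedy_cover(sets, k=2)
print(len(covered))  # prints 5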

Proof.  Let \textit{new}(i) be the number of elements newly covered at step i.  Let \textit{covered}(i) be the number of elements covered from step 1 to step i; we have

\textit{covered}(i) = \textit{new}(1) + \dots + \textit{new}(i)

Let \textit{opt} be the number of elements covered by an optimal choice of k sets, and let \textit{gap}(i) be the number of elements we still need in order to cover \textit{opt} elements; therefore \textit{gap}(i) = \textit{opt} - \textit{covered}(i).

Key observation.  At step i, there is always a set that covers at least a 1/k fraction of the not-yet-covered elements of \textit{OPT} (the set of elements covered by the optimal solution).
Proof.  This is exactly the part I feel uncomfortable with; I find the following formulation helps my intuition:
“No matter which elements we remove from \textit{OPT}, the k sets of the optimal cover still cover whatever remains; and because k sets are enough to cover this remaining part of \textit{OPT}, one of those k sets must cover at least a 1/k fraction of it.”
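
In symbols (my notation), write S_1, \dots, S_k for the optimal sets and U for the part of \textit{OPT} not yet covered at step i.  Since U \subseteq S_1 \cup \dots \cup S_k,

|U| \le |S_1 \cap U| + \dots + |S_k \cap U| \le k \cdot \max_j |S_j \cap U|, \qquad \text{so} \qquad \max_j |S_j \cap U| \ge {|U| \over k}.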

The greedy choice at step i covers at least as many new elements as that set, and the not-yet-covered part of \textit{OPT} has at least \textit{gap}(i-1) elements; therefore \textit{new}(i) \ge \textit{gap}(i-1)/k.  From this point on things become easy.

\textit{gap}(i) \le \textit{gap}(i-1) - \textit{gap}(i-1)/k = {k-1 \over k} \textit{gap}(i-1),

so each step shrinks the gap by a factor of (k-1)/k.  Starting from \textit{gap}(0) = \textit{opt}, after k steps the gap has size at most a (1-1/k)^k \le 1/e fraction of \textit{opt}, which proves the statement.
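
Spelling out the last step, unrolling the recursion gives

\textit{gap}(k) \le \left(1 - {1 \over k}\right)^k \textit{gap}(0) = \left(1 - {1 \over k}\right)^k \textit{opt} \le {\textit{opt} \over e},

so \textit{covered}(k) = \textit{opt} - \textit{gap}(k) \ge \left(1 - {1 \over e}\right) \textit{opt}.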