Weighted Vertex Cover - Fully p-time Approximation of Subset Sum




CS255

Chris Pollett

May 7, 2018

Outline

Introduction

Weighted Vertex Cover

0-1 Program for Minimum Weight Vertex Cover

Using Relaxation to Approximately Solve Problems

Approximation Algorithm For Minimum Weight Vertex Cover

APPROX-MIN-WEIGHT-VC(G, w)
1 C = ∅
2 Compute x, an optimal solution to the 
  linear program of the previous slide
3 for each v ∈ V
4     if x(v) ≥ 1/2
5         C = C ∪ {v}
6 return C
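
As a concrete illustration (not from the original slides), here is a minimal Python sketch of this algorithm; it assumes SciPy's linprog is an acceptable stand-in for the ellipsoid method on line 2, and the graph and weight representations are choices made just for this sketch.

# Minimal sketch of APPROX-MIN-WEIGHT-VC, assuming SciPy is available.
# linprog solves the LP relaxation in place of the ellipsoid method.
from scipy.optimize import linprog

def approx_min_weight_vc(vertices, edges, w):
    """vertices: list of vertex names; edges: list of (u, v) pairs;
       w: dict mapping each vertex to a nonnegative weight."""
    if not edges:
        return set()                      # nothing to cover
    idx = {v: i for i, v in enumerate(vertices)}
    c = [w[v] for v in vertices]          # objective: minimize sum_v w(v) x(v)
    # Each edge (u, v) gives the constraint x(u) + x(v) >= 1,
    # written as -x(u) - x(v) <= -1 for linprog.
    A_ub, b_ub = [], []
    for (u, v) in edges:
        row = [0.0] * len(vertices)
        row[idx[u]] = -1.0
        row[idx[v]] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(vertices))
    x = res.x
    # Lines 3-5 of the pseudocode: keep every vertex whose LP value is
    # at least 1/2 (a small tolerance guards against floating-point error).
    return {v for v in vertices if x[idx[v]] >= 0.5 - 1e-9}

The comparison against 1/2 is exactly the rounding on line 4; by the claim proved below, whichever optimal fractional solution the solver returns, the rounded set is a vertex cover of weight at most twice the LP optimum `z^star`.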

APPROX-MIN-WEIGHT-VC is a 2-approximation algorithm

Theorem. APPROX-MIN-WEIGHT-VC is a polynomial time 2-approximation algorithm for the minimum-weight vertex-cover problem.

Proof. As we have already mentioned, line 2 in the algorithm can be done in p-time using the ellipsoid method. Lines 3-5 are linear time in the number of vertices, so the whole algorithm is p-time.

Let `C^star` be an optimal solution to the minimum-weight vertex-cover problem. Let `z^star` be the value of an optimal solution to the linear program described on the previous slides. Since an optimal cover is a feasible solution to the linear program, we have
`z^star le w(C^star)`.
The Theorem follows from the following claim, which we prove on the next slide:

Claim. The rounding of variables `x(v)` in APPROX-MIN-WEIGHT-VC produces a set `C` that is a vertex cover and satisfies `w(C) le 2z^star`.

Proof of Claim

For each edge `(u,v) in E`, one of our constraints is `x(u) + x(v) ge 1`, so at least one of `x(u)` and `x(v)` must be at least `1/2`. Therefore, at least one of `u` or `v` is included in the vertex cover `C`, and so every edge is covered.

Consider the weight of the cover. We have
`z^star = sum_(v in V) w(v) x(v)`
`ge sum_(v in V; x(v) ge 1/2)w(v) x(v)`
`ge sum_(v in V; x(v) ge 1/2)w(v) 1/2`
`= sum_(v in C)w(v) 1/2`
`= 1/2 sum_(v in C)w(v)`
`= 1/2 w(C)`
So this gives:
`w(C) le 2z^star le 2w(C^star)`,
completing the proof.

Quiz

Which of the following statements is true?

  1. Our APPROX-TSP-TOUR algorithm was a p-time, 2-approximation algorithm for general TSP.
  2. Our GREEDY-SET-COVER algorithm was a p-time, 2-approximation algorithm for SET COVER.
  3. Picking an assignment uniformly at random is a randomized `8/7`-approximation algorithm for MAX-3SAT (see the sketch below).
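
As a quick illustration of statement 3 (my own sketch, not part of the original quiz), the following Monte Carlo check builds a random 3-CNF formula in which each clause has three distinct variables and estimates the fraction of clauses a uniformly random assignment satisfies; the estimate should come out close to 7/8, which is where the `8/7` ratio comes from.

# Monte Carlo check: a uniformly random assignment satisfies each
# 3-literal clause (on distinct variables) with probability 7/8.
import random

def random_3cnf(num_vars, num_clauses):
    # Each clause is a list of (variable index, negated?) pairs.
    return [[(v, random.random() < 0.5)
             for v in random.sample(range(num_vars), 3)]
            for _ in range(num_clauses)]

def satisfied_fraction(clauses, assignment):
    sat = sum(1 for clause in clauses
              if any(assignment[v] != neg for (v, neg) in clause))
    return sat / len(clauses)

random.seed(0)
num_vars, trials = 50, 200
clauses = random_3cnf(num_vars, 2000)
avg = sum(satisfied_fraction(clauses,
                             [random.random() < 0.5 for _ in range(num_vars)])
          for _ in range(trials)) / trials
print(avg, 7 / 8)   # the two values should be close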

The Optimization Version of Subset Sum

An Exponential-time Exact Algorithm

Improving the Exponential Time Algorithm

Example of Exponential Time Algorithm

A Fully p-time Approximation Scheme

A Trimming Procedure

The Subset Sum Approximation Algorithm

APPROX-SUBSET-SUM(S, t, ε)
1 n = |S|
2 L[0] = (0)
3 for i = 1 to n
4     L[i] = MERGE-LISTS(L[i-1], L[i-1] + x[i])
5     L[i] = TRIM(L[i], ε/2n)
6     remove from L[i] every element that is greater than t
7 let zstar be the largest value in L[n]
8 return zstar
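
To make these steps concrete, here is a minimal Python sketch (my own, not from the slides). The trimming step uses `delta = epsilon/(2n)` and keeps, for every removed value `y`, a surviving value `z` with `y/(1+delta) le z le y`, matching the guarantee used in the proof below; MERGE-LISTS is implemented here simply by merging, deduplicating, and sorting.

# Minimal sketch of APPROX-SUBSET-SUM and its trimming step.
def trim(L, delta):
    # L is sorted and starts with 0; keep an element only if it is more
    # than a factor (1 + delta) larger than the last element kept.
    trimmed = [L[0]]
    last = L[0]
    for y in L[1:]:
        if y > last * (1 + delta):
            trimmed.append(y)
            last = y
    return trimmed

def approx_subset_sum(S, t, eps):
    n = len(S)
    L = [0]
    for x in S:
        # MERGE-LISTS(L, L + x), with duplicates removed.
        L = sorted(set(L) | {z + x for z in L})
        L = trim(L, eps / (2 * n))
        L = [z for z in L if z <= t]   # drop elements greater than t
    return max(L)                      # zstar, the largest value in the final list

As one quick check, with values chosen only for illustration, approx_subset_sum([104, 102, 201, 101], 308, 0.40) returns 302, while the largest subset sum not exceeding 308 is 307; the ratio 307/302 is comfortably below 1 + 0.40.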

An Example of Our Algorithm in Action

Proof That APPROX-SUBSET-SUM works

Theorem. APPROX-SUBSET-SUM is a fully `p`-time approximation scheme for the subset-sum problem.

Proof. Trimming and removing elements of value greater than `t` maintain the property that every element of `L[i]`, which from now on we'll write as `L_i`, is also a member of `P_i`. So zstar, which we'll write from now on as `z^star`, returned by line 8 is the sum of some subset of `S`. Let `y^star in P_n` denote an optimal solution to the subset-sum problem. From line 6, we know that `z^star le y^star`. So we need to show that `y^star/z^star le 1 + epsilon` and that the algorithm runs in time polynomial in both `1/epsilon` and the size of the input.

From the definition of trimming, one can show that for every element `y in P_i` that is at most `t`, there exists a `z in L_i` such that
`y/(1+epsilon/(2n))^i le z le y`.
In particular as `y^star in P_n`, there exists an element `z in L_n` such that
`y^star/(1+epsilon/(2n))^n le z le y^star`,
and so,
`y^star/z le (1+epsilon/(2n))^n`.
Since there exists a `z in L_n` satisfying the above, it must also hold for `z^star`, the largest element of `L_n`. Therefore
`y^star/z^star le (1+epsilon/(2n))^n`.
We now argue that `(1+epsilon/(2n))^n le 1 + epsilon`, which gives `y^star/z^star le 1 + epsilon`. To see this, notice
`lim_(n -> infty) (1 + epsilon/(2n))^n = e^(epsilon/2)` and, as `d/(dn)(1 + epsilon/(2n))^n > 0`, `(1 + epsilon/(2n))^n` increases with `n`. So we have
`(1 + epsilon/(2n))^n le e^(epsilon/2)`
`le 1 + epsilon/2 + (epsilon/2)^2` (using `e^x le 1 + x + x^2` for `0 le x le 1`, which follows from the Taylor series)
`le 1 + epsilon` (since `0 < epsilon < 1`).

So we have that the algorithm satisfies the desired approximation ratio. To complete the proof, we need to show that it runs in polynomial time, which we do by bounding the length of each list `L_i`.

Bound the Length of the `L_i`'s

After trimming, successive elements `z` and `z'` of `L_i` must have the relationship `(z')/z > 1 + epsilon/(2n)`. That is, they must differ by a factor of at least `1 + epsilon/(2n)`. So each list contains the value 0, possibly the value 1, and up to `|__log_(1+epsilon/(2n)) t__|` additional values. So the number of elements in each list `L_i` is at most
`log_(1+epsilon/(2n)) t + 2 = (ln t)/(ln(1+epsilon/(2n))) + 2`
`le (2n(1 + epsilon/(2n)) ln t)/epsilon + 2` (using `ln(1+x) ge x/(1+x)` for `x > -1`)
`< (3n ln t)/epsilon + 2` (since `n ge 1` and `0 < epsilon < 1`)
which is polynomial in `1/epsilon` and in the size of the input: since the target `t` is provided as part of the input, `ln t` is at most proportional to the number of bits needed to write `t` down, so the bound above is polynomial in `n`, `lg t`, and `1/epsilon`.
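
As a small numeric illustration (values of my own choosing), the exact list-length expression can be compared against the simpler `(3n ln t)/epsilon + 2` bound:

# Compare log_{1+eps/(2n)}(t) + 2 with the looser (3 n ln t)/eps + 2 bound.
import math

n, eps, t = 20, 0.25, 10**6
exact = math.log(t, 1 + eps / (2 * n)) + 2
bound = 3 * n * math.log(t) / eps + 2
print(exact, bound)   # the first value should never exceed the second

Both quantities grow only with `n`, `ln t`, and `1/epsilon`, which is what makes the scheme fully polynomial.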