Integer Programming and Parameterized Complexity 2019/20

First lecture

  • What is IP? IP is NP-hard in general, but maybe restricted instances can be easier?
  • IP in fixed dimension: flatness theorem, branching, etc.
  • Building intuition using the knapsack problem and its DP. We can use DP to solve knapsack in time $O(n \cdot b)$, and to solve multidimensional knapsack in $d$ dimensions in time $O(n \cdot b^d)$. But what if $b$ is much larger than the item sizes, and/or what if the items are given by multiplicities $\mu_1, \dots, \mu_n$? It turns out that we can start from some feasible solution and then iteratively augment it to an optimal one. Each augmentation requires solving one DP, but this DP is simpler than the original one. In the next lecture we will formalize this. (A small sketch of the basic DP follows below.)
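For concreteness, here is a minimal sketch of the $O(n \cdot b)$ knapsack DP, assuming one copy of each item with integer sizes and values (the toy instance is made up; the multiplicity variant needs a bit more care):

  def knapsack(sizes, values, b):
      # dp[c] = best value achievable with total size at most c
      dp = [0] * (b + 1)
      for s, v in zip(sizes, values):
          # go through the capacities downwards so that each item is used at most once
          for c in range(b, s - 1, -1):
              dp[c] = max(dp[c], dp[c - s] + v)
      return dp[b]

  print(knapsack([3, 4, 5], [4, 5, 7], 9))   # -> 12 (take the items of size 4 and 5)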

Second lecture

  • basic notation and assumptions: [ATIP beginning of sec 2]
  • showing that the Graver basis $\mathcal{G}(A)$ is always finite. This amounts to the Gordan-Dickson Lemma. Proof idea: restrict ourselves to the non-negative orthant (each of the $2^n$ orthants can be handled separately and is isomorphic to $\mathbb{N}^n$). We want to show that the ordering $\leq$ there has no infinite antichain, as that will imply that the number of minimal elements is finite. We prove a stronger statement: every infinite set $S \subseteq \mathbb{N}^n$ contains an infinite chain, i.e., elements $a_1 \leq a_2 \leq \dots$. The proof goes by induction on $n$ (the dimension). For $n=1$ it is obvious. Now assume the statement holds for $n-1$ and we want to prove it for $n$. Take $S'$ to be the restriction of $S$ to the first $n-1$ coordinates, apply the IH and obtain a sequence $a_1, a_2, \dots$ (which is a chain when forgetting the $n$-th coordinate). Now two cases can happen. Either there is a number $x \in \mathbb{N}$ which repeats as the last coordinate infinitely many times in the sequence $a_1, a_2, \dots$ – take all those elements and note that they form a subsequence $b_1, b_2, \dots$ which is a chain in all $n$ coordinates. Otherwise, no such number $x$ exists, so every value appears as the last coordinate only finitely often. But that implies that for any $a_i$ there always exists $j > i$ s.t. $(a_j)_n > (a_i)_n$, i.e., we can always „jump up“ in the last coordinate – and by repeatedly jumping up we extract a subsequence which is increasing in the last coordinate as well, hence an infinite chain in all $n$ coordinates. (A small brute-force illustration of $\mathcal{G}(A)$ for a toy matrix is sketched after this list.)
  • by the definition of $\mathcal{G}(A)$, any element of $\ker_{\mathbb{Z}}(A)$ can be decomposed into sign-compatible (conformal) elements of $\mathcal{G}(A)$. However, it is not clear how many (distinct) elements are needed. A theorem of Cook, Fonlupt, and Schrijver says that at most $2n-1$ suffice. The proof is not too hard: view $\mathcal{G}(A)$ as a matrix whose columns are the Graver elements and solve the LP $\max 1^{\intercal}y \colon \mathcal{G}(A)y=x, \, y \geq 0$, i.e., we search for a (possibly non-integral) decomposition of $x$ into elements of $\mathcal{G}(A)$ and we maximize (this will be important!) the $1$-norm of the decomposition. (We should be more careful here and restrict ourselves to elements of $\mathcal{G}(A)$ which are conformal to $x$.) Assume $y$ is a basic solution, which means that it has at most $n$ non-zero coordinates. Now take $x' = \mathcal{G}(A) \lfloor y \rfloor$ and observe that $x'' = x-x'$ is an integer vector belonging to $\ker_{\mathbb{Z}}(A)$, so it has some integral decomposition $\sum_{j} \lambda_j g_j$ into Graver elements. But since the decomposition given by $y$ has maximum $1$-norm (otherwise $\lfloor y \rfloor$ together with this decomposition of $x''$ would be a feasible solution of larger $1$-norm), we have $\sum_j \lambda_j \leq \| y- \lfloor y \rfloor \|_1 < n$, where the last inequality holds because $y$ has at most $n$ fractional coordinates, each contributing less than $1$. Hence we found a decomposition of $x'$ into at most $n$ distinct elements and, because the $\lambda_j$ are positive integers, a decomposition of $x''$ into at most $n-1$ distinct elements, altogether $2n-1$ elements. (Feel free to look into the original paper, the proof is fairly intelligible. A greedy way of producing some conformal decomposition is sketched after this list.)
  • more definitions from [ATIP 2.1] such as feasible solution, augmenting step, Graver-best step, halfling, halfling procedure.
  • sketched proof that the halfling procedure converges quickly. I'm only interested in the fact that it requires $O(n \log f_{\max})$ iterations, don't bother with the constants. (The calculations get a bit complicated; a rough version for a linear objective is sketched after this list.)
  • proof that a step which is „best“ for step-lengths 1, 2, 4,… is a halfling.
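A brute-force illustration of the finiteness of $\mathcal{G}(A)$ for a toy $1 \times 3$ matrix (my choice of $A$ and of the box size, not from [ATIP]): enumerate $\ker_{\mathbb{Z}}(A)$ inside a box and keep the elements which are minimal in the conformal order $\sqsubseteq$ (sign-compatible and componentwise no larger in absolute value). This computes exactly $\mathcal{G}(A) \cap [-B,B]^n$; the Gordan-Dickson argument is what guarantees that some finite $B$ already captures all of $\mathcal{G}(A)$.

  import itertools

  A = [1, 2, -3]   # one row; the kernel is {v : v1 + 2*v2 - 3*v3 = 0}
  n = len(A)
  B = 3            # search box [-B, B]^n

  def in_kernel(v):
      return sum(a * vi for a, vi in zip(A, v)) == 0

  def conformal_leq(u, v):
      # u ⊑ v: sign-compatible and |u_i| <= |v_i| in every coordinate
      return all(ui * vi >= 0 and abs(ui) <= abs(vi) for ui, vi in zip(u, v))

  box = [v for v in itertools.product(range(-B, B + 1), repeat=n)
         if any(v) and in_kernel(v)]
  graver = [v for v in box
            if not any(u != v and conformal_leq(u, v) for u in box)]
  print(len(graver), sorted(graver))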
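A greedy way to produce some conformal decomposition of an $x \in \ker_{\mathbb{Z}}(A)$ (again a toy $A$ and $x$ of my choosing): repeatedly split off a $\sqsubseteq$-minimal nonzero kernel element below the current remainder – such an element is a Graver element by definition. Note that this only shows that a conformal decomposition exists; the $2n-1$ bound on the number of distinct elements is what the LP argument above gives.

  import itertools

  A = [1, 2, -3]
  x = (4, 1, 2)    # 4 + 2*1 - 3*2 = 0, so x is in ker_Z(A)

  def in_kernel(v):
      return sum(a * vi for a, vi in zip(A, v)) == 0

  def conformal_leq(u, v):
      return all(ui * vi >= 0 and abs(ui) <= abs(vi) for ui, vi in zip(u, v))

  def minimal_conformal_step(r):
      # any ⊑-minimal nonzero kernel element below r is a Graver element
      ranges = [range(0, ri + 1) if ri >= 0 else range(ri, 1) for ri in r]
      cands = [v for v in itertools.product(*ranges) if any(v) and in_kernel(v)]
      for v in cands:
          if not any(u != v and conformal_leq(u, v) for u in cands):
              return v

  decomposition, r = [], x
  while any(r):
      g = minimal_conformal_step(r)
      decomposition.append(g)
      r = tuple(ri - gi for ri, gi in zip(r, g))
  print(decomposition)   # e.g. [(1, 1, 1), (3, 0, 1)]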
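For the convergence of the halfling procedure, here is a rough version of the calculation for a linear objective with integer $w$ (the separable convex case needs a superadditivity argument on top, see [ATIP]): if $x_t$ is the current feasible solution and $x^*$ an optimum, then $x^* - x_t \in \ker_{\mathbb{Z}}(A)$ decomposes into at most $2n-1$ conformal steps $\lambda_i g_i$, each of which keeps us within the bounds $l \leq x \leq u$ precisely because of conformality. Hence one of these steps gains at least a $\frac{1}{2n-1}$ fraction of the remaining gap, a halfling gains at least half of that, and so $f(x_{t+1}) - f(x^*) \leq \left(1 - \frac{1}{4n-2}\right)\left(f(x_t) - f(x^*)\right)$. After $t \approx (4n-2) \ln\left(f(x_0) - f(x^*)\right) = O(n \log f_{\max})$ iterations the gap drops below $1$, and by integrality it is then $0$.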

Materials

  • The primary material is my paper An Algorithmic Theory of Integer Programming, but I will try to make the exposition here much gentler. (The paper is already long, so it tries to take a „shortest path“ approach to laying the basics.) In the lecture notes I will refer to it as [ATIP].
  • The fixed dimension part was inspired by the exposition of Fritz Eisenbrand in 50 Years of Integer Programming, pg. 515 in the linked PDF. In particular look at Section 14.8, i.e., pages 535-537. But I only expect an understanding of the basic idea of Lenstra's algorithm.