By definition of $\mathcal{G}(A)$, any element of $\ker_{\mathbb{Z}}(A)$ can be decomposed into sign-compatible (conformal) elements of $\mathcal{G}(A)$. However, it is not clear how many (distinct) elements are needed. A theorem of
Cook, Fonlupt, and Schrijver says that at most $2n-1$ suffice. The proof is not too hard: viewing $\mathcal{G}(A)$ as a matrix whose columns are the Graver elements, we solve the LP $\max 1^{\intercal}y \colon \mathcal{G}(A)y=x, \, y \geq 0$, i.e., we search for a (possibly fractional) decomposition of $x$ into elements of $\mathcal{G}(A)$ while
maximizing (this will be important!) the $1$-norm of the decomposition. (We should be more careful here and restrict ourselves to the elements of $\mathcal{G}(A)$ that are conformal to $x$; among other things, this keeps the LP bounded.) Assume $y$ is a basic solution; since the constraint matrix has $n$ rows, $y$ has at most $n$ non-zero coordinates. Now take $x' = \mathcal{G}(A) \lfloor y \rfloor$ and observe that $x'' = x - x'$ is an integer vector belonging to $\ker_{\mathbb{Z}}(A)$, so it has some decomposition $x'' = \sum_{j} \lambda_j g_j$ into Graver elements $g_j$ with positive integer coefficients $\lambda_j$ (note that $x''$ is conformal to $x$, so these $g_j$ are again among the columns of our LP). But then $\lfloor y \rfloor$ augmented by the $\lambda_j$'s is another feasible solution, and since the decomposition according to $y$ has maximum $1$-norm, we get $\|\lfloor y \rfloor\|_1 + \sum_j \lambda_j \leq \|y\|_1$, i.e., $\sum_j \lambda_j \leq \| y - \lfloor y \rfloor \|_1 < n$, where the last inequality holds because $y - \lfloor y \rfloor$ has at most $n$ non-zero coordinates, each strictly below $1$. Hence $x'$ decomposes into at most $n$ elements (the support of $\lfloor y \rfloor$), and since the $\lambda_j$ are positive integers, $x''$ decomposes into at most $n-1$ elements: altogether $2n-1$ elements. (Feel free to look into the original paper; the proof is fairly intelligible.)
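To make the argument concrete, here is a minimal computational sketch of the decomposition step in Python. It is not from the original paper: the function name `decompose` is illustrative, and it assumes the Graver elements conformal to $x$ are already available as the columns of a matrix `G` (computing a Graver basis is a separate, much harder task). It solves the LP with `scipy.optimize.linprog`, rounds the solution down, and then greedily peels conformal Graver elements off the integral residual.

```python
import numpy as np
from scipy.optimize import linprog

def decompose(G, x):
    """Sketch of the Cook-Fonlupt-Schrijver decomposition.

    G : (n, k) integer array whose columns are the Graver elements
        conformal to x;  x : integer vector admitting some G @ y = x.
    Returns a non-negative integer vector y with G @ y == x.
    """
    n, k = G.shape
    # Solve  max 1^T y  s.t.  G y = x, y >= 0  (linprog minimizes, hence -1).
    # For the 2n-1 count one wants a *basic* optimum, which simplex-based
    # methods typically return.
    res = linprog(c=-np.ones(k), A_eq=G, b_eq=x, bounds=(0, None))
    if not res.success:
        raise ValueError("LP infeasible: is x in ker_Z(A)?")
    y = np.floor(res.x + 1e-9)   # integral part x' = G @ y; <= n columns used
    r = x - G @ y                # integral residual x'', again in ker_Z(A)
    # Greedily peel off Graver elements conformal to the residual; by the
    # defining property of the Graver basis, such an element always exists.
    while np.any(r):
        for j in range(k):
            g = G[:, j]
            # g is sign-compatible with r and fits under it componentwise
            if np.all(g * r >= 0) and np.all(np.abs(g) <= np.abs(r)):
                y[j] += 1
                r = r - g
                break
        else:
            raise ValueError("no conformal Graver element fits the residual")
    return y.astype(int)
```

Note that the greedy loop merely produces *some* integral decomposition of the residual, which suffices to illustrate the mechanics; the bound $\sum_j \lambda_j < n$ in the proof is an existence statement about the best such decomposition, not a guarantee about greedy peeling.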