ON THE ABSENCE OF UNIFORM DENOMINATORS IN HILBERT’S 17TH PROBLEM

Abstract. Hilbert showed that for most $(n, m)$ there exist positive semidefinite forms $p(x_1, \dots, x_n)$ of degree $m$ which cannot be written as sums of squares of forms. His 17th problem asked whether, in this case, there exists a form $h$ so that $h^2 p$ is a sum of squares of forms; that is, $p$ is a sum of squares of rational functions with denominator $h$. We show that, for every such $(n, m)$, there does not exist a single form $h$ which serves in this way as a denominator for every positive semidefinite $p(x_1, \dots, x_n)$ of degree $m$.


Introduction
Let $H_d(\mathbb{R}^n)$ denote the set of real homogeneous forms of degree $d$ in $n$ variables ("$n$-ary $d$-ics"). By identifying $p \in H_d(\mathbb{R}^n)$ with the $N = \binom{n+d-1}{n-1}$-tuple of its coefficients, we see that $H_d(\mathbb{R}^n) \approx \mathbb{R}^N$. Suppose $m$ is an even integer. A form $p \in H_m(\mathbb{R}^n)$ is called positive semidefinite or psd if $p(x_1, \dots, x_n) \ge 0$ for all $(x_1, \dots, x_n) \in \mathbb{R}^n$. Following [1], we denote the set of psd forms in $H_m(\mathbb{R}^n)$ by $P_{n,m}$. Since $P_{n,m}$ is closed under addition and under multiplication by positive scalars, it is a convex cone. In fact, $P_{n,m}$ is a closed convex cone: if $p_k \to p$ coefficient-wise and each $p_k$ is psd, then so is $p$. A psd form $p$ is called positive definite or pd if $p(x_1, \dots, x_n) = 0$ implies $x_j = 0$ for $1 \le j \le n$. The pd $n$-ary $m$-ics form the interior of the cone $P_{n,m}$.
A form $p \in H_m(\mathbb{R}^n)$ is called a sum of squares or sos if it can be written as a sum of squares of polynomials; that is, $p = \sum_k h_k^2$. It is easy to show in this case that each $h_k \in H_{m/2}(\mathbb{R}^n)$. Again following [1], we denote the set of sos forms in $H_m(\mathbb{R}^n)$ by $\Sigma_{n,m}$. Clearly, $\Sigma_{n,m}$ is a convex cone; less obviously, it is a closed cone, a result due to R. M. Robinson [20].
In light of the inclusion $\Sigma_{n,m} \subseteq P_{n,m}$, let $\Delta_{n,m} = P_{n,m} \setminus \Sigma_{n,m}$. It was well known by the late 19th century that $P_{n,m} = \Sigma_{n,m}$ when $m = 2$ or $n = 2$. In 1888, Hilbert proved [8] that $\Sigma_{3,4} = P_{3,4}$; more specifically, every $p \in P_{3,4}$ can be written as a sum of three squares of quadratic forms. (An elementary proof, with "five" squares, is in [2, pp. 16-17]; for modern expositions of Hilbert's proof, see [24] and [21].) Hilbert also proved in [8] that the preceding are the only cases for which $\Delta_{n,m} = \emptyset$. That is, if $n \ge 3$ and $m \ge 6$, or $n \ge 4$ and $m \ge 4$, then there exist psd $n$-ary $m$-ics that are not sos.

Date: May 12, 2003. 1991 Mathematics Subject Classification. Primary: 11E10, 11E25, 11E76, 12D15, 14P99. This material is based in part upon work of the author, supported by the USAF under DARPA/AFOSR MURI Award F49620-02-1-0325. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of these agencies.
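For concreteness, the classical example of a psd form that is not sos is the Motzkin form $x^4y^2 + x^2y^4 + z^6 - 3x^2y^2z^2 \in \Delta_{3,6}$; its psd-ness follows from the AM-GM inequality applied to the first three terms. A quick numerical sanity check (an illustration of ours, not part of the paper):

```python
import random

def motzkin(x, y, z):
    # M(x, y, z) = x^4 y^2 + x^2 y^4 + z^6 - 3 x^2 y^2 z^2.
    # AM-GM on the monomials x^4 y^2, x^2 y^4, z^6 gives M >= 0 everywhere.
    return x**4 * y**2 + x**2 * y**4 + z**6 - 3 * x**2 * y**2 * z**2

random.seed(0)
# Sample many random points; a psd form should never evaluate negative.
min_val = min(
    motzkin(random.uniform(-10, 10), random.uniform(-10, 10), random.uniform(-10, 10))
    for _ in range(10000)
)
```

Note that $M(1, 1, 1) = 0$, so $M$ is psd but not pd; that it is not sos is a separate algebraic fact and is not detectable by sampling.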
In 1893, Hilbert [9] generalized his three-square result for $P_{3,4}$ to ternary forms of higher degree. Suppose $p \in P_{3,m}$ with $m \ge 6$. Then there exist $p_1 \in P_{3,m-4}$ and $h_{1k} \in H_{m-2}(\mathbb{R}^3)$, $1 \le k \le 3$, so that $p_1 p = h_{11}^2 + h_{12}^2 + h_{13}^2$. (Hilbert's proof seems to be non-constructive, and lacks a modern exposition. In the very recent paper [10], de Klerk and Pasechnik discuss the implementation of an algorithm to find $p_1$ so that $p_1 p$ is sos, though not necessarily as a sum of three squares. This paper uses Hilbert's result without giving an independent proof.) If $m = 6$ or $8$, then $p_1$ is a sum of three squares of forms, and hence (as Landau later noted [11]) the four-square identity implies that $p_1^2 p = p_1(p_1 p)$ is a sum of four squares of forms. If $m \ge 10$, then the argument can be applied to $p_1$ in turn: iterating, one obtains a form $q$ so that $q^2 p$ is the sum of four squares of forms.
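Landau's step rests on the four-square identity: a product of two sums of (at most) four squares is again a sum of four squares; it applies verbatim when the entries are forms, padding a sum of three squares with a zero fourth term. A sketch of the identity in its standard quaternion-norm shape (numeric illustration ours):

```python
def euler_four_square(a, b):
    # Euler's four-square identity, i.e. multiplicativity of the quaternion norm:
    # (a1^2 + a2^2 + a3^2 + a4^2)(b1^2 + b2^2 + b3^2 + b4^2) = c1^2 + c2^2 + c3^2 + c4^2.
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 - a3*b3 - a4*b4,
            a1*b2 + a2*b1 + a3*b4 - a4*b3,
            a1*b3 - a2*b4 + a3*b1 + a4*b2,
            a1*b4 + a2*b3 - a3*b2 + a4*b1)
```

Since the $c_i$ are bilinear in the $a_i$ and $b_j$, substituting forms for numbers expresses $p_1 \cdot (p_1 p)$ as a sum of four squares of forms.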
Hilbert's 17th Problem asked whether this generalizes to n > 3 variables; that is, if p ∈ P n,m , must there exist some form q so that q 2 p is sos? Artin proved that there must be, in a way that gives no information about q. Much more on the history of this subject can be found in the survey paper [19].
This discussion leads to two closely related questions. Suppose $p \in P_{n,m}$. Can we find a form $h$ such that $hp$ is sos? Can we find a form $q$ so that $q^2 p$ is sos? An answer to the second question answers the first. Conversely, if $p \neq 0$ is psd and $hp$ is sos, then $h$ is psd. But $h$ need not be sos; indeed, a trivial answer to the first question is to take $h = p$. Stengle proved [23] that if $p(x, y, z) = x^3 z^3 + (y^2 z - x^3 - z^2 x)^2$, then $p^{2s+1} \in \Delta_{3,6(2s+1)}$ for every integer $s$. That is, $p^{2s-1} \cdot p$ is sos, but $p^{2s} \cdot p$ is not. Choi and Lam showed [1] that for $S \in \Delta_{3,6}$ (see (3) below), the product $S(x, y, z)S(x, z, y)$ is actually sos.
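As a sanity check (ours, not part of the paper), one can confirm numerically that Stengle's form is psd; that its odd powers fail to be sos is the deep part and is not checkable this way:

```python
import random

def stengle(x, y, z):
    # Stengle's form p(x, y, z) = x^3 z^3 + (y^2 z - x^3 - z^2 x)^2.
    return x**3 * z**3 + (y**2 * z - x**3 - z**2 * x) ** 2

random.seed(1)
# A psd form should be nonnegative at every sampled point (up to float error).
min_val = min(
    stengle(random.uniform(-2, 2), random.uniform(-2, 2), random.uniform(-2, 2))
    for _ in range(20000)
)
```

The form vanishes on the lines $(0, y, 0)$ and $(0, 0, z)$, so it is psd but not pd.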
The author gratefully acknowledges correspondence with Chip Delzell, Pablo Parrilo, Vicki Powers, Marie-Françoise Roy and Claus Scheiderer. Their suggestions have made this a better paper.

What is known about the denominator
The first concrete result about a denominator in Hilbert's 17th Problem was found by Pólya [16]. He showed that if $f \in H_d(\mathbb{R}^n)$ is positive on the unit simplex $\{(x_1, \dots, x_n) \mid x_j \ge 0,\ \sum_j x_j = 1\}$, then for sufficiently large $N$, $(\sum_j x_j)^N f$ has positive coefficients. Replacing each $x_j$ by $x_j^2$, we see that if $p \in H_{2d}(\mathbb{R}^n)$ is an even positive definite form, then $(\sum_j x_j^2)^N p$ is a sum of even monomials with positive coefficients, and so, as it stands, is a sum of squares of monomials. Taking $N$ even, we see that $q = (\sum_j x_j^2)^{N/2}$ is a denominator for $p$. Habicht [6] generalized Pólya's proof to give an alternate solution to Hilbert's 17th Problem for pd forms; however, his $h$ is not readily constructible and in general is no longer a power of $\sum_j x_j^2$. Except for one example, Pólya did not attempt to determine an explicit value of $N$. A good exposition of the theorems of Pólya and Habicht can be found in [7].
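Pólya's theorem can be watched in action on a small binary example (illustration ours; the polynomial and helper names are of course not from the paper):

```python
def poly_mul(p, q):
    # Multiply two polynomials in x, y stored as {(i, j): coefficient of x^i y^j}.
    r = {}
    for (i, j), c in p.items():
        for (k, l), d in q.items():
            r[(i + k, j + l)] = r.get((i + k, j + l), 0) + c * d
    return r

def polya_exponent(f, max_N=20):
    # Smallest N such that every monomial of degree deg(f) + N appears in
    # (x + y)^N * f with a strictly positive coefficient; None if N > max_N.
    lin = {(1, 0): 1, (0, 1): 1}
    deg = max(i + j for (i, j) in f)
    g = dict(f)
    for N in range(max_N + 1):
        d = deg + N
        if all(g.get((i, d - i), 0) > 0 for i in range(d + 1)):
            return N
        g = poly_mul(g, lin)
    return None

# f = x^2 - x*y + y^2 is strictly positive on the simplex {x, y >= 0, x + y = 1}.
f = {(2, 0): 1, (1, 1): -1, (0, 2): 1}
```

Here $(x+y)f = x^3 + y^3$ and $(x+y)^2 f$ still have vanishing coefficients, but $(x+y)^3 f = x^5 + 2x^4y + x^3y^2 + x^2y^3 + 2xy^4 + y^5$ has all coefficients positive, so $N = 3$ suffices for this $f$.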
For positive definite $p \in P_{n,m}$, let
$$\epsilon(p) := \frac{\inf\{p(u) : u \in S^{n-1}\}}{\sup\{p(u) : u \in S^{n-1}\}}$$
measure how "close" $p$ is to having a zero. The author [18] showed that if $N$ exceeds an explicit bound depending only on $n$, $m$ and $\epsilon(p)$, then $(\sum_j x_j^2)^N p$ is a sum of $(m+2N)$-th powers of linear forms, and so is sos. A similar lower bound has been shown to apply in Pólya's case, one which goes to infinity as $p$ approaches the boundary of $P_{n,m}$. (See papers by de Loera and Santos [12] and by Powers and the author [17].) The restriction to positive definite forms is necessary. There exist psd forms $p$ in $n \ge 4$ variables so that, if $h^2 p$ is sos, then $h$ must have a specified zero. The existence of these unavoidable singularities, or so-called "bad points", ensures that $(\sum_j x_j^2)^r p$ can never be a sum of squares of forms for any $r$. Habicht's Theorem implies that no positive definite form can have a bad point. Bad points were first noted by Straus and have been extensively studied by Delzell; see, e.g., [4, 5].
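The quantity $\epsilon(p)$ is easy to approximate numerically; a minimal sketch for a binary form ($n = 2$ chosen purely for simplicity, so $S^{n-1}$ is the unit circle):

```python
import math

def epsilon(p, steps=100000):
    # epsilon(p) = inf{p(u) : u in S^1} / sup{p(u) : u in S^1} for a binary
    # form p, approximated on a uniform angular grid of the unit circle.
    vals = [p(math.cos(2 * math.pi * k / steps), math.sin(2 * math.pi * k / steps))
            for k in range(steps)]
    return min(vals) / max(vals)

def quartic(x, y):
    # p(x, y) = x^4 + y^4: positive definite, so epsilon(p) > 0.
    return x**4 + y**4
```

For $x^4 + y^4$ the infimum on the circle is $1/2$ (attained at 45 degrees) and the supremum is $1$, so $\epsilon(p) = 1/2$; as a pd form degenerates toward a psd form with a zero, $\epsilon(p) \to 0$.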

Recent results and a new theorem
Scheiderer has shown in very recent work [22] that for $p \in P_{3,m}$, there exists $N = N(p)$ so that $(x^2 + y^2 + z^2)^N p(x, y, z)$ is sos; indeed, $x^2 + y^2 + z^2$ can be replaced by any positive definite form. This strongly refutes the existence of bad points for ternary forms.
Also very recently, Lombardi and Roy [13] have constructed a quantitative version of the Positivstellensatz. A special case is that for fixed $(n, m)$, there exists $d = d(n, m)$ so that if $p \in P_{n,m}$, there exists $q \in H_d(\mathbb{R}^n)$ so that $q^2 p$ is sos.
Suppose $(n, m)$ is such that $\Delta_{n,m} \neq \emptyset$. Theorem 1 below states that there is no single form $h$ so that, if $p \in P_{n,m}$, then $hp$ is sos. Corollary 2 says that there is not even a finite set of forms $H$ so that, if $p \in P_{n,m}$, then there exists $h \in H$ so that $hp$ is sos. In particular, there does not exist a finite set of denominators which applies to all of $P_{n,m}$. This result implies that $N(p)$ in Scheiderer's theorem is not bounded as $p$ ranges over $P_{3,m}$. It also implies that the denominators in the Lombardi-Roy theorem cannot be chosen from a finite, predetermined set.
The proof of the Theorem is elementary and relies on a few simple observations. If $p \neq 0$ is psd and $hp$ is sos, then $h$ is psd. As previously noted, $\Sigma_{n,m}$ is a closed cone for all $(n, m)$. This cone is invariant under invertible linear changes of variables. Thus, if $h'$ is derived from $h$ by such a linear change, and if $hp$ is sos for every $p \in P_{n,m}$, then so is $h'p$. Suppose $\ell$ is a linear form, $p = \sum_k g_k^2$ is sos, and $\ell \mid p$. Then $\ell^2 \mid p$ and $\ell \mid g_k$ for each $k$, and by induction, $\ell^{2s} \mid p \implies \ell^s \mid g_k$. Thus, we can "peel off" squares of linear factors from any sos form; this is a common practice, dating back at least to [20, p. 267]. We use this observation in the contrapositive: if $p \in \Delta_{n,m}$, then $\ell^{2s} p \in \Delta_{n,m+2s}$.

Theorem 1. Suppose $\Delta_{n,m} \neq \emptyset$. Then there does not exist a non-zero form $h$ so that if $p \in P_{n,m}$, then $hp$ is sos.
Proof. Suppose to the contrary that such a form $h$ exists. Since $h \neq 0$, there exists a point $a \in \mathbb{R}^n$ so that $h(a) \neq 0$. By making an invertible linear change of variables, we can take $a = (1, 0, \dots, 0)$. Thus, we may assume without loss of generality that $h(x_1, 0, \dots, 0) = \alpha x_1^d$, where $\alpha > 0$ and $d$ is even, since $h$ is psd by the first observation above. In the sequel, we distinguish $x_1$ from the other variables.
The following elegant proof is due to Claus Scheiderer and is included with his permission; it supersedes the proof in an earlier version of this manuscript.

Proof. Suppose $H$ exists. For each $k$, there exists non-zero $p \in \Delta_{n,m}$ so that $h_k p$ is sos. (Otherwise, we may delete $h_k$ harmlessly from $H$.) Thus, each $h_k$ is psd, and there exists a form $q_k$ so that $q_k^2 h_k$ is sos. Define $h = \prod_k q_k^2 h_k$. We now show that for every $p \in P_{n,m}$, $hp$ is sos: this contradicts the Theorem and proves the Corollary. By hypothesis, there exists $h_j \in H$ so that $h_j p$ is sos. Thus,
$$hp = \Big(\prod_{k \neq j} q_k^2 h_k\Big) \cdot q_j^2 \cdot (h_j p)$$
is a product of sos factors, and so is sos.
It is not too difficult to consider $qM$, $qR$, $qS$ for $q(x, y, z) = a^2 x^2 + b^2 y^2 + c^2 z^2$, and determine whether these are sos using the algorithm of [3] directly or its implementation in, e.g., [15].
Interestingly enough, these conditions are the same in each case: the forms are sos if and only if
$$2(a^2 b^2 + a^2 c^2 + b^2 c^2) \ge a^4 + b^4 + c^4.$$
This expression factors rather neatly:
$$2(a^2 b^2 + a^2 c^2 + b^2 c^2) - (a^4 + b^4 + c^4) = (a + b + c)(-a + b + c)(a - b + c)(a + b - c),$$
so if $a \ge b \ge c \ge 0$ without loss of generality, the only non-trivial condition is that $b + c \ge a$; that is, there is a (possibly degenerate) triangle with sides $a, b, c$. (Robinson [20, p. 273] has a superficially similar condition, but note that his multiplier is $ax^2 + by^2 + cz^2$.) By specializing this result and scaling variables as in the proof of the theorem, we note that $(x^2 + y^2 + z^2)M(x, \lambda y, \lambda z)$, $(x^2 + y^2 + z^2)R(x, \lambda y, \lambda z)$, and $(x^2 + y^2 + z^2)S(x, \lambda y, \lambda z)$ are sos if and only if $0 \le \lambda \le 2$.
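The factorization implicit here, which we take to be the Heron-type identity $2\sum a^2b^2 - \sum a^4 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c)$ (consistent with the stated triangle condition), can be sanity-checked numerically (illustration ours):

```python
import random

def gram_gap(a, b, c):
    # 2(a^2 b^2 + a^2 c^2 + b^2 c^2) - (a^4 + b^4 + c^4): the forms qM, qR, qS
    # are sos exactly when this quantity is >= 0.
    return 2 * (a*a*b*b + a*a*c*c + b*b*c*c) - (a**4 + b**4 + c**4)

def heron_product(a, b, c):
    # (a + b + c)(-a + b + c)(a - b + c)(a + b - c): for a >= b >= c >= 0 this is
    # nonnegative exactly when a, b, c satisfy the triangle inequality b + c >= a.
    return (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)

random.seed(2)
# Agreement on many random integer points is strong evidence of the identity.
identity_holds = all(
    gram_gap(a, b, c) == heron_product(a, b, c)
    for a, b, c in [tuple(random.randint(-9, 9) for _ in range(3)) for _ in range(200)]
)
```

For the triangle $(3, 4, 5)$ both sides equal $16 \cdot \text{area}^2 = 576$, while a non-triangle such as $(1, 2, 5)$ gives a negative value, matching the sos criterion.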