**Application to Elimination Theory**


__Introduction__ Many problems in linear algebra (and many other branches of science) boil down to solving a system of linear equations in a number of variables. This in turn means finding the common solutions of some polynomial equations of degree 1 (hyperplanes). In many cases, however, we are faced with a "nonlinear" system of polynomial equations in more than one variable. Geometrically, this means finding the common points of some "surfaces". Like Gaussian elimination for linear systems, **elimination theory** is, in general, about eliminating a number of unknowns from a system of polynomial equations in one or more variables in order to obtain an easier equivalent system.

One way of finding common solutions to polynomial equations is to solve each equation separately and then compare all the solutions. As one can guess, this is not an efficient approach, especially if the goal is only to show the existence of a solution.

To understand the
importance of elimination theory, let us start by considering the following
“simple” example.

**Example** Consider a system of two quadratic equations in one variable:

$$f(x) = a_2x^2 + a_1x + a_0 = 0, \qquad g(x) = b_2x^2 + b_1x + b_0 = 0,$$

where $a_2 \neq 0$ and $b_2 \neq 0$. We would like to find a necessary and sufficient condition for the existence of a common solution for the system

$$f(x) = 0, \qquad g(x) = 0.$$

If *f(x)* and *g(x)* have a common solution, then they must have a common linear factor, say *L(x)*. Let

$$f(x) = L(x)\,q_1(x), \qquad g(x) = L(x)\,q_2(x).$$

Then both $q_1(x)$ and $q_2(x)$ are polynomials of degree 1, so we may write

$$q_2(x) = c_1x + c_2, \qquad -q_1(x) = c_3x + c_4$$

(the sign chosen in front of $q_1(x)$ will make more sense in a moment) for some constants $c_1, c_2, c_3, c_4$, not all zero. Since $q_2(x)f(x) = L(x)\,q_1(x)\,q_2(x) = q_1(x)g(x)$, we then have

$$q_2(x)\,f(x) - q_1(x)\,g(x) = 0.$$

Explicitly,

$$(c_1x + c_2)(a_2x^2 + a_1x + a_0) + (c_3x + c_4)(b_2x^2 + b_1x + b_0) = 0.$$

Expanding and collecting terms in the above equation gives:

$$(a_2c_1 + b_2c_3)\,x^3 + (a_1c_1 + a_2c_2 + b_1c_3 + b_2c_4)\,x^2 + (a_0c_1 + a_1c_2 + b_0c_3 + b_1c_4)\,x + (a_0c_2 + b_0c_4) = 0.$$

This equation is possible only if the coefficients of *x, x^{2}, x^{3}* and the constant term are all equal to 0. This gives the following homogeneous system, with the variables arranged in the order $c_1, c_2, c_3, c_4$:

$$\begin{pmatrix} a_2 & 0 & b_2 & 0 \\ a_1 & a_2 & b_1 & b_2 \\ a_0 & a_1 & b_0 & b_1 \\ 0 & a_0 & 0 & b_0 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.$$

In order for this system to have a nontrivial solution, its coefficient matrix must be noninvertible. In other words, its determinant must be zero:

$$\det\begin{pmatrix} a_2 & 0 & b_2 & 0 \\ a_1 & a_2 & b_1 & b_2 \\ a_0 & a_1 & b_0 & b_1 \\ 0 & a_0 & 0 & b_0 \end{pmatrix} = 0,$$

which is equivalent to

$$\det\begin{pmatrix} a_2 & a_1 & a_0 & 0 \\ 0 & a_2 & a_1 & a_0 \\ b_2 & b_1 & b_0 & 0 \\ 0 & b_2 & b_1 & b_0 \end{pmatrix} = 0,$$

since the determinant of a matrix is equal to the determinant of its transpose. This determinant is called the (Sylvester) **resultant** of *f(x)* and *g(x)*. Note that the resultant in this case is the determinant of a 4×4 matrix consisting of the coefficients of the two polynomials together with 0's arranged in a very special way.

Here is the formal definition of the resultant:

**Definition** Let

$$f(x) = a_mx^m + a_{m-1}x^{m-1} + \cdots + a_1x + a_0, \qquad g(x) = b_nx^n + b_{n-1}x^{n-1} + \cdots + b_1x + b_0$$

be two polynomials of degree *m* and *n* respectively, so that $a_m \neq 0$ and $b_n \neq 0$. The **resultant** of *f(x)* and *g(x)* is the determinant

$$\operatorname{Res}(f(x), g(x)) = \det\begin{pmatrix}
a_m & a_{m-1} & \cdots & a_0 & & & \\
 & a_m & a_{m-1} & \cdots & a_0 & & \\
 & & \ddots & & & \ddots & \\
 & & & a_m & a_{m-1} & \cdots & a_0 \\
b_n & b_{n-1} & \cdots & b_0 & & & \\
 & b_n & b_{n-1} & \cdots & b_0 & & \\
 & & \ddots & & & \ddots & \\
 & & & b_n & b_{n-1} & \cdots & b_0
\end{pmatrix},$$

where the coefficients of *f(x)* occupy the first *n* rows, the coefficients of *g(x)* occupy the last *m* rows, each row is shifted one place to the right of the row above it, and all blank entries are 0. Note that *Res(f(x), g(x))* is the determinant of a square matrix of size *m+n*.
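
As a small computational companion to this definition, here is a sketch in Python (the function names are my own, not from the text): it builds the Sylvester matrix from two coefficient lists, written highest degree first, and evaluates its determinant exactly with fraction-free (Bareiss) elimination.

```python
from fractions import Fraction

def det(rows):
    """Exact determinant via Bareiss (fraction-free) elimination."""
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    sign = 1
    prev = Fraction(1)
    for k in range(n - 1):
        if a[k][k] == 0:  # swap in a nonzero pivot if one exists
            for r in range(k + 1, n):
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]
                    sign = -sign
                    break
            else:
                return Fraction(0)  # whole column is zero: singular matrix
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) / prev
            a[i][k] = Fraction(0)
        prev = a[k][k]
    return sign * a[n - 1][n - 1]

def sylvester_resultant(f, g):
    """Res(f, g) for coefficient lists [a_m, ..., a_1, a_0], highest first."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    # n shifted rows of f's coefficients, then m shifted rows of g's,
    # exactly as in the definition of the Sylvester matrix.
    rows = [[0] * i + list(f) + [0] * (size - m - 1 - i) for i in range(n)]
    rows += [[0] * i + list(g) + [0] * (size - n - 1 - i) for i in range(m)]
    return det(rows)

# f(x) = x^2 - 3x + 2 and g(x) = x^2 - 1 share the root x = 1:
print(sylvester_resultant([1, -3, 2], [1, 0, -1]))  # prints 0
```

The matrix assembled here has size *m+n*, matching the definition; exact rational arithmetic avoids the rounding issues a floating-point determinant could introduce.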


__Example__ If

$$f(x) = x^2 - 1, \qquad g(x) = x - 2,$$

then *m* = 2, *n* = 1, and

$$\operatorname{Res}(f(x), g(x)) = \det\begin{pmatrix} 1 & 0 & -1 \\ 1 & -2 & 0 \\ 0 & 1 & -2 \end{pmatrix} = 3.$$

As a generalization of the first example, we have the following:

**Theorem** Let

$$f(x) = a_mx^m + \cdots + a_1x + a_0, \qquad g(x) = b_nx^n + \cdots + b_1x + b_0$$

be two polynomials of degree *m* and *n* respectively, with $a_m \neq 0$ and $b_n \neq 0$. Then the system

$$f(x) = 0, \qquad g(x) = 0$$

has a solution if and only if *Res(f(x), g(x)) = 0.*

__Example 1__ Determine whether the system

$$x^2 - 3x + 2 = 0, \qquad x^2 - 1 = 0$$

has solutions.

__Solution__ We compute the resultant of the two polynomials $f(x) = x^2 - 3x + 2$ and $g(x) = x^2 - 1$:

$$\operatorname{Res}(f(x), g(x)) = \det\begin{pmatrix} 1 & -3 & 2 & 0 \\ 0 & 1 & -3 & 2 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \end{pmatrix} = 0;$$

therefore, the polynomials *f(x), g(x)* have a common root by the above theorem. (Indeed, *x* = 1 is a common root.)
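
A vanishing resultant is equivalent to the two polynomials having a common factor of positive degree, so the conclusion can be cross-checked with the Euclidean algorithm. The sketch below (my own illustration; the pair of quadratics is chosen to have the common root *x* = 1) computes the polynomial gcd over the rationals.

```python
from fractions import Fraction

def poly_mod(a, b):
    """Remainder of a(x) divided by b(x); coefficient lists, highest first."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        factor = a[0] / b[0]
        pad = [Fraction(0)] * (len(a) - len(b))
        a = [c - factor * d for c, d in zip(a, b + pad)]
        a = a[1:]  # the leading term cancels by construction
    while a and a[0] == 0:  # strip leading zeros of the remainder
        a = a[1:]
    return a

def poly_gcd(a, b):
    """Euclidean algorithm; result is normalized to leading coefficient 1."""
    while b:
        a, b = b, poly_mod(a, b)
    return [c / a[0] for c in a]

# gcd of f(x) = x^2 - 3x + 2 and g(x) = x^2 - 1 is x - 1,
# confirming the common root x = 1 found via the resultant.
print(poly_gcd([1, -3, 2], [1, 0, -1]))
```

A gcd of degree at least 1 corresponds exactly to the case *Res(f(x), g(x)) = 0* in the theorem; a constant gcd corresponds to a nonzero resultant.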

One can use the above theorem to determine whether a polynomial system in more than one variable has a solution. The trick is to view the polynomials in the system as polynomials in one variable whose coefficients are polynomials in the other variables. The next example illustrates this idea.

__Example 2__ Solve the following system:

$$x^2 + 2y^2 - 2y - 1 = 0, \qquad x^2 + y^2 - 2x - 2 = 0.$$

Viewing the two polynomials as polynomials in *y* whose coefficients are polynomials in *x*,

$$f(y) = 2y^2 - 2y + (x^2 - 1), \qquad g(y) = y^2 + (x^2 - 2x - 2),$$

in order to have a common solution, one must have:

$$\operatorname{Res}(f, g) = \det\begin{pmatrix} 2 & -2 & x^2 - 1 & 0 \\ 0 & 2 & -2 & x^2 - 1 \\ 1 & 0 & x^2 - 2x - 2 & 0 \\ 0 & 1 & 0 & x^2 - 2x - 2 \end{pmatrix} = 0.$$

This is equivalent to

$$x^4 - 8x^3 + 14x^2 + 16x + 1 = 0.$$

The problem is then reduced to solving a polynomial equation in the single variable *x.* Although the solution of this equation does not look easy, one can use a numerical approach to estimate the solutions. These solutions correspond to the *x*-coordinates of the intersection points. Note also that the original system can be written as

$$x^2 + 2\left(y - \tfrac{1}{2}\right)^2 = \tfrac{3}{2}, \qquad (x - 1)^2 + y^2 = 3,$$

so any solution of the system is an intersection point of an ellipse and a circle, which can also be found geometrically.
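
To make the numerical step concrete, here is a short sketch (my own; it assumes the illustrative system of an ellipse $x^2 + 2y^2 - 2y - 1 = 0$ and a circle $x^2 + y^2 - 2x - 2 = 0$, whose resultant with respect to *y* is the quartic below). Bisection approximates the two real roots of the quartic, and each root is then completed to a full solution (*x*, *y*) of the system.

```python
def p(x):
    # Resultant with respect to y of the assumed system
    # x^2 + 2y^2 - 2y - 1 = 0 and x^2 + y^2 - 2x - 2 = 0.
    return x**4 - 8*x**3 + 14*x**2 + 16*x + 1

def bisect(f, lo, hi, steps=80):
    # Shrink a sign-changing interval [lo, hi] around a root of f.
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# p changes sign on (-1, -0.5) and (-0.5, 0); these brackets contain the
# x-coordinates of the two real intersection points of the two curves.
for x in (bisect(p, -1.0, -0.5), bisect(p, -0.5, 0.0)):
    # Subtracting the two equations and simplifying gives y linearly in x.
    y = (3 + 4*x - x**2) / 2
    assert abs(x**2 + 2*y**2 - 2*y - 1) < 1e-9  # first equation holds
    assert abs(x**2 + y**2 - 2*x - 2) < 1e-9    # second equation holds
```

The assertions confirm that each numerically estimated *x*-coordinate does extend to a common point of the two curves, as the theorem predicts for the roots of the resultant.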