Usually, the following problems are considered to be the standard problems of numerical linear algebra:
- Linear systems of equations: Solve Ax = b, where A is a given n-by-n nonsingular real or complex matrix, b is a given column vector of length n, and x is a column vector with n entries that we wish to compute.
- Least squares problems: Compute the vector x that minimizes ||Ax - b||_2, where A is a given m-by-n matrix, b is a given vector of length m, x is of length n, and ||y||_2 is the two-norm of the vector y. If m > n, we have more equations than unknowns and the system is called overdetermined. If m < n, the system is called underdetermined and there will be infinitely many solutions if no further conditions are imposed.
- Eigenvalue problems: Given an n-by-n matrix A, find a nonzero vector x of length n and a scalar λ such that Ax = λx. (All three problems are illustrated in the short sketch following this list.)
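To make these formulations concrete, here is a minimal sketch in Python with NumPy, one LAPACK-based environment among many (the matrices, right-hand sides, and dimensions are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Linear system: solve Ax = b for a square nonsingular A.
A = rng.standard_normal((3, 3))
b = rng.standard_normal(3)
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))                        # True

# 2. Least squares: minimize ||Ax - b||_2 for an overdetermined system.
M = rng.standard_normal((5, 3))                     # m > n: more equations than unknowns
c = rng.standard_normal(5)
x_ls, *_ = np.linalg.lstsq(M, c, rcond=None)

# 3. Eigenvalue problem: find a scalar lambda and x != 0 with Ax = lambda*x.
lam, X = np.linalg.eig(A)
print(np.allclose(A @ X[:, 0], lam[0] * X[:, 0]))   # True
```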
These standard problems often arise in engineering and scientific practice; they are at the heart of most calculations in scientific computing. A number of well-understood methods for solving these problems already exist, along with reliable, efficient, and thoroughly tested numerical software. Such public-domain software can be obtained from http://www.netlib.org. The netlib service provides quick, easy, and efficient distribution of public-domain software to the scientific community, and it contains various items of software and test problems which may be useful. In particular, the state-of-the-art collection LAPACK of subroutines for solving the most commonly occurring problems in numerical linear algebra is available there. A number of commercial software products like MATLAB (a registered trademark of The MathWorks) provide a much easier-to-use environment based on LAPACK (at the cost of some performance).
But the ever-increasing complexity and size of the problems to be solved require steady progress in developing new algorithms and implementations, even for these standard problems. In particular, the recent availability of advanced-architecture computers has had a significant impact on all fields of scientific computation, including algorithm research and software development in numerical linear algebra. New implementations of known algorithms, or entirely new algorithms, are needed for each new architecture in order to exploit its features.
There are also many variations of these standard problems, e.g.,
- generalized eigenvalue problems: Ax = λBx for general square matrices A and B,
- nonlinear eigenvalue problems: f(λ)x = 0, where f is a given matrix-valued function, e.g., the quadratic eigenvalue problem with f(λ) = λ^2 M + λC + K, where the stiffness matrix K and the mass matrix M are real symmetric and positive (semi-)definite and the damping matrix C is real,
- rank-deficient least-squares problems ||Ax - b||_2 = min whose solutions are nonunique because the columns of A are linearly dependent,
- nonlinear systems of equations F(x) = 0, where F maps R^n to R^n, i.e., there are as many equations as unknowns,
- nonlinear least-squares problems ||F(x)||_2^2 = min, where F maps R^n to R^m with m ≠ n,
- matrix equations, like the linear Sylvester equation AX - XB = C or the nonlinear rational discrete-time Riccati equation X = A^TXA - A^TXB(R + B^TXB)^{-1}B^TXA + Q (several of these variations are illustrated in the sketch following this list).
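Several of these variations are directly accessible through library routines. The following sketch (Python with SciPy assumed, with random placeholder data) linearizes a quadratic eigenvalue problem and calls solvers for the Sylvester and discrete-time Riccati equations:

```python
import numpy as np
from scipy.linalg import eig, solve_sylvester, solve_discrete_are

rng = np.random.default_rng(1)
n = 3

# Quadratic eigenvalue problem (lambda^2 M + lambda C + K)x = 0, reduced by
# linearization to a generalized eigenvalue problem Az = lambda*Bz of size 2n.
Mm = np.eye(n)                                     # mass matrix
Cc = rng.standard_normal((n, n))                   # damping matrix
Kk = rng.standard_normal((n, n))
Kk = Kk @ Kk.T + n * np.eye(n)                     # stiffness matrix, s.p.d. by construction
Abig = np.block([[np.zeros((n, n)), np.eye(n)], [-Kk, -Cc]])
Bbig = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), Mm]])
lam = eig(Abig, Bbig, right=False)                 # the 2n eigenvalues of the QEP

# Sylvester equation AX - XB = C0 (SciPy solves AX + XB = Q, hence the -B).
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C0 = rng.standard_normal((n, n))
X = solve_sylvester(A, -B, C0)
print(np.allclose(A @ X - X @ B, C0))              # True

# Discrete-time algebraic Riccati equation
# X = A^T X A - A^T X B (R + B^T X B)^{-1} B^T X A + Q.
Bu = rng.standard_normal((n, 1))
Q, R = np.eye(n), np.eye(1)
X = solve_discrete_are(A, Bu, Q, R)
gain = np.linalg.solve(R + Bu.T @ X @ Bu, Bu.T @ X @ A)
print(np.allclose(X, A.T @ X @ A - A.T @ X @ Bu @ gain + Q))   # True
```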
For a number of these problems, there is an urgent need for more research, both on the development of methods and on the associated perturbation and error analysis. For example, essentially no black-box software packages for nonlinear eigenvalue problems are available that reach the standard of those for linear eigenvalue problems.
These problems stem from applications in all kinds of applied sciences: mechanical and electrical engineering as well as medical, pharmaceutical, and biological problems. Hence, the field of applied and numerical linear algebra is more interesting, and more fundamental, than its rather dull name may suggest. It is full of powerful ideas that are quite unlike those normally associated with linear algebra. 'It is here that one finds the essential ideas that every (mathematical) scientist needs to work effectively with vectors and matrices. In fact, our subject is more than just vectors and matrices, for virtually everything we do carries over to functions and operators. Numerical linear algebra is really functional analysis, but with the emphasis always on practical algorithmic ideas rather than mathematical technicalities.' (Quote from the preface of Numerical Linear Algebra by L.N. Trefethen and D. Bau, III, SIAM, 1997.)
There are several general concepts and techniques used in solving problems in applied and numerical linear algebra which we briefly discuss in the following:
- Matrix factorizations: A factorization of a matrix A is a representation of A as a product of several simpler matrices which make the problem easier to solve. This idea is used, e.g., in solving a linear system of equations Ax = b by Gaussian elimination. This method can be understood as factorizing A into the product LR of a lower triangular matrix L and an upper triangular matrix R. Solving Ax = b is then equivalent to solving Lz = b and Rx = z, which can be done by simple processes called forward and backward substitution. When solving a standard eigenvalue problem Ax = λx, a nonsingular matrix S is sought such that S^{-1}AS = D has a simple form from which the eigenvalues can be read off. That is, A is factorized as A = SDS^{-1}. The information about the eigenvectors can be read off from the columns of S. The standard least-squares problem ||Ax - b||_2 = min is usually solved by factorizing A as QR, where Q is a matrix with orthonormal columns and R is upper triangular, or by factorizing A into its singular value decomposition UΣV^H, where U and V are unitary matrices and Σ is diagonal.
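For illustration, the following Python/NumPy/SciPy sketch (the environment is our assumption; the text itself is language-agnostic) mirrors the factorizations just described:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

# Gaussian elimination as a factorization A = PLR (P a permutation from
# pivoting): forward substitution Lz = P^T b, backward substitution Rx = z.
P, L, R = lu(A)
z = solve_triangular(L, P.T @ b, lower=True)
x = solve_triangular(R, z, lower=False)
print(np.allclose(A @ x, b))                       # True

# Eigenvalue decomposition A = S D S^{-1}: eigenvalues on the diagonal of D,
# eigenvectors in the columns of S.
d, S = np.linalg.eig(A)
print(np.allclose(A @ S, S @ np.diag(d)))          # True

# Least squares via QR: A = QR with orthonormal columns in Q, R upper triangular.
M = rng.standard_normal((6, 4))                    # overdetermined
c = rng.standard_normal(6)
Q, Rq = np.linalg.qr(M)
x_qr = solve_triangular(Rq, Q.T @ c, lower=False)

# ... or via the singular value decomposition A = U Sigma V^H.
U, sigma, Vh = np.linalg.svd(M, full_matrices=False)
x_svd = Vh.T @ ((U.T @ c) / sigma)
print(np.allclose(x_qr, x_svd))                    # True
```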
- Structure exploitation: If physical information about the problem can be exploited, more efficient and robust methods can be tailored to its solution. If it is known that in the linear system of equations Ax = b the matrix A is symmetric and positive definite, then the use of a variant of Gaussian elimination called Cholesky factorization will save half the work compared to solving a system with a general matrix A. If it is further known that A is a banded matrix (that is, a_{ij} = 0 for |i - j| > m for some m), then the cost of the computation can be further reduced by using a banded Cholesky algorithm. Similarly, there exist specially tailored eigenvalue methods for computing the eigenvalues of symmetric matrices. These methods not only decrease the number of floating-point operations needed to solve the problem, they also preserve the symmetric structure of the problem in the course of the computation, so that the computed eigenvalues will all be real - something a general-purpose eigenvalue solver cannot guarantee for real symmetric matrices, even though the eigenvalues of such matrices are theoretically real.
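The following sketch (Python with SciPy assumed, matrices made up) contrasts structure-exploiting routines with general-purpose ones:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, solveh_banded

rng = np.random.default_rng(3)
G = rng.standard_normal((5, 5))
A = G @ G.T + 5 * np.eye(5)           # symmetric positive definite by construction
b = rng.standard_normal(5)

# Cholesky factorization A = LL^T: roughly half the work of Gaussian
# elimination on a general matrix, and no pivoting is required.
x = cho_solve(cho_factor(A), b)
print(np.allclose(A @ x, b))          # True

# Banded s.p.d. system (tridiagonal: a_ij = 0 for |i - j| > 1) solved by a
# banded Cholesky routine; only the nonzero diagonals are stored.
ab = np.array([[0., -1., -1., -1., -1.],     # superdiagonal (first entry unused)
               [2.,  2.,  2.,  2.,  2.]])    # main diagonal
x_band = solveh_banded(ab, b)

# A symmetric eigensolver returns eigenvalues that are real by construction,
# whereas a general-purpose solver cannot guarantee this in floating point.
lam_sym = np.linalg.eigh(A)[0]        # real dtype guaranteed
lam_gen = np.linalg.eig(A)[0]         # realness not guaranteed a priori
print(lam_sym.dtype)                  # float64
```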
- Perturbation theory and condition numbers: Results produced by numerical algorithms are rarely exactly correct, even assuming exact arithmetic. There are two major sources of errors in the result. First, errors caused by the algorithm itself, e.g., due to approximations made in the algorithm. Second, the input data may not be exact, e.g., due to measurement errors or errors from prior computations. An important question in this context is how much the solution of a problem f changes if the input data are slightly perturbed. That is, if instead of the input data x we are given x + e, will the absolute error |f(x + e) - f(x)| be small? Using the linear approximation f(x) + ef'(x) to f(x + e), the absolute error is approximately given by |e| · |f'(x)|. If the absolute condition number |f'(x)| is large, then the error may be large even if e is small. In this case, f is called ill-conditioned at x.
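A tiny numerical experiment, sketched here in Python/NumPy, makes the effect visible; tan near pi/2 and the Hilbert matrix are our choices of ill-conditioned examples:

```python
import numpy as np
from scipy.linalg import hilbert

# Scalar case: f(x) = tan(x) near pi/2 has a huge derivative, hence a huge
# absolute condition number |f'(x)| = 1/cos(x)^2, about 1e8 here.
x = np.pi / 2 - 1e-4
e = 1e-8                                    # tiny perturbation of the input
print(abs(np.tan(x + e) - np.tan(x)))       # error of order 1: magnified by 1e8
print(e / np.cos(x) ** 2)                   # the prediction |e| * |f'(x)|

# Matrix analogue: the condition number kappa(A) = ||A|| * ||A^{-1}|| bounds
# the error amplification when solving Ax = b. Hilbert matrices are a
# classic ill-conditioned example.
print(np.linalg.cond(hilbert(10)))          # about 1.6e13
```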
- Effect of roundoff error: When a numerical algorithm is actually implemented on a computer and used to calculate the desired result, this result will inevitably be corrupted by roundoff errors due to floating-point arithmetic. Clearly, it is desirable that the computed result alg(x) is close to the exact result f(x). Usually, it is required that alg(x) = f(x + e) for some small e; an algorithm with this property is called backward stable.
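The following Python/NumPy sketch (assumed environment) shows the size of machine epsilon and checks this backward-error requirement for a linear solve:

```python
import numpy as np

print(np.finfo(np.float64).eps)        # machine epsilon, about 2.2e-16
print(0.1 + 0.2 == 0.3)                # False: decimal fractions are inexact in binary

# Backward error of a computed solution xhat of Ax = b: for a backward
# stable solver (here LAPACK's Gaussian elimination with partial pivoting),
# ||A xhat - b|| / (||A|| ||xhat||) is a modest multiple of machine epsilon.
rng = np.random.default_rng(4)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
xhat = np.linalg.solve(A, b)
eta = np.linalg.norm(A @ xhat - b) / (np.linalg.norm(A) * np.linalg.norm(xhat))
print(eta)                             # typically of order 1e-16
```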
- Analysis of the speed of an algorithm: The traditional way to estimate the time an algorithm takes is to count the floating-point operations (flops) that it performs. But this is often misleading on modern computer architectures: it might take significantly more time to move the data inside the computer to the place where it is to be used than it does to do the actual calculation. Moreover, if an algorithm is iterative (that is, a series of approximations to the solution is generated), it is important to know whether the convergence rate is linear, quadratic, or even faster.
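The difference between convergence rates is easy to see in a sketch: below, bisection and Newton's method for computing sqrt(2) (our illustrative choice, in Python) exhibit linear and quadratic convergence, respectively.

```python
import math

target = math.sqrt(2.0)

# Bisection converges linearly: the error shrinks by the constant factor 1/2
# in every step, i.e., each step gains one binary digit of accuracy.
lo, hi = 1.0, 2.0
for _ in range(10):
    mid = (lo + hi) / 2
    if mid * mid < 2.0:
        lo = mid
    else:
        hi = mid
print(abs((lo + hi) / 2 - target))     # error of order 1e-4 after 10 steps

# Newton's method x <- (x + 2/x)/2 converges quadratically: the number of
# correct digits roughly doubles per step.
x = 1.0
for _ in range(5):
    x = (x + 2.0 / x) / 2.0
print(abs(x - target))                 # at machine precision after 5 steps
```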
- Numerical software: Much of the current research in applied and numerical linear algebra leads to easy-to-use, reliable, and efficient numerical software. A huge effort is made to provide the scientific community with the latest implementations. Consequently, a lot of the software developed by the numerical linear algebra community has found its way into public-domain software (see http://www.netlib.org) or commercial products such as MATLAB. Besides general-purpose routines for numerical linear algebra problems, more focused software like SLICOT (see http://www.slicot.de) has been developed. SLICOT provides high-quality numerical software for problems arising in systems and control. All such software pays special attention to numerical stability and accuracy, robustness of the algorithms, performance with respect to speed and memory requirements, as well as to portability and reusability. A set of benchmark examples is usually provided.