# Ill-Conditioned Eigenvalues

## Introduction

For some models, the computation of eigenvalues produces the warning message *"Numerically Ill-conditioned eigenvalues"*. In some cases, these warnings can be expected and ignored. In other cases, however, such warnings indicate modeling or parametrization problems. This section and the information given in the Output Area help you analyze your model. The warning message includes a list of the ill-conditioned eigenvalues (see the following figure).

Figure 1: Finding the eigenvalues given in the Output Area in the dialog window *Natural Frequencies and Modes*.

More detailed information about the ill-conditioned eigenvalues is available in the Output Area if you activate the tracing option for the ill-conditioned eigensystem (*Simulation* → *Properties* → *Tracing* → *Linear System Analysis* → *Ill-conditioned eigensystem*). As shown in the following figure, the Output Area displays the eigenvalues with a higher accuracy than the warning message. In addition to the eigenvalues, the information includes the corresponding reciprocal condition numbers and eigenvectors. The eigenvalues in the warning message and in the additional information are both sorted by ascending reciprocal condition number, which makes it much easier to match the eigenvalues between the two output items.

Figure 2: Detailed information about ill-conditioned eigenvalues.

The *reciprocal condition number* is an indicator for the condition of an eigenvalue. It represents the ratio between the maximum admissible perturbation of the system matrices and a given tolerance for the eigenvalue. If the reciprocal condition number is very small, the error in the system matrix must be small to keep the eigenvalue within the tolerance range. So, if your system matrix exhibits a certain degree of error, a smaller reciprocal condition number means that a potential error has a larger effect on the calculated eigenvalue. The numerical algorithm for the calculation of the eigenvalues already introduces some numerical noise, which can be interpreted as a disturbance of the system matrix. For this reason, an ill-conditioned eigenvalue must always be treated with caution, even if the system matrices show no numerical errors. Despite the warning message, the numerically computed eigenvalues may still be close to the exact values, but they can also be far off the mark. Eigenvalues with larger reciprocal condition numbers are not affected by this problem. Eigenvalues with very small reciprocal condition numbers are reported in a warning message in the Output Area.
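The reciprocal condition numbers described above can also be computed outside of SimulationX from the left and right eigenvectors of a system matrix. The following sketch uses SciPy; the matrices are arbitrarily chosen illustrations, not part of the original example models:

```python
import numpy as np
from scipy.linalg import eig

def recip_condition_numbers(A):
    """Reciprocal condition number s_i = |w_i' v_i| / (||w_i|| ||v_i||)
    for each eigenvalue, from the left (w) and right (v) eigenvectors."""
    lam, W, V = eig(A, left=True, right=True)
    # LAPACK returns unit-norm eigenvector columns, so the denominator is 1
    s = np.abs(np.einsum("ij,ij->j", W.conj(), V))
    return lam, s

# symmetric matrix: left and right eigenvectors coincide, so s == 1
lam1, s1 = recip_condition_numbers(np.array([[2.0, 1.0],
                                             [1.0, 2.0]]))

# strongly non-normal matrix: almost parallel eigenvectors, so s << 1
lam2, s2 = recip_condition_numbers(np.array([[1.0, 1.0e6],
                                             [0.0, 1.0 + 1.0e-3]]))
```

For the symmetric matrix every reciprocal condition number is 1, while for the non-normal matrix the values drop many orders of magnitude below 1, flagging its eigenvalues as ill-conditioned.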

The next section, "Examples", presents some basic model configurations that can lead to ill-conditioned eigenvalues. Should you have any problems with the notation in the "Examples" section, please refer to the section "General Notes and Basic Facts". That section provides a brief explanation of the common definitions used in the "Examples" section and describes how a system may cause ill-conditioned eigenvalues.

## Examples

### A simple mechanical example

A simple example of a mechanical model exhibiting ill-conditioned eigenvalues is a free mass. The following figure shows the Diagram View of the example model IllConditionedEigenvaluesMech1DFreeMass.isx illustrating one possible implementation.

Figure 3: Model View of a simple mechanical example of ill-conditioned eigenvalues

If you only add one element of type `Mechanics.Translation.Mass` to a new model, the simple equation of motion is solved symbolically, so that no ODE states remain to represent the system. In this case, the calculation of eigenvalues does not yield the expected results, since it uses the Jacobian matrix for the states and their time derivatives. The symbolic integration can be switched off on the page *Symbolic Analysis* in the *Simulation Control*. The alternative is a more complex right-hand side of the equation of motion. An element of type `Mechanics.Translation.Source` is sufficient to define such a right-hand side. Now enter a time-dependent expression for the force `F` of the source, such as `if time < 0.5*tStop then 0 else 1`.

The equation system for a free mass can be written as

$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix} x \\ v \end{pmatrix} = \underbrace{\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}}_{A}\begin{pmatrix} x \\ v \end{pmatrix} + \begin{pmatrix} 0 \\ F/m \end{pmatrix}.$$

The system matrix $A$ has the eigenvalue 0 with an algebraic multiplicity of 2 and a geometric multiplicity of 1. We analyze the system with a small disturbance $\varepsilon$ in the lower-left entry of the system matrix:

$$A_\varepsilon = \begin{pmatrix} 0 & 1 \\ \varepsilon & 0 \end{pmatrix}$$

Its eigenvalues are

$$\lambda_{1,2} = \pm\sqrt{\varepsilon}\,.$$

At $\varepsilon = 0$, the absolute value $\sqrt{\varepsilon}$ of the eigenvalues grows faster than the disturbance $\varepsilon$; its slope is even infinite at $\varepsilon = 0$. As a consequence, the reciprocal condition number is zero and the eigenvalue zero is ill-conditioned. If we now assume a disturbance of the system matrix of $\varepsilon = 10^{-16}$, the absolute value of the eigenvalues lies within the range of $10^{-8}$. This may just be enough for the application at hand. So, an ill condition of numerically computed eigenvalues does not necessarily imply that they are far off from their true values (often they are not). Nevertheless, you should be wary of the eigenvalues in such a case, because they could be incorrect.
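The effect can be reproduced with a few lines of NumPy; this is an illustrative sketch of the free-mass example, not SimulationX functionality:

```python
import numpy as np

# disturbed system matrix of the free mass
eps = 1e-16
A_eps = np.array([[0.0, 1.0],
                  [eps, 0.0]])

lam = np.linalg.eigvals(A_eps)
# the exact eigenvalues of A_eps are +/- sqrt(eps) = +/- 1e-8:
# a disturbance of 1e-16 moves the double eigenvalue 0 by 1e-8,
# i.e. eight orders of magnitude more than the disturbance itself
```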

### Decoupled powertrain: Using the data of the info item from the Output Area

The sample model IllConditionedEigenvaluesDecoupledPowertrain.isx represents a powertrain with an open `clutch` which decouples a freely spinning wheel `inertia2` from the rest of the powertrain (see next figure).

Figure 4: `inertia2` decoupled from the rest of the powertrain.

As in the simple mechanical example model, the free wheel corresponds to a pair of ill-conditioned eigenvalues at zero. The model also has four more eigenvalues for the inertias of the `engine` and `inertia1`. Further eigenvalues relate to the internal variables of the engine and the clutch. SimulationX lists the reciprocal condition numbers with very small values as part of the information shown in the Output Area. Each reciprocal condition number is shown with the corresponding eigenvalue and eigenvector. The eigenvector components can be identified through the names of the variables from the model and are sorted by their absolute values in descending order. Components with a large absolute value often have a major impact on ill-conditioned eigenvalues. In contrast, components with very small absolute values at the end of the list are insignificant for ill-conditioned eigenvalues. The information from the Output Area of the example model is given below. It clearly identifies `inertia2` as the component that causes the ill-conditioned eigenvalue.

In addition to the warning message for ill-conditioned eigenvalues, there is also a warning message for ill-conditioned eigenvectors, which has its own help page.

### Strong Deviations and Balancing of Eigenvalues

This example illustrates why SimulationX does not show a warning message for some models with extremely ill-conditioned eigenvalues and why the program is nonetheless able to compute the exact eigenvalues for those models. It includes a modified model where the numerically computed eigenvalues deviate strongly from the exact ones. The system matrix $A$ has a perturbation $\delta$ with an absolute value that is much smaller than 1. In exact arithmetic, the eigenvalues of $A$ are independent of $\delta$, but their reciprocal condition numbers are in the range of $\delta$ and deteriorate with decreasing $\delta$. SimulationX does not notify you about ill-conditioned eigenvalues for this system matrix even if $\delta$ is as small as $10^{-16}$, because it can balance the highly asymmetric matrix. Note that the left entry at the bottom of $A$ is much smaller than the right entry at the top. The actual eigenvalue algorithm is applied to the very well-conditioned, balanced matrix, which has a reciprocal condition number in the range of 1. The reciprocal condition number of an eigenvalue depends on the eigenvectors of $A$: the closer the eigenvectors are to being orthogonal to each other, the larger the reciprocal condition number. The eigenvectors of $A$ are visualized in Figure 5. Even for a relatively large value of $\delta$, the angle between the eigenvectors is already quite small.

Figure 5: Eigenvectors of the system matrix.

Balancing transforms the space of the eigenvalue problem by scaling the y axis (in the visualized case with a factor of 10). As Figure 6 shows, the eigenvectors are almost perpendicular to each other in the scaled space, resulting in very well-conditioned eigenvalues for the scaled matrix.

Figure 6: Eigenvectors after balancing the matrix
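The effect of balancing can be reproduced numerically. The matrix below is an illustrative stand-in (not necessarily the matrix from the example model): its lower-left entry is much smaller than its upper-right one, its eigenvalues are $\pm 1$, and a diagonal similarity transformation makes its eigenvectors orthogonal:

```python
import numpy as np

delta = 1e-2
# illustrative strongly asymmetric matrix with eigenvalues +1 and -1
A = np.array([[0.0,   1.0 / delta],
              [delta, 0.0        ]])

# balance with the diagonal similarity transformation D = diag(1, delta):
# inv(D) @ A @ D rescales the second coordinate and yields [[0, 1], [1, 0]]
D = np.diag([1.0, delta])
A_bal = np.linalg.inv(D) @ A @ D

# eigenvectors of A are nearly parallel, those of A_bal are orthogonal
_, V = np.linalg.eig(A)
_, V_bal = np.linalg.eig(A_bal)
cos_A = abs(np.vdot(V[:, 0], V[:, 1]))        # cosine of the angle, near 1
cos_bal = abs(np.vdot(V_bal[:, 0], V_bal[:, 1]))  # near 0 after balancing
```

The eigenvalues of `A` and `A_bal` are identical (the transformation is a similarity transformation), but only the balanced matrix has well-conditioned eigenvalues.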

Balancing can only partially improve the condition of the eigenvalues. For instance, rotating the coordinate system in the example (see Figure 7) prevents a larger angle between the eigenvectors when one coordinate axis is scaled, and the eigenvalues remain ill-conditioned after all.

Figure 7: Eigenvectors in a rotated coordinate system

The system is implemented in the example model IllConditionedEigenvaluesBalancing, where you can adjust $\delta$ for the original matrix and for the matrix with rotated eigenvectors. SimulationX computes the exact eigenvalues for the model with the original matrix as the system matrix. If the system matrix is rotated, SimulationX displays a warning message about ill-conditioned eigenvalues with the following information (in case the tracing option *Linear System Analysis* → *Ill-conditioned Eigensystem* is activated).

The stated eigenvalue has no valid digits in comparison with its exact value, and the other numerically computed eigenvalue has only one valid digit compared with its exact counterpart.

## General Notes and Basic Facts

Section "Linear System Analysis" explains how the analysis of linearized models leads to a generalized eigenvalue problem. It outlines the generalized and the specific eigenvalue problem and provides some basic facts to help you interpret the warning message about ill-conditioned eigenvalues. Given real square system matrices $A$ and $B$, the *generalized eigenvalue problem* consists in finding all complex values $\lambda$ for which the linear system

$$A\,v = \lambda\,B\,v$$

has a solution $v$ different from the zero vector.

Such a $\lambda$ is an *eigenvalue* of the matrix pair $(A, B)$, and $v$ is the corresponding *right eigenvector*. Alternatively, you can also solve the system for the corresponding *left eigenvectors*. These are non-zero solutions $w$ of the linear system

$$w'\,A = \lambda\,w'\,B$$

or equivalently $A'\,w = \bar{\lambda}\,B'\,w$.

The single quote is adopted from numerical algebra programs like Octave or Matlab. It indicates the transpose of real matrices and the conjugate transpose of complex matrices. If the eigenvalues of the matrix pair $(A, B)$ are ill-conditioned, small disturbances in $A$ and $B$ can cause large deviations in the eigenvalues. These effects can already be observed in systems of explicit ordinary differential equations. The matrix $B$ is the identity matrix for such systems, which is the difference between the *specific eigenvalue problem* and the generalized one.

The eigenvalue equation can only have a nontrivial solution if the determinant $\det(A - \lambda B)$ is zero. This determinant is a polynomial $p$ in the variable $\lambda$ called the *characteristic polynomial* of the matrix pair $(A, B)$. Under the assumption that the system is structurally regular, the degree of the characteristic polynomial is directly related to the number of finite eigenvalues of the matrix pair and, for the specific eigenvalue problem, equals the matrix dimension. The *algebraic multiplicity* of an eigenvalue $\lambda$ is the multiplicity of $\lambda$ as a zero of the characteristic polynomial. The *geometric multiplicity* of an eigenvalue $\lambda$ is the dimension of the nullspace of the matrix $A - \lambda B$.

For example, the characteristic polynomial of the matrix pair

$$\left(\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\;\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right)$$

is $p(\lambda) = \lambda^2$, and $\lambda = 0$ is a zero of $p$ with the multiplicity *2*. Consequently, $\lambda = 0$ is an eigenvalue of the matrix pair with an algebraic multiplicity of *2*. The corresponding eigenvalue equation $A\,v = 0$ has the general solution $v = (\alpha, 0)'$ with an arbitrary $\alpha$, and the nullspace of $A$ is one-dimensional. So the geometric multiplicity of the eigenvalue at *0* is *1*.

A coordinate change $x = T\,y$ with a regular matrix $T$ does not change the solvability of the equation. The eigenvalues for

$$A\,T\,v = \lambda\,B\,T\,v$$

remain the same as for the untransformed system. If $B$ is the identity matrix, the transformed equation reads $A\,T\,v = \lambda\,T\,v$. Multiplication of that equation by $T^{-1}$ does not change the solvability of the equation and turns it into the specific eigenvalue problem:

$$T^{-1}A\,T\,v = \lambda\,v$$

Products of the form $T^{-1}A\,T$ with regular $T$ are called *similarity transformations*. Eigenvalues are invariant under similarity transformations. The following paragraph focuses on the specific eigenvalue problem, explaining how a disturbance $\delta A$ of the system matrix $A$ causes a disturbance $\delta\lambda$ in its eigenvalue $\lambda$ and a disturbance $\delta v$ in its eigenvector $v$. Replacing $A$, $\lambda$ and $v$ in the eigenvalue equation with the perturbed quantities $A + \delta A$, $\lambda + \delta\lambda$ and $v + \delta v$ respectively, while neglecting the terms of second order, such as $\delta\lambda\,\delta v$, gives you the equation of first order

$$A\,\delta v + \delta A\,v = \lambda\,\delta v + \delta\lambda\,v$$

for the perturbations.

Multiplication with the transposed left eigenvector $w'$ eliminates the terms with the factor $\delta v$ because of the left eigenvalue equation $w'\,A = \lambda\,w'$. Re-arranging the terms gives you the formula

$$\delta\lambda = \frac{w'\,\delta A\,v}{w'\,v}$$

for the perturbation of the eigenvalue. This allows for the estimation

$$|\delta\lambda| \le \frac{\|w\|\,\|v\|}{|w'\,v|}\,\|\delta A\|$$

for the eigenvalue perturbation and the definition

$$s(\lambda) = \frac{|w'\,v|}{\|w\|\,\|v\|}$$

for the reciprocal condition number of the eigenvalue $\lambda$. The reciprocal condition number is the absolute value of the cosine of the angle between the left and the right eigenvector $w$ and $v$, respectively (for more details, refer to section 7.2 *"Perturbation Theory"* of [7]).
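The first-order formula for $\delta\lambda$ can be checked numerically. The sketch below uses an arbitrarily chosen triangular matrix with distinct eigenvalues (an illustration, not taken from the original text) and compares the predicted perturbations with the actually computed eigenvalues of the perturbed matrix:

```python
import numpy as np
from scipy.linalg import eig

# fixed test matrix with distinct eigenvalues 1, 3 and 5
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
# small perturbation of the system matrix
dA = 1e-7 * np.array([[0.3, -0.1,  0.2],
                      [0.5,  0.4, -0.2],
                      [0.1,  0.6,  0.3]])

lam, W, V = eig(A, left=True, right=True)
lam_pert = np.linalg.eigvals(A + dA)

# first-order prediction dlam = w' dA v / (w' v) for every eigenvalue
pred = np.array([(W[:, i].conj() @ dA @ V[:, i]) / (W[:, i].conj() @ V[:, i])
                 for i in range(3)])
# lam + pred agrees with lam_pert up to second-order terms in ||dA||
```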

Two eigenvalues with almost linearly dependent eigenvectors are ill-conditioned. This is also true the other way around: if a matrix exhibits an ill-conditioned eigenvalue, then it has at least two almost linearly dependent eigenvectors (see [8]). An eigenvalue whose algebraic multiplicity is higher than its geometric multiplicity also has a reciprocal condition number of zero and is therefore ill-conditioned. Before applying the QZ algorithm for the actual eigenvalue calculation and the computation of the reciprocal condition numbers, SimulationX *balances* the system matrix $A$. That means it determines a diagonal transformation matrix $D$ such that the eigenvalues of the system matrix $D^{-1}A\,D$ are better conditioned after this similarity transformation (see [9]).
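Balancing of this kind is available in standard numerical libraries as well. As an illustration (not SimulationX's internal implementation), SciPy's `matrix_balance` performs a diagonal similarity scaling; the badly scaled matrix below is an arbitrary example:

```python
import numpy as np
from scipy.linalg import eig, matrix_balance

# illustrative badly scaled matrix with eigenvalues +1 and -1
A = np.array([[0.0,    1.0e4],
              [1.0e-4, 0.0  ]])

# matrix_balance returns B = inv(T) @ A @ T with a diagonal scaling T
B, T = matrix_balance(A)

def min_recip_cond(M):
    """Smallest reciprocal eigenvalue condition number |w' v| of M."""
    _, W, V = eig(M, left=True, right=True)
    return np.abs(np.einsum("ij,ij->j", W.conj(), V)).min()

# the eigenvalues are unchanged, but their condition improves dramatically
s_before = min_recip_cond(A)   # tiny: eigenvalues of A are ill-conditioned
s_after = min_recip_cond(B)    # near 1: eigenvalues of B are well-conditioned
```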