diff --git a/books/bookvol10.4.pamphlet b/books/bookvol10.4.pamphlet
index f5526b9..addd4fa 100644
--- a/books/bookvol10.4.pamphlet
+++ b/books/bookvol10.4.pamphlet
@@ -77519,929 +77519,1877 @@ MultivariateSquareFree (E,OV,R,P) : C == T where
)set message auto off
)clear all
-S 1 of 1
+S 1 of 164
)show NagEigenPackage
+R
+R NagEigenPackage is a package constructor
+R Abbreviation for NagEigenPackage is NAGF02
+R This constructor is exposed in this frame.
+R Issue )edit bookvol10.4.pamphlet to see algebra source code for NAGF02
+R
+R Operations 
+R f02aaf : (Integer,Integer,Matrix(DoubleFloat),Integer) -> Result
+R f02abf : (Matrix(DoubleFloat),Integer,Integer,Integer,Integer) -> Result
+R f02adf : (Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R f02aef : (Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R f02aff : (Integer,Integer,Matrix(DoubleFloat),Integer) -> Result
+R f02agf : (Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Integer) -> Result
+R f02ajf : (Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R f02akf : (Integer,Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R f02awf : (Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R f02axf : (Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Integer,Integer,Integer,Integer,Integer) -> Result
+R f02bbf : (Integer,Integer,DoubleFloat,DoubleFloat,Integer,Integer,Matrix(DoubleFloat),Integer) -> Result
+R f02bjf : (Integer,Integer,Integer,DoubleFloat,Boolean,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R f02fjf : (Integer,Integer,DoubleFloat,Integer,Integer,Integer,Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Integer,Union(fn: FileName,fp: Asp27(DOT)),Union(fn: FileName,fp: Asp28(IMAGE))) -> Result
+R f02fjf : (Integer,Integer,DoubleFloat,Integer,Integer,Integer,Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Integer,Union(fn: FileName,fp: Asp27(DOT)),Union(fn: FileName,fp: Asp28(IMAGE)),FileName) -> Result
+R f02wef : (Integer,Integer,Integer,Integer,Integer,Boolean,Integer,Boolean,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R f02xef : (Integer,Integer,Integer,Integer,Integer,Boolean,Integer,Boolean,Integer,Matrix(Complex(DoubleFloat)),Matrix(Complex(DoubleFloat)),Integer) -> Result
+R
E 1
)spool
)lisp (bye)
\end{chunk}
\begin{chunk}{NagEigenPackage.help}
+)clear all
This package uses the NAG Library to compute
 * eigenvalues and eigenvectors of a matrix
 * eigenvalues and eigenvectors of generalized matrix
   eigenvalue problems
 * singular values and singular vectors of a matrix.
+S 2 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 2
 F02 -- Eigenvalues and Eigenvectors                Introduction -- F02
 Chapter F02
 Eigenvalues and Eigenvectors
+S 3 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 3
 1. Scope of the Chapter
+S 4 of 164
+ia:=4
+R
+R
+R (3) 4
+R Type: PositiveInteger
+E 4
 This chapter is concerned with computing
+S 5 of 164
+n:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 5
  eigenvalues and eigenvectors of a matrix
+S 6 of 164
+a:Matrix SF:=
+ [[ 0.5 , 0.0 , 2.3 ,2.6 ],_
+ [ 0.0 , 0.5 ,1.4 ,0.7 ],_
+ [ 2.3 ,1.4 , 0.5 , 0.0 ],_
+ [2.6 ,0.7 , 0.0 , 0.5 ]]
+R
+R
+R (5)
+R [[0.5,0.,2.2999999999999998, 2.5999999999999996],
+R [0.,0.5, 1.3999999999999999, 0.69999999999999996],
+R [2.2999999999999998, 1.3999999999999999,0.5,0.],
+R [ 2.5999999999999996, 0.69999999999999996,0.,0.5]]
+R Type: Matrix(DoubleFloat)
+E 6
  eigenvalues and eigenvectors of generalized matrix
 eigenvalue problems
+S 7 of 164
+ result:=f02aaf(ia,n,a,1)
+E 7
  singular values and singular vectors of a matrix.
+)clear all
 2. Background to the Problems
+S 8 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 8
 2.1. Eigenvalue Problems
+S 9 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 9
 In the most usual form of eigenvalue problem we are given a
 square n by n matrix A and wish to compute (lambda) (an
 eigenvalue) and x/=0 (an eigenvector) which satisfy the equation
+S 10 of 164
+a:Matrix SF:=
+ [[ 0.5 , 0.0 , 2.3 ,2.6 ],_
+ [ 0.0 , 0.5 ,1.4 ,0.7 ],_
+ [ 2.3 ,1.4 , 0.5 , 0.0 ],_
+ [2.6 ,0.7 , 0.0 , 0.5 ]]
+R
+R
+R (3)
+R [[0.5,0.,2.2999999999999998, 2.5999999999999996],
+R [0.,0.5, 1.3999999999999999, 0.69999999999999996],
+R [2.2999999999999998, 1.3999999999999999,0.5,0.],
+R [ 2.5999999999999996, 0.69999999999999996,0.,0.5]]
+R Type: Matrix(DoubleFloat)
+E 10
 Ax=(lambda)x
+S 11 of 164
+ia:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 11
 Such problems are called 'standard' eigenvalue problems in
 contrast to 'generalized' eigenvalue problems where we wish to
 satisfy the equation
+S 12 of 164
+n:=4
+R
+R
+R (5) 4
+R Type: PositiveInteger
+E 12
 Ax=(lambda)Bx
+S 13 of 164
+iv:=4
+R
+R
+R (6) 4
+R Type: PositiveInteger
+E 13
 B also being a square n by n matrix.
+S 14 of 164
+ result:=f02abf(a,ia,n,iv,1)
+E 14
 Section 2.1.1 and Section 2.1.2 discuss, respectively, standard
 and generalized eigenvalue problems where the matrices involved
 are dense; Section 2.1.3 discusses both types of problem in the
 case where A and B are sparse (and symmetric).
+)clear all
 2.1.1. Standard eigenvalue problems
+S 15 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 15
 Some of the routines in this chapter find all the n eigenvalues,
 some find all the n eigensolutions (eigenvalues and corresponding
 eigenvectors), and some find a selected group of eigenvalues
 and/or eigenvectors. The matrix A may be:
+S 16 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 16
 (i) general (real or complex)
+S 17 of 164
+ia:=4
+R
+R
+R (3) 4
+R Type: PositiveInteger
+E 17
 (ii) real symmetric, or
+S 18 of 164
+ib:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 18
 (iii) complex Hermitian (so that if a(ij)=(alpha)+i(beta) then
 a(ji)=(alpha)-i(beta)).
+S 19 of 164
+n:=4
+R
+R
+R (5) 4
+R Type: PositiveInteger
+E 19
 In all cases the computation starts with a similarity
 transformation S^(-1)AS=T, where S is non-singular and is the
 product of fairly simple matrices, and T has an 'easier form'
 than A so that its eigensolutions are easily determined. The
 matrices A and T, of course, have the same eigenvalues, and if y
 is an eigenvector of T then Sy is the corresponding eigenvector
 of A.
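This invariance is straightforward to confirm numerically. Below is a NumPy sketch (purely illustrative; A and S are arbitrary random matrices, not taken from the NAG examples) that forms T = S^(-1)AS and checks both claims:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # arbitrary square matrix
S = rng.standard_normal((4, 4))          # random, almost surely non-singular

# similarity transformation T = S^(-1) A S
T = np.linalg.solve(S, A @ S)

# A and T have the same eigenvalues ...
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(T)))

# ... and if y is an eigenvector of T, then Sy is an eigenvector of A
w, Y = np.linalg.eig(T)
x = S @ Y[:, 0]
assert np.allclose(A @ x, w[0] * x)
```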
+S 20 of 164
+a:Matrix SF:=
+ [[0.5 , 1.5 , 6.6 , 4.8 ],_
+ [1.5 , 6.5 ,16.2 , 8.6 ],_
+ [6.6 ,16.2 ,37.6 , 9.8 ],_
+ [4.8 , 8.6 , 9.8 ,17.1 ]]
+R
+R
+R (6)
+R [[0.5,1.5,6.5999999999999996,4.7999999999999998],
+R [1.5,6.5,16.199999999999999,8.5999999999999996],
+R
+R [6.5999999999999996, 16.199999999999999, 37.599999999999994,
+R 9.8000000000000007]
+R ,
+R
+R [4.7999999999999998, 8.5999999999999996, 9.8000000000000007,
+R  17.099999999999998]
+R ]
+R Type: Matrix(DoubleFloat)
+E 20
 In case (i) (general real or complex A), the selected form of T
 is an upper Hessenberg matrix (t(ij)=0 if i-j>1) and S is the
 product of n-2 stabilised elementary transformation matrices.
 There is no easy method of computing selected eigenvalues of a
 Hessenberg matrix, so that all eigenvalues are always calculated.
 In the real case this computation is performed via the Francis QR
 algorithm with double shifts, and in the complex case by means of
 the LR algorithm. If the eigenvectors are required they are
 computed by back-substitution following the QR and LR algorithms.
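The Householder-style reduction to Hessenberg form can be sketched in a few lines of NumPy. This is an illustration of the idea only, not the stabilised elementary transformations the text describes; the symmetric test matrix is a hypothetical choice (for symmetric input the Hessenberg form is in fact tridiagonal, as case (ii) notes).

```python
import numpy as np

def to_hessenberg(A):
    """Reduce A to upper Hessenberg form by Householder similarity
    transformations (illustrative sketch, not the NAG code)."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    for k in range(n - 2):
        v = H[k+1:, k].copy()
        norm = np.linalg.norm(v)
        if norm == 0.0:
            continue
        v[0] += np.copysign(norm, v[0])
        v /= np.linalg.norm(v)
        # apply P = I - 2vv^T on both sides: a similarity transformation
        H[k+1:, k:] -= 2.0 * np.outer(v, v @ H[k+1:, k:])
        H[:, k+1:] -= 2.0 * np.outer(H[:, k+1:] @ v, v)
    return H

A = np.array([[4., 1., 2., 3.],
              [1., 3., 0., 1.],
              [2., 0., 5., 2.],
              [3., 1., 2., 6.]])
H = to_hessenberg(A)
assert np.allclose(np.tril(H, -2), 0.0, atol=1e-12)   # h(ij)=0 for i-j>1
assert np.allclose(np.sort(np.linalg.eigvals(A).real),
                   np.sort(np.linalg.eigvals(H).real))  # eigenvalues preserved
```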
+S 21 of 164
+b:Matrix SF:=
+ [[1 , 3 , 4 , 1 ],_
+ [3 ,13 ,16 ,11 ],_
+ [4 ,16 ,24 ,18 ],_
+ [1 ,11 ,18 ,27 ]]
+R
+R
+R +1. 3. 4. 1. +
+R  
+R 3. 13. 16. 11.
+R (7)  
+R 4. 16. 24. 18.
+R  
+R +1. 11. 18. 27.+
+R Type: Matrix(DoubleFloat)
+E 21
 In case (ii) (real and symmetric A) the selected simple form of T
 is a tridiagonal matrix (t(ij)=0 if |i-j|>1), and S is the product
 of n-2 orthogonal Householder transformation matrices.
 selected eigenvalues are required, they are obtained by the
 method of bisection using the Sturm sequence property, and the
 corresponding eigenvectors of T are computed by inverse
 iteration. If all eigenvalues are required, they are computed
 from T via the QL algorithm (an adaptation of the QR algorithm),
 and the corresponding eigenvectors of T are the product of the
 transformations for the QL reduction. In all cases the
 corresponding eigenvectors of A are recovered from the
 computation of x=Sy.
+S 22 of 164
+ result:=f02adf(ia,ib,n,a,b,1)
+E 22
 In case (iii) (complex Hermitian A) transformations analogous to
 those in case (ii) are used. T has complex elements in off-diagonal
 positions, but a simple diagonal similarity transformation is
 then used to produce a real tridiagonal form, after which the QL
 algorithm and succeeding methods described in the previous
 paragraph are used to complete the solution.
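A quick numerical illustration of the Hermitian case (NumPy's eigvalsh likewise works through a real tridiagonal form; the 2 by 2 matrix below is a hypothetical example chosen so the eigenvalues are exactly 1 and 4):

```python
import numpy as np

A = np.array([[2.0 + 0j, 1.0 - 1j],
              [1.0 + 1j, 3.0 + 0j]])   # equal to its conjugate transpose
assert np.allclose(A, A.conj().T)

w = np.linalg.eigvalsh(A)              # real eigenvalues, ascending order
assert np.allclose(w, [1.0, 4.0])      # trace = 5, det = 6 - |1-i|^2 = 4
```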
+)clear all
 2.1.2. Generalized eigenvalue problems
+S 23 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 23
 Here we distinguish as a special case those problems in which
 both A and B are symmetric and B is positive-definite and
 well-conditioned with respect to inversion (i.e., all the
 eigenvalues of B are significantly greater than zero). Such
 problems can be satisfactorily treated by first reducing them to
 case (ii) of Section 2.1.1 and then using the methods described
 there to compute the eigensolutions. If B is factorized as LL^T
 (L lower triangular), then Ax=(lambda)Bx is equivalent to the
 standard symmetric problem Ry=(lambda)y, where
 R=L^(-1)A(L^(-1))^T and y=L^T x. After finding an eigenvector y
 of R, the required x is computed by back-substitution in y=L^T x.
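The algebra of this reduction can be sketched with NumPy (an illustration only, not the NAG routine; A and B are arbitrary test matrices, with B constructed to be symmetric positive-definite):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                      # symmetric A
N = rng.standard_normal((4, 4))
B = N @ N.T + 4.0 * np.eye(4)          # symmetric positive-definite B

L = np.linalg.cholesky(B)              # B = L L^T
Linv = np.linalg.inv(L)
R = Linv @ A @ Linv.T                  # R = L^(-1) A (L^(-1))^T, symmetric
lam, Y = np.linalg.eigh(R)             # standard problem Ry = lambda y

X = np.linalg.solve(L.T, Y)            # back-substitution in y = L^T x
for j in range(4):
    assert np.allclose(A @ X[:, j], lam[j] * (B @ X[:, j]))

# the NAG-style normalisation x^T B x = 1 (Section 3.2)
x0 = X[:, 0] / np.sqrt(X[:, 0] @ B @ X[:, 0])
assert np.isclose(x0 @ B @ x0, 1.0)
```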
+S 24 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 24
 For generalized problems of the form Ax=(lambda)Bx which do not
 fall into the special case, the QZ algorithm is provided.
+S 25 of 164
+ia:=4
+R
+R
+R (3) 4
+R Type: PositiveInteger
+E 25
 In order to appreciate the domain in which this algorithm is
 appropriate we remark first that when B is non-singular the
 problem Ax=(lambda)Bx is fully equivalent to the problem
 (B^(-1)A)x=(lambda)x; both the eigenvalues and eigenvectors being
 the same. When A is non-singular Ax=(lambda)Bx is equivalent to
 the problem (A^(-1)B)x=(mu)x; the eigenvalues (mu) being the
 reciprocals of the required eigenvalues and the eigenvectors
 remaining the same. In theory then, provided at least one of the
 matrices A and B is non-singular, the generalized problem
 Ax=(lambda)Bx could be solved via the standard problem
 Cx=(lambda)x with an appropriate matrix C, and as far as economy
 of effort is concerned this is quite satisfactory. However, in
 practice, for this reduction to be satisfactory from the
 standpoint of numerical stability, one requires more than the
 mere non-singularity of A or B. It is necessary that B^(-1)A (or
 A^(-1)B) should not only exist but that B (or A) should be
 well-conditioned with respect to inversion. The nearer B (or A)
 is to singularity the more unsatisfactory B^(-1)A (or A^(-1)B)
 will be as a vehicle for determining the required eigenvalues.
 Unfortunately one cannot counter ill-conditioning in B (or A) by
 computing B^(-1)A (or A^(-1)B) accurately to single precision
 using iterative refinement. Well-determined eigenvalues of the
 original Ax=(lambda)Bx may be poorly determined even by the
 correctly rounded version of B^(-1)A (or A^(-1)B). The situation
 may in some instances be saved by the observation that if
 Ax=(lambda)Bx then (A-kB)x=((lambda)-k)Bx. Hence if A-kB is
 non-singular we may solve the standard problem
 [(A-kB)^(-1)B]x=(mu)x and for numerical stability we require only
 that (A-kB) be well-conditioned with respect to inversion.

 In practice one may well be in a situation where no k is known
 for which (A-kB) is well-conditioned with respect to inversion
 and indeed (A-kB) may be singular for all k. The QZ algorithm is
 designed to deal directly with the problem Ax=(lambda)Bx itself
 and its performance is unaffected by singularity or
 near-singularity of A, B or A-kB.
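The shift identity behind this discussion is easy to verify numerically. In the NumPy sketch below, A and B are small hypothetical matrices, and k=0.5 is a shift for which A-kB happens to be well-conditioned:

```python
import numpy as np

# if Ax = (lambda)Bx, then [(A-kB)^(-1) B]x = (mu)x with mu = 1/(lambda-k)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
k = 0.5

mu, X = np.linalg.eig(np.linalg.solve(A - k * B, B))
lam = k + 1.0 / mu                  # recover the generalized eigenvalues

for j in range(2):
    assert np.allclose(A @ X[:, j], lam[j] * (B @ X[:, j]))
```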

 2.1.3. Sparse symmetric problems
+S 26 of 164
+ib:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 26
 If the matrices A and B are large and sparse (i.e., only a small
 proportion of the elements are non-zero), then the methods
 described in the previous Section are unsuitable, because in
 reducing the problem to a simpler form, much of the sparsity of
 the problem would be lost; hence the computing time and the
 storage required would be very large. Instead, for symmetric
 problems, the method of simultaneous iteration may be used to
 determine selected eigenvalues and the corresponding
 eigenvectors. The routine provided has been designed to handle
 both symmetric and generalized symmetric problems.
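The flavour of simultaneous (subspace) iteration can be conveyed by the textbook version below; this is only a toy NumPy sketch on a diagonal matrix, not the algorithm actually implemented in the NAG routine:

```python
import numpy as np

def simultaneous_iteration(A, p, iters=200, seed=0):
    """Textbook orthogonal iteration: approximates the p eigenvalues of
    largest magnitude of a symmetric matrix A (illustrative only)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], p)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)     # multiply, then re-orthogonalise
    # eigenvalues of the Rayleigh quotient approximate those of A
    return np.linalg.eigvalsh(Q.T @ A @ Q)

A = np.diag([10.0, 5.0, 2.0, 1.0, 0.5])   # diagonal toy "sparse" example
approx = simultaneous_iteration(A, 2)
assert np.allclose(np.sort(approx), [5.0, 10.0], atol=1e-8)
```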
+S 27 of 164
+n:=4
+R
+R
+R (5) 4
+R Type: PositiveInteger
+E 27
 2.2. Singular Value Problems
+S 28 of 164
+iv:=4
+R
+R
+R (6) 4
+R Type: PositiveInteger
+E 28
 The singular value decomposition of an m by n real matrix A is
 given by
+S 29 of 164
+a:Matrix SF:=
+ [[0.5 , 1.5 , 6.6 , 4.8 ],_
+ [1.5 , 6.5 ,16.2 , 8.6 ],_
+ [6.6 ,16.2 ,37.6 , 9.8 ],_
+ [4.8 , 8.6 , 9.8 ,17.1 ]]
+R
+R
+R (7)
+R [[0.5,1.5,6.5999999999999996,4.7999999999999998],
+R [1.5,6.5,16.199999999999999,8.5999999999999996],
+R
+R [6.5999999999999996, 16.199999999999999, 37.599999999999994,
+R 9.8000000000000007]
+R ,
+R
+R [4.7999999999999998, 8.5999999999999996, 9.8000000000000007,
+R  17.099999999999998]
+R ]
+R Type: Matrix(DoubleFloat)
+E 29
 A=QDP^T,
+S 30 of 164
+b:Matrix SF:=
+ [[1 , 3 , 4 , 1 ],_
+ [3 ,13 ,16 ,11 ],_
+ [4 ,16 ,24 ,18 ],_
+ [1 ,11 ,18 ,27 ]]
+R
+R
+R +1. 3. 4. 1. +
+R  
+R 3. 13. 16. 11.
+R (8)  
+R 4. 16. 24. 18.
+R  
+R +1. 11. 18. 27.+
+R Type: Matrix(DoubleFloat)
+E 30
 where Q is an m by m orthogonal matrix, P is an n by n orthogonal
 matrix and D is an m by n diagonal matrix with non-negative
 diagonal elements. The first k=min(m,n) columns of Q and P are
 the left- and right-hand singular vectors of A and the k diagonal
 elements of D are the singular values.
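In NumPy notation (an independent illustration, not the NAG routines) the factorization and the properties of D read:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))          # m = 5, n = 3, k = min(m,n) = 3

# "thin" factors: Q is m by k, s holds the k singular values, PT = P^T
Q, s, PT = np.linalg.svd(A, full_matrices=False)

assert np.allclose(Q @ np.diag(s) @ PT, A)          # A = Q D P^T
assert np.all(s >= 0) and np.all(s[:-1] >= s[1:])   # non-negative, descending
```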
+S 31 of 164
+ result:=f02aef(ia,ib,n,iv,a,b,1)
+E 31
 When A is complex then the singular value decomposition is given
 by
+)clear all
 A=QDP^H,
+S 32 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 32
 where Q and P are unitary, P^H denotes the complex conjugate of
 P^T, and D is as above for the real case.
+S 33 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 33
 If the matrix A has column means of zero, then AP is the matrix
 of principal components of A and the singular values are the
 square roots of the sample variances of the observations with
 respect to the principal components. (See also Chapter G03.)
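The principal-component connection can be illustrated as follows (hypothetical random data; each column of AP has sum of squares equal to the corresponding squared singular value, so the variance statement holds up to the sample-size normalisation):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 4))
A = A - A.mean(axis=0)                 # make the column means zero

U, s, PT = np.linalg.svd(A, full_matrices=False)
scores = A @ PT.T                      # AP: the principal components of A

# columns of AP have sums of squares equal to s(i)^2
assert np.allclose((scores ** 2).sum(axis=0), s ** 2)
```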
+S 34 of 164
+ia:=4
+R
+R
+R (3) 4
+R Type: PositiveInteger
+E 34
 Routines are provided to return the singular values and vectors
 of a general real or complex matrix.
+S 35 of 164
+n:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 35
 3. Recommendations on Choice and Use of Routines
+S 36 of 164
+a:Matrix SF:=
+ [[ 1.5 ,0.1 , 4.5 ,1.5 ],_
+ [22.5 ,3.5 ,12.5 ,2.5 ],_
+ [ 2.5 ,0.3 , 4.5 ,2.5 ],_
+ [ 2.5 ,0.1 , 4.5 , 2.5 ]]
+R
+R
+R + 1.5 0.10000000000000001 4.5  1.5+
+R  
+R  22.5 3.5 12.5  2.5
+R (5)  
+R  2.5 0.29999999999999999 4.5  2.5
+R  
+R + 2.5 0.10000000000000001 4.5 2.5 +
+R Type: Matrix(DoubleFloat)
+E 36
 3.1. General Discussion
+S 37 of 164
+ result:=f02aff(ia,n,a,1)
+E 37
 There is one routine, F02FJF, which is designed for sparse
 symmetric eigenvalue problems, either standard or generalized.
 The remainder of the routines are designed for dense matrices.
+)clear all
 3.2. Eigenvalue and Eigenvector Routines
+S 38 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 38
 These reduce the matrix A to a simpler form by a similarity
 transformation S^(-1)AS=T where T is an upper Hessenberg or
 tridiagonal matrix, compute the eigensolutions of T, and then
 recover the eigenvectors of A via the matrix S. The eigenvectors
 are normalised so that
+S 39 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 39
    sum(r=1..n) |x(r)|^2 = 1
+S 40 of 164
+ia:=4
+R
+R
+R (3) 4
+R Type: PositiveInteger
+E 40
 x(r) being the r-th component of the eigenvector x, and so that
 the element of largest modulus is real if x is complex. For
 problems of the type Ax=(lambda)Bx with A and B symmetric and B
 positive-definite, the eigenvectors are normalised so that
 x^T Bx=1, x always being real for such problems.
+S 41 of 164
+n:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 41
 3.3. Singular Value and Singular Vector Routines
+S 42 of 164
+ivr:=4
+R
+R
+R (5) 4
+R Type: PositiveInteger
+E 42
 These reduce the matrix A to real bidiagonal form, B say, by
 orthogonal transformations Q^T AP=B in the real case, and by
 unitary transformations Q^H AP=B in the complex case, and the
 singular values and vectors are computed via this bidiagonal
 form. The singular values are returned in descending order.
+S 43 of 164
+ivi:=4
+R
+R
+R (6) 4
+R Type: PositiveInteger
+E 43
 3.4. Decision Trees
+S 44 of 164
+a:Matrix SF:=
+ [[ 1.5 ,0.1 , 4.5 ,1.5 ],_
+ [22.5 ,3.5 ,12.5 ,2.5 ],_
+ [ 2.5 ,0.3 , 4.5 ,2.5 ],_
+ [ 2.5 ,0.1 , 4.5 , 2.5 ]]
+R
+R
+R + 1.5 0.10000000000000001 4.5  1.5+
+R  
+R  22.5 3.5 12.5  2.5
+R (7)  
+R  2.5 0.29999999999999999 4.5  2.5
+R  
+R + 2.5 0.10000000000000001 4.5 2.5 +
+R Type: Matrix(DoubleFloat)
+E 44
 (i) Eigenvalues and Eigenvectors
+S 45 of 164
+ result:=f02agf(ia,n,ivr,ivi,a,1)
+E 45
+)clear all
 Please see figure in printed Reference Manual
+S 46 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 46
+S 47 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 47
 (ii) Singular Values and Singular Vectors
+S 48 of 164
+iar:=4
+R
+R
+R (3) 4
+R Type: PositiveInteger
+E 48
+S 49 of 164
+iai:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 49
 Please see figure in printed Reference Manual
+S 50 of 164
+n:=4
+R
+R
+R (5) 4
+R Type: PositiveInteger
+E 50
 F02 -- Eigenvalues and Eigenvectors                    Contents -- F02
 Chapter F02
+S 51 of 164
+ar:Matrix SF:=
+ [[21.0 , 0.0 ,13.6 ,0.0 ],_
+ [ 0.0 ,26.0 , 7.5 ,2.5 ],_
+ [ 2.0 , 1.68, 4.5 ,1.5 ],_
+ [ 0.0 ,2.6 ,2.7 ,2.5 ]]
+R
+R
+R + 21. 0. 13.6 0. +
+R  
+R  0. 26. 7.5 2.5
+R (6)  
+R  2. 1.6799999999999999 4.5 1.5
+R  
+R + 0.  2.5999999999999996  2.6999999999999997 2.5+
+R Type: Matrix(DoubleFloat)
+E 51
 Eigenvalues and Eigenvectors
+S 52 of 164
+ai:Matrix SF:=
+ [[5.0 ,24.6 , 10.2 ,4.0 ],_
+ [22.5 ,5.0 , 10.0 ,0.0 ],_
+ [ 1.5 , 2.24 ,5.0 , 2.0 ],_
+ [2.5 , 0.0 , 3.6 ,5.0 ]]
+R
+R
+R + 5. 24.600000000000001 10.199999999999999 4. +
+R  
+R 22.5  5.  10. 0. 
+R (7)  
+R  1.5 2.2400000000000002  5. 2. 
+R  
+R + 2.5 0. 3.5999999999999996  5.+
+R Type: Matrix(DoubleFloat)
+E 52
 F02AAF All eigenvalues of real symmetric matrix
+S 53 of 164
+ result:=f02ajf(iar,iai,n,ar,ai,1)
+E 53
 F02ABF All eigenvalues and eigenvectors of real symmetric matrix
+)clear all
 F02ADF All eigenvalues of generalized real symmetricdefinite
 eigenproblem
+S 54 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 54
 F02AEF All eigenvalues and eigenvectors of generalized real
 symmetricdefinite eigenproblem
+S 55 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 55
 F02AFF All eigenvalues of real matrix
+S 56 of 164
+iar:=4
+R
+R
+R (3) 4
+R Type: PositiveInteger
+E 56
 F02AGF All eigenvalues and eigenvectors of real matrix
+S 57 of 164
+iai:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 57
 F02AJF All eigenvalues of complex matrix
+S 58 of 164
+n:=4
+R
+R
+R (5) 4
+R Type: PositiveInteger
+E 58
 F02AKF All eigenvalues and eigenvectors of complex matrix
+S 59 of 164
+ivr:=4
+R
+R
+R (6) 4
+R Type: PositiveInteger
+E 59
 F02AWF All eigenvalues of complex Hermitian matrix
+S 60 of 164
+ivi:=4
+R
+R
+R (7) 4
+R Type: PositiveInteger
+E 60
 F02AXF All eigenvalues and eigenvectors of complex Hermitian
 matrix
+S 61 of 164
+ar:Matrix SF:=
+ [[21.0 , 0.0 ,13.6 ,0.0 ],_
+ [ 0.0 ,26.0 , 7.5 ,2.5 ],_
+ [ 2.0 , 1.68 ,4.5 ,1.5 ],_
+ [ 0.0 ,2.6 ,2.7 ,2.5 ]]
+R
+R
+R + 21. 0. 13.6 0. +
+R  
+R  0. 26. 7.5 2.5
+R (8)  
+R  2. 1.6799999999999999 4.5 1.5
+R  
+R + 0.  2.5999999999999996  2.6999999999999997 2.5+
+R Type: Matrix(DoubleFloat)
+E 61
 F02BBF Selected eigenvalues and eigenvectors of real symmetric
 matrix
+S 62 of 164
+ai:Matrix SF:=
+ [[5.0 ,24.6 ,10.2 , 4.0 ],_
+ [22.5 ,5.0 ,10.0 , 0.0 ],_
+ [ 1.5 , 2.24 ,5.0 , 2.0 ],_
+ [2.5 , 0.0 , 3.6 ,5.0 ]]
+R
+R
+R + 5. 24.600000000000001 10.199999999999999 4. +
+R  
+R 22.5  5.  10. 0. 
+R (9)  
+R  1.5 2.2400000000000002  5. 2. 
+R  
+R + 2.5 0. 3.5999999999999996  5.+
+R Type: Matrix(DoubleFloat)
+E 62
 F02BJF All eigenvalues and optionally eigenvectors of
 generalized eigenproblem by QZ algorithm, real matrices
+S 63 of 164
+ result:=f02akf(iar,iai,n,ivr,ivi,ar,ai,1)
+E 63
 F02FJF Selected eigenvalues and eigenvectors of sparse symmetric
 eigenproblem
+)clear all
 F02WEF SVD of real matrix
+S 64 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 64
 F02XEF SVD of complex matrix
+S 65 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 65
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+S 66 of 164
+iar:=4
+R
+R
+R (3) 4
+R Type: PositiveInteger
+E 66
 F02 -- Eigenvalues and Eigenvectors                              F02AAF
 F02AAF -- NAG Foundation Library Routine Document
+S 67 of 164
+iai:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 67
 Note: Before using this routine, please read the Users' Note for
 your implementation to check implementationdependent details.
 The symbol (*) after a NAG routine name denotes a routine that is
 not included in the Foundation Library.
+S 68 of 164
+n:=4
+R
+R
+R (5) 4
+R Type: PositiveInteger
+E 68
 1. Purpose
+S 69 of 164
+ar:Matrix SF:=
+ [[0.5 , 0.0 , 1.84 , 2.08 ],_
+ [0.0 , 0.5 , 1.12 ,0.56 ],_
+ [1.84 , 1.12 ,0.5 , 0.0 ],_
+ [2.08 ,0.56 ,0.0 , 0.5 ]]
+R
+R
+R (6)
+R [[0.5,0.,1.8399999999999999,2.0800000000000001],
+R [0.,0.5,1.1200000000000001, 0.55999999999999994],
+R [1.8399999999999999,1.1200000000000001,0.5,0.],
+R [2.0800000000000001, 0.55999999999999994,0.,0.5]]
+R Type: Matrix(DoubleFloat)
+E 69
 F02AAF calculates all the eigenvalues of a real symmetric matrix.
+S 70 of 164
+ai:Matrix SF:=
+ [[ 0.0 , 0.0 , 1.38 ,1.56 ],_
+ [ 0.0 , 0.0 , 0.84 , 0.42 ],_
+ [1.38 ,0.84 ,0.0 , 0.0 ],_
+ [ 1.56 ,0.42 ,0.0 , 0.0 ]]
+R
+R
+R (7)
+R [[0.,0.,1.3799999999999999, 1.5599999999999998],
+R [0.,0.,0.83999999999999997,0.41999999999999998],
+R [ 1.3799999999999999, 0.84000000000000008,0.,0.],
+R [1.5600000000000001, 0.42000000000000004,0.,0.]]
+R Type: Matrix(DoubleFloat)
+E 70
 2. Specification
+S 71 of 164
+ result:=f02awf(iar,iai,n,ar,ai,1)
+E 71
 SUBROUTINE F02AAF (A, IA, N, R, E, IFAIL)
 INTEGER IA, N, IFAIL
 DOUBLE PRECISION A(IA,N), R(N), E(N)
+)clear all
 3. Description
+S 72 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 72
 This routine reduces the real symmetric matrix A to a real
 symmetric tridiagonal matrix using Householder's method. The
 eigenvalues of the tridiagonal matrix are then determined using
 the QL algorithm.
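F02AAF itself is Fortran, but the same computation can be cross-checked with NumPy's symmetric eigensolver, which follows the same overall plan (reduction to tridiagonal form, then a QR/QL-type iteration) and, like F02AAF, returns the eigenvalues in ascending order. The tridiagonal test matrix below is illustrative, with known eigenvalues 2+2cos(k pi/5):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])   # already tridiagonal, known spectrum

ev = np.linalg.eigvalsh(A)             # ascending, like F02AAF's R array
assert np.all(ev[:-1] <= ev[1:])

expected = 2.0 + 2.0 * np.cos(np.arange(1, 5) * np.pi / 5)
assert np.allclose(ev, np.sort(expected))
```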
+S 73 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 73
 4. References
+S 74 of 164
+ar:Matrix SF:=
+ [[0.5 ,0.0 ,1.84 ,2.08 ],_
+ [0.0 ,0.5 ,1.12 ,0.56 ],_
+ [1.84 ,1.12 ,0.5 ,0.0 ],_
+ [2.08 ,0.56 ,0.0 ,0.5 ]]
+R
+R
+R (3)
+R [[0.5,0.,1.8399999999999999,2.0800000000000001],
+R [0.,0.5,1.1200000000000001, 0.55999999999999994],
+R [1.8399999999999999,1.1200000000000001,0.5,0.],
+R [2.0800000000000001, 0.55999999999999994,0.,0.5]]
+R Type: Matrix(DoubleFloat)
+E 74
 [1] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
 Computation II, Linear Algebra. Springer-Verlag.
+S 75 of 164
+iar:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 75
 5. Parameters
+S 76 of 164
+ai:Matrix SF:=
+ [[0.0 ,0.0 ,1.38 ,1.56 ],_
+ [0.0 ,0.0 ,0.84 ,0.42 ],_
+ [1.38 ,0.84 ,0.0 ,0.0 ],_
+ [1.56 ,0.42 ,0.0 ,0.0 ]]
+R
+R
+R (5)
+R [[0.,0.,1.3799999999999999, 1.5599999999999998],
+R [0.,0.,0.83999999999999997,0.41999999999999998],
+R [ 1.3799999999999999, 0.84000000000000008,0.,0.],
+R [1.5600000000000001, 0.42000000000000004,0.,0.]]
+R Type: Matrix(DoubleFloat)
+E 76
 1: A(IA,N)  DOUBLE PRECISION array Input/Output
 On entry: the lower triangle of the n by n symmetric matrix
 A. The elements of the array above the diagonal need not be
 set. On exit: the elements of A below the diagonal are
 overwritten, and the rest of the array is unchanged.
+S 77 of 164
+iai:=4
+R
+R
+R (6) 4
+R Type: PositiveInteger
+E 77
 2: IA  INTEGER Input
 On entry:
 the first dimension of the array A as declared in the
 (sub)program from which F02AAF is called.
 Constraint: IA >= N.
+S 78 of 164
+n:=4
+R
+R
+R (7) 4
+R Type: PositiveInteger
+E 78
 3: N  INTEGER Input
 On entry: n, the order of the matrix A.
+S 79 of 164
+ivr:=4
+R
+R
+R (8) 4
+R Type: PositiveInteger
+E 79
 4: R(N)  DOUBLE PRECISION array Output
 On exit: the eigenvalues in ascending order.
+S 80 of 164
+ivi:=4
+R
+R
+R (9) 4
+R Type: PositiveInteger
+E 80
 5: E(N)  DOUBLE PRECISION array Workspace
+S 81 of 164
+ result:=f02axf(ar,iar,ai,iai,n,ivr,ivi,1)
+E 81
 6: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, -1 or 1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+)clear all
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+S 82 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 82
 6. Error Indicators and Warnings
+S 83 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 83
 Errors detected by the routine:
+S 84 of 164
+ia:=4
+R
+R
+R (3) 4
+R Type: PositiveInteger
+E 84
 IFAIL= 1
 Failure in F02AVF(*) indicating that more than 30*N
 iterations are required to isolate all the eigenvalues.
+S 85 of 164
+n:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 85
 7. Accuracy
+S 86 of 164
+alb:=-2.0
+R
+R
+R (5) - 2.0
+R Type: Float
+E 86
 The accuracy of the eigenvalues depends on the sensitivity of the
 matrix to rounding errors produced in tridiagonalisation. For a
 detailed error analysis see Wilkinson and Reinsch [1] pp 222 and
 235.
+S 87 of 164
+ub:=3.0
+R
+R
+R (6) 3.0
+R Type: Float
+E 87
 8. Further Comments
+S 88 of 164
+m:=3
+R
+R
+R (7) 3
+R Type: PositiveInteger
+E 88
 The time taken by the routine is approximately proportional to n^3.
+S 89 of 164
+iv:=4
+R
+R
+R (8) 4
+R Type: PositiveInteger
+E 89
 9. Example
+S 90 of 164
+a:Matrix SF:=
+ [[0.5 ,0.0 ,2.3 ,2.6 ],_
+ [0.0 ,0.5 ,1.4 ,0.7 ],_
+ [2.3 ,1.4 ,0.5 ,0.0 ],_
+ [2.6 ,0.7 ,0.0 ,0.5 ]]
+R
+R
+R (9)
+R [[0.5,0.,2.2999999999999998, 2.5999999999999996],
+R [0.,0.5, 1.3999999999999999, 0.69999999999999996],
+R [2.2999999999999998, 1.3999999999999999,0.5,0.],
+R [ 2.5999999999999996, 0.69999999999999996,0.,0.5]]
+R Type: Matrix(DoubleFloat)
+E 90
 To calculate all the eigenvalues of the real symmetric matrix:
+S 91 of 164
+ result:=f02bbf(ia,n,alb,ub,m,iv,a,1)
+E 91
 ( 0.5  0.0  2.3 -2.6)
 ( 0.0  0.5 -1.4 -0.7)
 ( 2.3 -1.4  0.5  0.0).
 (-2.6 -0.7  0.0  0.5)
+)clear all
 The example program is not reproduced here. The source code for
 all example programs is distributed with the NAG Foundation
 Library software and should be available online.
+S 92 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 92
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+S 93 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 93
 F02 -- Eigenvalues and Eigenvectors                              F02ABF
 F02ABF -- NAG Foundation Library Routine Document

 Note: Before using this routine, please read the Users' Note for
 your implementation to check implementationdependent details.
 The symbol (*) after a NAG routine name denotes a routine that is
 not included in the Foundation Library.
+S 94 of 164
+n:=4
+R
+R
+R (3) 4
+R Type: PositiveInteger
+E 94
 1. Purpose
+S 95 of 164
+ia:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 95
 F02ABF calculates all the eigenvalues and eigenvectors of a real
 symmetric matrix.
+S 96 of 164
+ib:=4
+R
+R
+R (5) 4
+R Type: PositiveInteger
+E 96
 2. Specification
+S 97 of 164
+eps1:SF:=1.0e-4
+R
+R
+R (6) 9.9999999999999991E-5
+R Type: DoubleFloat
+E 97
 SUBROUTINE F02ABF (A, IA, N, R, V, IV, E, IFAIL)
 INTEGER IA, N, IV, IFAIL
 DOUBLE PRECISION A(IA,N), R(N), V(IV,N), E(N)
+S 98 of 164
+matv:=true
+R
+R
+R (7) true
+R Type: Boolean
+E 98
 3. Description
+S 99 of 164
+iv:=4
+R
+R
+R (8) 4
+R Type: PositiveInteger
+E 99
 This routine reduces the real symmetric matrix A to a real
 symmetric tridiagonal matrix by Householder's method. The
 eigenvalues and eigenvectors are calculated using the QL
 algorithm.
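As a point of comparison (not the NAG code), NumPy's eigh returns the same objects F02ABF documents: eigenvectors stored by columns, each normalised so its sum of squares is 1, and accurately orthogonal. The symmetric matrix here is an illustrative choice:

```python
import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])        # an illustrative symmetric matrix

w, V = np.linalg.eigh(A)               # i-th column matches i-th eigenvalue
assert np.allclose((V ** 2).sum(axis=0), 1.0)   # unit-norm columns
assert np.allclose(V.T @ V, np.eye(3))          # orthogonal eigenvectors
assert np.allclose(A @ V, V @ np.diag(w))
```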
+S 100 of 164
+a:Matrix SF:=
+ [[3.9 ,12.5 ,34.5 ,0.5 ],_
+ [4.3 ,21.5 ,47.5 ,7.5 ],_
+ [4.3 ,21.5 ,43.5 ,3.5 ],_
+ [4.4 ,26.0 ,46.0 ,6.0 ]]
+R
+R
+R +3.8999999999999999 12.5  34.5  0.5+
+R  
+R 4.2999999999999998 21.5  47.5 7.5 
+R (9)  
+R 4.2999999999999998 21.5  43.5 3.5 
+R  
+R +4.4000000000000004 26.  46. 6. +
+R Type: Matrix(DoubleFloat)
+E 100
 4. References
+S 101 of 164
+b:Matrix SF:=
+ [[1 ,2 ,3 ,1 ],_
+ [1 ,3 ,5 ,4 ],_
+ [1 ,3 ,4 ,3 ],_
+ [1 ,3 ,4 ,4 ]]
+R
+R
+R +1. 2.  3. 1.+
+R  
+R 1. 3.  5. 4.
+R (10)  
+R 1. 3.  4. 3.
+R  
+R +1. 3.  4. 4.+
+R Type: Matrix(DoubleFloat)
+E 101
 [1] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
 Computation II, Linear Algebra. Springer-Verlag.
+S 102 of 164
+ result:=f02bjf(n,ia,ib,eps1,matv,iv,a,b,1)
+E 102
 5. Parameters
+)clear all
 1: A(IA,N)  DOUBLE PRECISION array Input
 On entry: the lower triangle of the n by n symmetric matrix
 A. The elements of the array above the diagonal need not be
 set. See also Section 8.
+S 103 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 103
 2: IA  INTEGER Input
 On entry:
 the first dimension of the array A as declared in the
 (sub)program from which F02ABF is called.
 Constraint: IA >= N.
+S 104 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 104
 3: N  INTEGER Input
 On entry: n, the order of the matrix A.
+S 105 of 164
+n : Integer := 16;
+R
+R
+R Type: Integer
+E 105
 4: R(N)  DOUBLE PRECISION array Output
 On exit: the eigenvalues in ascending order.
+S 106 of 164
+k : Integer := 6
+R
+R
+R (4) 6
+R Type: Integer
+E 106
 5: V(IV,N)  DOUBLE PRECISION array Output
 On exit: the normalised eigenvectors, stored by columns;
 the ith column corresponds to the ith eigenvalue. The
 eigenvectors are normalised so that the sum of squares of
 the elements is equal to 1.
+S 107 of 164
+tol : DoubleFloat := 0.0001
+R
+R
+R (5) 9.9999999999999991E-5
+R Type: DoubleFloat
+E 107
 6: IV  INTEGER Input
 On entry:
 the first dimension of the array V as declared in the
 (sub)program from which F02ABF is called.
 Constraint: IV >= N.
+S 108 of 164
+novecs : Integer := 0
+R
+R
+R (6) 0
+R Type: Integer
+E 108
 7: E(N)  DOUBLE PRECISION array Workspace
+S 109 of 164
+nrx : Integer := n
+R
+R
+R (7) 16
+R Type: Integer
+E 109
 8: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, -1 or 1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+S 110 of 164
+lwork : Integer := 86
+R
+R
+R (8) 86
+R Type: Integer
+E 110
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+S 111 of 164
+lrwork : Integer := 1;
+R
+R
+R Type: Integer
+E 111
 6. Error Indicators and Warnings
+S 112 of 164
+liwork : Integer := 1;
+R
+R
+R Type: Integer
+E 112
 Errors detected by the routine:
+S 113 of 164
+noits : Integer := 1000
+R
+R
+R (11) 1000
+R Type: Integer
+E 113
 IFAIL= 1
 Failure in F02AMF(*) indicating that more than 30*N
 iterations are required to isolate all the eigenvalues.
+S 114 of 164
+m : Integer := 4;
+R
+R
+R Type: Integer
+E 114
 7. Accuracy
+S 115 of 164
+x :Matrix SF:=new(nrx,k,0.0);
+R
+R Compiling function G1781 with type Integer > Boolean
+R
+R Type: Matrix(DoubleFloat)
+E 115
 The eigenvectors are always accurately orthogonal but the
 accuracy of the individual eigenvectors is dependent on their
 inherent sensitivity to changes in the original matrix. For a
 detailed error analysis see Wilkinson and Reinsch [1] pp 222 and
 235.
+S 116 of 164
+ifail : Integer := -1
+R
+R
+R (14) - 1
+R Type: Integer
+E 116
 8. Further Comments
+S 117 of 164
+a :Matrix FRAC INT:= new(n,n,0);
+R
+R
+R Type: Matrix(Fraction(Integer))
+E 117
 The time taken by the routine is approximately proportional to n^3.
+S 118 of 164
+a(1,1) := 1;
+R
+R
+R Type: Fraction(Integer)
+E 118
 Unless otherwise stated in the Users' Note for your
 implementation, the routine may be called with the same actual
 array supplied for parameters A and V, in which case the
 eigenvectors will overwrite the original matrix. However this is
 not standard Fortran 77, and may not work on all systems.
+S 119 of 164
+a(1,2) := -1/4;
+R
+R
+R Type: Fraction(Integer)
+E 119
 9. Example
+S 120 of 164
+a(1,5) := -1/4;
+R
+R
+R Type: Fraction(Integer)
+E 120
 To calculate all the eigenvalues and eigenvectors of the real
 symmetric matrix:
+S 121 of 164
+for i in 2..4 repeat
+ a(i,i-1) := -1/4
+ a(i,i) := 1
+ a(i,i+1) := -1/4
+ a(i,i+4) := -1/4
+R
+R Type: Void
+E 121
 ( 0.5  0.0  2.3 -2.6)
 ( 0.0  0.5 -1.4 -0.7)
 ( 2.3 -1.4  0.5  0.0).
 (-2.6 -0.7  0.0  0.5)
+S 122 of 164
+for i in 5..n-4 repeat
+ a(i,i-4) := -1/4
+ a(i,i-1) := -1/4
+ a(i,i) := 1
+ a(i,i+1) := -1/4
+ a(i,i+4) := -1/4
+R
+R Type: Void
+E 122
 The example program is not reproduced here. The source code for
 all example programs is distributed with the NAG Foundation
 Library software and should be available online.
+S 123 of 164
+for i in n-3..n-1 repeat
+ a(i,i-4) := -1/4
+ a(i,i-1) := -1/4
+ a(i,i) := 1
+ a(i,i+1) := -1/4
+R
+R Type: Void
+E 123
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+S 124 of 164
+a(16,16) := 1;
+R
+R
+R Type: Fraction(Integer)
+E 124
 F02  Eigenvalue and Eigenvectors F02ADF
 F02ADF  NAG Foundation Library Routine Document
+S 125 of 164
+a(16,15) := -1/4;
+R
+R
+R Type: Fraction(Integer)
+E 125
 Note: Before using this routine, please read the Users' Note for
 your implementation to check implementation-dependent details.
 The symbol (*) after a NAG routine name denotes a routine that is
 not included in the Foundation Library.
+S 126 of 164
+a(16,12) := -1/4;
+R
+R
+R Type: Fraction(Integer)
+E 126
 1. Purpose
+S 127 of 164
+b:Matrix FRAC INT:= new(n,n,0);
+R
+R
+R Type: Matrix(Fraction(Integer))
+E 127
 F02ADF calculates all the eigenvalues of Ax=(lambda)Bx, where A
 is a real symmetric matrix and B is a real symmetric positive-
 definite matrix.
+S 128 of 164
+b(1,1) := 1
+R
+R
+R (26) 1
+R Type: Fraction(Integer)
+E 128
 2. Specification
+S 129 of 164
+b(1,2) := -1/2
+R
+R
+R          1
+R   (27) - -
+R          2
+R Type: Fraction(Integer)
+E 129
 SUBROUTINE F02ADF (A, IA, B, IB, N, R, DE, IFAIL)
 INTEGER IA, IB, N, IFAIL
 DOUBLE PRECISION A(IA,N), B(IB,N), R(N), DE(N)
+S 130 of 164
+for i in 2..n-1 repeat
+ b(i,i-1) := -1/2
+ b(i,i) := 1
+ b(i,i+1) := -1/2
+R
+R Type: Void
+E 130
 3. Description
+S 131 of 164
+b(16,15) := -1/2
+R
+R
+R          1
+R   (29) - -
+R          2
+R Type: Fraction(Integer)
+E 131
 The problem is reduced to the standard symmetric eigenproblem
 using Cholesky's method to decompose B into triangular matrices,
 B=LL^T, where L is lower triangular. Then Ax=(lambda)Bx implies
 (L^(-1)AL^(-T))(L^Tx)=(lambda)(L^Tx); hence the eigenvalues of
 Ax=(lambda)Bx are those of Py=(lambda)y where P is the symmetric
 matrix L^(-1)AL^(-T). Householder's method is used to
 tridiagonalise the matrix P and the eigenvalues are then found
 using the QL algorithm.
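The reduction just described can be traced for a small example in a few lines of Python. This is an illustrative hand-rolled sketch with invented 2 by 2 data, not the F02ADF implementation (which works on general n by n Fortran arrays and uses the QL algorithm rather than the characteristic polynomial):

```python
import math

# Invented 2 by 2 generalized symmetric eigenproblem A x = lambda B x,
# with B symmetric positive-definite.
A = [[2.0, 1.0], [1.0, 3.0]]
B = [[4.0, 1.0], [1.0, 2.0]]

# Cholesky factorisation B = L L^T for a 2 by 2 matrix.
l11 = math.sqrt(B[0][0])
l21 = B[1][0] / l11
l22 = math.sqrt(B[1][1] - l21 * l21)

# Explicit inverse of the lower-triangular factor L.
Linv = [[1.0 / l11, 0.0],
        [-l21 / (l11 * l22), 1.0 / l22]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

# P = L^(-1) A L^(-T) is symmetric and has the eigenvalues of
# A x = lambda B x.
P = matmul(matmul(Linv, A), transpose(Linv))

# Eigenvalues of the symmetric 2 by 2 P from its characteristic polynomial.
tr = P[0][0] + P[1][1]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
eigenvalues = sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])

# Each eigenvalue should make det(A - lambda B) vanish.
for lam in eigenvalues:
    M = [[A[i][j] - lam * B[i][j] for j in range(2)] for i in range(2)]
    assert abs(M[0][0] * M[1][1] - M[0][1] * M[1][0]) < 1e-10
```

The final loop confirms the key fact used above: the eigenvalues of the reduced symmetric matrix P are exactly the generalized eigenvalues of the pair (A, B).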

 4. References

 [1] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
 Computation II, Linear Algebra. SpringerVerlag.

 5. Parameters

 1: A(IA,N)  DOUBLE PRECISION array Input/Output
 On entry: the upper triangle of the n by n symmetric matrix
 A. The elements of the array below the diagonal need not be
 set. On exit: the lower triangle of the array is
 overwritten. The rest of the array is unchanged.
+S 132 of 164
+b(16,16) := 1
+R
+R
+R (30) 1
+R Type: Fraction(Integer)
+E 132
 2: IA  INTEGER Input
 On entry:
 the first dimension of the array A as declared in the
 (sub)program from which F02ADF is called.
 Constraint: IA >= N.
+S 133 of 164
+c : Matrix MachineFloat := (inverse (a))*b;
+R
+R
+R Type: Matrix(MachineFloat)
+E 133
 3: B(IB,N)  DOUBLE PRECISION array Input/Output
 On entry: the upper triangle of the n by n symmetric
 positive-definite matrix B. The elements of the array below
 the diagonal need not be set. On exit: the elements below
 the diagonal are overwritten. The rest of the array is
 unchanged.
+S 134 of 164
+bb := b :: Matrix MachineFloat
+R
+R
+R (32)
+R [[1.0,- 0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],
+R [- 0.5,1.0,- 0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],
+R [0.0,- 0.5,1.0,- 0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],
+R [0.0,0.0,- 0.5,1.0,- 0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],
+R [0.0,0.0,0.0,- 0.5,1.0,- 0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],
+R [0.0,0.0,0.0,0.0,- 0.5,1.0,- 0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],
+R [0.0,0.0,0.0,0.0,0.0,- 0.5,1.0,- 0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],
+R [0.0,0.0,0.0,0.0,0.0,0.0,- 0.5,1.0,- 0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0],
+R [0.0,0.0,0.0,0.0,0.0,0.0,0.0,- 0.5,1.0,- 0.5,0.0,0.0,0.0,0.0,0.0,0.0],
+R [0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,- 0.5,1.0,- 0.5,0.0,0.0,0.0,0.0,0.0],
+R [0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,- 0.5,1.0,- 0.5,0.0,0.0,0.0,0.0],
+R [0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,- 0.5,1.0,- 0.5,0.0,0.0,0.0],
+R [0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,- 0.5,1.0,- 0.5,0.0,0.0],
+R [0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,- 0.5,1.0,- 0.5,0.0],
+R [0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,- 0.5,1.0,- 0.5],
+R [0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,- 0.5,1.0]]
+R Type: Matrix(MachineFloat)
+E 134
 4: IB  INTEGER Input
 On entry:
 the first dimension of the array B as declared in the
 (sub)program from which F02ADF is called.
 Constraint: IB >= N.
+S 135 of 164
+ result:=f02fjf(n,k,tol,novecs,nrx,lwork,lrwork,liwork,m,noits,_
+ x,ifail,bb :: ASP27('DOT),c :: ASP28('IMAGE))
+E 135
 5: N  INTEGER Input
 On entry: n, the order of the matrices A and B.
+)clear all
 6: R(N)  DOUBLE PRECISION array Output
 On exit: the eigenvalues in ascending order.
+S 136 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 136
 7: DE(N)  DOUBLE PRECISION array Workspace
+S 137 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 137
 8: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, -1 or 1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+S 138 of 164
+m := 5
+R
+R
+R (3) 5
+R Type: PositiveInteger
+E 138
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+S 139 of 164
+n := 3
+R
+R
+R (4) 3
+R Type: PositiveInteger
+E 139
 6. Error Indicators and Warnings
+S 140 of 164
+a : Matrix SF:=
+ [[ 2.0, 2.5, 2.5],_
+ [ 2.0, 2.5, 2.5],_
+ [ 1.6,-0.4, 2.8],_
+ [ 2.0,-0.5, 0.5],_
+ [ 1.2,-0.3,-2.9] ]
+R
+R
+R + 2. 2.5 2.5 +
+R  
+R  2. 2.5 2.5 
+R  
+R (5) 1.6000000000000001 - 0.39999999999999997 2.7999999999999998 
+R  
+R  2. - 0.5 0.5 
+R  
+R + 1.2 - 0.30000000000000004 - 2.9000000000000004+
+R Type: Matrix(DoubleFloat)
+E 140
 Errors detected by the routine:
+S 141 of 164
+lda := m
+R
+R
+R (6) 5
+R Type: PositiveInteger
+E 141
 IFAIL= 1
 Failure in F01AEF(*); the matrix B is not positive-definite,
 possibly due to rounding errors.
+S 142 of 164
+ncolb := 1
+R
+R
+R (7) 1
+R Type: PositiveInteger
+E 142
 IFAIL= 2
 Failure in F02AVF(*), more than 30*N iterations are required
 to isolate all the eigenvalues.
+S 143 of 164
+b : Matrix SF:= [[ 1.1, 0.9, 0.6, 0.0, 0.8 ]]
+R
+R
+R (8)
+R [
+R [1.1000000000000001, 0.89999999999999991, 0.59999999999999998, 0.,
+R  0.79999999999999993]
+R ]
+R Type: Matrix(DoubleFloat)
+E 143
 7. Accuracy
+S 144 of 164
+ldb := 5
+R
+R
+R (9) 5
+R Type: PositiveInteger
+E 144
 In general this routine is very accurate. However, if B is ill-
 conditioned with respect to inversion, the eigenvalues could be
 inaccurately determined. For a detailed error analysis see
 Wilkinson and Reinsch [1] pp 310, 222 and 235.
+S 145 of 164
+wantq := true
+R
+R
+R (10) true
+R Type: Boolean
+E 145
 8. Further Comments
+S 146 of 164
+wantp := true
+R
+R
+R (11) true
+R Type: Boolean
+E 146
 The time taken by the routine is approximately proportional to n^3.
+S 147 of 164
+ldq := 1
+R
+R
+R (12) 1
+R Type: PositiveInteger
+E 147
 9. Example
+S 148 of 164
+ldpt := n
+R
+R
+R (13) 3
+R Type: PositiveInteger
+E 148
 To calculate all the eigenvalues of the general symmetric
 eigenproblem Ax=(lambda) Bx where A is the symmetric matrix:
+S 149 of 164
+ifail := -1
+R
+R
+R (14) - 1
+R Type: Integer
+E 149
 (0.5 1.5 6.6 4.8)
 (1.5 6.5 16.2 8.6)
 (6.6 16.2 37.6 9.8)
 (4.8 8.6 9.8 17.1)
+S 150 of 164
+ result:=f02wef(m,n,lda,ncolb,ldb,wantq,ldq,wantp,ldpt,a,b,ifail)
+E 150
 and B is the symmetric positive-definite matrix:
+)clear all
+
+S 151 of 164
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 151
 (1 3 4 1)
 (3 13 16 11)
 (4 16 24 18).
 (1 11 18 27)
+S 152 of 164
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 152
 The example program is not reproduced here. The source code for
 all example programs is distributed with the NAG Foundation
 Library software and should be available online.
+S 153 of 164
+m:=5
+R
+R
+R (3) 5
+R Type: PositiveInteger
+E 153
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+S 154 of 164
+n:=3
+R
+R
+R (4) 3
+R Type: PositiveInteger
+E 154
 F02  Eigenvalue and Eigenvectors F02AEF
 F02AEF  NAG Foundation Library Routine Document
+S 155 of 164
+lda:=5
+R
+R
+R (5) 5
+R Type: PositiveInteger
+E 155
 Note: Before using this routine, please read the Users' Note for
 your implementation to check implementation-dependent details.
 The symbol (*) after a NAG routine name denotes a routine that is
 not included in the Foundation Library.
+S 156 of 164
+ncolb:=1
+R
+R
+R (6) 1
+R Type: PositiveInteger
+E 156
 1. Purpose
+S 157 of 164
+ldb:=5
+R
+R
+R (7) 5
+R Type: PositiveInteger
+E 157
 F02AEF calculates all the eigenvalues and eigenvectors of
 Ax=(lambda)Bx, where A is a real symmetric matrix and B is a
 real symmetric positive-definite matrix.
+S 158 of 164
+wantq:=true
+R
+R
+R (8) true
+R Type: Boolean
+E 158
 2. Specification
+S 159 of 164
+ldq:=5
+R
+R
+R (9) 5
+R Type: PositiveInteger
+E 159
 SUBROUTINE F02AEF (A, IA, B, IB, N, R, V, IV, DL, E, IFAIL)
 INTEGER IA, IB, N, IV, IFAIL
 DOUBLE PRECISION A(IA,N), B(IB,N), R(N), V(IV,N), DL(N), E
 1 (N)
+S 160 of 164
+wantp:=true
+R
+R
+R (10) true
+R Type: Boolean
+E 160
 3. Description
+S 161 of 164
+ldph:=3
+R
+R
+R (11) 3
+R Type: PositiveInteger
+E 161
 The problem is reduced to the standard symmetric eigenproblem
 using Cholesky's method to decompose B into triangular matrices
 B=LL^T, where L is lower triangular. Then Ax=(lambda)Bx implies
 (L^(-1)AL^(-T))(L^Tx)=(lambda)(L^Tx); hence the eigenvalues of
 Ax=(lambda)Bx are those of Py=(lambda)y, where P is the symmetric
 matrix L^(-1)AL^(-T). Householder's method is used to
 tridiagonalise the matrix P and the eigenvalues are found using
 the QL algorithm. An eigenvector z of the derived problem is
 related to an eigenvector x of the original problem by z=L^Tx.
 The eigenvectors z are determined using the QL algorithm and are
 normalised so that z^Tz=1; the eigenvectors of the original
 problem are then determined by solving L^Tx=z, and are normalised
 so that x^TBx=1.

 4. References

 [1] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
 Computation II, Linear Algebra. SpringerVerlag.

 5. Parameters

 1: A(IA,N)  DOUBLE PRECISION array Input/Output

 On entry: the upper triangle of the n by n symmetric matrix
 A. The elements of the array below the diagonal need not be
 set. On exit: the lower triangle of the array is
 overwritten. The rest of the array is unchanged. See also
 Section 8.

 2: IA  INTEGER Input
 On entry:
 the first dimension of the array A as declared in the
 (sub)program from which F02AEF is called.
 Constraint: IA >= N.

 3: B(IB,N)  DOUBLE PRECISION array Input/Output
 On entry: the upper triangle of the n by n symmetric
 positive-definite matrix B. The elements of the array below
 the diagonal need not be set. On exit: the elements below
 the diagonal are overwritten. The rest of the array is
 unchanged.

 4: IB  INTEGER Input
 On entry:
 the first dimension of the array B as declared in the
 (sub)program from which F02AEF is called.
 Constraint: IB >= N.

 5: N  INTEGER Input
 On entry: n, the order of the matrices A and B.

 6: R(N)  DOUBLE PRECISION array Output
 On exit: the eigenvalues in ascending order.

 7: V(IV,N)  DOUBLE PRECISION array Output
 On exit: the normalised eigenvectors, stored by columns;
 the ith column corresponds to the ith eigenvalue. The
 eigenvectors x are normalised so that x^TBx=1. See also
 Section 8.
+S 162 of 164
+a:Matrix Complex SF:=
+ [[0.5*%i ,0.5 + 1.5*%i ,1 + 1*%i ],_
+ [0.4 + 0.3*%i ,0.9 + 1.3*%i ,0.2 + 1.4*%i ],_
+ [0.4 ,-0.4 + 0.4*%i ,1.8 ],_
+ [0.3 - 0.4*%i ,0.1 + 0.7*%i ,0.0 ],_
+ [-0.3*%i ,0.3 + 0.3*%i ,2.4*%i ]]
+R
+R
+R (12)
+R [[0.5 %i, 0.5 + 1.5 %i, 1. + %i],
+R
+R [0.40000000000000002 + 0.29999999999999999 %i,
+R 0.89999999999999991 + 1.2999999999999998 %i,
+R 0.20000000000000001 + 1.3999999999999999 %i]
+R ,
+R
+R [0.40000000000000002,  0.39999999999999997 + 0.40000000000000002 %i,
+R 1.7999999999999998]
+R ,
+R
+R [0.29999999999999999  0.40000000000000002 %i,
+R 0.10000000000000001 + 0.69999999999999996 %i, 0.]
+R ,
+R
+R [ 0.29999999999999999 %i, 0.29999999999999999 + 0.29999999999999999 %i,
+R 2.3999999999999999 %i]
+R ]
+R Type: Matrix(Complex(DoubleFloat))
+E 162
 8: IV  INTEGER Input
 On entry:
 the first dimension of the array V as declared in the
 (sub)program from which F02AEF is called.
 Constraint: IV >= N.
+S 163 of 164
+b:Matrix Complex SF:=
+ [[0.55+1.05*%i ],_
+ [0.49+0.93*%i ],_
+ [0.56-0.16*%i ],_
+ [0.39+0.23*%i ],_
+ [1.13+0.83*%i ]]
+R
+R
+R + 0.54999999999999993 + 1.0499999999999998 %i+
+R  
+R 0.48999999999999999 + 0.92999999999999994 %i 
+R  
+R (13) 0.56000000000000005 - 0.15999999999999998 %i 
+R  
+R 0.39000000000000001 + 0.22999999999999998 %i 
+R  
+R + 1.1299999999999999 + 0.82999999999999996 %i +
+R Type: Matrix(Complex(DoubleFloat))
+E 163
 9: DL(N)  DOUBLE PRECISION array Workspace
+S 164 of 164
+ result:=f02xef(m,n,lda,ncolb,ldb,wantq,ldq,wantp,ldph,a,b,-1)
+E 164
 10: E(N)  DOUBLE PRECISION array Workspace
+)spool
+)lisp (bye)
+\end{chunk}
+\begin{chunk}{NagEigenPackage.help}
 11: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, -1 or 1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+This package uses the NAG Library to compute
+ * eigenvalues and eigenvectors of a matrix
+ * eigenvalues and eigenvectors of generalized matrix
+   eigenvalue problems
+ * singular values and singular vectors of a matrix.
 6. Error Indicators and Warnings
+ F02  Eigenvalues and Eigenvectors Introduction  F02
+ Chapter F02
+ Eigenvalues and Eigenvectors
 Errors detected by the routine:
+ 1. Scope of the Chapter
 IFAIL= 1
 Failure in F01AEF(*); the matrix B is not positive-definite,
 possibly due to rounding errors.
+ This chapter is concerned with computing
 IFAIL= 2
 Failure in F02AMF(*); more than 30*N iterations are required
 to isolate all the eigenvalues.
+ - eigenvalues and eigenvectors of a matrix
 7. Accuracy
+ - eigenvalues and eigenvectors of generalized matrix
+   eigenvalue problems
 In general this routine is very accurate. However, if B is ill-
 conditioned with respect to inversion, the eigenvectors could be
 inaccurately determined. For a detailed error analysis see
 Wilkinson and Reinsch [1] pp 310, 222 and 235.
+ - singular values and singular vectors of a matrix.
 8. Further Comments
+ 2. Background to the Problems
 The time taken by the routine is approximately proportional to n^3.
+ 2.1. Eigenvalue Problems
 Unless otherwise stated in the Users' Note for your
 implementation, the routine may be called with the same actual
 array supplied for parameters A and V, in which case the
 eigenvectors will overwrite the original matrix A. However this
 is not standard Fortran 77, and may not work on all systems.
+ In the most usual form of eigenvalue problem we are given a
+ square n by n matrix A and wish to compute (lambda) (an
+ eigenvalue) and x/=0 (an eigenvector) which satisfy the equation
 9. Example
+ Ax=(lambda)x
 To calculate all the eigenvalues and eigenvectors of the general
 symmetric eigenproblem Ax=(lambda) Bx where A is the symmetric
 matrix:
+ Such problems are called 'standard' eigenvalue problems in
+ contrast to 'generalized' eigenvalue problems where we wish to
+ satisfy the equation
 (0.5 1.5 6.6 4.8)
 (1.5 6.5 16.2 8.6)
 (6.6 16.2 37.6 9.8)
 (4.8 8.6 9.8 17.1)
+ Ax=(lambda)Bx
 and B is the symmetric positive-definite matrix:
+ B also being a square n by n matrix.
 (1 3 4 1)
 (3 13 16 11)
 (4 16 24 18).
 (1 11 18 27)
+ Section 2.1.1 and Section 2.1.2 discuss, respectively, standard
+ and generalized eigenvalue problems where the matrices involved
+ are dense; Section 2.1.3 discusses both types of problem in the
+ case where A and B are sparse (and symmetric).
 The example program is not reproduced here. The source code for
 all example programs is distributed with the NAG Foundation
 Library software and should be available online.
+ 2.1.1. Standard eigenvalue problems
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+ Some of the routines in this chapter find all the n eigenvalues,
+ some find all the n eigensolutions (eigenvalues and corresponding
+ eigenvectors), and some find a selected group of eigenvalues
+ and/or eigenvectors. The matrix A may be:
 F02  Eigenvalue and Eigenvectors F02AFF
 F02AFF  NAG Foundation Library Routine Document
+ (i) general (real or complex)
 Note: Before using this routine, please read the Users' Note for
 your implementation to check implementation-dependent details.
 The symbol (*) after a NAG routine name denotes a routine that is
 not included in the Foundation Library.
+ (ii) real symmetric, or
 1. Purpose
+ (iii) complex Hermitian (so that if a_ij=(alpha)+i(beta) then
+ a_ji=(alpha)-i(beta)).
 F02AFF calculates all the eigenvalues of a real unsymmetric
 matrix.
+ In all cases the computation starts with a similarity
+ transformation S^(-1)AS=T, where S is non-singular and is the
+ product of fairly simple matrices, and T has an 'easier form'
+ than A so that its eigensolutions are easily determined. The
+ matrices A and T, of course, have the same eigenvalues, and if y
+ is an eigenvector of T then Sy is the corresponding eigenvector
+ of A.
 2. Specification
+ In case (i) (general real or complex A), the selected form of T
+ is an upper Hessenberg matrix (t_ij=0 if i-j>1) and S is the
+ product of n-2 stabilised elementary transformation matrices.
+ There is no easy method of computing selected eigenvalues of a
+ Hessenberg matrix, so that all eigenvalues are always calculated.
+ In the real case this computation is performed via the Francis QR
+ algorithm with double shifts, and in the complex case by means of
+ the LR algorithm. If the eigenvectors are required they are
+ computed by back-substitution following the QR and LR algorithm.
 SUBROUTINE F02AFF (A, IA, N, RR, RI, INTGER, IFAIL)
 INTEGER IA, N, INTGER(N), IFAIL
 DOUBLE PRECISION A(IA,N), RR(N), RI(N)
+ In case (ii) (real and symmetric A) the selected simple form of T
+ is a tridiagonal matrix (t_ij=0 if |i-j|>1), and S is the product
+ of n-2 orthogonal Householder transformation matrices. If only
+ selected eigenvalues are required, they are obtained by the
+ method of bisection using the Sturm sequence property, and the
+ corresponding eigenvectors of T are computed by inverse
+ iteration. If all eigenvalues are required, they are computed
+ from T via the QL algorithm (an adaptation of the QR algorithm),
+ and the corresponding eigenvectors of T are the product of the
+ transformations for the QL reduction. In all cases the
+ corresponding eigenvectors of A are recovered from the
+ computation of x=Sy.
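The bisection/Sturm-sequence idea mentioned above can be sketched in Python for a symmetric tridiagonal matrix. This is a bare-bones illustration, not the NAG code; the helper names, the bracketing interval and the 3 by 3 test data are invented. The count of negative pivots in the LDL^T factorisation of T - xI equals the number of eigenvalues below x:

```python
def count_below(a, b, x):
    """Number of eigenvalues < x of the symmetric tridiagonal matrix
    with diagonal a and off-diagonal b (Sturm sequence / inertia count)."""
    count, d = 0, 1.0
    for i in range(len(a)):
        off = b[i - 1] ** 2 if i > 0 else 0.0
        d = (a[i] - x) - off / d
        if d == 0.0:            # nudge off an exact zero pivot
            d = -1e-300
        if d < 0.0:
            count += 1
    return count

def kth_eigenvalue(a, b, k, lo=-100.0, hi=100.0, tol=1e-12):
    """Bisect for the k-th smallest eigenvalue (k = 1, 2, ...), assuming
    all eigenvalues lie in [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(a, b, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Invented test: diag(2,2,2) with off-diagonal -1 has eigenvalues
# 2 - sqrt(2), 2 and 2 + sqrt(2).
a = [2.0, 2.0, 2.0]
b = [-1.0, -1.0]
lam1 = kth_eigenvalue(a, b, 1)   # approx 0.5858
lam2 = kth_eigenvalue(a, b, 2)   # approx 2.0
```

Because `count_below` is monotone in x, the bisection converges unconditionally, which is why the method is attractive when only a few selected eigenvalues are wanted.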
 3. Description
+ In case (iii) (complex Hermitian A) analogous transformations as
+ in case (ii) are used. T has complex elements in off-diagonal
+ positions, but a simple diagonal similarity transformation is
+ then used to produce a real tridiagonal form, after which the QL
+ algorithm and succeeding methods described in the previous
+ paragraph are used to complete the solution.
 The matrix A is first balanced and then reduced to upper
 Hessenberg form using stabilised elementary similarity
 transformations. The eigenvalues are then found using the QR
 algorithm for real Hessenberg matrices.
+ 2.1.2. Generalized eigenvalue problems
 4. References
+ Here we distinguish as a special case those problems in which
+ both A and B are symmetric and B is positive-definite and well-
+ conditioned with respect to inversion (i.e., all the eigenvalues
+ of B are significantly greater than zero). Such problems can be
+ satisfactorily treated by first reducing them to case (ii) of
+ Section 2.1.1 and then using the methods described there to
+ compute the eigensolutions. If B is factorized as LL^T (L lower
+ triangular), then Ax=(lambda)Bx is equivalent to the standard
+ symmetric problem Ry=(lambda)y, where R=L^(-1)A(L^(-1))^T and
+ y=L^Tx. After finding an eigenvector y of R, the required x is
+ computed by back-substitution in y=L^Tx.
 [1] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
 Computation II, Linear Algebra. SpringerVerlag.
+ For generalized problems of the form Ax=(lambda)Bx which do not
+ fall into the special case, the QZ algorithm is provided.
 5. Parameters
+ In order to appreciate the domain in which this algorithm is
+ appropriate we remark first that when B is non-singular the
+ problem Ax=(lambda)Bx is fully equivalent to the problem
+ (B^(-1)A)x=(lambda)x; both the eigenvalues and eigenvectors being
+ the same. When A is non-singular Ax=(lambda)Bx is equivalent to
+ the problem (A^(-1)B)x=(mu)x; the eigenvalues (mu) being the
+ reciprocals of the required eigenvalues and the eigenvectors
+ remaining the same. In theory then, provided at least one of the
+ matrices A and B is non-singular, the generalized problem
+ Ax=(lambda)Bx could be solved via the standard problem
+ Cx=(lambda)x with an appropriate matrix C, and as far as economy
+ of effort is concerned this is quite satisfactory. However, in
+ practice, for this reduction to be satisfactory from the
+ standpoint of numerical stability, one requires more than the
+ mere non-singularity of A or B. It is necessary that B^(-1)A (or
+ A^(-1)B) should not only exist but that B (or A) should be well-
+ conditioned with respect to inversion. The nearer B (or A) is to
+ singularity the more unsatisfactory B^(-1)A (or A^(-1)B) will be
+ as a vehicle for determining the required eigenvalues.
+ Unfortunately one cannot counter ill-conditioning in B (or A) by
+ computing B^(-1)A (or A^(-1)B) accurately to single precision
+ using iterative refinement. Well-determined eigenvalues of the
+ original Ax=(lambda)Bx may be poorly determined even by the
+ correctly rounded version of B^(-1)A (or A^(-1)B). The situation
+ may in some instances be saved by the observation that if
+ Ax=(lambda)Bx then (A-kB)x=((lambda)-k)Bx. Hence if A-kB is
+ non-singular we may solve the standard problem
+ [(A-kB)^(-1)B]x=(mu)x and for numerical stability we require
+ only that (A-kB) be well-conditioned with respect to inversion.
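The shift identity used in that argument, that Ax=(lambda)Bx implies (A-kB)x=((lambda)-k)Bx, is easy to verify numerically. A toy Python check with invented diagonal data where the eigenpair is known by inspection:

```python
# A x = lambda B x holds with lambda = 2 and x = (1, 0) for this
# invented diagonal pair (4 = 2 * 2 in the first coordinate).
A = [[4.0, 0.0], [0.0, 3.0]]
B = [[2.0, 0.0], [0.0, 1.0]]
x = [1.0, 0.0]
lam, k = 2.0, 0.7               # k is an arbitrary shift

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# (A - k B) x should equal (lambda - k) B x for any k.
AkB = [[A[i][j] - k * B[i][j] for j in range(2)] for i in range(2)]
lhs = matvec(AkB, x)
rhs = [(lam - k) * c for c in matvec(B, x)]
assert all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))
```

The identity is what lets a well-chosen shift k trade a nearly singular B for a well-conditioned A-kB before reducing to a standard problem.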
 1: A(IA,N)  DOUBLE PRECISION array Input/Output
 On entry: the n by n matrix A. On exit: the array is
 overwritten.
+ In practice one may well be in a situation where no k is known
+ for which (A-kB) is well-conditioned with respect to inversion
+ and indeed (A-kB) may be singular for all k. The QZ algorithm is
+ designed to deal directly with the problem Ax=(lambda)Bx itself
+ and its performance is unaffected by singularity or near-
+ singularity of A, B or A-kB.
 2: IA  INTEGER Input
 On entry:
 the dimension of the array A as declared in the (sub)program
 from which F02AFF is called.
 Constraint: IA >= N.
+ 2.1.3. Sparse symmetric problems
 3: N  INTEGER Input
 On entry: n, the order of the matrix A.
+ If the matrices A and B are large and sparse (i.e., only a small
+ proportion of the elements are nonzero), then the methods
+ described in the previous Section are unsuitable, because in
+ reducing the problem to a simpler form, much of the sparsity of
+ the problem would be lost; hence the computing time and the
+ storage required would be very large. Instead, for symmetric
+ problems, the method of simultaneous iteration may be used to
+ determine selected eigenvalues and the corresponding
+ eigenvectors. The routine provided has been designed to handle
+ both symmetric and generalized symmetric problems.
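A skeletal version of simultaneous (subspace) iteration may clarify the idea. This sketch is illustrative only: F02FJF accesses the matrices solely through user-supplied DOT and IMAGE routines and adds many refinements (locking, Chebyshev acceleration, convergence control) omitted here, and the 2 by 2 test matrix is invented:

```python
import math
import random

def subspace_iteration(A, p, iters=100, seed=1):
    """Approximate the p eigenvalues of largest modulus of symmetric A."""
    n = len(A)
    rng = random.Random(seed)
    V = [[rng.uniform(-1.0, 1.0) for _ in range(p)] for _ in range(n)]
    for _ in range(iters):
        # W = A V uses only matrix-vector products, which is what makes
        # the method attractive for large sparse matrices.
        W = [[sum(A[i][k] * V[k][j] for k in range(n)) for j in range(p)]
             for i in range(n)]
        # Re-orthonormalise the columns (modified Gram-Schmidt).
        for j in range(p):
            for l in range(j):
                s = sum(W[i][l] * W[i][j] for i in range(n))
                for i in range(n):
                    W[i][j] -= s * W[i][l]
            norm = math.sqrt(sum(W[i][j] ** 2 for i in range(n)))
            for i in range(n):
                W[i][j] /= norm
        V = W
    # Rayleigh quotients of the converged columns approximate eigenvalues.
    vals = []
    for j in range(p):
        v = [V[i][j] for i in range(n)]
        Av = [sum(A[i][k] * v[k] for k in range(n)) for i in range(n)]
        vals.append(sum(v[i] * Av[i] for i in range(n)))
    return sorted(vals, reverse=True)

# Invented symmetric test matrix with eigenvalues 5 and 3.
A = [[4.0, 1.0], [1.0, 4.0]]
vals = subspace_iteration(A, 2)
```

Only matrix-vector products with A are ever formed, so the sparsity of the problem is fully preserved, in contrast to the dense reductions of the previous sections.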
 4: RR(N)  DOUBLE PRECISION array Output
 On exit: the real parts of the eigenvalues.
+ 2.2. Singular Value Problems
 5: RI(N)  DOUBLE PRECISION array Output
 On exit: the imaginary parts of the eigenvalues.
+ The singular value decomposition of an m by n real matrix A is
+ given by
 6: INTGER(N)  INTEGER array Output
 On exit: INTGER(i) contains the number of iterations used
 to find the ith eigenvalue. If INTGER(i) is negative, the i
 th eigenvalue is the second of a pair found simultaneously.
+ A=QDP^T,
 Note that the eigenvalues are found in reverse order,
 starting with the nth.
+ where Q is an m by m orthogonal matrix, P is an n by n orthogonal
+ matrix and D is an m by n diagonal matrix with non-negative
+ diagonal elements. The first k=min(m,n) columns of Q and P are
+ the left- and right-hand singular vectors of A and the k diagonal
+ elements of D are the singular values.
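Since A=QDP^T gives A^TA = P(D^TD)P^T, the singular values are the square roots of the eigenvalues of A^TA. A small pure-Python illustration with invented 3 by 2 data; production routines use the more stable bidiagonal reduction described in Section 3.3 rather than forming A^TA:

```python
import math

# Invented 3 by 2 matrix whose Gram matrix A^T A is easy to diagonalise.
A = [[1.0, 0.0],
     [0.0, 2.0],
     [2.0, 0.0]]
m, n = 3, 2

# G = A^T A is a 2 by 2 symmetric matrix.
G = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
     for i in range(n)]

# Eigenvalues of G from its characteristic polynomial; singular values
# are their square roots, in the descending order used by F02 routines.
tr = G[0][0] + G[1][1]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
sigma = [math.sqrt((tr + disc) / 2.0), math.sqrt((tr - disc) / 2.0)]
```

Forming A^TA squares the condition number, which is precisely why the bidiagonal approach is preferred in practice; the sketch is only meant to make the definition concrete.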
 7: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, -1 or 1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+ When A is complex then the singular value decomposition is given
+ by
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+ A=QDP^H,
 6. Error Indicators and Warnings
+ where Q and P are unitary, P^H denotes the complex conjugate of
+ P^T and D is as above for the real case.
 Errors detected by the routine:
+ If the matrix A has column means of zero, then AP is the matrix
+ of principal components of A and the singular values are the
+ square roots of the sample variances of the observations with
+ respect to the principal components. (See also Chapter G03.)
 IFAIL= 1
 More than 30*N iterations are required to isolate all the
 eigenvalues.
+ Routines are provided to return the singular values and vectors
+ of a general real or complex matrix.
 7. Accuracy
+ 3. Recommendations on Choice and Use of Routines
 The accuracy of the results depends on the original matrix and
 the multiplicity of the roots. For a detailed error analysis see
 Wilkinson and Reinsch [1] pp 352 and 367.
+ 3.1. General Discussion
 8. Further Comments
+ There is one routine, F02FJF, which is designed for sparse
+ symmetric eigenvalue problems, either standard or generalized.
+ The remainder of the routines are designed for dense matrices.
 The time taken by the routine is approximately proportional to n^3.
+ 3.2. Eigenvalue and Eigenvector Routines
 9. Example
+ These reduce the matrix A to a simpler form by a similarity
+ transformation S^(-1)AS=T where T is an upper Hessenberg or
+ tridiagonal matrix, compute the eigensolutions of T, and then
+ recover the eigenvectors of A via the matrix S. The eigenvectors
+ are normalised so that
 To calculate all the eigenvalues of the real matrix:
+      x_1^2 + x_2^2 + ... + x_n^2 = 1
 ( 1.5 0.1 4.5 1.5)
 (22.5 3.5 12.5 2.5)
 ( 2.5 0.3 4.5 2.5).
 ( 2.5 0.1 4.5 2.5)
+ x_r being the rth component of the eigenvector x, and so that the
+ element of largest modulus is real if x is complex. For problems
+ of the type Ax=(lambda)Bx with A and B symmetric and B positive-
+ definite, the eigenvectors are normalised so that x^TBx=1, x
+ always being real for such problems.
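The first normalisation above is simply scaling by the Euclidean norm, e.g. in Python (with an invented vector):

```python
import math

# Scale an eigenvector so the sum of squares of its components is 1.
x = [3.0, 0.0, 4.0]
norm = math.sqrt(sum(c * c for c in x))
x = [c / norm for c in x]
assert abs(sum(c * c for c in x) - 1.0) < 1e-12
```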
 The example program is not reproduced here. The source code for
 all example programs is distributed with the NAG Foundation
 Library software and should be available online.
+ 3.3. Singular Value and Singular Vector Routines
+
+ These reduce the matrix A to real bidiagonal form, B say, by
+ orthogonal transformations Q^TAP=B in the real case, and by
+ unitary transformations Q^HAP=B in the complex case, and the
+ singular values and vectors are computed via this bidiagonal
+ form. The singular values are returned in descending order.
+
+ 3.4. Decision Trees
+
+ (i) Eigenvalues and Eigenvectors
+
+
+ Please see figure in printed Reference Manual
+
+
+ (ii) Singular Values and Singular Vectors
+
+
+ Please see figure in printed Reference Manual
+
+ F02  Eigenvalues and Eigenvectors Contents  F02
+ Chapter F02
+
+ Eigenvalues and Eigenvectors
+
+ F02AAF All eigenvalues of real symmetric matrix
+
+ F02ABF All eigenvalues and eigenvectors of real symmetric matrix
+
+ F02ADF All eigenvalues of generalized real symmetricdefinite
+ eigenproblem
+
+ F02AEF All eigenvalues and eigenvectors of generalized real
+ symmetricdefinite eigenproblem
+
+ F02AFF All eigenvalues of real matrix
+
+ F02AGF All eigenvalues and eigenvectors of real matrix
+
+ F02AJF All eigenvalues of complex matrix
+
+ F02AKF All eigenvalues and eigenvectors of complex matrix
+
+ F02AWF All eigenvalues of complex Hermitian matrix
+
+ F02AXF All eigenvalues and eigenvectors of complex Hermitian
+ matrix
+
+ F02BBF Selected eigenvalues and eigenvectors of real symmetric
+ matrix
+
+ F02BJF All eigenvalues and optionally eigenvectors of
+ generalized eigenproblem by QZ algorithm, real matrices
+
+ F02FJF Selected eigenvalues and eigenvectors of sparse symmetric
+ eigenproblem
+
+ F02WEF SVD of real matrix
+
+ F02XEF SVD of complex matrix
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 F02  Eigenvalue and Eigenvectors F02AGF
 F02AGF  NAG Foundation Library Routine Document
+ F02  Eigenvalue and Eigenvectors F02AAF
+ F02AAF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ -78450,25 +79398,20 @@ This package uses the NAG Library to compute
1. Purpose
 F02AGF calculates all the eigenvalues and eigenvectors of a real
 unsymmetric matrix.
+ F02AAF calculates all the eigenvalues of a real symmetric matrix.
2. Specification
 SUBROUTINE F02AGF (A, IA, N, RR, RI, VR, IVR, VI, IVI,
 1 INTGER, IFAIL)
 INTEGER IA, N, IVR, IVI, INTGER(N), IFAIL
 DOUBLE PRECISION A(IA,N), RR(N), RI(N), VR(IVR,N), VI
 1 (IVI,N)
+ SUBROUTINE F02AAF (A, IA, N, R, E, IFAIL)
+ INTEGER IA, N, IFAIL
+ DOUBLE PRECISION A(IA,N), R(N), E(N)
3. Description
 The matrix A is first balanced and then reduced to upper
 Hessenberg form using real stabilised elementary similarity
 transformations. The eigenvalues and eigenvectors of the
 Hessenberg matrix are calculated using the QR algorithm. The
 eigenvectors of the Hessenberg matrix are backtransformed to
 give the eigenvectors of the original matrix A.
+ This routine reduces the real symmetric matrix A to a real
+ symmetric tridiagonal matrix using Householder's method. The
+ eigenvalues of the tridiagonal matrix are then determined using
+ the QL algorithm.
4. References
@@ -78478,57 +79421,26 @@ This package uses the NAG Library to compute
5. Parameters
1: A(IA,N)  DOUBLE PRECISION array Input/Output
 On entry: the n by n matrix A. On exit: the array is
 overwritten.
+ On entry: the lower triangle of the n by n symmetric matrix
+ A. The elements of the array above the diagonal need not be
+ set. On exit: the elements of A below the diagonal are
+ overwritten, and the rest of the array is unchanged.
2: IA  INTEGER Input
On entry:
the first dimension of the array A as declared in the
 (sub)program from which F02AGF is called.
+ (sub)program from which F02AAF is called.
Constraint: IA >= N.
3: N  INTEGER Input
On entry: n, the order of the matrix A.
 4: RR(N)  DOUBLE PRECISION array Output
 On exit: the real parts of the eigenvalues.
+ 4: R(N)  DOUBLE PRECISION array Output
+ On exit: the eigenvalues in ascending order.
 5: RI(N)  DOUBLE PRECISION array Output
 On exit: the imaginary parts of the eigenvalues.
+ 5: E(N)  DOUBLE PRECISION array Workspace
 6: VR(IVR,N)  DOUBLE PRECISION array Output
 On exit: the real parts of the eigenvectors, stored by
 columns. The ith column corresponds to the ith eigenvalue.
 The eigenvectors are normalised so that the sum of the
 squares of the moduli of the elements is equal to 1 and the
 element of largest modulus is real. This ensures that real
 eigenvalues have real eigenvectors.

 7: IVR  INTEGER Input
 On entry:
 the first dimension of the array VR as declared in the
 (sub)program from which F02AGF is called.
 Constraint: IVR >= N.

 8: VI(IVI,N)  DOUBLE PRECISION array Output
 On exit: the imaginary parts of the eigenvectors, stored by
 columns. The ith column corresponds to the ith eigenvalue.

 9: IVI  INTEGER Input
 On entry:
 the first dimension of the array VI as declared in the
 (sub)program from which F02AGF is called.
 Constraint: IVI >= N.

 10: INTGER(N)  INTEGER array Output
 On exit: INTGER(i) contains the number of iterations used
 to find the ith eigenvalue. If INTGER(i) is negative, the
 ith eigenvalue is the second of a pair found simultaneously.

 Note that the eigenvalues are found in reverse order,
 starting with the nth.

 11: IFAIL  INTEGER Input/Output
+ 6: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ -78541,14 +79453,15 @@ This package uses the NAG Library to compute
Errors detected by the routine:
IFAIL= 1
 More than 30*N iterations are required to isolate all the
 eigenvalues.
+ Failure in F02AVF(*) indicating that more than 30*N
+ iterations are required to isolate all the eigenvalues.
7. Accuracy
 The accuracy of the results depends on the original matrix and
 the multiplicity of the roots. For a detailed error analysis see
 Wilkinson and Reinsch [1] pp 352 and 390.
+ The accuracy of the eigenvalues depends on the sensitivity of the
+ matrix to rounding errors produced in tridiagonalisation. For a
+ detailed error analysis see Wilkinson and Reinsch [1] pp 222 and
+ 235.
8. Further Comments
@@ -78557,13 +79470,12 @@ This package uses the NAG Library to compute
9. Example
 To calculate all the eigenvalues and eigenvectors of the real
 matrix:
+ To calculate all the eigenvalues of the real symmetric matrix:
 ( 1.5 0.1 4.5 1.5)
 (22.5 3.5 12.5 2.5)
 ( 2.5 0.3 4.5 2.5).
 ( 2.5 0.1 4.5 2.5)
+ ( 0.5 0.0 2.3 -2.6)
+ ( 0.0 0.5 -1.4 0.7)
+ ( 2.3 -1.4 0.5 0.0).
+ (-2.6 0.7 0.0 0.5)
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ -78571,8 +79483,8 @@ This package uses the NAG Library to compute
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 F02  Eigenvalue and Eigenvectors F02AJF
 F02AJF  NAG Foundation Library Routine Document
+ F02  Eigenvalue and Eigenvectors F02ABF
+ F02ABF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ -78581,21 +79493,21 @@ This package uses the NAG Library to compute
1. Purpose
 F02AJF calculates all the eigenvalues of a complex matrix.
+ F02ABF calculates all the eigenvalues and eigenvectors of a real
+ symmetric matrix.
2. Specification
 SUBROUTINE F02AJF (AR, IAR, AI, IAI, N, RR, RI, INTGER,
 1 IFAIL)
 INTEGER IAR, IAI, N, INTGER(N), IFAIL
 DOUBLE PRECISION AR(IAR,N), AI(IAI,N), RR(N), RI(N)
+ SUBROUTINE F02ABF (A, IA, N, R, V, IV, E, IFAIL)
+ INTEGER IA, N, IV, IFAIL
+ DOUBLE PRECISION A(IA,N), R(N), V(IV,N), E(N)
3. Description
 The complex matrix A is first balanced and then reduced to upper
 Hessenberg form using stabilised elementary similarity
 transformations. The eigenvalues are then found using the
 modified LR algorithm for complex Hessenberg matrices.
+ This routine reduces the real symmetric matrix A to a real
+ symmetric tridiagonal matrix by Householder's method. The
+ eigenvalues and eigenvectors are calculated using the QL
+ algorithm.
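As an illustration of computing both the eigenvalues and the normalised eigenvectors of a real symmetric matrix, here is a classical cyclic-Jacobi sketch. The NAG routine itself uses Householder reduction plus the QL algorithm, so this is a stand-in technique, and `jacobi_eig` is an invented name:

```python
import math

def jacobi_eig(A, sweeps=100, tol=1e-12):
    """Eigenvalues and eigenvectors of a symmetric matrix by cyclic
    Jacobi rotations (stand-in sketch; not the NAG QL algorithm)."""
    n = len(A)
    A = [row[:] for row in A]
    V = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = math.sqrt(sum(A[i][j] ** 2
                            for i in range(n) for j in range(n) if i != j))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-30:
                    continue
                # rotation angle that zeroes A[p][q]
                theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):          # A <- A G,  V <- V G
                    akp, akq = A[k][p], A[k][q]
                    A[k][p], A[k][q] = c * akp - s * akq, s * akp + c * akq
                    vkp, vkq = V[k][p], V[k][q]
                    V[k][p], V[k][q] = c * vkp - s * vkq, s * vkp + c * vkq
                for k in range(n):          # A <- G^T A
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k], A[q][k] = c * apk - s * aqk, s * apk + c * aqk
    return [A[i][i] for i in range(n)], V
```

Each column of V has unit sum of squares, matching the normalisation documented here for the array V.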
4. References
@@ -78604,38 +79516,38 @@ This package uses the NAG Library to compute
5. Parameters
 1: AR(IAR,N)  DOUBLE PRECISION array Input/Output
 On entry: the real parts of the elements of the n by n
 complex matrix A. On exit: the array is overwritten.

 2: IAR  INTEGER Input
 On entry:
 the first dimension of the array AR as declared in the
 (sub)program from which F02AJF is called.
 Constraint: IAR >= N.

 3: AI(IAI,N)  DOUBLE PRECISION array Input/Output
 On entry: the imaginary parts of the elements of the n by n
 complex matrix A. On exit: the array is overwritten.
+ 1: A(IA,N)  DOUBLE PRECISION array Input
+ On entry: the lower triangle of the n by n symmetric matrix
+ A. The elements of the array above the diagonal need not be
+ set. See also Section 8.
 4: IAI  INTEGER Input
+ 2: IA  INTEGER Input
On entry:
 the first dimension of the array AI as declared in the
 (sub)program from which F02AJF is called.
 Constraint: IAI >= N.
+ the first dimension of the array A as declared in the
+ (sub)program from which F02ABF is called.
+ Constraint: IA >= N.
 5: N  INTEGER Input
+ 3: N  INTEGER Input
On entry: n, the order of the matrix A.
 6: RR(N)  DOUBLE PRECISION array Output
 On exit: the real parts of the eigenvalues.
+ 4: R(N)  DOUBLE PRECISION array Output
+ On exit: the eigenvalues in ascending order.
 7: RI(N)  DOUBLE PRECISION array Output
 On exit: the imaginary parts of the eigenvalues.
+ 5: V(IV,N)  DOUBLE PRECISION array Output
+ On exit: the normalised eigenvectors, stored by columns;
+ the ith column corresponds to the ith eigenvalue. The
+ eigenvectors are normalised so that the sum of squares of
+ the elements is equal to 1.
 8: INTGER(N)  INTEGER array Workspace
+ 6: IV  INTEGER Input
+ On entry:
+ the first dimension of the array V as declared in the
+ (sub)program from which F02ABF is called.
+ Constraint: IV >= N.
 9: IFAIL  INTEGER Input/Output
+ 7: E(N)  DOUBLE PRECISION array Workspace
+
+ 8: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ -78648,28 +79560,37 @@ This package uses the NAG Library to compute
Errors detected by the routine:
IFAIL= 1
 More than 30*N iterations are required to isolate all the
 eigenvalues.
+ Failure in F02AMF(*) indicating that more than 30*N
+ iterations are required to isolate all the eigenvalues.
7. Accuracy
 The accuracy of the results depends on the original matrix and
 the multiplicity of the roots. For a detailed error analysis see
 Wilkinson and Reinsch [1] pp 352 and 401.
+ The eigenvectors are always accurately orthogonal but the
+ accuracy of the individual eigenvectors is dependent on their
+ inherent sensitivity to changes in the original matrix. For a
+ detailed error analysis see Wilkinson and Reinsch [1] pp 222 and
+ 235.
8. Further Comments
The time taken by the routine is approximately proportional to n^3.
+ Unless otherwise stated in the Users' Note for your
+ implementation, the routine may be called with the same actual
+ array supplied for parameters A and V, in which case the
+ eigenvectors will overwrite the original matrix. However this is
+ not standard Fortran 77, and may not work on all systems.
+
9. Example
 To calculate all the eigenvalues of the complex matrix:
+ To calculate all the eigenvalues and eigenvectors of the real
+ symmetric matrix:
 (21.0-5.0i 24.60i 13.6+10.2i 4.0i)
 ( 22.5i 26.00-5.00i 7.5-10.0i 2.5 )
 ( 2.0+1.5i 1.68+2.24i 4.5-5.0i 1.5+2.0i).
 ( 2.5i 2.60 2.7+3.6i 2.5-5.0i)
+ ( 0.5 0.0 2.3 -2.6)
+ ( 0.0 0.5 -1.4 0.7)
+ ( 2.3 -1.4 0.5 0.0).
+ (-2.6 0.7 0.0 0.5)
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ -78677,8 +79598,8 @@ This package uses the NAG Library to compute
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 F02  Eigenvalue and Eigenvectors F02AKF
 F02AKF  NAG Foundation Library Routine Document
+ F02  Eigenvalue and Eigenvectors F02ADF
+ F02ADF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ -78687,25 +79608,29 @@ This package uses the NAG Library to compute
1. Purpose
 F02AKF calculates all the eigenvalues and eigenvectors of a
 complex matrix.
+ F02ADF calculates all the eigenvalues of Ax=(lambda)Bx, where A
+ is a real symmetric matrix and B is a real symmetric
+ positive-definite matrix.
2. Specification
 SUBROUTINE F02AKF (AR, IAR, AI, IAI, N, RR, RI, VR, IVR,
 1 VI, IVI, INTGER, IFAIL)
 INTEGER IAR, IAI, N, IVR, IVI, INTGER(N), IFAIL
 DOUBLE PRECISION AR(IAR,N), AI(IAI,N), RR(N), RI(N), VR
 1 (IVR,N), VI(IVI,N)
+ SUBROUTINE F02ADF (A, IA, B, IB, N, R, DE, IFAIL)
+ INTEGER IA, IB, N, IFAIL
+ DOUBLE PRECISION A(IA,N), B(IB,N), R(N), DE(N)
3. Description
 The complex matrix A is first balanced and then reduced to upper
 Hessenberg form by stabilised elementary similarity
 transformations. The eigenvalues and eigenvectors of the
 Hessenberg matrix are calculated using the LR algorithm. The
 eigenvectors of the Hessenberg matrix are back-transformed to
 give the eigenvectors of the original matrix.
+ The problem is reduced to the standard symmetric eigenproblem
+ using Cholesky's method to decompose B into triangular matrices,
+ B=LL^T, where L is lower triangular. Then Ax=(lambda)Bx implies
+ (L^(-1)AL^(-T))(L^T x)=(lambda)(L^T x); hence the eigenvalues of
+ Ax=(lambda)Bx are those of Py=(lambda)y where P is the symmetric
+ matrix L^(-1)AL^(-T). Householder's method is used to
+ tridiagonalise the matrix P and the eigenvalues are then found
+ using the QL algorithm.
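The Cholesky factor B=LL^T used in this reduction can be computed with a short pure-Python sketch (`cholesky` is an invented helper name; the NAG routine relies on its own factorisation via F01AEF). The matrix below is the positive-definite B from the example in Section 9:

```python
import math

def cholesky(B):
    """Lower-triangular L with B = L * L^T (sketch of Cholesky's method)."""
    n = len(B)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # the sqrt fails here if B is not positive-definite
                L[i][i] = math.sqrt(B[i][i] - s)
            else:
                L[i][j] = (B[i][j] - s) / L[j][j]
    return L

B = [[1.0, 3.0, 4.0, 1.0],
     [3.0, 13.0, 16.0, 11.0],
     [4.0, 16.0, 24.0, 18.0],
     [1.0, 11.0, 18.0, 27.0]]
L = cholesky(B)
```

The generalized problem then becomes the standard one for P = L^(-1) A L^(-T), which is formed by triangular solves rather than explicit inversion.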
4. References
@@ -78714,61 +79639,40 @@ This package uses the NAG Library to compute
5. Parameters
 1: AR(IAR,N)  DOUBLE PRECISION array Input/Output
 On entry: the real parts of the elements of the n by n
 complex matrix A. On exit: the array is overwritten.
+ 1: A(IA,N)  DOUBLE PRECISION array Input/Output
+ On entry: the upper triangle of the n by n symmetric matrix
+ A. The elements of the array below the diagonal need not be
+ set. On exit: the lower triangle of the array is
+ overwritten. The rest of the array is unchanged.
 2: IAR  INTEGER Input
+ 2: IA  INTEGER Input
On entry:
 the first dimension of the array AR as declared in the
 (sub)program from which F02AKF is called.
 Constraint: IAR >= N.
+ the first dimension of the array A as declared in the
+ (sub)program from which F02ADF is called.
+ Constraint: IA >= N.
 3: AI(IAI,N)  DOUBLE PRECISION array Input/Output
 On entry: the imaginary parts of the elements of the n by n
 complex matrix A. On exit: the array is overwritten.
+ 3: B(IB,N)  DOUBLE PRECISION array Input/Output
+ On entry: the upper triangle of the n by n symmetric
+ positive-definite matrix B. The elements of the array below
+ the diagonal need not be set. On exit: the elements below
+ the diagonal are overwritten. The rest of the array is
+ unchanged.
 4: IAI  INTEGER Input
+ 4: IB  INTEGER Input
On entry:
 the first dimension of the array AI as declared in the
 (sub)program from which F02AKF is called.
 Constraint: IAI >= N.
+ the first dimension of the array B as declared in the
+ (sub)program from which F02ADF is called.
+ Constraint: IB >= N.
5: N  INTEGER Input
 On entry: n, the order of the matrix A.

 6: RR(N)  DOUBLE PRECISION array Output
 On exit: the real parts of the eigenvalues.

 7: RI(N)  DOUBLE PRECISION array Output
 On exit: the imaginary parts of the eigenvalues.

 8: VR(IVR,N)  DOUBLE PRECISION array Output
 On exit: the real parts of the eigenvectors, stored by
 columns. The ith column corresponds to the ith eigenvalue.
 The eigenvectors are normalised so that the sum of squares
 of the moduli of the elements is equal to 1 and the element
 of largest modulus is real.

 9: IVR  INTEGER Input
 On entry:
 the first dimension of the array VR as declared in the
 (sub)program from which F02AKF is called.
 Constraint: IVR >= N.

 10: VI(IVI,N)  DOUBLE PRECISION array Output
 On exit: the imaginary parts of the eigenvectors, stored by
 columns. The ith column corresponds to the ith eigenvalue.
+ On entry: n, the order of the matrices A and B.
 11: IVI  INTEGER Input
 On entry:
 the first dimension of the array VI as declared in the
 (sub)program from which F02AKF is called.
 Constraint: IVI >= N.
+ 6: R(N)  DOUBLE PRECISION array Output
+ On exit: the eigenvalues in ascending order.
 12: INTGER(N)  INTEGER array Workspace
+ 7: DE(N)  DOUBLE PRECISION array Workspace
 13: IFAIL  INTEGER Input/Output
+ 8: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ -78781,14 +79685,19 @@ This package uses the NAG Library to compute
Errors detected by the routine:
IFAIL= 1
 More than 30*N iterations are required to isolate all the
 eigenvalues.
+ Failure in F01AEF(*); the matrix B is not positive-definite,
+ possibly due to rounding errors.
+
+ IFAIL= 2
+ Failure in F02AVF(*), more than 30*N iterations are required
+ to isolate all the eigenvalues.
7. Accuracy
 The accuracy of the results depends on the conditioning of the
 original matrix and the multiplicity of the roots. For a detailed
 error analysis see Wilkinson and Reinsch [1] pp 352 and 390.
+ In general this routine is very accurate. However, if B is
+ ill-conditioned with respect to inversion, the eigenvalues
+ could be inaccurately determined. For a detailed error analysis
+ see Wilkinson and Reinsch [1] pp 310, 222 and 235.
8. Further Comments
@@ -78797,13 +79706,20 @@ This package uses the NAG Library to compute
9. Example
 To calculate all the eigenvalues and eigenvectors of the complex
 matrix:
+ To calculate all the eigenvalues of the general symmetric
+ eigenproblem Ax=(lambda) Bx where A is the symmetric matrix:
 (21.0-5.0i 24.60i 13.6+10.2i 4.0i)
 ( 22.5i 26.00-5.00i 7.5-10.0i 2.5 )
 ( 2.0+1.5i 1.68+2.24i 4.5-5.0i 1.5+2.0i).
 ( 2.5i 2.60 2.7+3.6i 2.5-5.0i)
+ (0.5 1.5 6.6 4.8)
+ (1.5 6.5 16.2 8.6)
+ (6.6 16.2 37.6 9.8)
+ (4.8 8.6 9.8 17.1)
+
+ and B is the symmetric positive-definite matrix:
+
+ (1 3 4 1)
+ (3 13 16 11)
+ (4 16 24 18).
+ (1 11 18 27)
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ -78811,8 +79727,8 @@ This package uses the NAG Library to compute
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 F02  Eigenvalue and Eigenvectors F02AWF
 F02AWF  NAG Foundation Library Routine Document
+ F02  Eigenvalue and Eigenvectors F02AEF
+ F02AEF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ -78821,76 +79737,101 @@ This package uses the NAG Library to compute
1. Purpose
 F02AWF calculates all the eigenvalues of a complex Hermitian
 matrix.
+ F02AEF calculates all the eigenvalues and eigenvectors of
+ Ax=(lambda)Bx, where A is a real symmetric matrix and B is a
+ real symmetric positive-definite matrix.
2. Specification
 SUBROUTINE F02AWF (AR, IAR, AI, IAI, N, R, WK1, WK2, WK3,
 1 IFAIL)
 INTEGER IAR, IAI, N, IFAIL
 DOUBLE PRECISION AR(IAR,N), AI(IAI,N), R(N), WK1(N),
 1 WK2(N), WK3(N)
+ SUBROUTINE F02AEF (A, IA, B, IB, N, R, V, IV, DL, E, IFAIL)
+ INTEGER IA, IB, N, IV, IFAIL
+ DOUBLE PRECISION A(IA,N), B(IB,N), R(N), V(IV,N), DL(N), E
+ 1 (N)
3. Description
 The complex Hermitian matrix A is first reduced to a real
 tridiagonal matrix by n-2 unitary transformations, and a
 subsequent diagonal transformation. The eigenvalues are then
 derived using the QL algorithm, an adaptation of the QR
 algorithm.
+ The problem is reduced to the standard symmetric eigenproblem
+ using Cholesky's method to decompose B into triangular matrices
+ B=LL^T, where L is lower triangular. Then Ax=(lambda)Bx implies
+ (L^(-1)AL^(-T))(L^T x)=(lambda)(L^T x); hence the eigenvalues of
+ Ax=(lambda)Bx are those of Py=(lambda)y, where P is the
+ symmetric matrix L^(-1)AL^(-T). Householder's method is used to
+ tridiagonalise the matrix P and the eigenvalues are found using
+ the QL algorithm. An eigenvector z of the derived problem is
+ related to an eigenvector x of the original problem by z=L^T x.
+ The eigenvectors z are determined using the QL algorithm and are
+ normalised so that z^T z=1; the eigenvectors of the original
+ problem are then determined by solving L^T x=z, and are
+ normalised so that x^T Bx=1.
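The final step, recovering x from L^T x = z by back-substitution and rescaling so that x^T Bx = 1, can be sketched as follows. The helper names are invented and the data in the test is small illustrative input, not the routine's:

```python
import math

def solve_lt(L, z):
    """Back-substitution for L^T x = z, with L lower triangular."""
    n = len(z)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # (L^T)[i][j] = L[j][i], so only j > i contributes
        s = sum(L[j][i] * x[j] for j in range(i + 1, n))
        x[i] = (z[i] - s) / L[i][i]
    return x

def normalise_b(x, B):
    """Scale x so that x^T B x = 1 (B assumed positive-definite)."""
    n = len(x)
    xbx = sum(x[i] * B[i][j] * x[j] for i in range(n) for j in range(n))
    scale = 1.0 / math.sqrt(xbx)
    return [scale * t for t in x]
```

This B-normalisation is what distinguishes these eigenvectors from the unit-Euclidean-norm convention used by the standard symmetric routines.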
4. References
 [1] Peters G (1967) NPL Algorithms Library. Document No.
 F1/04/A.

 [2] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
+ [1] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
Computation II, Linear Algebra. SpringerVerlag.
5. Parameters
 1: AR(IAR,N)  DOUBLE PRECISION array Input/Output
 On entry: the real parts of the elements of the lower
 triangle of the n by n complex Hermitian matrix A. Elements
 of the array above the diagonal need not be set. On exit:
 the array is overwritten.
+ 1: A(IA,N)  DOUBLE PRECISION array Input/Output
 2: IAR  INTEGER Input
+ On entry: the upper triangle of the n by n symmetric matrix
+ A. The elements of the array below the diagonal need not be
+ set. On exit: the lower triangle of the array is
+ overwritten. The rest of the array is unchanged. See also
+ Section 8.
+
+ 2: IA  INTEGER Input
On entry:
 the first dimension of the array AR as declared in the
 (sub)program from which F02AWF is called.
 Constraint: IAR >= N.
+ the first dimension of the array A as declared in the
+ (sub)program from which F02AEF is called.
+ Constraint: IA >= N.
 3: AI(IAI,N)  DOUBLE PRECISION array Input/Output
 On entry: the imaginary parts of the elements of the lower
 triangle of the n by n complex Hermitian matrix A. Elements
 of the array above the diagonal need not be set. On exit:
 the array is overwritten.
+ 3: B(IB,N)  DOUBLE PRECISION array Input/Output
+ On entry: the upper triangle of the n by n symmetric
+ positive-definite matrix B. The elements of the array below
+ the diagonal need not be set. On exit: the elements below
+ the diagonal are overwritten. The rest of the array is
+ unchanged.
 4: IAI  INTEGER Input
+ 4: IB  INTEGER Input
On entry:
 the first dimension of the array AI as declared in the
 (sub)program from which F02AWF is called.
 Constraint: IAI >= N.
+ the first dimension of the array B as declared in the
+ (sub)program from which F02AEF is called.
+ Constraint: IB >= N.
5: N  INTEGER Input
 On entry: n, the order of the complex Hermitian matrix, A.
+ On entry: n, the order of the matrices A and B.
6: R(N)  DOUBLE PRECISION array Output
On exit: the eigenvalues in ascending order.
 7: WK1(N)  DOUBLE PRECISION array Workspace
+ 7: V(IV,N)  DOUBLE PRECISION array Output
+ On exit: the normalised eigenvectors, stored by columns;
+ the ith column corresponds to the ith eigenvalue. The
+ eigenvectors x are normalised so that x^T Bx=1. See also
+ Section 8.
 8: WK2(N)  DOUBLE PRECISION array Workspace
+ 8: IV  INTEGER Input
+ On entry:
+ the first dimension of the array V as declared in the
+ (sub)program from which F02AEF is called.
+ Constraint: IV >= N.
 9: WK3(N)  DOUBLE PRECISION array Workspace
+ 9: DL(N)  DOUBLE PRECISION array Workspace
 10: IFAIL  INTEGER Input/Output
+ 10: E(N)  DOUBLE PRECISION array Workspace
+
+ 11: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.

On exit: IFAIL = 0 unless the routine detects an error (see
Section 6).
@@ -78899,27 +79840,48 @@ This package uses the NAG Library to compute
Errors detected by the routine:
IFAIL= 1
 More than 30*N iterations are required to isolate all the
 eigenvalues.
+ Failure in F01AEF(*); the matrix B is not positive-definite,
+ possibly due to rounding errors.
+
+ IFAIL= 2
+ Failure in F02AMF(*); more than 30*N iterations are required
+ to isolate all the eigenvalues.
7. Accuracy
 For a detailed error analysis see Peters [1] page 3 and Wilkinson
 and Reinsch [2] page 235.
+ In general this routine is very accurate. However, if B is
+ ill-conditioned with respect to inversion, the eigenvectors
+ could be inaccurately determined. For a detailed error analysis
+ see Wilkinson and Reinsch [1] pp 310, 222 and 235.
8. Further Comments
The time taken by the routine is approximately proportional to n^3.
+ Unless otherwise stated in the Users' Note for your
+ implementation, the routine may be called with the same actual
+ array supplied for parameters A and V, in which case the
+ eigenvectors will overwrite the original matrix A. However this
+ is not standard Fortran 77, and may not work on all systems.
+
9. Example
 To calculate all the eigenvalues of the complex Hermitian matrix:
+ To calculate all the eigenvalues and eigenvectors of the general
+ symmetric eigenproblem Ax=(lambda) Bx where A is the symmetric
+ matrix:
 (0.50 0.00 1.84+1.38i 2.08-1.56i)
 (0.00 0.50 1.12+0.84i 0.56+0.42i)
 (1.84-1.38i 1.12-0.84i 0.50 0.00 ).
 (2.08+1.56i 0.56-0.42i 0.00 0.50 )
+ (0.5 1.5 6.6 4.8)
+ (1.5 6.5 16.2 8.6)
+ (6.6 16.2 37.6 9.8)
+ (4.8 8.6 9.8 17.1)
+
+ and B is the symmetric positive-definite matrix:
+
+ (1 3 4 1)
+ (3 13 16 11)
+ (4 16 24 18).
+ (1 11 18 27)
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ -78927,8 +79889,8 @@ This package uses the NAG Library to compute
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 F02  Eigenvalue and Eigenvectors F02AXF
 F02AXF  NAG Foundation Library Routine Document
+ F02  Eigenvalue and Eigenvectors F02AFF
+ F02AFF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ -78937,96 +79899,57 @@ This package uses the NAG Library to compute
1. Purpose
 F02AXF calculates all the eigenvalues and eigenvectors of a
 complex Hermitian matrix.
+ F02AFF calculates all the eigenvalues of a real unsymmetric
+ matrix.
2. Specification
 SUBROUTINE F02AXF (AR, IAR, AI, IAI, N, R, VR, IVR, VI,
 1 IVI, WK1, WK2, WK3, IFAIL)
 INTEGER IAR, IAI, N, IVR, IVI, IFAIL
 DOUBLE PRECISION AR(IAR,N), AI(IAI,N), R(N), VR(IVR,N), VI
 1 (IVI,N), WK1(N), WK2(N), WK3(N)
+ SUBROUTINE F02AFF (A, IA, N, RR, RI, INTGER, IFAIL)
+ INTEGER IA, N, INTGER(N), IFAIL
+ DOUBLE PRECISION A(IA,N), RR(N), RI(N)
3. Description
 The complex Hermitian matrix is first reduced to a real
 tridiagonal matrix by n-2 unitary transformations and a
 subsequent diagonal transformation. The eigenvalues and
 eigenvectors are then derived using the QL algorithm, an
 adaptation of the QR algorithm.
+ The matrix A is first balanced and then reduced to upper
+ Hessenberg form using stabilised elementary similarity
+ transformations. The eigenvalues are then found using the QR
+ algorithm for real Hessenberg matrices.
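The reduction to upper Hessenberg form by elementary similarity transformations can be sketched as below. For clarity the sketch omits the balancing step and the stabilising row/column interchanges the NAG routine applies, so it is not numerically robust; `hessenberg` is an invented name:

```python
def hessenberg(A):
    """Reduce A to upper Hessenberg form by Gaussian-elimination-style
    similarity transformations (no pivoting; illustrative only)."""
    n = len(A)
    A = [row[:] for row in A]
    for k in range(n - 2):
        pivot = A[k + 1][k]
        if abs(pivot) < 1e-30:
            continue
        for i in range(k + 2, n):
            m = A[i][k] / pivot
            for j in range(n):            # row_i <- row_i - m * row_{k+1}
                A[i][j] -= m * A[k + 1][j]
            for j in range(n):            # undo on the right:
                A[j][k + 1] += m * A[j][i]  # col_{k+1} <- col_{k+1} + m * col_i
    return A
```

Because each step is a similarity transformation, the Hessenberg result has the same eigenvalues (and trace) as A; the QR iteration then operates on this condensed form.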
4. References
 [1] Peters G (1967) NPL Algorithms Library. Document No.
 F2/03/A.

 [2] Peters G (1967) NPL Algorithms Library. Document No.
 F1/04/A.
+ [1] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
+ Computation II, Linear Algebra. SpringerVerlag.
5. Parameters
 1: AR(IAR,N)  DOUBLE PRECISION array Input
 On entry: the real parts of the elements of the lower
 triangle of the n by n complex Hermitian matrix A. Elements
 of the array above the diagonal need not be set. See also
 Section 8.
+ 1: A(IA,N)  DOUBLE PRECISION array Input/Output
+ On entry: the n by n matrix A. On exit: the array is
+ overwritten.
 2: IAR  INTEGER Input
+ 2: IA  INTEGER Input
On entry:
 the first dimension of the array AR as declared in the
 (sub)program from which F02AXF is called.
 Constraint: IAR >= N.
+ the first dimension of the array A as declared in the
+ (sub)program from which F02AFF is called.
+ Constraint: IA >= N.
 3: AI(IAI,N)  DOUBLE PRECISION array Input
 On entry: the imaginary parts of the elements of the lower
 triangle of the n by n complex Hermitian matrix A. Elements
 of the array above the diagonal need not be set. See also
 Section 8.
+ 3: N  INTEGER Input
+ On entry: n, the order of the matrix A.
 4: IAI  INTEGER Input
 On entry:
 the first dimension of the array AI as declared in the
 (sub)program from which F02AXF is called.
 Constraint: IAI >= N.
+ 4: RR(N)  DOUBLE PRECISION array Output
+ On exit: the real parts of the eigenvalues.
 5: N  INTEGER Input
 On entry: n, the order of the matrix, A.
+ 5: RI(N)  DOUBLE PRECISION array Output
+ On exit: the imaginary parts of the eigenvalues.
 6: R(N)  DOUBLE PRECISION array Output
 On exit: the eigenvalues in ascending order.
+ 6: INTGER(N)  INTEGER array Output
+ On exit: INTGER(i) contains the number of iterations used
+ to find the ith eigenvalue. If INTGER(i) is negative, the
+ ith eigenvalue is the second of a pair found simultaneously.
 7: VR(IVR,N)  DOUBLE PRECISION array Output
 On exit: the real parts of the eigenvectors, stored by
 columns. The ith column corresponds to the ith eigenvector.
 The eigenvectors are normalised so that the sum of the
 squares of the moduli of the elements is equal to 1 and the
 element of largest modulus is real. See also Section 8.

 8: IVR  INTEGER Input
 On entry:
 the first dimension of the array VR as declared in the
 (sub)program from which F02AXF is called.
 Constraint: IVR >= N.

 9: VI(IVI,N)  DOUBLE PRECISION array Output
 On exit: the imaginary parts of the eigenvectors, stored by
 columns. The ith column corresponds to the ith eigenvector.
 See also Section 8.

 10: IVI  INTEGER Input
 On entry:
 the first dimension of the array VI as declared in the
 (sub)program from which F02AXF is called.
 Constraint: IVI >= N.

 11: WK1(N)  DOUBLE PRECISION array Workspace

 12: WK2(N)  DOUBLE PRECISION array Workspace

 13: WK3(N)  DOUBLE PRECISION array Workspace
+ Note that the eigenvalues are found in reverse order,
+ starting with the nth.
 14: IFAIL  INTEGER Input/Output
+ 7: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ -79042,38 +79965,25 @@ This package uses the NAG Library to compute
More than 30*N iterations are required to isolate all the
eigenvalues.
 IFAIL= 2
 The diagonal elements of AI are not all zero, i.e., the
 complex matrix is not Hermitian.

7. Accuracy
 The eigenvectors are always accurately orthogonal but the
 accuracy of the individual eigenvalues and eigenvectors is
 dependent on their inherent sensitivity to small changes in the
 original matrix. For a detailed error analysis see Peters [1]
 page 3 and [2] page 3.
+ The accuracy of the results depends on the original matrix and
+ the multiplicity of the roots. For a detailed error analysis see
+ Wilkinson and Reinsch [1] pp 352 and 367.
8. Further Comments
The time taken by the routine is approximately proportional to n^3.
 Unless otherwise stated in the implementation document, the
 routine may be called with the same actual array supplied for
 parameters AR and VR, and for AI and VI, in which case the
 eigenvectors will overwrite the original matrix A. However this
 is not standard Fortran 77, and may not work on all systems.

9. Example
 To calculate the eigenvalues and eigenvectors of the complex
 Hermitian matrix:
+ To calculate all the eigenvalues of the real matrix:
 (0.50 0.00 1.84+1.38i 2.08-1.56i)
 (0.00 0.50 1.12+0.84i 0.56+0.42i)
 (1.84-1.38i 1.12-0.84i 0.50 0.00 ).
 (2.08+1.56i 0.56-0.42i 0.00 0.50 )
+ ( 1.5 0.1 4.5 1.5)
+ (22.5 3.5 12.5 2.5)
+ ( 2.5 0.3 4.5 2.5).
+ ( 2.5 0.1 4.5 2.5)
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ -79081,8 +79991,8 @@ This package uses the NAG Library to compute
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 F02  Eigenvalue and Eigenvectors F02BBF
 F02BBF  NAG Foundation Library Routine Document
+ F02  Eigenvalue and Eigenvectors F02AGF
+ F02AGF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ -79091,28 +80001,25 @@ This package uses the NAG Library to compute
1. Purpose
 F02BBF calculates selected eigenvalues and eigenvectors of a real
 symmetric matrix by reduction to tridiagonal form, bisection and
 inverse iteration, where the selected eigenvalues lie within a
 given interval.
+ F02AGF calculates all the eigenvalues and eigenvectors of a real
+ unsymmetric matrix.
2. Specification
 SUBROUTINE F02BBF (A, IA, N, ALB, UB, M, MM, R, V, IV, D,
 1 E, E2, X, G, C, ICOUNT, IFAIL)
 INTEGER IA, N, M, MM, IV, ICOUNT(M), IFAIL
 DOUBLE PRECISION A(IA,N), ALB, UB, R(M), V(IV,M), D(N), E
 1 (N), E2(N), X(N,7), G(N)
 LOGICAL C(N)
+ SUBROUTINE F02AGF (A, IA, N, RR, RI, VR, IVR, VI, IVI,
+ 1 INTGER, IFAIL)
+ INTEGER IA, N, IVR, IVI, INTGER(N), IFAIL
+ DOUBLE PRECISION A(IA,N), RR(N), RI(N), VR(IVR,N), VI
+ 1 (IVI,N)
3. Description
 The real symmetric matrix A is reduced to a symmetric tridiagonal
 matrix T by Householder's method. The eigenvalues which lie
 within a given interval [l,u], are calculated by the method of
 bisection. The corresponding eigenvectors of T are calculated by
 inverse iteration. A back-transformation is then performed to
 obtain the eigenvectors of the original matrix A.
+ The matrix A is first balanced and then reduced to upper
+ Hessenberg form using real stabilised elementary similarity
+ transformations. The eigenvalues and eigenvectors of the
+ Hessenberg matrix are calculated using the QR algorithm. The
+ eigenvectors of the Hessenberg matrix are back-transformed to
+ give the eigenvectors of the original matrix A.
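The eigenvector normalisation this routine documents for VR/VI (unit sum of squared moduli, with the element of largest modulus made real) can be sketched with Python complex numbers; `normalise_eigvec` is an invented name:

```python
import math

def normalise_eigvec(v):
    """Scale a complex eigenvector so the sum of squared moduli is 1
    and the element of largest modulus is real (and positive)."""
    norm = math.sqrt(sum(abs(z) ** 2 for z in v))
    v = [z / norm for z in v]
    big = max(v, key=abs)
    phase = big / abs(big)     # unit-modulus factor of the largest element
    return [z / phase for z in v]
```

With this convention a real eigenvalue's eigenvector comes out entirely real, as noted in the parameter description.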
4. References
@@ -79122,67 +80029,57 @@ This package uses the NAG Library to compute
5. Parameters
1: A(IA,N)  DOUBLE PRECISION array Input/Output
 On entry: the lower triangle of the n by n symmetric matrix
 A. The elements of the array above the diagonal need not be
 set. On exit: the elements of A below the diagonal are
 overwritten, and the rest of the array is unchanged.
+ On entry: the n by n matrix A. On exit: the array is
+ overwritten.
2: IA  INTEGER Input
On entry:
the first dimension of the array A as declared in the
 (sub)program from which F02BBF is called.
+ (sub)program from which F02AGF is called.
Constraint: IA >= N.
3: N  INTEGER Input
On entry: n, the order of the matrix A.
 4: ALB  DOUBLE PRECISION Input

 5: UB  DOUBLE PRECISION Input
 On entry: l and u, the lower and upper endpoints of the
 interval within which eigenvalues are to be calculated.

 6: M  INTEGER Input
 On entry: an upper bound for the number of eigenvalues
 within the interval.

 7: MM  INTEGER Output
 On exit: the actual number of eigenvalues within the
 interval.
+ 4: RR(N)  DOUBLE PRECISION array Output
+ On exit: the real parts of the eigenvalues.
 8: R(M)  DOUBLE PRECISION array Output
 On exit: the eigenvalues, not necessarily in ascending
 order.
+ 5: RI(N)  DOUBLE PRECISION array Output
+ On exit: the imaginary parts of the eigenvalues.
 9: V(IV,M)  DOUBLE PRECISION array Output
 On exit: the eigenvectors, stored by columns. The ith
 column corresponds to the ith eigenvalue. The eigenvectors
 are normalised so that the sum of the squares of the
 elements is equal to 1.
+ 6: VR(IVR,N)  DOUBLE PRECISION array Output
+ On exit: the real parts of the eigenvectors, stored by
+ columns. The ith column corresponds to the ith eigenvalue.
+ The eigenvectors are normalised so that the sum of the
+ squares of the moduli of the elements is equal to 1 and the
+ element of largest modulus is real. This ensures that real
+ eigenvalues have real eigenvectors.
 10: IV  INTEGER Input
+ 7: IVR  INTEGER Input
On entry:
 the first dimension of the array V as declared in the
 (sub)program from which F02BBF is called.
 Constraint: IV >= N.

 11: D(N)  DOUBLE PRECISION array Workspace

 12: E(N)  DOUBLE PRECISION array Workspace

 13: E2(N)  DOUBLE PRECISION array Workspace
+ the first dimension of the array VR as declared in the
+ (sub)program from which F02AGF is called.
+ Constraint: IVR >= N.
 14: X(N,7)  DOUBLE PRECISION array Workspace
+ 8: VI(IVI,N)  DOUBLE PRECISION array Output
+ On exit: the imaginary parts of the eigenvectors, stored by
+ columns. The ith column corresponds to the ith eigenvalue.
 15: G(N)  DOUBLE PRECISION array Workspace
+ 9: IVI  INTEGER Input
+ On entry:
+ the first dimension of the array VI as declared in the
+ (sub)program from which F02AGF is called.
+ Constraint: IVI >= N.
 16: C(N)  LOGICAL array Workspace
+ 10: INTGER(N)  INTEGER array Output
+ On exit: INTGER(i) contains the number of iterations used
+ to find the ith eigenvalue. If INTGER(i) is negative, the
+ ith eigenvalue is the second of a pair found simultaneously.
 17: ICOUNT(M)  INTEGER array Output
 On exit: ICOUNT(i) contains the number of iterations for
 the ith eigenvalue.
+ Note that the eigenvalues are found in reverse order,
+ starting with the nth.
 18: IFAIL  INTEGER Input/Output
+ 11: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ 79195,40 +80092,29 @@ This package uses the NAG Library to compute
Errors detected by the routine:
IFAIL= 1
 M is less than the number of eigenvalues in the given
 interval. On exit MM contains the number of eigenvalues in
 the interval. Rerun with this value for M.

 IFAIL= 2
 More than 5 iterations are required to determine any one
 eigenvector.
+ More than 30*N iterations are required to isolate all the
+ eigenvalues.
7. Accuracy
 There is no guarantee of the accuracy of the eigenvectors as the
 results depend on the original matrix and the multiplicity of the
 roots. For a detailed error analysis see Wilkinson and Reinsch
 [1] pp 222 and 436.
+ The accuracy of the results depends on the original matrix and
+ the multiplicity of the roots. For a detailed error analysis see
+ Wilkinson and Reinsch [1] pp 352 and 390.
8. Further Comments
The time taken by the routine is approximately proportional to n^3.
 This subroutine should only be used when less than 25% of the
 eigenvalues and the corresponding eigenvectors are required. Also
 this subroutine is less efficient with matrices which have
 multiple eigenvalues.

9. Example
 To calculate the eigenvalues lying between 2.0 and 3.0, and the
 corresponding eigenvectors of the real symmetric matrix:
+ To calculate all the eigenvalues and eigenvectors of the real
+ matrix:
 ( 0.5 0.0 2.3 2.6)
 ( 0.0 0.5 1.4 0.7)
 ( 2.3 1.4 0.5 0.0).
 (2.6 0.7 0.0 0.5)
+ ( 1.5 0.1 4.5 1.5)
+ (22.5 3.5 12.5 2.5)
+ ( 2.5 0.3 4.5 2.5).
+ ( 2.5 0.1 4.5 2.5)
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ 79236,8 +80122,8 @@ This package uses the NAG Library to compute
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 F02  Eigenvalue and Eigenvectors F02BJF
 F02BJF  NAG Foundation Library Routine Document
+ F02  Eigenvalue and Eigenvectors F02AJF
+ F02AJF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ 79246,146 +80132,61 @@ This package uses the NAG Library to compute
1. Purpose
 F02BJF calculates all the eigenvalues and, if required, all the
 eigenvectors of the generalized eigenproblem Ax=(lambda)Bx
 where A and B are real, square matrices, using the QZ algorithm.
+ F02AJF calculates all the eigenvalues of a complex matrix.
2. Specification
 SUBROUTINE F02BJF (N, A, IA, B, IB, EPS1, ALFR, ALFI,
 1 BETA, MATV, V, IV, ITER, IFAIL)
 INTEGER N, IA, IB, IV, ITER(N), IFAIL
 DOUBLE PRECISION A(IA,N), B(IB,N), EPS1, ALFR(N), ALFI(N),
 1 BETA(N), V(IV,N)
 LOGICAL MATV
+ SUBROUTINE F02AJF (AR, IAR, AI, IAI, N, RR, RI, INTGER,
+ 1 IFAIL)
+ INTEGER IAR, IAI, N, INTGER(N), IFAIL
+ DOUBLE PRECISION AR(IAR,N), AI(IAI,N), RR(N), RI(N)
3. Description
 All the eigenvalues and, if required, all the eigenvectors of the
 generalized eigenproblem Ax=(lambda)Bx where A and B are real,
 square matrices, are determined using the QZ algorithm. The QZ
 algorithm consists of 4 stages:

 (a) A is reduced to upper Hessenberg form and at the same time
 B is reduced to upper triangular form.
+ The complex matrix A is first balanced and then reduced to upper
+ Hessenberg form using stabilised elementary similarity
+ transformations. The eigenvalues are then found using the
+ modified LR algorithm for complex Hessenberg matrices.
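For illustration only (numpy's LAPACK backend uses QR iteration where F02AJF uses the modified LR algorithm, but the balancing and Hessenberg reduction steps are the same), the computation on a small hypothetical complex matrix, with a trace check on the returned eigenvalues:

```python
import numpy as np

# Hypothetical 3x3 complex matrix (not the NAG example data).
A = np.array([[1.0 + 2.0j, 0.5, 0.0],
              [0.0, 2.0 - 1.0j, 1.0],
              [1.0j, 0.0, 3.0]])

# Complex eigenvalues, corresponding to RR + i*RI in the routine.
rr_ri = np.linalg.eigvals(A)

# Sanity check: the eigenvalues must sum to the trace of A.
trace_gap = abs(rr_ri.sum() - np.trace(A))
```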
 (b) A is further reduced to quasi-triangular form while the
 triangular form of B is maintained.
+ 4. References
 (c) The quasi-triangular form of A is reduced to triangular
 form and the eigenvalues extracted. This routine does not
 actually produce the eigenvalues (lambda)_j, but instead
 returns (alpha)_j and (beta)_j such that
+ [1] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
+ Computation II, Linear Algebra. SpringerVerlag.
 (lambda)_j = (alpha)_j/(beta)_j, j=1,2,...,n.
 The division by (beta)_j becomes the responsibility of the
 user's program, since (beta)_j may be zero, indicating an
 infinite eigenvalue. Pairs of complex eigenvalues occur
 with (alpha)_j/(beta)_j and (alpha)_(j+1)/(beta)_(j+1) complex
+ 5. Parameters
 conjugates, even though (alpha)_j and (alpha)_(j+1) are not
 conjugate.
+ 1: AR(IAR,N)  DOUBLE PRECISION array Input/Output
+ On entry: the real parts of the elements of the n by n
+ complex matrix A. On exit: the array is overwritten.
 (d) If the eigenvectors are required (MATV = .TRUE.), they are
 obtained from the triangular matrices and then transformed
 back into the original coordinate system.
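numpy has no QZ driver, so the following sketch illustrates the problem being solved rather than the QZ algorithm itself: when B is invertible, Ax=(lambda)Bx reduces to a standard eigenproblem (the alpha/beta representation above exists precisely to avoid this restriction). The matrices are hypothetical, not the example data:

```python
import numpy as np

# Hypothetical matrices for the generalized problem A x = lambda B x
# (not the NAG example data); B is taken nonsingular here.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# The QZ algorithm works on (A, B) directly and returns pairs
# (alpha_j, beta_j) with lambda_j = alpha_j/beta_j, so an infinite
# eigenvalue (beta_j = 0) stays representable.  This sketch instead
# uses the reduction to a standard problem, valid only when B is
# invertible:
lam, X = np.linalg.eig(np.linalg.solve(B, A))

# Verify A x = lambda B x for each computed pair.
residual = max(np.linalg.norm(A @ X[:, j] - lam[j] * (B @ X[:, j]))
               for j in range(X.shape[1]))
```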

 4. References

 [1] Moler C B and Stewart G W (1973) An Algorithm for
 Generalized Matrix Eigenproblems. SIAM J. Numer. Anal. 10
 241256.

 [2] Ward R C (1975) The Combination Shift QZ Algorithm. SIAM J.
 Numer. Anal. 12 835853.

 [3] Wilkinson J H (1979) Kronecker's Canonical Form and the QZ
 Algorithm. Linear Algebra and Appl. 28 285303.

 5. Parameters

 1: N  INTEGER Input
 On entry: n, the order of the matrices A and B.

 2: A(IA,N)  DOUBLE PRECISION array Input/Output
 On entry: the n by n matrix A. On exit: the array is
 overwritten.

 3: IA  INTEGER Input
+ 2: IAR  INTEGER Input
On entry:
 the first dimension of the array A as declared in the
 (sub)program from which F02BJF is called.
 Constraint: IA >= N.
+ the first dimension of the array AR as declared in the
+ (sub)program from which F02AJF is called.
+ Constraint: IAR >= N.
 4: B(IB,N)  DOUBLE PRECISION array Input/Output
 On entry: the n by n matrix B. On exit: the array is
 overwritten.
+ 3: AI(IAI,N)  DOUBLE PRECISION array Input/Output
+ On entry: the imaginary parts of the elements of the n by n
+ complex matrix A. On exit: the array is overwritten.
 5: IB  INTEGER Input
+ 4: IAI  INTEGER Input
On entry:
 the first dimension of the array B as declared in the
 (sub)program from which F02BJF is called.
 Constraint: IB >= N.

 6: EPS1  DOUBLE PRECISION Input
 On entry: the tolerance used to determine negligible
 elements. If EPS1 > 0.0, an element will be considered
 negligible if it is less than EPS1 times the norm of its
 matrix. If EPS1 <= 0.0, machine precision is used in place
 of EPS1. A positive value of EPS1 may result in faster
 execution but less accurate results.

 7: ALFR(N)  DOUBLE PRECISION array Output

 8: ALFI(N)  DOUBLE PRECISION array Output
 On exit: the real and imaginary parts of (alpha)_j, for
 j=1,2,...,n.

 9: BETA(N)  DOUBLE PRECISION array Output
 On exit: (beta)_j, for j=1,2,...,n.

 10: MATV  LOGICAL Input
 On entry: MATV must be set .TRUE. if the eigenvectors are
 required, otherwise .FALSE..

 11: V(IV,N)  DOUBLE PRECISION array Output
 On exit: if MATV = .TRUE., then
 (i) if the jth eigenvalue is real, the jth column of V
 contains its eigenvector;
+ the first dimension of the array AI as declared in the
+ (sub)program from which F02AJF is called.
+ Constraint: IAI >= N.
 (ii) if the jth and (j+1)th eigenvalues form a complex
 pair, the jth and (j+1)th columns of V contain the
 real and imaginary parts of the eigenvector associated
 with the first eigenvalue of the pair. The conjugate
 of this vector is the eigenvector for the conjugate
 eigenvalue.
 Each eigenvector is normalised so that the component of
 largest modulus is real and the sum of squares of the moduli
 equal one.
+ 5: N  INTEGER Input
+ On entry: n, the order of the matrix A.
 If MATV = .FALSE., V is not used.
+ 6: RR(N)  DOUBLE PRECISION array Output
+ On exit: the real parts of the eigenvalues.
 12: IV  INTEGER Input
 On entry:
 the first dimension of the array V as declared in the
 (sub)program from which F02BJF is called.
 Constraint: IV >= N.
+ 7: RI(N)  DOUBLE PRECISION array Output
+ On exit: the imaginary parts of the eigenvalues.
 13: ITER(N)  INTEGER array Output
 On exit: ITER(j) contains the number of iterations needed
 to obtain the jth eigenvalue. Note that the eigenvalues are
 obtained in reverse order, starting with the nth.
+ 8: INTGER(N)  INTEGER array Workspace
 14: IFAIL  INTEGER Input/Output
+ 9: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ 79397,59 +80198,29 @@ This package uses the NAG Library to compute
Errors detected by the routine:
 IFAIL = i
 More than 30*N iterations are required to determine all the
 diagonal 1 by 1 or 2 by 2 blocks of the quasi-triangular
 form in the second step of the QZ algorithm. IFAIL is set to
 the index i of the eigenvalue at which this failure occurs.
 If the soft failure option is used, (alpha)_j and (beta)_j are
 correct for j=i+1,i+2,...,n, but V does not contain any
 correct eigenvectors.
+ IFAIL= 1
+ More than 30*N iterations are required to isolate all the
+ eigenvalues.
7. Accuracy
 The computed eigenvalues are always exact for a problem
 (A+E)x=(lambda)(B+F)x where E/A and F/B
 are both of the order of max(EPS1,(epsilon)), EPS1 being defined
 as in Section 5 and (epsilon) being the machine precision.

 Note: interpretation of results obtained with the QZ algorithm
 often requires a clear understanding of the effects of small
 changes in the original data. These effects are reviewed in
 Wilkinson [3], in relation to the significance of small values of
 (alpha) and (beta) . It should be noted that if (alpha) and
 j j j
 (beta) are both small for any j, it may be that no reliance can
 j
 be placed on any of the computed eigenvalues
 (lambda) =(alpha) /(beta) . The user is recommended to study [3]
 i i i
 and, if in difficulty, to seek expert advice on determining the
 sensitivity of the eigenvalues to perturbations in the data.
+ The accuracy of the results depends on the original matrix and
+ the multiplicity of the roots. For a detailed error analysis see
+ Wilkinson and Reinsch [1] pp 352 and 401.
8. Further Comments
The time taken by the routine is approximately proportional to n^3
 and also depends on the value chosen for parameter EPS1.
9. Example
 To find all the eigenvalues and eigenvectors of Ax=(lambda) Bx
 where

 (3.9 12.5 34.5 0.5)
 (4.3 21.5 47.5 7.5)
 A=(4.3 21.5 43.5 3.5)
 (4.4 26.0 46.0 6.0)

 and
+ To calculate all the eigenvalues of the complex matrix:
 (1 2 3 1)
 (1 3 5 4)
 B=(1 3 4 3).
 (1 3 4 4)
+ (21.0-5.0i 24.60i 13.6+10.2i 4.0i)
+ ( 22.5i 26.00-5.00i 7.5-10.0i 2.5 )
+ ( 2.0+1.5i 1.68+2.24i 4.5-5.0i 1.5+2.0i).
+ ( 2.5i 2.60 2.7+3.6i 2.5-5.0i)
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ 79457,8 +80228,8 @@ This package uses the NAG Library to compute
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 F02  Eigenvalue and Eigenvectors F02FJF
 F02FJF  NAG Foundation Library Routine Document
+ F02  Eigenvalue and Eigenvectors F02AKF
+ F02AKF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ 79467,653 +80238,393 @@ This package uses the NAG Library to compute
1. Purpose
 To find eigenvalues and eigenvectors of a real sparse symmetric
 or generalized symmetric eigenvalue problem.
+ F02AKF calculates all the eigenvalues and eigenvectors of a
+ complex matrix.
2. Specification
 SUBROUTINE F02FJF (N, M, K, NOITS, TOL, DOT, IMAGE, MONIT,
 1 NOVECS, X, NRX, D, WORK, LWORK, RWORK,
 2 LRWORK, IWORK, LIWORK, IFAIL)
 INTEGER N, M, K, NOITS, NOVECS, NRX, LWORK,
 1 LRWORK, IWORK(LIWORK), LIWORK, IFAIL
 DOUBLE PRECISION TOL, DOT, X(NRX,K), D(K), WORK(LWORK),
 1 RWORK(LRWORK)
 EXTERNAL DOT, IMAGE, MONIT
+ SUBROUTINE F02AKF (AR, IAR, AI, IAI, N, RR, RI, VR, IVR,
+ 1 VI, IVI, INTGER, IFAIL)
+ INTEGER IAR, IAI, N, IVR, IVI, INTGER(N), IFAIL
+ DOUBLE PRECISION AR(IAR,N), AI(IAI,N), RR(N), RI(N), VR
+ 1 (IVR,N), VI(IVI,N)
3. Description
 F02FJF finds the m eigenvalues of largest absolute value and the
 corresponding eigenvectors for the real eigenvalue problem
+ The complex matrix A is first balanced and then reduced to upper
+ Hessenberg form by stabilised elementary similarity
+ transformations. The eigenvalues and eigenvectors of the
+ Hessenberg matrix are calculated using the LR algorithm. The
+ eigenvectors of the Hessenberg matrix are backtransformed to
+ give the eigenvectors of the original matrix.
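As with the real case, the balance / Hessenberg / iterate / back-transform outline matches what a dense solver does internally; a hedged numpy sketch on a hypothetical complex matrix (LAPACK iterates with QR where this routine uses LR), including the normalisation convention the parameter description gives for VR/VI:

```python
import numpy as np

# Hypothetical 2x2 complex matrix (not the NAG example data).
A = np.array([[1.0 + 1.0j, 2.0],
              [0.5j, 3.0 - 1.0j]])

w, V = np.linalg.eig(A)

# Apply the normalisation the routine describes: each column has
# unit sum of squared moduli and its largest-modulus element real.
for j in range(V.shape[1]):
    k = np.argmax(np.abs(V[:, j]))
    V[:, j] *= np.conj(V[k, j]) / np.abs(V[k, j])
    V[:, j] /= np.linalg.norm(V[:, j])

residual = np.linalg.norm(A @ V - V * w)
```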
 Cx=(lambda)x (1)
+ 4. References
 where C is an n by n matrix such that
+ [1] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
+ Computation II, Linear Algebra. SpringerVerlag.
 BC=C^T B (2)
+ 5. Parameters
 for a given positive-definite matrix B. C is said to be B-
 symmetric. Different specifications of C allow for the solution
 of a variety of eigenvalue problems. For example, when
+ 1: AR(IAR,N)  DOUBLE PRECISION array Input/Output
+ On entry: the real parts of the elements of the n by n
+ complex matrix A. On exit: the array is overwritten.
 C=A and B=I where A=A^T
+ 2: IAR  INTEGER Input
+ On entry:
+ the first dimension of the array AR as declared in the
+ (sub)program from which F02AKF is called.
+ Constraint: IAR >= N.
 the routine finds the m eigenvalues of largest absolute magnitude
 for the standard symmetric eigenvalue problem
+ 3: AI(IAI,N)  DOUBLE PRECISION array Input/Output
+ On entry: the imaginary parts of the elements of the n by n
+ complex matrix A. On exit: the array is overwritten.
 Ax=(lambda)x. (3)
+ 4: IAI  INTEGER Input
+ On entry:
+ the first dimension of the array AI as declared in the
+ (sub)program from which F02AKF is called.
+ Constraint: IAI >= N.
 The routine is intended for the case where A is sparse.
+ 5: N  INTEGER Input
+ On entry: n, the order of the matrix A.
 As a second example, when
+ 6: RR(N)  DOUBLE PRECISION array Output
+ On exit: the real parts of the eigenvalues.
 C=B^(-1)A
+ 7: RI(N)  DOUBLE PRECISION array Output
+ On exit: the imaginary parts of the eigenvalues.
 where
+ 8: VR(IVR,N)  DOUBLE PRECISION array Output
+ On exit: the real parts of the eigenvectors, stored by
+ columns. The ith column corresponds to the ith eigenvalue.
+ The eigenvectors are normalised so that the sum of squares
+ of the moduli of the elements is equal to 1 and the element
+ of largest modulus is real.
 A=A^T
+ 9: IVR  INTEGER Input
+ On entry:
+ the first dimension of the array VR as declared in the
+ (sub)program from which F02AKF is called.
+ Constraint: IVR >= N.
 the routine finds the m eigenvalues of largest absolute magnitude
 for the generalized symmetric eigenvalue problem
+ 10: VI(IVI,N)  DOUBLE PRECISION array Output
+ On exit: the imaginary parts of the eigenvectors, stored by
+ columns. The ith column corresponds to the ith eigenvalue.
 Ax=(lambda)Bx. (4)
+ 11: IVI  INTEGER Input
+ On entry:
+ the first dimension of the array VI as declared in the
+ (sub)program from which F02AKF is called.
+ Constraint: IVI >= N.
 The routine is intended for the case where A and B are sparse.
+ 12: INTGER(N)  INTEGER array Workspace
 The routine does not require C explicitly, but C is specified via
 a user-supplied routine IMAGE which, given an n element vector z,
 computes the image w given by
+ 13: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, -1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 w=Cz.
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
 For instance, in the above example, where C=B^(-1)A, routine IMAGE
 will need to solve the positive-definite system of equations
 Bw=Az for w.
+ 6. Error Indicators and Warnings
 To find the m eigenvalues of smallest absolute magnitude of (3)
 we can choose C=A^(-1) and hence find the reciprocals of the
 required eigenvalues, so that IMAGE will need to solve Aw=z for
 w, and correspondingly for (4) we can choose C=A^(-1)B and solve
 Aw=Bz for w.
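The C=A^(-1) case just described amounts to power iteration in which every image w=Cz is obtained by solving Aw=z, which is exactly the work an IMAGE routine would do. A minimal sketch, assuming a small hypothetical symmetric matrix (not data from this document):

```python
import numpy as np

# Hypothetical symmetric positive-definite matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

rng = np.random.default_rng(0)
z = rng.standard_normal(3)
for _ in range(200):              # power iteration on C = A^(-1)
    w = np.linalg.solve(A, z)     # the solve the IMAGE routine performs
    z = w / np.linalg.norm(w)

# The iterate converges to the eigenvector of the smallest-magnitude
# eigenvalue of A; the Rayleigh quotient recovers that eigenvalue.
lam_small = z @ A @ z
smallest_true = min(abs(np.linalg.eigvalsh(A)))
```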
+ Errors detected by the routine:
 A table of examples of choice of IMAGE is given in Table 3.1. It
 should be remembered that the routine also returns the
 corresponding eigenvectors and that B is positive-definite.
 Throughout A is assumed to be symmetric and, where necessary,
 nonsingularity is also assumed.
+ IFAIL= 1
+ More than 30*N iterations are required to isolate all the
+ eigenvalues.
 Eigenvalues Problem
 Required
+ 7. Accuracy
 Ax=(lambda)x (B=I)Ax=(lambda)Bx ABx=(lambda)x
+ The accuracy of the results depends on the conditioning of the
+ original matrix and the multiplicity of the roots. For a detailed
+ error analysis see Wilkinson and Reinsch [1] pp 352 and 390.
 Largest Compute Solve Compute
 w=Az Bw=Az w=ABz
+ 8. Further Comments
 Smallest Solve Solve Solve
 (Find Aw=z Aw=Bz Av=z, Bw=(nu)
 1/(lambda))
+ The time taken by the routine is approximately proportional to n^3.
 Furthest Compute Solve Compute
 from w=(A-(sigma)I)z Bw=(A-(sigma)B)z w=(AB-(sigma)I)z
 (sigma)
 (Find (lambda)-(sigma))
+ 9. Example
 Closest to Solve Solve Solve
 (sigma) (A-(sigma)I)w=z (A-(sigma)B)w=Bz (AB-(sigma)I)w=z
 (Find 1/((lambda)-(sigma)))
+ To calculate all the eigenvalues and eigenvectors of the complex
+ matrix:
+ (21.0-5.0i 24.60i 13.6+10.2i 4.0i)
+ ( 22.5i 26.00-5.00i 7.5-10.0i 2.5 )
+ ( 2.0+1.5i 1.68+2.24i 4.5-5.0i 1.5+2.0i).
+ ( 2.5i 2.60 2.7+3.6i 2.5-5.0i)
 Table 3.1
 The Requirement of IMAGE for Various Problems
+ The example program is not reproduced here. The source code for
+ all example programs is distributed with the NAG Foundation
+ Library software and should be available online.
 The matrix B also need not be supplied explicitly, but is
 specified via a user-supplied routine DOT which, given n element
 vectors z and w, computes the generalized dot product w^T Bz.
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 F02FJF is based upon routine SIMITZ (see Nikolai [1]), which is
 itself a derivative of the Algol procedure ritzit (see
 Rutishauser [4]), and uses the method of simultaneous (subspace)
 iteration. (See Parlett [2] for description, analysis and advice
 on the use of the method.)
+ F02  Eigenvalue and Eigenvectors F02AWF
+ F02AWF  NAG Foundation Library Routine Document
 The routine performs simultaneous iteration on k>m vectors.
 Initial estimates to p<=k eigenvectors, corresponding to the p
 eigenvalues of C of largest absolute value, may be supplied by
 the user to F02FJF. When possible k should be chosen so that the
 kth eigenvalue is not too close to the m required eigenvalues,
 but if k is initially chosen too small then F02FJF may be re-
 entered, supplying approximations to the k eigenvectors found so
 far and with k then increased.
+ Note: Before using this routine, please read the Users' Note for
+ your implementation to check implementation-dependent details.
+ The symbol (*) after a NAG routine name denotes a routine that is
+ not included in the Foundation Library.
 At each major iteration F02FJF solves an r by r (r<=k) eigenvalue
 subproblem in order to obtain an approximation to the
 eigenvalues for which convergence has not yet occurred. This
 approximation is refined by Chebyshev acceleration.
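The iteration the last two paragraphs describe can be sketched as plain subspace iteration with an r by r Rayleigh-Ritz eigenproblem at each step; Chebyshev acceleration and the generalized B-inner product are omitted, and the matrix is hypothetical, so this is a sketch of the method, not the SIMITZ/ritzit code:

```python
import numpy as np

# Hypothetical symmetric matrix C; we want the m largest-magnitude
# eigenvalues, iterating on k > m vectors as the text recommends.
C = np.array([[5.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
m, k = 2, 3

rng = np.random.default_rng(1)
X = rng.standard_normal((4, k))      # random starting vectors
for _ in range(500):
    X = C @ X                        # one image step, w = C z
    X, _ = np.linalg.qr(X)           # re-orthonormalise the block
    # r x r (r = k) Rayleigh-Ritz subproblem for the current basis:
    evals, S = np.linalg.eigh(X.T @ C @ X)
    X = X @ S                        # rotate basis toward eigenvectors

approx = sorted(abs(evals), reverse=True)[:m]
true = sorted(abs(np.linalg.eigvalsh(C)), reverse=True)[:m]
```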
+ 1. Purpose
 4. References
+ F02AWF calculates all the eigenvalues of a complex Hermitian
+ matrix.
 [1] Nikolai P J (1979) Algorithm 538: Eigenvectors and
 eigenvalues of real generalized symmetric matrices by
 simultaneous iteration. ACM Trans. Math. Softw. 5 118125.
+ 2. Specification
 [2] Parlett B N (1980) The Symmetric Eigenvalue Problem.
 PrenticeHall.
+ SUBROUTINE F02AWF (AR, IAR, AI, IAI, N, R, WK1, WK2, WK3,
+ 1 IFAIL)
+ INTEGER IAR, IAI, N, IFAIL
+ DOUBLE PRECISION AR(IAR,N), AI(IAI,N), R(N), WK1(N),
+ 1 WK2(N), WK3(N)
 [3] Rutishauser H (1969) Computational aspects of F L Bauer's
 simultaneous iteration method. Num. Math. 13 413.
+ 3. Description
 [4] Rutishauser H (1970) Simultaneous iteration method for
 symmetric matrices. Num. Math. 16 205223.
+ The complex Hermitian matrix A is first reduced to a real
+ tridiagonal matrix by n-2 unitary transformations, and a
+ subsequent diagonal transformation. The eigenvalues are then
+ derived using the QL algorithm, an adaptation of the QR
+ algorithm.
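As an illustrative counterpart (not the routine itself), numpy's Hermitian eigenvalue driver follows the same outline and likewise reads only one triangle of the matrix; the matrix below is hypothetical:

```python
import numpy as np

# Hypothetical Hermitian matrix stored by its lower triangle only,
# matching the AR/AI convention; the upper entries are never read.
A = np.array([[2.0, 0.0, 0.0],
              [1.0 - 1.0j, 3.0, 0.0],
              [0.5j, 1.0, 4.0]])

# Real eigenvalues in ascending order, as the R parameter describes.
r = np.linalg.eigvalsh(A, UPLO='L')

# A Hermitian matrix has a real trace equal to the eigenvalue sum.
gap = abs(r.sum() - 9.0)
```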
 5. Parameters
+ 4. References
 1: N  INTEGER Input
 On entry: n, the order of the matrix C. Constraint: N >= 1.
+ [1] Peters G (1967) NPL Algorithms Library. Document No.
+ F1/04/A.
 2: M  INTEGER Input/Output
 On entry: m, the number of eigenvalues required.
 Constraint: M >= 1. On exit: m', the number of eigenvalues
 actually found. It is equal to m if IFAIL = 0 on exit, and
 is less than m if IFAIL = 2, 3 or 4. See Section 6 and
 Section 8 for further information.
+ [2] Wilkinson J H and Reinsch C (1971) Handbook for Automatic
+ Computation II, Linear Algebra. SpringerVerlag.
 3: K  INTEGER Input
 On entry: the number of simultaneous iteration vectors to be
 used. Too small a value of K may inhibit convergence, while
 a larger value of K incurs additional storage and additional
 work per iteration. Suggested value: K = M + 4 will often be
 a reasonable choice in the absence of better information.
 Constraint: M < K <= N.
+ 5. Parameters
 4: NOITS  INTEGER Input/Output
 On entry: the maximum number of major iterations (eigenvalue
 subproblems) to be performed. If NOITS <= 0, then the value
 100 is used in place of NOITS. On exit: the number of
 iterations actually performed.
+ 1: AR(IAR,N)  DOUBLE PRECISION array Input/Output
+ On entry: the real parts of the elements of the lower
+ triangle of the n by n complex Hermitian matrix A. Elements
+ of the array above the diagonal need not be set. On exit:
+ the array is overwritten.
 5: TOL  DOUBLE PRECISION Input
 On entry: a relative tolerance to be used in accepting
 eigenvalues and eigenvectors. If the eigenvalues are
 required to about t significant figures, then TOL should be
 set to about 10^(-t). d_i is accepted as an eigenvalue as soon
 as two successive approximations to d_i differ by less than
 (d~_i*TOL)/10, where d~_i is the latest approximation to d_i.
 Once an eigenvalue has been accepted, then an eigenvector is
 accepted as soon as (d f )/(d d )= N.
 6: DOT  DOUBLE PRECISION FUNCTION, supplied by the user.
 External Procedure
 DOT must return the value w^T Bz for given vectors w and z.
 For the standard eigenvalue problem, where B=I, DOT must
 return the dot product w^T z.
+ 3: AI(IAI,N)  DOUBLE PRECISION array Input/Output
+ On entry: the imaginary parts of the elements of the lower
+ triangle of the n by n complex Hermitian matrix A. Elements
+ of the array above the diagonal need not be set. On exit:
+ the array is overwritten.
 Its specification is:
+ 4: IAI  INTEGER Input
+ On entry:
+ the first dimension of the array AI as declared in the
+ (sub)program from which F02AWF is called.
+ Constraint: IAI >= N.
 DOUBLE PRECISION FUNCTION DOT (IFLAG, N, Z, W,
 1 RWORK, LRWORK,
 2 IWORK, LIWORK)
 INTEGER IFLAG, N, LRWORK, IWORK(LIWORK),
 1 LIWORK
 DOUBLE PRECISION Z(N), W(N), RWORK(LRWORK)
+ 5: N  INTEGER Input
+ On entry: n, the order of the complex Hermitian matrix, A.
 1: IFLAG  INTEGER Input/Output
 On entry: IFLAG is always non-negative. On exit: IFLAG
 may be used as a flag to indicate a failure in the
 computation of w^T Bz. If IFLAG is negative on exit from
 DOT, then F02FJF will exit immediately with IFAIL set
 to IFLAG. Note that in this case DOT must still be
 assigned a value.
+ 6: R(N)  DOUBLE PRECISION array Output
+ On exit: the eigenvalues in ascending order.
 2: N  INTEGER Input
 On entry: the number of elements in the vectors z and w
 and the order of the matrix B.
+ 7: WK1(N)  DOUBLE PRECISION array Workspace
 3: Z(N)  DOUBLE PRECISION array Input
 On entry: the vector z for which w^T Bz is required.
+ 8: WK2(N)  DOUBLE PRECISION array Workspace
 4: W(N)  DOUBLE PRECISION array Input
 On entry: the vector w for which w^T Bz is required.
+ 9: WK3(N)  DOUBLE PRECISION array Workspace
 5: RWORK(LRWORK)  DOUBLE PRECISION array User Workspace
+ 10: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, -1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
 6: LRWORK  INTEGER Input
+ 6. Error Indicators and Warnings
+ Errors detected by the routine:
 7: IWORK(LIWORK)  INTEGER array User Workspace
+ IFAIL= 1
+ More than 30*N iterations are required to isolate all the
+ eigenvalues.
+ 7. Accuracy
 8: LIWORK  INTEGER Input
 DOT is called from F02FJF with the parameters RWORK,
 LRWORK, IWORK and LIWORK as supplied to F02FJF. The
 user is free to use the arrays RWORK and IWORK to
 supply information to DOT and to IMAGE as an
 alternative to using COMMON.
 DOT must be declared as EXTERNAL in the (sub)program
 from which F02FJF is called. Parameters denoted as
 Input must not be changed by this procedure.
+ For a detailed error analysis see Peters [1] page 3 and Wilkinson
+ and Reinsch [2] page 235.
 7: IMAGE  SUBROUTINE, supplied by the user.
 External Procedure
 IMAGE must return the vector w=Cz for a given vector z.

 Its specification is:
+ 8. Further Comments
 SUBROUTINE IMAGE (IFLAG, N, Z, W, RWORK, LRWORK,
 1 IWORK, LIWORK)
 INTEGER IFLAG, N, LRWORK, IWORK(LIWORK),
 1 LIWORK
 DOUBLE PRECISION Z(N), W(N), RWORK(LRWORK)
+ The time taken by the routine is approximately proportional to n^3.
 1: IFLAG  INTEGER Input/Output
 On entry: IFLAG is always non-negative. On exit: IFLAG
 may be used as a flag to indicate a failure in the
 computation of w. If IFLAG is negative on exit from
 IMAGE, then F02FJF will exit immediately with IFAIL set
 to IFLAG.
+ 9. Example
 2: N  INTEGER Input
 On entry: n, the number of elements in the vectors w
 and z, and the order of the matrix C.
+ To calculate all the eigenvalues of the complex Hermitian matrix:
 3: Z(N)  DOUBLE PRECISION array Input
 On entry: the vector z for which Cz is required.
+ (0.50 0.00 1.84+1.38i 2.08-1.56i)
+ (0.00 0.50 1.12+0.84i 0.56+0.42i)
+ (1.84-1.38i 1.12-0.84i 0.50 0.00 ).
+ (2.08+1.56i 0.56-0.42i 0.00 0.50 )
 4: W(N)  DOUBLE PRECISION array Output
 On exit: the vector w=Cz.
+ The example program is not reproduced here. The source code for
+ all example programs is distributed with the NAG Foundation
+ Library software and should be available online.
 5: RWORK(LRWORK)  DOUBLE PRECISION array User Workspace
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 6: LRWORK  INTEGER Input
+ F02  Eigenvalue and Eigenvectors F02AXF
+ F02AXF  NAG Foundation Library Routine Document
 7: IWORK(LIWORK)  INTEGER array User Workspace
+ Note: Before using this routine, please read the Users' Note for
+ your implementation to check implementation-dependent details.
+ The symbol (*) after a NAG routine name denotes a routine that is
+ not included in the Foundation Library.
 8: LIWORK  INTEGER Input
 IMAGE is called from F02FJF with the parameters RWORK,
 LRWORK, IWORK and LIWORK as supplied to F02FJF. The
 user is free to use the arrays RWORK and IWORK to
 supply information to IMAGE and DOT as an alternative
 to using COMMON.
 IMAGE must be declared as EXTERNAL in the (sub)program
 from which F02FJF is called. Parameters denoted as
 Input must not be changed by this procedure.
+ 1. Purpose
 8: MONIT  SUBROUTINE, supplied by the user.
 External Procedure
 MONIT is used to monitor the progress of F02FJF. MONIT may
 be the dummy subroutine F02FJZ if no monitoring is actually
 required. (F02FJZ is included in the NAG Foundation Library
 and so need not be supplied by the user. The routine name
 F02FJZ may be implementation dependent: see the Users' Note
 for your implementation for details.) MONIT is called after
 the solution of each eigenvalue subproblem and also just
 prior to return from F02FJF. The parameters ISTATE and
 NEXTIT allow selective printing by MONIT.
+ F02AXF calculates all the eigenvalues and eigenvectors of a
+ complex Hermitian matrix.
 Its specification is:
+ 2. Specification
 SUBROUTINE MONIT (ISTATE, NEXTIT, NEVALS,
 1 NEVECS, K, F, D)
 INTEGER ISTATE, NEXTIT, NEVALS, NEVECS,
 1 K
 DOUBLE PRECISION F(K), D(K)
+ SUBROUTINE F02AXF (AR, IAR, AI, IAI, N, R, VR, IVR, VI,
+ 1 IVI, WK1, WK2, WK3, IFAIL)
+ INTEGER IAR, IAI, N, IVR, IVI, IFAIL
+ DOUBLE PRECISION AR(IAR,N), AI(IAI,N), R(N), VR(IVR,N), VI
+ 1 (IVI,N), WK1(N), WK2(N), WK3(N)
 1: ISTATE  INTEGER Input
 On entry: ISTATE specifies the state of F02FJF and will
 have values as follows:
 ISTATE = 0
 No eigenvalue or eigenvector has just been
 accepted.
+ 3. Description
 ISTATE = 1
 One or more eigenvalues have been accepted since
 the last call to MONIT.
+ The complex Hermitian matrix is first reduced to a real
+ tridiagonal matrix by n-2 unitary transformations and a
+ subsequent diagonal transformation. The eigenvalues and
+ eigenvectors are then derived using the QL algorithm, an
+ adaptation of the QR algorithm.
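A hedged numpy sketch of the same computation on a hypothetical AR/AI pair (not the NAG example data): only the lower triangle is supplied, the full Hermitian matrix is rebuilt from it, and the eigenvector matrix (VR + i*VI in the routine's terms) comes back with orthonormal columns:

```python
import numpy as np

# Hypothetical real and imaginary parts, lower triangle only.
AR = np.array([[0.5, 0.0],
               [1.2, 0.5]])
AI = np.array([[0.0, 0.0],
               [0.8, 0.0]])
M = AR + 1j * AI
A = np.tril(M) + np.tril(M, -1).conj().T   # full Hermitian matrix

# np.linalg.eigh mirrors the routine's outline (reduction to real
# tridiagonal form, then QL-type iteration): r is real and
# ascending, V holds orthonormal eigenvectors by columns.
r, V = np.linalg.eigh(A)

residual = np.linalg.norm(A @ V - V * r)
ortho = np.linalg.norm(V.conj().T @ V - np.eye(2))
```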
 ISTATE = 2
 One or more eigenvectors have been accepted since
 the last call to MONIT.
+ 4. References
 ISTATE = 3
 One or more eigenvalues and eigenvectors have
 been accepted since the last call to MONIT.
+ [1] Peters G (1967) NPL Algorithms Library. Document No.
+ F2/03/A.
 ISTATE = 4
 Return from F02FJF is about to occur.
+ [2] Peters G (1967) NPL Algorithms Library. Document No.
+ F1/04/A.
 2: NEXTIT  INTEGER Input
 On entry: the number of the next iteration.
+ 5. Parameters
 3: NEVALS  INTEGER Input
 On entry: the number of eigenvalues accepted so far.
+ 1: AR(IAR,N)  DOUBLE PRECISION array Input
+ On entry: the real parts of the elements of the lower
+ triangle of the n by n complex Hermitian matrix A. Elements
+ of the array above the diagonal need not be set. See also
+ Section 8.
 4: NEVECS  INTEGER Input
 On entry: the number of eigenvectors accepted so far.
+ 2: IAR  INTEGER Input
+ On entry:
+ the first dimension of the array AR as declared in the
+ (sub)program from which F02AXF is called.
+ Constraint: IAR >= N.
 5: K  INTEGER Input
 On entry: k, the number of simultaneous iteration
 vectors.
+ 3: AI(IAI,N)  DOUBLE PRECISION array Input
+ On entry: the imaginary parts of the elements of the lower
+ triangle of the n by n complex Hermitian matrix A. Elements
+ of the array above the diagonal need not be set. See also
+ Section 8.
 6: F(K)  DOUBLE PRECISION array Input
 On entry: a vector of error quantities measuring the
 state of convergence of the simultaneous iteration
 vectors. See the parameter TOL of F02FJF above and
 Section 8 for further details. Each element of F is
 initially set to the value 4.0 and an element remains
 at 4.0 until the corresponding vector is tested.
+ 4: IAI  INTEGER Input
+ On entry:
+ the first dimension of the array AI as declared in the
+ (sub)program from which F02AXF is called.
+ Constraint: IAI >= N.
 7: D(K)  DOUBLE PRECISION array Input
 On entry: D(i) contains the latest approximation to the
 absolute value of the ith eigenvalue of C.
 MONIT must be declared as EXTERNAL in the (sub)program
 from which F02FJF is called. Parameters denoted as
 Input must not be changed by this procedure.
+ 5: N  INTEGER Input
+ On entry: n, the order of the matrix A.
 9: NOVECS  INTEGER Input
 On entry: the number of approximate vectors that are being
 supplied in X. If NOVECS is outside the range (0,K), then
 the value 0 is used in place of NOVECS.
+ 6: R(N)  DOUBLE PRECISION array Output
+ On exit: the eigenvalues in ascending order.
 10: X(NRX,K)  DOUBLE PRECISION array Input/Output
 On entry: if 0 < NOVECS <= K, the first NOVECS columns of X
 must contain approximations to the eigenvectors
 corresponding to the NOVECS eigenvalues of largest absolute
 value of C. Supplying approximate eigenvectors can be useful
 when reasonable approximations are known, or when the
 routine is being restarted with a larger value of K.
 Otherwise it is not necessary to supply approximate vectors,
 as simultaneous iteration vectors will be generated randomly
 by the routine. On exit: if IFAIL = 0, 2, 3 or 4, the first
 m' columns contain the eigenvectors corresponding to the
 eigenvalues returned in the first m' elements of D (see
 below); and the next k-m'-1 columns contain approximations
 to the eigenvectors corresponding to the approximate
 eigenvalues returned in the next k-m'-1 elements of D. Here
 m' is the value returned in M (see above), the number of
 eigenvalues actually found. The kth column is used as
 workspace.
+ 7: VR(IVR,N)  DOUBLE PRECISION array Output
+ On exit: the real parts of the eigenvectors, stored by
+ columns. The ith column corresponds to the ith eigenvector.
+ The eigenvectors are normalised so that the sum of the
+ squares of the moduli of the elements is equal to 1 and the
+ element of largest modulus is real. See also Section 8.
 11: NRX  INTEGER Input
+ 8: IVR  INTEGER Input
On entry:
 the first dimension of the array X as declared in the
 (sub)program from which F02FJF is called.
 Constraint: NRX >= N.

 12: D(K)  DOUBLE PRECISION array Output
 On exit: if IFAIL = 0, 2, 3 or 4, the first m' elements
 contain the first m' eigenvalues in decreasing order of
 magnitude; and the next k-m'-1 elements contain
 approximations to the next k-m'-1 eigenvalues. Here m' is
 the value returned in M (see above), the number of
 eigenvalues actually found. D(k) contains the value e where
 (-e,e) is the latest interval over which Chebyshev
 acceleration is performed.

 13: WORK(LWORK)  DOUBLE PRECISION array Workspace

 14: LWORK  INTEGER Input
 On entry: the length of the array WORK, as declared in the
 (sub)program from which F02FJF is called. Constraint:
 LWORK>=3*K+max(K*K,2*N).
+ the first dimension of the array VR as declared in the
+ (sub)program from which F02AXF is called.
+ Constraint: IVR >= N.
 15: RWORK(LRWORK)  DOUBLE PRECISION array User Workspace
 RWORK is not used by F02FJF, but is passed directly to
 routines DOT and IMAGE and may be used to supply information
 to these routines.
+ 9: VI(IVI,N)  DOUBLE PRECISION array Output
+ On exit: the imaginary parts of the eigenvectors, stored by
+ columns. The ith column corresponds to the ith eigenvector.
+ See also Section 8.
 16: LRWORK  INTEGER Input
 On entry: the length of the array RWORK, as declared in the
 (sub)program from which F02FJF is called. Constraint: LRWORK
 >= 1.
+ 10: IVI  INTEGER Input
+ On entry:
+ the first dimension of the array VI as declared in the
+ (sub)program from which F02AXF is called.
+ Constraint: IVI >= N.
 17: IWORK(LIWORK)  INTEGER array User Workspace
 IWORK is not used by F02FJF, but is passed directly to
 routines DOT and IMAGE and may be used to supply information
 to these routines.
+ 11: WK1(N)  DOUBLE PRECISION array Workspace
 18: LIWORK  INTEGER Input
 On entry: the length of the array IWORK, as declared in the
 (sub)program from which F02FJF is called. Constraint: LIWORK
 >= 1.
+ 12: WK2(N)  DOUBLE PRECISION array Workspace
 19: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, -1 or 1. Users who are
 unfamiliar with this parameter should refer to the Essential
 Introduction for details.
+ 13: WK3(N)  DOUBLE PRECISION array Workspace
 On exit: IFAIL = 0 unless the routine detects an error or
 gives a warning (see Section 6).
+ 14: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, -1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 For this routine, because the values of output parameters
 may be useful even if IFAIL /= 0 on exit, users are
 recommended to set IFAIL to -1 before entry. It is then
 essential to test the value of IFAIL on exit. To suppress
 the output of an error message when soft failure occurs, set
 IFAIL to -1.
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
6. Error Indicators and Warnings
 Errors or warnings specified by the routine:
+ Errors detected by the routine:

 IFAIL< 0
 A negative value of IFAIL indicates an exit from F02FJF
 because the user has set IFLAG negative in DOT or IMAGE. The
 value of IFAIL will be the same as the user's setting of
 IFLAG.

 IFAIL= 1
 On entry N < 1,

 or M < 1,

 or M >= K,

 or K > N,

 or NRX < N,

 or LWORK < 3*K+max(K*K,2*N),

 or LRWORK < 1,

 or LIWORK < 1.
+ IFAIL= 1
+ More than 30*N iterations are required to isolate all the
+ eigenvalues.
IFAIL= 2
 Not all the requested eigenvalues and vectors have been
 obtained. Approximations to the rth eigenvalue are
 oscillating rapidly indicating that severe cancellation is
 occurring in the rth eigenvector and so M is returned as
 (r-1). A restart with a larger value of K may permit
 convergence.

 IFAIL= 3
 Not all the requested eigenvalues and vectors have been
 obtained. The rate of convergence of the remaining
 eigenvectors suggests that more than NOITS iterations would
 be required and so the input value of M has been reduced. A
 restart with a larger value of K may permit convergence.

 IFAIL= 4
 Not all the requested eigenvalues and vectors have been
 obtained. NOITS iterations have been performed. A restart,
 possibly with a larger value of K, may permit convergence.

 IFAIL= 5
 This error is very unlikely to occur, but indicates that
 convergence of the eigenvalue subproblem has not taken
 place. Restarting with a different set of approximate
 vectors may allow convergence. If this error occurs the user
 should check carefully that F02FJF is being called
 correctly.
+ The diagonal elements of AI are not all zero, i.e., the
+ complex matrix is not Hermitian.
7. Accuracy
 Eigenvalues and eigenvectors will normally be computed to the
 accuracy requested by the parameter TOL, but eigenvectors
 corresponding to small or to close eigenvalues may not always be
 computed to the accuracy requested by the parameter TOL. Use of
 the routine MONIT to monitor acceptance of eigenvalues and
 eigenvectors is recommended.
+ The eigenvectors are always accurately orthogonal but the
+ accuracy of the individual eigenvalues and eigenvectors is
+ dependent on their inherent sensitivity to small changes in the
+ original matrix. For a detailed error analysis see Peters [1]
+ page 3 and [2] page 3.
8. Further Comments
 The time taken by the routine will be principally determined by
 the time taken to solve the eigenvalue subproblem and the time
 taken by the routines DOT and IMAGE. The time taken to solve an
 eigenvalue subproblem is approximately proportional to n*k**2. It
 is important to be aware that several calls to DOT and IMAGE may
 occur on each major iteration.

 As can be seen from Table 3.1, many applications of F02FJF will
 require routine IMAGE to solve a system of linear equations. For
 example, to find the smallest eigenvalues of Ax=(lambda)Bx, IMAGE
 needs to solve equations of the form Aw=Bz for w and routines
 from Chapters F01 and F04 of the NAG Foundation Library will
 frequently be useful in this context. In particular, if A is a
 positive-definite variable band matrix, F04MCF may be used after
 A has been factorized by F01MCF. Thus factorization need be
 performed only once prior to calling F02FJF. An illustration of
 this type of use is given in the example program in Section 9.

 An approximation d~_h, to the ith eigenvalue, is accepted as
 soon as d~_h and the previous approximation differ by less than
 d~_h*TOL/10. Eigenvectors are accepted in groups corresponding to
 clusters of eigenvalues that are equal, or nearly equal, in
 absolute value and that have already been accepted. If d_r is the
 last eigenvalue in such a group and we define the residual r_j as

    r_j = C*x_j - y_r,

 where y_r is the projection of C*x_j, with respect to B, onto the
 space spanned by x_1,x_2,...,x_r and x_j is the current
 approximation to the jth eigenvector, then the value f_i returned
 in MONIT is given by

    f_i = max_j ||r_j||_B**2 / ||C*x_j||_B,   ||x||_B = x^T*B*x,

 and each vector in the group is accepted as an eigenvector if

 (d f )/(d e)n,
+ 5. Parameters
 D=S, m=n,
+ 1: A(IA,N)  DOUBLE PRECISION array Input/Output
+ On entry: the lower triangle of the n by n symmetric matrix
+ A. The elements of the array above the diagonal need not be
+ set. On exit: the elements of A below the diagonal are
+ overwritten, and the rest of the array is unchanged.
 D=(S 0), m<n.
 Q is an m by m orthogonal matrix, P is an n by n orthogonal
 matrix and S is a min(m,n) by min(m,n) diagonal matrix with
 non-negative diagonal elements, sv_1, sv_2, ..., sv_min(m,n),
 ordered such that
+ 3: N  INTEGER Input
+ On entry: n, the order of the matrix A.
 sv_1 >= sv_2 >= ... >= sv_min(m,n) >= 0.
+ 4: ALB  DOUBLE PRECISION Input
 The first min(m,n) columns of Q are the left-hand singular
 vectors of A, the diagonal elements of S are the singular values
 of A and the first min(m,n) columns of P are the right-hand
 singular vectors of A.

 Either or both of the left-hand and right-hand singular vectors
 of A may be requested and the matrix C given by
+ 5: UB  DOUBLE PRECISION Input
+ On entry: l and u, the lower and upper endpoints of the
+ interval within which eigenvalues are to be calculated.
 C=Q^T B,
+ 6: M  INTEGER Input
+ On entry: an upper bound for the number of eigenvalues
+ within the interval.
 where B is an m by ncolb given matrix, may also be requested.
+ 7: MM  INTEGER Output
+ On exit: the actual number of eigenvalues within the
+ interval.
 The routine obtains the singular value decomposition by first
 reducing A to upper triangular form by means of Householder
 transformations, from the left when m>=n and from the right when
 m<n.
 KK^T=I,
+ 11: D(N)  DOUBLE PRECISION array Workspace
 (so that K has elements +1 or -1 on the diagonal)
+ 12: E(N)  DOUBLE PRECISION array Workspace
 then
+ 13: E2(N)  DOUBLE PRECISION array Workspace
 A=(QK)D(PK)^T
+ 14: X(N,7)  DOUBLE PRECISION array Workspace
 is also a singular value decomposition of A.
+ 15: G(N)  DOUBLE PRECISION array Workspace
 4. References
+ 16: C(N)  LOGICAL array Workspace
 [1] Dongarra J J, Moler C B, Bunch J R and Stewart G W (1979)
 LINPACK Users' Guide. SIAM, Philadelphia.
+ 17: ICOUNT(M)  INTEGER array Output
+ On exit: ICOUNT(i) contains the number of iterations for
+ the ith eigenvalue.
 [2] Hammarling S (1985) The Singular Value Decomposition in
 Multivariate Statistics. ACM Signum Newsletter. 20 (3) 2-25.
+ 18: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, 1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 [3] Wilkinson J H (1978) Singular Value Decomposition - Basic
 Aspects. Numerical Software - Needs and Availability. (ed D
 A H Jacobs) Academic Press.
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
 5. Parameters
+ 6. Error Indicators and Warnings
 1: M  INTEGER Input
 On entry: the number of rows, m, of the matrix A.
 Constraint: M >= 0.
+ Errors detected by the routine:
 When M = 0 then an immediate return is effected.
+ IFAIL= 1
+ M is less than the number of eigenvalues in the given
+ interval. On exit MM contains the number of eigenvalues in
+ the interval. Rerun with this value for M.
 2: N  INTEGER Input
 On entry: the number of columns, n, of the matrix A.
 Constraint: N >= 0.
+ IFAIL= 2
+ More than 5 iterations are required to determine any one
+ eigenvector.
 When N = 0 then an immediate return is effected.
+ 7. Accuracy
 3: A(LDA,*)  DOUBLE PRECISION array Input/Output
 Note: the second dimension of the array A must be at least
 max(1,N).
 On entry: the leading m by n part of the array A must
 contain the matrix A whose singular value decomposition is
 required. On exit: if M >= N and WANTQ = .TRUE., then the
 leading m by n part of A will contain the first n columns of
 the orthogonal matrix Q.
+ There is no guarantee of the accuracy of the eigenvectors as the
+ results depend on the original matrix and the multiplicity of the
+ roots. For a detailed error analysis see Wilkinson and Reinsch
+ [1] pp 222 and 436.
 If M < N and WANTP = .TRUE., then the leading m by n part of
 A will contain the first m rows of the orthogonal matrix P^T.
+ 8. Further Comments
 If M >= N and WANTQ = .FALSE. and WANTP = .TRUE., then the
 leading n by n part of A will contain the first n rows of
 the orthogonal matrix P^T.
+ The time taken by the routine is approximately proportional
+ to n**3.
 Otherwise the leading m by n part of A is used as internal
 workspace.
+ This subroutine should only be used when less than 25% of the
+ eigenvalues and the corresponding eigenvectors are required. Also
+ this subroutine is less efficient with matrices which have
+ multiple eigenvalues.
 4: LDA  INTEGER Input
 On entry:
 the first dimension of the array A as declared in the
 (sub)program from which F02WEF is called.
 Constraint: LDA >= max(1,M).
+ 9. Example
 5: NCOLB  INTEGER Input
 On entry: ncolb, the number of columns of the matrix B.
 When NCOLB = 0 the array B is not referenced. Constraint:
 NCOLB >= 0.
+ To calculate the eigenvalues lying between 2.0 and 3.0, and the
+ corresponding eigenvectors of the real symmetric matrix:
 6: B(LDB,*)  DOUBLE PRECISION array Input/Output
 Note: the second dimension of the array B must be at least
 max(1,ncolb) On entry: if NCOLB > 0, the leading m by ncolb
 part of the array B must contain the matrix to be
 transformed. On exit: B is overwritten by the m by ncolb
 matrix Q^T B.
+ ( 0.5 0.0 2.3 -2.6)
+ ( 0.0 0.5 -1.4 -0.7)
+ ( 2.3 -1.4 0.5 0.0)
+ (-2.6 -0.7 0.0 0.5).
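The same computation can be sketched with NumPy (an illustration of what F02BBF returns, not the routine itself). The minus signs of the matrix were lost in the plain-text original; the sign pattern below is assumed from the standard NAG test matrix, chosen so the spectrum comes out as whole numbers:

```python
import numpy as np

# The example's symmetric matrix (signs assumed, see lead-in).
a = np.array([[ 0.5,  0.0,  2.3, -2.6],
              [ 0.0,  0.5, -1.4, -0.7],
              [ 2.3, -1.4,  0.5,  0.0],
              [-2.6, -0.7,  0.0,  0.5]])

alb, ub = 2.0, 3.0   # interval endpoints, like ALB and UB

# Full spectrum in ascending order, then the eigenpairs whose
# eigenvalue lies in [ALB, UB] (F02BBF computes only those, by
# bisection; here we simply filter).
vals, vecs = np.linalg.eigh(a)
inside = [(lam, vecs[:, i]) for i, lam in enumerate(vals)
          if alb - 1e-10 <= lam <= ub + 1e-10]

print(vals)
print([lam for lam, _ in inside])
```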
 7: LDB  INTEGER Input
 On entry:
 the first dimension of the array B as declared in the
 (sub)program from which F02WEF is called.
 Constraint: if NCOLB > 0 then LDB >= max(1,M).
+ The example program is not reproduced here. The source code for
+ all example programs is distributed with the NAG Foundation
+ Library software and should be available online.
 8: WANTQ  LOGICAL Input
 On entry: WANTQ must be .TRUE., if the left-hand singular
 vectors are required. If WANTQ = .FALSE., then the array Q
 is not referenced.
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 9: Q(LDQ,*)  DOUBLE PRECISION array Output
 Note: the second dimension of the array Q must be at least
 max(1,M).
 On exit: if M < N and WANTQ = .TRUE., the leading m by m
 part of the array Q will contain the orthogonal matrix Q.
 Otherwise the array Q is not referenced.
+ F02 -- Eigenvalues and Eigenvectors F02BJF
+ F02BJF -- NAG Foundation Library Routine Document
 10: LDQ  INTEGER Input
 On entry:
 the first dimension of the array Q as declared in the
 (sub)program from which F02WEF is called.
 Constraint: if M < N and WANTQ = .TRUE., LDQ >= max(1,M).
+ Note: Before using this routine, please read the Users' Note for
+ your implementation to check implementation-dependent details.
+ The symbol (*) after a NAG routine name denotes a routine that is
+ not included in the Foundation Library.
 11: SV(*)  DOUBLE PRECISION array Output
 Note: the length of SV must be at least min(M,N). On exit:
 the min(M,N) diagonal elements of the matrix S.
+ 1. Purpose
 12: WANTP  LOGICAL Input
 On entry: WANTP must be .TRUE. if the right-hand singular
 vectors are required. If WANTP = .FALSE., then the array PT
 is not referenced.
+ F02BJF calculates all the eigenvalues and, if required, all the
+ eigenvectors of the generalized eigenproblem Ax=(lambda)Bx
+ where A and B are real, square matrices, using the QZ algorithm.
 13: PT(LDPT,*)  DOUBLE PRECISION array Output
 Note: the second dimension of the array PT must be at least
 max(1,N).
 On exit: if M >= N and WANTQ and WANTP are .TRUE., the
 leading n by n part of the array PT will contain the
 orthogonal matrix P^T. Otherwise the array PT is not
 referenced.
+ 2. Specification
 14: LDPT  INTEGER Input
 On entry:
 the first dimension of the array PT as declared in the
 (sub)program from which F02WEF is called.
 Constraint: if M >= N and WANTQ and WANTP are .TRUE., LDPT
 >= max(1,N).
+ SUBROUTINE F02BJF (N, A, IA, B, IB, EPS1, ALFR, ALFI,
+ 1 BETA, MATV, V, IV, ITER, IFAIL)
+ INTEGER N, IA, IB, IV, ITER(N), IFAIL
+ DOUBLE PRECISION A(IA,N), B(IB,N), EPS1, ALFR(N), ALFI(N),
+ 1 BETA(N), V(IV,N)
+ LOGICAL MATV
 15: WORK(*)  DOUBLE PRECISION array Output
 Note: the length of WORK must be at least max(1,lwork),
 where lwork must be as given in the following table:
+ 3. Description
 M >= N
 WANTQ is .TRUE. and WANTP = .TRUE.
 lwork=max(N**2+5*(N-1),N+NCOLB,4)
+ All the eigenvalues and, if required, all the eigenvectors of the
+ generalized eigenproblem Ax=(lambda)Bx where A and B are real,
+ square matrices, are determined using the QZ algorithm. The QZ
+ algorithm consists of 4 stages:
 WANTQ = .TRUE. and WANTP = .FALSE.
 lwork=max(N**2+4*(N-1),N+NCOLB,4)
+ (a) A is reduced to upper Hessenberg form and at the same time
+ B is reduced to upper triangular form.
 WANTQ = .FALSE. and WANTP = .TRUE.
 lwork=max(3*(N-1),2) when NCOLB = 0
+ (b) A is further reduced to quasi-triangular form while the
+ triangular form of B is maintained.
 lwork=max(5*(N-1),2) when NCOLB > 0
+ (c) The quasi-triangular form of A is reduced to triangular
+ form and the eigenvalues extracted. This routine does not
+ actually produce the eigenvalues (lambda)_j, but instead
+ returns (alpha)_j and (beta)_j such that
 WANTQ = .FALSE. and WANTP = .FALSE.
 lwork=max(2*(N-1),2) when NCOLB = 0
+ (lambda)_j=(alpha)_j/(beta)_j, j=1,2,...,n.
+ The division by (beta)_j becomes the responsibility of the
+ user's program, since (beta)_j may be zero, indicating an
+ infinite eigenvalue. Pairs of complex eigenvalues occur
+ with (alpha)_j/(beta)_j and (alpha)_(j+1)/(beta)_(j+1) complex
 lwork=max(3*(N-1),2) when NCOLB > 0
+ conjugates, even though (alpha)_j and (alpha)_(j+1) are not
+ conjugate.
 M < N
 WANTQ = .TRUE. and WANTP = .TRUE.
 lwork=max(M**2+5*(M-1),2)
+ (d) If the eigenvectors are required (MATV = .TRUE.), they are
+ obtained from the triangular matrices and then transformed
+ back into the original coordinate system.
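The alpha/beta convention from stage (c) can be shown with a toy sketch. Assuming the reduction of stages (a)-(b) has already been done, the pencil below is taken triangular from the start, so alpha and beta can be read straight off the diagonals (this illustrates the convention only, not the QZ reduction itself):

```python
import numpy as np

# A pencil (A, B) already in triangular form, as produced by stage (c).
aa = np.array([[2.0, 1.0],
               [0.0, 3.0]])
bb = np.array([[1.0, 0.5],
               [0.0, 0.0]])   # zero diagonal entry: infinite eigenvalue

# For a triangular pencil the generalized eigenvalues are the ratios of
# the diagonal entries: alpha_j / beta_j.
alpha = np.diag(aa)
beta = np.diag(bb)

# The division is left to the caller, because beta_j = 0 signals an
# infinite eigenvalue (as the routine document explains).
lam = [a / b if b != 0.0 else float('inf') for a, b in zip(alpha, beta)]
print(lam)
```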
 WANTQ = .TRUE. and WANTP = .FALSE.
 lwork=max(3*(M-1),1)
+ 4. References
 WANTQ = .FALSE. and WANTP = .TRUE.
 lwork=max(M**2+3*(M-1),2) when NCOLB = 0
+ [1] Moler C B and Stewart G W (1973) An Algorithm for
+ Generalized Matrix Eigenproblems. SIAM J. Numer. Anal. 10
+ 241-256.
 lwork=max(M**2+5*(M-1),2) when NCOLB > 0
+ [2] Ward R C (1975) The Combination Shift QZ Algorithm. SIAM J.
+ Numer. Anal. 12 835-853.
 WANTQ = .FALSE. and WANTP = .FALSE.
 lwork=max(2*(M-1),1) when NCOLB = 0
+ [3] Wilkinson J H (1979) Kronecker's Canonical Form and the QZ
+ Algorithm. Linear Algebra and Appl. 28 285-303.
 lwork=max(3*(M-1),1) when NCOLB > 0
 On exit: WORK(min(M,N)) contains the total number of
 iterations taken by the QR algorithm.
+ 5. Parameters
 The rest of the array is used as workspace.
+ 1: N  INTEGER Input
+ On entry: n, the order of the matrices A and B.
 16: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, -1 or 1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+ 2: A(IA,N)  DOUBLE PRECISION array Input/Output
+ On entry: the n by n matrix A. On exit: the array is
+ overwritten.
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+ 3: IA  INTEGER Input
+ On entry:
+ the first dimension of the array A as declared in the
+ (sub)program from which F02BJF is called.
+ Constraint: IA >= N.
 6. Error Indicators and Warnings
+ 4: B(IB,N)  DOUBLE PRECISION array Input/Output
+ On entry: the n by n matrix B. On exit: the array is
+ overwritten.
 Errors detected by the routine:
+ 5: IB  INTEGER Input
+ On entry:
+ the first dimension of the array B as declared in the
+ (sub)program from which F02BJF is called.
+ Constraint: IB >= N.
 If on entry IFAIL = 0 or 1, explanatory error messages are
 output on the current error message unit (as defined by X04AAF).
+ 6: EPS1  DOUBLE PRECISION Input
+ On entry: the tolerance used to determine negligible
+ elements. If EPS1 > 0.0, an element will be considered
+ negligible if it is less than EPS1 times the norm of its
+ matrix. If EPS1 <= 0.0, machine precision is used in place
+ of EPS1. A positive value of EPS1 may result in faster
+ execution but less accurate results.
 IFAIL=1
 One or more of the following conditions holds:
 M < 0,
+ 7: ALFR(N)  DOUBLE PRECISION array Output
 N < 0,
+ 8: ALFI(N)  DOUBLE PRECISION array Output
+ On exit: the real and imaginary parts of (alpha)_j, for
+ j=1,2,...,n.
 LDA < M,
+ 9: BETA(N)  DOUBLE PRECISION array Output
+ On exit: (beta)_j, for j=1,2,...,n.
 NCOLB < 0,
+ 10: MATV  LOGICAL Input
+ On entry: MATV must be set .TRUE. if the eigenvectors are
+ required, otherwise .FALSE..
 LDB < M and NCOLB > 0,
+ 11: V(IV,N)  DOUBLE PRECISION array Output
+ On exit: if MATV = .TRUE., then
+ (i) if the jth eigenvalue is real, the jth column of V
+ contains its eigenvector;
 LDQ < M and M < N and WANTQ = .TRUE.,
+ (ii) if the jth and (j+1)th eigenvalues form a complex
+ pair, the jth and (j+1)th columns of V contain the
+ real and imaginary parts of the eigenvector associated
+ with the first eigenvalue of the pair. The conjugate
+ of this vector is the eigenvector for the conjugate
+ eigenvalue.
+ Each eigenvector is normalised so that the component of
+ largest modulus is real and the sum of squares of the moduli
+ equals one.
 LDPT < N and M >= N and WANTQ = .TRUE., and WANTP = .
 TRUE..
+ If MATV = .FALSE., V is not used.
 IFAIL> 0
 The QR algorithm has failed to converge in 50*min(m,n)
 iterations. In this case SV(1), SV(2),..., SV(IFAIL) may not
 have been found correctly and the remaining singular values
 may not be the smallest. The matrix A will nevertheless have
 been factorized as A=QEP^T, where the leading min(m,n) by
 min(m,n) part of E is a bidiagonal matrix with SV(1), SV(2),
 ..., SV(min(m,n)) as the diagonal elements and WORK(1),
 WORK(2),..., WORK(min(m,n)-1) as the superdiagonal elements.
+ 12: IV  INTEGER Input
+ On entry:
+ the first dimension of the array V as declared in the
+ (sub)program from which F02BJF is called.
+ Constraint: IV >= N.
 This failure is not likely to occur.
+ 13: ITER(N)  INTEGER array Output
+ On exit: ITER(j) contains the number of iterations needed
+ to obtain the jth eigenvalue. Note that the eigenvalues are
+ obtained in reverse order, starting with the nth.
 7. Accuracy
+ 14: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, -1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 The computed factors Q, D and P satisfy the relation
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
 QDP^T = A+E,
+ 6. Error Indicators and Warnings
 where
+ Errors detected by the routine:
 ||E|| <= c(epsilon)||A||,
+ IFAIL= i
+ More than 30*N iterations are required to determine all the
+ diagonal 1 by 1 or 2 by 2 blocks of the quasi-triangular
+ form in the second step of the QZ algorithm. IFAIL is set to
+ the index i of the eigenvalue at which this failure occurs.
+ If the soft failure option is used, (alpha)_j and (beta)_j
+ are correct for j=i+1,i+2,...,n, but V does not contain any
+ correct eigenvectors.
 (epsilon) being the machine precision, c is a modest function of
 m and n and ||.|| denotes the spectral (two) norm. Note that
 ||A|| = sv_1.
+ 7. Accuracy
 8. Further Comments
+ The computed eigenvalues are always exact for a problem
+ (A+E)x=(lambda)(B+F)x where ||E||/||A|| and ||F||/||B||
+ are both of the order of max(EPS1,(epsilon)), EPS1 being defined
+ as in Section 5 and (epsilon) being the machine precision.
 Following the use of this routine the rank of A may be estimated
 by a call to the INTEGER FUNCTION F06KLF(*). The statement:
+ Note: interpretation of results obtained with the QZ algorithm
+ often requires a clear understanding of the effects of small
+ changes in the original data. These effects are reviewed in
+ Wilkinson [3], in relation to the significance of small values of
+ (alpha)_j and (beta)_j. It should be noted that if (alpha)_j and
+ (beta)_j are both small for any j, it may be that no reliance can
+ be placed on any of the computed eigenvalues
+ (lambda)_i=(alpha)_i/(beta)_i. The user is recommended to study [3]
+ and, if in difficulty, to seek expert advice on determining the
+ sensitivity of the eigenvalues to perturbations in the data.
 IRANK = F06KLF(MIN(M, N), SV, 1, TOL)
+ 8. Further Comments
 returns the value (k-1) in IRANK, where k is the smallest integer
 for which SV(k) < TOL*SV(1),
+ BC=C^T B (2)
 D=S, m=n,
+ for a given positive-definite matrix B. C is said to be
+ B-symmetric. Different specifications of C allow for the solution
+ of a variety of eigenvalue problems. For example, when
 D=(S 0), m<n, with sv_1 >= sv_2 >= ... >= sv_min(m,n) >= 0.

 The first min(m,n) columns of Q are the left-hand singular
 vectors of A, the diagonal elements of S are the singular values
 of A and the first min(m,n) columns of P are the right-hand
 singular vectors of A.

 Either or both of the left-hand and right-hand singular vectors
 of A may be requested and the matrix C given by

 C=Q^H B,

 where B is an m by ncolb given matrix, may also be requested.

 The routine obtains the singular value decomposition by first
 reducing A to upper triangular form by means of Householder
 transformations, from the left when m>=n and from the right when
 m<n.
+ To find the m eigenvalues of smallest absolute magnitude of (3)
+ we can choose C=A**(-1) and hence find the reciprocals of the
+ required eigenvalues, so that IMAGE will need to solve Aw=z for
+ w, and correspondingly for (4) we can choose C=A**(-1)B and
+ solve Aw=Bz for w.
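The inverse choice above (C=A**(-1), so that IMAGE solves Aw=z) can be sketched in NumPy with plain inverse iteration on a small symmetric matrix (the matrix and iteration count are illustrative; in practice A would be factorized once, as with F01MCF, and only the back-substitution repeated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric positive-definite matrix whose smallest eigenvalue we want.
a = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Iterating with C = A^-1 turns "smallest eigenvalue of A" into
# "largest eigenvalue of C": each application of C is a solve A w = z.
z = rng.standard_normal(3)
for _ in range(200):
    w = np.linalg.solve(a, z)      # w = C z  with  C = A^-1
    z = w / np.linalg.norm(w)

mu = z @ np.linalg.solve(a, z)     # Rayleigh quotient of C = A^-1
smallest = 1.0 / mu                # reciprocal recovers eigenvalue of A
print(smallest)
```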
 When M = 0 then an immediate return is effected.
+ A table of examples of choice of IMAGE is given in Table 3.1. It
+ should be remembered that the routine also returns the
+ corresponding eigenvectors and that B is positive-definite.
+ Throughout A is assumed to be symmetric and, where necessary,
+ non-singularity is also assumed.
 2: N  INTEGER Input
 On entry: the number of columns, n, of the matrix A.
 Constraint: N >= 0.
+ Eigenvalues Problem
+ Required
 When N = 0 then an immediate return is effected.
+ Ax=(lambda)x (B=I) Ax=(lambda)Bx ABx=(lambda)x
 3: A(LDA,*)  COMPLEX(KIND(1.0D)) array Input/Output
 Note: the second dimension of the array A must be at least
 max(1,N).
 On entry: the leading m by n part of the array A must
 contain the matrix A whose singular value decomposition is
 required. On exit: if M >= N and WANTQ = .TRUE., then the
 leading m by n part of A will contain the first n columns of
 the unitary matrix Q.
 If M < N and WANTP = .TRUE., then the leading m by n part of
 A will contain the first m rows of the unitary matrix P^H.
 If M >= N and WANTQ = .FALSE. and WANTP = .TRUE., then the
 leading n by n part of A will contain the first n
 rows of the unitary matrix P^H. Otherwise the leading m by n
 part of A is used as internal workspace.
+ Largest Compute Solve Compute
+ w=Az Bw=Az w=ABz
 4: LDA  INTEGER Input
 On entry:
 the first dimension of the array A as declared in the
 (sub)program from which F02XEF is called.
 Constraint: LDA >= max(1,M).
+ Smallest Solve Solve Solve
+ (Find Aw=z Aw=Bz Av=z, Bw=v
+ 1/(lambda))
 5: NCOLB  INTEGER Input
 On entry: ncolb, the number of columns of the matrix B.
 When NCOLB = 0 the array B is not referenced. Constraint:
 NCOLB >= 0.
+ Furthest Compute Solve Compute
+ from w=(A-(sigma)I)z Bw=(A-(sigma)B)z w=(AB-(sigma)I)z
+ (sigma)
+ (Find
+ (lambda)-
+ (sigma))
 6: B(LDB,*)  COMPLEX(KIND(1.0D)) array Input/Output
 Note: the second dimension of the array B must be at least
 max(1,NCOLB).
 On entry: if NCOLB > 0, the leading m by ncolb part of the
 array B must contain the matrix to be transformed. On exit:
 B is overwritten by the m by ncolb matrix Q^H B.
+ Closest to Solve Solve Solve
+ (sigma) (A-(sigma)I)w=z (A-(sigma)B)w=Bz (AB-(sigma)I)w=z
+ (Find 1/((
+ lambda)-
+ (sigma)))
 7: LDB  INTEGER Input
 On entry:
 the first dimension of the array B as declared in the
 (sub)program from which F02XEF is called.
 Constraint: if NCOLB > 0, then LDB >= max(1,M).
 8: WANTQ  LOGICAL Input
 On entry: WANTQ must be .TRUE. if the left-hand singular
 vectors are required. If WANTQ = .FALSE. then the array Q is
 not referenced.
+ Table 3.1
+ The Requirement of IMAGE for Various Problems
 9: Q(LDQ,*)  COMPLEX(KIND(1.0D)) array Output
 Note: the second dimension of the array Q must be at least
 max(1,M).
 On exit: if M < N and WANTQ = .TRUE., the leading m by m
 part of the array Q will contain the unitary matrix Q.
 Otherwise the array Q is not referenced.
+ The matrix B also need not be supplied explicitly, but is
+ specified via a usersupplied routine DOT which, given n element
+ vectors z and w, computes the generalized dot product w^T Bz.
 10: LDQ  INTEGER Input
 On entry:
 the first dimension of the array Q as declared in the
 (sub)program from which F02XEF is called.
 Constraint: if M < N and WANTQ = .TRUE., LDQ >= max(1,M).
+ F02FJF is based upon routine SIMITZ (see Nikolai [1]), which is
+ itself a derivative of the Algol procedure ritzit (see
+ Rutishauser [4]), and uses the method of simultaneous (subspace)
+ iteration. (See Parlett [2] for description, analysis and advice
+ on the use of the method.)
 11: SV(*)  DOUBLE PRECISION array Output
 Note: the length of SV must be at least min(M,N). On exit:
 the min(m,n) diagonal elements of the matrix S.
+ The routine performs simultaneous iteration on k>m vectors.
+ Initial estimates to p<=k eigenvectors, corresponding to the p
+ eigenvalues of C of largest absolute value, may be supplied by
+ the user to F02FJF. When possible k should be chosen so that the
+ kth eigenvalue is not too close to the m required eigenvalues,
+ but if k is initially chosen too small then F02FJF may be
+ re-entered, supplying approximations to the k eigenvectors
+ found so far and with k then increased.
 12: WANTP  LOGICAL Input
 On entry: WANTP must be .TRUE. if the right-hand singular
 vectors are required. If WANTP = .FALSE. then the array PH
 is not referenced.
+ At each major iteration F02FJF solves an r by r (r<=k) eigenvalue
+ subproblem in order to obtain an approximation to the
+ eigenvalues for which convergence has not yet occurred. This
+ approximation is refined by Chebyshev acceleration.
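The method described in the preceding paragraphs can be sketched in a few lines. This is a rough Python illustration (not the NAG implementation) of simultaneous (subspace) iteration on k vectors for a dense symmetric matrix, solving a k-by-k Ritz (eigenvalue) subproblem at each major iteration; the Chebyshev acceleration, locking of converged vectors, and generalized B-inner product of F02FJF are omitted.

```python
import numpy as np

def subspace_iteration(C, m, k, tol=1e-8, max_iters=100):
    """Crude simultaneous (subspace) iteration for the m eigenvalues of
    largest absolute value of a symmetric matrix C, using k > m vectors."""
    n = C.shape[0]
    rng = np.random.default_rng(0)
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))  # random orthonormal start
    d_old = np.zeros(k)
    for _ in range(max_iters):
        Y = C @ X                       # image step: w = C z, column by column
        Q, _ = np.linalg.qr(Y)          # re-orthonormalize the iterates
        H = Q.T @ C @ Q                 # k-by-k Ritz (eigenvalue) subproblem
        d, V = np.linalg.eigh(H)
        order = np.argsort(-np.abs(d))  # decreasing order of magnitude
        d, V = d[order], V[:, order]
        X = Q @ V
        if np.all(np.abs(d[:m] - d_old[:m]) < np.abs(d[:m]) * tol):
            break
        d_old = d
    return d[:m], X[:, :m]

# Diagonal test matrix: the dominant eigenvalues 10 and 5 are recovered.
C = np.diag([10.0, 5.0, 2.0, 1.0, 0.5])
vals, vecs = subspace_iteration(C, m=2, k=4)
```

In F02FJF the image step is supplied by the user routine IMAGE and the inner product by DOT, so the matrix never needs to be stored explicitly.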
 13: PH(LDPH,*)  DOUBLE PRECISION array Output
 Note: the second dimension of the array PH must be at least
 max(1,N).
 On exit: if M >= N and WANTQ and WANTP are .TRUE., the
 leading n by n part of the array PH will contain the unitary
 H
 matrix P . Otherwise the array PH is not referenced.
+ 4. References
 14: LDPH  INTEGER Input
 On entry:
 the first dimension of the array PH as declared in the
 (sub)program from which F02XEF is called.
 Constraint: if M >= N and WANTQ and WANTP are .TRUE., LDPH
 >= max(1,N).
+ [1] Nikolai P J (1979) Algorithm 538: Eigenvectors and
+ eigenvalues of real generalized symmetric matrices by
+ simultaneous iteration. ACM Trans. Math. Softw. 5 118-125.
 15: RWORK(*)  DOUBLE PRECISION array Output
 Note: the length of RWORK must be at least max(1,lrwork),
 where lrwork must satisfy:
 lrwork=2*(min(M,N)1) when
 NCOLB = 0 and WANTQ and WANTP are .FALSE.,
+ [2] Parlett B N (1980) The Symmetric Eigenvalue Problem.
+ Prentice-Hall.
 lrwork=3*(min(M,N)1) when
 either NCOLB = 0 and WANTQ = .FALSE. and WANTP = .
 TRUE., or WANTP = .FALSE. and one or both of NCOLB > 0
 and WANTQ = .TRUE.
+ [3] Rutishauser H (1969) Computational aspects of F L Bauer's
+ simultaneous iteration method. Num. Math. 13 4-13.
 lrwork=5*(min(M,N)1)
 otherwise.
 On exit: RWORK(min(M,N)) contains the total number of
 iterations taken by the QR algorithm.
+ [4] Rutishauser H (1970) Simultaneous iteration method for
+ symmetric matrices. Num. Math. 16 205-223.
 The rest of the array is used as workspace.
+ 5. Parameters
 16: CWORK(*)  COMPLEX(KIND(1.0D0)) array Workspace
 Note: the length of CWORK must be at least max(1,lcwork),
 where lcwork must satisfy:
 lcwork=N+max(N**2,NCOLB) when
 M >= N and WANTQ and WANTP are both .TRUE.
+ 1: N  INTEGER Input
+ On entry: n, the order of the matrix C. Constraint: N >= 1.
 lcwork=N+max(N**2+N,NCOLB) when
 M >= N and WANTQ = .TRUE., but WANTP = .FALSE.
+ 2: M  INTEGER Input/Output
+ On entry: m, the number of eigenvalues required.
+ Constraint: M >= 1. On exit: m', the number of eigenvalues
+ actually found. It is equal to m if IFAIL = 0 on exit, and
+ is less than m if IFAIL = 2, 3 or 4. See Section 6 and
+ Section 8 for further information.
 lcwork=N+max(N,NCOLB) when
 M >= N and WANTQ = .FALSE.
+ 3: K  INTEGER Input
+ On entry: the number of simultaneous iteration vectors to be
+ used. Too small a value of K may inhibit convergence, while
+ a larger value of K incurs additional storage and additional
+ work per iteration. Suggested value: K = M + 4 will often be
+ a reasonable choice in the absence of better information.
+ Constraint: M < K <= N.
 lcwork=M**2+M when
 M < N and WANTP = .TRUE.
+ 4: NOITS  INTEGER Input/Output
+ On entry: the maximum number of major iterations (eigenvalue
+ subproblems) to be performed. If NOITS <= 0, then the value
+ 100 is used in place of NOITS. On exit: the number of
+ iterations actually performed.
 lcwork = M when
 M < N and WANTP = .FALSE.
+ 5: TOL  DOUBLE PRECISION Input
+ On entry: a relative tolerance to be used in accepting
+ eigenvalues and eigenvectors. If the eigenvalues are
+ required to about t significant figures, then TOL should be
+ set to about 10**(-t). d(i) is accepted as an eigenvalue as
+ soon as two successive approximations to d(i) differ by less
+ than (|d~(i)|*TOL)/10, where d~(i) is the latest
+ approximation to d(i). Once an eigenvalue has been accepted,
+ then an eigenvector is accepted as soon as
+ (d(i)*f(i))/(d(i)-d(r)) <= TOL,
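The eigenvalue acceptance rule above can be stated compactly. The following hypothetical Python helper (not NAG code) applies it to two successive approximations of the same eigenvalue.

```python
def eigenvalue_accepted(d_prev, d_curr, tol):
    """Accept the latest approximation d_curr as an eigenvalue once two
    successive approximations differ by less than (|d_curr| * tol) / 10,
    as described in the TOL parameter text above."""
    return abs(d_curr - d_prev) < abs(d_curr) * tol / 10.0

# With TOL = 1e-4 and |d| near 5, the acceptance threshold is about 5e-5.
accepted = eigenvalue_accepted(5.00001, 5.00002, 1e-4)
rejected = eigenvalue_accepted(5.0, 5.01, 1e-4)
```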
+ 6: LRWORK  INTEGER Input
 LDQ < M and M < N and WANTQ = .TRUE.,
 LDPH < N and M >= N and WANTQ = .TRUE. and WANTP = .
 TRUE..
+ 7: IWORK(LIWORK)  INTEGER array User Workspace
 IFAIL > 0
 The QR algorithm has failed to converge in 50*min(m,n)
 iterations. In this case SV(1), SV(2),..., SV(IFAIL) may not
 have been found correctly and the remaining singular values
 may not be the smallest. The matrix A will nevertheless have
 been factorized as A=QEP^H where the leading min(m,n) by
 min(m,n) part of E is a bidiagonal matrix with SV(1), SV(2),
 ..., SV(min(m,n)) as the diagonal elements and RWORK(1),
 RWORK(2),..., RWORK(min(m,n)1) as the superdiagonal
 elements.
 This failure is not likely to occur.
+ 8: LIWORK  INTEGER Input
+ DOT is called from F02FJF with the parameters RWORK,
+ LRWORK, IWORK and LIWORK as supplied to F02FJF. The
+ user is free to use the arrays RWORK and IWORK to
+ supply information to DOT and to IMAGE as an
+ alternative to using COMMON.
+ DOT must be declared as EXTERNAL in the (sub)program
+ from which F02FJF is called. Parameters denoted as
+ Input must not be changed by this procedure.
 7. Accuracy
+ 7: IMAGE  SUBROUTINE, supplied by the user.
+ External Procedure
+ IMAGE must return the vector w=Cz for a given vector z.
 The computed factors Q, D and P satisfy the relation
+ Its specification is:
 QDP^H=A+E,
+ SUBROUTINE IMAGE (IFLAG, N, Z, W, RWORK, LRWORK,
+ 1 IWORK, LIWORK)
+ INTEGER IFLAG, N, LRWORK, IWORK(LIWORK),
+ 1 LIWORK
+ DOUBLE PRECISION Z(N), W(N), RWORK(LRWORK)
 where
+ 1: IFLAG  INTEGER Input/Output
+ On entry: IFLAG is always nonnegative. On exit: IFLAG
+ may be used as a flag to indicate a failure in the
+ computation of w. If IFLAG is negative on exit from
+ IMAGE, then F02FJF will exit immediately with IFAIL set
+ to IFLAG.
 ||E||<=c(epsilon)||A||,
+ 2: N  INTEGER Input
+ On entry: n, the number of elements in the vectors w
+ and z, and the order of the matrix C.
 (epsilon) being the machine precision, c is a modest function of
 m and n and ||.|| denotes the spectral (two) norm. Note that
 ||A||=sv(1).
+ 3: Z(N)  DOUBLE PRECISION array Input
+ On entry: the vector z for which Cz is required.
 8. Further Comments
+ 4: W(N)  DOUBLE PRECISION array Output
+ On exit: the vector w=Cz.
 Following the use of this routine the rank of A may be estimated
 by a call to the INTEGER FUNCTION F06KLF(*). The statement:
+ 5: RWORK(LRWORK)  DOUBLE PRECISION array User Workspace
+ 6: LRWORK  INTEGER Input
 IRANK = F06KLF(MIN(M, N), SV, 1, TOL)
+ 7: IWORK(LIWORK)  INTEGER array User Workspace
 returns the value (k-1) in IRANK, where k is the smallest integer
 for which SV(k) < TOL*SV(1).
 S ==> Symbol
 FOP ==> FortranOutputStackPackage
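The rank estimate computed via F06KLF here can be mimicked in a few lines. This is a hypothetical Python analogue of the IRANK computation, not the NAG routine itself.

```python
def estimate_rank(sv, tol):
    """Estimated rank from singular values sv (assumed in decreasing
    order): the count of leading values >= tol * sv[0].  This mirrors
    the F06KLF-based IRANK computation described above, returning k-1
    where k is the first (1-based) index with sv[k] below threshold."""
    if not sv or sv[0] == 0.0:
        return 0
    for i, s in enumerate(sv):
        if s < tol * sv[0]:
            return i
    return len(sv)

# Two singular values clearly above the threshold, two at noise level.
rank = estimate_rank([3.0, 1.5, 1e-12, 1e-14], tol=1e-8)
```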
+ 10: X(NRX,K)  DOUBLE PRECISION array Input/Output
+ On entry: if 0 < NOVECS <= K, the first NOVECS columns of X
+ must contain approximations to the eigenvectors
+ corresponding to the NOVECS eigenvalues of largest absolute
+ value of C. Supplying approximate eigenvectors can be useful
+ when reasonable approximations are known, or when the
+ routine is being restarted with a larger value of K.
+ Otherwise it is not necessary to supply approximate vectors,
+ as simultaneous iteration vectors will be generated randomly
+ by the routine. On exit: if IFAIL = 0, 2, 3 or 4, the first
+ m' columns contain the eigenvectors corresponding to the
+ eigenvalues returned in the first m' elements of D (see
+ below); and the next k-m'-1 columns contain approximations
+ to the eigenvectors corresponding to the approximate
+ eigenvalues returned in the next k-m'-1 elements of D. Here
+ m' is the value returned in M (see above), the number of
+ eigenvalues actually found. The kth column is used as
+ workspace.
 Exports ==> with
 f02aaf : (Integer,Integer,Matrix DoubleFloat,Integer) -> Result
 ++ f02aaf(ia,n,a,ifail)
 ++ calculates all the eigenvalues.
 ++ See \downlink{Manual Page}{manpageXXf02aaf}.
 f02abf : (Matrix DoubleFloat,Integer,Integer,Integer,Integer) -> Result
 ++ f02abf(a,ia,n,iv,ifail)
 ++ calculates all the eigenvalues of a real
 ++ symmetric matrix.
 ++ See \downlink{Manual Page}{manpageXXf02abf}.
 f02adf : (Integer,Integer,Integer,Matrix DoubleFloat,_
 Matrix DoubleFloat,Integer) -> Result
 ++ f02adf(ia,ib,n,a,b,ifail)
 ++ calculates all the eigenvalues of Ax=(lambda)Bx, where A
 ++ is a real symmetric matrix and B is a real symmetric positive
 ++ definite matrix.
 ++ See \downlink{Manual Page}{manpageXXf02adf}.
 f02aef : (Integer,Integer,Integer,Integer,_
 Matrix DoubleFloat,Matrix DoubleFloat,Integer) -> Result
 ++ f02aef(ia,ib,n,iv,a,b,ifail)
 ++ calculates all the eigenvalues of
 ++ Ax=(lambda)Bx, where A is a real symmetric matrix and B is a
 ++ real symmetric positivedefinite matrix.
 ++ See \downlink{Manual Page}{manpageXXf02aef}.
 f02aff : (Integer,Integer,Matrix DoubleFloat,Integer) -> Result
 ++ f02aff(ia,n,a,ifail)
 ++ calculates all the eigenvalues of a real unsymmetric
 ++ matrix.
 ++ See \downlink{Manual Page}{manpageXXf02aff}.
 f02agf : (Integer,Integer,Integer,Integer,_
 Matrix DoubleFloat,Integer) -> Result
 ++ f02agf(ia,n,ivr,ivi,a,ifail)
 ++ calculates all the eigenvalues of a real
 ++ unsymmetric matrix.
 ++ See \downlink{Manual Page}{manpageXXf02agf}.
 f02ajf : (Integer,Integer,Integer,Matrix DoubleFloat,_
 Matrix DoubleFloat,Integer) -> Result
 ++ f02ajf(iar,iai,n,ar,ai,ifail)
 ++ calculates all the eigenvalues.
 ++ See \downlink{Manual Page}{manpageXXf02ajf}.
 f02akf : (Integer,Integer,Integer,Integer,_
 Integer,Matrix DoubleFloat,Matrix DoubleFloat,Integer) -> Result
 ++ f02akf(iar,iai,n,ivr,ivi,ar,ai,ifail)
 ++ calculates all the eigenvalues of a
 ++ complex matrix.
 ++ See \downlink{Manual Page}{manpageXXf02akf}.
 f02awf : (Integer,Integer,Integer,Matrix DoubleFloat,_
 Matrix DoubleFloat,Integer) -> Result
 ++ f02awf(iar,iai,n,ar,ai,ifail)
 ++ calculates all the eigenvalues of a complex Hermitian
 ++ matrix.
 ++ See \downlink{Manual Page}{manpageXXf02awf}.
 f02axf : (Matrix DoubleFloat,Integer,Matrix DoubleFloat,Integer,_
 Integer,Integer,Integer,Integer) -> Result
 ++ f02axf(ar,iar,ai,iai,n,ivr,ivi,ifail)
 ++ calculates all the eigenvalues of a
 ++ complex Hermitian matrix.
 ++ See \downlink{Manual Page}{manpageXXf02axf}.
 f02bbf : (Integer,Integer,DoubleFloat,DoubleFloat,_
 Integer,Integer,Matrix DoubleFloat,Integer) -> Result
 ++ f02bbf(ia,n,alb,ub,m,iv,a,ifail)
 ++ calculates selected eigenvalues of a real
 ++ symmetric matrix by reduction to tridiagonal form, bisection and
 ++ inverse iteration, where the selected eigenvalues lie within a
 ++ given interval.
 ++ See \downlink{Manual Page}{manpageXXf02bbf}.
 f02bjf : (Integer,Integer,Integer,DoubleFloat,_
 Boolean,Integer,Matrix DoubleFloat,Matrix DoubleFloat,Integer) -> Result
 ++ f02bjf(n,ia,ib,eps1,matv,iv,a,b,ifail)
 ++ calculates all the eigenvalues and, if required, all the
 ++ eigenvectors of the generalized eigenproblem Ax=(lambda)Bx
 ++ where A and B are real, square matrices, using the QZ algorithm.
 ++ See \downlink{Manual Page}{manpageXXf02bjf}.
 f02fjf : (Integer,Integer,DoubleFloat,Integer,_
 Integer,Integer,Integer,Integer,Integer,Integer,Matrix DoubleFloat,_
 Integer,Union(fn:FileName,fp:Asp27(DOT)),_
 Union(fn:FileName,fp:Asp28(IMAGE))) -> Result
 ++ f02fjf(n,k,tol,novecs,nrx,lwork,lrwork,
 ++ liwork,m,noits,x,ifail,dot,image)
 ++ finds eigenvalues of a real sparse symmetric
 ++ or generalized symmetric eigenvalue problem.
 ++ See \downlink{Manual Page}{manpageXXf02fjf}.
 f02fjf : (Integer,Integer,DoubleFloat,Integer,_
 Integer,Integer,Integer,Integer,Integer,Integer,Matrix DoubleFloat,_
 Integer,Union(fn:FileName,fp:Asp27(DOT)),_
 Union(fn:FileName,fp:Asp28(IMAGE)),FileName) -> Result
 ++ f02fjf(n,k,tol,novecs,nrx,lwork,lrwork,
 ++ liwork,m,noits,x,ifail,dot,image,monit)
 ++ finds eigenvalues of a real sparse symmetric
 ++ or generalized symmetric eigenvalue problem.
 ++ See \downlink{Manual Page}{manpageXXf02fjf}.
 f02wef : (Integer,Integer,Integer,Integer,_
 Integer,Boolean,Integer,Boolean,Integer,Matrix DoubleFloat,_
 Matrix DoubleFloat,Integer) -> Result
 ++ f02wef(m,n,lda,ncolb,ldb,wantq,ldq,wantp,ldpt,a,b,ifail)
 ++ returns all, or part, of the singular value decomposition
 ++ of a general real matrix.
 ++ See \downlink{Manual Page}{manpageXXf02wef}.
 f02xef : (Integer,Integer,Integer,Integer,_
 Integer,Boolean,Integer,Boolean,Integer,Matrix Complex DoubleFloat,_
 Matrix Complex DoubleFloat,Integer) -> Result
 ++ f02xef(m,n,lda,ncolb,ldb,wantq,ldq,wantp,ldph,a,b,ifail)
 ++ returns all, or part, of the singular value decomposition
 ++ of a general complex matrix.
 ++ See \downlink{Manual Page}{manpageXXf02xef}.
 Implementation ==> add
+ 11: NRX  INTEGER Input
+ On entry:
+ the first dimension of the array X as declared in the
+ (sub)program from which F02FJF is called.
+ Constraint: NRX >= N.
 import Lisp
 import DoubleFloat
 import Any
 import Record
 import Integer
 import Matrix DoubleFloat
 import Boolean
 import NAGLinkSupportPackage
 import FortranPackage
 import AnyFunctions1(Integer)
 import AnyFunctions1(Boolean)
 import AnyFunctions1(Matrix DoubleFloat)
 import AnyFunctions1(Matrix Complex DoubleFloat)
 import AnyFunctions1(DoubleFloat)
+ 12: D(K)  DOUBLE PRECISION array Output
+ On exit: if IFAIL = 0, 2, 3 or 4, the first m' elements
+ contain the first m' eigenvalues in decreasing order of
+ magnitude; and the next k-m'-1 elements contain
+ approximations to the next k-m'-1 eigenvalues. Here m' is
+ the value returned in M (see above), the number of
+ eigenvalues actually found. D(k) contains the value e where
+ (e,e) is the latest interval over which Chebyshev
+ acceleration is performed.
+ 13: WORK(LWORK)  DOUBLE PRECISION array Workspace
 f02aaf(iaArg:Integer,nArg:Integer,aArg:Matrix DoubleFloat,_
 ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02aaf",_
 ["ia"::S,"n"::S,"ifail"::S,"r"::S,"a"::S,"e"::S]$Lisp,_
 ["r"::S,"e"::S]$Lisp,_
 [["double"::S,["r"::S,"n"::S]$Lisp,["a"::S,"ia"::S,"n"::S]$Lisp_
 ,["e"::S,"n"::S]$Lisp]$Lisp_
 ,["integer"::S,"ia"::S,"n"::S,"ifail"::S]$Lisp_
 ]$Lisp,_
 ["r"::S,"a"::S,"ifail"::S]$Lisp,_
 [([iaArg::Any,nArg::Any,ifailArg::Any,aArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ 14: LWORK  INTEGER Input
+ On entry: the length of the array WORK, as declared in the
+ (sub)program from which F02FJF is called. Constraint:
+ LWORK>=3*K+max(K*K,2*N).
 f02abf(aArg:Matrix DoubleFloat,iaArg:Integer,nArg:Integer,_
 ivArg:Integer,ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02abf",_
 ["ia"::S,"n"::S,"iv"::S,"ifail"::S,"a"::S,"r"::S,"v"::S,"e"::S]$Lisp,_
 ["r"::S,"v"::S,"e"::S]$Lisp,_
 [["double"::S,["a"::S,"ia"::S,"n"::S]$Lisp_
 ,["r"::S,"n"::S]$Lisp,["v"::S,"iv"::S,"n"::S]$Lisp,_
 ["e"::S,"n"::S]$Lisp]$Lisp_
 ,["integer"::S,"ia"::S,"n"::S,"iv"::S,"ifail"::S_
 ]$Lisp_
 ]$Lisp,_
 ["r"::S,"v"::S,"ifail"::S]$Lisp,_
 [([iaArg::Any,nArg::Any,ivArg::Any,ifailArg::Any,aArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ 15: RWORK(LRWORK)  DOUBLE PRECISION array User Workspace
+ RWORK is not used by F02FJF, but is passed directly to
+ routines DOT and IMAGE and may be used to supply information
+ to these routines.
 f02adf(iaArg:Integer,ibArg:Integer,nArg:Integer,_
 aArg:Matrix DoubleFloat,bArg:Matrix DoubleFloat,_
 ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02adf",_
 ["ia"::S,"ib"::S,"n"::S,"ifail"::S,"r"::S,"a"::S,"b"::S,"de"::S]$Lisp,_
 ["r"::S,"de"::S]$Lisp,_
 [["double"::S,["r"::S,"n"::S]$Lisp,["a"::S,"ia"::S,"n"::S]$Lisp_
 ,["b"::S,"ib"::S,"n"::S]$Lisp,["de"::S,"n"::S]$Lisp]$Lisp_
 ,["integer"::S,"ia"::S,"ib"::S,"n"::S,"ifail"::S_
 ]$Lisp_
 ]$Lisp,_
 ["r"::S,"a"::S,"b"::S,"ifail"::S]$Lisp,_
 [([iaArg::Any,ibArg::Any,nArg::Any,ifailArg::Any,aArg::Any,bArg::Any])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ 16: LRWORK  INTEGER Input
+ On entry: the length of the array RWORK, as declared in the
+ (sub)program from which F02FJF is called. Constraint: LRWORK
+ >= 1.
 f02aef(iaArg:Integer,ibArg:Integer,nArg:Integer,_
 ivArg:Integer,aArg:Matrix DoubleFloat,bArg:Matrix DoubleFloat,_
 ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02aef",_
 ["ia"::S,"ib"::S,"n"::S,"iv"::S,"ifail"::S_
 ,"r"::S,"v"::S,"a"::S,"b"::S,"dl"::S_
 ,"e"::S]$Lisp,_
 ["r"::S,"v"::S,"dl"::S,"e"::S]$Lisp,_
 [["double"::S,["r"::S,"n"::S]$Lisp,["v"::S,"iv"::S,"n"::S]$Lisp_
 ,["a"::S,"ia"::S,"n"::S]$Lisp,["b"::S,"ib"::S,"n"::S]$Lisp,_
 ["dl"::S,"n"::S]$Lisp,["e"::S,"n"::S]$Lisp_
 ]$Lisp_
 ,["integer"::S,"ia"::S,"ib"::S,"n"::S,"iv"::S_
 ,"ifail"::S]$Lisp_
 ]$Lisp,_
 ["r"::S,"v"::S,"a"::S,"b"::S,"ifail"::S]$Lisp,_
 [([iaArg::Any,ibArg::Any,nArg::Any,ivArg::Any,_
 ifailArg::Any,aArg::Any,bArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ 17: IWORK(LIWORK)  INTEGER array User Workspace
+ IWORK is not used by F02FJF, but is passed directly to
+ routines DOT and IMAGE and may be used to supply information
+ to these routines.
 f02aff(iaArg:Integer,nArg:Integer,aArg:Matrix DoubleFloat,_
 ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02aff",_
 ["ia"::S,"n"::S,"ifail"::S,"rr"::S,"ri"::S,"intger"::S,"a"::S]$Lisp,_
 ["rr"::S,"ri"::S,"intger"::S]$Lisp,_
 [["double"::S,["rr"::S,"n"::S]$Lisp,["ri"::S,"n"::S]$Lisp_
 ,["a"::S,"ia"::S,"n"::S]$Lisp]$Lisp_
 ,["integer"::S,"ia"::S,"n"::S,["intger"::S,"n"::S]$Lisp_
 ,"ifail"::S]$Lisp_
 ]$Lisp,_
 ["rr"::S,"ri"::S,"intger"::S,"a"::S,"ifail"::S]$Lisp,_
 [([iaArg::Any,nArg::Any,ifailArg::Any,aArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ 18: LIWORK  INTEGER Input
+ On entry: the length of the array IWORK, as declared in the
+ (sub)program from which F02FJF is called. Constraint: LIWORK
+ >= 1.
 f02agf(iaArg:Integer,nArg:Integer,ivrArg:Integer,_
 iviArg:Integer,aArg:Matrix DoubleFloat,ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02agf",_
 ["ia"::S,"n"::S,"ivr"::S,"ivi"::S,"ifail"::S_
 ,"rr"::S,"ri"::S,"vr"::S,"vi"::S,"intger"::S_
 ,"a"::S]$Lisp,_
 ["rr"::S,"ri"::S,"vr"::S,"vi"::S,"intger"::S]$Lisp,_
 [["double"::S,["rr"::S,"n"::S]$Lisp,["ri"::S,"n"::S]$Lisp_
 ,["vr"::S,"ivr"::S,"n"::S]$Lisp,["vi"::S,"ivi"::S,"n"::S]$Lisp,_
 ["a"::S,"ia"::S,"n"::S]$Lisp]$Lisp_
 ,["integer"::S,"ia"::S,"n"::S,"ivr"::S,"ivi"::S_
 ,["intger"::S,"n"::S]$Lisp,"ifail"::S]$Lisp_
 ]$Lisp,_
 ["rr"::S,"ri"::S,"vr"::S,"vi"::S,"intger"::S,"a"::S,"ifail"::S]$Lisp,_
 [([iaArg::Any,nArg::Any,ivrArg::Any,iviArg::Any,_
 ifailArg::Any,aArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ 19: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, -1 or 1. Users who are
+ unfamiliar with this parameter should refer to the Essential
+ Introduction for details.
 f02ajf(iarArg:Integer,iaiArg:Integer,nArg:Integer,_
 arArg:Matrix DoubleFloat,aiArg:Matrix DoubleFloat,_
 ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02ajf",_
 ["iar"::S,"iai"::S,"n"::S,"ifail"::S,"rr"::S,"ri"::S,_
 "ar"::S,"ai"::S,"intger"::S_
 ]$Lisp,_
 ["rr"::S,"ri"::S,"intger"::S]$Lisp,_
 [["double"::S,["rr"::S,"n"::S]$Lisp,["ri"::S,"n"::S]$Lisp_
 ,["ar"::S,"iar"::S,"n"::S]$Lisp,["ai"::S,"iai"::S,"n"::S]$Lisp]$Lisp_
 ,["integer"::S,"iar"::S,"iai"::S,"n"::S,"ifail"::S_
 ,["intger"::S,"n"::S]$Lisp]$Lisp_
 ]$Lisp,_
 ["rr"::S,"ri"::S,"ar"::S,"ai"::S,"ifail"::S]$Lisp,_
 [([iarArg::Any,iaiArg::Any,nArg::Any,ifailArg::Any,_
 arArg::Any,aiArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ On exit: IFAIL = 0 unless the routine detects an error or
+ gives a warning (see Section 6).
 f02akf(iarArg:Integer,iaiArg:Integer,nArg:Integer,_
 ivrArg:Integer,iviArg:Integer,arArg:Matrix DoubleFloat,_
 aiArg:Matrix DoubleFloat,ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02akf",_
 ["iar"::S,"iai"::S,"n"::S,"ivr"::S,"ivi"::S_
 ,"ifail"::S,"rr"::S,"ri"::S,"vr"::S,"vi"::S,"ar"::S_
 ,"ai"::S,"intger"::S]$Lisp,_
 ["rr"::S,"ri"::S,"vr"::S,"vi"::S,"intger"::S]$Lisp,_
 [["double"::S,["rr"::S,"n"::S]$Lisp,["ri"::S,"n"::S]$Lisp_
 ,["vr"::S,"ivr"::S,"n"::S]$Lisp,["vi"::S,"ivi"::S,"n"::S]$Lisp,_
 ["ar"::S,"iar"::S,"n"::S]$Lisp,["ai"::S,"iai"::S,"n"::S]$Lisp_
 ]$Lisp_
 ,["integer"::S,"iar"::S,"iai"::S,"n"::S,"ivr"::S_
 ,"ivi"::S,"ifail"::S,["intger"::S,"n"::S]$Lisp]$Lisp_
 ]$Lisp,_
 ["rr"::S,"ri"::S,"vr"::S,"vi"::S,"ar"::S,"ai"::S,"ifail"::S]$Lisp,_
 [([iarArg::Any,iaiArg::Any,nArg::Any,ivrArg::Any,iviArg::Any,_
 ifailArg::Any,arArg::Any,aiArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ For this routine, because the values of output parameters
+ may be useful even if IFAIL /= 0 on exit, users are
+ recommended to set IFAIL to -1 before entry. It is then
+ essential to test the value of IFAIL on exit. To suppress
+ the output of an error message when soft failure occurs, set
+ IFAIL to 1.
 f02awf(iarArg:Integer,iaiArg:Integer,nArg:Integer,_
 arArg:Matrix DoubleFloat,aiArg:Matrix DoubleFloat,_
 ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02awf",_
 ["iar"::S,"iai"::S,"n"::S,"ifail"::S,"r"::S,"ar"::S,"ai"::S,_
 "wk1"::S,"wk2"::S_
 ,"wk3"::S]$Lisp,_
 ["r"::S,"wk1"::S,"wk2"::S,"wk3"::S]$Lisp,_
 [["double"::S,["r"::S,"n"::S]$Lisp,["ar"::S,"iar"::S,"n"::S]$Lisp_
 ,["ai"::S,"iai"::S,"n"::S]$Lisp,["wk1"::S,"n"::S]$Lisp,_
 ["wk2"::S,"n"::S]$Lisp,["wk3"::S,"n"::S]$Lisp_
 ]$Lisp_
 ,["integer"::S,"iar"::S,"iai"::S,"n"::S,"ifail"::S_
 ]$Lisp_
 ]$Lisp,_
 ["r"::S,"ar"::S,"ai"::S,"ifail"::S]$Lisp,_
 [([iarArg::Any,iaiArg::Any,nArg::Any,ifailArg::Any,arArg::Any,_
 aiArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ 6. Error Indicators and Warnings
 f02axf(arArg:Matrix DoubleFloat,iarArg:Integer,aiArg:Matrix DoubleFloat,_
 iaiArg:Integer,nArg:Integer,ivrArg:Integer,_
 iviArg:Integer,ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02axf",_
 ["iar"::S,"iai"::S,"n"::S,"ivr"::S,"ivi"::S_
 ,"ifail"::S,"ar"::S,"ai"::S,"r"::S,"vr"::S,"vi"::S_
 ,"wk1"::S,"wk2"::S,"wk3"::S]$Lisp,_
 ["r"::S,"vr"::S,"vi"::S,"wk1"::S,"wk2"::S,"wk3"::S]$Lisp,_
 [["double"::S,["ar"::S,"iar"::S,"n"::S]$Lisp_
 ,["ai"::S,"iai"::S,"n"::S]$Lisp,["r"::S,"n"::S]$Lisp,_
 ["vr"::S,"ivr"::S,"n"::S]$Lisp,["vi"::S,"ivi"::S,"n"::S]$Lisp,_
 ["wk1"::S,"n"::S]$Lisp_
 ,["wk2"::S,"n"::S]$Lisp,["wk3"::S,"n"::S]$Lisp]$Lisp_
 ,["integer"::S,"iar"::S,"iai"::S,"n"::S,"ivr"::S_
 ,"ivi"::S,"ifail"::S]$Lisp_
 ]$Lisp,_
 ["r"::S,"vr"::S,"vi"::S,"ifail"::S]$Lisp,_
 [([iarArg::Any,iaiArg::Any,nArg::Any,ivrArg::Any,iviArg::Any,_
 ifailArg::Any,arArg::Any,aiArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ Errors or warnings specified by the routine:
 f02bbf(iaArg:Integer,nArg:Integer,albArg:DoubleFloat,_
 ubArg:DoubleFloat,mArg:Integer,ivArg:Integer,_
 aArg:Matrix DoubleFloat,ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02bbf",_
 ["ia"::S,"n"::S,"alb"::S,"ub"::S,"m"::S_
 ,"iv"::S,"mm"::S,"ifail"::S,"r"::S,"v"::S,"icount"::S,"a"::S,"d"::S_
 ,"e"::S,"e2"::S,"x"::S,"g"::S,"c"::S_
 ]$Lisp,_
 ["mm"::S,"r"::S,"v"::S,"icount"::S,"d"::S,"e"::S,"e2"::S,"x"::S,_
 "g"::S,"c"::S]$Lisp,_
 [["double"::S,"alb"::S,"ub"::S,["r"::S,"m"::S]$Lisp_
 ,["v"::S,"iv"::S,"m"::S]$Lisp,["a"::S,"ia"::S,"n"::S]$Lisp,_
 ["d"::S,"n"::S]$Lisp,["e"::S,"n"::S]$Lisp,["e2"::S,"n"::S]$Lisp_
 ,["x"::S,"n"::S,7$Lisp]$Lisp,["g"::S,"n"::S]$Lisp]$Lisp_
 ,["integer"::S,"ia"::S,"n"::S,"m"::S,"iv"::S_
 ,"mm"::S,["icount"::S,"m"::S]$Lisp,"ifail"::S]$Lisp_
 ,["logical"::S,["c"::S,"n"::S]$Lisp]$Lisp_
 ]$Lisp,_
 ["mm"::S,"r"::S,"v"::S,"icount"::S,"a"::S,"ifail"::S]$Lisp,_
 [([iaArg::Any,nArg::Any,albArg::Any,ubArg::Any,mArg::Any,_
 ivArg::Any,ifailArg::Any,aArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
 f02bjf(nArg:Integer,iaArg:Integer,ibArg:Integer,_
 eps1Arg:DoubleFloat,matvArg:Boolean,ivArg:Integer,_
 aArg:Matrix DoubleFloat,bArg:Matrix DoubleFloat,_
 ifailArg:Integer): Result ==
 [(invokeNagman(NIL$Lisp,_
 "f02bjf",_
 ["n"::S,"ia"::S,"ib"::S,"eps1"::S,"matv"::S_
 ,"iv"::S,"ifail"::S,"alfr"::S,"alfi"::S,"beta"::S,"v"::S,"iter"::S_
 ,"a"::S,"b"::S]$Lisp,_
 ["alfr"::S,"alfi"::S,"beta"::S,"v"::S,"iter"::S]$Lisp,_
 [["double"::S,"eps1"::S,["alfr"::S,"n"::S]$Lisp_
 ,["alfi"::S,"n"::S]$Lisp,["beta"::S,"n"::S]$Lisp,_
 ["v"::S,"iv"::S,"n"::S]$Lisp,["a"::S,"ia"::S,"n"::S]$Lisp,_
 ["b"::S,"ib"::S,"n"::S]$Lisp_
 ]$Lisp_
 ,["integer"::S,"n"::S,"ia"::S,"ib"::S,"iv"::S_
 ,["iter"::S,"n"::S]$Lisp,"ifail"::S]$Lisp_
 ,["logical"::S,"matv"::S]$Lisp_
 ]$Lisp,_
 ["alfr"::S,"alfi"::S,"beta"::S,"v"::S,"iter"::S,"a"::S,"b"::S,_
 "ifail"::S]$Lisp,_
 [([nArg::Any,iaArg::Any,ibArg::Any,eps1Arg::Any,matvArg::Any,_
 ivArg::Any,ifailArg::Any,aArg::Any,bArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ IFAIL < 0
+ A negative value of IFAIL indicates an exit from F02FJF
+ because the user has set IFLAG negative in DOT or IMAGE. The
+ value of IFAIL will be the same as the user's setting of
+ IFLAG.
 f02fjf(nArg:Integer,kArg:Integer,tolArg:DoubleFloat,_
 novecsArg:Integer,nrxArg:Integer,lworkArg:Integer,_
 lrworkArg:Integer,liworkArg:Integer,mArg:Integer,_
 noitsArg:Integer,xArg:Matrix DoubleFloat,ifailArg:Integer,_
 dotArg:Union(fn:FileName,fp:Asp27(DOT)),_
 imageArg:Union(fn:FileName,fp:Asp28(IMAGE))): Result ==
 pushFortranOutputStack(dotFilename := aspFilename "dot")$FOP
 if dotArg case fn
 then outputAsFortran(dotArg.fn)
 else outputAsFortran(dotArg.fp)
 popFortranOutputStack()$FOP
 pushFortranOutputStack(imageFilename := aspFilename "image")$FOP
 if imageArg case fn
 then outputAsFortran(imageArg.fn)
 else outputAsFortran(imageArg.fp)
 popFortranOutputStack()$FOP
 pushFortranOutputStack(monitFilename := aspFilename "monit")$FOP
 outputAsFortran()$Asp29(MONIT)
 popFortranOutputStack()$FOP
 [(invokeNagman([dotFilename,imageFilename,monitFilename]$Lisp,_
 "f02fjf",_
 ["n"::S,"k"::S,"tol"::S,"novecs"::S,"nrx"::S_
 ,"lwork"::S,"lrwork"::S,"liwork"::S,"m"::S,"noits"::S_
 ,"ifail"::S,"dot"::S,"image"::S,"monit"::S,"d"::S,"x"::S,_
 "work"::S,"rwork"::S,"iwork"::S_
 ]$Lisp,_
 ["d"::S,"work"::S,"rwork"::S,"iwork"::S,"dot"::S,"image"::S,_
 "monit"::S]$Lisp,_
 [["double"::S,"tol"::S,["d"::S,"k"::S]$Lisp_
 ,["x"::S,"nrx"::S,"k"::S]$Lisp,["work"::S,"lwork"::S]$Lisp,_
 ["rwork"::S,"lrwork"::S]$Lisp,"dot"::S,"image"::S,"monit"::S_
 ]$Lisp_
 ,["integer"::S,"n"::S,"k"::S,"novecs"::S,"nrx"::S_
 ,"lwork"::S,"lrwork"::S,"liwork"::S,"m"::S,"noits"::S,"ifail"::S,_
 ["iwork"::S,"liwork"::S]$Lisp]$Lisp_
 ]$Lisp,_
 ["d"::S,"m"::S,"noits"::S,"x"::S,"ifail"::S]$Lisp,_
 [([nArg::Any,kArg::Any,tolArg::Any,novecsArg::Any,nrxArg::Any,_
 lworkArg::Any,lrworkArg::Any,liworkArg::Any,mArg::Any,_
 noitsArg::Any,ifailArg::Any,xArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ IFAIL = 1
+ On entry N < 1,
 f02fjf(nArg:Integer,kArg:Integer,tolArg:DoubleFloat,_
 novecsArg:Integer,nrxArg:Integer,lworkArg:Integer,_
 lrworkArg:Integer,liworkArg:Integer,mArg:Integer,_
 noitsArg:Integer,xArg:Matrix DoubleFloat,ifailArg:Integer,_
 dotArg:Union(fn:FileName,fp:Asp27(DOT)),_
 imageArg:Union(fn:FileName,fp:Asp28(IMAGE)),_
 monitArg:FileName): Result ==
 pushFortranOutputStack(dotFilename := aspFilename "dot")$FOP
 if dotArg case fn
 then outputAsFortran(dotArg.fn)
 else outputAsFortran(dotArg.fp)
 popFortranOutputStack()$FOP
 pushFortranOutputStack(imageFilename := aspFilename "image")$FOP
 if imageArg case fn
 then outputAsFortran(imageArg.fn)
 else outputAsFortran(imageArg.fp)
 popFortranOutputStack()$FOP
 pushFortranOutputStack(monitFilename := aspFilename "monit")$FOP
 outputAsFortran(monitArg)
 [(invokeNagman([dotFilename,imageFilename,monitFilename]$Lisp,_
 "f02fjf",_
 ["n"::S,"k"::S,"tol"::S,"novecs"::S,"nrx"::S_
 ,"lwork"::S,"lrwork"::S,"liwork"::S,"m"::S,"noits"::S_
 ,"ifail"::S,"dot"::S,"image"::S,"monit"::S,"d"::S,"x"::S,_
 "work"::S,"rwork"::S,"iwork"::S_
 ]$Lisp,_
 ["d"::S,"work"::S,"rwork"::S,"iwork"::S,"dot"::S,"image"::S,_
 "monit"::S]$Lisp,_
 [["double"::S,"tol"::S,["d"::S,"k"::S]$Lisp_
 ,["x"::S,"nrx"::S,"k"::S]$Lisp,["work"::S,"lwork"::S]$Lisp,_
 ["rwork"::S,"lrwork"::S]$Lisp,"dot"::S,"image"::S,"monit"::S_
 ]$Lisp_
 ,["integer"::S,"n"::S,"k"::S,"novecs"::S,"nrx"::S_
 ,"lwork"::S,"lrwork"::S,"liwork"::S,"m"::S,"noits"::S,"ifail"::S,_
 ["iwork"::S,"liwork"::S]$Lisp]$Lisp_
 ]$Lisp,_
 ["d"::S,"m"::S,"noits"::S,"x"::S,"ifail"::S]$Lisp,_
 [([nArg::Any,kArg::Any,tolArg::Any,novecsArg::Any,nrxArg::Any,_
 lworkArg::Any,lrworkArg::Any,liworkArg::Any,mArg::Any,_
 noitsArg::Any,ifailArg::Any,xArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ or M < 1,
 f02wef(mArg:Integer,nArg:Integer,ldaArg:Integer,_
 ncolbArg:Integer,ldbArg:Integer,wantqArg:Boolean,_
 ldqArg:Integer,wantpArg:Boolean,ldptArg:Integer,_
 aArg:Matrix DoubleFloat,bArg:Matrix DoubleFloat,_
 ifailArg:Integer): Result ==
 workLength : Integer :=
 mArg >= nArg =>
 wantqArg and wantpArg =>
 max(max(nArg**2 + 5*(nArg  1),nArg + ncolbArg),4)
 wantqArg =>
 max(max(nArg**2 + 4*(nArg  1),nArg + ncolbArg),4)
 wantpArg =>
 zero? ncolbArg => max(3*(nArg  1),2)
 max(5*(nArg  1),2)
 zero? ncolbArg => max(2*(nArg  1),2)
 max(3*(nArg  1),2)
 wantqArg and wantpArg =>
 max(mArg**2 + 5*(mArg  1),2)
 wantqArg =>
 max(3*(mArg  1),1)
 wantpArg =>
 zero? ncolbArg => max(mArg**2+3*(mArg  1),2)
 max(mArg**2+5*(mArg  1),2)
 zero? ncolbArg => max(2*(mArg  1),1)
 max(3*(mArg  1),1)
+ or M >= K,
 [(invokeNagman(NIL$Lisp,_
 "f02wef",_
 ["m"::S,"n"::S,"lda"::S,"ncolb"::S,"ldb"::S_
 ,"wantq"::S,"ldq"::S,"wantp"::S,"ldpt"::S,"ifail"::S_
 ,"q"::S,"sv"::S,"pt"::S,"work"::S,"a"::S_
 ,"b"::S]$Lisp,_
 ["q"::S,"sv"::S,"pt"::S,"work"::S]$Lisp,_
 [["double"::S,["q"::S,"ldq"::S,"m"::S]$Lisp_
 ,["sv"::S,"m"::S]$Lisp,["pt"::S,"ldpt"::S,"n"::S]$Lisp,_
 ["work"::S,workLength]$Lisp,["a"::S,"lda"::S,"n"::S]$Lisp,_
 ["b"::S,"ldb"::S,"ncolb"::S]$Lisp_
 ]$Lisp_
 ,["integer"::S,"m"::S,"n"::S,"lda"::S,"ncolb"::S_
 ,"ldb"::S,"ldq"::S,"ldpt"::S,"ifail"::S]$Lisp_
 ,["logical"::S,"wantq"::S,"wantp"::S]$Lisp_
 ]$Lisp,_
 ["q"::S,"sv"::S,"pt"::S,"work"::S,"a"::S,"b"::S,"ifail"::S]$Lisp,_
 [([mArg::Any,nArg::Any,ldaArg::Any,ncolbArg::Any,ldbArg::Any,_
 wantqArg::Any,ldqArg::Any,wantpArg::Any,ldptArg::Any,_
 ifailArg::Any,aArg::Any,bArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ or K > N,
 f02xef(mArg:Integer,nArg:Integer,ldaArg:Integer,_
 ncolbArg:Integer,ldbArg:Integer,wantqArg:Boolean,_
 ldqArg:Integer,wantpArg:Boolean,ldphArg:Integer,_
 aArg:Matrix Complex DoubleFloat,bArg:Matrix Complex DoubleFloat,_
 ifailArg:Integer): Result ==
  This segment added by hand, to deal with an assumed size array GDN
 tem : Integer := (min(mArg,nArg)1)
 rLen : Integer :=
 zero? ncolbArg and not wantqArg and not wantpArg => 2*tem
 zero? ncolbArg and wantpArg and not wantqArg => 3*tem
 not wantpArg =>
 ncolbArg >0 or wantqArg => 3*tem
 5*tem
 cLen : Integer :=
 mArg >= nArg =>
 wantqArg and wantpArg => 2*(nArg + max(nArg**2,ncolbArg))
 wantqArg and not wantpArg => 2*(nArg + max(nArg**2+nArg,ncolbArg))
 2*(nArg + max(nArg,ncolbArg))
 wantpArg => 2*(mArg**2 + mArg)
 2*mArg
 svLength : Integer :=
 min(mArg,nArg)
 [(invokeNagman(NIL$Lisp,_
 "f02xef",_
 ["m"::S,"n"::S,"lda"::S,"ncolb"::S,"ldb"::S_
 ,"wantq"::S,"ldq"::S,"wantp"::S,"ldph"::S,"ifail"::S_
 ,"q"::S,"sv"::S,"ph"::S,"rwork"::S,"a"::S_
 ,"b"::S,"cwork"::S]$Lisp,_
 ["q"::S,"sv"::S,"ph"::S,"rwork"::S,"cwork"::S]$Lisp,_
 [["double"::S,["sv"::S,svLength]$Lisp,["rwork"::S,rLen]$Lisp_
 ]$Lisp_
 ,["integer"::S,"m"::S,"n"::S,"lda"::S,"ncolb"::S_
 ,"ldb"::S,"ldq"::S,"ldph"::S,"ifail"::S]$Lisp_
 ,["logical"::S,"wantq"::S,"wantp"::S]$Lisp_
 ,["double complex"::S,["q"::S,"ldq"::S,"m"::S]$Lisp,_
 ["ph"::S,"ldph"::S,"n"::S]$Lisp,["a"::S,"lda"::S,"n"::S]$Lisp,_
 ["b"::S,"ldb"::S,"ncolb"::S]$Lisp,["cwork"::S,cLen]$Lisp]$Lisp_
 ]$Lisp,_
 ["q"::S,"sv"::S,"ph"::S,"rwork"::S,"a"::S,"b"::S,"ifail"::S]$Lisp,_
 [([mArg::Any,nArg::Any,ldaArg::Any,ncolbArg::Any,ldbArg::Any,_
 wantqArg::Any,ldqArg::Any,wantpArg::Any,ldphArg::Any,_
 ifailArg::Any,aArg::Any,bArg::Any ])_
 @List Any]$Lisp)$Lisp)_
 pretend List (Record(key:Symbol,entry:Any))]$Result
+ or NRX < N,
\end{chunk}
\begin{chunk}{NAGF02.dotabb}
"NAGF02" [color="#FF4488",href="bookvol10.4.pdf#nameddest=NAGF02"]
"ALIST" [color="#88FF44",href="bookvol10.3.pdf#nameddest=ALIST"]
"NAGE02" > "ALIST"
+ or LWORK < 3*K+max(K*K,N),
\end{chunk}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{package NAGE02 NagFittingPackage}
\begin{chunk}{NagFittingPackage.input}
)set break resume
)sys rm f NagFittingPackage.output
)spool NagFittingPackage.output
)set message test on
)set message auto off
)clear all
+ or LRWORK < 1,
S 1 of 215
)show NagFittingPackage
R
R NagFittingPackage is a package constructor
R Abbreviation for NagFittingPackage is NAGE02
R This constructor is exposed in this frame.
R Issue )edit bookvol10.4.pamphlet to see algebra source code for NAGE02
R
R Operations 
R e02adf : (Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) > Result
R e02aef : (Integer,Matrix(DoubleFloat),DoubleFloat,Integer) > Result
R e02agf : (Integer,Integer,Integer,DoubleFloat,DoubleFloat,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer,Matrix(Integer),Integer,Integer,Integer) > Result
R e02ahf : (Integer,DoubleFloat,DoubleFloat,Matrix(DoubleFloat),Integer,Integer,Integer,Integer,Integer) > Result
R e02ajf : (Integer,DoubleFloat,DoubleFloat,Matrix(DoubleFloat),Integer,Integer,DoubleFloat,Integer,Integer,Integer) > Result
R e02akf : (Integer,DoubleFloat,DoubleFloat,Matrix(DoubleFloat),Integer,Integer,DoubleFloat,Integer) > Result
R e02baf : (Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) > Result
R e02bbf : (Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),DoubleFloat,Integer) > Result
R e02bcf : (Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),DoubleFloat,Integer,Integer) > Result
R e02bdf : (Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) > Result
R e02bef : (String,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),DoubleFloat,Integer,Integer,Integer,Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(Integer)) > Result
R e02daf : (Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(Integer),Integer,Integer,Integer,DoubleFloat,Matrix(DoubleFloat),Integer) > Result
R e02dcf : (String,Integer,Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),DoubleFloat,Integer,Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(Integer),Integer) > Result
R e02ddf : (String,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),DoubleFloat,Integer,Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) > Result
R e02def : (Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) > Result
R e02dff : (Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Integer,Integer,Integer) > Result
R e02gaf : (Integer,Integer,Integer,DoubleFloat,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) > Result
R e02zaf : (Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer,Integer,Integer) > Result
R
E 1
+ or LIWORK < 1.
)clear all

+ IFAIL= 2
+ Not all the requested eigenvalues and vectors have been
+ obtained. Approximations to the rth eigenvalue are
+ oscillating rapidly indicating that severe cancellation is
+ occurring in the rth eigenvector and so M is returned as
+ (r-1). A restart with a larger value of K may permit
+ convergence.
S 2 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 2
+ IFAIL= 3
+ Not all the requested eigenvalues and vectors have been
+ obtained. The rate of convergence of the remaining
+ eigenvectors suggests that more than NOITS iterations would
+ be required and so the input value of M has been reduced. A
+ restart with a larger value of K may permit convergence.
S 3 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 3
+ IFAIL= 4
+ Not all the requested eigenvalues and vectors have been
+ obtained. NOITS iterations have been performed. A restart,
+ possibly with a larger value of K, may permit convergence.
S 4 of 215
m:=11
R
R
R (3) 11
R Type: PositiveInteger
E 4
+ IFAIL= 5
+ This error is very unlikely to occur, but indicates that
+ convergence of the eigenvalue subproblem has not taken
+ place. Restarting with a different set of approximate
+ vectors may allow convergence. If this error occurs the user
+ should check carefully that F02FJF is being called
+ correctly.
S 5 of 215
kplus1:=4
R
R
R (4) 4
R Type: PositiveInteger
E 5
+ 7. Accuracy
S 6 of 215
nrows:=50
R
R
R (5) 50
R Type: PositiveInteger
E 6
+ Eigenvalues and eigenvectors will normally be computed to the
+ accuracy requested by the parameter TOL, but eigenvectors
+ corresponding to small or to close eigenvalues may not always be
+ computed to the accuracy requested by the parameter TOL. Use of
+ the routine MONIT to monitor acceptance of eigenvalues and
+ eigenvectors is recommended.
S 7 of 215
x:Matrix SF:=
 [[1.00 ,2.10 ,3.10 ,3.90 ,4.90 ,5.80 ,_
 6.50 ,7.10 ,7.80 ,8.40 ,9.00 ]]
R
R
R (6)
R [
R [1., 2.0999999999999996, 3.0999999999999996, 3.8999999999999999,
R 4.9000000000000004, 5.7999999999999998, 6.5, 7.0999999999999996,
R 7.7999999999999998, 8.3999999999999986, 9.]
R ]
R Type: Matrix(DoubleFloat)
E 7
+ 8. Further Comments
S 8 of 215
y:Matrix SF:=
 [[10.40 ,7.90 ,4.70 ,2.50 ,1.20 ,2.20 ,_
 5.10 ,9.20 ,16.10 ,24.50 ,35.30 ]]
R
R
R (7)
R [
R [10.399999999999999, 7.9000000000000004, 4.6999999999999993, 2.5, 1.2,
R 2.2000000000000002, 5.0999999999999996, 9.1999999999999993,
R 16.100000000000001, 24.5, 35.299999999999997]
R ]
R Type: Matrix(DoubleFloat)
E 8
+ The time taken by the routine will be principally determined by
+ the time taken to solve the eigenvalue subproblem and the time
+ taken by the routines DOT and IMAGE. The time taken to solve an
+ eigenvalue subproblem is approximately proportional to n*k**2. It
+ is important to be aware that several calls to DOT and IMAGE may
+ occur on each major iteration.
S 9 of 215
w:Matrix SF:=
 [[1.00 ,1.00 ,1.00 ,1.00 ,1.00 ,0.80 ,_
 0.80 ,0.70 ,0.50 ,0.30 ,0.20 ]]
R
R
R (8)
R [
R [1., 1., 1., 1., 1., 0.80000000000000004, 0.80000000000000004,
R 0.69999999999999996, 0.5, 0.29999999999999999, 0.20000000000000001]
R ]
R Type: Matrix(DoubleFloat)
E 9
+ As can be seen from Table 3.1, many applications of F02FJF will
+ require routine IMAGE to solve a system of linear equations. For
+ example, to find the smallest eigenvalues of Ax=(lambda)Bx, IMAGE
+ needs to solve equations of the form Aw=Bz for w and routines
+ from Chapters F01 and F04 of the NAG Foundation Library will
+ frequently be useful in this context. In particular, if A is a
+ positive-definite variable band matrix, F04MCF may be used after
+ A has been factorized by F01MCF. Thus factorization need be
+ performed only once prior to calling F02FJF. An illustration of
+ this type of use is given in the example program in Section 9.
S 10 of 215
 result:=e02adf(m,kplus1,nrows,x,y,w,1)
E 10
+ An approximation d~_i to the ith eigenvalue is accepted as soon
+ as d~_i and the previous approximation differ by less than
+ d~_i*TOL/10. Eigenvectors are accepted in groups corresponding to
+ clusters of eigenvalues that are equal, or nearly equal, in
+ absolute value and that have already been accepted. If d_r is the
+ last eigenvalue in such a group and we define the residual r_j as
)clear all

+ r_j = C x_j - y_r
S 11 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 11
+ where y_r is the projection of C x_j, with respect to B, onto the
+ space spanned by x_1,x_2,...,x_r and x_j is the current
+ approximation to the jth eigenvector, then the value f_i returned
+ in MONIT is given by
S 12 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 12
+ f_i = max_j ( ||r_j||_B / ||C x_j||_B ),  where ||x||_B**2 = x^T B x
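The monitoring quantity above can be sketched in plain Python (an illustration only, not NAG code; `residuals` and `images` stand for the vectors r_j and C x_j that F02FJF would supply to MONIT):

```python
# Illustration of the monitoring quantity f_i defined above, using the
# B-norm ||x||_B = sqrt(x^T B x) for symmetric positive-definite B.
import numpy as np

def b_norm(x, B):
    # ||x||_B = sqrt(x^T B x)
    return float(np.sqrt(x @ B @ x))

def monitor_f(residuals, images, B):
    # f_i = max over the accepted group of ||r_j||_B / ||C x_j||_B
    return max(b_norm(r, B) / b_norm(c, B) for r, c in zip(residuals, images))
```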
S 13 of 215
nplus1:=5
R
R
R (3) 5
R Type: PositiveInteger
E 13
+ and each vector in the group is accepted as an eigenvector if
S 14 of 215
a:Matrix SF:= [[2.0000 ,0.5000 ,0.2500 ,0.1250 ,0.0625 ]]
R
R
R (4) [2. 0.5 0.25 0.125 6.25E-2]
R Type: Matrix(DoubleFloat)
E 14
+ (d f )/(d e)n,
+ D=S, m=n,
S 35 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 35
+ D=(S 0), m<n, where S is a diagonal matrix whose elements are
+ the singular values sv_1 >= sv_2 >= ... >= sv_min(m,n) >= 0.
S 38 of 215
xmin:= -0.5
R
R
R (4) - 0.5
R Type: Float
E 38
+ The first min(m,n) columns of Q are the left-hand singular
+ vectors of A, the diagonal elements of S are the singular values
+ of A and the first min(m,n) columns of P are the right-hand
+ singular vectors of A.
S 39 of 215
xmax:=2.5
R
R
R (5) 2.5
R Type: Float
E 39
+ Either or both of the left-hand and right-hand singular vectors
+ of A may be requested and the matrix C given by
S 40 of 215
a:Matrix SF:= [[2.53213 ,1.13032 ,0.27150 ,0.04434 ,_
 0.00547 ,0.00054 ,0.00004 ]]
R
R
R (6)
R [
R [2.53213, 1.13032, 0.27149999999999996, 4.4339999999999997E-2,
R 5.4699999999999992E-3, 5.399999999999999E-4, 3.9999999999999996E-5]
R ]
R Type: Matrix(DoubleFloat)
E 40
+ C = Q^T B,
S 41 of 215
ia1:=1
R
R
R (7) 1
R Type: PositiveInteger
E 41
+ where B is an m by ncolb given matrix, may also be requested.
S 42 of 215
la:=7
R
R
R (8) 7
R Type: PositiveInteger
E 42
+ The routine obtains the singular value decomposition by first
+ reducing A to upper triangular form by means of Householder
+ transformations, from the left when m>=n and from the right when
+ m<n.
+ 5. Parameters
+ 1: M - INTEGER Input
+ On entry: the number of rows, m, of the matrix A.
+ Constraint: M >= 0.
S 54 of 215
qatm1:=0.0
R
R
R (9) 0.0
R Type: Float
E 54
+ When M = 0 then an immediate return is effected.
S 55 of 215
iaint1:=1
R
R
R (10) 1
R Type: PositiveInteger
E 55
+ 2: N - INTEGER Input
+ On entry: the number of columns, n, of the matrix A.
+ Constraint: N >= 0.
S 56 of 215
laint:=8
R
R
R (11) 8
R Type: PositiveInteger
E 56
+ When N = 0 then an immediate return is effected.
S 57 of 215
 result:=e02ajf(np1,xmin,xmax,a,ia1,la,qatm1,iaint1,laint, 1)
E 57
+ 3: A(LDA,*) - DOUBLE PRECISION array Input/Output
+ Note: the second dimension of the array A must be at least
+ max(1,N).
+ On entry: the leading m by n part of the array A must
+ contain the matrix A whose singular value decomposition is
+ required. On exit: if M >= N and WANTQ = .TRUE., then the
+ leading m by n part of A will contain the first n columns of
+ the orthogonal matrix Q.
)clear all
+ If M < N and WANTP = .TRUE., then the leading m by n part of
+ A will contain the first m rows of the orthogonal matrix P^T.
+ If M >= N and WANTQ = .FALSE. and WANTP = .TRUE., then the
+ leading n by n part of A will contain the first n rows of
+ the orthogonal matrix P^T.
S 58 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 58
+ Otherwise the leading m by n part of A is used as internal
+ workspace.
S 59 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 59
+ 4: LDA - INTEGER Input
+ On entry:
+ the first dimension of the array A as declared in the
+ (sub)program from which F02WEF is called.
+ Constraint: LDA >= max(1,M).
S 60 of 215
np1:=7
R
R
R (3) 7
R Type: PositiveInteger
E 60
+ 5: NCOLB - INTEGER Input
+ On entry: ncolb, the number of columns of the matrix B.
+ When NCOLB = 0 the array B is not referenced. Constraint:
+ NCOLB >= 0.
S 61 of 215
xmin:= -0.5
R
R
R (4) - 0.5
R Type: Float
E 61
+ 6: B(LDB,*) - DOUBLE PRECISION array Input/Output
+ Note: the second dimension of the array B must be at least
+ max(1,ncolb). On entry: if NCOLB > 0, the leading m by ncolb
+ part of the array B must contain the matrix to be
+ transformed. On exit: B is overwritten by the m by ncolb
+ matrix Q^T B.
S 62 of 215
xmax:=2.5
R
R
R (5) 2.5
R Type: Float
E 62
+ 7: LDB - INTEGER Input
+ On entry:
+ the first dimension of the array B as declared in the
+ (sub)program from which F02WEF is called.
+ Constraint: if NCOLB > 0 then LDB >= max(1,M).
S 63 of 215
a:Matrix SF:=
 [[2.53213 ,1.13032 ,0.27150 ,0.04434 ,0.00547 ,0.00054 ,0.00004 ]]
R
R
R (6)
R [
R [2.53213, 1.13032, 0.27149999999999996, 4.4339999999999997E-2,
R 5.4699999999999992E-3, 5.399999999999999E-4, 3.9999999999999996E-5]
R ]
R Type: Matrix(DoubleFloat)
E 63
+ 8: WANTQ - LOGICAL Input
+ On entry: WANTQ must be .TRUE. if the left-hand singular
+ vectors are required. If WANTQ = .FALSE., then the array Q
+ is not referenced.
S 64 of 215
ia1:=1
R
R
R (7) 1
R Type: PositiveInteger
E 64
+ 9: Q(LDQ,*) - DOUBLE PRECISION array Output
+ Note: the second dimension of the array Q must be at least
+ max(1,M).
+ On exit: if M < N and WANTQ = .TRUE., the leading m by m
+ part of the array Q will contain the orthogonal matrix Q.
+ Otherwise the array Q is not referenced.
S 65 of 215
la:=7
R
R
R (8) 7
R Type: PositiveInteger
E 65
+ 10: LDQ - INTEGER Input
+ On entry:
+ the first dimension of the array Q as declared in the
+ (sub)program from which F02WEF is called.
+ Constraint: if M < N and WANTQ = .TRUE., LDQ >= max(1,M).
S 66 of 215
x:= -0.5
R
R
R (9) - 0.5
R Type: Float
E 66
+ 11: SV(*) - DOUBLE PRECISION array Output
+ Note: the length of SV must be at least min(M,N). On exit:
+ the min(M,N) diagonal elements of the matrix S.
S 67 of 215
 result:=e02akf(np1,xmin,xmax,a,ia1,la,x, 1)
E 67
+ 12: WANTP - LOGICAL Input
+ On entry: WANTP must be .TRUE. if the right-hand singular
+ vectors are required. If WANTP = .FALSE., then the array PT
+ is not referenced.
)clear all
+ 13: PT(LDPT,*) - DOUBLE PRECISION array Output
+ Note: the second dimension of the array PT must be at least
+ max(1,N).
+ On exit: if M >= N and WANTQ and WANTP are .TRUE., the
+ leading n by n part of the array PT will contain the
+ orthogonal matrix P^T. Otherwise the array PT is not
+ referenced.
+ 14: LDPT - INTEGER Input
+ On entry:
+ the first dimension of the array PT as declared in the
+ (sub)program from which F02WEF is called.
+ Constraint: if M >= N and WANTQ and WANTP are .TRUE., LDPT
+ >= max(1,N).
S 68 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 68
+ 15: WORK(*) - DOUBLE PRECISION array Output
+ Note: the length of WORK must be at least max(1,lwork),
+ where lwork must be as given in the following table:
S 69 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 69
+ M >= N
+ WANTQ = .TRUE. and WANTP = .TRUE.
+ lwork=max(N**2+5*(N-1),N+NCOLB,4)
S 70 of 215
m:=14
R
R
R (3) 14
R Type: PositiveInteger
E 70
+ WANTQ = .TRUE. and WANTP = .FALSE.
+ lwork=max(N**2+4*(N-1),N+NCOLB,4)
S 71 of 215
ncap7:=12
R
R
R (4) 12
R Type: PositiveInteger
E 71
+ WANTQ = .FALSE. and WANTP = .TRUE.
+ lwork=max(3*(N-1),2) when NCOLB = 0
S 72 of 215
x:Matrix SF:=
 [[0.20 ,0.47 ,0.74 ,1.09 ,1.60 ,1.90 ,2.60 ,3.10 ,4.00 ,5.15,_
 6.17 ,8.00 ,10.00 ,12.00 ]]
R
R
R (5)
R [
R [0.20000000000000001, 0.46999999999999997, 0.73999999999999999,
R 1.0899999999999999, 1.6000000000000001, 1.8999999999999999,
R 2.5999999999999996, 3.0999999999999996, 4., 5.1500000000000004,
R 6.1699999999999999, 8., 10., 12.]
R ]
R Type: Matrix(DoubleFloat)
E 72
+ lwork=max(5*(N-1),2) when NCOLB > 0
S 73 of 215
y:Matrix SF:=
 [[0.00 ,2.00 ,4.00 ,6.00 ,8.00 ,8.62 ,9.10 ,8.90,_
 8.15 ,7.00 ,6.00 ,4.54 ,3.39 ,2.56 ]]
R
R
R (6)
R [
R [0., 2., 4., 6., 8., 8.6199999999999992, 9.0999999999999996,
R 8.8999999999999986, 8.1499999999999986, 7., 6., 4.5399999999999991,
R 3.3899999999999997, 2.5599999999999996]
R ]
R Type: Matrix(DoubleFloat)
E 73
+ WANTQ = .FALSE. and WANTP = .FALSE.
+ lwork=max(2*(N-1),2) when NCOLB = 0
S 74 of 215
w:Matrix SF:=
 [[0.20 ,0.20 ,0.30 ,0.70 ,0.90 ,1.00 ,_
 1.00 ,1.00 ,0.80 ,0.50 ,0.70 ,1.00 ,1.00 ,1.00 ]]
R
R
R (7)
R [
R [0.20000000000000001, 0.20000000000000001, 0.29999999999999999,
R 0.69999999999999996, 0.89999999999999991, 1., 1., 1.,
R 0.80000000000000004, 0.5, 0.69999999999999996, 1., 1., 1.]
R ]
R Type: Matrix(DoubleFloat)
E 74
+ lwork=max(3*(N-1),2) when NCOLB > 0
S 75 of 215
lamda:Matrix SF:=
 [[0.0 ,0.0 ,0.0 ,0.0 ,1.50 ,2.60 ,_
 4.00 ,8.00 ,0.0 ,0.0 ,0.0 ,0.0 ]]
R
R
R (8) [0. 0. 0. 0. 1.5 2.5999999999999996 4. 8. 0. 0. 0. 0.]
R Type: Matrix(DoubleFloat)
E 75
+ M < N
+ WANTQ = .TRUE. and WANTP = .TRUE.
+ lwork=max(M**2+5*(M-1),2)
S 76 of 215
 result:=e02baf(m,ncap7,x,y,w,lamda, 1)
E 76
+ WANTQ = .TRUE. and WANTP = .FALSE.
+ lwork=max(3*(M-1),1)
)clear all
+ WANTQ = .FALSE. and WANTP = .TRUE.
+ lwork=max(M**2+3*(M-1),2) when NCOLB = 0
+ lwork=max(M**2+5*(M-1),2) when NCOLB > 0
S 77 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 77
+ WANTQ = .FALSE. and WANTP = .FALSE.
+ lwork=max(2*(M-1),1) when NCOLB = 0
S 78 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 78
+ lwork=max(3*(M-1),1) when NCOLB > 0
+ On exit: WORK(min(M,N)) contains the total number of
+ iterations taken by the QR algorithm.
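The workspace table above can be collected into a single rule. The following is an illustrative sketch of the stated minima, not NAG code; the `N**2`/`M**2` and `(N-1)`/`(M-1)` terms are as reconstructed here, so verify against the NAG documentation before relying on them:

```python
# Minimum WORK length for F02WEF, transcribed from the table above.
def f02wef_lwork(m, n, ncolb, wantq, wantp):
    if m >= n:
        if wantq and wantp:
            return max(n*n + 5*(n - 1), n + ncolb, 4)
        if wantq:
            return max(n*n + 4*(n - 1), n + ncolb, 4)
        if wantp:
            return max((5 if ncolb > 0 else 3) * (n - 1), 2)
        return max((3 if ncolb > 0 else 2) * (n - 1), 2)
    # m < n cases
    if wantq and wantp:
        return max(m*m + 5*(m - 1), 2)
    if wantq:
        return max(3*(m - 1), 1)
    if wantp:
        return max(m*m + (5 if ncolb > 0 else 3)*(m - 1), 2)
    return max((3 if ncolb > 0 else 2)*(m - 1), 1)
```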
S 79 of 215
ncap7:=11
R
R
R (3) 11
R Type: PositiveInteger
E 79
+ The rest of the array is used as workspace.
S 80 of 215
lamda:Matrix SF:=
 [[1.00 ,1.00 ,1.00 ,1.00 ,3.00 ,6.00 ,_
 8.00 ,9.00 ,9.00 ,9.00 ,9.00 ]]
R
R
R (4) [1. 1. 1. 1. 3. 6. 8. 9. 9. 9. 9.]
R Type: Matrix(DoubleFloat)
E 80
+ 16: IFAIL - INTEGER Input/Output
+ On entry: IFAIL must be set to 0, -1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
S 81 of 215
c:Matrix SF:=
 [[1.00 ,2.00 ,4.00 ,7.00 ,6.00 ,4.00 ,_
 3.00 ,0.00 ,0.00 ,0.00 ,0.00 ]]
R
R
R (5) [1. 2. 4. 7. 6. 4. 3. 0. 0. 0. 0.]
R Type: Matrix(DoubleFloat)
E 81
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
S 82 of 215
x:=2.0
R
R
R (6) 2.0
R Type: Float
E 82
+ 6. Error Indicators and Warnings
S 83 of 215
 result:=e02bbf(ncap7,lamda,c,x,1)
E 83
+ Errors detected by the routine:
)clear all
+ If on entry IFAIL = 0 or -1, explanatory error messages are
+ output on the current error message unit (as defined by X04AAF).
+ IFAIL= 1
+ One or more of the following conditions holds:
+ M < 0,
S 84 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 84
+ N < 0,
S 85 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 85
+ LDA < M,
S 86 of 215
ncap7:=14
R
R
R (3) 14
R Type: PositiveInteger
E 86
+ NCOLB < 0,
S 87 of 215
lamda:Matrix SF:=
 [[0.0 ,0.00 ,0.00 ,0.00 ,1.00 ,3.00 ,3.00 ,_
 3.00 ,4.00 ,4.00 ,6.00,6.00 ,6.00 ,6.00 ]]
R
R
R (4) [0. 0. 0. 0. 1. 3. 3. 3. 4. 4. 6. 6. 6. 6.]
R Type: Matrix(DoubleFloat)
E 87
+ LDB < M and NCOLB > 0,
S 88 of 215
c:Matrix SF:=
 [[10.00 ,12.00 ,13.00 ,15.00 ,22.00 ,26.00 ,_
 24.00 ,18.00 ,14.00 ,12.00 ,0.00 ,0.00 ,0.00 ,0.00 ]]
R
R
R (5) [10. 12. 13. 15. 22. 26. 24. 18. 14. 12. 0. 0. 0. 0.]
R Type: Matrix(DoubleFloat)
E 88
+ LDQ < M and M < N and WANTQ = .TRUE.,
S 89 of 215
x:=2.0
R
R
R (6) 2.0
R Type: Float
E 89
+ LDPT < N and M >= N and WANTQ = .TRUE. and WANTP = .TRUE..
S 90 of 215
left:=1
R
R
R (7) 1
R Type: PositiveInteger
E 90
+ IFAIL> 0
+ The QR algorithm has failed to converge in 50*min(m,n)
+ iterations. In this case SV(1), SV(2),..., SV(IFAIL) may not
+ have been found correctly and the remaining singular values
+ may not be the smallest. The matrix A will nevertheless have
+ been factorized as A=QEP^T, where the leading min(m,n) by
+ min(m,n) part of E is a bidiagonal matrix with SV(1), SV(2),
+ ..., SV(min(m,n)) as the diagonal elements and WORK(1), WORK
+ (2),..., WORK(min(m,n)-1) as the superdiagonal elements.
S 91 of 215
 result:=e02bcf(ncap7,lamda,c,x,left, 1)
E 91
+ This failure is not likely to occur.
)clear all
+ 7. Accuracy
+ The computed factors Q, D and P satisfy the relation
S 92 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 92
+ Q D P^T = A + E,
S 93 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 93
+ where
S 94 of 215
ncap7:=14
R
R
R (3) 14
R Type: PositiveInteger
E 94
+ ||E|| <= c(epsilon)||A||,
S 95 of 215
lamda:Matrix SF:=
 [[0.0 ,0.00 ,0.00 ,0.00 ,1.00 ,3.00 ,_
 3.00 ,3.00 ,4.00 ,4.00 ,6.00 ,6.00 ,6.00 ,6.00 ]]
R
R
R (4) [0. 0. 0. 0. 1. 3. 3. 3. 4. 4. 6. 6. 6. 6.]
R Type: Matrix(DoubleFloat)
E 95
+ (epsilon) being the machine precision, c is a modest function of
+ m and n and ||.|| denotes the spectral (two) norm. Note that
+ ||A|| = sv_1.
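The backward-error relation above is easy to observe numerically. This is a plain NumPy illustration, not NAG code: the computed factors reproduce A to roughly machine precision, and the spectral norm of A equals the largest singular value sv_1.

```python
import numpy as np

# A small rectangular test matrix; values are arbitrary.
A = np.array([[2.0, 0.5], [0.25, 0.125], [0.0625, 1.0]])
Q, sv, PT = np.linalg.svd(A, full_matrices=False)

# E = Q D P^T - A should be of order eps*||A|| (backward error).
E = Q @ np.diag(sv) @ PT - A
assert np.linalg.norm(E, 2) <= 100 * np.finfo(float).eps * sv[0]

# ||A||_2 = sv_1 (spectral norm equals the largest singular value).
assert abs(np.linalg.norm(A, 2) - sv[0]) <= 1e-12
```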
S 96 of 215
c:Matrix SF:=
 [[10.00 ,12.00 ,13.00 ,15.00 ,22.00 ,26.00 ,_
 24.00 ,18.00 ,14.00 ,12.00 ,0.00 ,0.00 ,0.00 ,0.00 ]]
R
R
R (5) [10. 12. 13. 15. 22. 26. 24. 18. 14. 12. 0. 0. 0. 0.]
R Type: Matrix(DoubleFloat)
E 96
+ 8. Further Comments
S 97 of 215
 result:=e02bdf(ncap7,lamda,c, 1)
E 97
+ Following the use of this routine the rank of A may be estimated
+ by a call to the INTEGER FUNCTION F06KLF(*). The statement:
)clear all

+ IRANK = F06KLF(MIN(M, N), SV, 1, TOL)
S 98 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 98
+ returns the value (k-1) in IRANK, where k is the smallest integer
+ for which SV(k) < TOL*SV(1), so that IRANK is an estimate of the
+ rank of A.
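A plain-Python analogue of the rank estimate that the F06KLF call above computes. This encodes our reading of the description (an assumption, not the F06KLF source): k is the smallest 1-based index with SV(k) < TOL*SV(1), and the estimated rank is k-1.

```python
def estimate_rank(sv, tol):
    # sv: singular values in decreasing order; tol: relative tolerance.
    # Returns k-1, i.e. the number of leading singular values that are
    # not negligible relative to sv[0].
    for i, s in enumerate(sv):
        if s < tol * sv[0]:
            return i
    return len(sv)  # no negligible values: full numerical rank
```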
S 120 of 215
y:Matrix SF:=
 [[0.60 ,-0.95 ,0.87 ,0.84 ,0.17 ,-0.87 ,1 ,0.1 ,0.24 ,-0.77 ,_
 0.32 ,1 ,-0.63 ,-0.66 ,0.93 ,0.15 ,0.99 ,-0.54 ,0.44 ,-0.72 ,_
 0.63 ,-0.40 ,0.20 ,0.43 ,0.28 ,-0.24 ,0.86 ,-0.41 ,-0.05 ,-1 ]]
R
R
R (7)
R [
R [0.59999999999999998, - 0.94999999999999996, 0.87, 0.83999999999999997,
R 0.16999999999999998, - 0.87000000000000011, 1., 0.10000000000000001,
R 0.23999999999999999, - 0.76999999999999991, 0.31999999999999995, 1.,
R - 0.62999999999999989, - 0.65999999999999992, 0.92999999999999994,
R 0.14999999999999999, 0.98999999999999999, - 0.53999999999999992,
R 0.43999999999999995, - 0.71999999999999997, 0.62999999999999989,
R - 0.39999999999999997, 0.20000000000000001, 0.42999999999999999,
R 0.28000000000000003, - 0.23999999999999999, 0.85999999999999999,
R - 0.41000000000000003, - 4.9999999999999996E-2, - 1.]
R ]
R Type: Matrix(DoubleFloat)
E 120
+ D=S, m=n,
S 121 of 215
f:Matrix SF:=
 [[0.93 ,-1.79 ,0.36 ,0.52 ,0.49 ,-1.76 ,0.33 ,0.48 ,0.65 ,_
 -1.82 ,0.92 ,1 ,8.88 ,-2.01 ,0.47 ,0.49 ,0.84 ,-2.42 ,_
 0.47 ,7.15 ,0.44 ,-3.34 ,2.78 ,0.44 ,0.70 ,-6.52 ,0.66 ,_
 2.32 ,1.66 ,-1 ]]
R
R
R (8)
R [
R [0.92999999999999994, - 1.7899999999999998, 0.35999999999999999,
R 0.52000000000000002, 0.48999999999999999, - 1.7599999999999998,
R 0.32999999999999996, 0.47999999999999998, 0.64999999999999991,
R - 1.8199999999999998, 0.91999999999999993, 1., 8.879999999999999,
R - 2.0099999999999998, 0.46999999999999997, 0.48999999999999999,
R 0.83999999999999997, - 2.4199999999999999, 0.46999999999999997,
R 7.1500000000000004, 0.43999999999999995, - 3.3399999999999999,
R 2.7799999999999998, 0.43999999999999995, 0.69999999999999996,
R - 6.5199999999999996, 0.65999999999999992, 2.3199999999999998,
R 1.6599999999999999, - 1.]
R ]
R Type: Matrix(DoubleFloat)
E 121
+ D=(S 0), m<n, where S is a diagonal matrix whose elements are
+ the singular values sv_1 >= sv_2 >= ... >= sv_min(m,n) >= 0.
S 124 of 215
point:Matrix Integer:=
 [[3 ,6 ,4 ,5 ,7 ,10 ,8 ,9 ,11 ,13 ,12 ,15 ,14 ,18 ,_
 16 ,17 ,19 ,20 ,21 ,30 ,23 ,26 ,24 ,25 ,27 ,28 ,_
 0 ,29 ,0 ,0 ,2 ,22 ,1 ,0 ,0 ,0,0 ,0 ,0 ,0 ,0 ,0 ,0 ]]
R
R
R (11)
R [
R [3, 6, 4, 5, 7, 10, 8, 9, 11, 13, 12, 15, 14, 18, 16, 17, 19, 20, 21, 30,
R 23, 26, 24, 25, 27, 28, 0, 29, 0, 0, 2, 22, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
R 0]
R ]
R Type: Matrix(Integer)
E 124
+ The first min(m,n) columns of Q are the left-hand singular
+ vectors of A, the diagonal elements of S are the singular values
+ of A and the first min(m,n) columns of P are the right-hand
+ singular vectors of A.
S 125 of 215
npoint:=43
R
R
R (12) 43
R Type: PositiveInteger
E 125
+ Either or both of the left-hand and right-hand singular vectors
+ of A may be requested and the matrix C given by
S 126 of 215
nc:=24
R
R
R (13) 24
R Type: PositiveInteger
E 126
+ C = Q^H B,
S 127 of 215
nws:=1750
R
R
R (14) 1750
R Type: PositiveInteger
E 127
+ where B is an m by ncolb given matrix, may also be requested.
S 128 of 215
eps:=0.000001
R
R
R (15) 0.000001
R Type: Float
E 128
+ The routine obtains the singular value decomposition by first
+ reducing A to upper triangular form by means of Householder
+ transformations, from the left when m>=n and from the right when
+ m<n.
+ 5. Parameters
+ 1: M - INTEGER Input
+ On entry: the number of rows, m, of the matrix A.
+ Constraint: M >= 0.
S 139 of 215
s:=0.1
R
R
R (9) 0.1
R Type: Float
E 139
+ When M = 0 then an immediate return is effected.
S 140 of 215
nxest:=15
R
R
R (10) 15
R Type: PositiveInteger
E 140
+ 2: N - INTEGER Input
+ On entry: the number of columns, n, of the matrix A.
+ Constraint: N >= 0.
S 141 of 215
nyest:=13
R
R
R (11) 13
R Type: PositiveInteger
E 141
+ When N = 0 then an immediate return is effected.
S 142 of 215
lwrk:=592
R
R
R (12) 592
R Type: PositiveInteger
E 142
+ 3: A(LDA,*) - COMPLEX(KIND(1.0D0)) array Input/Output
+ Note: the second dimension of the array A must be at least
+ max(1,N).
+ On entry: the leading m by n part of the array A must
+ contain the matrix A whose singular value decomposition is
+ required. On exit: if M >= N and WANTQ = .TRUE., then the
+ leading m by n part of A will contain the first n columns of
+ the unitary matrix Q.
+ If M < N and WANTP = .TRUE., then the leading m by n part of
+ A will contain the first m rows of the unitary matrix P^H.
+ If M >= N and WANTQ = .FALSE. and WANTP = .TRUE., then the
+ leading n by n part of A will contain the first n rows of
+ the unitary matrix P^H. Otherwise the leading m by n
+ part of A is used as internal workspace.
S 143 of 215
liwrk:=51
R
R
R (13) 51
R Type: PositiveInteger
E 143
+ 4: LDA - INTEGER Input
+ On entry:
+ the first dimension of the array A as declared in the
+ (sub)program from which F02XEF is called.
+ Constraint: LDA >= max(1,M).
S 144 of 215
nx:=0
R
R
R (14) 0
R Type: NonNegativeInteger
E 144
+ 5: NCOLB - INTEGER Input
+ On entry: ncolb, the number of columns of the matrix B.
+ When NCOLB = 0 the array B is not referenced. Constraint:
+ NCOLB >= 0.
S 145 of 215
lamda:Matrix SF:=new(1,15,0.0)$Matrix SF
R
R
R (15) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
R Type: Matrix(DoubleFloat)
E 145
+ 6: B(LDB,*) - COMPLEX(KIND(1.0D0)) array Input/Output
+ Note: the second dimension of the array B must be at least
+ max(1,NCOLB).
+ On entry: if NCOLB > 0, the leading m by ncolb part of the
+ array B must contain the matrix to be transformed. On exit:
+ B is overwritten by the m by ncolb matrix Q^H B.
S 146 of 215
ny:=0
R
R
R (16) 0
R Type: NonNegativeInteger
E 146
+ 7: LDB - INTEGER Input
+ On entry:
+ the first dimension of the array B as declared in the
+ (sub)program from which F02XEF is called.
+ Constraint: if NCOLB > 0, then LDB >= max(1,M).
S 147 of 215
mu:Matrix SF:=new(1,13,0.0)$Matrix SF
R
R
R (17) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
R Type: Matrix(DoubleFloat)
E 147
+ 8: WANTQ - LOGICAL Input
+ On entry: WANTQ must be .TRUE. if the left-hand singular
+ vectors are required. If WANTQ = .FALSE. then the array Q is
+ not referenced.
S 148 of 215
wrk:Matrix SF:=new(1,592,0.0)$Matrix SF
R
R
R (18)
R [
R [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]
R ]
R Type: Matrix(DoubleFloat)
E 148
+ 9: Q(LDQ,*) - COMPLEX(KIND(1.0D0)) array Output
+ Note: the second dimension of the array Q must be at least
+ max(1,M).
+ On exit: if M < N and WANTQ = .TRUE., the leading m by m
+ part of the array Q will contain the unitary matrix Q.
+ Otherwise the array Q is not referenced.
S 149 of 215
iwrk:Matrix Integer:=new(1,51,0)$Matrix Integer
R
R
R (19)
R [
R [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
R 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
R 0, 0, 0]
R ]
R Type: Matrix(Integer)
E 149
+ 10: LDQ - INTEGER Input
+ On entry:
+ the first dimension of the array Q as declared in the
+ (sub)program from which F02XEF is called.
+ Constraint: if M < N and WANTQ = .TRUE., LDQ >= max(1,M).
S 150 of 215
 result:=e02dcf(start,mx,x,my,y,f,s,nxest,nyest,lwrk,liwrk,nx,_
 lamda,ny,mu,wrk,iwrk,1)
E 150
+ 11: SV(*) - DOUBLE PRECISION array Output
+ Note: the length of SV must be at least min(M,N). On exit:
+ the min(m,n) diagonal elements of the matrix S.
)clear all
+ 12: WANTP - LOGICAL Input
+ On entry: WANTP must be .TRUE. if the right-hand singular
+ vectors are required. If WANTP = .FALSE. then the array PH
+ is not referenced.
+ 13: PH(LDPH,*) - COMPLEX(KIND(1.0D0)) array Output
+ Note: the second dimension of the array PH must be at least
+ max(1,N).
+ On exit: if M >= N and WANTQ and WANTP are .TRUE., the
+ leading n by n part of the array PH will contain the unitary
+ matrix P^H. Otherwise the array PH is not referenced.
S 151 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 151
+ 14: LDPH - INTEGER Input
+ On entry:
+ the first dimension of the array PH as declared in the
+ (sub)program from which F02XEF is called.
+ Constraint: if M >= N and WANTQ and WANTP are .TRUE., LDPH
+ >= max(1,N).
S 152 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 152
+ 15: RWORK(*) - DOUBLE PRECISION array Output
+ Note: the length of RWORK must be at least max(1,lrwork),
+ where lrwork must satisfy:
+ lrwork=2*(min(M,N)-1) when
+ NCOLB = 0 and WANTQ and WANTP are .FALSE.,
S 153 of 215
start:="c"
R
R
R (3) "c"
R Type: String
E 153
+ lrwork=3*(min(M,N)-1) when
+ either NCOLB = 0 and WANTQ = .FALSE. and WANTP =
+ .TRUE., or WANTP = .FALSE. and one or both of NCOLB > 0
+ and WANTQ = .TRUE.
S 154 of 215
m:=30
R
R
R (4) 30
R Type: PositiveInteger
E 154
+ lrwork=5*(min(M,N)-1)
+ otherwise.
+ On exit: RWORK(min(M,N)) contains the total number of
+ iterations taken by the QR algorithm.
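The three RWORK cases above can be sketched as one function. This is illustrative only; it mirrors the rLen computation in the SPAD wrapper for f02xef earlier in this section:

```python
# Minimum RWORK length for F02XEF per the three cases above.
def f02xef_lrwork(m, n, ncolb, wantq, wantp):
    t = min(m, n) - 1
    if ncolb == 0 and not wantq and not wantp:
        return 2 * t
    if (ncolb == 0 and not wantq and wantp) or (not wantp and (ncolb > 0 or wantq)):
        return 3 * t
    return 5 * t
```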
S 155 of 215
x:Matrix SF:=
 [[11.16 ,12.85 ,19.85 ,19.72 ,15.91 ,0 ,20.87 ,3.45 ,_
 14.26 ,17.43 ,22.8 ,7.58 ,25 ,0 ,9.66 ,5.22 ,17.25 ,25 ,12.13 ,22.23 ,_
 11.52 ,15.2 ,7.54 ,17.32 ,2.14 ,0.51 ,22.69 ,5.47 ,21.67 ,3.31 ]]
R
R
R (5)
R [
R [11.16, 12.85, 19.850000000000001, 19.719999999999999, 15.91, 0.,
R 20.869999999999997, 3.4500000000000002, 14.26, 17.43, 22.799999999999997,
R 7.5800000000000001, 25., 0., 9.6600000000000001, 5.2199999999999998,
R 17.25, 25., 12.129999999999999, 22.229999999999997, 11.52,
R 15.199999999999999, 7.5399999999999991, 17.32, 2.1399999999999997,
R 0.51000000000000001, 22.689999999999998, 5.4699999999999998,
R 21.670000000000002, 3.3099999999999996]
R ]
R Type: Matrix(DoubleFloat)
E 155
+ The rest of the array is used as workspace.
S 156 of 215
y:Matrix SF:=
 [[1.24 ,3.06 ,10.72 ,1.39 ,7.74 ,20 ,20 ,12.78 ,17.87 ,3.46 ,12.39 ,_
 1.98 ,11.87 ,0 ,20 ,14.66 ,19.57 ,3.87 ,10.79 ,6.21 ,8.53 ,0 ,10.69 ,_
 13.78 ,15.03 ,8.37 ,19.63 ,17.13 ,14.36 ,0.33 ]]
R
R
R (6)
R [
R [1.24, 3.0599999999999996, 10.719999999999999, 1.3899999999999999,
R 7.7400000000000002, 20., 20., 12.779999999999999, 17.869999999999997,
R 3.46, 12.390000000000001, 1.98, 11.869999999999999, 0., 20., 14.66,
R 19.57, 3.8700000000000001, 10.789999999999999, 6.21, 8.5299999999999994,
R 0., 10.69, 13.779999999999999, 15.029999999999999, 8.3699999999999992,
R 19.629999999999999, 17.129999999999999, 14.359999999999999,
R 0.32999999999999996]
R ]
R Type: Matrix(DoubleFloat)
E 156
+ 16: CWORK(*)  COMPLEX(KIND(1.0D0)) array Workspace
+ Note: the length of CWORK must be at least max(1,lcwork),
+ where lcwork must satisfy:
+ lcwork=N+max(N**2,NCOLB) when
+ M >= N and WANTQ and WANTP are both .TRUE.
S 157 of 215
f:Matrix SF:=
 [[22.15 ,22.11 ,7.97 ,16.83 ,15.30 ,34.6 ,5.74 ,41.24 ,10.74 ,18.60 ,_
 5.47 ,29.87 ,4.4 ,58.2 ,4.73 ,40.36 ,6.43 ,8.74 ,13.71 ,10.25 ,_
 15.74 ,21.6 ,19.31 ,12.11 ,53.1 ,49.43 ,3.25 ,28.63 ,5.52 ,44.08 ]]
R
R
R (7)
R [
R [22.149999999999999, 22.109999999999999, 7.9699999999999998,
R 16.829999999999998, 15.300000000000001, 34.599999999999994,
R 5.7400000000000002, 41.239999999999995, 10.739999999999998,
R 18.600000000000001, 5.4699999999999998, 29.869999999999997,
R 4.4000000000000004, 58.200000000000003, 4.7300000000000004,
R 40.359999999999999, 6.4299999999999997, 8.7399999999999984,
R 13.710000000000001, 10.25, 15.739999999999998, 21.600000000000001,
R 19.309999999999999, 12.109999999999999, 53.099999999999994, 49.43, 3.25,
R 28.629999999999999, 5.5199999999999996, 44.079999999999998]
R ]
R Type: Matrix(DoubleFloat)
E 157
+ lcwork=N+max(N**2+N,NCOLB) when
+ M >= N and WANTQ = .TRUE., but WANTP = .FALSE.
S 158 of 215
w:Matrix SF:=
 [[1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,_
 1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ]]
R
R
R (8)
R [
R [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
R 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]
R ]
R Type: Matrix(DoubleFloat)
E 158
+ lcwork=N+max(N,NCOLB) when
+ M >= N and WANTQ = .FALSE.
S 159 of 215
s:=10
R
R
R (9) 10
R Type: PositiveInteger
E 159
+ lcwork=M**2+M when
+ M < N and WANTP = .TRUE.
S 160 of 215
nxest:=14
R
R
R (10) 14
R Type: PositiveInteger
E 160
+ lcwork = M when
+ M < N and WANTP = .FALSE.
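The five CWORK-length cases above can be collected into one rule. The sketch below is in Python for illustration only; the function name `lcwork_length` is hypothetical, while the parameter names follow the NAG text:

```python
def lcwork_length(m, n, ncolb, wantq, wantp):
    """Minimum CWORK length for the five documented cases (sketch)."""
    if m >= n:
        if wantq and wantp:
            return n + max(n * n, ncolb)       # lcwork = N + max(N**2, NCOLB)
        if wantq:                              # WANTP = .FALSE.
            return n + max(n * n + n, ncolb)   # lcwork = N + max(N**2+N, NCOLB)
        return n + max(n, ncolb)               # WANTQ = .FALSE.
    if wantp:                                  # M < N
        return m * m + m                       # lcwork = M**2 + M
    return m                                   # M < N and WANTP = .FALSE.
```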
S 161 of 215
nyest:=14
R
R
R (11) 14
R Type: PositiveInteger
E 161
+ 17: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, 1 or -1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
S 162 of 215
lwrk:=11016
R
R
R (12) 11016
R Type: PositiveInteger
E 162
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
S 163 of 215
liwrk:=128
R
R
R (13) 128
R Type: PositiveInteger
E 163
+ 6. Error Indicators and Warnings
S 164 of 215
nx:=0
R
R
R (14) 0
R Type: NonNegativeInteger
E 164
+ Errors detected by the routine:
S 165 of 215
lamda:=new(1,14,0.0)$Matrix SF
R
R
R (15) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
R Type: Matrix(DoubleFloat)
E 165
+ If on entry IFAIL = 0 or -1, explanatory error messages are
+ output on the current error message unit (as defined by X04AAF).
S 166 of 215
ny:=0
R
R
R (16) 0
R Type: NonNegativeInteger
E 166
+ IFAIL= -1
+ One or more of the following conditions holds:
+ M < 0,
S 167 of 215
mu:=new(1,14,0.0)$Matrix SF
R
R
R (17) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
R Type: Matrix(DoubleFloat)
E 167
+ N < 0,
S 168 of 215
wrk:=new(1,11016,0.0)$Matrix SF;
R
R
R Type: Matrix(DoubleFloat)
E 168
+ LDA < M,
S 169 of 215
 result:=e02ddf(start,m,x,y,f,w,s,nxest,nyest,lwrk,liwrk,nx,lamda,ny,mu,wrk,-1)
E 169
+ NCOLB < 0,
)clear all
+ LDB < M and NCOLB > 0,
+ LDQ < M and M < N and WANTQ = .TRUE.,
S 170 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 170
+ LDPH < N and M >= N and WANTQ = .TRUE. and
+ WANTP = .TRUE..
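The entry-parameter conditions listed above amount to one predicate over the arguments. A hedged sketch in Python (`params_invalid` is a hypothetical name; the conditions are transcribed directly from the list above):

```python
def params_invalid(m, n, lda, ncolb, ldb, ldq, ldph, wantq, wantp):
    """True when any of the documented entry conditions holds (sketch)."""
    return (m < 0 or n < 0 or lda < m or ncolb < 0
            or (ldb < m and ncolb > 0)               # LDB < M and NCOLB > 0
            or (ldq < m and m < n and wantq)         # LDQ < M, M < N, WANTQ
            or (ldph < n and m >= n and wantq and wantp))  # LDPH < N case
```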
S 171 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 171
+ IFAIL> 0
+ The QR algorithm has failed to converge in 50*min(m,n)
+ iterations. In this case SV(1), SV(2),..., SV(IFAIL) may not
+ have been found correctly and the remaining singular values
+ may not be the smallest. The matrix A will nevertheless have
+ been factorized as A=QEP^H where the leading min(m,n) by
+ min(m,n) part of E is a bidiagonal matrix with SV(1), SV(2),
+ ..., SV(min(m,n)) as the diagonal elements and RWORK(1),
+ RWORK(2),..., RWORK(min(m,n)1) as the superdiagonal
+ elements.
S 172 of 215
m:=7
R
R
R (3) 7
R Type: PositiveInteger
E 172
+ This failure is not likely to occur.
S 173 of 215
px:=11
R
R
R (4) 11
R Type: PositiveInteger
E 173
+ 7. Accuracy
S 174 of 215
py:=10
R
R
R (5) 10
R Type: PositiveInteger
E 174
+ The computed factors Q, D and P satisfy the relation
S 175 of 215
x:Matrix SF:= [[1 ,1.1 ,1.5 ,1.6 ,1.9 ,1.9 ,2 ]]
R
R
R (6)
R [
R [1., 1.1000000000000001, 1.5, 1.6000000000000001, 1.8999999999999999,
R 1.8999999999999999, 2.]
R ]
R Type: Matrix(DoubleFloat)
E 175
+ QDP^H = A + E,
S 176 of 215
y:Matrix SF:= [[0 ,0.1 ,0.7 ,0.4 ,0.3 ,0.8 ,1 ]]
R
R
R (7)
R [
R [0., 0.10000000000000001, 0.69999999999999996, 0.40000000000000002,
R 0.29999999999999999, 0.80000000000000004, 1.]
R ]
R Type: Matrix(DoubleFloat)
E 176
+ where
S 177 of 215
lamda:Matrix SF:= [[1.0 ,1.0 ,1.0 ,1.0 ,1.3 ,1.5 ,1.6 ,2 ,2 ,2 ,2 ]]
R
R
R (8)
R [1. 1. 1. 1. 1.2999999999999998 1.5 1.6000000000000001 2. 2. 2. 2.]
R Type: Matrix(DoubleFloat)
E 177
+ ||E|| <= c(epsilon)||A||,
S 178 of 215
mu:Matrix SF:= [[0 ,0 ,0 ,0 ,0.4 ,0.7 ,1 ,1 ,1 ,1 ]]
R
R
R (9)
R [0. 0. 0. 0. 0.40000000000000002 0.69999999999999996 1. 1. 1. 1.]
R Type: Matrix(DoubleFloat)
E 178
+ (epsilon) being the machine precision, c is a modest function of
+ m and n, and ||.|| denotes the spectral (two) norm. Note that
+ ||A||=sv(1).
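The backward-error statement above can be checked numerically. This sketch uses NumPy's SVD as a stand-in for the NAG routine (an assumption for illustration only), forming E = Q*D*P^H - A and comparing its spectral norm against eps times the largest singular value:

```python
import numpy as np

# NumPy's SVD stands in for the routine here, to exercise the bound
#   ||E|| <= c(epsilon) ||A||   where   Q D P^H = A + E.
rng = np.random.default_rng(0)
a = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
q, sv, ph = np.linalg.svd(a)        # full SVD: a == q @ d @ ph up to E
d = np.zeros((6, 4))
d[:4, :4] = np.diag(sv)
e = q @ d @ ph - a                  # the backward-error matrix E
# ||A|| in the spectral norm equals sv[0], so the bound says ||E||
# is a modest multiple of eps * sv[0].
assert np.linalg.norm(e, 2) <= 100 * np.finfo(float).eps * sv[0]
```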
S 179 of 215
c:Matrix SF:=
 [[1 ,1.1333 ,1.3667 ,1.7 ,1.9 ,2 ,1.2 ,1.3333 ,1.5667,1.9 ,_
 2.1 ,2.2 ,1.5833 ,1.7167 ,1.95 ,2.2833 ,2.4833 ,2.5833 ,_
 2.1433 ,2.2767 ,2.51 ,2.8433 ,3.0433 ,3.1433 ,2.8667 ,_
 3 ,3.2333 ,3.5667 ,3.7667 ,3.8667 ,3.4667 ,3.6 ,3.8333 ,_
 4.1667 ,4.3667 ,4.4667 ,4 ,4.1333 ,4.3667 ,4.7 ,4.9 ,5 ]]
R
R
R (10)
R [
R [1., 1.1333, 1.3666999999999998, 1.7, 1.8999999999999999, 2., 1.2,
R 1.3332999999999999, 1.5667, 1.8999999999999999, 2.0999999999999996,
R 2.2000000000000002, 1.5832999999999999, 1.7166999999999999, 1.95,
R 2.2832999999999997, 2.4832999999999998, 2.5832999999999999, 2.1433,
R 2.2766999999999999, 2.5099999999999998, 2.8433000000000002,
R 3.0432999999999999, 3.1433, 2.8666999999999998, 3., 3.2332999999999998,
R 3.5667, 3.7667000000000002, 3.8666999999999998, 3.4666999999999999,
R 3.5999999999999996, 3.8332999999999999, 4.1666999999999996,
R 4.3666999999999998, 4.4666999999999994, 4., 4.1333000000000002,
R 4.3666999999999998, 4.6999999999999993, 4.9000000000000004, 5.]
R ]
R Type: Matrix(DoubleFloat)
E 179
+ 8. Further Comments
S 180 of 215
 result:=e02def(m,px,py,x,y,lamda,mu,c,-1)
E 180
+ Following the use of this routine the rank of A may be estimated
+ by a call to the INTEGER FUNCTION F06KLF(*). The statement:
)clear all
+ IRANK = F06KLF(MIN(M, N), SV, 1, TOL)
S 181 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 181
+ returns the value (k-1) in IRANK, where k is the smallest integer
+ for which SV(k) <= TOL*SV(1).
+ S ==> Symbol
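The rank estimate sketched by the F06KLF call above can be paraphrased in Python. The comparison against `tol*sv[0]` is an assumption reconstructed from the truncated text; F06KLF's exact criterion may differ:

```python
# Paraphrase of the rank estimate described above: k is the smallest
# index whose singular value is negligible relative to the largest,
# so the count of non-negligible values is the rank estimate.
# The threshold tol*sv[0] is an assumption, not the NAG definition.
def estimate_rank(sv, tol):
    for k, s in enumerate(sv):
        if s <= tol * sv[0]:
            return k
    return len(sv)
```

For example, `estimate_rank([5.0, 3.0, 1e-12, 1e-14], 1e-8)` treats only the first two singular values as significant.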
+ FOP ==> FortranOutputStackPackage
+
+ Exports ==> with
+ f02aaf : (Integer,Integer,Matrix DoubleFloat,Integer) -> Result
+ ++ f02aaf(ia,n,a,ifail)
+ ++ calculates all the eigenvalues of a real symmetric matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02aaf}.
+ f02abf : (Matrix DoubleFloat,Integer,Integer,Integer,Integer) -> Result
+ ++ f02abf(a,ia,n,iv,ifail)
+ ++ calculates all the eigenvalues of a real
+ ++ symmetric matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02abf}.
+ f02adf : (Integer,Integer,Integer,Matrix DoubleFloat,_
+ Matrix DoubleFloat,Integer) -> Result
+ ++ f02adf(ia,ib,n,a,b,ifail)
+ ++ calculates all the eigenvalues of Ax=(lambda)Bx, where A
+ ++ is a real symmetric matrix and B is a real symmetric positive
+ ++ definite matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02adf}.
+ f02aef : (Integer,Integer,Integer,Integer,_
+ Matrix DoubleFloat,Matrix DoubleFloat,Integer) -> Result
+ ++ f02aef(ia,ib,n,iv,a,b,ifail)
+ ++ calculates all the eigenvalues of
+ ++ Ax=(lambda)Bx, where A is a real symmetric matrix and B is a
+ ++ real symmetric positive-definite matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02aef}.
+ f02aff : (Integer,Integer,Matrix DoubleFloat,Integer) -> Result
+ ++ f02aff(ia,n,a,ifail)
+ ++ calculates all the eigenvalues of a real unsymmetric
+ ++ matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02aff}.
+ f02agf : (Integer,Integer,Integer,Integer,_
+ Matrix DoubleFloat,Integer) -> Result
+ ++ f02agf(ia,n,ivr,ivi,a,ifail)
+ ++ calculates all the eigenvalues of a real
+ ++ unsymmetric matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02agf}.
+ f02ajf : (Integer,Integer,Integer,Matrix DoubleFloat,_
+ Matrix DoubleFloat,Integer) -> Result
+ ++ f02ajf(iar,iai,n,ar,ai,ifail)
+ ++ calculates all the eigenvalues of a complex matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02ajf}.
+ f02akf : (Integer,Integer,Integer,Integer,_
+ Integer,Matrix DoubleFloat,Matrix DoubleFloat,Integer) -> Result
+ ++ f02akf(iar,iai,n,ivr,ivi,ar,ai,ifail)
+ ++ calculates all the eigenvalues of a
+ ++ complex matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02akf}.
+ f02awf : (Integer,Integer,Integer,Matrix DoubleFloat,_
+ Matrix DoubleFloat,Integer) -> Result
+ ++ f02awf(iar,iai,n,ar,ai,ifail)
+ ++ calculates all the eigenvalues of a complex Hermitian
+ ++ matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02awf}.
+ f02axf : (Matrix DoubleFloat,Integer,Matrix DoubleFloat,Integer,_
+ Integer,Integer,Integer,Integer) -> Result
+ ++ f02axf(ar,iar,ai,iai,n,ivr,ivi,ifail)
+ ++ calculates all the eigenvalues of a
+ ++ complex Hermitian matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02axf}.
+ f02bbf : (Integer,Integer,DoubleFloat,DoubleFloat,_
+ Integer,Integer,Matrix DoubleFloat,Integer) -> Result
+ ++ f02bbf(ia,n,alb,ub,m,iv,a,ifail)
+ ++ calculates selected eigenvalues of a real
+ ++ symmetric matrix by reduction to tridiagonal form, bisection and
+ ++ inverse iteration, where the selected eigenvalues lie within a
+ ++ given interval.
+ ++ See \downlink{Manual Page}{manpageXXf02bbf}.
+ f02bjf : (Integer,Integer,Integer,DoubleFloat,_
+ Boolean,Integer,Matrix DoubleFloat,Matrix DoubleFloat,Integer) -> Result
+ ++ f02bjf(n,ia,ib,eps1,matv,iv,a,b,ifail)
+ ++ calculates all the eigenvalues and, if required, all the
+ ++ eigenvectors of the generalized eigenproblem Ax=(lambda)Bx
+ ++ where A and B are real, square matrices, using the QZ algorithm.
+ ++ See \downlink{Manual Page}{manpageXXf02bjf}.
+ f02fjf : (Integer,Integer,DoubleFloat,Integer,_
+ Integer,Integer,Integer,Integer,Integer,Integer,Matrix DoubleFloat,_
+ Integer,Union(fn:FileName,fp:Asp27(DOT)),_
+ Union(fn:FileName,fp:Asp28(IMAGE))) -> Result
+ ++ f02fjf(n,k,tol,novecs,nrx,lwork,lrwork,
+ ++ liwork,m,noits,x,ifail,dot,image)
+ ++ finds eigenvalues of a real sparse symmetric
+ ++ or generalized symmetric eigenvalue problem.
+ ++ See \downlink{Manual Page}{manpageXXf02fjf}.
+ f02fjf : (Integer,Integer,DoubleFloat,Integer,_
+ Integer,Integer,Integer,Integer,Integer,Integer,Matrix DoubleFloat,_
+ Integer,Union(fn:FileName,fp:Asp27(DOT)),_
+ Union(fn:FileName,fp:Asp28(IMAGE)),FileName) -> Result
+ ++ f02fjf(n,k,tol,novecs,nrx,lwork,lrwork,
+ ++ liwork,m,noits,x,ifail,dot,image,monit)
+ ++ finds eigenvalues of a real sparse symmetric
+ ++ or generalized symmetric eigenvalue problem.
+ ++ See \downlink{Manual Page}{manpageXXf02fjf}.
+ f02wef : (Integer,Integer,Integer,Integer,_
+ Integer,Boolean,Integer,Boolean,Integer,Matrix DoubleFloat,_
+ Matrix DoubleFloat,Integer) -> Result
+ ++ f02wef(m,n,lda,ncolb,ldb,wantq,ldq,wantp,ldpt,a,b,ifail)
+ ++ returns all, or part, of the singular value decomposition
+ ++ of a general real matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02wef}.
+ f02xef : (Integer,Integer,Integer,Integer,_
+ Integer,Boolean,Integer,Boolean,Integer,Matrix Complex DoubleFloat,_
+ Matrix Complex DoubleFloat,Integer) -> Result
+ ++ f02xef(m,n,lda,ncolb,ldb,wantq,ldq,wantp,ldph,a,b,ifail)
+ ++ returns all, or part, of the singular value decomposition
+ ++ of a general complex matrix.
+ ++ See \downlink{Manual Page}{manpageXXf02xef}.
+ Implementation ==> add
+
+ import Lisp
+ import DoubleFloat
+ import Any
+ import Record
+ import Integer
+ import Matrix DoubleFloat
+ import Boolean
+ import NAGLinkSupportPackage
+ import FortranPackage
+ import AnyFunctions1(Integer)
+ import AnyFunctions1(Boolean)
+ import AnyFunctions1(Matrix DoubleFloat)
+ import AnyFunctions1(Matrix Complex DoubleFloat)
+ import AnyFunctions1(DoubleFloat)
+
+
+ f02aaf(iaArg:Integer,nArg:Integer,aArg:Matrix DoubleFloat,_
+ ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02aaf",_
+ ["ia"::S,"n"::S,"ifail"::S,"r"::S,"a"::S,"e"::S]$Lisp,_
+ ["r"::S,"e"::S]$Lisp,_
+ [["double"::S,["r"::S,"n"::S]$Lisp,["a"::S,"ia"::S,"n"::S]$Lisp_
+ ,["e"::S,"n"::S]$Lisp]$Lisp_
+ ,["integer"::S,"ia"::S,"n"::S,"ifail"::S]$Lisp_
+ ]$Lisp,_
+ ["r"::S,"a"::S,"ifail"::S]$Lisp,_
+ [([iaArg::Any,nArg::Any,ifailArg::Any,aArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02abf(aArg:Matrix DoubleFloat,iaArg:Integer,nArg:Integer,_
+ ivArg:Integer,ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02abf",_
+ ["ia"::S,"n"::S,"iv"::S,"ifail"::S,"a"::S,"r"::S,"v"::S,"e"::S]$Lisp,_
+ ["r"::S,"v"::S,"e"::S]$Lisp,_
+ [["double"::S,["a"::S,"ia"::S,"n"::S]$Lisp_
+ ,["r"::S,"n"::S]$Lisp,["v"::S,"iv"::S,"n"::S]$Lisp,_
+ ["e"::S,"n"::S]$Lisp]$Lisp_
+ ,["integer"::S,"ia"::S,"n"::S,"iv"::S,"ifail"::S_
+ ]$Lisp_
+ ]$Lisp,_
+ ["r"::S,"v"::S,"ifail"::S]$Lisp,_
+ [([iaArg::Any,nArg::Any,ivArg::Any,ifailArg::Any,aArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02adf(iaArg:Integer,ibArg:Integer,nArg:Integer,_
+ aArg:Matrix DoubleFloat,bArg:Matrix DoubleFloat,_
+ ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02adf",_
+ ["ia"::S,"ib"::S,"n"::S,"ifail"::S,"r"::S,"a"::S,"b"::S,"de"::S]$Lisp,_
+ ["r"::S,"de"::S]$Lisp,_
+ [["double"::S,["r"::S,"n"::S]$Lisp,["a"::S,"ia"::S,"n"::S]$Lisp_
+ ,["b"::S,"ib"::S,"n"::S]$Lisp,["de"::S,"n"::S]$Lisp]$Lisp_
+ ,["integer"::S,"ia"::S,"ib"::S,"n"::S,"ifail"::S_
+ ]$Lisp_
+ ]$Lisp,_
+ ["r"::S,"a"::S,"b"::S,"ifail"::S]$Lisp,_
+ [([iaArg::Any,ibArg::Any,nArg::Any,ifailArg::Any,aArg::Any,bArg::Any])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02aef(iaArg:Integer,ibArg:Integer,nArg:Integer,_
+ ivArg:Integer,aArg:Matrix DoubleFloat,bArg:Matrix DoubleFloat,_
+ ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02aef",_
+ ["ia"::S,"ib"::S,"n"::S,"iv"::S,"ifail"::S_
+ ,"r"::S,"v"::S,"a"::S,"b"::S,"dl"::S_
+ ,"e"::S]$Lisp,_
+ ["r"::S,"v"::S,"dl"::S,"e"::S]$Lisp,_
+ [["double"::S,["r"::S,"n"::S]$Lisp,["v"::S,"iv"::S,"n"::S]$Lisp_
+ ,["a"::S,"ia"::S,"n"::S]$Lisp,["b"::S,"ib"::S,"n"::S]$Lisp,_
+ ["dl"::S,"n"::S]$Lisp,["e"::S,"n"::S]$Lisp_
+ ]$Lisp_
+ ,["integer"::S,"ia"::S,"ib"::S,"n"::S,"iv"::S_
+ ,"ifail"::S]$Lisp_
+ ]$Lisp,_
+ ["r"::S,"v"::S,"a"::S,"b"::S,"ifail"::S]$Lisp,_
+ [([iaArg::Any,ibArg::Any,nArg::Any,ivArg::Any,_
+ ifailArg::Any,aArg::Any,bArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02aff(iaArg:Integer,nArg:Integer,aArg:Matrix DoubleFloat,_
+ ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02aff",_
+ ["ia"::S,"n"::S,"ifail"::S,"rr"::S,"ri"::S,"intger"::S,"a"::S]$Lisp,_
+ ["rr"::S,"ri"::S,"intger"::S]$Lisp,_
+ [["double"::S,["rr"::S,"n"::S]$Lisp,["ri"::S,"n"::S]$Lisp_
+ ,["a"::S,"ia"::S,"n"::S]$Lisp]$Lisp_
+ ,["integer"::S,"ia"::S,"n"::S,["intger"::S,"n"::S]$Lisp_
+ ,"ifail"::S]$Lisp_
+ ]$Lisp,_
+ ["rr"::S,"ri"::S,"intger"::S,"a"::S,"ifail"::S]$Lisp,_
+ [([iaArg::Any,nArg::Any,ifailArg::Any,aArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02agf(iaArg:Integer,nArg:Integer,ivrArg:Integer,_
+ iviArg:Integer,aArg:Matrix DoubleFloat,ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02agf",_
+ ["ia"::S,"n"::S,"ivr"::S,"ivi"::S,"ifail"::S_
+ ,"rr"::S,"ri"::S,"vr"::S,"vi"::S,"intger"::S_
+ ,"a"::S]$Lisp,_
+ ["rr"::S,"ri"::S,"vr"::S,"vi"::S,"intger"::S]$Lisp,_
+ [["double"::S,["rr"::S,"n"::S]$Lisp,["ri"::S,"n"::S]$Lisp_
+ ,["vr"::S,"ivr"::S,"n"::S]$Lisp,["vi"::S,"ivi"::S,"n"::S]$Lisp,_
+ ["a"::S,"ia"::S,"n"::S]$Lisp]$Lisp_
+ ,["integer"::S,"ia"::S,"n"::S,"ivr"::S,"ivi"::S_
+ ,["intger"::S,"n"::S]$Lisp,"ifail"::S]$Lisp_
+ ]$Lisp,_
+ ["rr"::S,"ri"::S,"vr"::S,"vi"::S,"intger"::S,"a"::S,"ifail"::S]$Lisp,_
+ [([iaArg::Any,nArg::Any,ivrArg::Any,iviArg::Any,_
+ ifailArg::Any,aArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02ajf(iarArg:Integer,iaiArg:Integer,nArg:Integer,_
+ arArg:Matrix DoubleFloat,aiArg:Matrix DoubleFloat,_
+ ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02ajf",_
+ ["iar"::S,"iai"::S,"n"::S,"ifail"::S,"rr"::S,"ri"::S,_
+ "ar"::S,"ai"::S,"intger"::S_
+ ]$Lisp,_
+ ["rr"::S,"ri"::S,"intger"::S]$Lisp,_
+ [["double"::S,["rr"::S,"n"::S]$Lisp,["ri"::S,"n"::S]$Lisp_
+ ,["ar"::S,"iar"::S,"n"::S]$Lisp,["ai"::S,"iai"::S,"n"::S]$Lisp]$Lisp_
+ ,["integer"::S,"iar"::S,"iai"::S,"n"::S,"ifail"::S_
+ ,["intger"::S,"n"::S]$Lisp]$Lisp_
+ ]$Lisp,_
+ ["rr"::S,"ri"::S,"ar"::S,"ai"::S,"ifail"::S]$Lisp,_
+ [([iarArg::Any,iaiArg::Any,nArg::Any,ifailArg::Any,_
+ arArg::Any,aiArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02akf(iarArg:Integer,iaiArg:Integer,nArg:Integer,_
+ ivrArg:Integer,iviArg:Integer,arArg:Matrix DoubleFloat,_
+ aiArg:Matrix DoubleFloat,ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02akf",_
+ ["iar"::S,"iai"::S,"n"::S,"ivr"::S,"ivi"::S_
+ ,"ifail"::S,"rr"::S,"ri"::S,"vr"::S,"vi"::S,"ar"::S_
+ ,"ai"::S,"intger"::S]$Lisp,_
+ ["rr"::S,"ri"::S,"vr"::S,"vi"::S,"intger"::S]$Lisp,_
+ [["double"::S,["rr"::S,"n"::S]$Lisp,["ri"::S,"n"::S]$Lisp_
+ ,["vr"::S,"ivr"::S,"n"::S]$Lisp,["vi"::S,"ivi"::S,"n"::S]$Lisp,_
+ ["ar"::S,"iar"::S,"n"::S]$Lisp,["ai"::S,"iai"::S,"n"::S]$Lisp_
+ ]$Lisp_
+ ,["integer"::S,"iar"::S,"iai"::S,"n"::S,"ivr"::S_
+ ,"ivi"::S,"ifail"::S,["intger"::S,"n"::S]$Lisp]$Lisp_
+ ]$Lisp,_
+ ["rr"::S,"ri"::S,"vr"::S,"vi"::S,"ar"::S,"ai"::S,"ifail"::S]$Lisp,_
+ [([iarArg::Any,iaiArg::Any,nArg::Any,ivrArg::Any,iviArg::Any,_
+ ifailArg::Any,arArg::Any,aiArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02awf(iarArg:Integer,iaiArg:Integer,nArg:Integer,_
+ arArg:Matrix DoubleFloat,aiArg:Matrix DoubleFloat,_
+ ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02awf",_
+ ["iar"::S,"iai"::S,"n"::S,"ifail"::S,"r"::S,"ar"::S,"ai"::S,_
+ "wk1"::S,"wk2"::S_
+ ,"wk3"::S]$Lisp,_
+ ["r"::S,"wk1"::S,"wk2"::S,"wk3"::S]$Lisp,_
+ [["double"::S,["r"::S,"n"::S]$Lisp,["ar"::S,"iar"::S,"n"::S]$Lisp_
+ ,["ai"::S,"iai"::S,"n"::S]$Lisp,["wk1"::S,"n"::S]$Lisp,_
+ ["wk2"::S,"n"::S]$Lisp,["wk3"::S,"n"::S]$Lisp_
+ ]$Lisp_
+ ,["integer"::S,"iar"::S,"iai"::S,"n"::S,"ifail"::S_
+ ]$Lisp_
+ ]$Lisp,_
+ ["r"::S,"ar"::S,"ai"::S,"ifail"::S]$Lisp,_
+ [([iarArg::Any,iaiArg::Any,nArg::Any,ifailArg::Any,arArg::Any,_
+ aiArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02axf(arArg:Matrix DoubleFloat,iarArg:Integer,aiArg:Matrix DoubleFloat,_
+ iaiArg:Integer,nArg:Integer,ivrArg:Integer,_
+ iviArg:Integer,ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02axf",_
+ ["iar"::S,"iai"::S,"n"::S,"ivr"::S,"ivi"::S_
+ ,"ifail"::S,"ar"::S,"ai"::S,"r"::S,"vr"::S,"vi"::S_
+ ,"wk1"::S,"wk2"::S,"wk3"::S]$Lisp,_
+ ["r"::S,"vr"::S,"vi"::S,"wk1"::S,"wk2"::S,"wk3"::S]$Lisp,_
+ [["double"::S,["ar"::S,"iar"::S,"n"::S]$Lisp_
+ ,["ai"::S,"iai"::S,"n"::S]$Lisp,["r"::S,"n"::S]$Lisp,_
+ ["vr"::S,"ivr"::S,"n"::S]$Lisp,["vi"::S,"ivi"::S,"n"::S]$Lisp,_
+ ["wk1"::S,"n"::S]$Lisp_
+ ,["wk2"::S,"n"::S]$Lisp,["wk3"::S,"n"::S]$Lisp]$Lisp_
+ ,["integer"::S,"iar"::S,"iai"::S,"n"::S,"ivr"::S_
+ ,"ivi"::S,"ifail"::S]$Lisp_
+ ]$Lisp,_
+ ["r"::S,"vr"::S,"vi"::S,"ifail"::S]$Lisp,_
+ [([iarArg::Any,iaiArg::Any,nArg::Any,ivrArg::Any,iviArg::Any,_
+ ifailArg::Any,arArg::Any,aiArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02bbf(iaArg:Integer,nArg:Integer,albArg:DoubleFloat,_
+ ubArg:DoubleFloat,mArg:Integer,ivArg:Integer,_
+ aArg:Matrix DoubleFloat,ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02bbf",_
+ ["ia"::S,"n"::S,"alb"::S,"ub"::S,"m"::S_
+ ,"iv"::S,"mm"::S,"ifail"::S,"r"::S,"v"::S,"icount"::S,"a"::S,"d"::S_
+ ,"e"::S,"e2"::S,"x"::S,"g"::S,"c"::S_
+ ]$Lisp,_
+ ["mm"::S,"r"::S,"v"::S,"icount"::S,"d"::S,"e"::S,"e2"::S,"x"::S,_
+ "g"::S,"c"::S]$Lisp,_
+ [["double"::S,"alb"::S,"ub"::S,["r"::S,"m"::S]$Lisp_
+ ,["v"::S,"iv"::S,"m"::S]$Lisp,["a"::S,"ia"::S,"n"::S]$Lisp,_
+ ["d"::S,"n"::S]$Lisp,["e"::S,"n"::S]$Lisp,["e2"::S,"n"::S]$Lisp_
+ ,["x"::S,"n"::S,7$Lisp]$Lisp,["g"::S,"n"::S]$Lisp]$Lisp_
+ ,["integer"::S,"ia"::S,"n"::S,"m"::S,"iv"::S_
+ ,"mm"::S,["icount"::S,"m"::S]$Lisp,"ifail"::S]$Lisp_
+ ,["logical"::S,["c"::S,"n"::S]$Lisp]$Lisp_
+ ]$Lisp,_
+ ["mm"::S,"r"::S,"v"::S,"icount"::S,"a"::S,"ifail"::S]$Lisp,_
+ [([iaArg::Any,nArg::Any,albArg::Any,ubArg::Any,mArg::Any,_
+ ivArg::Any,ifailArg::Any,aArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02bjf(nArg:Integer,iaArg:Integer,ibArg:Integer,_
+ eps1Arg:DoubleFloat,matvArg:Boolean,ivArg:Integer,_
+ aArg:Matrix DoubleFloat,bArg:Matrix DoubleFloat,_
+ ifailArg:Integer): Result ==
+ [(invokeNagman(NIL$Lisp,_
+ "f02bjf",_
+ ["n"::S,"ia"::S,"ib"::S,"eps1"::S,"matv"::S_
+ ,"iv"::S,"ifail"::S,"alfr"::S,"alfi"::S,"beta"::S,"v"::S,"iter"::S_
+ ,"a"::S,"b"::S]$Lisp,_
+ ["alfr"::S,"alfi"::S,"beta"::S,"v"::S,"iter"::S]$Lisp,_
+ [["double"::S,"eps1"::S,["alfr"::S,"n"::S]$Lisp_
+ ,["alfi"::S,"n"::S]$Lisp,["beta"::S,"n"::S]$Lisp,_
+ ["v"::S,"iv"::S,"n"::S]$Lisp,["a"::S,"ia"::S,"n"::S]$Lisp,_
+ ["b"::S,"ib"::S,"n"::S]$Lisp_
+ ]$Lisp_
+ ,["integer"::S,"n"::S,"ia"::S,"ib"::S,"iv"::S_
+ ,["iter"::S,"n"::S]$Lisp,"ifail"::S]$Lisp_
+ ,["logical"::S,"matv"::S]$Lisp_
+ ]$Lisp,_
+ ["alfr"::S,"alfi"::S,"beta"::S,"v"::S,"iter"::S,"a"::S,"b"::S,_
+ "ifail"::S]$Lisp,_
+ [([nArg::Any,iaArg::Any,ibArg::Any,eps1Arg::Any,matvArg::Any,_
+ ivArg::Any,ifailArg::Any,aArg::Any,bArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02fjf(nArg:Integer,kArg:Integer,tolArg:DoubleFloat,_
+ novecsArg:Integer,nrxArg:Integer,lworkArg:Integer,_
+ lrworkArg:Integer,liworkArg:Integer,mArg:Integer,_
+ noitsArg:Integer,xArg:Matrix DoubleFloat,ifailArg:Integer,_
+ dotArg:Union(fn:FileName,fp:Asp27(DOT)),_
+ imageArg:Union(fn:FileName,fp:Asp28(IMAGE))): Result ==
+ pushFortranOutputStack(dotFilename := aspFilename "dot")$FOP
+ if dotArg case fn
+ then outputAsFortran(dotArg.fn)
+ else outputAsFortran(dotArg.fp)
+ popFortranOutputStack()$FOP
+ pushFortranOutputStack(imageFilename := aspFilename "image")$FOP
+ if imageArg case fn
+ then outputAsFortran(imageArg.fn)
+ else outputAsFortran(imageArg.fp)
+ popFortranOutputStack()$FOP
+ pushFortranOutputStack(monitFilename := aspFilename "monit")$FOP
+ outputAsFortran()$Asp29(MONIT)
+ popFortranOutputStack()$FOP
+ [(invokeNagman([dotFilename,imageFilename,monitFilename]$Lisp,_
+ "f02fjf",_
+ ["n"::S,"k"::S,"tol"::S,"novecs"::S,"nrx"::S_
+ ,"lwork"::S,"lrwork"::S,"liwork"::S,"m"::S,"noits"::S_
+ ,"ifail"::S,"dot"::S,"image"::S,"monit"::S,"d"::S,"x"::S,_
+ "work"::S,"rwork"::S,"iwork"::S_
+ ]$Lisp,_
+ ["d"::S,"work"::S,"rwork"::S,"iwork"::S,"dot"::S,"image"::S,_
+ "monit"::S]$Lisp,_
+ [["double"::S,"tol"::S,["d"::S,"k"::S]$Lisp_
+ ,["x"::S,"nrx"::S,"k"::S]$Lisp,["work"::S,"lwork"::S]$Lisp,_
+ ["rwork"::S,"lrwork"::S]$Lisp,"dot"::S,"image"::S,"monit"::S_
+ ]$Lisp_
+ ,["integer"::S,"n"::S,"k"::S,"novecs"::S,"nrx"::S_
+ ,"lwork"::S,"lrwork"::S,"liwork"::S,"m"::S,"noits"::S,"ifail"::S,_
+ ["iwork"::S,"liwork"::S]$Lisp]$Lisp_
+ ]$Lisp,_
+ ["d"::S,"m"::S,"noits"::S,"x"::S,"ifail"::S]$Lisp,_
+ [([nArg::Any,kArg::Any,tolArg::Any,novecsArg::Any,nrxArg::Any,_
+ lworkArg::Any,lrworkArg::Any,liworkArg::Any,mArg::Any,_
+ noitsArg::Any,ifailArg::Any,xArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02fjf(nArg:Integer,kArg:Integer,tolArg:DoubleFloat,_
+ novecsArg:Integer,nrxArg:Integer,lworkArg:Integer,_
+ lrworkArg:Integer,liworkArg:Integer,mArg:Integer,_
+ noitsArg:Integer,xArg:Matrix DoubleFloat,ifailArg:Integer,_
+ dotArg:Union(fn:FileName,fp:Asp27(DOT)),_
+ imageArg:Union(fn:FileName,fp:Asp28(IMAGE)),_
+ monitArg:FileName): Result ==
+ pushFortranOutputStack(dotFilename := aspFilename "dot")$FOP
+ if dotArg case fn
+ then outputAsFortran(dotArg.fn)
+ else outputAsFortran(dotArg.fp)
+ popFortranOutputStack()$FOP
+ pushFortranOutputStack(imageFilename := aspFilename "image")$FOP
+ if imageArg case fn
+ then outputAsFortran(imageArg.fn)
+ else outputAsFortran(imageArg.fp)
+ popFortranOutputStack()$FOP
+ pushFortranOutputStack(monitFilename := aspFilename "monit")$FOP
+ outputAsFortran(monitArg)
+ popFortranOutputStack()$FOP
+ [(invokeNagman([dotFilename,imageFilename,monitFilename]$Lisp,_
+ "f02fjf",_
+ ["n"::S,"k"::S,"tol"::S,"novecs"::S,"nrx"::S_
+ ,"lwork"::S,"lrwork"::S,"liwork"::S,"m"::S,"noits"::S_
+ ,"ifail"::S,"dot"::S,"image"::S,"monit"::S,"d"::S,"x"::S,_
+ "work"::S,"rwork"::S,"iwork"::S_
+ ]$Lisp,_
+ ["d"::S,"work"::S,"rwork"::S,"iwork"::S,"dot"::S,"image"::S,_
+ "monit"::S]$Lisp,_
+ [["double"::S,"tol"::S,["d"::S,"k"::S]$Lisp_
+ ,["x"::S,"nrx"::S,"k"::S]$Lisp,["work"::S,"lwork"::S]$Lisp,_
+ ["rwork"::S,"lrwork"::S]$Lisp,"dot"::S,"image"::S,"monit"::S_
+ ]$Lisp_
+ ,["integer"::S,"n"::S,"k"::S,"novecs"::S,"nrx"::S_
+ ,"lwork"::S,"lrwork"::S,"liwork"::S,"m"::S,"noits"::S,"ifail"::S,_
+ ["iwork"::S,"liwork"::S]$Lisp]$Lisp_
+ ]$Lisp,_
+ ["d"::S,"m"::S,"noits"::S,"x"::S,"ifail"::S]$Lisp,_
+ [([nArg::Any,kArg::Any,tolArg::Any,novecsArg::Any,nrxArg::Any,_
+ lworkArg::Any,lrworkArg::Any,liworkArg::Any,mArg::Any,_
+ noitsArg::Any,ifailArg::Any,xArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02wef(mArg:Integer,nArg:Integer,ldaArg:Integer,_
+ ncolbArg:Integer,ldbArg:Integer,wantqArg:Boolean,_
+ ldqArg:Integer,wantpArg:Boolean,ldptArg:Integer,_
+ aArg:Matrix DoubleFloat,bArg:Matrix DoubleFloat,_
+ ifailArg:Integer): Result ==
+ workLength : Integer :=
+ mArg >= nArg =>
+ wantqArg and wantpArg =>
+ max(max(nArg**2 + 5*(nArg - 1),nArg + ncolbArg),4)
+ wantqArg =>
+ max(max(nArg**2 + 4*(nArg - 1),nArg + ncolbArg),4)
+ wantpArg =>
+ zero? ncolbArg => max(3*(nArg - 1),2)
+ max(5*(nArg - 1),2)
+ zero? ncolbArg => max(2*(nArg - 1),2)
+ max(3*(nArg - 1),2)
+ wantqArg and wantpArg =>
+ max(mArg**2 + 5*(mArg - 1),2)
+ wantqArg =>
+ max(3*(mArg - 1),1)
+ wantpArg =>
+ zero? ncolbArg => max(mArg**2+3*(mArg - 1),2)
+ max(mArg**2+5*(mArg - 1),2)
+ zero? ncolbArg => max(2*(mArg - 1),1)
+ max(3*(mArg - 1),1)
+
+ [(invokeNagman(NIL$Lisp,_
+ "f02wef",_
+ ["m"::S,"n"::S,"lda"::S,"ncolb"::S,"ldb"::S_
+ ,"wantq"::S,"ldq"::S,"wantp"::S,"ldpt"::S,"ifail"::S_
+ ,"q"::S,"sv"::S,"pt"::S,"work"::S,"a"::S_
+ ,"b"::S]$Lisp,_
+ ["q"::S,"sv"::S,"pt"::S,"work"::S]$Lisp,_
+ [["double"::S,["q"::S,"ldq"::S,"m"::S]$Lisp_
+ ,["sv"::S,"m"::S]$Lisp,["pt"::S,"ldpt"::S,"n"::S]$Lisp,_
+ ["work"::S,workLength]$Lisp,["a"::S,"lda"::S,"n"::S]$Lisp,_
+ ["b"::S,"ldb"::S,"ncolb"::S]$Lisp_
+ ]$Lisp_
+ ,["integer"::S,"m"::S,"n"::S,"lda"::S,"ncolb"::S_
+ ,"ldb"::S,"ldq"::S,"ldpt"::S,"ifail"::S]$Lisp_
+ ,["logical"::S,"wantq"::S,"wantp"::S]$Lisp_
+ ]$Lisp,_
+ ["q"::S,"sv"::S,"pt"::S,"work"::S,"a"::S,"b"::S,"ifail"::S]$Lisp,_
+ [([mArg::Any,nArg::Any,ldaArg::Any,ncolbArg::Any,ldbArg::Any,_
+ wantqArg::Any,ldqArg::Any,wantpArg::Any,ldptArg::Any,_
+ ifailArg::Any,aArg::Any,bArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+ f02xef(mArg:Integer,nArg:Integer,ldaArg:Integer,_
+ ncolbArg:Integer,ldbArg:Integer,wantqArg:Boolean,_
+ ldqArg:Integer,wantpArg:Boolean,ldphArg:Integer,_
+ aArg:Matrix Complex DoubleFloat,bArg:Matrix Complex DoubleFloat,_
+ ifailArg:Integer): Result ==
+ -- This segment added by hand, to deal with an assumed size array GDN
+ tem : Integer := (min(mArg,nArg)-1)
+ rLen : Integer :=
+ zero? ncolbArg and not wantqArg and not wantpArg => 2*tem
+ zero? ncolbArg and wantpArg and not wantqArg => 3*tem
+ not wantpArg =>
+ ncolbArg > 0 or wantqArg => 3*tem
+ 5*tem
+ 5*tem
+ cLen : Integer :=
+ mArg >= nArg =>
+ wantqArg and wantpArg => 2*(nArg + max(nArg**2,ncolbArg))
+ wantqArg and not wantpArg => 2*(nArg + max(nArg**2+nArg,ncolbArg))
+ 2*(nArg + max(nArg,ncolbArg))
+ wantpArg => 2*(mArg**2 + mArg)
+ 2*mArg
+ svLength : Integer :=
+ min(mArg,nArg)
+ [(invokeNagman(NIL$Lisp,_
+ "f02xef",_
+ ["m"::S,"n"::S,"lda"::S,"ncolb"::S,"ldb"::S_
+ ,"wantq"::S,"ldq"::S,"wantp"::S,"ldph"::S,"ifail"::S_
+ ,"q"::S,"sv"::S,"ph"::S,"rwork"::S,"a"::S_
+ ,"b"::S,"cwork"::S]$Lisp,_
+ ["q"::S,"sv"::S,"ph"::S,"rwork"::S,"cwork"::S]$Lisp,_
+ [["double"::S,["sv"::S,svLength]$Lisp,["rwork"::S,rLen]$Lisp_
+ ]$Lisp_
+ ,["integer"::S,"m"::S,"n"::S,"lda"::S,"ncolb"::S_
+ ,"ldb"::S,"ldq"::S,"ldph"::S,"ifail"::S]$Lisp_
+ ,["logical"::S,"wantq"::S,"wantp"::S]$Lisp_
+ ,["double complex"::S,["q"::S,"ldq"::S,"m"::S]$Lisp,_
+ ["ph"::S,"ldph"::S,"n"::S]$Lisp,["a"::S,"lda"::S,"n"::S]$Lisp,_
+ ["b"::S,"ldb"::S,"ncolb"::S]$Lisp,["cwork"::S,cLen]$Lisp]$Lisp_
+ ]$Lisp,_
+ ["q"::S,"sv"::S,"ph"::S,"rwork"::S,"a"::S,"b"::S,"ifail"::S]$Lisp,_
+ [([mArg::Any,nArg::Any,ldaArg::Any,ncolbArg::Any,ldbArg::Any,_
+ wantqArg::Any,ldqArg::Any,wantpArg::Any,ldphArg::Any,_
+ ifailArg::Any,aArg::Any,bArg::Any ])_
+ @List Any]$Lisp)$Lisp)_
+ pretend List (Record(key:Symbol,entry:Any))]$Result
+
+\end{chunk}
+\begin{chunk}{NAGF02.dotabb}
+"NAGF02" [color="#FF4488",href="bookvol10.4.pdf#nameddest=NAGF02"]
+"ALIST" [color="#88FF44",href="bookvol10.3.pdf#nameddest=ALIST"]
+"NAGE02" > "ALIST"
+
+\end{chunk}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\section{package NAGE02 NagFittingPackage}
+\begin{chunk}{NagFittingPackage.input}
+)set break resume
+)sys rm -f NagFittingPackage.output
+)spool NagFittingPackage.output
+)set message test on
+)set message auto off
+)clear all
+
+S 1 of 215
+)show NagFittingPackage
+R
+R NagFittingPackage is a package constructor
+R Abbreviation for NagFittingPackage is NAGE02
+R This constructor is exposed in this frame.
+R Issue )edit bookvol10.4.pamphlet to see algebra source code for NAGE02
+R
+R Operations 
+R e02adf : (Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R e02aef : (Integer,Matrix(DoubleFloat),DoubleFloat,Integer) -> Result
+R e02agf : (Integer,Integer,Integer,DoubleFloat,DoubleFloat,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer,Matrix(Integer),Integer,Integer,Integer) -> Result
+R e02ahf : (Integer,DoubleFloat,DoubleFloat,Matrix(DoubleFloat),Integer,Integer,Integer,Integer,Integer) -> Result
+R e02ajf : (Integer,DoubleFloat,DoubleFloat,Matrix(DoubleFloat),Integer,Integer,DoubleFloat,Integer,Integer,Integer) -> Result
+R e02akf : (Integer,DoubleFloat,DoubleFloat,Matrix(DoubleFloat),Integer,Integer,DoubleFloat,Integer) -> Result
+R e02baf : (Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R e02bbf : (Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),DoubleFloat,Integer) -> Result
+R e02bcf : (Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),DoubleFloat,Integer,Integer) -> Result
+R e02bdf : (Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R e02bef : (String,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),DoubleFloat,Integer,Integer,Integer,Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(Integer)) -> Result
+R e02daf : (Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(Integer),Integer,Integer,Integer,DoubleFloat,Matrix(DoubleFloat),Integer) -> Result
+R e02dcf : (String,Integer,Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),DoubleFloat,Integer,Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(Integer),Integer) -> Result
+R e02ddf : (String,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),DoubleFloat,Integer,Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R e02def : (Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R e02dff : (Integer,Integer,Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Matrix(DoubleFloat),Integer,Integer,Integer) -> Result
+R e02gaf : (Integer,Integer,Integer,DoubleFloat,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer) -> Result
+R e02zaf : (Integer,Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer,Matrix(DoubleFloat),Matrix(DoubleFloat),Integer,Integer,Integer) -> Result
+R
+E 1
+
+)clear all
+
+
+S 2 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 2
+
+S 3 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 3
+
+S 4 of 215
+m:=11
+R
+R
+R (3) 11
+R Type: PositiveInteger
+E 4
+
+S 5 of 215
+kplus1:=4
+R
+R
+R (4) 4
+R Type: PositiveInteger
+E 5
+
+S 6 of 215
+nrows:=50
+R
+R
+R (5) 50
+R Type: PositiveInteger
+E 6
+
+S 7 of 215
+x:Matrix SF:=
+ [[1.00 ,2.10 ,3.10 ,3.90 ,4.90 ,5.80 ,_
+ 6.50 ,7.10 ,7.80 ,8.40 ,9.00 ]]
+R
+R
+R (6)
+R [
+R [1., 2.0999999999999996, 3.0999999999999996, 3.8999999999999999,
+R 4.9000000000000004, 5.7999999999999998, 6.5, 7.0999999999999996,
+R 7.7999999999999998, 8.3999999999999986, 9.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 7
+
+S 8 of 215
+y:Matrix SF:=
+ [[10.40 ,7.90 ,4.70 ,2.50 ,1.20 ,2.20 ,_
+ 5.10 ,9.20 ,16.10 ,24.50 ,35.30 ]]
+R
+R
+R (7)
+R [
+R [10.399999999999999, 7.9000000000000004, 4.6999999999999993, 2.5, 1.2,
+R 2.2000000000000002, 5.0999999999999996, 9.1999999999999993,
+R 16.100000000000001, 24.5, 35.299999999999997]
+R ]
+R Type: Matrix(DoubleFloat)
+E 8
+
+S 9 of 215
+w:Matrix SF:=
+ [[1.00 ,1.00 ,1.00 ,1.00 ,1.00 ,0.80 ,_
+ 0.80 ,0.70 ,0.50 ,0.30 ,0.20 ]]
+R
+R
+R (8)
+R [
+R [1., 1., 1., 1., 1., 0.80000000000000004, 0.80000000000000004,
+R 0.69999999999999996, 0.5, 0.29999999999999999, 0.20000000000000001]
+R ]
+R Type: Matrix(DoubleFloat)
+E 9
+
+S 10 of 215
+ result:=e02adf(m,kplus1,nrows,x,y,w,1)
+E 10
+
+)clear all
+
+
+S 11 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 11
+
+S 12 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 12
+
+S 13 of 215
+nplus1:=5
+R
+R
+R (3) 5
+R Type: PositiveInteger
+E 13
+
+S 14 of 215
+a:Matrix SF:= [[2.0000 ,0.5000 ,0.2500 ,0.1250 ,0.0625 ]]
+R
+R
+R (4) [2. 0.5 0.25 0.125 6.25E-2]
+R Type: Matrix(DoubleFloat)
+E 14
+
+S 15 of 215
+xcap:=-1.0
+R
+R
+R (5) - 1.0
+R Type: Float
+E 15
+
+S 16 of 215
+ result:=e02aef(nplus1,a,xcap, 1)
+E 16
+
+)clear all
+
+
+S 17 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 17
+
+S 18 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 18
+
+S 19 of 215
+m:=5
+R
+R
+R (3) 5
+R Type: PositiveInteger
+E 19
+
+S 20 of 215
+kplus1:=5
+R
+R
+R (4) 5
+R Type: PositiveInteger
+E 20
+
+S 21 of 215
+nrows:=6
+R
+R
+R (5) 6
+R Type: PositiveInteger
+E 21
+
+S 22 of 215
+xmin:=0.0
+R
+R
+R (6) 0.0
+R Type: Float
+E 22
+
+S 23 of 215
+xmax:=4.0
+R
+R
+R (7) 4.0
+R Type: Float
+E 23
+
+S 24 of 215
+x:Matrix SF:= [[0.5 ,1.0 ,2.0 ,2.5 ,3.0 ]]
+R
+R
+R (8) [0.5 1. 2. 2.5 3.]
+R Type: Matrix(DoubleFloat)
+E 24
+
+S 25 of 215
+y:Matrix SF:= [[0.03 ,-0.75 ,-1.0 ,-0.1 ,1.75 ]]
+R
+R
+R (9) [2.9999999999999999E-2 - 0.75 - 1. - 9.9999999999999992E-2 1.75]
+R Type: Matrix(DoubleFloat)
+E 25
+
+S 26 of 215
+w:Matrix SF:= [[1.0 ,1.0 ,1.0 ,1.0 ,1.0 ]]
+R
+R
+R (10) [1. 1. 1. 1. 1.]
+R Type: Matrix(DoubleFloat)
+E 26
+
+S 27 of 215
+mf:=2
+R
+R
+R (11) 2
+R Type: PositiveInteger
+E 27
+
+S 28 of 215
+xf:Matrix SF:= [[0.0 ,4.0 ]]
+R
+R
+R (12) [0. 4.]
+R Type: Matrix(DoubleFloat)
+E 28
+
+S 29 of 215
+yf:Matrix SF:=
+ [[1.0 ,-2.0 ,9.0 ,0.0 ,0.0 ,0.0 ,_
+ 0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,0.0 ,_
+ 0.0 ,0.0 ,0.0 ]]
+R
+R
+R (13) [1. - 2. 9. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 29
+
+S 30 of 215
+lyf:=15
+R
+R
+R (14) 15
+R Type: PositiveInteger
+E 30
+
+S 31 of 215
+ip:Matrix Integer:= [[1 ,0 ]]
+R
+R
+R (15) [1 0]
+R Type: Matrix(Integer)
+E 31
+
+S 32 of 215
+lwrk:=200
+R
+R
+R (16) 200
+R Type: PositiveInteger
+E 32
+
+S 33 of 215
+liwrk:=12
+R
+R
+R (17) 12
+R Type: PositiveInteger
+E 33
+
+S 34 of 215
+ result:=e02agf(m,kplus1,nrows,xmin,xmax,x,y,w,mf,xf,yf,lyf,ip,lwrk,liwrk, 1)
+E 34
+
+)clear all
+
+
+S 35 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 35
+
+S 36 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 36
+
+S 37 of 215
+np1:=7
+R
+R
+R (3) 7
+R Type: PositiveInteger
+E 37
+
+S 38 of 215
+xmin:=-0.5
+R
+R
+R (4) - 0.5
+R Type: Float
+E 38
+
+S 39 of 215
+xmax:=2.5
+R
+R
+R (5) 2.5
+R Type: Float
+E 39
+
+S 40 of 215
+a:Matrix SF:= [[2.53213 ,1.13032 ,0.27150 ,0.04434 ,_
+ 0.00547 ,0.00054 ,0.00004 ]]
+R
+R
+R (6)
+R [
+R [2.53213, 1.13032, 0.27149999999999996, 4.4339999999999997E-2,
+R 5.4699999999999992E-3, 5.399999999999999E-4, 3.9999999999999996E-5]
+R ]
+R Type: Matrix(DoubleFloat)
+E 40
+
+S 41 of 215
+ia1:=1
+R
+R
+R (7) 1
+R Type: PositiveInteger
+E 41
+
+S 42 of 215
+la:=7
+R
+R
+R (8) 7
+R Type: PositiveInteger
+E 42
+
+S 43 of 215
+iadif1:=1
+R
+R
+R (9) 1
+R Type: PositiveInteger
+E 43
+
+S 44 of 215
+ladif:=7
+R
+R
+R (10) 7
+R Type: PositiveInteger
+E 44
+
+S 45 of 215
+ result:=e02ahf(np1,xmin,xmax,a,ia1,la,iadif1,ladif, 1)
+E 45
+
+)clear all
+
+
+S 46 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 46
+
+S 47 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 47
+
+S 48 of 215
+np1:=7
+R
+R
+R (3) 7
+R Type: PositiveInteger
+E 48
+
+S 49 of 215
+xmin:=-0.5
+R
+R
+R (4) - 0.5
+R Type: Float
+E 49
+
+S 50 of 215
+xmax:=2.5
+R
+R
+R (5) 2.5
+R Type: Float
+E 50
+
+S 51 of 215
+a:Matrix SF:=
+ [[2.53213 ,1.13032 ,0.27150 ,0.04434 ,0.00547 ,0.00054 ,0.00004 ]]
+R
+R
+R (6)
+R [
+R [2.53213, 1.13032, 0.27149999999999996, 4.4339999999999997E-2,
+R 5.4699999999999992E-3, 5.399999999999999E-4, 3.9999999999999996E-5]
+R ]
+R Type: Matrix(DoubleFloat)
+E 51
+
+S 52 of 215
+ia1:=1
+R
+R
+R (7) 1
+R Type: PositiveInteger
+E 52
+
+S 53 of 215
+la:=7
+R
+R
+R (8) 7
+R Type: PositiveInteger
+E 53
+
+S 54 of 215
+qatm1:=0.0
+R
+R
+R (9) 0.0
+R Type: Float
+E 54
+
+S 55 of 215
+iaint1:=1
+R
+R
+R (10) 1
+R Type: PositiveInteger
+E 55
+
+S 56 of 215
+laint:=8
+R
+R
+R (11) 8
+R Type: PositiveInteger
+E 56
+
+S 57 of 215
+ result:=e02ajf(np1,xmin,xmax,a,ia1,la,qatm1,iaint1,laint, 1)
+E 57
+
+)clear all
+
+
+S 58 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 58
+
+S 59 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 59
+
+S 60 of 215
+np1:=7
+R
+R
+R (3) 7
+R Type: PositiveInteger
+E 60
+
+S 61 of 215
+xmin:=-0.5
+R
+R
+R (4) - 0.5
+R Type: Float
+E 61
+
+S 62 of 215
+xmax:=2.5
+R
+R
+R (5) 2.5
+R Type: Float
+E 62
+
+S 63 of 215
+a:Matrix SF:=
+ [[2.53213 ,1.13032 ,0.27150 ,0.04434 ,0.00547 ,0.00054 ,0.00004 ]]
+R
+R
+R (6)
+R [
+R [2.53213, 1.13032, 0.27149999999999996, 4.4339999999999997E-2,
+R 5.4699999999999992E-3, 5.399999999999999E-4, 3.9999999999999996E-5]
+R ]
+R Type: Matrix(DoubleFloat)
+E 63
+
+S 64 of 215
+ia1:=1
+R
+R
+R (7) 1
+R Type: PositiveInteger
+E 64
+
+S 65 of 215
+la:=7
+R
+R
+R (8) 7
+R Type: PositiveInteger
+E 65
+
+S 66 of 215
+x:=-0.5
+R
+R
+R (9) - 0.5
+R Type: Float
+E 66
+
+S 67 of 215
+ result:=e02akf(np1,xmin,xmax,a,ia1,la,x, 1)
+E 67
+
+)clear all
+
+
+S 68 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 68
+
+S 69 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 69
+
+S 70 of 215
+m:=14
+R
+R
+R (3) 14
+R Type: PositiveInteger
+E 70
+
+S 71 of 215
+ncap7:=12
+R
+R
+R (4) 12
+R Type: PositiveInteger
+E 71
+
+S 72 of 215
+x:Matrix SF:=
+ [[0.20 ,0.47 ,0.74 ,1.09 ,1.60 ,1.90 ,2.60 ,3.10 ,4.00 ,5.15,_
+ 6.17 ,8.00 ,10.00 ,12.00 ]]
+R
+R
+R (5)
+R [
+R [0.20000000000000001, 0.46999999999999997, 0.73999999999999999,
+R 1.0899999999999999, 1.6000000000000001, 1.8999999999999999,
+R 2.5999999999999996, 3.0999999999999996, 4., 5.1500000000000004,
+R 6.1699999999999999, 8., 10., 12.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 72
+
+S 73 of 215
+y:Matrix SF:=
+ [[0.00 ,2.00 ,4.00 ,6.00 ,8.00 ,8.62 ,9.10 ,8.90,_
+ 8.15 ,7.00 ,6.00 ,4.54 ,3.39 ,2.56 ]]
+R
+R
+R (6)
+R [
+R [0., 2., 4., 6., 8., 8.6199999999999992, 9.0999999999999996,
+R 8.8999999999999986, 8.1499999999999986, 7., 6., 4.5399999999999991,
+R 3.3899999999999997, 2.5599999999999996]
+R ]
+R Type: Matrix(DoubleFloat)
+E 73
+
+S 74 of 215
+w:Matrix SF:=
+ [[0.20 ,0.20 ,0.30 ,0.70 ,0.90 ,1.00 ,_
+ 1.00 ,1.00 ,0.80 ,0.50 ,0.70 ,1.00 ,1.00 ,1.00 ]]
+R
+R
+R (7)
+R [
+R [0.20000000000000001, 0.20000000000000001, 0.29999999999999999,
+R 0.69999999999999996, 0.89999999999999991, 1., 1., 1.,
+R 0.80000000000000004, 0.5, 0.69999999999999996, 1., 1., 1.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 74
+
+S 75 of 215
+lamda:Matrix SF:=
+ [[0.0 ,0.0 ,0.0 ,0.0 ,1.50 ,2.60 ,_
+ 4.00 ,8.00 ,0.0 ,0.0 ,0.0 ,0.0 ]]
+R
+R
+R (8) [0. 0. 0. 0. 1.5 2.5999999999999996 4. 8. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 75
+
+S 76 of 215
+ result:=e02baf(m,ncap7,x,y,w,lamda, 1)
+E 76
+
+)clear all
+
+
+S 77 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 77
+
+S 78 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 78
+
+S 79 of 215
+ncap7:=11
+R
+R
+R (3) 11
+R Type: PositiveInteger
+E 79
+
+S 80 of 215
+lamda:Matrix SF:=
+ [[1.00 ,1.00 ,1.00 ,1.00 ,3.00 ,6.00 ,_
+ 8.00 ,9.00 ,9.00 ,9.00 ,9.00 ]]
+R
+R
+R (4) [1. 1. 1. 1. 3. 6. 8. 9. 9. 9. 9.]
+R Type: Matrix(DoubleFloat)
+E 80
+
+S 81 of 215
+c:Matrix SF:=
+ [[1.00 ,2.00 ,4.00 ,7.00 ,6.00 ,4.00 ,_
+ 3.00 ,0.00 ,0.00 ,0.00 ,0.00 ]]
+R
+R
+R (5) [1. 2. 4. 7. 6. 4. 3. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 81
+
+S 82 of 215
+x:=2.0
+R
+R
+R (6) 2.0
+R Type: Float
+E 82
+
+S 83 of 215
+ result:=e02bbf(ncap7,lamda,c,x,1)
+E 83
+
+)clear all
+
+
+S 84 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 84
+
+S 85 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 85
+
+S 86 of 215
+ncap7:=14
+R
+R
+R (3) 14
+R Type: PositiveInteger
+E 86
+
+S 87 of 215
+lamda:Matrix SF:=
+ [[0.0 ,0.00 ,0.00 ,0.00 ,1.00 ,3.00 ,3.00 ,_
+ 3.00 ,4.00 ,4.00 ,6.00,6.00 ,6.00 ,6.00 ]]
+R
+R
+R (4) [0. 0. 0. 0. 1. 3. 3. 3. 4. 4. 6. 6. 6. 6.]
+R Type: Matrix(DoubleFloat)
+E 87
+
+S 88 of 215
+c:Matrix SF:=
+ [[10.00 ,12.00 ,13.00 ,15.00 ,22.00 ,26.00 ,_
+ 24.00 ,18.00 ,14.00 ,12.00 ,0.00 ,0.00 ,0.00 ,0.00 ]]
+R
+R
+R (5) [10. 12. 13. 15. 22. 26. 24. 18. 14. 12. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 88
+
+S 89 of 215
+x:=2.0
+R
+R
+R (6) 2.0
+R Type: Float
+E 89
+
+S 90 of 215
+left:=1
+R
+R
+R (7) 1
+R Type: PositiveInteger
+E 90
+
+S 91 of 215
+ result:=e02bcf(ncap7,lamda,c,x,left, 1)
+E 91
+
+)clear all
+
+
+S 92 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 92
+
+S 93 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 93
+
+S 94 of 215
+ncap7:=14
+R
+R
+R (3) 14
+R Type: PositiveInteger
+E 94
+
+S 95 of 215
+lamda:Matrix SF:=
+ [[0.0 ,0.00 ,0.00 ,0.00 ,1.00 ,3.00 ,_
+ 3.00 ,3.00 ,4.00 ,4.00 ,6.00 ,6.00 ,6.00 ,6.00 ]]
+R
+R
+R (4) [0. 0. 0. 0. 1. 3. 3. 3. 4. 4. 6. 6. 6. 6.]
+R Type: Matrix(DoubleFloat)
+E 95
+
+S 96 of 215
+c:Matrix SF:=
+ [[10.00 ,12.00 ,13.00 ,15.00 ,22.00 ,26.00 ,_
+ 24.00 ,18.00 ,14.00 ,12.00 ,0.00 ,0.00 ,0.00 ,0.00 ]]
+R
+R
+R (5) [10. 12. 13. 15. 22. 26. 24. 18. 14. 12. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 96
+
+S 97 of 215
+ result:=e02bdf(ncap7,lamda,c, 1)
+E 97
+
+)clear all
+
+
+S 98 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 98
+
+S 99 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 99
+
+S 100 of 215
+start:="c"
+R
+R
+R (3) "c"
+R Type: String
+E 100
+
+S 101 of 215
+m:=15
+R
+R
+R (4) 15
+R Type: PositiveInteger
+E 101
+
+S 102 of 215
+x:Matrix SF:=
+ [[0.00 ,0.50 ,1.00 ,1.50 ,2.00 ,2.50 ,3.00 ,_
+ 4.00 ,4.50 ,5.00 ,5.50 ,6.00 ,7.00 ,7.50 ,8.00 ]]
+R
+R
+R (5) [0. 0.5 1. 1.5 2. 2.5 3. 4. 4.5 5. 5.5 6. 7. 7.5 8.]
+R Type: Matrix(DoubleFloat)
+E 102
+
+S 103 of 215
+y:Matrix SF:=
+ [[-1.1 ,-0.372 ,0.431 ,1.69 ,2.11 ,3.10 ,4.23 ,4.35 ,4.81 ,_
+ 4.61 ,4.79 ,5.23 ,6.35 ,7.19 ,7.97 ]]
+R
+R
+R (6)
+R [
+R [- 1.0999999999999999, - 0.372, 0.43099999999999999, 1.6899999999999999,
+R 2.1099999999999999, 3.0999999999999996, 4.2300000000000004,
+R 4.3499999999999996, 4.8099999999999996, 4.6099999999999994,
+R 4.7899999999999991, 5.2300000000000004, 6.3499999999999996,
+R 7.1899999999999995, 7.9699999999999998]
+R ]
+R Type: Matrix(DoubleFloat)
+E 103
+
+S 104 of 215
+w:Matrix SF:=
+ [[1.00 ,2.00 ,1.50 ,1.00 ,3.00 ,1.00 ,0.50 ,_
+ 1.00 ,2.00 ,2.50 ,1.00 ,3.00 ,1.00 ,2.00 ,1.00 ]]
+R
+R
+R (7) [1. 2. 1.5 1. 3. 1. 0.5 1. 2. 2.5 1. 3. 1. 2. 1.]
+R Type: Matrix(DoubleFloat)
+E 104
+
+S 105 of 215
+s:=1.0
+R
+R
+R (8) 1.0
+R Type: Float
+E 105
+
+S 106 of 215
+nest:=54
+R
+R
+R (9) 54
+R Type: PositiveInteger
+E 106
+
+S 107 of 215
+lwrk:=1105
+R
+R
+R (10) 1105
+R Type: PositiveInteger
+E 107
+
+S 108 of 215
+n:=0
+R
+R
+R (11) 0
+R Type: NonNegativeInteger
+E 108
+
+S 109 of 215
+lamda:=new(1,54,0.0)$Matrix DoubleFloat
+R
+R
+R (12)
+R [
+R [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 109
+
+S 110 of 215
+ifail:=-1
+R
+R
+R (13) - 1
+R Type: Integer
+E 110
+
+S 111 of 215
+wrk:=new(1,1105,0.0)$Matrix DoubleFloat
+R
+R
+R (14)
+R [
+R [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 111
+
+S 112 of 215
+iwrk:=new(1,54,0)$Matrix Integer
+R
+R
+R (15)
+R [
+R [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+R 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+R 0, 0, 0, 0, 0, 0]
+R ]
+R Type: Matrix(Integer)
+E 112
+
+S 113 of 215
+ result:=e02bef(start,m,x,y,w,s,nest,lwrk,n,lamda,ifail,wrk,iwrk)
+E 113
+
+)clear all
+
+
+S 114 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 114
+
+S 115 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 115
+
+S 116 of 215
+m:=30
+R
+R
+R (3) 30
+R Type: PositiveInteger
+E 116
+
+S 117 of 215
+px:=8
+R
+R
+R (4) 8
+R Type: PositiveInteger
+E 117
+
+S 118 of 215
+py:=10
+R
+R
+R (5) 10
+R Type: PositiveInteger
+E 118
+
+S 119 of 215
+x:Matrix SF:=
+ [[-0.52 ,-0.61 ,0.93 ,0.09 ,0.88 ,-0.70 ,1 ,1 ,0.3 ,-0.77 ,_
+ -0.23 ,-1 ,-0.26 ,-0.83 ,0.22 ,0.89 ,-0.80 ,-0.88 ,0.68 ,_
+ -0.14 ,0.67 ,-0.90 ,-0.84 ,0.84 ,0.15 ,-0.91 ,-0.35 ,-0.16 ,_
+ -0.35 ,-1 ]]
+R
+R
+R (6)
+R [
+R [- 0.51999999999999991, - 0.6100000000000001, 0.92999999999999994,
+R 8.9999999999999997E-2, 0.87999999999999989, - 0.69999999999999996, 1.,
+R 1., 0.29999999999999999, - 0.76999999999999991, - 0.22999999999999998,
+R - 1., - 0.25999999999999995, - 0.83000000000000007, 0.21999999999999997,
+R 0.8899999999999999, - 0.79999999999999993, - 0.87999999999999989,
+R 0.67999999999999994, - 0.13999999999999999, 0.66999999999999993,
+R - 0.89999999999999991, - 0.84000000000000008, 0.83999999999999997,
+R 0.14999999999999999, - 0.90999999999999992, - 0.34999999999999998,
+R - 0.15999999999999998, - 0.34999999999999998, - 1.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 119
+
+S 120 of 215
+y:Matrix SF:=
+ [[0.60 ,-0.95 ,0.87 ,0.84 ,0.17 ,-0.87 ,1 ,0.1 ,0.24 ,-0.77 ,_
+ 0.32 ,1 ,-0.63 ,-0.66 ,0.93 ,0.15 ,0.99 ,-0.54 ,0.44 ,-0.72 ,_
+ 0.63 ,-0.40 ,0.20 ,0.43 ,0.28 ,-0.24 ,0.86 ,-0.41 ,-0.05 ,-1 ]]
+R
+R
+R (7)
+R [
+R [0.59999999999999998, - 0.94999999999999996, 0.87, 0.83999999999999997,
+R 0.16999999999999998, - 0.87000000000000011, 1., 0.10000000000000001,
+R 0.23999999999999999, - 0.76999999999999991, 0.31999999999999995, 1.,
+R - 0.62999999999999989, - 0.65999999999999992, 0.92999999999999994,
+R 0.14999999999999999, 0.98999999999999999, - 0.53999999999999992,
+R 0.43999999999999995, - 0.71999999999999997, 0.62999999999999989,
+R - 0.39999999999999997, 0.20000000000000001, 0.42999999999999999,
+R 0.28000000000000003, - 0.23999999999999999, 0.85999999999999999,
+R - 0.41000000000000003, - 4.9999999999999996E-2, - 1.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 120
+
+S 121 of 215
+f:Matrix SF:=
+ [[0.93 ,-1.79 ,0.36 ,0.52 ,0.49 ,-1.76 ,0.33 ,0.48 ,0.65 ,_
+ -1.82 ,0.92 ,1 ,8.88 ,-2.01 ,0.47 ,0.49 ,0.84 ,-2.42 ,_
+ 0.47 ,7.15 ,0.44 ,-3.34 ,2.78 ,0.44 ,0.70 ,-6.52 ,0.66 ,_
+ 2.32 ,1.66 ,-1 ]]
+R
+R
+R (8)
+R [
+R [0.92999999999999994, - 1.7899999999999998, 0.35999999999999999,
+R 0.52000000000000002, 0.48999999999999999, - 1.7599999999999998,
+R 0.32999999999999996, 0.47999999999999998, 0.64999999999999991,
+R - 1.8199999999999998, 0.91999999999999993, 1., 8.879999999999999,
+R - 2.0099999999999998, 0.46999999999999997, 0.48999999999999999,
+R 0.83999999999999997, - 2.4199999999999999, 0.46999999999999997,
+R 7.1500000000000004, 0.43999999999999995, - 3.3399999999999999,
+R 2.7799999999999998, 0.43999999999999995, 0.69999999999999996,
+R - 6.5199999999999996, 0.65999999999999992, 2.3199999999999998,
+R 1.6599999999999999, - 1.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 121
+
+S 122 of 215
+w:Matrix SF:=
+ [[10 ,10 ,10 ,10 ,10 ,10 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,_
+ 1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ]]
+R
+R
+R (9)
+R [
+R [10., 10., 10., 10., 10., 10., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
+R 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 122
+
+S 123 of 215
+mu:Matrix SF:= [[0 ,0 ,0 ,0 ,-0.50, 0.00 ,0 ,0 ,0 ,0 ]]
+R
+R
+R (10) [0. 0. 0. 0. - 0.5 0. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 123
+
+S 124 of 215
+point:Matrix Integer:=
+ [[3 ,6 ,4 ,5 ,7 ,10 ,8 ,9 ,11 ,13 ,12 ,15 ,14 ,18 ,_
+ 16 ,17 ,19 ,20 ,21 ,30 ,23 ,26 ,24 ,25 ,27 ,28 ,_
+ 0 ,29 ,0 ,0 ,2 ,22 ,1 ,0 ,0 ,0,0 ,0 ,0 ,0 ,0 ,0 ,0 ]]
+R
+R
+R (11)
+R [
+R [3, 6, 4, 5, 7, 10, 8, 9, 11, 13, 12, 15, 14, 18, 16, 17, 19, 20, 21, 30,
+R 23, 26, 24, 25, 27, 28, 0, 29, 0, 0, 2, 22, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+R 0]
+R ]
+R Type: Matrix(Integer)
+E 124
+
+S 125 of 215
+npoint:=43
+R
+R
+R (12) 43
+R Type: PositiveInteger
+E 125
+
+S 126 of 215
+nc:=24
+R
+R
+R (13) 24
+R Type: PositiveInteger
+E 126
+
+S 127 of 215
+nws:=1750
+R
+R
+R (14) 1750
+R Type: PositiveInteger
+E 127
+
+S 128 of 215
+eps:=0.000001
+R
+R
+R (15) 0.000001
+R Type: Float
+E 128
+
+S 129 of 215
+lamda:Matrix SF:= [[0 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ]]
+R
+R
+R (16) [0. 0. 0. 0. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 129
+
+S 130 of 215
+ result:=e02daf(m,px,py,x,y,f,w,mu,point,npoint,nc,nws,eps,lamda,1)
+E 130
+
+)clear all
+
+
+S 131 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 131
+
+S 132 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 132
+
+S 133 of 215
+start:="c"
+R
+R
+R (3) "c"
+R Type: String
+E 133
+
+S 134 of 215
+mx:=11
+R
+R
+R (4) 11
+R Type: PositiveInteger
+E 134
+
+S 135 of 215
+x:Matrix SF:= [[0 ,0.5 ,1 ,1.5 ,2 ,2.5 ,3 ,3.5 ,4 ,4.5 ,5 ]]
+R
+R
+R (5) [0. 0.5 1. 1.5 2. 2.5 3. 3.5 4. 4.5 5.]
+R Type: Matrix(DoubleFloat)
+E 135
+
+S 136 of 215
+my:=9
+R
+R
+R (6) 9
+R Type: PositiveInteger
+E 136
+
+S 137 of 215
+y:Matrix SF:= [[0 ,0.5 ,1 ,1.5 ,2 ,2.5 ,3 ,3.5 ,4 ]]
+R
+R
+R (7) [0. 0.5 1. 1.5 2. 2.5 3. 3.5 4.]
+R Type: Matrix(DoubleFloat)
+E 137
+
+S 138 of 215
+f:Matrix SF:=
+ [[1 ,0.88758 ,0.5403 ,0.070737 ,-0.41515 ,-0.80114 ,_
+ -0.97999 ,-0.93446 ,-0.65664 ,1.5 ,1.3564 ,0.82045 ,_
+ 0.10611 ,-0.62422 ,-1.2317 ,-1.485 ,-1.3047 ,-0.98547 ,_
+ 2.06 ,1.7552 ,1.0806 ,0.15147 ,-0.83229 ,-1.6023 ,_
+ -1.97 ,-1.8729 ,-1.4073 ,2.57 ,2.124 ,1.3508 ,0.17684 ,_
+ -1.0404 ,-2.0029 ,-2.475 ,-2.3511 ,-1.6741 ,3 ,2.6427 ,_
+ 1.6309 ,0.21221 ,-1.2484 ,-2.2034 ,-2.97 ,-2.8094 ,_
+ -1.9809 ,3.5 ,3.1715 ,1.8611 ,0.24458 ,-1.4565 ,-2.864 ,_
+ -3.265 ,-3.2776 ,-2.2878 ,4.04 ,3.5103 ,2.0612 ,0.28595 ,_
+ -1.6946 ,-3.2046 ,-3.96 ,-3.7958 ,-2.6146 ,4.5 ,3.9391 ,_
+ 2.4314 ,0.31632 ,-1.8627 ,-3.6351 ,-4.455 ,-4.2141 ,_
+ -2.9314 ,5.04 ,4.3879 ,2.7515 ,0.35369 ,-2.0707 ,-4.0057 ,_
+ -4.97 ,-4.6823 ,-3.2382 ,5.505 ,4.8367 ,2.9717 ,0.38505 ,_
+ -2.2888 ,-4.4033 ,-5.445 ,-5.1405 ,-3.595 ,6 ,5.2755 ,_
+ 3.2418 ,0.42442 ,-2.4769 ,-4.8169 ,-5.93 ,-5.6387 ,-3.9319 ]]
+R
+R
+R (8)
+R [
+R [1., 0.88758000000000004, 0.5403, 7.0736999999999994E-2,
+R - 0.41514999999999996, - 0.80113999999999996, - 0.97998999999999992,
+R - 0.93446000000000007, - 0.65663999999999989, 1.5, 1.3563999999999998,
+R 0.8204499999999999, 0.10611, - 0.62422, - 1.2316999999999998,
+R - 1.4849999999999999, - 1.3047, - 0.98547000000000007,
+R 2.0599999999999996, 1.7551999999999999, 1.0806, 0.15146999999999999,
+R - 0.83228999999999997, - 1.6022999999999998, - 1.9700000000000002,
+R - 1.8728999999999998, - 1.4073000000000002, 2.5699999999999998,
+R 2.1239999999999997, 1.3508, 0.17684, - 1.0404, - 2.0029000000000003,
+R - 2.4749999999999996, - 2.3510999999999997, - 1.6741000000000001, 3.,
+R 2.6426999999999996, 1.6309, 0.21221000000000001, - 1.2484000000000002,
+R - 2.2034000000000002, - 2.9699999999999998, - 2.8093999999999997,
+R - 1.9808999999999999, 3.5, 3.1715, 1.8611, 0.24457999999999999,
+R - 1.4565000000000001, - 2.8639999999999999, - 3.2649999999999997,
+R - 3.2775999999999996, - 2.2877999999999998, 4.0399999999999991, 3.5103,
+R 2.0611999999999999, 0.28594999999999998, - 1.6945999999999999,
+R - 3.2045999999999997, - 3.96, - 3.7957999999999998, - 2.6146000000000003,
+R 4.5, 3.9390999999999998, 2.4314, 0.31631999999999999,
+R - 1.8626999999999998, - 3.6351000000000004, - 4.4549999999999992,
+R - 4.2140999999999993, - 2.9313999999999996, 5.0399999999999991,
+R 4.3879000000000001, 2.7515000000000001, 0.35368999999999995,
+R - 2.0707000000000004, - 4.0056999999999992, - 4.9700000000000006,
+R - 4.6822999999999997, - 3.2382, 5.5049999999999999, 4.8367000000000004,
+R 2.9716999999999998, 0.38505, - 2.2887999999999997, - 4.4032999999999998,
+R - 5.4449999999999994, - 5.1404999999999994, - 3.5949999999999998, 6.,
+R 5.2754999999999992, 3.2417999999999996, 0.42442000000000002,
+R - 2.4768999999999997, - 4.8168999999999995, - 5.9299999999999997,
+R - 5.6386999999999992, - 3.9318999999999997]
+R ]
+R Type: Matrix(DoubleFloat)
+E 138
+
+S 139 of 215
+s:=0.1
+R
+R
+R (9) 0.1
+R Type: Float
+E 139
+
+S 140 of 215
+nxest:=15
+R
+R
+R (10) 15
+R Type: PositiveInteger
+E 140
+
+S 141 of 215
+nyest:=13
+R
+R
+R (11) 13
+R Type: PositiveInteger
+E 141
+
+S 142 of 215
+lwrk:=592
+R
+R
+R (12) 592
+R Type: PositiveInteger
+E 142
+
+S 143 of 215
+liwrk:=51
+R
+R
+R (13) 51
+R Type: PositiveInteger
+E 143
+
+S 144 of 215
+nx:=0
+R
+R
+R (14) 0
+R Type: NonNegativeInteger
+E 144
+
+S 145 of 215
+lamda:Matrix SF:=new(1,15,0.0)$Matrix SF
+R
+R
+R (15) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 145
+
+S 146 of 215
+ny:=0
+R
+R
+R (16) 0
+R Type: NonNegativeInteger
+E 146
+
+S 147 of 215
+mu:Matrix SF:=new(1,13,0.0)$Matrix SF
+R
+R
+R (17) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 147
+
+S 148 of 215
+wrk:Matrix SF:=new(1,592,0.0)$Matrix SF
+R
+R
+R (18)
+R [
+R [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+R 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 148
+
+S 149 of 215
+iwrk:Matrix Integer:=new(1,51,0)$Matrix Integer
+R
+R
+R (19)
+R [
+R [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+R 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+R 0, 0, 0]
+R ]
+R Type: Matrix(Integer)
+E 149
+
+S 150 of 215
+ result:=e02dcf(start,mx,x,my,y,f,s,nxest,nyest,lwrk,liwrk,nx,_
+ lamda,ny,mu,wrk,iwrk,1)
+E 150
+
+)clear all
+
+
+S 151 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 151
+
+S 152 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 152
+
+S 153 of 215
+start:="c"
+R
+R
+R (3) "c"
+R Type: String
+E 153
+
+S 154 of 215
+m:=30
+R
+R
+R (4) 30
+R Type: PositiveInteger
+E 154
+
+S 155 of 215
+x:Matrix SF:=
+ [[11.16 ,12.85 ,19.85 ,19.72 ,15.91 ,0 ,20.87 ,3.45 ,_
+ 14.26 ,17.43 ,22.8 ,7.58 ,25 ,0 ,9.66 ,5.22 ,17.25 ,25 ,12.13 ,22.23 ,_
+ 11.52 ,15.2 ,7.54 ,17.32 ,2.14 ,0.51 ,22.69 ,5.47 ,21.67 ,3.31 ]]
+R
+R
+R (5)
+R [
+R [11.16, 12.85, 19.850000000000001, 19.719999999999999, 15.91, 0.,
+R 20.869999999999997, 3.4500000000000002, 14.26, 17.43, 22.799999999999997,
+R 7.5800000000000001, 25., 0., 9.6600000000000001, 5.2199999999999998,
+R 17.25, 25., 12.129999999999999, 22.229999999999997, 11.52,
+R 15.199999999999999, 7.5399999999999991, 17.32, 2.1399999999999997,
+R 0.51000000000000001, 22.689999999999998, 5.4699999999999998,
+R 21.670000000000002, 3.3099999999999996]
+R ]
+R Type: Matrix(DoubleFloat)
+E 155
+
+S 156 of 215
+y:Matrix SF:=
+ [[1.24 ,3.06 ,10.72 ,1.39 ,7.74 ,20 ,20 ,12.78 ,17.87 ,3.46 ,12.39 ,_
+ 1.98 ,11.87 ,0 ,20 ,14.66 ,19.57 ,3.87 ,10.79 ,6.21 ,8.53 ,0 ,10.69 ,_
+ 13.78 ,15.03 ,8.37 ,19.63 ,17.13 ,14.36 ,0.33 ]]
+R
+R
+R (6)
+R [
+R [1.24, 3.0599999999999996, 10.719999999999999, 1.3899999999999999,
+R 7.7400000000000002, 20., 20., 12.779999999999999, 17.869999999999997,
+R 3.46, 12.390000000000001, 1.98, 11.869999999999999, 0., 20., 14.66,
+R 19.57, 3.8700000000000001, 10.789999999999999, 6.21, 8.5299999999999994,
+R 0., 10.69, 13.779999999999999, 15.029999999999999, 8.3699999999999992,
+R 19.629999999999999, 17.129999999999999, 14.359999999999999,
+R 0.32999999999999996]
+R ]
+R Type: Matrix(DoubleFloat)
+E 156
+
+S 157 of 215
+f:Matrix SF:=
+ [[22.15 ,22.11 ,7.97 ,16.83 ,15.30 ,34.6 ,5.74 ,41.24 ,10.74 ,18.60 ,_
+ 5.47 ,29.87 ,4.4 ,58.2 ,4.73 ,40.36 ,6.43 ,8.74 ,13.71 ,10.25 ,_
+ 15.74 ,21.6 ,19.31 ,12.11 ,53.1 ,49.43 ,3.25 ,28.63 ,5.52 ,44.08 ]]
+R
+R
+R (7)
+R [
+R [22.149999999999999, 22.109999999999999, 7.9699999999999998,
+R 16.829999999999998, 15.300000000000001, 34.599999999999994,
+R 5.7400000000000002, 41.239999999999995, 10.739999999999998,
+R 18.600000000000001, 5.4699999999999998, 29.869999999999997,
+R 4.4000000000000004, 58.200000000000003, 4.7300000000000004,
+R 40.359999999999999, 6.4299999999999997, 8.7399999999999984,
+R 13.710000000000001, 10.25, 15.739999999999998, 21.600000000000001,
+R 19.309999999999999, 12.109999999999999, 53.099999999999994, 49.43, 3.25,
+R 28.629999999999999, 5.5199999999999996, 44.079999999999998]
+R ]
+R Type: Matrix(DoubleFloat)
+E 157
+
+S 158 of 215
+w:Matrix SF:=
+ [[1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,_
+ 1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ,1 ]]
+R
+R
+R (8)
+R [
+R [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
+R 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 158
+
+S 159 of 215
+s:=10
+R
+R
+R (9) 10
+R Type: PositiveInteger
+E 159
+
+S 160 of 215
+nxest:=14
+R
+R
+R (10) 14
+R Type: PositiveInteger
+E 160
+
+S 161 of 215
+nyest:=14
+R
+R
+R (11) 14
+R Type: PositiveInteger
+E 161
+
+S 162 of 215
+lwrk:=11016
+R
+R
+R (12) 11016
+R Type: PositiveInteger
+E 162
+
+S 163 of 215
+liwrk:=128
+R
+R
+R (13) 128
+R Type: PositiveInteger
+E 163
+
+S 164 of 215
+nx:=0
+R
+R
+R (14) 0
+R Type: NonNegativeInteger
+E 164
+
+S 165 of 215
+lamda:=new(1,14,0.0)$Matrix SF
+R
+R
+R (15) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 165
+
+S 166 of 215
+ny:=0
+R
+R
+R (16) 0
+R Type: NonNegativeInteger
+E 166
+
+S 167 of 215
+mu:=new(1,14,0.0)$Matrix SF
+R
+R
+R (17) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 167
+
+S 168 of 215
+wrk:=new(1,11016,0.0)$Matrix SF;
+R
+R
+R Type: Matrix(DoubleFloat)
+E 168
+
+S 169 of 215
+ result:=e02ddf(start,m,x,y,f,w,s,nxest,nyest,lwrk,liwrk,nx,lamda,ny,mu,wrk,-1)
+E 169
+
+)clear all
+
+
+S 170 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 170
+
+S 171 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 171
+
+S 172 of 215
+m:=7
+R
+R
+R (3) 7
+R Type: PositiveInteger
+E 172
+
+S 173 of 215
+px:=11
+R
+R
+R (4) 11
+R Type: PositiveInteger
+E 173
+
+S 174 of 215
+py:=10
+R
+R
+R (5) 10
+R Type: PositiveInteger
+E 174
+
+S 175 of 215
+x:Matrix SF:= [[1 ,1.1 ,1.5 ,1.6 ,1.9 ,1.9 ,2 ]]
+R
+R
+R (6)
+R [
+R [1., 1.1000000000000001, 1.5, 1.6000000000000001, 1.8999999999999999,
+R 1.8999999999999999, 2.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 175
+
+S 176 of 215
+y:Matrix SF:= [[0 ,0.1 ,0.7 ,0.4 ,0.3 ,0.8 ,1 ]]
+R
+R
+R (7)
+R [
+R [0., 0.10000000000000001, 0.69999999999999996, 0.40000000000000002,
+R 0.29999999999999999, 0.80000000000000004, 1.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 176
+
+S 177 of 215
+lamda:Matrix SF:= [[1.0 ,1.0 ,1.0 ,1.0 ,1.3 ,1.5 ,1.6 ,2 ,2 ,2 ,2 ]]
+R
+R
+R (8)
+R [1. 1. 1. 1. 1.2999999999999998 1.5 1.6000000000000001 2. 2. 2. 2.]
+R Type: Matrix(DoubleFloat)
+E 177
+
+S 178 of 215
+mu:Matrix SF:= [[0 ,0 ,0 ,0 ,0.4 ,0.7 ,1 ,1 ,1 ,1 ]]
+R
+R
+R (9)
+R [0. 0. 0. 0. 0.40000000000000002 0.69999999999999996 1. 1. 1. 1.]
+R Type: Matrix(DoubleFloat)
+E 178
+
+S 179 of 215
+c:Matrix SF:=
+ [[1 ,1.1333 ,1.3667 ,1.7 ,1.9 ,2 ,1.2 ,1.3333 ,1.5667,1.9 ,_
+ 2.1 ,2.2 ,1.5833 ,1.7167 ,1.95 ,2.2833 ,2.4833 ,2.5833 ,_
+ 2.1433 ,2.2767 ,2.51 ,2.8433 ,3.0433 ,3.1433 ,2.8667 ,_
+ 3 ,3.2333 ,3.5667 ,3.7667 ,3.8667 ,3.4667 ,3.6 ,3.8333 ,_
+ 4.1667 ,4.3667 ,4.4667 ,4 ,4.1333 ,4.3667 ,4.7 ,4.9 ,5 ]]
+R
+R
+R (10)
+R [
+R [1., 1.1333, 1.3666999999999998, 1.7, 1.8999999999999999, 2., 1.2,
+R 1.3332999999999999, 1.5667, 1.8999999999999999, 2.0999999999999996,
+R 2.2000000000000002, 1.5832999999999999, 1.7166999999999999, 1.95,
+R 2.2832999999999997, 2.4832999999999998, 2.5832999999999999, 2.1433,
+R 2.2766999999999999, 2.5099999999999998, 2.8433000000000002,
+R 3.0432999999999999, 3.1433, 2.8666999999999998, 3., 3.2332999999999998,
+R 3.5667, 3.7667000000000002, 3.8666999999999998, 3.4666999999999999,
+R 3.5999999999999996, 3.8332999999999999, 4.1666999999999996,
+R 4.3666999999999998, 4.4666999999999994, 4., 4.1333000000000002,
+R 4.3666999999999998, 4.6999999999999993, 4.9000000000000004, 5.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 179
+
+S 180 of 215
+ result:=e02def(m,px,py,x,y,lamda,mu,c,-1)
+E 180
+
+)clear all
+
+
+S 181 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 181
+
+S 182 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 182
+
+S 183 of 215
+mx:=7
+R
+R
+R (3) 7
+R Type: PositiveInteger
+E 183
+
+S 184 of 215
+my:=6
+R
+R
+R (4) 6
+R Type: PositiveInteger
+E 184
+
+S 185 of 215
+px:=11
+R
+R
+R (5) 11
+R Type: PositiveInteger
+E 185
+
+S 186 of 215
+py:=10
+R
+R
+R (6) 10
+R Type: PositiveInteger
+E 186
+
+S 187 of 215
+x:Matrix SF:= [[1 ,1.1 ,1.3 ,1.4 ,1.5 ,1.7 ,2 ]]
+R
+R
+R (7)
+R [[1.,1.1000000000000001,1.2999999999999998,1.3999999999999999,1.5,1.7,2.]]
+R Type: Matrix(DoubleFloat)
+E 187
+
+S 188 of 215
+y:Matrix SF:= [[0 ,0.2 ,0.4 ,0.6 ,0.8 ,1 ]]
+R
+R
+R (8)
+R [
+R [0., 0.20000000000000001, 0.40000000000000002, 0.59999999999999998,
+R 0.80000000000000004, 1.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 188
+
+S 189 of 215
+lamda:Matrix SF:= [[1 ,1 ,1 ,1 ,1.3 ,1.5 ,1.6 ,2 ,2 ,2 ,2 ]]
+R
+R
+R (9)
+R [1. 1. 1. 1. 1.2999999999999998 1.5 1.6000000000000001 2. 2. 2. 2.]
+R Type: Matrix(DoubleFloat)
+E 189
+
+S 190 of 215
+mu:Matrix SF:= [[0 ,0 ,0 ,0 ,0.4 ,0.7 ,1 ,1 ,1 ,1 ]]
+R
+R
+R (10)
+R [0. 0. 0. 0. 0.40000000000000002 0.69999999999999996 1. 1. 1. 1.]
+R Type: Matrix(DoubleFloat)
+E 190
+
+S 191 of 215
+c:Matrix SF:=
+ [[1 ,1.1333 ,1.3667 ,1.7 ,1.9 ,2 ,1.2 ,1.3333 ,1.5667 ,1.9 ,_
+ 2.1 ,2.2 ,1.5833 ,1.7167 ,1.95 ,2.2833 ,2.4833 ,2.5833 ,2.1433 ,2.2767 ,_
+ 2.51 ,2.8433 ,3.0433 ,3.1433 ,2.8667 ,3 ,3.2333 ,3.5667 ,3.7667 ,3.8667 ,_
+ 3.4667 ,3.6 ,3.8333 ,4.1667 ,4.3667 ,4.4667 ,4 ,4.1333 ,4.3667,_
+ 4.7 ,4.9 ,5 ]]
+R
+R
+R (11)
+R [
+R [1., 1.1333, 1.3666999999999998, 1.7, 1.8999999999999999, 2., 1.2,
+R 1.3332999999999999, 1.5667, 1.8999999999999999, 2.0999999999999996,
+R 2.2000000000000002, 1.5832999999999999, 1.7166999999999999, 1.95,
+R 2.2832999999999997, 2.4832999999999998, 2.5832999999999999, 2.1433,
+R 2.2766999999999999, 2.5099999999999998, 2.8433000000000002,
+R 3.0432999999999999, 3.1433, 2.8666999999999998, 3., 3.2332999999999998,
+R 3.5667, 3.7667000000000002, 3.8666999999999998, 3.4666999999999999,
+R 3.5999999999999996, 3.8332999999999999, 4.1666999999999996,
+R 4.3666999999999998, 4.4666999999999994, 4., 4.1333000000000002,
+R 4.3666999999999998, 4.6999999999999993, 4.9000000000000004, 5.]
+R ]
+R Type: Matrix(DoubleFloat)
+E 191
+
+S 192 of 215
+lwrk:=36
+R
+R
+R (12) 36
+R Type: PositiveInteger
+E 192
+
+S 193 of 215
+liwrk:=108
+R
+R
+R (13) 108
+R Type: PositiveInteger
+E 193
+
+S 194 of 215
+ result:=e02dff(mx,my,px,py,x,y,lamda,mu,c,lwrk,liwrk,-1)
+E 194
+
+)clear all
+
+
+S 195 of 215
+showArrayValues true
+R
+R
+R (1) true
+R Type: Boolean
+E 195
+
+S 196 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 196
+
+S 197 of 215
+m:=5
+R
+R
+R (3) 5
+R Type: PositiveInteger
+E 197
+
+S 198 of 215
+la:=7
+R
+R
+R (4) 7
+R Type: PositiveInteger
+E 198
+
+S 199 of 215
+nplus2:=5
+R
+R
+R (5) 5
+R Type: PositiveInteger
+E 199
+
+S 200 of 215
+toler:=0.0
+R
+R
+R (6) 0.0
+R Type: Float
+E 200
+
+S 201 of 215
+a:Matrix SF:=
+ [[1.0,1.0,1.0,0.0,0.0],_
+ [exp(0.2),exp(0.2),1.0,0.0,0.0],_
+ [exp(0.4),exp(0.4),1.0,0.0,0.0],_
+ [exp(0.6),exp(0.6),1.0,0.0,0.0],_
+ [exp(0.8),exp(0.8),1.0,0.0,0.0],_
+ [0.0,0.0,0.0,0.0,0.0],_
+ [0.0,0.0,0.0,0.0,0.0]]
+R
+R
+R + 1. 1. 1. 0. 0.+
+R | |
+R |1.2214027581601696 0.81873075307798182 1. 0. 0.|
+R | |
+R |1.4918246976412703 0.67032004603563933 1. 0. 0.|
+R | |
+R (7) |1.8221188003905089 0.54881163609402639 1. 0. 0.|
+R | |
+R |2.2255409284924674 0.44932896411722156 1. 0. 0.|
+R | |
+R | 0. 0. 0. 0. 0.|
+R | |
+R + 0. 0. 0. 0. 0.+
+R Type: Matrix(DoubleFloat)
+E 201
+
+S 202 of 215
+b:Matrix SF:= [[4.501 ,4.36 ,4.333 ,4.418 ,4.625 ]]
+R
+R
+R (8)
+R [
+R [4.5009999999999994, 4.3599999999999994, 4.3330000000000002,
+R 4.4179999999999993, 4.625]
+R ]
+R Type: Matrix(DoubleFloat)
+E 202
+
+S 203 of 215
+ result:=e02gaf(m,la,nplus2,toler,a,b,-1)
+E 203
+
+)clear all
+
+
+S 204 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 195
+R (1) true
+R Type: Boolean
+E 204
+
+S 205 of 215
+showScalarValues true
+R
+R
+R (2) true
+R Type: Boolean
+E 205
+
+S 206 of 215
+px:=9
+R
+R
+R (3) 9
+R Type: PositiveInteger
+E 206
+
+S 207 of 215
+py:=10
+R
+R
+R (4) 10
+R Type: PositiveInteger
+E 207
+
+S 208 of 215
+lamda:Matrix SF:= [[0 ,0 ,0 ,0 ,1.00 ,0 ,0 ,0 ,0 ]]
+R
+R
+R (5) [0. 0. 0. 0. 1. 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 208
+
+S 209 of 215
+mu:Matrix SF:= [[0 ,0 ,0 ,0 ,0.80 ,1.20 ,0 ,0 ,0 ,0 ]]
+R
+R
+R (6) [0. 0. 0. 0. 0.80000000000000004 1.2 0. 0. 0. 0.]
+R Type: Matrix(DoubleFloat)
+E 209
+
+S 210 of 215
+m:=10
+R
+R
+R (7) 10
+R Type: PositiveInteger
+E 210
+
+S 211 of 215
+x:Matrix SF:= [[0.00 ,0.70 ,1.44 ,0.21 ,1.01 ,1.84 ,0.71 ,1.00 ,0.54 ,1.531 ]]
+R
+R
+R (8)
+R [
+R [0., 0.69999999999999996, 1.4399999999999999, 0.20999999999999999,
+R 1.0099999999999998, 1.8399999999999999, 0.70999999999999996, 1.,
+R 0.54000000000000004, 1.5309999999999999]
+R ]
+R Type: Matrix(DoubleFloat)
+E 211
+
+S 212 of 215
+y:Matrix SF:= [[0.77 ,1.06 ,0.33 ,0.44 ,0.50 ,0.02 ,1.95 ,1.20 ,0.04 ,0.18 ]]
+R
+R
+R (9)
+R [
+R [0.77000000000000002, 1.0600000000000001, 0.32999999999999996,
+R 0.43999999999999995, 0.5, 1.9999999999999997E-2, 1.95, 1.2,
+R 3.9999999999999994E-2, 0.17999999999999999]
+R ]
+R Type: Matrix(DoubleFloat)
+E 212
+
+S 213 of 215
+npoint:=45
+R
+R
+R (10) 45
+R Type: PositiveInteger
+E 213
+
+S 214 of 215
+nadres:=6
+R
+R
+R (11) 6
+R Type: PositiveInteger
+E 214
+
+S 215 of 215
+ result:=e02zaf(px,py,lamda,mu,m,x,y,npoint,nadres,-1)
+E 215
+
+)spool
+)lisp (bye)
+\end{chunk}
+\begin{chunk}{NagFittingPackage.help}
+
+This package uses the NAG Library to find a function which
+approximates a set of data points. Typically the data contain random
+errors, as of experimental measurement, which need to be smoothed
+out. To seek an approximation to the data, it is first necessary to
+specify for the approximating function a mathematical form (a
+polynomial, for example) which contains a number of unspecified
+coefficients: the appropriate fitting routine then derives for the
+coefficients the values which provide the best fit of that particular
+form. The package deals mainly with curve and surface fitting (i.e.,
+fitting with functions of one and of two variables) when a polynomial
+or a cubic spline is used as the fitting function, since these cover
+the most common needs. However, fitting with other functions and/or
+more variables can be undertaken by means of general linear or
+nonlinear routines (some of which are contained in other packages)
+depending on whether the coefficients in the function occur linearly
+or nonlinearly. Cases where a graph rather than a set of data points
+is given can be treated simply by first reading a suitable set of
+points from the graph. The package also contains routines for
+evaluating, differentiating and integrating polynomial and spline
+curves and surfaces, once the numerical values of their coefficients
+have been determined.
+
+ E02 -- Curve and Surface Fitting Introduction -- E02
+ Chapter E02
+ Curve and Surface Fitting
+
+ Contents of this Introduction:
+
+ 1. Scope of the Chapter
+
+ 2. Background to the Problems
+
+ 2.1. Preliminary Considerations
+
+ 2.1.1. Fitting criteria: norms
+
+ 2.1.2. Weighting of data points
+
+ 2.2. Curve Fitting
+
+ 2.2.1. Representation of polynomials
+
+ 2.2.2. Representation of cubic splines
+
+ 2.3. Surface Fitting
+
+ 2.3.1. Bicubic splines: definition and representation
+
+ 2.4. General Linear and Nonlinear Fitting Functions
+
+ 2.5. Constrained Problems
+
+ 2.6. References
+
+ 3. Recommendations on Choice and Use of Routines
+
+ 3.1. General
+
+ 3.1.1. Data considerations
+
+ 3.1.2. Transformation of variables
+
+ 3.2. Polynomial Curves
+
+ 3.2.1. Least-squares polynomials: arbitrary data points
+
+ 3.2.2. Least-squares polynomials: selected data points
+
+ 3.3. Cubic Spline Curves
+
+ 3.3.1. Least-squares cubic splines
+
+ 3.3.2. Automatic fitting with cubic splines
+
+ 3.4. Spline Surfaces
+
+ 3.4.1. Least-squares bicubic splines
+
+ 3.4.2. Automatic fitting with bicubic splines
+
+ 3.5. General Linear and Nonlinear Fitting Functions
+
+ 3.5.1. General linear functions
+
+ 3.5.2. Nonlinear functions
+
+ 3.6. Constraints
+
+ 3.7. Evaluation, Differentiation and Integration
+
+ 3.8. Index
+
+
+
+ 1. Scope of the Chapter
+
+ The main aim of this chapter is to assist the user in finding a
+ function which approximates a set of data points. Typically the
+ data contain random errors, as of experimental measurement, which
+ need to be smoothed out. To seek an approximation to the data, it
+ is first necessary to specify for the approximating function a
+ mathematical form (a polynomial, for example) which contains a
+ number of unspecified coefficients: the appropriate fitting
+ routine then derives for the coefficients the values which
+ provide the best fit of that particular form. The chapter deals
+ mainly with curve and surface fitting (i.e., fitting with
+ functions of one and of two variables) when a polynomial or a
+ cubic spline is used as the fitting function, since these cover
+ the most common needs. However, fitting with other functions
+ and/or more variables can be undertaken by means of general
+ linear or nonlinear routines (some of which are contained in
+ other chapters) depending on whether the coefficients in the
+ function occur linearly or nonlinearly. Cases where a graph
+ rather than a set of data points is given can be treated simply
+ by first reading a suitable set of points from the graph.
+
+ The chapter also contains routines for evaluating,
+ differentiating and integrating polynomial and spline curves and
+ surfaces, once the numerical values of their coefficients have
+ been determined.
+
+ 2. Background to the Problems
+
+ 2.1. Preliminary Considerations
+
+ In the curve-fitting problems considered in this chapter, we have
+ a dependent variable y and an independent variable x, and we are
+ given a set of data points (x ,y ), for r=1,2,...,m. The
+ r r
+ preliminary matters to be considered in this section will, for
+ simplicity, be discussed in this context of curve-fitting
+ problems. In fact, however, these considerations apply equally
+ well to surface and higher-dimensional problems. Indeed, the
+ discussion presented carries over essentially as it stands if,
+ for these cases, we interpret x as a vector of several
+ independent variables and correspondingly each x as a vector
+ r
+ containing the rth data value of each independent variable.
+
+ We wish, then, to approximate the set of data points as closely
+ as possible with a specified function, f(x) say, which is as
+ smooth as possible - f(x) may, for example, be a polynomial. The
+ requirements of smoothness and closeness conflict, however, and a
+ balance has to be struck between them. Most often, the smoothness
+ requirement is met simply by limiting the number of coefficients
+ allowed in the fitting function - for example, by restricting
+ the degree in the case of a polynomial. Given a particular number
+ of coefficients in the function in question, the fitting routines
+ of this chapter determine the values of the coefficients such
+ that the 'distance' of the function from the data points is as
+ small as possible. The necessary balance is struck by the user
+ comparing a selection of such fits having different numbers of
+ coefficients. If the number of coefficients is too low, the
+ approximation to the data will be poor. If the number is too
+ high, the fit will be too close to the data, essentially
+ following the random errors and tending to have unwanted
+ fluctuations between the data points. Between these extremes,
+ there is often a group of fits all similarly close to the data
+ points and then, particularly when least-squares polynomials are
+ used, the choice is clear: it is the fit from this group having
+ the smallest number of coefficients.
+
+ The above process can be seen as the user minimizing the
+ smoothness measure (i.e., the number of coefficients) subject to
+ the distance from the data points being acceptably small. Some of
+ the routines, however, do this task themselves. They use a
+ different measure of smoothness (in each case one that is
+ continuous) and minimize it subject to the distance being less
+ than a threshold specified by the user. This is a much more
+ automatic process, requiring only some experimentation with the
+ threshold.
+
+ 2.1.1. Fitting criteria: norms
+
+ A measure of the above 'distance' between the set of data points
+ and the function f(x) is needed. The distance from a single data
+ point (x ,y ) to the function can simply be taken as
+ r r
+
+ (epsilon) =y -f(x ), (1)
+ r r r
+
+ and is called the residual of the point. (With this definition,
+ the residual is regarded as a function of the coefficients
+ contained in f(x); however, the term is also used to mean the
+ particular value of (epsilon) which corresponds to the fitted
+ r
+ values of the coefficients.) However, we need a measure of
+ distance for the set of data points as a whole. Three different
+ measures are used in the different routines (which measure to
+ select, according to circumstances, is discussed later in this
+ subsection). With (epsilon) defined in (1), these measures, or
+ r
+ norms, are
+
+ m
+ ---
+ > |(epsilon) |, (2)
+ --- r
+ r=1
+
+
+
+ / m
+ / --- 2
+ / > (epsilon) , and (3)
+ / --- r
+ \/ r=1
+
+ max |(epsilon) |, (4)
+ r r
+
+ respectively the l norm, the l norm and the l norm.
+ 1 2 infty
+
+ Minimization of one or other of these norms usually provides the
+ fitting criterion, the minimization being carried out with
+ respect to the coefficients in the mathematical form used for
+ f(x): with respect to the b for example if the mathematical form
+ i
+ is the power series in (8) below. The fit which results from
+ minimizing (2) is known as the l fit, or the fit in the l norm:
+ 1 1
+ that which results from minimizing (3) is the l fit, the well
+ 2
+ known least-squares fit (minimizing (3) is equivalent to
+ minimizing the square of (3), i.e., the sum of squares of
+ residuals, and it is the latter which is used in practice), and
+ that from minimizing (4) is the l , or minimax, fit.
+ infty
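+ The three norms (2), (3) and (4) are easy to compute directly
+ from a set of residuals. As an illustrative sketch only (Python
+ rather than the package's SPAD input, with invented data; not
+ part of the NAG library):

```python
# Sketch: computing the l1, l2 and l-infinity norms (2)-(4) of the
# residuals epsilon_r = y_r - f(x_r).  Data and fit are invented.
import math

def residual_norms(xs, ys, f):
    """Return the l1, l2 and l-infinity norms of the residuals."""
    eps = [y - f(x) for x, y in zip(xs, ys)]
    l1 = sum(abs(e) for e in eps)               # norm (2)
    l2 = math.sqrt(sum(e * e for e in eps))     # norm (3)
    linf = max(abs(e) for e in eps)             # norm (4)
    return l1, l2, linf

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.9, 2.2, 2.8]
l1, l2, linf = residual_norms(xs, ys, lambda x: x)  # trial fit f(x) = x
```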
S 196 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 196
+ Strictly speaking, implicit in the use of the above norms are the
+ statistical assumptions that the random errors in the y are
+ r
+ independent of one another and that any errors in the x are
+ r
+ negligible by comparison. From this point of view, the use of the
+ l norm is appropriate when the random errors in the y have a
+ 2 r
+ normal distribution, and the l norm is appropriate when they
+ infty
+ have a rectangular distribution, as when fitting a table of
+ values rounded to a fixed number of decimal places. The l norm
+ 1
+ is appropriate when the error distribution has its frequency
+ function proportional to the negative exponential of the modulus
+ of the normalised error - not a common situation.
S 197 of 215
m:=5
R
R
R (3) 5
R Type: PositiveInteger
E 197
+ However, the user is often indifferent to these statistical
+ considerations, and simply seeks a fit which he can assess by
+ inspection, perhaps visually from a graph of the results. In this
+ event, the l norm is particularly appropriate when the data are
+ 1
+ thought to contain some 'wild' points (since fitting in this norm
+ tends to be unaffected by the presence of a small number of such
+ points), though of course in simple situations the user may
+ prefer to identify and reject these points. The l norm
+ infty
+ should be used only when the maximum residual is of particular
+ concern, as may be the case for example when the data values have
+ been obtained by accurate computation, as of a mathematical
+ function. Generally, however, a routine based on least-squares
+ should be preferred, as being computationally faster and usually
+ providing more information on which to assess the results. In
+ many problems the three fits will not differ significantly for
+ practical purposes.
S 198 of 215
la:=7
R
R
R (4) 7
R Type: PositiveInteger
E 198
+ Some of the routines based on the l norm do not minimize the
+ 2
+ norm itself but instead minimize some (intuitively acceptable)
+ measure of smoothness subject to the norm being less than a user-
+ specified threshold. These routines fit with cubic or bicubic
+ splines (see (10) and (14) below) and the smoothing measures
+ relate to the size of the discontinuities in their third
+ derivatives. A much more automatic fitting procedure follows from
+ this approach.
S 199 of 215
nplus2:=5
R
R
R (5) 5
R Type: PositiveInteger
E 199
+ 2.1.2. Weighting of data points
S 200 of 215
toler:=0.0
R
R
R (6) 0.0
R Type: Float
E 200
+ The use of the above norms also assumes that the data values y
+ r
+ are of equal (absolute) accuracy. Some of the routines enable an
+ allowance to be made to take account of differing accuracies. The
+ allowance takes the form of 'weights' applied to the y-values so
+ that those values known to be more accurate have a greater
+ influence on the fit than others. These weights, to be supplied
+ by the user, should be calculated from estimates of the absolute
+ accuracies of the y-values, these estimates being expressed as
+ standard deviations, probable errors or some other measure which
+ has the same dimensions as y. Specifically, for each y the
+ r
+ corresponding weight w should be inversely proportional to the
+ r
+ accuracy estimate of y . For example, if the percentage accuracy
+ r
+ is the same for all y , then the absolute accuracy of y is
+ r r
+ proportional to y (assuming y to be positive, as it usually is
+ r r
+ in such cases) and so w =K/y , for r=1,2,...,m, for an arbitrary
+ r r
+ positive constant K. (This definition of weight is stressed
+ because often weight is defined as the square of that used here.)
+ The norms (2), (3) and (4) above are then replaced respectively
+ by
S 201 of 215
a:Matrix SF:=
 [[1.0,1.0,1.0,0.0,0.0],_
 [exp(0.2),exp(0.2),1.0,0.0,0.0],_
 [exp(0.4),exp(0.4),1.0,0.0,0.0],_
 [exp(0.6),exp(0.6),1.0,0.0,0.0],_
 [exp(0.8),exp(0.8),1.0,0.0,0.0],_
 [0.0,0.0,0.0,0.0,0.0],_
 [0.0,0.0,0.0,0.0,0.0]]
R
R
R + 1. 1. 1. 0. 0.+
R | |
R |1.2214027581601696 0.81873075307798182 1. 0. 0.|
R | |
R |1.4918246976412703 0.67032004603563933 1. 0. 0.|
R | |
R (7) |1.8221188003905089 0.54881163609402639 1. 0. 0.|
R | |
R |2.2255409284924674 0.44932896411722156 1. 0. 0.|
R | |
R | 0. 0. 0. 0. 0.|
R | |
R + 0. 0. 0. 0. 0.+
R Type: Matrix(DoubleFloat)
E 201
+ m
+ ---
+ > w |(epsilon) |, (5)
+ --- r r
+ r=1
S 202 of 215
b:Matrix SF:= [[4.501 ,4.36 ,4.333 ,4.418 ,4.625 ]]
R
R
R (8)
R [
R [4.5009999999999994, 4.3599999999999994, 4.3330000000000002,
R 4.4179999999999993, 4.625]
R ]
R Type: Matrix(DoubleFloat)
E 202
+
+
+ / m
+ / --- 2 2
+ / > w (epsilon) , and (6)
+ / --- r r
+ \/ r=1
+
+ max w |(epsilon) |. (7)
+ r r r
+
+ Again it is the square of (6) which is used in practice rather
+ than (6) itself.
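+ The weighted norms (5)-(7) only rescale each residual by its
+ weight before the norm is taken. A minimal Python sketch, with
+ invented residuals and weights (not NAG code):

```python
# Sketch of the weighted norms (5)-(7): each residual epsilon_r is
# multiplied by a weight w_r inversely proportional to the accuracy
# estimate of y_r before the norm is formed.
import math

def weighted_norms(eps, w):
    """Weighted l1, l2 and l-infinity norms of the residuals eps."""
    we = [wi * abs(ei) for wi, ei in zip(w, eps)]
    return sum(we), math.sqrt(sum(v * v for v in we)), max(we)

eps = [0.1, -0.1, 0.2, -0.2]   # residuals epsilon_r (invented)
w = [2.0, 2.0, 1.0, 1.0]       # more accurate points get larger weights
l1w, l2w, linfw = weighted_norms(eps, w)
```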
+
+ 2.2. Curve Fitting
+
+ When, as is commonly the case, the mathematical form of the
+ fitting function is immaterial to the problem, polynomials and
+ cubic splines are to be preferred because their simplicity and
+ ease of handling confer substantial benefits. The cubic spline is
+ the more versatile of the two. It consists of a number of cubic
+ polynomial segments joined end to end with continuity in first
+ and second derivatives at the joins. The third derivative at the
+ joins is in general discontinuous. The x-values of the joins are
+ called knots, or, more precisely, interior knots. Their number
+ determines the number of coefficients in the spline, just as the
+ degree determines the number of coefficients in a polynomial.
+
+ 2.2.1. Representation of polynomials
+
+ Rather than using the power-series form
+
+ 2 k
+ f(x)==b +b x+b x +...+b x (8)
+ 0 1 2 k
+
+ to represent a polynomial, the routines in this chapter use the
+ Chebyshev series form
+
+ 1
+ f(x)== a T (x)+a T (x)+a T (x)+...+a T (x), (9)
+ 2 0 0 1 1 2 2 k k
+
+ where T (x) is the Chebyshev polynomial of the first kind of
+ i
+ degree i (see Cox and Hayes [1], page 9), and where the range of
+ x has been normalised to run from -1 to +1. The use of either
+ form leads theoretically to the same fitted polynomial, but in
+ practice results may differ substantially because of the effects
+ of rounding error. The Chebyshev form is to be preferred, since
+ it leads to much better accuracy in general, both in the
+ computation of the coefficients and in the subsequent evaluation
+ of the fitted polynomial at specified points. This form also has
+ other advantages: for example, since the later terms in (9)
+ generally decrease much more rapidly from left to right than do
+ those in (8), the situation is more often encountered where the
+ last terms are negligible and it is obvious that the degree of
+ the polynomial can be reduced (note that on the interval -1<=x<=1
+ for all i, T (x) attains the value unity but never exceeds it, so
+ i
+ that the coefficient a gives directly the maximum value of the
+ i
+ term containing it).
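+ The Chebyshev-series form (9) is evaluated stably with the
+ three-term recurrence T (x) = 2x T (x) - T (x). A small
+ i+1 i i-1
+ Python sketch with invented coefficients (not NAG code):

```python
# Sketch: evaluating the Chebyshev-series form (9),
#   f(x) = (1/2)a_0 T_0(x) + a_1 T_1(x) + ... + a_k T_k(x),
# on the normalised range -1 <= x <= 1, via the recurrence
# T_{i+1}(x) = 2x T_i(x) - T_{i-1}(x).

def cheb_eval(a, x):
    """Evaluate (9), with the halved leading coefficient."""
    t_prev, t_cur = 1.0, x          # T_0(x), T_1(x)
    total = 0.5 * a[0] * t_prev
    for coeff in a[1:]:
        total += coeff * t_cur
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
    return total

# With a = [2, 0, 1]:  f(x) = 1 + T_2(x) = 1 + (2x^2 - 1) = 2x^2.
value = cheb_eval([2.0, 0.0, 1.0], 0.5)   # 2*(0.5)^2 = 0.5
```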
+
+ 2.2.2. Representation of cubic splines
+
+ A cubic spline is represented in the form
+
+ f(x)==c N (x)+c N (x)+...+c N (x), (10)
+ 1 1 2 2 p p
+
+ where N (x), for i=1,2,...,p, is a normalised cubic B-spline (see
+ i
+ Hayes [2]). This form, also, has advantages of computational
+ speed and accuracy over alternative representations.
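+ One standard way to evaluate the normalised B-splines N (x) in
+ i
+ (10) is the Cox-de Boor recursion. A self-contained Python sketch
+ with an invented uniform knot set (not the NAG implementation):

```python
# Sketch: Cox-de Boor recursion for a normalised B-spline of order k
# (degree k-1) on knots[i..i+k].  Inside the knot range the cubic
# B-splines form a partition of unity, i.e. they sum to one.

def bspline(knots, i, k, x):
    """Value at x of the order-k B-spline supported on knots[i..i+k]."""
    if k == 1:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((x - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline(knots, i, k - 1, x))
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - x) / (knots[i + k] - knots[i + 1])
                 * bspline(knots, i + 1, k - 1, x))
    return left + right

knots = list(range(8))   # invented uniform knots 0,1,...,7
# The four cubic (order-4) B-splines that are nonzero on [3,4] sum to 1.
s = sum(bspline(knots, i, 4, 3.5) for i in range(4))
```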
+
+ 2.3. Surface Fitting
+
+ There are now two independent variables, and we shall denote
+ these by x and y. The dependent variable, which was denoted by y
+ in the curve-fitting case, will now be denoted by f. (This is a
+ rather different notation from that indicated for the general
+ dimensional problem in the first paragraph of Section 2.1 , but
+ it has some advantages in presentation.)
+
+ Again, in the absence of contrary indications in the particular
+ application being considered, polynomials and splines are the
+ approximating functions most commonly used. Only splines are used
+ by the surface-fitting routines in this chapter.
+
+ 2.3.1. Bicubic splines: definition and representation
+
+ The bicubic spline is defined over a rectangle R in the (x,y)
+ plane, the sides of R being parallel to the x- and y-axes. R is
+ divided into rectangular panels, again by lines parallel to the
+ axes. Over each panel the bicubic spline is a bicubic polynomial,
+ that is it takes the form
S 203 of 215
 result:=e02gaf(m,la,nplus2,toler,a,b,-1)
E 203
+ 3 3
+ --- --- i j
+ > > a x y . (13)
+ --- --- ij
+ i=0 j=0
)clear all
+ Each of these polynomials joins the polynomials in adjacent
+ panels with continuity up to the second derivative. The constant
+ x-values of the dividing lines parallel to the y-axis form the
+ set of interior knots for the variable x, corresponding precisely
+ to the set of interior knots of a cubic spline. Similarly, the
+ constant y-values of dividing lines parallel to the x-axis form
+ the set of interior knots for the variable y. Instead of
+ representing the bicubic spline in terms of the above set of
+ bicubic polynomials, however, it is represented, for the sake of
+ computational speed and accuracy, in the form
+ p q
+ --- ---
+ f(x,y)= > > c M (x)N (y), (14)
+ --- --- ij i j
+ i=1 j=1
S 204 of 215
showArrayValues true
R
R
R (1) true
R Type: Boolean
E 204
+ where M (x), for i=1,2,...,p, and N (y), for j=1,2,...,q, are
+ i j
+ normalised B-splines (see Hayes and Halliday [4] for further
+ details of bicubic splines and Hayes [2] for normalised B-
+ splines).
S 205 of 215
showScalarValues true
R
R
R (2) true
R Type: Boolean
E 205
+ 2.4. General Linear and Nonlinear Fitting Functions
S 206 of 215
px:=9
R
R
R (3) 9
R Type: PositiveInteger
E 206
+ We have indicated earlier that, unless the data-fitting
+ application under consideration specifically requires some other
+ type of fitting function, a polynomial or a spline is usually to
+ be preferred. Special routines for these functions, in one and in
+ two variables, are provided in this chapter. When the application
+ does specify some other fitting function, however, it may be
+ treated by a routine which deals with a general linear function,
+ or by one for a general nonlinear function, depending on whether
+ the coefficients in the given function occur linearly or
+ nonlinearly.
S 207 of 215
py:=10
R
R
R (4) 10
R Type: PositiveInteger
E 207
+ The general linear fitting function can be written in the form
S 208 of 215
lamda:Matrix SF:= [[0 ,0 ,0 ,0 ,1.00 ,0 ,0 ,0 ,0 ]]
R
R
R (5) [0. 0. 0. 0. 1. 0. 0. 0. 0.]
R Type: Matrix(DoubleFloat)
E 208
+ f(x)==c (phi) (x)+c (phi) (x)+...+c (phi) (x), (15)
+ 1 1 2 2 p p
S 209 of 215
mu:Matrix SF:= [[0 ,0 ,0 ,0 ,0.80 ,1.20 ,0 ,0 ,0 ,0 ]]
R
R
R (6) [0. 0. 0. 0. 0.80000000000000004 1.2 0. 0. 0. 0.]
R Type: Matrix(DoubleFloat)
E 209
+ where x is a vector of one or more independent variables, and the
+ (phi) are any given functions of these variables (though they
+ i
+ must be linearly independent of one another if there is to be the
+ possibility of a unique solution to the fitting problem). This is
+ not intended to imply that each (phi) is necessarily a function
+ i
+ of all the variables: we may have, for example, that each (phi)
+ i
+ is a function of a different single variable, and even that one
+ of the (phi) is a constant. All that is required is that a value
+ i
+ of each (phi) (x) can be computed when a value of each
+ i
+ independent variable is given.
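+ Fitting the general linear form (15) by least squares reduces to
+ a linear algebra problem in the coefficients c . A deliberately
+ i
+ simple Python sketch via the normal equations, with an invented
+ basis and data (a library routine would use a more stable
+ factorisation than plain Gaussian elimination):

```python
# Sketch: least-squares fit of the general linear form (15),
#   f(x) = c_1 phi_1(x) + ... + c_p phi_p(x),
# via the normal equations (A^T A) c = A^T y with A[r][i] = phi_i(x_r).

def fit_linear(phis, xs, ys):
    p, m = len(phis), len(xs)
    A = [[phi(x) for phi in phis] for x in xs]
    M = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(p)]
         for i in range(p)]
    b = [sum(A[r][i] * ys[r] for r in range(m)) for i in range(p)]
    # Gaussian elimination with partial pivoting on the p x p system.
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = M[r][i] / M[i][i]
            for j in range(i, p):
                M[r][j] -= f * M[i][j]
            b[r] -= f * b[i]
    c = [0.0] * p
    for i in reversed(range(p)):
        c[i] = (b[i] - sum(M[i][j] * c[j] for j in range(i + 1, p))) / M[i][i]
    return c

# Fit f(x) = c_1 + c_2 x to data lying exactly on y = 1 + 2x.
c = fit_linear([lambda x: 1.0, lambda x: x], [0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```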
S 210 of 215
m:=10
R
R
R (7) 10
R Type: PositiveInteger
E 210
+ When the fitting function f(x) is not linear in its coefficients,
+ no more specific representation is available in general than f(x)
+ itself. However, we shall find it helpful later on to indicate
+ the fact that f(x) contains a number of coefficients (to be
+ determined by the fitting process) by using instead the notation
+ f(x;c), where c denotes the vector of coefficients. An example of
+ a nonlinear fitting function is
S 211 of 215
x:Matrix SF:= [[0.00 ,0.70 ,1.44 ,0.21 ,1.01 ,1.84 ,0.71 ,1.00 ,0.54 ,1.531 ]]
R
R
R (8)
R [
R [0., 0.69999999999999996, 1.4399999999999999, 0.20999999999999999,
R 1.0099999999999998, 1.8399999999999999, 0.70999999999999996, 1.,
R 0.54000000000000004, 1.5309999999999999]
R ]
R Type: Matrix(DoubleFloat)
E 211
S 212 of 215
y:Matrix SF:= [[0.77 ,1.06 ,0.33 ,0.44 ,0.50 ,0.02 ,1.95 ,1.20 ,0.04 ,0.18 ]]
R
R
R (9)
R [
R [0.77000000000000002, 1.0600000000000001, 0.32999999999999996,
R 0.43999999999999995, 0.5, 1.9999999999999997E-2, 1.95, 1.2,
R 3.9999999999999994E-2, 0.17999999999999999]
R ]
R Type: Matrix(DoubleFloat)
E 212
+ f(x;c)==c +c exp(c x)+c exp(c x), (16)
+ 1 2 4 3 5
S 213 of 215
npoint:=45
R
R
R (10) 45
R Type: PositiveInteger
E 213
+ which is in one variable and contains five coefficients. Note
+ that here, as elsewhere in this Chapter Introduction, we use the
+ term 'coefficients' to include all the quantities whose values
+ are to be determined by the fitting process, not just those which
+ occur linearly. We may observe that it is only the presence of
+ the coefficients c and c which makes the form (16) nonlinear.
+ 4 5
+ If the values of these two coefficients were known beforehand,
+ (16) would instead be a linear function which, in terms of the
+ general linear form (15), has p=3 and
S 214 of 215
nadres:=6
R
R
R (11) 6
R Type: PositiveInteger
E 214
+ (phi) (x)==1, (phi) (x)==exp(c x), and (phi) (x)==exp(c x).
+ 1 2 4 3 5
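The observation above can be made concrete in Python (purely illustrative; the values of c4 and c5 are invented): once c4 and c5 are fixed, (16) is evaluated exactly like the linear form (15) with p=3:

```python
import math

# With c4 and c5 known beforehand (invented values below), (16)
# reduces to the linear form (15) with p=3 and the three basis
# functions given in the text.
c4, c5 = -1.0, -2.0
phi = [lambda x: 1.0,
       lambda x: math.exp(c4 * x),
       lambda x: math.exp(c5 * x)]

def f_linear(c, x):
    # f(x) = c1 + c2*exp(c4*x) + c3*exp(c5*x)
    return sum(ci * p(x) for ci, p in zip(c, phi))

# f_linear([1.0, 2.0, 3.0], 0.0) = 1 + 2 + 3 = 6.0
```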
S 215 of 215
 result:=e02zaf(px,py,lamda,mu,m,x,y,npoint,nadres,-1)
E 215
+ We may note also that polynomials and splines, such as (9) and
+ (14), are themselves linear in their coefficients. Thus if, when
+ fitting with these functions, a suitable special routine is not
+ available (as when more than two independent variables are
+ involved or when fitting in the l norm), it is appropriate to
+ 1
+ use a routine designed for a general linear function.
)spool
)lisp (bye)
\end{chunk}
\begin{chunk}{NagFittingPackage.help}
+ 2.5. Constrained Problems
This package uses the NAG Library to find a function which
approximates a set of data points. Typically the data contain random
errors, as of experimental measurement, which need to be smoothed
out. To seek an approximation to the data, it is first necessary to
specify for the approximating function a mathematical form (a
polynomial, for example) which contains a number of unspecified
coefficients: the appropriate fitting routine then derives for the
coefficients the values which provide the best fit of that particular
form. The package deals mainly with curve and surface fitting (i.e.,
fitting with functions of one and of two variables) when a polynomial
or a cubic spline is used as the fitting function, since these cover
the most common needs. However, fitting with other functions and/or
more variables can be undertaken by means of general linear or
nonlinear routines (some of which are contained in other packages)
depending on whether the coefficients in the function occur linearly
or nonlinearly. Cases where a graph rather than a set of data points
is given can be treated simply by first reading a suitable set of
points from the graph. The package also contains routines for
evaluating, differentiating and integrating polynomial and spline
curves and surfaces, once the numerical values of their coefficients
have been determined.
+ So far, we have considered only fitting processes in which the
+ values of the coefficients in the fitting function are determined
+ by an unconstrained minimization of a particular norm. Some
+ fitting problems, however, require that further restrictions be
+ placed on the determination of the coefficient values. Sometimes
+ these restrictions are contained explicitly in the formulation of
+ the problem in the form of equalities or inequalities which the
+ coefficients, or some function of them, must satisfy. For
+ example, if the fitting function contains a term Aexp(kx), it
+ may be required that k>=0. Often, however, the equality or
+ inequality constraints relate to the value of the fitting
+ function or its derivatives at specified values of the
+ independent variable(s), but these too can be expressed in terms
+ of the coefficients of the fitting function, and it is
+ appropriate to do this if a general linear or nonlinear routine
+ is being used. For example, if the fitting function is that given
+ in (10), the requirement that the first derivative of the
+ function at x=x be nonnegative can be expressed as
+ 0
 E02 -- Curve and Surface Fitting                 Introduction -- E02
 Chapter E02
 Curve and Surface Fitting
+ c N '(x )+c N '(x )+...+c N '(x )>=0, (17)
+ 1 1 0 2 2 0 p p 0
 Contents of this Introduction:
+ where the prime denotes differentiation with respect to x and
+ each derivative is evaluated at x=x . On the other hand, if the
+ 0
+ requirement had been that the derivative at x=x be exactly zero,
+ 0
+ the inequality sign in (17) would be replaced by an equality.
 1. Scope of the Chapter
+ Routines which provide a facility for minimizing the appropriate
+ norm subject to such constraints are discussed in Section 3.6.
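As a sketch (illustrative Python; the derivative values are invented numbers, not real cubic-spline derivatives), a constraint of the form (17) is just a linear inequality in the coefficients:

```python
# The derivative values N_i'(x0) below are invented; they only
# show that (17) is a linear inequality in the coefficients c.
dN_at_x0 = [0.5, -1.0, 2.0]

def constraint_ok(c, dN=dN_at_x0):
    """True when c1*N1'(x0) + ... + cp*Np'(x0) >= 0, as in (17)."""
    return sum(ci * d for ci, d in zip(c, dN)) >= 0.0

# constraint_ok([1.0, 1.0, 1.0])  ->  0.5 - 1.0 + 2.0 = 1.5 >= 0
# constraint_ok([0.0, 2.0, 0.0])  ->  -2.0 < 0
```

Replacing the `>=` by an equality test gives the corresponding equality constraint.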
 2. Background to the Problems
+ 2.6. References
 2.1. Preliminary Considerations
+ [1] Cox M G and Hayes J G (1973) Curve fitting: a guide and
+ suite of algorithms for the non-specialist user. Report
+ NAC26. National Physical Laboratory.
 2.1.1. Fitting criteria: norms
+ [2] Hayes J G (1974) Numerical Methods for Curve and Surface
+ Fitting. Bull. Inst. Math. Appl. 10 144-152.
 2.1.2. Weighting of data points
+ (For definition of normalised B-splines and details of
+ numerical methods.)
 2.2. Curve Fitting
+ [3] Hayes J G (1970) Curve Fitting by Polynomials in One
+ Variable. Numerical Approximation to Functions and Data. (ed
+ J G Hayes) Athlone Press, London.
 2.2.1. Representation of polynomials
+ [4] Hayes J G and Halliday J (1974) The Least-squares Fitting of
+ Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
+ Appl. 14 89-103.
 2.2.2. Representation of cubic splines
+ 3. Recommendations on Choice and Use of Routines
 2.3. Surface Fitting
+ 3.1. General
 2.3.1. Bicubic splines: definition and representation
+ The choice of a routine to treat a particular fitting problem
+ will depend first of all on the fitting function and the norm to
+ be used. Unless there is good reason to the contrary, the fitting
+ function should be a polynomial or a cubic spline (in the
+ appropriate number of variables) and the norm should be the l
+ 2
+ norm (leading to the least-squares fit). If some other function
+ is to be used, the choice of routine will depend on whether the
+ function is nonlinear (in which case see Section 3.5.2) or linear
+ in its coefficients (see Section 3.5.1), and, in the latter case,
+ on whether the l or l norm is to be used. The latter section is
+ 1 2
+ appropriate for polynomials and splines, too, if the l norm is
+ 1
+ preferred.
 2.4. General Linear and Nonlinear Fitting Functions
+ In the case of a polynomial or cubic spline, if there is only one
+ independent variable, the user should choose a spline (Section
+ 3.3) when the curve represented by the data is of complicated
+ form, perhaps with several peaks and troughs. When the curve is
+ of simple form, first try a polynomial (see Section 3.2) of low
+ degree, say up to degree 5 or 6, and then a spline if the
+ polynomial fails to provide a satisfactory fit. (Of course, if
+ third-derivative discontinuities are unacceptable to the user, a
+ polynomial is the only choice.) If the problem is one of surface
+ fitting, one of the spline routines should be used (Section 3.4).
+ If the problem has more than two independent variables, it may be
+ treated by the general linear routine in Section 3.5.1, again
+ using a polynomial in the first instance.
 2.5. Constrained Problems
+ Another factor which affects the choice of routine is the
+ presence of constraints, as previously discussed in Section 2.5.
+ Indeed this factor is likely to be overriding at present, because
+ of the limited number of routines which have the necessary
+ facility. See Section 3.6.
 2.6. References
+ 3.1.1. Data considerations
 3. Recommendations on Choice and Use of Routines
+ A satisfactory fit cannot be expected by any means if the number
+ and arrangement of the data points do not adequately represent
+ the character of the underlying relationship: sharp changes in
+ behaviour, in particular, such as sharp peaks, should be well
+ covered. Data points should extend over the whole range of
+ interest of the independent variable(s): extrapolation outside
+ the data ranges is most unwise. Then, with polynomials, it is
+ advantageous to have additional points near the ends of the
+ ranges, to counteract the tendency of polynomials to develop
+ fluctuations in these regions. When, with polynomial curves, the
+ user can precisely choose the x-values of the data, the special
+ points defined in Section 3.2.2 should be selected. With splines
+ the choice is less critical as long as the character of the
+ relationship is adequately represented. All fits should be tested
+ graphically before accepting them as satisfactory.
 3.1. General
+ For this purpose it should be noted that it is not sufficient to
+ plot the values of the fitted function only at the data values of
+ the independent variable(s); at the least, its values at a
+ similar number of intermediate points should also be plotted, as
+ unwanted fluctuations may otherwise go undetected. Such
+ fluctuations are the less likely to occur the lower the number of
+ coefficients chosen in the fitting function. No firm guide can be
+ given, but as a rough rule, at least initially, the number of
+ coefficients should not exceed half the number of data points
+ (points with equal or nearly equal values of the independent
+ variable, or both independent variables in surface fitting,
+ counting as a single point for this purpose). However, the
+ situation may be such, particularly with a small number of data
+ points, that a satisfactorily close fit to the data cannot be
+ achieved without unwanted fluctuations occurring. In such cases,
+ it is often possible to improve the situation by a transformation
+ of one or more of the variables, as discussed in the next
+ paragraph: otherwise it will be necessary to provide extra data
+ points. Further advice on curve fitting is given in Cox and Hayes
+ [1] and, for polynomials only, in Hayes [3] of Section 2.6. Much
+ of the advice applies also to surface fitting; see also the
+ Routine Documents.
 3.1.1. Data considerations
+ 3.1.2. Transformation of variables
 3.1.2. Transformation of variables
+ Before starting the fitting, consideration should be given to the
+ choice of a good form in which to deal with each of the
+ variables: often it will be satisfactory to use the variables as
+ they stand, but sometimes the use of the logarithm, square root,
+ or some other function of a variable will lead to a better
+ behaved relationship. This question is customarily taken into
+ account in preparing graphs and tables of a relationship and the
+ same considerations apply when curve or surface fitting. The
+ practical context will often give a guide. In general, it is best
+ to avoid having to deal with a relationship whose behaviour in
+ one region is radically different from that in another. A steep
+ rise at the left-hand end of a curve, for example, can often best
+ be treated by curve fitting in terms of log(x+c) with some
+ suitable value of the constant c. A case when such a
+ transformation gave substantial benefit is discussed in Hayes [3]
+ page 60. According to the features exhibited in any particular
+ case, transformation of either dependent variable or independent
+ variable(s) or both may be beneficial. When there is a choice it
+ is usually better to transform the independent variable(s): if
+ the dependent variable is transformed, the weights attached to
+ the data points must be adjusted. Thus (denoting the dependent
+ variable by y, as in the notation for curves) if the y to be
+ r
+ fitted have been obtained by a transformation y=g(Y) from
+ original data values Y , with weights W , for r=1,2,...,m, we
+ r r
+ must take
 3.2. Polynomial Curves
+ w =W /(dy/dY), (18)
+ r r
 3.2.1. Least-squares polynomials: arbitrary data points
+ where the derivative is evaluated at Y . Strictly, the
+ r
+ transformation of Y and the adjustment of weights are valid only
+ when the data errors in the Y are small compared with the range
+ r
+ spanned by the Y , but this is usually the case.
+ r
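A small worked instance of (18) (illustrative Python, not NAG code): for the common transformation y=log(Y) we have dy/dY=1/Y, so each adjusted weight is simply W multiplied by Y:

```python
import math

# Worked instance of (18) for y = log(Y): dy/dY = 1/Y, so the
# adjusted weights are w_r = W_r*Y_r.  Data values are invented.
Y = [1.0, 2.0, 4.0]                            # original data Y_r
W = [1.0, 1.0, 1.0]                            # original weights W_r

y = [math.log(v) for v in Y]                   # transformed data y_r
w = [Wr / (1.0 / Yr) for Wr, Yr in zip(W, Y)]  # (18): w_r = W_r/(dy/dY)
# w == [1.0, 2.0, 4.0]
```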
 3.2.2. Least-squares polynomials: selected data points
+ 3.2. Polynomial Curves
 3.3. Cubic Spline Curves
+ 3.2.1. Least-squares polynomials: arbitrary data points
 3.3.1. Least-squares cubic splines
+ E02ADF fits to arbitrary data points, with arbitrary weights,
+ polynomials of all degrees up to a maximum degree k, which is at
+ the user's choice. If the user is seeking only a low degree
+ polynomial, up to degree 5 or 6 say, k=10 is an appropriate
+ value, providing there are about 20 data points or more. To
+ assist in deciding the degree of polynomial which satisfactorily
+ fits the data, the routine provides the root-mean-square
+ residual s  for all degrees i=0,1,...,k. In a satisfactory case,
+           i
+ these s  will decrease steadily as i increases and then settle
+        i
+ down to a fairly constant value, as shown in the example
 3.3.2. Automatic fitting with cubic splines
+ i s
+ i
 3.4. Spline Surfaces
+ 0 3.5215
 3.4.1. Least-squares bicubic splines
+ 1 0.7708
 3.4.2. Automatic fitting with bicubic splines
+ 2 0.1861
 3.5. General Linear and Nonlinear Fitting Functions
+ 3 0.0820
 3.5.1. General linear functions
+ 4 0.0554
 3.5.2. Nonlinear functions
+ 5 0.0251
 3.6. Constraints
+ 6 0.0264
 3.7. Evaluation, Differentiation and Integration
+ 7 0.0280
 3.8. Index
+ 8 0.0277
+ 9 0.0297
+ 10 0.0271
 1. Scope of the Chapter
+ If the s values settle down in this way, it indicates that the
+ i
+ closest polynomial approximation justified by the data has been
+ achieved. The degree which first gives the approximately constant
+ value of s (degree 5 in the example) is the appropriate degree
+ i
+ to select. (Users who are prepared to accept a fit higher than
+ sixth degree, should simply find a high enough value of k to
+ enable the type of behaviour indicated by the example to be
+ detected: thus they should seek values of k for which at least 4
+ or 5 consecutive values of s are approximately the same.) If the
+ i
+ degree were allowed to go high enough, s would, in most cases,
+ i
+ eventually start to decrease again, indicating that the data
+ points are being fitted too closely and that undesirable
+ fluctuations are developing between the points. In some cases,
+ particularly with a small number of data points, this final
+ decrease is not distinguishable from the initial decrease in s .
+ i
+ In such cases, users may seek an acceptable fit by examining the
+ graphs of several of the polynomials obtained. Failing this, they
+ may (a) seek a transformation of variables which improves the
+ behaviour, (b) try fitting a spline, or (c) provide more data
+ points. If data can be provided simply by drawing an
+ approximating curve by hand and reading points from it, use the
+ points discussed in Section 3.2.2.
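The behaviour described above can be reproduced with a rough Python sketch (ordinary normal equations on synthetic data; this is an illustration only, not the numerically stable orthogonal-polynomial method that E02ADF actually uses):

```python
import math

# Rough illustration of choosing the degree by watching the
# root-mean-square residual s_i settle down.  Synthetic data: an
# exact cubic plus a small rapidly oscillating "noise" term.
xs = [i / 10.0 for i in range(21)]                      # 21 points on [0,2]
ys = [1.0 + x - 2.0 * x**2 + 0.5 * x**3
      + 0.01 * math.sin(37.0 * x) for x in xs]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j]
                              for j in range(k + 1, n))) / M[k][k]
    return x

def rms_residual(deg):
    n = deg + 1
    A = [[sum(x**(i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x**i for x, y in zip(xs, ys)) for i in range(n)]
    c = solve(A, b)
    res = [y - sum(ci * x**i for i, ci in enumerate(c))
           for x, y in zip(xs, ys)]
    return math.sqrt(sum(r * r for r in res) / len(xs))

s = [rms_residual(d) for d in range(7)]
# s drops sharply up to degree 3, then stays roughly constant:
# for these data, degree 3 is the one to select.
```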
 The main aim of this chapter is to assist the user in finding a
 function which approximates a set of data points. Typically the
 data contain random errors, as of experimental measurement, which
 need to be smoothed out. To seek an approximation to the data, it
 is first necessary to specify for the approximating function a
 mathematical form (a polynomial, for example) which contains a
 number of unspecified coefficients: the appropriate fitting
 routine then derives for the coefficients the values which
 provide the best fit of that particular form. The chapter deals
 mainly with curve and surface fitting (i.e., fitting with
 functions of one and of two variables) when a polynomial or a
 cubic spline is used as the fitting function, since these cover
 the most common needs. However, fitting with other functions
 and/or more variables can be undertaken by means of general
 linear or nonlinear routines (some of which are contained in
 other chapters) depending on whether the coefficients in the
 function occur linearly or nonlinearly. Cases where a graph
 rather than a set of data points is given can be treated simply
 by first reading a suitable set of points from the graph.
+ 3.2.2. Least-squares polynomials: selected data points
 The chapter also contains routines for evaluating,
 differentiating and integrating polynomial and spline curves and
 surfaces, once the numerical values of their coefficients have
 been determined.
+ When users are at liberty to choose the xvalues of data points,
+ such as when the points are taken from a graph, it is most
+ advantageous when fitting with polynomials to use the values
+ x =cos((pi)r/n), for r=0,1,...,n for some value of n, a suitable
+ r
+ value for which is discussed at the end of this section. Note
+ that these x relate to the variable x after it has been
+ r
+ normalised so that its range of interest is -1 to +1. E02ADF may
+ then be used as in Section 3.2.1 to seek a satisfactory fit.
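These special abscissae are easy to generate (illustrative Python; the mapping of the normalised range onto a general interval [a,b] is an added convenience, not part of the text above):

```python
import math

# The recommended abscissae x_r = cos((pi)*r/n), r = 0,1,...,n,
# on the normalised range -1 to +1, optionally mapped onto [a,b].
def selected_points(n, a=-1.0, b=1.0):
    xs = [math.cos(math.pi * r / n) for r in range(n + 1)]
    return [0.5 * (b - a) * x + 0.5 * (b + a) for x in xs]

pts = selected_points(4)
# pts runs from +1 down to -1, clustered towards both ends
```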
 2. Background to the Problems
+ 3.3. Cubic Spline Curves
 2.1. Preliminary Considerations
+ 3.3.1. Least-squares cubic splines
 In the curve-fitting problems considered in this chapter, we have
 a dependent variable y and an independent variable x, and we are
 given a set of data points (x ,y ), for r=1,2,...,m. The
 r r
 preliminary matters to be considered in this section will, for
 simplicity, be discussed in this context of curve-fitting
 problems. In fact, however, these considerations apply equally
 well to surface and higherdimensional problems. Indeed, the
 discussion presented carries over essentially as it stands if,
 for these cases, we interpret x as a vector of several
 independent variables and correspondingly each x as a vector
 r
 containing the rth data value of each independent variable.
+ E02BAF fits to arbitrary data points, with arbitrary weights, a
+ cubic spline with interior knots specified by the user. The
+ choice of these knots so as to give an acceptable fit must
+ largely be a matter of trial and error, though with a little
+ experience a satisfactory choice can often be made after one or
+ two trials. It is usually best to start with a small number of
+ knots (too many will result in unwanted fluctuations in the fit,
+ or even in there being no unique solution) and, examining the fit
+ graphically at each stage, to add a few knots at a time at places
+ where the fit is particularly poor. Moving the existing knots
+ towards these places will also often improve the fit. In regions
+ where the behaviour of the curve underlying the data is changing
+ rapidly, closer knots will be needed than elsewhere. Otherwise,
+ positioning is not usually very critical and equally-spaced knots
+ are often satisfactory. See also the next section, however.
 We wish, then, to approximate the set of data points as closely
 as possible with a specified function, f(x) say, which is as
 smooth as possible - f(x) may, for example, be a polynomial. The
 requirements of smoothness and closeness conflict, however, and a
 balance has to be struck between them. Most often, the smoothness
 requirement is met simply by limiting the number of coefficients
 allowed in the fitting function - for example, by restricting
 the degree in the case of a polynomial. Given a particular number
 of coefficients in the function in question, the fitting routines
 of this chapter determine the values of the coefficients such
 that the 'distance' of the function from the data points is as
 small as possible. The necessary balance is struck by the user
 comparing a selection of such fits having different numbers of
 coefficients. If the number of coefficients is too low, the
 approximation to the data will be poor. If the number is too
 high, the fit will be too close to the data, essentially
 following the random errors and tending to have unwanted
 fluctuations between the data points. Between these extremes,
 there is often a group of fits all similarly close to the data
 points and then, particularly when least-squares polynomials are
 used, the choice is clear: it is the fit from this group having
 the smallest number of coefficients.
+ A useful feature of the routine is that it can be used in
+ applications which require the continuity to be less than the
+ normal continuity of the cubic spline. For example, the fit may
+ be required to have a discontinuous slope at some point in the
+ range. This can be achieved by placing three coincident knots at
+ the given point. Similarly a discontinuity in the second
+ derivative at a point can be achieved by placing two knots there.
+ Analogy with these discontinuous cases can provide guidance in
+ more usual cases: for example, just as three coincident knots can
+ produce a discontinuity in slope, so three close knots can
+ produce a rapid change in slope. The closer the knots are, the
+ more rapid can the change be.
 The above process can be seen as the user minimizing the
 smoothness measure (i.e., the number of coefficients) subject to
 the distance from the data points being acceptably small. Some of
 the routines, however, do this task themselves. They use a
 different measure of smoothness (in each case one that is
 continuous) and minimize it subject to the distance being less
 than a threshold specified by the user. This is a much more
 automatic process, requiring only some experimentation with the
 threshold.
+ Figure 1
+ Please see figure in printed Reference Manual
 2.1.1. Fitting criteria: norms
+ An example set of data is given in Figure 1. It is a rather
+ tricky set, because of the scarcity of data on the right, but it
+ will serve to illustrate some of the above points and to show
+ some of the dangers to be avoided. Three interior knots
+ (indicated by the vertical lines at the top of the diagram) are
+ chosen as a start. We see that the resulting curve is not steep
+ enough in the middle and fluctuates at both ends, severely on the
+ right. The spline is unable to cope with the shape and more knots
+ are needed.
 A measure of the above 'distance' between the set of data points
 and the function f(x) is needed. The distance from a single data
 point (x ,y ) to the function can simply be taken as
 r r
+ In Figure 2, three knots have been added in the centre, where the
+ data shows a rapid change in behaviour, and one further out at
+ each end, where the fit is poor. The fit is still poor, so a
+ further knot is added in this region and, in Figure 3, disaster
+ ensues in rather spectacular fashion.
 (epsilon) =y -f(x ),                                        (1)
          r  r    r
+ Figure 2
+ Please see figure in printed Reference Manual
 and is called the residual of the point. (With this definition,
 the residual is regarded as a function of the coefficients
 contained in f(x); however, the term is also used to mean the
 particular value of (epsilon) which corresponds to the fitted
 r
 values of the coefficients.) However, we need a measure of
 distance for the set of data points as a whole. Three different
 measures are used in the different routines (which measure to
 select, according to circumstances, is discussed later in this
 subsection). With (epsilon) defined in (1), these measures, or
 r
 norms, are
+ Figure 3
+ Please see figure in printed Reference Manual
              m
             ---
             >  |(epsilon) |,                                (2)
             ---          r
             r=1
+ The reason is that, at the righthand end, the fits in Figure 1
+ and Figure 2 have been interpreted as poor simply because of the
+ fluctuations about the curve underlying the data (or what it is
+ naturally assumed to be). But the fitting process knows only
+ about the data and nothing else about the underlying curve, so it
+ is important to consider only closeness to the data when deciding
+ goodness of fit.

+ Thus, in Figure 1, the curve fits the last two data points quite
+ well compared with the fit elsewhere, so no knot should have been
+ added in this region. In Figure 2, the curve goes exactly through
+ the last two points, so a further knot is certainly not needed
+ here.
            /   m
           /   ---            2
          /    >   (epsilon)   ,    and                      (3)
         /     ---          r
         \/    r=1

         max |(epsilon) |,                                   (4)
          r            r
+ Figure 4
+ Please see figure in printed Reference Manual
 respectively the l norm, the l norm and the l norm.
 1 2 infty
+ Figure 4 shows what can be achieved without the extra knot on
+ each of the flat regions. Remembering that within each knot
+ interval the spline is a cubic polynomial, there is really no
+ need to have more than one knot interval covering each flat
+ region.
 Minimization of one or other of these norms usually provides the
 fitting criterion, the minimization being carried out with
 respect to the coefficients in the mathematical form used for
 f(x): with respect to the b for example if the mathematical form
 i
 is the power series in (8) below. The fit which results from
 minimizing (2) is known as the l fit, or the fit in the l norm:
 1 1
 that which results from minimizing (3) is the l fit, the
                                                2
 well-known least-squares fit (minimizing (3) is equivalent to
 minimizing the square of (3), i.e., the sum of squares of
 residuals, and it is the latter which is used in practice), and
 that from minimizing (4) is the l , or minimax, fit.
 infty
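For concreteness, the three norms can be computed directly (illustrative Python with invented residuals, separate from the pamphlet's Axiom code):

```python
# Invented residuals (epsilon)_r, and the norms (2)-(4) computed
# directly (illustration only).
eps = [0.5, -1.5, 0.25]

l1   = sum(abs(e) for e in eps)          # (2): sum of |eps_r|
l2   = sum(e * e for e in eps) ** 0.5    # (3): root of sum of squares
linf = max(abs(e) for e in eps)          # (4): largest |eps_r|
# l1 = 2.25, linf = 1.5; minimizing (3) gives the least-squares fit
```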
+ What we have, in fact, in Figure 2 and Figure 3 is a case of too
+ many knots (so too many coefficients in the spline equation) for
+ the number of data points. The warning in the second paragraph of
+ Section 2.1 was that the fit will then be too close to the data,
+ tending to have unwanted fluctuations between the data points.
+ The warning applies locally for splines, in the sense that, in
+ localities where there are plenty of data points, there can be a
+ lot of knots, as long as there are few knots where there are few
+ points, especially near the ends of the interval. In the present
+ example, with so few data points on the right, just the one extra
+ knot in Figure 2 is too many! The signs are clearly present, with
+ the last two points fitted exactly (at least to the graphical
+ accuracy and actually much closer than that) and fluctuations
+ within the last two knot-intervals (cf. Figure 1, where only the
+ final point is fitted exactly and one of the wobbles spans
+ several data points).
 Strictly speaking, implicit in the use of the above norms are the
 statistical assumptions that the random errors in the y are
 r
 independent of one another and that any errors in the x are
 r
 negligible by comparison. From this point of view, the use of the
 l norm is appropriate when the random errors in the y have a
 2 r
 normal distribution, and the l norm is appropriate when they
 infty
 have a rectangular distribution, as when fitting a table of
 values rounded to a fixed number of decimal places. The l norm
 1
 is appropriate when the error distribution has its frequency
 function proportional to the negative exponential of the modulus
 of the normalised error - not a common situation.
+ The situation in Figure 3 is different. The fit, if computed
+ exactly, would still pass through the last two data points, with
+ even more violent fluctuations. However, the problem has become
+ so ill-conditioned that all accuracy has been lost. Indeed, if
+ the last interior knot were moved a tiny amount to the right,
+ there would be no unique solution and an error message would have
+ been caused. Near-singularity is, sadly, not picked up by the
+ routine, but can be spotted readily in a graph, as in Figure 3.
+ B-spline coefficients becoming large, with alternating signs, is
+ another indication. However, it is better to avoid such
+ situations, firstly by providing, whenever possible, data
+ adequately covering the range of interest, and secondly by
+ placing knots only where there is a reasonable amount of data.
 However, the user is often indifferent to these statistical
 considerations, and simply seeks a fit which he can assess by
 inspection, perhaps visually from a graph of the results. In this
 event, the l norm is particularly appropriate when the data are
 1
 thought to contain some 'wild' points (since fitting in this norm
 tends to be unaffected by the presence of a small number of such
 points), though of course in simple situations the user may
 prefer to identify and reject these points. The l norm
 infty
 should be used only when the maximum residual is of particular
 concern, as may be the case for example when the data values have
 been obtained by accurate computation, as of a mathematical
 function. Generally, however, a routine based on least-squares
 should be preferred, as being computationally faster and usually
 providing more information on which to assess the results. In
 many problems the three fits will not differ significantly for
 practical purposes.
+ The example here could, in fact, have utilised from the start the
+ observation made in the second paragraph of this section, that
+ three close knots can produce a rapid change in slope. The
+ example has two such rapid changes and so requires two sets of
+ three close knots (in fact, the two sets can be so close that one
+ knot can serve in both sets, so only five knots prove sufficient
+ in Figure 4). It should be noted, however, that the rapid turn
+ occurs within the range spanned by the three knots. This is the
+ reason that the six knots in Figure 2 are not satisfactory as
+ they do not quite span the two turns.
 Some of the routines based on the l norm do not minimize the
 2
 norm itself but instead minimize some (intuitively acceptable)
 measure of smoothness subject to the norm being less than a user
 specified threshold. These routines fit with cubic or bicubic
 splines (see (10) and (14) below) and the smoothing measures
 relate to the size of the discontinuities in their third
 derivatives. A much more automatic fitting procedure follows from
 this approach.
+ Some more examples to illustrate the choice of knots are given in
+ Cox and Hayes [1].
 2.1.2. Weighting of data points
+ 3.3.2. Automatic fitting with cubic splines
 The use of the above norms also assumes that the data values y
 r
 are of equal (absolute) accuracy. Some of the routines enable an
 allowance to be made to take account of differing accuracies. The
 allowance takes the form of 'weights' applied to the yvalues so
 that those values known to be more accurate have a greater
 influence on the fit than others. These weights, to be supplied
 by the user, should be calculated from estimates of the absolute
 accuracies of the yvalues, these estimates being expressed as
 standard deviations, probable errors or some other measure which
 has the same dimensions as y. Specifically, for each y the
 r
 corresponding weight w should be inversely proportional to the
 r
 accuracy estimate of y . For example, if the percentage accuracy
 r
 is the same for all y , then the absolute accuracy of y is
 r r
 proportional to y (assuming y to be positive, as it usually is
 r r
 in such cases) and so w =K/y , for r=1,2,...,m, for an arbitrary
 r r
 positive constant K. (This definition of weight is stressed
 because often weight is defined as the square of that used here.)
 The norms (2), (3) and (4) above are then replaced respectively
 by
+ E02BEF also fits cubic splines to arbitrary data points with
+ arbitrary weights but itself chooses the number and positions of
+ the knots. The user has to supply only a threshold for the sum of
+ squares of residuals. The routine first builds up a knot set by a
+ series of trial fits in the l norm. Then, with the knot set
+ 2
+ decided, the final spline is computed to minimize a certain
+ smoothing measure subject to satisfaction of the chosen
+ threshold. Thus it is easier to use than E02BAF (see previous
+ section), requiring only some experimentation with this
+ threshold. It should therefore be first choice unless the user
+ has a preference for the ordinary least-squares fit or, for
+ example, wishes to experiment with knot positions, trying to keep
+ their number down (E02BEF aims only to be reasonably frugal with
+ knots).
              m
             ---
             >   w |(epsilon) |,                             (5)
             ---  r          r
             r=1
+ 3.4. Spline Surfaces

+ 3.4.1. Least-squares bicubic splines
        /  m
       /  ---  2           2
      /    >  w  (epsilon)   ,  and                        (6)
     /    ---  r          r
   \/      r=1
+ E02DAF fits to arbitrary data points, with arbitrary weights, a
+ bicubic spline with its two sets of interior knots specified by
+ the user. For choosing these knots, the advice given for cubic
+ splines, in Section 3.3.1 above, applies here too. (See also the
+ next section, however.) If changes in the behaviour of the
+ surface underlying the data are more marked in the direction of
+ one variable than of the other, more knots will be needed for the
+ former variable than the latter. Note also that, in the surface
+ case, the reduction in continuity caused by coincident knots will
+ extend across the whole spline surface: for example, if three
+ knots associated with the variable x are chosen to coincide at a
+ value L, the spline surface will have a discontinuous slope
+ across the whole extent of the line x=L.
  max w |(epsilon) | .                                     (7)
   r   r          r
+ With some sets of data and some choices of knots, the least-
+ squares bicubic spline will not be unique. This will not occur,
+ with a reasonable choice of knots, if the rectangle R is well
+ covered with data points: here R is defined as the smallest
+ rectangle in the (x,y) plane, with sides parallel to the axes,
+ which contains all the data points. Where the least-squares
+ solution is not unique, the minimal least-squares solution is
+ computed, namely that least-squares solution which has the
+ smallest value of the sum of squares of the B-spline coefficients
+ c   (see the end of Section 2.3.2 above). This choice of least-
+  ij
+ squares solution tends to minimize the risk of unwanted
+ fluctuations in the fit. The fit will not be reliable, however,
+ in regions where there are few or no data points.
 Again it is the square of (6) which is used in practice rather
 than (6) itself.
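The weighted norms (5)-(7), and the w = K/y weighting recommended above for data of constant percentage accuracy, can be sketched in a few lines. The data and residual values below are invented purely for illustration:

```python
import math

# Illustrative data with constant percentage accuracy: the absolute
# accuracy estimate of each y_r is proportional to y_r, so w_r = K/y_r.
y = [2.0, 4.0, 8.0, 16.0]
K = 1.0
w = [K / yr for yr in y]

# Residuals (epsilon)_r = y_r - f(x_r) for some hypothetical fit.
eps = [0.1, -0.2, 0.4, -0.8]

l1   = sum(wr * abs(er) for wr, er in zip(w, eps))               # norm (5)
l2   = math.sqrt(sum((wr * er) ** 2 for wr, er in zip(w, eps)))  # norm (6)
linf = max(wr * abs(er) for wr, er in zip(w, eps))               # norm (7)
```

With these values every weighted residual w_r|(epsilon)_r| is the same, which is exactly the equalising effect the weighting is meant to have.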
+ 3.4.2. Automatic fitting with bicubic splines
 2.2. Curve Fitting
+ E02DDF also fits bicubic splines to arbitrary data points with
+ arbitrary weights but chooses the knot sets itself. The user has
+ to supply only a threshold for the sum of squares of residuals.
+ Just like the automatic curve E02BEF (Section 3.3.2), E02DDF then
+ builds up the knot sets and finally fits a spline minimizing a
+ smoothing measure subject to satisfaction of the threshold.
+ Again, this easier-to-use routine is normally to be preferred, at
+ least in the first instance.
 When, as is commonly the case, the mathematical form of the
 fitting function is immaterial to the problem, polynomials and
 cubic splines are to be preferred because their simplicity and
 ease of handling confer substantial benefits. The cubic spline is
 the more versatile of the two. It consists of a number of cubic
 polynomial segments joined end to end with continuity in first
 and second derivatives at the joins. The third derivative at the
 joins is in general discontinuous. The x-values of the joins are
 called knots, or, more precisely, interior knots. Their number
 determines the number of coefficients in the spline, just as the
 degree determines the number of coefficients in a polynomial.
+ E02DCF is a very similar routine to E02DDF but deals with data
+ points of equal weight which lie on a rectangular mesh in the
+ (x,y) plane. This kind of data allows a very much faster
+ computation and so is to be preferred when applicable.
+ Substantial departures from equal weighting can be ignored if the
+ user is not concerned with statistical questions, though the
+ quality of the fit will suffer if this is taken too far. In such
+ cases, the user should revert to E02DDF.
 2.2.1. Representation of polynomials
+ 3.5. General Linear and Nonlinear Fitting Functions
 Rather than using the power-series form
+ 3.5.1. General linear functions
 2 k
 f(x)==b +b x+b x +...+b x (8)
 0 1 2 k
+ For the general linear function (15), routines are available for
+ fitting in the l  and l  norms. The least-squares routines (which
+                 1      2
+ are to be preferred unless there is good reason to use another
+ norm - see Section 2.1.1) are in Chapter F04. The l  routine is
+                                                    1
+ E02GAF.
 to represent a polynomial, the routines in this chapter use the
 Chebyshev series form
+ All the above routines are essentially linear algebra routines,
+ and in considering their use we need to view the fitting process
+ in a slightly different way from hitherto. Taking y to be the
+ dependent variable and x the vector of independent variables, we
+ have, as for equation (1) but with each x now a vector,
+ r
 1
 f(x)== a T (x)+a T (x)+a T (x)+...+a T (x), (9)
 2 0 0 1 1 2 2 k k
+ (epsilon)  = y  - f(x ) ,    r=1,2,...,m.
+          r    r      r
 where T (x) is the Chebyshev polynomial of the first kind of
 i
 degree i (see Cox and Hayes [1], page 9), and where the range of
 x has been normalised to run from -1 to +1. The use of either
 form leads theoretically to the same fitted polynomial, but in
 practice results may differ substantially because of the effects
 of rounding error. The Chebyshev form is to be preferred, since
 it leads to much better accuracy in general, both in the
 computation of the coefficients and in the subsequent evaluation
 of the fitted polynomial at specified points. This form also has
 other advantages: for example, since the later terms in (9)
 generally decrease much more rapidly from left to right than do
 those in (8), the situation is more often encountered where the
 last terms are negligible and it is obvious that the degree of
 the polynomial can be reduced (note that on the interval -1<=x<=1
 for all i, T (x) attains the value unity but never exceeds it, so
 i
 that the coefficient a gives directly the maximum value of the
 i
 term containing it).
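As a sketch (not any NAG routine's own code), the Chebyshev-series form (9) can be evaluated stably by Clenshaw's recurrence:

```python
def cheb_eval(a, x):
    """Evaluate (1/2)*a[0]*T0(x) + a[1]*T1(x) + ... + a[k]*Tk(x) by
    Clenshaw's recurrence.  x must already be normalised to [-1, +1]."""
    b1 = b2 = 0.0
    for aj in reversed(a[1:]):          # recurrence b_j = a_j + 2x*b_{j+1} - b_{j+2}
        b1, b2 = 2.0 * x * b1 - b2 + aj, b1
    return x * b1 - b2 + 0.5 * a[0]

# Since |Tj(x)| <= 1 on [-1, +1], each coefficient a[j] bounds the
# contribution of its term, so trailing near-zero coefficients can
# simply be dropped to reduce the degree of the fitted polynomial.
```

For example, the coefficient list [2, 0, 1] represents 1 + T2(x) = 2x**2.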
+ Substituting for f(x) the general linear form (15), we can write
+ this as
 2.2.2. Representation of cubic splines
+ c (phi) (x )+c (phi) (x )+...+c (phi) (x ) = y  - (epsilon)  ,
+  1     1  r   2     2  r      p     p  r      r            r
+                                     r=1,2,...,m           (19)
 A cubic spline is represented in the form
 f(x)==c N (x)+c N (x)+...+c N (x), (10)
 1 1 2 2 p p
+ Thus we have a system of linear equations in the coefficients c .
+ j
+ Usually, in writing these equations, the (epsilon) are omitted
+ r
+ and simply taken as implied. The system of equations is then
+ described as an overdetermined system (since we must have m>=p if
+ there is to be the possibility of a unique solution to our
+ fitting problem), and the fitting process of computing the c to
+ j
+ minimize one or other of the norms (2), (3) and (4) can be
+ described, in relation to the system of equations, as solving the
+ overdetermined system in that particular norm. In matrix
+ notation, the system can be written as
 where N (x), for i=1,2,...,p, is a normalised cubic B-spline (see
 i
 Hayes [2]). This form, also, has advantages of computational
 speed and accuracy over alternative representations.
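A minimal sketch of the B-spline form (10), using the Cox-de Boor recurrence to evaluate the normalised B-splines N (x) (the knot sequence and coefficients below are invented for illustration, and this is not the algorithm the NAG routines themselves use):

```python
def bspline(t, i, k, x):
    """Value at x of the i-th normalised B-spline of order k (degree
    k-1) on the non-decreasing knot sequence t (Cox-de Boor)."""
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    v = 0.0
    if t[i + k - 1] > t[i]:
        v += (x - t[i]) / (t[i + k - 1] - t[i]) * bspline(t, i, k - 1, x)
    if t[i + k] > t[i + 1]:
        v += (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline(t, i + 1, k - 1, x)
    return v

def spline_value(c, t, x):
    """f(x) = c_1 N_1(x) + ... + c_p N_p(x) for a cubic spline:
    p coefficients c and p+4 knots t (end knots repeated four times)."""
    return sum(c[i] * bspline(t, i, 4, x) for i in range(len(c)))

# p = 6 coefficients, interior knots at 1 and 2, data range [0, 3].
t = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0, 3.0]
```

Because the B-splines are normalised they sum to one at any interior point, so setting every coefficient to 1 reproduces the constant function 1.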
+ (Phi)c=y, (20)
 2.3. Surface Fitting
+ where (Phi) is the m by p matrix whose element in row r and
+ column j is (phi) (x ), for r=1,2,...,m; j=1,2,...,p. The vectors
+ j r
+ c and y respectively contain the coefficients c and the data
+ j
+ values y .
+ r
 There are now two independent variables, and we shall denote
 these by x and y. The dependent variable, which was denoted by y
 in the curve-fitting case, will now be denoted by f. (This is a
 rather different notation from that indicated for the general
 dimensional problem in the first paragraph of Section 2.1, but
 it has some advantages in presentation.)
+ The routines, however, use the standard notation of linear
+ algebra, the overdetermined system of equations being denoted by
 Again, in the absence of contrary indications in the particular
 application being considered, polynomials and splines are the
 approximating functions most commonly used. Only splines are used
 by the surface-fitting routines in this chapter.
+ Ax=b (21)
 2.3.1. Bicubic splines: definition and representation
+ The correspondence between this notation and that which we have
+ used for the datafitting problem (equation (20)) is therefore
+ given by
 The bicubic spline is defined over a rectangle R in the (x,y)
 plane, the sides of R being parallel to the x- and y-axes. R is
 divided into rectangular panels, again by lines parallel to the
 axes. Over each panel the bicubic spline is a bicubic polynomial,
 that is it takes the form
+ A==(Phi),   x==c,   b==y                                 (22)
    3   3
   --- ---       i j
    >   >   a   x y  .                                     (13)
   --- ---   ij
   i=0 j=0
+ Note that the norms used by these routines are the unweighted
+ norms (2) and (3). If the user wishes to apply weights to the
+ data points, that is to use the norms (5) or (6), the
+ equivalences (22) should be replaced by
 Each of these polynomials joins the polynomials in adjacent
 panels with continuity up to the second derivative. The constant
 x-values of the dividing lines parallel to the y-axis form the
 set of interior knots for the variable x, corresponding precisely
 to the set of interior knots of a cubic spline. Similarly, the
 constant y-values of dividing lines parallel to the x-axis form
 the set of interior knots for the variable y. Instead of
 representing the bicubic spline in terms of the above set of
 bicubic polynomials, however, it is represented, for the sake of
 computational speed and accuracy, in the form
+ A==D(Phi),   x==c,   b==Dy
            p   q
           --- ---
 f(x,y) =   >   >   c   M (x)N (y),                        (14)
           --- ---   ij  i    j
           i=1 j=1
+ where D is a diagonal matrix with w as the rth diagonal element.
+ r
+ Here w , for r=1,2,...,m, is the weight of the rth data point as
+ r
+ defined in Section 2.1.2.
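The weighted form of (21) amounts to scaling row r of (Phi) and the element y_r by the weight w_r. A sketch with numpy (the basis functions, data and weights are invented for illustration):

```python
import numpy as np

# General linear form (15) with phi1(x) = 1 and phi2(x) = x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])        # lies exactly on y = 1 + 2x
w = np.array([1.0, 2.0, 1.0, 2.0])        # illustrative weights w_r

Phi = np.column_stack([np.ones_like(x), x])  # m by p matrix (Phi)
A = w[:, None] * Phi                         # A = D(Phi): row r scaled by w_r
b = w * y                                    # b = Dy
c, *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares solution of Ax=b
```

Since the data here lie exactly on a straight line, the solution c = (1, 2) is recovered whatever the weights.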
 where M (x), for i=1,2,...,p, and N (y), for j=1,2,...,q, are
 i j
 normalised B-splines (see Hayes and Halliday [4] for further
 details of bicubic splines and Hayes [2] for normalised B-
 splines).
+ 3.5.2. Nonlinear functions
 2.4. General Linear and Nonlinear Fitting Functions
+ Routines for fitting with a nonlinear function in the l norm are
+ 2
+ provided in Chapter E04, and that chapter's Introduction should
+ be consulted for the appropriate choice of routine. Again,
+ however, the notation adopted is different from that we have used
+ for data fitting. In the latter, we denote the fitting function
+ by f(x;c), where x is the vector of independent variables and c
+ is the vector of coefficients, whose values are to be determined.
+ The squared l norm, to be minimized with respect to the elements
+ 2
+ of c, is then
 We have indicated earlier that, unless the datafitting
 application under consideration specifically requires some other
 type of fitting function, a polynomial or a spline is usually to
 be preferred. Special routines for these functions, in one and in
 two variables, are provided in this chapter. When the application
 does specify some other fitting function, however, it may be
 treated by a routine which deals with a general linear function,
 or by one for a general nonlinear function, depending on whether
 the coefficients in the given function occur linearly or
 nonlinearly.
+    m
+   ---   2            2
+    >   w  [y -f(x ;c)]                                   (23)
+   ---   r   r    r
+   r=1
+
+ where y is the rth data value of the dependent variable, x is
+ r r
+ the vector containing the rth values of the independent
+ variables, and w is the corresponding weight as defined in
+ r
+ Section 2.1.2.
+
+ On the other hand, in the nonlinear least-squares routines of
+ Chapter E04, the function to be minimized is denoted by
 The general linear fitting function can be written in the form
+ m
+  2
+ > f (x), (24)
+  i
+ i=1
 f(x)==c (phi) (x)+c (phi) (x)+...+c (phi) (x), (15)
 1 1 2 2 p p
+ the minimization being carried out with respect to the elements
+ of the vector x. The correspondence between the two notations is
+ given by
 where x is a vector of one or more independent variables, and the
 (phi) are any given functions of these variables (though they
 i
 must be linearly independent of one another if there is to be the
 possibility of a unique solution to the fitting problem). This is
 not intended to imply that each (phi) is necessarily a function
+ x==c   and
+
+ f (x) == w [y  - f(x ;c)],    i=r=1,2,...,m.
+  i        r  r      r
+
+ Note especially that the vector x of variables of the nonlinear
+ least-squares routines is the vector c of coefficients of the
+ data-fitting problem, and in particular that, if the selected
+ routine requires derivatives of the f (x) to be provided, these
+                                      i
 of all the variables: we may have, for example, that each (phi)
 i
 is a function of a different single variable, and even that one
 of the (phi) is a constant. All that is required is that a value
 i
 of each (phi) (x) can be computed when a value of each
 i
 independent variable is given.
+ are derivatives of w [y  - f(x ;c)] with respect to the
+                     r  r      r
+ coefficients of the data-fitting problem.
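The correspondence can be made concrete in a few lines: with i=r, the sum of squares (24) of the residual components f_i(x) = w_r(y_r - f(x_r;c)) is exactly the weighted sum of squares (23). The model and numbers below are invented purely for illustration:

```python
import numpy as np

def model(xr, c):                 # a hypothetical fitting function f(x;c)
    return c[0] + c[1] * np.exp(c[2] * xr)

xr = np.array([0.0, 0.5, 1.0])    # values of the independent variable
yr = np.array([2.0, 2.6, 3.7])    # data values y_r
w  = np.array([1.0, 1.0, 2.0])    # weights as in Section 2.1.2
c  = np.array([1.0, 1.0, 1.0])    # a trial coefficient vector

f_i = w * (yr - model(xr, c))     # the residual vector f_i(x) of (24), i = r
sum24 = np.sum(f_i ** 2)                         # form (24)
sum23 = np.sum(w**2 * (yr - model(xr, c))**2)    # form (23): identical value
```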
 When the fitting function f(x) is not linear in its coefficients,
 no more specific representation is available in general than f(x)
 itself. However, we shall find it helpful later on to indicate
 the fact that f(x) contains a number of coefficients (to be
 determined by the fitting process) by using instead the notation
 f(x;c), where c denotes the vector of coefficients. An example of
 a nonlinear fitting function is
+ 3.6. Constraints
+ At present, there are only a limited number of routines which fit
+ subject to constraints. Chapter E04 contains a routine, E04UCF,
+ which can be used for fitting with a nonlinear function in the l
+ 2
+ norm subject to equality or inequality constraints. This routine,
+ unlike those in that chapter suited to the unconstrained case, is
+ not designed specifically for minimizing functions which are sums
+ of squares, and so the function (23) has to be treated as a
+ general nonlinear function. The E04 Chapter Introduction should
+ be consulted.
 f(x;c)==c +c exp(c x)+c exp(c x), (16)
 1 2 4 3 5
+ The remaining constraint routine relates to fitting with
+ polynomials in the l norm. E02AGF deals with polynomial curves
+ 2
+ and allows precise values of the fitting function and (if
+ required) all its derivatives up to a given order to be
+ prescribed at one or more values of the independent variable.
 which is in one variable and contains five coefficients. Note
 that here, as elsewhere in this Chapter Introduction, we use the
 term 'coefficients' to include all the quantities whose values
 are to be determined by the fitting process, not just those which
 occur linearly. We may observe that it is only the presence of
 the coefficients c and c which makes the form (16) nonlinear.
 4 5
 If the values of these two coefficients were known beforehand,
 (16) would instead be a linear function which, in terms of the
 general linear form (15), has p=3 and
+ 3.7. Evaluation, Differentiation and Integration
 (phi) (x)==1, (phi) (x)==exp(c x), and (phi) (x)==exp(c x).
 1 2 4 3 5
+ Routines are available to evaluate, differentiate and integrate
+ polynomials in Chebyshev-series form and cubic or bicubic splines
+ in B-spline form. These polynomials and splines may have been
+ produced by the various fitting routines or, in the case of
+ polynomials, from prior calls of the differentiation and
+ integration routines themselves.
 We may note also that polynomials and splines, such as (9) and
 (14), are themselves linear in their coefficients. Thus if, when
 fitting with these functions, a suitable special routine is not
 available (as when more than two independent variables are
 involved or when fitting in the l norm), it is appropriate to
 1
 use a routine designed for a general linear function.
+ E02AEF and E02AKF evaluate polynomial curves: the latter has a
+ longer parameter list but does not require the user to normalise
+ the values of the independent variable and can accept
+ coefficients which are not stored in contiguous locations. E02BBF
+ evaluates cubic spline curves, and E02DEF and E02DFF bicubic
+ spline surfaces.
 2.5. Constrained Problems
+ Differentiation and integration of polynomial curves are carried
+ out by E02AHF and E02AJF respectively. The results are provided
+ in Chebyshev-series form and so repeated differentiation and
+ integration are catered for. Values of the derivative or integral
+ can then be computed using the appropriate evaluation routine.
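numpy offers analogous operations on Chebyshev-series coefficients, which can serve as a rough stand-in for illustration (these are not the NAG routines, and numpy stores a full-weight a_0 rather than the halved a_0 of (9)):

```python
from numpy.polynomial import chebyshev as C

a = [0.0, 0.0, 1.0]            # f(x) = T2(x) = 2x^2 - 1
da = C.chebder(a)              # derivative series: f'(x) = 4x = 4*T1(x)
ia = C.chebint(da)             # integrating again gives a Chebyshev series,
                               # recovering f up to an integration constant
val = C.chebval(0.5, da)       # evaluate the derivative at x = 0.5
```

Because both results are again Chebyshev series, the process can be repeated, just as with E02AHF and E02AJF.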
 So far, we have considered only fitting processes in which the
 values of the coefficients in the fitting function are determined
 by an unconstrained minimization of a particular norm. Some
 fitting problems, however, require that further restrictions be
 placed on the determination of the coefficient values. Sometimes
 these restrictions are contained explicitly in the formulation of
 the problem in the form of equalities or inequalities which the
 coefficients, or some function of them, must satisfy. For
 example, if the fitting function contains a term Aexp(-kx), it
 may be required that k>=0. Often, however, the equality or
 inequality constraints relate to the value of the fitting
 function or its derivatives at specified values of the
 independent variable(s), but these too can be expressed in terms
 of the coefficients of the fitting function, and it is
 appropriate to do this if a general linear or nonlinear routine
 is being used. For example, if the fitting function is that given
 in (10), the requirement that the first derivative of the
 function at x=x be nonnegative can be expressed as
+ For splines the differentiation and integration routines provided
+ are of a different nature from those for polynomials. E02BCF
+ provides values of a cubic spline curve and its first three
+ derivatives (the rest, of course, are zero) at a given value of
+ x. E02BDF computes the value of the definite integral of a cubic
+ spline over its whole range. These routines can also be applied
+ to surfaces of the form (14). For example, if, for each value of
+ j in turn, the coefficients c  , for i=1,2,...,p are supplied to
+                              ij
+ E02BCF with x=x  and on each occasion we select from the output
+                0
+ the value of the second derivative, d  say, and if the whole set
+                                      j
+ of d  are then supplied to the same routine with x=y , the output
+     j                                              0
+ will contain all the values at (x ,y ) of
+                                  0  0
 c N '(x )+c N '(x )+...+c N '(x )>=0, (17)
  1 1  0   2 2  0        p p  0
+     2            r+2
+    d f          d    f
+    ----   and   -------- ,     r=1,2,3.
+       2           2   r
+     dx          dx  dy
 where the prime denotes differentiation with respect to x and
 each derivative is evaluated at x=x . On the other hand, if the
 0
 requirement had been that the derivative at x=x be exactly zero,
 0
 the inequality sign in (17) would be replaced by an equality.
+ Equally, if after each of the first p calls of E02BCF we had
+ selected the function value (E02BBF would also provide this)
+ instead of the second derivative and we had supplied these values
+ to E02BDF, the result obtained would have been the value of
 Routines which provide a facility for minimizing the appropriate
 norm subject to such constraints are discussed in Section 3.6.
+      B
+     /
+     |   f(x ,y) dy,
+     /      0
+    A
 2.6. References
+ where A and B are the endpoints of the y interval over which the
+ spline was defined.
 [1] Cox M G and Hayes J G (1973) Curve fitting: a guide and
 suite of algorithms for the non-specialist user. Report
 NAC26. National Physical Laboratory.
+ 3.8. Index
 [2] Hayes J G (1974) Numerical Methods for Curve and Surface
 Fitting. Bull. Inst. Math. Appl. 10 144-152.
+ Automatic fitting,
+ with bicubic splines E02DCF
+ E02DDF
+ with cubic splines E02BEF
+ Data on rectangular mesh E02DCF
+ Differentiation,
+ of cubic splines E02BCF
+ of polynomials E02AHF
+ Evaluation,
+ of bicubic splines E02DEF
+ E02DFF
+ of cubic splines E02BBF
+ of cubic splines and derivatives E02BCF
+ of definite integral of cubic splines E02BDF
+ of polynomials E02AEF
+ E02AKF
+ Integration,
+ of cubic splines (definite integral) E02BDF
+ of polynomials E02AJF
+ Least-squares curve fit,
+ with cubic splines E02BAF
+ with polynomials,
+ arbitrary data points E02ADF
+ with constraints E02AGF
+ Least-squares surface fit with bicubic splines E02DAF
+ l  fit with general linear function, E02GAF
+ 1
+ Sorting,
+ 2D data into panels E02ZAF
 (For definition of normalised Bsplines and details of
 numerical methods.)
 [3] Hayes J G (1970) Curve Fitting by Polynomials in One
 Variable. Numerical Approximation to Functions and Data. (ed
 J G Hayes) Athlone Press, London.
+ E02 - Curve and Surface Fitting                  Contents - E02
+ Chapter E02
 [4] Hayes J G and Halliday J (1974) The Least-squares Fitting of
 Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
 Appl. 14 89-103.
+ Curve and Surface Fitting
 3. Recommendations on Choice and Use of Routines
+ E02ADF Least-squares curve fit, by polynomials, arbitrary data
+ points
 3.1. General
+ E02AEF Evaluation of fitted polynomial in one variable from
+ Chebyshev series form (simplified parameter list)
 The choice of a routine to treat a particular fitting problem
 will depend first of all on the fitting function and the norm to
 be used. Unless there is good reason to the contrary, the fitting
 function should be a polynomial or a cubic spline (in the
 appropriate number of variables) and the norm should be the l
 2
 norm (leading to the leastsquares fit). If some other function
 is to be used, the choice of routine will depend on whether the
 function is nonlinear (in which case see Section 3.5.2) or linear
 in its coefficients (see Section 3.5.1), and, in the latter case,
 on whether the l or l norm is to be used. The latter section is
 1 2
 appropriate for polynomials and splines, too, if the l norm is
 1
 preferred.
+ E02AGF Least-squares polynomial fit, values and derivatives may
+ be constrained, arbitrary data points
 In the case of a polynomial or cubic spline, if there is only one
 independent variable, the user should choose a spline (Section
 3.3) when the curve represented by the data is of complicated
 form, perhaps with several peaks and troughs. When the curve is
 of simple form, first try a polynomial (see Section 3.2) of low
 degree, say up to degree 5 or 6, and then a spline if the
 polynomial fails to provide a satisfactory fit. (Of course, if
 third-derivative discontinuities are unacceptable to the user, a
 polynomial is the only choice.) If the problem is one of surface
 fitting, one of the spline routines should be used (Section 3.4).
 If the problem has more than two independent variables, it may be
 treated by the general linear routine in Section 3.5.1, again
 using a polynomial in the first instance.
+ E02AHF Derivative of fitted polynomial in Chebyshev series form
+
+ E02AJF Integral of fitted polynomial in Chebyshev series form
+
+ E02AKF Evaluation of fitted polynomial in one variable, from
+ Chebyshev series form
+
+ E02BAF Least-squares curve cubic spline fit (including
+ interpolation)
+
+ E02BBF Evaluation of fitted cubic spline, function only
+
+ E02BCF Evaluation of fitted cubic spline, function and
+ derivatives
 Another factor which affects the choice of routine is the
 presence of constraints, as previously discussed in Section 2.5.
 Indeed this factor is likely to be overriding at present, because
 of the limited number of routines which have the necessary
 facility. See Section 3.6.
+ E02BDF Evaluation of fitted cubic spline, definite integral
 3.1.1. Data considerations
+ E02BEF Least-squares cubic spline curve fit, automatic knot
+ placement
 A satisfactory fit cannot be expected by any means if the number
 and arrangement of the data points do not adequately represent
 the character of the underlying relationship: sharp changes in
 behaviour, in particular, such as sharp peaks, should be well
 covered. Data points should extend over the whole range of
 interest of the independent variable(s): extrapolation outside
 the data ranges is most unwise. Then, with polynomials, it is
 advantageous to have additional points near the ends of the
 ranges, to counteract the tendency of polynomials to develop
 fluctuations in these regions. When, with polynomial curves, the
 user can precisely choose the x-values of the data, the special
 points defined in Section 3.2.2 should be selected. With splines
 the choice is less critical as long as the character of the
 relationship is adequately represented. All fits should be tested
 graphically before accepting them as satisfactory.
+ E02DAF Least-squares surface fit, bicubic splines
 For this purpose it should be noted that it is not sufficient to
 plot the values of the fitted function only at the data values of
 the independent variable(s); at the least, its values at a
 similar number of intermediate points should also be plotted, as
 unwanted fluctuations may otherwise go undetected. Such
 fluctuations are the less likely to occur the lower the number of
 coefficients chosen in the fitting function. No firm guide can be
 given, but as a rough rule, at least initially, the number of
 coefficients should not exceed half the number of data points
 (points with equal or nearly equal values of the independent
 variable, or both independent variables in surface fitting,
 counting as a single point for this purpose). However, the
 situation may be such, particularly with a small number of data
 points, that a satisfactorily close fit to the data cannot be
 achieved without unwanted fluctuations occurring. In such cases,
 it is often possible to improve the situation by a transformation
 of one or more of the variables, as discussed in the next
 paragraph: otherwise it will be necessary to provide extra data
 points. Further advice on curve fitting is given in Cox and Hayes
 [1] and, for polynomials only, in Hayes [3] of Section 2.6. Much
 of the advice applies also to surface fitting; see also the
 Routine Documents.
+ E02DCF Least-squares surface fit by bicubic splines with
+ automatic knot placement, data on rectangular grid
 3.1.2. Transformation of variables
+ E02DDF Least-squares surface fit by bicubic splines with
+ automatic knot placement, scattered data
 Before starting the fitting, consideration should be given to the
 choice of a good form in which to deal with each of the
 variables: often it will be satisfactory to use the variables as
 they stand, but sometimes the use of the logarithm, square root,
 or some other function of a variable will lead to a better
 behaved relationship. This question is customarily taken into
 account in preparing graphs and tables of a relationship and the
 same considerations apply when curve or surface fitting. The
 practical context will often give a guide. In general, it is best
 to avoid having to deal with a relationship whose behaviour in
 one region is radically different from that in another. A steep
 rise at the left-hand end of a curve, for example, can often best
 be treated by curve fitting in terms of log(x+c) with some
 suitable value of the constant c. A case when such a
 transformation gave substantial benefit is discussed in Hayes [3]
 page 60. According to the features exhibited in any particular
 case, transformation of either dependent variable or independent
 variable(s) or both may be beneficial. When there is a choice it
 is usually better to transform the independent variable(s): if
 the dependent variable is transformed, the weights attached to
 the data points must be adjusted. Thus (denoting the dependent
 variable by y, as in the notation for curves) if the y to be
 r
 fitted have been obtained by a transformation y=g(Y) from
 original data values Y , with weights W , for r=1,2,...,m, we
 r r
 must take
+ E02DEF Evaluation of a fitted bicubic spline at a vector of
+ points
 w =W /(dy/dY), (18)
 r r
+ E02DFF Evaluation of a fitted bicubic spline at a mesh of points
 where the derivative is evaluated at Y . Strictly, the
 r
 transformation of Y and the adjustment of weights are valid only
 when the data errors in the Y are small compared with the range
 r
 spanned by the Y , but this is usually the case.
 r
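For instance, with the common transformation y = log(Y) we have dy/dY = 1/Y, so (18) gives w_r = W_r*Y_r. A short sketch with invented data values:

```python
import math

Y = [1.0, 10.0, 100.0]      # original data values Y_r
W = [1.0, 1.0, 1.0]         # their weights W_r
y = [math.log(Yr) for Yr in Y]                 # transformed values to fit
w = [Wr / (1.0 / Yr) for Wr, Yr in zip(W, Y)]  # (18): w_r = W_r/(dy/dY) at Y_r
```

The larger Y-values receive larger weights on the log scale, compensating for the way the logarithm compresses their errors.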
+ E02GAF L approximation by general linear function
+ 1
 3.2. Polynomial Curves
+ E02ZAF Sort 2D data into panels for fitting bicubic splines
 3.2.1. Least-squares polynomials: arbitrary data points
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02ADF fits to arbitrary data points, with arbitrary weights,
 polynomials of all degrees up to a maximum degree k, which is at
 the user's choice. If the user is seeking only a low degree
 polynomial, up to degree 5 or 6 say, k=10 is an appropriate
 value, providing
 there are about 20 data points or more. To assist in deciding the
 degree of polynomial which satisfactorily fits the data, the
 routine provides the root-mean-square residual s  for all degrees
                                                 i
 i=1,2,...,k. In a satisfactory case, these s will decrease
 i
 steadily as i increases and then settle down to a fairly constant
 value, as shown in the example
+ E02 - Curve and Surface Fitting                          E02ADF
+ E02ADF - NAG Foundation Library Routine Document
 i s
 i
+ Note: Before using this routine, please read the Users' Note for
+ your implementation to check implementationdependent details.
+ The symbol (*) after a NAG routine name denotes a routine that is
+ not included in the Foundation Library.
 0 3.5215
+ 1. Purpose
 1 0.7708
+ E02ADF computes weighted least-squares polynomial approximations
+ to an arbitrary set of data points.
 2 0.1861
+ 2. Specification
 3 0.0820
+ SUBROUTINE E02ADF (M, KPLUS1, NROWS, X, Y, W, WORK1,
+ 1 WORK2, A, S, IFAIL)
+ INTEGER M, KPLUS1, NROWS, IFAIL
+ DOUBLE PRECISION X(M), Y(M), W(M), WORK1(3*M), WORK2
+ 1 (2*KPLUS1), A(NROWS,KPLUS1), S(KPLUS1)
 4 0.0554
+ 3. Description
 5 0.0251
+ This routine determines least-squares polynomial approximations
+ of degrees 0,1,...,k to the set of data points (x ,y ) with
+ r r
+ weights w , for r=1,2,...,m.
+ r
 6 0.0264
+ The approximation of degree i has the property that it minimizes
+ (sigma) the sum of squares of the weighted residuals (epsilon) ,
+ i r
+ where
 7 0.0280
+ (epsilon)  = w (y  - f )
+          r    r  r    r
 8 0.0277
+ and f is the value of the polynomial of degree i at the rth data
+ r
+ point.
 9 0.0297
+ Each polynomial is represented in Chebyshev-series form with
+
 10 0.0271
+ normalised argument x. This argument lies in the range -1 to +1
+ and is related to the original variable x by the linear
+ transformation
 If the s values settle down in this way, it indicates that the
 i
 closest polynomial approximation justified by the data has been
 achieved. The degree which first gives the approximately constant
 value of s (degree 5 in the example) is the appropriate degree
 i
 to select. (Users who are prepared to accept a fit higher than
 sixth degree, should simply find a high enough value of k to
 enable the type of behaviour indicated by the example to be
 detected: thus they should seek values of k for which at least 4
 or 5 consecutive values of s are approximately the same.) If the
 i
 degree were allowed to go high enough, s would, in most cases,
 i
 eventually start to decrease again, indicating that the data
 points are being fitted too closely and that undesirable
 fluctuations are developing between the points. In some cases,
 particularly with a small number of data points, this final
 decrease is not distinguishable from the initial decrease in s .
 i
 In such cases, users may seek an acceptable fit by examining the
 graphs of several of the polynomials obtained. Failing this, they
 may (a) seek a transformation of variables which improves the
 behaviour, (b) try fitting a spline, or (c) provide more data
 points. If data can be provided simply by drawing an
 approximating curve by hand and reading points from it, use the
 points discussed in Section 3.2.2.
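The behaviour described above can be imitated with numpy's Chebyshev fitting, standing in for E02ADF (the synthetic data, noise level and seed below are purely illustrative):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 30)
y = 1.0 + 2.0 * x - x**2 + 0.01 * rng.standard_normal(x.size)

s = []
for i in range(7):                      # fit degrees i = 0,...,6
    a = C.chebfit(x, y, i)
    r = y - C.chebval(x, a)
    s.append(np.sqrt(np.sum(r**2) / (x.size - i - 1)))
# s[0] and s[1] are large; from degree 2 onwards the s_i settle near
# the noise level, so degree 2 is the degree to select here.
```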
+       (2x-x   -x   )
+            max  min
+  x =  --------------- .
+        (x   -x   )
+          max  min
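This normalising transformation, xbar = (2x - x_max - x_min)/(x_max - x_min), can be sketched as:

```python
def normalise(x, x_min, x_max):
    """Map x in [x_min, x_max] to the normalised argument in [-1, +1]:
    xbar = (2x - x_max - x_min)/(x_max - x_min)."""
    return (2.0 * x - x_max - x_min) / (x_max - x_min)
```

The endpoints map to -1 and +1 and the midpoint of the range maps to 0.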
 3.2.2. Least-squares polynomials: selected data points
+ Here x and x are respectively the largest and smallest
+ max min
+ values of x . The polynomial approximation of degree i is
+ r
+ represented as
 When users are at liberty to choose the x-values of data points,
 such as when the points are taken from a graph, it is most
 advantageous when fitting with polynomials to use the values
 x =cos((pi)r/n), for r=0,1,...,n for some value of n, a suitable
 r
 value for which is discussed at the end of this section. Note
 that these x relate to the variable x after it has been
 r
 normalised so that its range of interest is -1 to +1. E02ADF may
 then be used as in Section 3.2.1 to seek a satisfactory fit.
+ (1/2)a(i+1,1)T_0(xbar) + a(i+1,2)T_1(xbar) + a(i+1,3)T_2(xbar)
+ + ... + a(i+1,i+1)T_i(xbar),
 3.3. Cubic Spline Curves
+
 3.3.1. Least-squares cubic splines
+ where T_j(xbar) is the Chebyshev polynomial of the first kind of
+
 E02BAF fits to arbitrary data points, with arbitrary weights, a
 cubic spline with interior knots specified by the user. The
 choice of these knots so as to give an acceptable fit must
 largely be a matter of trial and error, though with a little
 experience a satisfactory choice can often be made after one or
 two trials. It is usually best to start with a small number of
 knots (too many will result in unwanted fluctuations in the fit,
 or even in there being no unique solution) and, examining the fit
 graphically at each stage, to add a few knots at a time at places
 where the fit is particularly poor. Moving the existing knots
 towards these places will also often improve the fit. In regions
 where the behaviour of the curve underlying the data is changing
 rapidly, closer knots will be needed than elsewhere. Otherwise,
 positioning is not usually very critical and equally-spaced knots
 are often satisfactory. See also the next section, however.
+ degree j with argument xbar.
+
+ For i=0,1,...,k, the routine produces the values of a(i+1,j+1),
+ for j=0,1,...,i, together with the value of the root mean square
+ residual
+
+ s_i = sqrt((sigma)_i/(m-i-1)).
+
+ In the case m=i+1 the routine sets the value of s_i to zero.
+
+ The method employed is due to Forsythe [4] and is based upon the
+ generation of a set of polynomials orthogonal with respect to
+ summation over the normalised data set. The extensions due to
+ Clenshaw [1] to represent these polynomials, as well as the
+ approximating polynomials, in their Chebyshev-series forms are
+ incorporated. The modifications suggested by Reinsch and
+ Gentleman (see [5]) to the method originally employed by Clenshaw
+ for evaluating the orthogonal polynomials from their Chebyshev
+ series representations are used to give greater numerical
+ stability.
+
+ For further details of the algorithm and its use see Cox [2] and
+ [3].
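The input/output behaviour just described (coefficients of each degree plus the residuals s_i) can be imitated in a few lines. The sketch below is not the Forsythe/Clenshaw method the routine uses: it solves ordinary normal equations in the monomial basis, adequate for low degrees on toy data but numerically much inferior to the orthogonal-polynomial approach.

```python
import math

def gauss_solve(a, b):
    """Solve a small dense linear system a x = b by Gaussian
    elimination with partial pivoting (sufficient for low degrees)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

def fit_rms(xs, ys, ws, degree):
    """Weighted least-squares polynomial fit of the given degree;
    returns the root mean square residual s_i = sqrt(sigma_i/(m-i-1)),
    or zero when m = i+1, as the routine does."""
    m, n = len(xs), degree + 1
    # Normal equations in the monomial basis, with weights squared.
    ata = [[sum(w * w * x ** (r + c) for x, w in zip(xs, ws))
            for c in range(n)] for r in range(n)]
    atb = [sum(w * w * x ** r * y for x, y, w in zip(xs, ys, ws))
           for r in range(n)]
    coef = gauss_solve(ata, atb)
    sigma = sum((w * (y - sum(cj * x ** j for j, cj in enumerate(coef)))) ** 2
                for x, y, w in zip(xs, ys, ws))
    return math.sqrt(sigma / (m - degree - 1)) if m > n else 0.0

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [1.0 + 2.0 * x for x in xs]   # exactly linear data
ws = [1.0] * len(xs)
print(fit_rms(xs, ys, ws, 1))      # ~0: degree 1 already fits the data
```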
 A useful feature of the routine is that it can be used in
 applications which require the continuity to be less than the
 normal continuity of the cubic spline. For example, the fit may
 be required to have a discontinuous slope at some point in the
 range. This can be achieved by placing three coincident knots at
 the given point. Similarly a discontinuity in the second
 derivative at a point can be achieved by placing two knots there.
 Analogy with these discontinuous cases can provide guidance in
 more usual cases: for example, just as three coincident knots can
 produce a discontinuity in slope, so three close knots can
 produce a rapid change in slope. The closer the knots are, the
 more rapid can the change be.
+ Subsequent evaluation of the Chebyshev-series representations of
+ the polynomial approximations should be carried out using E02AEF.
 Figure 1
 Please see figure in printed Reference Manual
+ 4. References
 An example set of data is given in Figure 1. It is a rather
 tricky set, because of the scarcity of data on the right, but it
 will serve to illustrate some of the above points and to show
 some of the dangers to be avoided. Three interior knots
 (indicated by the vertical lines at the top of the diagram) are
 chosen as a start. We see that the resulting curve is not steep
 enough in the middle and fluctuates at both ends, severely on the
 right. The spline is unable to cope with the shape and more knots
 are needed.
+ [1] Clenshaw C W (1960) Curve Fitting with a Digital Computer.
+ Comput. J. 2 170-173.
 In Figure 2, three knots have been added in the centre, where the
 data shows a rapid change in behaviour, and one further out at
 each end, where the fit is poor. The fit is still poor, so a
 further knot is added in this region and, in Figure 3, disaster
 ensues in rather spectacular fashion.
+ [2] Cox M G (1974) A Data-fitting Package for the Non-specialist
+ User. Software for Numerical Mathematics. (ed D J Evans)
+ Academic Press.
 Figure 2
 Please see figure in printed Reference Manual
+ [3] Cox M G and Hayes J G (1973) Curve fitting: a guide and
+ suite of algorithms for the non-specialist user. Report
+ NAC 26. National Physical Laboratory.
 Figure 3
 Please see figure in printed Reference Manual
+ [4] Forsythe G E (1957) Generation and use of orthogonal
+ polynomials for data fitting with a digital computer. J.
+ Soc. Indust. Appl. Math. 5 74-88.
 The reason is that, at the righthand end, the fits in Figure 1
 and Figure 2 have been interpreted as poor simply because of the
 fluctuations about the curve underlying the data (or what it is
 naturally assumed to be). But the fitting process knows only
 about the data and nothing else about the underlying curve, so it
 is important to consider only closeness to the data when deciding
 goodness of fit.
+ [5] Gentleman W M (1969) An Error Analysis of Goertzel's
+ (Watt's) Method for Computing Fourier Coefficients. Comput.
+ J. 12 160-165.
 Thus, in Figure 1, the curve fits the last two data points quite
 well compared with the fit elsewhere, so no knot should have been
 added in this region. In Figure 2, the curve goes exactly through
 the last two points, so a further knot is certainly not needed
 here.
+ [6] Hayes J G (1970) Curve Fitting by Polynomials in One
+ Variable. Numerical Approximation to Functions and Data. (ed
+ J G Hayes) Athlone Press, London.
+ 5. Parameters
 Figure 4
 Please see figure in printed Reference Manual
+ 1: M  INTEGER Input
+ On entry: the number m of data points. Constraint: M >=
+ MDIST >= 2, where MDIST is the number of distinct x values
+ in the data.
 Figure 4 shows what can be achieved without the extra knot on
 each of the flat regions. Remembering that within each knot
 interval the spline is a cubic polynomial, there is really no
 need to have more than one knot interval covering each flat
 region.
+ 2: KPLUS1  INTEGER Input
+ On entry: k+1, where k is the maximum degree required.
+ Constraint: 0 < KPLUS1 <= MDIST, where MDIST is the number
+ of distinct x values in the data.
 What we have, in fact, in Figure 2 and Figure 3 is a case of too
 many knots (so too many coefficients in the spline equation) for
 the number of data points. The warning in the second paragraph of
 Section 2.1 was that the fit will then be too close to the data,
 tending to have unwanted fluctuations between the data points.
 The warning applies locally for splines, in the sense that, in
 localities where there are plenty of data points, there can be a
 lot of knots, as long as there are few knots where there are few
 points, especially near the ends of the interval. In the present
 example, with so few data points on the right, just the one extra
 knot in Figure 2 is too many! The signs are clearly present, with
 the last two points fitted exactly (at least to the graphical
 accuracy and actually much closer than that) and fluctuations
 within the last two knot-intervals (cf. Figure 1, where only the
 final point is fitted exactly and one of the wobbles spans
 several data points).
+ 3: NROWS  INTEGER Input
+ On entry:
+ the first dimension of the array A as declared in the
+ (sub)program from which E02ADF is called.
+ Constraint: NROWS >= KPLUS1.
 The situation in Figure 3 is different. The fit, if computed
 exactly, would still pass through the last two data points, with
 even more violent fluctuations. However, the problem has become
 so illconditioned that all accuracy has been lost. Indeed, if
 the last interior knot were moved a tiny amount to the right,
 there would be no unique solution and an error message would have
 been caused. Near-singularity is, sadly, not picked up by the
 routine, but can be spotted readily in a graph, as in Figure 3.
 B-spline coefficients becoming large, with alternating signs, is
 another indication. However, it is better to avoid such
 situations, firstly by providing, whenever possible, data
 adequately covering the range of interest, and secondly by
 placing knots only where there is a reasonable amount of data.
+ 4: X(M)  DOUBLE PRECISION array Input
+ On entry: the values x_r of the independent variable, for
+ r=1,2,...,m. Constraint: the values must be supplied in
+ non-decreasing order with X(M) > X(1).
 The example here could, in fact, have utilised from the start the
 observation made in the second paragraph of this section, that
 three close knots can produce a rapid change in slope. The
 example has two such rapid changes and so requires two sets of
 three close knots (in fact, the two sets can be so close that one
 knot can serve in both sets, so only five knots prove sufficient
 in Figure 4). It should be noted, however, that the rapid turn
 occurs within the range spanned by the three knots. This is the
 reason that the six knots in Figure 2 are not satisfactory as
 they do not quite span the two turns.
+ 5: Y(M)  DOUBLE PRECISION array Input
+ On entry: the values y_r of the dependent variable, for
+ r=1,2,...,m.
 Some more examples to illustrate the choice of knots are given in
 Cox and Hayes [1].
+ 6: W(M)  DOUBLE PRECISION array Input
+ On entry: the set of weights, w_r, for r=1,2,...,m. For
+ advice on the choice of weights, see Section 2.1.2 of the
+ Chapter Introduction. Constraint: W(r) > 0.0, for
+ r=1,2,...,m.
 3.3.2. Automatic fitting with cubic splines
+ 7: WORK1(3*M)  DOUBLE PRECISION array Workspace
 E02BEF also fits cubic splines to arbitrary data points with
 arbitrary weights but itself chooses the number and positions of
 the knots. The user has to supply only a threshold for the sum of
 squares of residuals. The routine first builds up a knot set by a
 series of trial fits in the l_2 norm. Then, with the knot set
 decided, the final spline is computed to minimize a certain
 smoothing measure subject to satisfaction of the chosen
 threshold. Thus it is easier to use than E02BAF (see previous
 section), requiring only some experimentation with this
 threshold. It should therefore be first choice unless the user
 has a preference for the ordinary least-squares fit or, for
 example, wishes to experiment with knot positions, trying to keep
 their number down (E02BEF aims only to be reasonably frugal with
 knots).
+ 8: WORK2(2*KPLUS1)  DOUBLE PRECISION array Workspace
 3.4. Spline Surfaces
+ 9: A(NROWS,KPLUS1)  DOUBLE PRECISION array Output
+
 3.4.1. Least-squares bicubic splines
+ On exit: the coefficients of T_j(xbar) in the approximating
+ polynomial of degree i. A(i+1,j+1) contains the coefficient
+ a(i+1,j+1), for i=0,1,...,k; j=0,1,...,i.
 E02DAF fits to arbitrary data points, with arbitrary weights, a
 bicubic spline with its two sets of interior knots specified by
 the user. For choosing these knots, the advice given for cubic
 splines, in Section 3.3.1 above, applies here too. (See also the
 next section, however.) If changes in the behaviour of the
 surface underlying the data are more marked in the direction of
 one variable than of the other, more knots will be needed for the
 former variable than the latter. Note also that, in the surface
 case, the reduction in continuity caused by coincident knots will
 extend across the whole spline surface: for example, if three
 knots associated with the variable x are chosen to coincide at a
 value L, the spline surface will have a discontinuous slope
 across the whole extent of the line x=L.
+ 10: S(KPLUS1)  DOUBLE PRECISION array Output
+ On exit: S(i+1) contains the root mean square residual s_i,
+ for i=0,1,...,k, as described in Section 3. For the
+ interpretation of the values of the s_i and their use in
+ selecting an appropriate degree, see Section 3.1 of the
+ Chapter Introduction.
 With some sets of data and some choices of knots, the
 least-squares bicubic spline will not be unique. This will not
 occur,
 with a reasonable choice of knots, if the rectangle R is well
 covered with data points: here R is defined as the smallest
 rectangle in the (x,y) plane, with sides parallel to the axes,
 which contains all the data points. Where the least-squares
 solution is not unique, the minimal least-squares solution is
 computed, namely that least-squares solution which has the
 smallest value of the sum of squares of the B-spline coefficients
 c_ij (see the end of Section 2.3.2 above). This choice of
 least-squares solution tends to minimize the risk of unwanted
 fluctuations in the fit. The fit will not be reliable, however,
 in regions where there are few or no data points.
+ 11: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, 1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 3.4.2. Automatic fitting with bicubic splines
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
 E02DDF also fits bicubic splines to arbitrary data points with
 arbitrary weights but chooses the knot sets itself. The user has
 to supply only a threshold for the sum of squares of residuals.
 Just like the automatic curve E02BEF (Section 3.3.2), E02DDF then
 builds up the knot sets and finally fits a spline minimizing a
 smoothing measure subject to satisfaction of the threshold.
 Again, this easier to use routine is normally to be preferred, at
 least in the first instance.
+ 6. Error Indicators and Warnings
 E02DCF is a very similar routine to E02DDF but deals with data
 points of equal weight which lie on a rectangular mesh in the
 (x,y) plane. This kind of data allows a very much faster
 computation and so is to be preferred when applicable.
 Substantial departures from equal weighting can be ignored if the
 user is not concerned with statistical questions, though the
 quality of the fit will suffer if this is taken too far. In such
 cases, the user should revert to E02DDF.
+ Errors detected by the routine:
 3.5. General Linear and Nonlinear Fitting Functions
+ IFAIL= 1
+ The weights are not all strictly positive.
 3.5.1. General linear functions
+ IFAIL= 2
+ The values of X(r), for r=1,2,...,M, are not in
+ non-decreasing order.
 For the general linear function (15), routines are available for
 fitting in the l_1 and l_2 norms. The least-squares routines
 (which are to be preferred unless there is good reason to use
 another norm - see Section 2.1.1) are in Chapter F04. The l_1
 routine is E02GAF.
+ IFAIL= 3
+ All X(r) have the same value: thus the normalisation of X is
+ not possible.
+
+ IFAIL= 4
+ On entry KPLUS1 < 1 (so the maximum degree required is
+ negative)
+
+ or KPLUS1 > MDIST, where MDIST is the number of
+ distinct x values in the data (so there cannot be a
+ unique solution for degree k = KPLUS1-1).
+
+ IFAIL= 5
+ NROWS < KPLUS1.
 All the above routines are essentially linear algebra routines,
 and in considering their use we need to view the fitting process
 in a slightly different way from hitherto. Taking y to be the
 dependent variable and x the vector of independent variables, we
 have, as for equation (1) but with each x_r now a vector,
+ 7. Accuracy
 (epsilon)_r = y_r - f(x_r), r=1,2,...,m.
+ No error analysis for the method has been published. Practical
+ experience with the method, however, is generally extremely
+ satisfactory.
 Substituting for f(x) the general linear form (15), we can write
 this as
+ 8. Further Comments
 c_1(phi)_1(x_r) + c_2(phi)_2(x_r) + ... + c_p(phi)_p(x_r)
 = y_r - (epsilon)_r, r=1,2,...,m (19)
+ The time taken by the routine is approximately proportional to
+ m(k+1)(k+11).
+ The approximating polynomials may exhibit undesirable
+ oscillations (particularly near the ends of the range) if the
+ maximum degree k exceeds a critical value which depends on the
+ number of data points m and their relative positions. As a rough
+ guide, for equally-spaced data, this critical value is about
+
 Thus we have a system of linear equations in the coefficients
 c_j. Usually, in writing these equations, the (epsilon)_r are
 omitted and simply taken as implied. The system of equations is
 then described as an overdetermined system (since we must have
 m>=p if there is to be the possibility of a unique solution to
 our fitting problem), and the fitting process of computing the
 c_j to minimize one or other of the norms (2), (3) and (4) can
 be described, in relation to the system of equations, as solving
 the overdetermined system in that particular norm. In matrix
 notation, the system can be written as
+ 2*sqrt(m). For further details see Hayes [6], page 60.
 (Phi)c=y, (20)
+ 9. Example
 where (Phi) is the m by p matrix whose element in row r and
 column j is (phi)_j(x_r), for r=1,2,...,m; j=1,2,...,p. The
 vectors c and y respectively contain the coefficients c_j and
 the data values y_r.
+ Determine weighted least-squares polynomial approximations of
+ degrees 0, 1, 2 and 3 to a set of 11 prescribed data points. For
+ the approximation of degree 3, tabulate the data and the
+ corresponding values of the approximating polynomial, together
+ with the residual errors, and also the values of the
+ approximating polynomial at points halfway between each pair of
+ adjacent data points.
 The routines, however, use the standard notation of linear
 algebra, the overdetermined system of equations being denoted by
+ The example program supplied is written in a general form that
+ will enable polynomial approximations of degrees 0,1,...,k to be
+ obtained to m data points, with arbitrary positive weights, and
+ the approximation of degree k to be tabulated. E02AEF is used to
+ evaluate the approximating polynomial. The program is
+ self-starting in that any number of data sets can be supplied.
 Ax=b (21)
+ The example program is not reproduced here. The source code for
+ all example programs is distributed with the NAG Foundation
+ Library software and should be available online.
 The correspondence between this notation and that which we have
 used for the datafitting problem (equation (20)) is therefore
 given by
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 A==(Phi), x==c, b==y (22)
+ E02  Curve and Surface Fitting E02AEF
+ E02AEF  NAG Foundation Library Routine Document
 Note that the norms used by these routines are the unweighted
 norms (2) and (3). If the user wishes to apply weights to the
 data points, that is to use the norms (5) or (6), the
 equivalences (22) should be replaced by
+ Note: Before using this routine, please read the Users' Note for
+ your implementation to check implementationdependent details.
+ The symbol (*) after a NAG routine name denotes a routine that is
+ not included in the Foundation Library.
 A==D(Phi), x==c, b==Dy
+ 1. Purpose
 where D is a diagonal matrix with w_r as the rth diagonal
 element. Here w_r, for r=1,2,...,m, is the weight of the rth
 data point as defined in Section 2.1.2.
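The weighted correspondence amounts to scaling row r of (Phi) and element r of y by w_r, as the small sketch below shows. The basis functions phi_j and the data are invented for illustration; this is not library code.

```python
# Form A = D*(Phi) and b = D*y: scale row r of (Phi) and element r
# of y by the weight w_r of that data point.
phi = [lambda x: 1.0, lambda x: x]   # phi_1, phi_2: a straight line
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.1, 4.9, 7.0]
ws = [1.0, 2.0, 2.0, 1.0]            # weights w_r of Section 2.1.2

A = [[w * p(x) for p in phi] for x, w in zip(xs, ws)]  # row r: w_r*phi_j(x_r)
b = [w * y for y, w in zip(ys, ws)]                    # element r: w_r*y_r
print(A[1], b[1])  # row for the second data point, scaled by w_2 = 2
```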
+ E02AEF evaluates a polynomial from its Chebyshev-series
+ representation.
 3.5.2. Nonlinear functions
+ 2. Specification
 Routines for fitting with a nonlinear function in the l_2 norm
 are provided in Chapter E04, and that chapter's Introduction
 should be consulted for the appropriate choice of routine.
 Again, however, the notation adopted is different from that we
 have used for data fitting. In the latter, we denote the fitting
 function by f(x;c), where x is the vector of independent
 variables and c is the vector of coefficients, whose values are
 to be determined. The squared l_2 norm, to be minimized with
 respect to the elements of c, is then
+ SUBROUTINE E02AEF (NPLUS1, A, XCAP, P, IFAIL)
+ INTEGER NPLUS1, IFAIL
+ DOUBLE PRECISION A(NPLUS1), XCAP, P
 sum(r=1 to m) w_r^2 [y_r - f(x_r;c)]^2 (23)
+ 3. Description
 where y_r is the rth data value of the dependent variable, x_r
 is the vector containing the rth values of the independent
 variables, and w_r is the corresponding weight as defined in
 Section 2.1.2.
+ This routine evaluates the polynomial
 On the other hand, in the nonlinear leastsquares routines of
 Chapter E04, the function to be minimized is denoted by
+ (1/2)a_1*T_0(xbar) + a_2*T_1(xbar) + a_3*T_2(xbar) + ...
+ + a_(n+1)*T_n(xbar)
 sum(i=1 to m) f_i(x)^2, (24)
+
 the minimization being carried out with respect to the elements
 of the vector x. The correspondence between the two notations is
 given by
+ for any value of xbar satisfying -1<=xbar<=1. Here T_j(xbar)
+ denotes the Chebyshev polynomial of the first kind of degree j
+ with argument
 x==c and
+ xbar. The value of n is prescribed by the user.
 f_i(x)==w_r[y_r - f(x_r;c)], i=r=1,2,...,m.
+
 Note especially that the vector x of variables of the nonlinear
 least-squares routines is the vector c of coefficients of the
 data-fitting problem, and in particular that, if the selected
 routine requires derivatives of the f_i(x) to be provided, these
 are derivatives of w_r[y_r - f(x_r;c)] with respect to the
 coefficients of the data-fitting problem.
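The correspondence can be made concrete: the sketch below recasts a data-fitting model f(x;c) as the vector of residual functions an E04-style minimizer would receive. The exponential model and the data are invented for illustration.

```python
import math

# Data-fitting view: a model f(x; c) with coefficient vector c.
def model(x, c):
    return c[0] * math.exp(-c[1] * x)

xs = [0.0, 1.0, 2.0]
ys = [2.0, 2.0 * math.exp(-0.5), 2.0 * math.exp(-1.0)]
ws = [1.0, 1.0, 1.0]

# Optimizer's view: the variables are the coefficients, and each
# function is a weighted residual f_i(c) = w_r*(y_r - f(x_r; c)).
def residuals(c):
    return [w * (y - model(x, c)) for x, y, w in zip(xs, ys, ws)]

# With the exact coefficients the sum of squares (24) vanishes.
print(sum(f * f for f in residuals([2.0, 0.5])))
```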
+ In practice, the variable xbar will usually have been obtained
+ from an original variable x, where x_min <= x <= x_max and
 3.6. Constraints
+ xbar = ((x - x_min) - (x_max - x))/(x_max - x_min).
 At present, there are only a limited number of routines which fit
 subject to constraints. Chapter E04 contains a routine, E04UCF,
 which can be used for fitting with a nonlinear function in the l
 2
 norm subject to equality or inequality constraints. This routine,
 unlike those in that chapter suited to the unconstrained case, is
 not designed specifically for minimizing functions which are sums
 of squares, and so the function (23) has to be treated as a
 general nonlinear function. The E04 Chapter Introduction should
 be consulted.
+ Note that this form of the transformation should be used
+ computationally rather than the mathematical equivalent
 The remaining constraint routine relates to fitting with
 polynomials in the l norm. E02AGF deals with polynomial curves
 2
 and allows precise values of the fitting function and (if
 required) all its derivatives up to a given order to be
 prescribed at one or more values of the independent variable.
+ xbar = (2x - x_min - x_max)/(x_max - x_min)
 3.7. Evaluation, Differentiation and Integration
+
 Routines are available to evaluate, differentiate and integrate
 polynomials in Chebyshevseries form and cubic or bicubic splines
 in Bspline form. These polynomials and splines may have been
 produced by the various fitting routines or, in the case of
 polynomials, from prior calls of the differentiation and
 integration routines themselves.
+ since the former guarantees that the computed value of xbar
+ differs from its true value by at most 4(epsilon), where
+ (epsilon) is the machine precision, whereas the latter has no
+ such guarantee.
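The two algebraically equivalent forms can be compared directly; in particular, the recommended form yields exactly -1 and +1 at the endpoints. An illustrative sketch, not library code:

```python
def xbar_recommended(x, xmin, xmax):
    # ((x - xmin) - (xmax - x))/(xmax - xmin): the form with the
    # 4*epsilon error bound quoted above
    return ((x - xmin) - (xmax - x)) / (xmax - xmin)

def xbar_naive(x, xmin, xmax):
    # (2x - xmin - xmax)/(xmax - xmin): mathematically equivalent,
    # but with no comparable error guarantee
    return (2.0 * x - xmin - xmax) / (xmax - xmin)

xmin, xmax = 1.0, 3.0
for x in (xmin, 2.0, xmax):
    print(xbar_recommended(x, xmin, xmax), xbar_naive(x, xmin, xmax))
```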
 E02AEF and E02AKF evaluate polynomial curves: the latter has a
 longer parameter list but does not require the user to normalise
 the values of the independent variable and can accept
 coefficients which are not stored in contiguous locations. E02BBF
 evaluates cubic spline curves, and E02DEF and E02DFF bicubic
 spline surfaces.
+ The method employed is based upon the threeterm recurrence
+ relation due to Clenshaw [1], with modifications to give greater
+ numerical stability due to Reinsch and Gentleman (see [4]).
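A plain version of Clenshaw's recurrence is easy to sketch; note that this omits the Reinsch/Gentleman stability modifications that E02AEF actually incorporates, so it is a sketch of the idea rather than of the routine.

```python
def chebeval(a, xbar):
    """Evaluate (1/2)a[0]*T_0(xbar) + a[1]*T_1(xbar) + ... +
    a[n]*T_n(xbar) by Clenshaw's three-term recurrence."""
    b1 = b2 = 0.0
    for c in reversed(a[1:]):           # coefficients a_n down to a_1
        b1, b2 = 2.0 * xbar * b1 - b2 + c, b1
    return xbar * b1 - b2 + 0.5 * a[0]

# Coefficients from the E02AEF example data.
a = [2.0, 0.5, 0.25, 0.125, 0.0625]
print(chebeval(a, 1.0))  # T_j(1) = 1 for all j, so this is 1.9375
```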
 Differentiation and integration of polynomial curves are carried
 out by E02AHF and E02AJF respectively. The results are provided
 in Chebyshevseries form and so repeated differentiation and
 integration are catered for. Values of the derivative or integral
 can then be computed using the appropriate evaluation routine.
+ For further details of the algorithm and its use see Cox [2] and
+ [3].
 For splines the differentiation and integration routines provided
 are of a different nature from those for polynomials. E02BCF
 provides values of a cubic spline curve and its first three
 derivatives (the rest, of course, are zero) at a given value of
 x. E02BDF computes the value of the definite integral of a cubic
 spline over its whole range. These routines can also be applied
 to surfaces of the form (14). For example, if, for each value of
 j in turn, the coefficients c_ij, for i=1,2,...,p, are supplied
 to E02BCF with x=x_0, and on each occasion we select from the
 output the value of the second derivative, d_j say, and if the
 whole set of d_j are then supplied to the same routine with
 x=y_0, the output will contain all the values at (x_0,y_0) of
+ 4. References
 d^2 f/dx^2 and d^(r+2) f/(dx^2 dy^r), r=1,2,3.
+ [1] Clenshaw C W (1955) A Note on the Summation of Chebyshev
+ Series. Math. Tables Aids Comput. 9 118-120.
 Equally, if after each of the first p calls of E02BCF we had
 selected the function value (E02BBF would also provide this)
 instead of the second derivative and we had supplied these values
 to E02BDF, the result obtained would have been the value of
+ [2] Cox M G (1974) A Data-fitting Package for the Non-specialist
+ User. Software for Numerical Mathematics. (ed D J Evans)
+ Academic Press.
 B
 /
 f(x ,y)dy,
 / 0
 A
+ [3] Cox M G and Hayes J G (1973) Curve fitting: a guide and
+ suite of algorithms for the non-specialist user. Report
+ NAC 26. National Physical Laboratory.
 where A and B are the endpoints of the y interval over which the
 spline was defined.
+ [4] Gentleman W M (1969) An Error Analysis of Goertzel's
+ (Watt's) Method for Computing Fourier Coefficients. Comput.
+ J. 12 160-165.
+
+ 5. Parameters
 3.8. Index
+ 1: NPLUS1  INTEGER Input
+ On entry: the number n+1 of terms in the series (i.e., one
+ greater than the degree of the polynomial). Constraint:
+ NPLUS1 >= 1.
 Automatic fitting,
 with bicubic splines E02DCF
 E02DDF
 with cubic splines E02BEF
 Data on rectangular mesh E02DCF
 Differentiation,
 of cubic splines E02BCF
 of polynomials E02AHF
 Evaluation,
 of bicubic splines E02DEF
 E02DFF
 of cubic splines E02BBF
 of cubic splines and derivatives E02BCF
 of definite integral of cubic splines E02BDF
 of polynomials E02AEF
 E02AKF
 Integration,
 of cubic splines (definite integral) E02BDF
 of polynomials E02AJF
 Least-squares curve fit,
 with cubic splines E02BAF
 with polynomials,
 arbitrary data points E02ADF
 with constraints E02AGF
 Least-squares surface fit with bicubic splines E02DAF
 l_1 fit with general linear function E02GAF
 Sorting,
 2-D data into panels E02ZAF
+ 2: A(NPLUS1)  DOUBLE PRECISION array Input
+ On entry: A(i) must be set to the value of the ith
+ coefficient in the series, for i=1,2,...,n+1.
+ 3: XCAP  DOUBLE PRECISION Input
+
 E02  Curve and Surface Fitting Contents  E02
 Chapter E02
+ On entry: xbar, the argument at which the polynomial is to
+ be evaluated. It should lie in the range -1 to +1, but a
+ value just outside this range is permitted (see Section 6)
+ to allow for possible rounding errors committed in the
+
 Curve and Surface Fitting
+ transformation from x to xbar discussed in Section 3.
+ Provided the recommended form of the transformation is used,
+ a successful exit is thus assured whenever the value of x
+ lies in the range x_min to x_max.
 E02ADF Least-squares curve fit, by polynomials, arbitrary data
 points
+ 4: P  DOUBLE PRECISION Output
+ On exit: the value of the polynomial.
 E02AEF Evaluation of fitted polynomial in one variable from
 Chebyshev series form (simplified parameter list)
+ 5: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, 1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 E02AGF Least-squares polynomial fit, values and derivatives may
 be constrained, arbitrary data points,
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
 E02AHF Derivative of fitted polynomial in Chebyshev series form
+ 6. Error Indicators and Warnings
 E02AJF Integral of fitted polynomial in Chebyshev series form
+ Errors detected by the routine:
 E02AKF Evaluation of fitted polynomial in one variable, from
 Chebyshev series form
+ IFAIL= 1
+ ABS(XCAP) > 1.0 + 4(epsilon), where (epsilon) is the
+ machine precision. In this case the value of P is set
+ arbitrarily to zero.
 E02BAF Least-squares curve cubic spline fit (including
 interpolation)
+ IFAIL= 2
+ On entry NPLUS1 < 1.
 E02BBF Evaluation of fitted cubic spline, function only
+ 7. Accuracy
 E02BCF Evaluation of fitted cubic spline, function and
 derivatives
+ The rounding errors committed are such that the computed value of
+ the polynomial is exact for a slightly perturbed set of
+ coefficients a_i + (delta)a_i. The ratio of the sum of the
+ absolute values of the (delta)a_i to the sum of the absolute
+ values of the a_i is less than a small multiple of (n+1) times
+ the machine precision.
 E02BDF Evaluation of fitted cubic spline, definite integral
+ 8. Further Comments
 E02BEF Least-squares cubic spline curve fit, automatic knot
 placement
+ The time taken by the routine is approximately proportional to
+ n+1.
 E02DAF Least-squares surface fit, bicubic splines
+ It is expected that a common use of E02AEF will be the evaluation
+ of the polynomial approximations produced by E02ADF and
+ E02AFF(*).
 E02DCF Least-squares surface fit by bicubic splines with
 automatic knot placement, data on rectangular grid
+ 9. Example
 E02DDF Least-squares surface fit by bicubic splines with
 automatic knot placement, scattered data
+
 E02DEF Evaluation of a fitted bicubic spline at a vector of
 points
+ Evaluate at 11 equally-spaced points in the interval
+ -1<=xbar<=1 the polynomial of degree 4 with Chebyshev
+ coefficients 2.0, 0.5, 0.25, 0.125, 0.0625.
 E02DFF Evaluation of a fitted bicubic spline at a mesh of points
+ The example program is written in a general form that will enable
+ a polynomial of degree n in its Chebyshev-series form to be
+
 E02GAF l_1 approximation by general linear function
+ evaluated at m equally-spaced points in the interval
+ -1<=xbar<=1. The program is self-starting in that any number
+ of data sets can be supplied.
 E02ZAF Sort 2D data into panels for fitting bicubic splines
+ The example program is not reproduced here. The source code for
+ all example programs is distributed with the NAG Foundation
+ Library software and should be available online.
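Since the program is not reproduced, the computation it performs can be sketched directly from the description above. For simplicity the sketch uses the direct definition T_j(xbar) = cos(j*arccos(xbar)); Clenshaw's recurrence, as used by E02AEF itself, is the preferred evaluation method.

```python
import math

def cheb_series(a, xbar):
    # Direct trigonometric definition of the Chebyshev polynomials;
    # adequate for a small check, though numerically Clenshaw's
    # recurrence is preferred.
    t = math.acos(max(-1.0, min(1.0, xbar)))
    return 0.5 * a[0] + sum(c * math.cos(j * t)
                            for j, c in enumerate(a) if j > 0)

a = [2.0, 0.5, 0.25, 0.125, 0.0625]
m = 11
for i in range(m):
    xbar = -1.0 + 2.0 * i / (m - 1)
    print(f"{xbar:6.2f}  {cheb_series(a, xbar):10.6f}")
```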
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02  Curve and Surface Fitting E02ADF
 E02ADF  NAG Foundation Library Routine Document
+ E02  Curve and Surface Fitting E02AGF
+ E02AGF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementationdependent details.
@@ 84871,36 +86860,53 @@ have been determined.
1. Purpose
 E02ADF computes weighted least-squares polynomial approximations
 to an arbitrary set of data points.
+ E02AGF computes constrained weighted least-squares polynomial
+ approximations in Chebyshev-series form to an arbitrary set of
+ data points. The values of the approximations and any number of
+ their derivatives can be specified at selected points.
2. Specification
 SUBROUTINE E02ADF (M, KPLUS1, NROWS, X, Y, W, WORK1,
 1 WORK2, A, S, IFAIL)
 INTEGER M, KPLUS1, NROWS, IFAIL
 DOUBLE PRECISION X(M), Y(M), W(M), WORK1(3*M), WORK2
 1 (2*KPLUS1), A(NROWS,KPLUS1), S(KPLUS1)
+ SUBROUTINE E02AGF (M, KPLUS1, NROWS, XMIN, XMAX, X, Y, W,
+ 1 MF, XF, YF, LYF, IP, A, S, NP1, WRK,
+ 2 LWRK, IWRK, LIWRK, IFAIL)
+ INTEGER M, KPLUS1, NROWS, MF, LYF, IP(MF), NP1,
+ 1 LWRK, IWRK(LIWRK), LIWRK, IFAIL
+ DOUBLE PRECISION XMIN, XMAX, X(M), Y(M), W(M), XF(MF), YF
+ 1 (LYF), A(NROWS,KPLUS1), S(KPLUS1), WRK
+ 2 (LWRK)
3. Description
This routine determines least-squares polynomial approximations
 of degrees 0,1,...,k to the set of data points (x_r,y_r) with
 weights w_r, for r=1,2,...,m.
+ of degrees up to k to the set of data points (x_r,y_r) with
+ weights w_r, for r=1,2,...,m. The value of k, the maximum degree
+ required, is prescribed by the user. At each of the values XF_r,
+ for r=1,2,...,MF, of the independent variable x, the
+ approximations and their derivatives up to order p_r are
+ constrained to have one of the user-specified values YF_s, for
+ s=1,2,...,n, where
+
+ n = MF + sum(r=1 to MF) p_r.
 The approximation of degree i has the property that it minimizes
 (sigma) the sum of squares of the weighted residuals (epsilon) ,
 i r
+ The approximation of degree i has the property that, subject to
+ the imposed constraints, it minimizes (Sigma) , the sum of the
+                                              i
+ squares of the weighted residuals (epsilon) , for r=1,2,...,m,
+                                            r
where
 (epsilon) =w (y f )
 r r r r
+               (epsilon) =w (y -f (x ))
+                        r   r  r  i  r
 and f is the value of the polynomial of degree i at the rth data
 r
 point.
+ and f (x ) is the value of the polynomial approximation of degree
+ i r
+ i at the rth data point.
Each polynomial is represented in Chebyshevseries form with
@@ 84909,142 +86915,225 @@ have been determined.
and is related to the original variable x by the linear
transformation
 (2xx x )
 max min
 x= .
 (x x )
 max min
+                       2x-(x   +x   )
+                            max  min
+                  x =  --------------
+                         x   -x
+                          max  min
 Here x and x are respectively the largest and smallest
 max min
 values of x . The polynomial approximation of degree i is
 r
 represented as
+ where x and x , specified by the user, are respectively the
+ min max
+ lower and upper endpoints of the interval of x over which the
+ polynomials are to be defined.
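As an illustration of the linear transformation above, the following Python
sketch (illustrative only, not part of the Library; the function name is
invented here) maps the user's variable onto the normalised variable:

```python
def normalise(x, xmin, xmax):
    # Map x in [xmin, xmax] to the normalised variable in [-1, 1]:
    #   xbar = (2x - (xmax + xmin)) / (xmax - xmin)
    return (2.0 * x - (xmax + xmin)) / (xmax - xmin)
```

The endpoints xmin and xmax map to -1 and +1 respectively, and the midpoint
of the interval maps to 0.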
 1
 a T (x)+a T (x)+a T (x)+...+a T (x),
 2 i+1,1 0 i+1,2 1 i+1,3 2 i+1,i+1 i
+ The polynomial approximation of degree i can be written as
+
+            1
+            -a   +a   T (x)+...+a  T (x)+...+a  T (x)
+            2 i,0  i,1 1         ij j         ii i
where T (x) is the Chebyshev polynomial of the first kind of
j

+
 degree j with argument (x).
+ degree j with argument x. For i=n,n+1,...,k, the routine produces
+ the values of the coefficients a , for j=0,1,...,i, together
+ ij
+ with the value of the root mean square residual, S , defined as
+ i
+
 For i=0,1,...,k, the routine produces the values of a , for
 i+1,j+1
 j=0,1,...,i, together with the value of the root mean square

+                  ______________
+                 /   (Sigma)
+                /           i
+               /  -----------  ,  where m' is the number of data
+             \/   (m'+n-i-1)
+
+ points with nonzero weight.
 / (sigma)
 / i
 residual s = / . In the case m=i+1 the routine sets
 i \/ mi1
 the value of s to zero.
 i
+ Values of the approximations may subsequently be computed using
+ E02AEF or E02AKF.
 The method employed is due to Forsythe [4] and is based upon the
 generation of a set of polynomials orthogonal with respect to
 summation over the normalised data set. The extensions due to
 Clenshaw [1] to represent these polynomials as well as the
 approximating polynomials in their Chebyshevseries forms are
 incorporated. The modifications suggested by Reinsch and
 Gentleman (see [5]) to the method originally employed by Clenshaw
 for evaluating the orthogonal polynomials from their Chebyshev
 series representations are used to give greater numerical
 stability.
+
+
+ First E02AGF determines a polynomial (mu)(x), of degree n1,
+ which satisfies the given constraints, and a polynomial (nu)(x),
+ of degree n, which has value (or derivative) zero wherever a
+ constrained value (or derivative) is specified. It then fits
+ y -(mu)(x ), for r=1,2,...,m with polynomials of the required
+  r       r
+
 For further details of the algorithm and its use see Cox [2] and
 [3].
+ degree in x each with factor (nu)(x). Finally the coefficients of
+
 Subsequent evaluation of the Chebyshevseries representations of
 the polynomial approximations should be carried out using E02AEF.
+ (mu)(x) are added to the coefficients of these fits to give the
+ coefficients of the constrained polynomial approximations to the
+ data points (x ,y ), for r=1,2,...,m. The method employed is
+ r r
+ given in Hayes [3]: it is an extension of Forsythe's orthogonal
+ polynomials method [2] as modified by Clenshaw [1].
4. References
[1] Clenshaw C W (1960) Curve Fitting with a Digital Computer.
Comput. J. 2 170-173.
 [2] Cox M G (1974) A Datafitting Package for the Nonspecialist
 User. Software for Numerical Mathematics. (ed D J Evans)
 Academic Press.

 [3] Cox M G and Hayes J G (1973) Curve fitting: a guide and
 suite of algorithms for the nonspecialist user. Report
 NAC26. National Physical Laboratory.

 [4] Forsythe G E (1957) Generation and use of orthogonal
+ [2] Forsythe G E (1957) Generation and use of orthogonal
polynomials for data fitting with a digital computer. J.
Soc. Indust. Appl. Math. 5 74-88.
 [5] Gentlemen W M (1969) An Error Analysis of Goertzel's
 (Watt's) Method for Computing Fourier Coefficients. Comput.
 J. 12 160165.

 [6] Hayes J G (1970) Curve Fitting by Polynomials in One
+ [3] Hayes J G (1970) Curve Fitting by Polynomials in One
Variable. Numerical Approximation to Functions and Data. (ed
J G Hayes) Athlone Press, London.
5. Parameters
1: M  INTEGER Input
 On entry: the number m of data points. Constraint: M >=
 MDIST >= 2, where MDIST is the number of distinct x values
 in the data.
+ On entry: the number m of data points to be fitted.
+ Constraint: M >= 1.
2: KPLUS1  INTEGER Input
On entry: k+1, where k is the maximum degree required.
 Constraint: 0 < KPLUS1 <= MDIST, where MDIST is the number
 of distinct x values in the data.
+ Constraint: n+1<=KPLUS1<=m''+n, where n is the total number
+ of constraints and m'' is the number of data points with
+ nonzero weights and distinct abscissae which do not
+ coincide with any of the XF(r).
3: NROWS  INTEGER Input
On entry:
the first dimension of the array A as declared in the
 (sub)program from which E02ADF is called.
+ (sub)program from which E02AGF is called.
Constraint: NROWS >= KPLUS1.
 4: X(M)  DOUBLE PRECISION array Input
 On entry: the values x of the independent variable, for
 r
 r=1,2,...,m. Constraint: the values must be supplied in non
 decreasing order with X(M) > X(1).
+ 4: XMIN  DOUBLE PRECISION Input
 5: Y(M)  DOUBLE PRECISION array Input
 On entry: the values y of the dependent variable, for
 r
 r=1,2,...,m.
+ 5: XMAX  DOUBLE PRECISION Input
+ On entry: the lower and upper endpoints, respectively, of
+ the interval [x ,x ]. Unless there are specific reasons
+ min max
+ to the contrary, it is recommended that XMIN and XMAX be set
+ respectively to the lowest and highest value among the x
+ r
+ and XF(r). This avoids the danger of extrapolation provided
+ there is a constraint point or data point with nonzero
+ weight at each endpoint. Constraint: XMAX > XMIN.
 6: W(M)  DOUBLE PRECISION array Input
 On entry: the set of weights, w , for r=1,2,...,m. For
 r
 advice on the choice of weights, see Section 2.1.2 of the
 Chapter Introduction. Constraint: W(r) > 0.0, for r=1,2,...m.
+ 6: X(M)  DOUBLE PRECISION array Input
+ On entry: the value x of the independent variable at the r
+ r
+ th data point, for r=1,2,...,m. Constraint: the X(r) must be
+ in nondecreasing order and satisfy XMIN <= X(r) <= XMAX.
 7: WORK1(3*M)  DOUBLE PRECISION array Workspace
+ 7: Y(M)  DOUBLE PRECISION array Input
+ On entry: Y(r) must contain y , the value of the dependent
+ r
+ variable at the rth data point, for r=1,2,...,m.
 8: WORK2(2*KPLUS1)  DOUBLE PRECISION array Workspace
+ 8: W(M)  DOUBLE PRECISION array Input
+ On entry: the weights w to be applied to the data points
+ r
+ x , for r=1,2,...,m. For advice on the choice of weights see
+ r
+ the Chapter Introduction. Negative weights are treated as
+ positive. A zero weight causes the corresponding data point
+ to be ignored. Zero weight should be given to any data point
+ whose x and y values both coincide with those of a
+ constraint (otherwise the denominators involved in the root-
+ mean-square residuals s  will be slightly in error).
+                        i
 9: A(NROWS,KPLUS1)  DOUBLE PRECISION array Output

+ 9: MF  INTEGER Input
+ On entry: the number of values of the independent variable
+ at which a constraint is specified. Constraint: MF >= 1.
 On exit: the coefficients of T (x) in the approximating
 j
 polynomial of degree i. A(i+1,j+1) contains the coefficient
 a , for i=0,1,...,k; j=0,1,...,i.
 i+1,j+1
+ 10: XF(MF)  DOUBLE PRECISION array Input
+ On entry: the rth value of the independent variable at
+ which a constraint is specified, for r = 1,2,...,MF.
+ Constraint: these values need not be ordered but must be
+ distinct and satisfy XMIN <= XF(r) <= XMAX.
 10: S(KPLUS1)  DOUBLE PRECISION array Output
 On exit: S(i+1) contains the root mean square residual s ,
 i
 for i=0,1,...,k, as described in Section 3. For the
 interpretation of the values of the s and their use in
 i
+ 11: YF(LYF)  DOUBLE PRECISION array Input
+ On entry: the values which the approximating polynomials
+ and their derivatives are required to take at the points
+ specified in XF. For each value of XF(r), YF contains in
+ successive elements the required value of the approximation,
+ its first derivative, second derivative,..., p th
+ r
+ derivative, for r = 1,2,...,MF. Thus the value which the kth
+ derivative of each approximation (k=0 referring to the
+ approximation itself) is required to take at the point XF(r)
+ must be contained in YF(s), where
+ s=r+k+p +p +...+p ,
+ 1 2 r1
+ for k=0,1,...,p and r = 1,2,...,MF. The derivatives are
+ r
+ with respect to the user's variable x.
+
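The packing rule for YF above can be checked with a small Python sketch
(illustrative only; `yf_index` is an invented name, and indices are 1-based
to match the Fortran convention):

```python
def yf_index(ip, r, k):
    # 1-based index s into YF of the k-th derivative value at XF(r),
    # following s = r + k + p1 + p2 + ... + p(r-1) from the text.
    # ip is the list [p1, ..., pMF] of highest derivative orders.
    assert 0 <= k <= ip[r - 1]
    return r + k + sum(ip[:r - 1])
```

For the three constraints of the Section 9 example (MF = 2, IP = (1,0)),
the packed order is: value and first derivative at XF(1), then value at
XF(2), occupying YF(1), YF(2) and YF(3).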
+ 12: LYF  INTEGER Input
+ On entry:
+ the dimension of the array YF as declared in the
+ (sub)program from which E02AGF is called.
+ Constraint: LYF>=n, where n=MF+p +p +...+p .
+ 1 2 MF
+
+ 13: IP(MF)  INTEGER array Input
+ On entry: IP(r) must contain p , the order of the highest
+ r
+ order derivative specified at XF(r), for r = 1,2,...,MF.
+ p =0 implies that the value of the approximation at XF(r) is
+ r
+ specified, but not that of any derivative. Constraint: IP(r)
+ >= 0, for r=1,2,...,MF.
+
+ 14: A(NROWS,KPLUS1)  DOUBLE PRECISION array Output
+ On exit: A(i+1,j+1) contains the coefficient a in the
+ ij
+ approximating polynomial of degree i, for i=n,n+1,...,k;
+ j=0,1,...,i.
+
+ 15: S(KPLUS1)  DOUBLE PRECISION array Output
+ On exit: S(i+1) contains s , for i=n,n+1,...,k, the root-
+                           i
+ mean-square residual corresponding to the approximating
+ polynomial of degree i. In the case where the number of data
+ points with nonzero weight is equal to k+1-n, s  is
+                                                i
+ indeterminate: the routine sets it to zero. For the
+ interpretation of the values of s and their use in
+ i
selecting an appropriate degree, see Section 3.1 of the
Chapter Introduction.
 11: IFAIL  INTEGER Input/Output
+ 16: NP1  INTEGER Output
+ On exit: n+1, where n is the total number of constraint
+ conditions imposed: n=MF+p +p +...+p .
+ 1 2 MF
+
+ 17: WRK(LWRK)  DOUBLE PRECISION array Output
+ On exit: WRK contains weighted residuals of the highest
+ degree of fit determined (k). The residual at x is in
+ element 2(n+1)+3(m+k+1)+r, for r=1,2,...,m. The rest of the
+ array is used as workspace.
+
+ 18: LWRK  INTEGER Input
+ On entry:
+ the dimension of the array WRK as declared in the
+ (sub)program from which E02AGF is called.
+ Constraint: LWRK>=max(4*M+3*KPLUS1, 8*n+5*IPMAX+MF+10)+2*n+2
+ , where IPMAX = max(IP(R)).
+
+ 19: IWRK(LIWRK)  INTEGER array Workspace
+
+ 20: LIWRK  INTEGER Input
+ On entry:
+ the dimension of the array IWRK as declared in the
+ (sub)program from which E02AGF is called.
+ Constraint: LIWRK>=2*MF+2.
+
+ 21: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, 1 or -1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ 85057,63 +87146,102 @@ have been determined.
Errors detected by the routine:
IFAIL= 1
 The weights are not all strictly positive.
+ On entry M < 1,
+
+ or KPLUS1 < n + 1,
+
+ or NROWS < KPLUS1,
+
+ or MF < 1,
+
+ or LYF < n,
+
+ or LWRK is too small (see Section 5),
+
+ or LIWRK<2*MF+2.
+ (Here n is the total number of constraint conditions.)
IFAIL= 2
 The values of X(r), for r=1,2,...,M are not in non
 decreasing order.
+ IP(r) < 0 for some r = 1,2,...,MF.
IFAIL= 3
 All X(r) have the same value: thus the normalisation of X is
 not possible.
+ XMIN >= XMAX, or XF(r) is not in the interval XMIN to XMAX
+ for some r = 1,2,...,MF, or the XF(r) are not distinct.
+
+ IFAIL= 4
+ X(r) is not in the interval XMIN to XMAX for some
+ r=1,2,...,M.
+
+ IFAIL= 5
+ X(r) < X(r1) for some r=2,3,...,M.
+
+ IFAIL= 6
+ KPLUS1>m''+n, where m'' is the number of data points with
+ nonzero weight and distinct abscissae which do not coincide
+ with any XF(r). Thus there is no unique solution.
+
+ IFAIL= 7
+ The polynomials (mu)(x) and/or (nu)(x) cannot be determined.
+ The problem supplied is too ill-conditioned. This may occur
+ when the constraint points are very close together, or large
+ in number, or when an attempt is made to constrain high-order
+ derivatives.
+
+ 7. Accuracy
+
+ No complete error analysis exists for either the interpolating
+ algorithm or the approximating algorithm. However, considerable
+ experience with the approximating algorithm shows that it is
+ generally extremely satisfactory. Also the moderate number of
+ constraints, of low order, which are typical of data fitting
+ applications, are unlikely to cause difficulty with the
+ interpolating routine.
+
+ 8. Further Comments
+
+ The time taken by the routine to form the interpolating
+                                              3
+ polynomial is approximately proportional to n , and that to form
+ the approximating polynomials is very approximately proportional
+ to m(k+1)(k+1-n).
+
+ To carry out a leastsquares polynomial fit without constraints,
+ use E02ADF. To carry out polynomial interpolation only, use
+ E01AEF(*).
+
+ 9. Example
 IFAIL= 4
 On entry KPLUS1 < 1 (so the maximum degree required is
 negative)
+ The example program reads data in the following order, using the
+ notation of the parameter list above:
 or KPLUS1 > MDIST, where MDIST is the number of
 distinct x values in the data (so there cannot be a
 unique solution for degree k=KPLUS11).
+ MF
 IFAIL= 5
 NROWS < KPLUS1.
+ IP(i), XF(i), Yvalue and derivative values (if any) at
+ XF(i), for i= 1,2,...,MF
 7. Accuracy
+ M
 No error analysis for the method has been published. Practical
 experience with the method, however, is generally extremely
 satisfactory.
+ X(i), Y(i), W(i), for i=1,2,...,M
 8. Further Comments
+ k, XMIN, XMAX
 The time taken by the routine is approximately proportional to
 m(k+1)(k+11).
+ The output is:
 The approximating polynomials may exhibit undesirable
 oscillations (particularly near the ends of the range) if the
 maximum degree k exceeds a critical value which depends on the
 number of data points m and their relative positions. As a rough
 guide, for equallyspaced data, this critical value is about

+ the rootmeansquare residual for each degree from n to k;
 2*\/m. For further details see Hayes [6] page 60.
+ the Chebyshev coefficients for the fit of degree k;
 9. Example
+ the data points, and the fitted values and residuals for
+ the fit of degree k.
 Determine weighted leastsquares polynomial approximations of
 degrees 0, 1, 2 and 3 to a set of 11 prescribed data points. For
 the approximation of degree 3, tabulate the data and the
 corresponding values of the approximating polynomial, together
 with the residual errors, and also the values of the
 approximating polynomial at points halfway between each pair of
 adjacent data points.
+ The program is written in a generalized form which will read any
+ number of data sets.
 The example program supplied is written in a general form that
 will enable polynomial approximations of degrees 0,1,...,k to be
 obtained to m data points, with arbitrary positive weights, and
 the approximation of degree k to be tabulated. E02AEF is used to
 evaluate the approximating polynomial. The program is self
 starting in that any number of data sets can be supplied.
+ The data set supplied specifies 5 data points in the interval
+ [0.0,4.0] with unit weights, to which are to be fitted
+ polynomials, p, of degrees up to 4, subject to the 3 constraints:
+
+ p(0.0)=1.0, p'(0.0)=2.0, p(4.0)=9.0.
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ 85121,8 +87249,8 @@ have been determined.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02  Curve and Surface Fitting E02AEF
 E02AEF  NAG Foundation Library Routine Document
+ E02  Curve and Surface Fitting E02AHF
+ E02AHF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ 85131,113 +87259,177 @@ have been determined.
1. Purpose
 E02AEF evaluates a polynomial from its Chebyshevseries
 representation.
+ E02AHF determines the coefficients in the Chebyshev-series
+ representation of the derivative of a polynomial given in
+ Chebyshev-series form.
2. Specification
 SUBROUTINE E02AEF (NPLUS1, A, XCAP, P, IFAIL)
 INTEGER NPLUS1, IFAIL
 DOUBLE PRECISION A(NPLUS1), XCAP, P
+ SUBROUTINE E02AHF (NP1, XMIN, XMAX, A, IA1, LA, PATM1,
+ 1 ADIF, IADIF1, LADIF, IFAIL)
+ INTEGER NP1, IA1, LA, IADIF1, LADIF, IFAIL
+ DOUBLE PRECISION XMIN, XMAX, A(LA), PATM1, ADIF(LADIF)
3. Description
 This routine evaluates the polynomial
+ This routine forms the polynomial which is the derivative of a
+ given polynomial. Both the original polynomial and its derivative
+ are represented in Chebyshevseries form. Given the coefficients
+ a , for i=0,1,...,n, of a polynomial p(x) of degree n, where
+ i
 1
 a T (x)+a T (x)+a T (x)+...+a T (x)
 2 1 0 2 1 3 2 n+1 n
+                    1
+             p(x) = -a +a T (x)+...+a T (x)
+                    2 0  1 1         n n

+
 for any value of x satisfying 1<=x<=1. Here T (x) denotes the
 j
 Chebyshev polynomial of the first kind of degree j with argument

+ the routine returns the coefficients a , for i=0,1,...,n-1, of
+                                       i
+ the polynomial q(x) of degree n-1, where
 x. The value of n is prescribed by the user.
+                     dp(x)   1
+              q(x) = ----- = -a +a T (x)+...+a   T   (x).
+                      dx     2 0  1 1         n-1 n-1

+
 In practice, the variable x will usually have been obtained from
 an original variable x, where x <=x<=x and
 min max
+ Here T (x) denotes the Chebyshev polynomial of the first kind of
+ j
+
 ((xx )(x x))
 min max
 x= 
 (x x )
 max min
+ degree j with argument x. It is assumed that the normalised
+
 Note that this form of the transformation should be used
 computationally rather than the mathematical equivalent
+ variable x in the interval [-1,+1] was obtained from the user's
+ original variable x in the interval [x   ,x   ] by the linear
+                                       min  max
+ transformation
 (2xx x )
 min max
+                       2x-(x   +x   )
+                            max  min
+                  x =  --------------
+                         x   -x
+                          max  min

+ and that the user requires the derivative to be with respect to
+
 since the former guarantees that the computed value of x differs
 from its true value by at most 4(epsilon), where (epsilon) is the
 machine precision, whereas the latter has no such guarantee.
+ the variable x. If the derivative with respect to x is required,
+ set x   =1 and x   =-1.
+      max        min
 The method employed is based upon the threeterm recurrence
 relation due to Clenshaw [1], with modifications to give greater
 numerical stability due to Reinsch and Gentleman (see [4]).
+ Values of the derivative can subsequently be computed, from the
+ coefficients obtained, by using E02AKF.
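E02AKF (like E02AEF) evaluates such a series by the Clenshaw three-term
recurrence. A minimal Python sketch of that evaluation, using the (1/2)a0
convention of this document (illustrative only, not NAG code):

```python
def cheb_eval(a, xbar):
    # Evaluate (1/2)a[0] + a[1]*T1(xbar) + ... + a[n]*Tn(xbar)
    # by the Clenshaw recurrence  b_k = a_k + 2*xbar*b_{k+1} - b_{k+2},
    # run for k = n, n-1, ..., 1; the result is xbar*b_1 - b_2 + (1/2)a[0].
    b1 = b2 = 0.0
    for ak in reversed(a[1:]):
        b1, b2 = 2.0 * xbar * b1 - b2 + ak, b1
    return xbar * b1 - b2 + 0.5 * a[0]
```

For example, the coefficient list [0, 0, 1] represents T2(xbar) = 2*xbar**2 - 1.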
 For further details of the algorithm and its use see Cox [2] and
 [3].
+ The method employed is that of [1] modified to obtain the
+
+
+ derivative with respect to x. Initially setting a   =a =0, the
+                                                  n+1  n
+ routine forms successively
+
+                          2
+      a    = a    + ------------ 2ia  ,    i=n,n-1,...,1.
+       i-1    i+1    x   -x         i
+                      max  min
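The recurrence above can be sketched in Python as follows (illustrative
only; the storage convention is assumed, with a[0] holding the coefficient
that appears as (1/2)a0 in the series):

```python
def cheb_derivative(a, xmin=-1.0, xmax=1.0):
    # Chebyshev coefficients of dp/dx from those of p, via the recurrence
    #   adif[i-1] = adif[i+1] + (2/(xmax-xmin)) * 2*i * a[i],  i = n,...,1,
    # starting from adif[n] = adif[n+1] = 0.
    n = len(a) - 1
    adif = [0.0] * (n + 2)
    scale = 2.0 / (xmax - xmin)
    for i in range(n, 0, -1):
        adif[i - 1] = adif[i + 1] + scale * 2.0 * i * a[i]
    # Keep n+1 entries; the last stays zero, as the routine document states
    # for element 1+n*IADIF1 of ADIF.
    return adif[:n + 1]
```

For example, differentiating T2 (coefficients [0, 0, 1]) on [-1, 1] gives
the series 4*T1, i.e. 4x.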
4. References
 [1] Clenshaw C W (1955) A Note on the Summation of Chebyshev
 Series. Math. Tables Aids Comput. 9 118120.
+ [1] Unknown (1961) Chebyshev-series. Modern Computing Methods,
+ Chapter 8. NPL Notes on Applied Science (2nd Edition). 16
+ HMSO.
 [2] Cox M G (1974) A Datafitting Package for the Nonspecialist
 User. Software for Numerical Mathematics. (ed D J Evans)
 Academic Press.
+ 5. Parameters
 [3] Cox M G and Hayes J G (1973) Curve fitting: a guide and
 suite of algorithms for the nonspecialist user. Report
 NAC26. National Physical Laboratory.
+ 1: NP1  INTEGER Input
+ On entry: n+1, where n is the degree of the given
+ polynomial p(x). Thus NP1 is the number of coefficients in
+ this polynomial. Constraint: NP1 >= 1.
 [4] Gentlemen W M (1969) An Error Analysis of Goertzel's
 (Watt's) Method for Computing Fourier Coefficients. Comput.
 J. 12 160165.
+ 2: XMIN  DOUBLE PRECISION Input
 5. Parameters
+ 3: XMAX  DOUBLE PRECISION Input
+ On entry: the lower and upper endpoints respectively of
+ the interval [x ,x ]. The Chebyshevseries
+ min max
+
 1: NPLUS1  INTEGER Input
 On entry: the number n+1 of terms in the series (i.e., one
 greater than the degree of the polynomial). Constraint:
 NPLUS1 >= 1.
+ representation is in terms of the normalised variable x,
+ where
+                       2x-(x   +x   )
+                            max  min
+                  x =  --------------  .
+                         x   -x
+                          max  min
+ Constraint: XMAX > XMIN.
 2: A(NPLUS1)  DOUBLE PRECISION array Input
 On entry: A(i) must be set to the value of the ith
 coefficient in the series, for i=1,2,...,n+1.
+ 4: A(LA)  DOUBLE PRECISION array Input
+ On entry: the Chebyshev coefficients of the polynomial p(x).
+ Specifically, element 1 + i*IA1 of A must contain the
+ coefficient a , for i=0,1,...,n. Only these n+1 elements
+ i
+ will be accessed.
 3: XCAP  DOUBLE PRECISION Input

+ Unchanged on exit, but see ADIF, below.
 On entry: x, the argument at which the polynomial is to be
 evaluated. It should lie in the range 1 to +1, but a value
 just outside this range is permitted (see Section 6) to
 allow for possible rounding errors committed in the
+ 5: IA1  INTEGER Input
+ On entry: the index increment of A. Most frequently the
+ Chebyshev coefficients are stored in adjacent elements of A,
+ and IA1 must be set to 1. However, if, for example, they are
+ stored in A(1),A(4),A(7),..., then the value of IA1 must be
+ 3. See also Section 8. Constraint: IA1 >= 1.
+
+ 6: LA  INTEGER Input
+ On entry:
+ the dimension of the array A as declared in the (sub)program
+ from which E02AHF is called.
+ Constraint: LA>=1+(NP1-1)*IA1.
+
+ 7: PATM1  DOUBLE PRECISION Output
+ On exit: the value of p(x ). If this value is passed to
+ min
+ the integration routine E02AJF with the coefficients of q(x)
+ , then the original polynomial p(x) is recovered, including
+ its constant coefficient.
+
+ 8: ADIF(LADIF)  DOUBLE PRECISION array Output
+ On exit: the Chebyshev coefficients of the derived
+ polynomial q(x). (The differentiation is with respect to the
+ variable x). Specifically, element 1+i*IADIF1 of ADIF
 transformation from x to x discussed in Section 3. Provided
 the recommended form of the transformation is used, a
 successful exit is thus assured whenever the value of x lies
 in the range x to x .
 min max
+ contains the coefficient a , for i=0,1,...,n-1. Additionally
+                           i
+ element 1+n*IADIF1 is set to zero. A call of the routine may
+ have the array name ADIF the same as A, provided that note
+ is taken of the order in which elements are overwritten,
+ when choosing the starting elements and increments IA1 and
+ IADIF1: i.e., the coefficients a ,a ,...,a    must be intact
+                                 0  1      i-1
+
+
+ after coefficient a is stored. In particular, it is
+ i
+ possible to overwrite the a completely by having IA1 =
+ i
+ IADIF1, and the actual arrays for A and ADIF identical.
+
+ 9: IADIF1  INTEGER Input
+ On entry: the index increment of ADIF. Most frequently the
+ Chebyshev coefficients are required in adjacent elements of
+ ADIF, and IADIF1 must be set to 1. However, if, for example,
+ they are to be stored in ADIF(1),ADIF(4),ADIF(7),..., then
+ the value of IADIF1 must be 3. See Section 8. Constraint:
+ IADIF1 >= 1.
 4: P  DOUBLE PRECISION Output
 On exit: the value of the polynomial.
+ 10: LADIF  INTEGER Input
+ On entry:
+ the dimension of the array ADIF as declared in the
+ (sub)program from which E02AHF is called.
+ Constraint: LADIF>=1+(NP1-1)*IADIF1.
 5: IFAIL  INTEGER Input/Output
+ 11: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, 1 or -1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ 85250,48 +87442,44 @@ have been determined.
Errors detected by the routine:
IFAIL= 1
 ABS(XCAP) > 1.0 + 4(epsilon), where (epsilon) is the
 machine precision. In this case the value of P is set
 arbitrarily to zero.
+ On entry NP1 < 1,
 IFAIL= 2
 On entry NPLUS1 < 1.
+ or XMAX <= XMIN,
+
+ or IA1 < 1,
+
+ or LA<=(NP1-1)*IA1,
+
+ or IADIF1 < 1,
+
+ or LADIF<=(NP1-1)*IADIF1.
7. Accuracy
 The rounding errors committed are such that the computed value of
 the polynomial is exact for a slightly perturbed set of
 coefficients a +(delta)a . The ratio of the sum of the absolute
 i i
 values of the (delta)a to the sum of the absolute values of the
 i
 a is less than a small multiple of (n+1) times machine
 i
 precision.
+ There is always a loss of precision in numerical differentiation,
+ in this case associated with the multiplication by 2i in the
+ formula quoted in Section 3.
8. Further Comments
The time taken by the routine is approximately proportional to
n+1.
 It is expected that a common use of E02AEF will be the evaluation
 of the polynomial approximations produced by E02ADF and E02AFF(*)
+ The increments IA1, IADIF1 are included as parameters to give a
+ degree of flexibility which, for example, allows a polynomial in
+ two variables to be differentiated with respect to either
+ variable without rearranging the coefficients.
9. Example


 Evaluate at 11 equallyspaced points in the interval 1<=x<=1 the
 polynomial of degree 4 with Chebyshev coefficients, 2.0, 0.5, 0.
 25, 0.125, 0.0625.

 The example program is written in a general form that will enable
 a polynomial of degree n in its Chebyshevseries form to be


 evaluated at m equallyspaced points in the interval 1<=x<=1.
 The program is selfstarting in that any number of data sets can
 be supplied.
+ Suppose a polynomial has been computed in Chebyshevseries form
+ to fit data over the interval [0.5,2.5]. The example program
+ evaluates the 1st and 2nd derivatives of this polynomial at 4
+ equally spaced points over the interval. (For the purposes of
+ this example, XMIN, XMAX and the Chebyshev coefficients are
+ simply supplied in DATA statements. Normally a program would
+ first read in or generate data and compute the fitted
+ polynomial.)
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ 85299,8 +87487,8 @@ have been determined.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02  Curve and Surface Fitting E02AGF
 E02AGF  NAG Foundation Library Routine Document
+ E02  Curve and Surface Fitting E02AJF
+ E02AJF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ 85309,388 +87497,235 @@ have been determined.
1. Purpose
 E02AGF computes constrained weighted leastsquares polynomial
 approximations in Chebyshevseries form to an arbitrary set of
 data points. The values of the approximations and any number of
 their derivatives can be specified at selected points.
+ E02AJF determines the coefficients in the Chebyshev-series
+ representation of the indefinite integral of a polynomial given
+ in Chebyshev-series form.
2. Specification
 SUBROUTINE E02AGF (M, KPLUS1, NROWS, XMIN, XMAX, X, Y, W,
 1 MF, XF, YF, LYF, IP, A, S, NP1, WRK,
 2 LWRK, IWRK, LIWRK, IFAIL)
 INTEGER M, KPLUS1, NROWS, MF, LYF, IP(MF), NP1,
 1 LWRK, IWRK(LIWRK), LIWRK, IFAIL
 DOUBLE PRECISION XMIN, XMAX, X(M), Y(M), W(M), XF(MF), YF
 1 (LYF), A(NROWS,KPLUS1), S(KPLUS1), WRK
 2 (LWRK)

 3. Description

 This routine determines leastsquares polynomial approximations
 of degrees up to k to the set of data points (x ,y ) with weights
 r r
 w , for r=1,2,...,m. The value of k, the maximum degree required,
 r
 is prescribed by the user. At each of the values XF , for r =
 r
 1,2,...,MF, of the independent variable x, the approximations and
 their derivatives up to order p are constrained to have one of
 r
 MF
 
 the userspecified values YF , for s=1,2,...,n, where n=MF+ > p
 s  r
 r=1

 The approximation of degree i has the property that, subject to
 the imposed contraints, it minimizes (Sigma) , the sum of the
 i
 squares of the weighted residuals (epsilon) for r=1,2,...,m
 r
 where

 (epsilon) =w (y f (x ))
 r r r i r

 and f (x ) is the value of the polynomial approximation of degree
 i r
 i at the rth data point.

 Each polynomial is represented in Chebyshevseries form with


 normalised argument x. This argument lies in the range 1 to +1
 and is related to the original variable x by the linear
 transformation

 2x(x +x )
 max min
 x= 
 (x x )
 max min

 where x and x , specified by the user, are respectively the
 min max
 lower and upper endpoints of the interval of x over which the
 polynomials are to be defined.

 The polynomial approximation of degree i can be written as

 1
 a +a T (x)+...+a T (x)+...+a T (x)
 2 i,0 i,1 1 ij j ii i



 where T (x) is the Chebyshev polynomial of the first kind of
 j


 degree j with argument x. For i=n,n+1,...,k, the routine produces
 the values of the coefficients a , for j=0,1,...,i, together
 ij
 with the value of the root mean square residual, S , defined as
 i


 / 
 / >
 / 
 / i
 / , where m' is the number of data points with
 \/ (m'+ni1)
 nonzero weight.

 Values of the approximations may subsequently be computed using
 E02AEF or E02AKF.



 First E02AGF determines a polynomial (mu)(x), of degree n1,
 which satisfies the given constraints, and a polynomial (nu)(x),
 of degree n, which has value (or derivative) zero wherever a
 constrained value (or derivative) is specified. It then fits
 y (mu)(x ), for r=1,2,...,m with polynomials of the required
 r r


 degree in x each with factor (nu)(x). Finally the coefficients of


 (mu)(x) are added to the coefficients of these fits to give the
 coefficients of the constrained polynomial approximations to the
 data points (x ,y ), for r=1,2,...,m. The method employed is
 r r
 given in Hayes [3]: it is an extension of Forsythe's orthogonal
 polynomials method [2] as modified by Clenshaw [1].

 4. References

 [1] Clenshaw C W (1960) Curve Fitting with a Digital Computer.
 Comput. J. 2 170173.

 [2] Forsythe G E (1957) Generation and use of orthogonal
 polynomials for data fitting with a digital computer. J.
 Soc. Indust. Appl. Math. 5 7488.

 [3] Hayes J G (1970) Curve Fitting by Polynomials in One
 Variable. Numerical Approximation to Functions and Data. (ed
 J G Hayes) Athlone Press, London.

 5. Parameters

 1: M  INTEGER Input
 On entry: the number m of data points to be fitted.
 Constraint: M >= 1.

 2: KPLUS1  INTEGER Input
 On entry: k+1, where k is the maximum degree required.
 Constraint: n+1<=KPLUS1<=m''+n, where n is the total number
 of constraints and m'' is the number of data points with
 nonzero weights and distinct abscissae which do not
 coincide with any of the XF(r).

 3: NROWS  INTEGER Input
 On entry:
 the first dimension of the array A as declared in the
 (sub)program from which E02AGF is called.
 Constraint: NROWS >= KPLUS1.

 4: XMIN  DOUBLE PRECISION Input

 5: XMAX  DOUBLE PRECISION Input
 On entry: the lower and upper endpoints, respectively, of
 the interval [x ,x ]. Unless there are specific reasons
 min max
 to the contrary, it is recommended that XMIN and XMAX be set
 respectively to the lowest and highest value among the x
 r
 and XF(r). This avoids the danger of extrapolation provided
 there is a constraint point or data point with nonzero
 weight at each endpoint. Constraint: XMAX > XMIN.

 6: X(M)  DOUBLE PRECISION array Input
 On entry: the value x of the independent variable at the r
 r
 th data point, for r=1,2,...,m. Constraint: the X(r) must be
 in nondecreasing order and satisfy XMIN <= X(r) <= XMAX.

 7: Y(M)  DOUBLE PRECISION array Input
 On entry: Y(r) must contain y , the value of the dependent
 r
 variable at the rth data point, for r=1,2,...,m.

 8: W(M)  DOUBLE PRECISION array Input
 On entry: the weights w_r to be applied to the data points
 x_r, for r=1,2,...,m. For advice on the choice of weights
 see the Chapter Introduction. Negative weights are treated
 as positive. A zero weight causes the corresponding data
 point to be ignored. Zero weight should be given to any data
 point whose x and y values both coincide with those of a
 constraint (otherwise the denominators involved in the
 root-mean-square residuals s_i will be slightly in error).
+ SUBROUTINE E02AJF (NP1, XMIN, XMAX, A, IA1, LA, QATM1,
+ 1 AINT, IAINT1, LAINT, IFAIL)
+ INTEGER NP1, IA1, LA, IAINT1, LAINT, IFAIL
+ DOUBLE PRECISION XMIN, XMAX, A(LA), QATM1, AINT(LAINT)
 9: MF  INTEGER Input
 On entry: the number of values of the independent variable
 at which a constraint is specified. Constraint: MF >= 1.
+ 3. Description
 10: XF(MF)  DOUBLE PRECISION array Input
 On entry: the rth value of the independent variable at
 which a constraint is specified, for r = 1,2,...,MF.
 Constraint: these values need not be ordered but must be
 distinct and satisfy XMIN <= XF(r) <= XMAX.
+ This routine forms the polynomial which is the indefinite
+ integral of a given polynomial. Both the original polynomial and
+ its integral are represented in Chebyshev-series form. If
+ supplied with the coefficients a_i, for i=0,1,...,n, of a
+ polynomial p(xbar) of degree n, where
 11: YF(LYF)  DOUBLE PRECISION array Input
 On entry: the values which the approximating polynomials
 and their derivatives are required to take at the points
 specified in XF. For each value of XF(r), YF contains in
 successive elements the required value of the approximation,
 its first derivative, second derivative, ..., p_r-th
 derivative, for r = 1,2,...,MF. Thus the value which the
 k-th derivative of each approximation (k=0 referring to the
 approximation itself) is required to take at the point XF(r)
 must be contained in YF(s), where
 s = r + k + p_1 + p_2 + ... + p_{r-1},
 for k=0,1,...,p_r and r = 1,2,...,MF. The derivatives are
 with respect to the user's variable x.
+ p(xbar) = (1/2)a_0 + a_1 T_1(xbar) + ... + a_n T_n(xbar),
 12: LYF  INTEGER Input
 On entry:
 the dimension of the array YF as declared in the
 (sub)program from which E02AGF is called.
 Constraint: LYF >= n, where n = MF + p_1 + p_2 + ... + p_MF.
+ the routine returns the coefficients a'_i, for i=0,1,...,n+1,
+ of the polynomial q(xbar) of degree n+1, where
 13: IP(MF)  INTEGER array Input
 On entry: IP(r) must contain p_r, the order of the highest
 order derivative specified at XF(r), for r = 1,2,...,MF.
 p_r = 0 implies that the value of the approximation at XF(r)
 is specified, but not that of any derivative. Constraint:
 IP(r) >= 0, for r=1,2,...,MF.
+ q(xbar) = (1/2)a'_0 + a'_1 T_1(xbar) + ... + a'_{n+1} T_{n+1}(xbar),
 14: A(NROWS,KPLUS1)  DOUBLE PRECISION array Output
 On exit: A(i+1,j+1) contains the coefficient a_{ij} in the
 approximating polynomial of degree i, for i=n,n+1,...,k;
 j=0,1,...,i.
+ and
 15: S(KPLUS1)  DOUBLE PRECISION array Output
 On exit: S(i+1) contains s_i, for i=n,n+1,...,k, the
 root-mean-square residual corresponding to the approximating
 polynomial of degree i. In the case where the number of data
 points with non-zero weight is equal to k+1-n, s_i is
 indeterminate: the routine sets it to zero. For the
 interpretation of the values of s_i and their use in
 selecting an appropriate degree, see Section 3.1 of the
 Chapter Introduction.
+ q(xbar) = integral p(xbar) dx.
 16: NP1  INTEGER Output
 On exit: n+1, where n is the total number of constraint
 conditions imposed: n = MF + p_1 + p_2 + ... + p_MF.
+
 17: WRK(LWRK)  DOUBLE PRECISION array Output
 On exit: WRK contains the weighted residuals of the highest
 degree of fit determined (k). The residual at x_r is in
 element 2(n+1)+3(m+k+1)+r, for r=1,2,...,m. The rest of the
 array is used as workspace.
+ Here T_j(xbar) denotes the Chebyshev polynomial of the first
+ kind of degree j with argument xbar. It is assumed that the
+ normalised
+
 18: LWRK  INTEGER Input
 On entry:
 the dimension of the array WRK as declared in the
 (sub)program from which E02AGF is called.
 Constraint: LWRK>=max(4*M+3*KPLUS1, 8*n+5*IPMAX+MF+10)+2*n+2
 , where IPMAX = max(IP(R)).
+ variable xbar in the interval [-1,+1] was obtained from the
+ user's original variable x in the interval [x_min, x_max] by
+ the linear transformation
 19: IWRK(LIWRK)  INTEGER array Workspace
+ xbar = (2x - (x_max + x_min)) / (x_max - x_min)
 20: LIWRK  INTEGER Input
 On entry:
 the dimension of the array IWRK as declared in the
 (sub)program from which E02AGF is called.
 Constraint: LIWRK>=2*MF+2.
+ and that the user requires the integral to be with respect to the
+
 21: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, 1 or 1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+ variable x. If the integral with respect to xbar is required,
+ set x_min = -1 and x_max = +1.
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+ Values of the integral can subsequently be computed, from the
+ coefficients obtained, by using E02AKF.
 6. Error Indicators and Warnings
+ The method employed is that of Chebyshev-series [1] modified
+ for integrating with respect to x. Initially taking
+ a_{n+1} = a_{n+2} = 0, the routine forms successively
 Errors detected by the routine:
+ a'_i = ((a_{i-1} - a_{i+1}) / (2i)) * ((x_max - x_min) / 2),   i = n+1, n, ..., 1.
 IFAIL= 1
 On entry M < 1,
+ The constant coefficient a'_0 is chosen so that q(xbar) is
+ equal to a specified value, QATM1, at the lower endpoint of
+ the interval on
+
 or KPLUS1 < n + 1,
+ which it is defined, i.e., xbar = -1, which corresponds to
+ x = x_min.
 or NROWS < KPLUS1,
+ 4. References
 or MF < 1,
+ [1] Unknown (1961) Chebyshev-series. Modern Computing Methods,
+ Chapter 8. NPL Notes on Applied Science No. 16 (2nd Edition).
+ HMSO.
 or LYF < n,
+ 5. Parameters
 or LWRK is too small (see Section 5),
+ 1: NP1  INTEGER Input
+ On entry: n+1, where n is the degree of the given
+ polynomial p(x). Thus NP1 is the number of coefficients in
+ this polynomial. Constraint: NP1 >= 1.
 or LIWRK<2*MF+2.
 (Here n is the total number of constraint conditions.)
+ 2: XMIN  DOUBLE PRECISION Input
 IFAIL= 2
 IP(r) < 0 for some r = 1,2,...,MF.
+ 3: XMAX  DOUBLE PRECISION Input
+ On entry: the lower and upper endpoints respectively of
+ the interval [x_min, x_max]. The Chebyshev-series
+
 IFAIL= 3
 XMIN >= XMAX, or XF(r) is not in the interval XMIN to XMAX
 for some r = 1,2,...,MF, or the XF(r) are not distinct.
+ representation is in terms of the normalised variable xbar,
+ where
+ xbar = (2x - (x_max + x_min)) / (x_max - x_min).
+ Constraint: XMAX > XMIN.
 IFAIL= 4
 X(r) is not in the interval XMIN to XMAX for some
 r=1,2,...,M.
+ 4: A(LA)  DOUBLE PRECISION array Input
+ On entry: the Chebyshev coefficients of the polynomial
+ p(xbar). Specifically, element 1+i*IA1 of A must contain the
+ coefficient a_i, for i=0,1,...,n. Only these n+1 elements
+ will be accessed.
 IFAIL= 5
 X(r) < X(r1) for some r=2,3,...,M.
+ Unchanged on exit, but see AINT, below.
 IFAIL= 6
 KPLUS1>m''+n, where m'' is the number of data points with
 nonzero weight and distinct abscissae which do not coincide
 with any XF(r). Thus there is no unique solution.
+ 5: IA1  INTEGER Input
+ On entry: the index increment of A. Most frequently the
+ Chebyshev coefficients are stored in adjacent elements of A,
+ and IA1 must be set to 1. However, if for example, they are
+ stored in A(1),A(4),A(7),..., then the value of IA1 must be
+ 3. See also Section 8. Constraint: IA1 >= 1.
 IFAIL= 7
 The polynomials (mu)(x) and/or (nu)(x) cannot be determined.
 The problem supplied is too illconditioned. This may occur
 when the constraint points are very close together, or large
 in number, or when an attempt is made to constrain high
 order derivatives.
+ 6: LA  INTEGER Input
+ On entry:
+ the dimension of the array A as declared in the (sub)program
+ from which E02AJF is called.
+ Constraint: LA>=1+(NP11)*IA1.
 7. Accuracy
+ 7: QATM1  DOUBLE PRECISION Input
+ On entry: the value that the integrated polynomial is
+ required to have at the lower endpoint of its interval of
+
 No complete error analysis exists for either the interpolating
 algorithm or the approximating algorithm. However, considerable
 experience with the approximating algorithm shows that it is
 generally extremely satisfactory. Also the moderate number of
 constraints, of low order, which are typical of data fitting
 applications, are unlikely to cause difficulty with the
 interpolating routine.
+ definition, i.e., at xbar = -1, which corresponds to
+ x = x_min. Thus QATM1 is a constant of integration and will
+ normally be set to zero by the user.
+
+ 8: AINT(LAINT)  DOUBLE PRECISION array Output
+ On exit: the Chebyshev coefficients of the integral q(xbar).
+ (The integration is with respect to the variable x, and the
+ constant coefficient is chosen so that q(x_min) equals
+ QATM1.) Specifically, element 1+i*IAINT1 of AINT contains
+ the coefficient a'_i, for i=0,1,...,n+1. A call of the
+ routine may have the array name AINT the same as A, provided
+ that note is taken of the order in which elements are
+ overwritten when choosing starting elements and increments
+ IA1 and IAINT1: i.e., the coefficients a_0, a_1, ..., a_{i-2}
+ must be intact after coefficient a'_i is stored. In
+ particular it is possible to overwrite the a_i entirely by
+ having IA1 = IAINT1, and the actual array for A and AINT
+ identical.
+
+ 9: IAINT1  INTEGER Input
+ On entry: the index increment of AINT. Most frequently the
+ Chebyshev coefficients are required in adjacent elements of
+ AINT, and IAINT1 must be set to 1. However, if, for example,
+ they are to be stored in AINT(1),AINT(4),AINT(7),..., then
+ the value of IAINT1 must be 3. See also Section 8.
+ Constraint: IAINT1 >= 1.
+
+ 10: LAINT  INTEGER Input
+ On entry:
+ the dimension of the array AINT as declared in the
+ (sub)program from which E02AJF is called.
+ Constraint: LAINT>=1+NP1*IAINT1.
 8. Further Comments
+ 11: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, 1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 The time taken by the routine to form the interpolating
 polynomial is approximately proportional to n^3, and that to
 form the approximating polynomials is very approximately
 proportional to m(k+1)(k+1-n).
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
 To carry out a leastsquares polynomial fit without constraints,
 use E02ADF. To carry out polynomial interpolation only, use
 E01AEF(*).
+ 6. Error Indicators and Warnings
 9. Example
+ Errors detected by the routine:
 The example program reads data in the following order, using the
 notation of the parameter list above:
+ IFAIL= 1
+ On entry NP1 < 1,
 MF
+ or XMAX <= XMIN,
 IP(i), XF(i), Yvalue and derivative values (if any) at
 XF(i), for i= 1,2,...,MF
+ or IA1 < 1,
 M
+ or LA<=(NP11)*IA1,
 X(i), Y(i), W(i), for i=1,2,...,M
+ or IAINT1 < 1,
 k, XMIN, XMAX
+ or LAINT<=NP1*IAINT1.
 The output is:
+ 7. Accuracy
 the rootmeansquare residual for each degree from n to k;
+ In general there is a gain in precision in numerical integration,
+ in this case associated with the division by 2i in the formula
+ quoted in Section 3.
 the Chebyshev coefficients for the fit of degree k;
+ 8. Further Comments
 the data points, and the fitted values and residuals for
 the fit of degree k.
+ The time taken by the routine is approximately proportional to
+ n+1.
 The program is written in a generalized form which will read any
 number of data sets.
+ The increments IA1, IAINT1 are included as parameters to give a
+ degree of flexibility which, for example, allows a polynomial in
+ two variables to be integrated with respect to either variable
+ without rearranging the coefficients.
 The data set supplied specifies 5 data points in the
 interval [0.0,4.0] with unit weights, to which are to be
 fitted polynomials, p, of degrees up to 4, subject to the 3
 constraints:
+ 9. Example
 p(0.0)=1.0, p'(0.0)=2.0, p(4.0)=9.0.
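The underlying idea of a weighted least-squares polynomial fit subject to equality constraints can be sketched as a Lagrange (KKT) linear system. This is an illustration only: the function name, the power-basis representation, and the five data points in the test below are assumptions of the sketch, not the method E02AGF actually uses (which works with orthogonal polynomials); only the three constraints come from the example text:

```python
# Minimal sketch of an equality-constrained weighted least-squares
# polynomial fit, solved as a Lagrange/KKT system.  Illustration only.
import numpy as np

def constrained_polyfit(x, y, w, k, cx, cord, cval):
    """Fit sum_j a_j x^j of degree k; cord[r] is the derivative order
    constrained at cx[r] to the value cval[r]."""
    V = np.vander(x, k + 1, increasing=True) * w[:, None]
    rhs = y * w
    # Constraint rows: (d^p/dx^p) x^j evaluated at cx.
    C = np.zeros((len(cx), k + 1))
    for r, (xc, p) in enumerate(zip(cx, cord)):
        for j in range(p, k + 1):
            C[r, j] = np.prod(np.arange(j - p + 1, j + 1)) * xc ** (j - p)
    # KKT system: [2 V^T V  C^T; C  0] [a; lam] = [2 V^T rhs; cval]
    m = len(cx)
    K = np.block([[2 * V.T @ V, C.T], [C, np.zeros((m, m))]])
    b = np.concatenate([2 * V.T @ rhs, np.asarray(cval, float)])
    return np.linalg.solve(K, b)[: k + 1]
```

With data lying on 1 + 2x (which happens to satisfy all three constraints), the constrained fit of degree 3 recovers that polynomial exactly.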
+ Suppose a polynomial has been computed in Chebyshev-series
+ form to fit data over the interval [0.5,2.5]. The example
+ program evaluates the integral of the polynomial from 0.0 to
+ 2.0. (For the purpose of this example, XMIN, XMAX and the
+ Chebyshev coefficients are simply supplied in DATA
+ statements. Normally a program would read in or generate
+ data and compute the fitted polynomial.)
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ 85698,8 +87733,8 @@ have been determined.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02  Curve and Surface Fitting E02AHF
 E02AHF  NAG Foundation Library Routine Document
+ E02  Curve and Surface Fitting E02AKF
+ E02AKF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementationdependent details.
@@ 85708,93 +87743,87 @@ have been determined.
1. Purpose
 E02AHF determines the coefficients in the Chebyshev-series
 representation of the derivative of a polynomial given in
 Chebyshev-series form.
+ E02AKF evaluates a polynomial from its Chebyshev-series
+ representation, allowing an arbitrary index increment for
+ accessing the array of coefficients.
2. Specification
 SUBROUTINE E02AHF (NP1, XMIN, XMAX, A, IA1, LA, PATM1,
 1 ADIF, IADIF1, LADIF, IFAIL)
 INTEGER NP1, IA1, LA, IADIF1, LADIF, IFAIL
 DOUBLE PRECISION XMIN, XMAX, A(LA), PATM1, ADIF(LADIF)
+ SUBROUTINE E02AKF (NP1, XMIN, XMAX, A, IA1, LA, X, RESULT,
+ 1 IFAIL)
+ INTEGER NP1, IA1, LA, IFAIL
+ DOUBLE PRECISION XMIN, XMAX, A(LA), X, RESULT
3. Description
 This routine forms the polynomial which is the derivative of
 a given polynomial. Both the original polynomial and its
 derivative are represented in Chebyshev-series form. Given
 the coefficients a_i, for i=0,1,...,n, of a polynomial
 p(xbar) of degree n, where

 p(xbar) = (1/2)a_0 + a_1 T_1(xbar) + ... + a_n T_n(xbar)


+ If supplied with the coefficients a_i, for i=0,1,...,n, of a
+
 the routine returns the coefficients abar_i, for
 i=0,1,...,n-1, of the polynomial q(xbar) of degree n-1,
 where
+ polynomial p(x) of degree n, where
 q(xbar) = dp(xbar)/dx = (1/2)abar_0 + abar_1 T_1(xbar) + ... + abar_{n-1} T_{n-1}(xbar).
+ p(xbar) = (1/2)a_0 + a_1 T_1(xbar) + ... + a_n T_n(xbar),

+
 Here T_j(xbar) denotes the Chebyshev polynomial of the first kind of

+ this routine returns the value of p(x) at a userspecified value
+
 degree j with argument xbar. It is assumed that the normalised

+ of the variable x. Here T_j(xbar) denotes the Chebyshev polynomial of
+
 variable xbar in the interval [-1,+1] was obtained from the
 user's original variable x in the interval [x_min, x_max] by
 the linear transformation
+ the first kind of degree j with argument xbar. It is assumed that
+
 xbar = (2x - (x_max + x_min)) / (x_max - x_min)
+ the independent variable xbar in the interval [-1,+1] was
+ obtained from the user's original variable x in the interval
+ [x_min, x_max] by the linear transformation
 and that the user requires the derivative to be with respect to

+ xbar = (2x - (x_max + x_min)) / (x_max - x_min).
 the variable x. If the derivative with respect to xbar is
 required, set x_min = -1 and x_max = +1.
+ The coefficients a_i may be supplied in the array A, with any
+ increment between the indices of array elements which contain
+ successive coefficients. This enables the routine to be used
+ in surface fitting and other applications, in which the array
+ might have two or more dimensions.
 Values of the derivative can subsequently be computed, from the
 coefficients obtained, by using E02AKF.
+ The method employed is based upon the three-term recurrence
+ relation due to Clenshaw [1], with modifications due to
+ Reinsch and Gentleman (see [4]). For further details of the
+ algorithm and its use see Cox [2] and Cox and Hayes [3].
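The basic technique can be illustrated with the plain Clenshaw recurrence alone (the Reinsch and Gentleman modifications used by the actual routine are omitted here, and the function name and list interface are invented for this sketch), again with the constant term stored doubled:

```python
# Sketch of Clenshaw summation for a Chebyshev series on [xmin, xmax]
# with a half-weighted constant term.  Illustration only.

def clenshaw_eval(a, xmin, xmax, x):
    """Evaluate p(xbar) = a_0/2 + sum_{i=1}^{n} a_i T_i(xbar)."""
    xbar = (2.0 * x - (xmax + xmin)) / (xmax - xmin)
    b1 = b2 = 0.0
    for ai in reversed(a[1:]):       # b_i = a_i + 2*xbar*b_{i+1} - b_{i+2}
        b1, b2 = ai + 2.0 * xbar * b1 - b2, b1
    return a[0] / 2.0 + xbar * b1 - b2
```

For instance, the series [0, 0, 1] represents T_2(xbar), and evaluating it at x = 0.3 on [-1, 1] reproduces 2(0.3)^2 - 1.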
 The method employed is that of [1] modified to obtain the

+ 4. References
 derivative with respect to x. Initially setting
 abar_{n+1} = abar_n = 0, the routine forms successively
+ [1] Clenshaw C W (1955) A Note on the Summation of Chebyshev
+ Series. Math. Tables Aids Comput. 9 118-120.
 abar_{i-1} = abar_{i+1} + (2 / (x_max - x_min)) * 2i * a_i,   i = n, n-1, ..., 1.
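The differentiation recurrence can be sketched as follows (an illustration under the same half-weighted a_0 convention, not the NAG code; the function name is invented):

```python
# Sketch of Chebyshev-series differentiation on [xmin, xmax].
# Illustration only, not the NAG routine.

def chebyshev_derivative(a, xmin, xmax):
    """a[i] holds a_i, i = 0..n, of p; returns abar[i], i = 0..n-1, of q."""
    n = len(a) - 1
    abar = [0.0] * (n + 2)                # abar_{n+1} = abar_n = 0
    scale = 2.0 / (xmax - xmin)
    for i in range(n, 0, -1):             # i = n, n-1, ..., 1
        abar[i - 1] = abar[i + 1] + scale * 2.0 * i * a[i]
    return abar[:n]
```

As a check, the derivative of T_3 is 12x^2 - 3 = 3T_0 + 6T_2, which in the half-weighted convention is the coefficient list [6, 0, 6].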
+ [2] Cox M G (1973) A data-fitting package for the
+ non-specialist user. Report NAC40. National Physical
+ Laboratory.
+ [3] Cox M G and Hayes J G (1973) Curve fitting: a guide and
+ suite of algorithms for the non-specialist user. Report
+ NAC26. National Physical Laboratory.
 [1] Unknown (1961) Chebyshev-series. Modern Computing Methods,
 Chapter 8. NPL Notes on Applied Science No. 16 (2nd Edition).
 HMSO.
+ [4] Gentleman W M (1969) An Error Analysis of Goertzel's
+ (Watt's) Method for Computing Fourier Coefficients. Comput.
+ J. 12 160-165.
5. Parameters
1: NP1  INTEGER Input
On entry: n+1, where n is the degree of the given
 polynomial p(x). Thus NP1 is the number of coefficients in
 this polynomial. Constraint: NP1 >= 1.
+
+
+ polynomial p(x). Constraint: NP1 >= 1.
2: XMIN  DOUBLE PRECISION Input
@@ 85811,74 +87840,38 @@ have been determined.
 xbar = (2x - (x_max + x_min)) / (x_max - x_min).
 Constraint: XMAX > XMIN.
+ Constraint: XMIN < XMAX.
4: A(LA)  DOUBLE PRECISION array Input
On entry: the Chebyshev coefficients of the polynomial p(x).
 Specifically, element 1 + i*IA1 of A must contain the
 coefficient a_i, for i=0,1,...,n. Only these n+1 elements
 will be accessed.

 Unchanged on exit, but see ADIF, below.
+ Specifically, element 1+i*IA1 must contain the coefficient
+ a_i, for i=0,1,...,n. Only these n+1 elements will be
+ accessed.
5: IA1  INTEGER Input
 On entry: the index increment of A. Most frequently the
+ On entry: the index increment of A. Most frequently, the
Chebyshev coefficients are stored in adjacent elements of A,
and IA1 must be set to 1. However, if, for example, they are
stored in A(1),A(4),A(7),..., then the value of IA1 must be
 3. See also Section 8. Constraint: IA1 >= 1.
+ 3. Constraint: IA1 >= 1.
6: LA  INTEGER Input
On entry:
the dimension of the array A as declared in the (sub)program
 from which E02AHF is called.
 Constraint: LA>=1+(NP11)*IA1.

 7: PATM1  DOUBLE PRECISION Output
 On exit: the value of p(x_min). If this value is passed to
 the integration routine E02AJF with the coefficients of
 q(xbar), then the original polynomial p(xbar) is recovered,
 including its constant coefficient.

 8: ADIF(LADIF)  DOUBLE PRECISION array Output
 On exit: the Chebyshev coefficients of the derived
 polynomial q(xbar). (The differentiation is with respect to
 the variable x.) Specifically, element 1+i*IADIF1 of ADIF
 contains the coefficient abar_i, for i=0,1,...,n-1.
 Additionally, element 1+n*IADIF1 is set to zero. A call of
 the routine may have the array name ADIF the same as A,
 provided that note is taken of the order in which elements
 are overwritten, when choosing the starting elements and
 increments IA1 and IADIF1: i.e., the coefficients
 a_0, a_1, ..., a_{i-1} must be intact

+ from which E02AKF is called.
+ Constraint: LA>=(NP11)*IA1+1.
 after coefficient abar_i is stored. In particular, it is
 possible to overwrite the a_i completely by having
 IA1 = IADIF1, and the actual arrays for A and ADIF
 identical.
+ 7: X  DOUBLE PRECISION Input
+ On entry: the argument x at which the polynomial is to be
+ evaluated. Constraint: XMIN <= X <= XMAX.
 9: IADIF1  INTEGER Input
 On entry: the index increment of ADIF. Most frequently the
 Chebyshev coefficients are required in adjacent elements of
 ADIF, and IADIF1 must be set to 1. However, if, for example,
 they are to be stored in ADIF(1),ADIF(4),ADIF(7),..., then
 the value of IADIF1 must be 3. See Section 8. Constraint:
 IADIF1 >= 1.
+ 8: RESULT  DOUBLE PRECISION Output
+
 10: LADIF  INTEGER Input
 On entry:
 the dimension of the array ADIF as declared in the
 (sub)program from which E02AHF is called.
 Constraint: LADIF>=1+(NP11)*IADIF1.
+ On exit: the value of the polynomial p(x).
 11: IFAIL  INTEGER Input/Output
+ 9: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, 1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ 85891,44 +87884,41 @@ have been determined.
Errors detected by the routine:
IFAIL= 1
 On entry NP1 < 1,

 or XMAX <= XMIN,
+ On entry NP1 < 1,
or IA1 < 1,
or LA<=(NP11)*IA1,
 or IADIF1 < 1,
+ or XMIN >= XMAX.
 or LADIF<=(NP11)*IADIF1.
+ IFAIL= 2
+ X does not satisfy the restriction XMIN <= X <= XMAX.
7. Accuracy
 There is always a loss of precision in numerical differentiation,
 in this case associated with the multiplication by 2i in the
 formula quoted in Section 3.
+ The rounding errors are such that the computed value of the
+ polynomial is exact for a slightly perturbed set of
+ coefficients a_i + (delta)a_i. The ratio of the sum of the
+ absolute values of the (delta)a_i to the sum of the absolute
+ values of the a_i is less than a small multiple of
+ (n+1)*machine precision.
8. Further Comments
The time taken by the routine is approximately proportional to
n+1.
 The increments IA1, IADIF1 are included as parameters to give a
 degree of flexibility which, for example, allows a polynomial in
 two variables to be differentiated with respect to either
 variable without rearranging the coefficients.

9. Example
Suppose a polynomial has been computed in Chebyshevseries form
to fit data over the interval [0.5,2.5]. The example program
 evaluates the 1st and 2nd derivatives of this polynomial at 4
 equally spaced points over the interval. (For the purposes of
 this example, XMIN, XMAX and the Chebyshev coefficients are
 simply supplied in DATA statements. Normally a program would
 first read in or generate data and compute the fitted
 polynomial.)
+ evaluates the polynomial at 4 equally spaced points over the
+ interval. (For the purposes of this example, XMIN, XMAX and the
+ Chebyshev coefficients are supplied in DATA statements. Normally
+ a program would first read in or generate data and compute the
+ fitted polynomial.)
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ 85936,8 +87926,8 @@ have been determined.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02  Curve and Surface Fitting E02AJF
 E02AJF  NAG Foundation Library Routine Document
+ E02  Curve and Surface Fitting E02BAF
+ E02BAF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementationdependent details.
@@ 85946,184 +87936,193 @@ have been determined.
1. Purpose
 E02AJF determines the coefficients in the Chebyshev-series
 representation of the indefinite integral of a polynomial
 given in Chebyshev-series form.
+ E02BAF computes a weighted least-squares approximation to an
+ arbitrary set of data points by a cubic spline with knots
+ prescribed by the user. Cubic spline interpolation can also
+ be carried out.
2. Specification
 SUBROUTINE E02AJF (NP1, XMIN, XMAX, A, IA1, LA, QATM1,
 1 AINT, IAINT1, LAINT, IFAIL)
 INTEGER NP1, IA1, LA, IAINT1, LAINT, IFAIL
 DOUBLE PRECISION XMIN, XMAX, A(LA), QATM1, AINT(LAINT)
+ SUBROUTINE E02BAF (M, NCAP7, X, Y, W, LAMDA, WORK1, WORK2,
+ 1 C, SS, IFAIL)
+ INTEGER M, NCAP7, IFAIL
+ DOUBLE PRECISION X(M), Y(M), W(M), LAMDA(NCAP7), WORK1(M),
+ 1 WORK2(4*NCAP7), C(NCAP7), SS
3. Description
 This routine forms the polynomial which is the indefinite
 integral of a given polynomial. Both the original polynomial
 and its integral are represented in Chebyshev-series form. If
 supplied with the coefficients a_i, for i=0,1,...,n, of a
 polynomial p(xbar) of degree n, where

 p(xbar) = (1/2)a_0 + a_1 T_1(xbar) + ... + a_n T_n(xbar),
+ This routine determines a least-squares cubic spline
+ approximation s(x) to the set of data points (x_r, y_r) with
+ weights
 the routine returns the coefficients a'_i, for
 i=0,1,...,n+1, of the polynomial q(xbar) of degree n+1,
 where
+ w_r, for r=1,2,...,m. The value of NCAP7 = n+7, where n is
+ the number of intervals of the spline (one greater than the
+ number of interior knots), and the values of the knots
+ lambda_5, lambda_6, ..., lambda_{n+3}, interior to the data
+ interval, are prescribed by the user.
 q(xbar) = (1/2)a'_0 + a'_1 T_1(xbar) + ... + a'_{n+1} T_{n+1}(xbar),
+ s(x) has the property that it minimizes (theta), the sum of
+ the squares of the weighted residuals (epsilon)_r, for
+ r=1,2,...,m, where
 and
+ (epsilon)_r = w_r (y_r - s(x_r)).
 q(xbar) = integral p(xbar) dx.
+ The routine produces this minimizing value of (theta) and
+ the coefficients c_1, c_2, ..., c_q, where q = n+3, in the
+ B-spline representation
 Here T_j(xbar) denotes the Chebyshev polynomial of the first
 kind of degree j with argument xbar. It is assumed that the
 normalised

+ s(x) = sum from i=1 to q of c_i N_i(x).
 variable xbar in the interval [-1,+1] was obtained from the
 user's original variable x in the interval [x_min, x_max] by
 the linear transformation
+ Here N_i(x) denotes the normalised B-spline of degree 3
+ defined upon the knots lambda_i, lambda_{i+1}, ..., lambda_{i+4}.
 xbar = (2x - (x_max + x_min)) / (x_max - x_min)
+ In order to define the full set of B-splines required, eight
+ additional knots lambda_1, lambda_2, lambda_3, lambda_4 and
+ lambda_{n+4}, lambda_{n+5}, lambda_{n+6}, lambda_{n+7} are
+ inserted automatically by the routine. The first four of
+ these are set equal to the smallest x_r and the last four to
+ the largest x_r.
 and that the user requires the integral to be with respect to the

+ The representation of s(x) in terms of Bsplines is the most
 variable x. If the integral with respect to xbar is
 required, set x_min = -1 and x_max = +1.
+ compact form possible in that only n+3 coefficients, in addition
+
 Values of the integral can subsequently be computed, from the
 coefficients obtained, by using E02AKF.
+ to the n+7 knots, fully define s(x).
 The method employed is that of Chebyshev-series [1] modified
 for integrating with respect to x. Initially taking
 a_{n+1} = a_{n+2} = 0, the routine forms successively
+ The method employed involves forming and then computing the
+ least-squares solution of a set of m linear equations in the
 a'_i = ((a_{i-1} - a_{i+1}) / (2i)) * ((x_max - x_min) / 2),   i = n+1, n, ..., 1.
+ coefficients c_i (i=1,2,...,n+3). The equations are formed
+ using a recurrence relation for B-splines that is
+ unconditionally stable (Cox [1], de Boor [5]), even for
+ multiple (coincident) knots. The least-squares solution is
+ also obtained in a stable manner by using orthogonal
+ transformations, viz. a variant of Givens rotations
+ (Gentleman [6] and [7]). This requires only one equation to
+ be stored at a time. Full advantage is taken of the
+ structure of the equations, there being at most four
+ non-zero values of N_i(x) for any value of x and hence at
+ most four coefficients in each equation.
 The constant coefficient a'_0 is chosen so that q(xbar) is
 equal to a specified value, QATM1, at the lower endpoint of
 the interval on
+ For further details of the algorithm and its use see Cox [2], [3]
+ and [4].
 which it is defined, i.e., xbar = -1, which corresponds to
 x = x_min.
+ Subsequent evaluation of s(x) from its B-spline
+ representation may be carried out using E02BBF. If
+ derivatives of s(x) are also required, E02BCF may be used.
+ E02BDF can be used to compute the definite integral of s(x).
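The overall technique (a B-spline basis evaluated by the stable Cox-de Boor recurrence, followed by a linear least-squares solve) can be sketched in outline. This is an assumption-level illustration: the function names are invented, and numpy's general-purpose lstsq stands in for the Givens-rotation scheme described above, with no attempt to reproduce E02BAF's interface:

```python
# Rough sketch of weighted least-squares cubic-spline fitting on a
# prescribed knot set.  Illustration only, not the NAG routine.
import numpy as np

def bspline_basis(t, k, x):
    """Values N_i(x), i = 0..len(t)-k-2, of degree-k B-splines on the
    full knot vector t, via the Cox-de Boor recurrence."""
    # Degree-0 splines: indicator functions of the knot spans.
    B = np.array([1.0 if t[i] <= x < t[i + 1] else 0.0
                  for i in range(len(t) - 1)])
    if x >= t[-1]:                        # close the last non-empty span
        B[np.max(np.nonzero(t[:-1] < t[1:]))] = 1.0
    for d in range(1, k + 1):             # raise the degree step by step
        Bn = np.zeros(len(t) - d - 1)
        for i in range(len(Bn)):
            left = (x - t[i]) / (t[i + d] - t[i]) if t[i + d] > t[i] else 0.0
            right = ((t[i + d + 1] - x) / (t[i + d + 1] - t[i + 1])
                     if t[i + d + 1] > t[i + 1] else 0.0)
            Bn[i] = left * B[i] + right * B[i + 1]
        B = Bn
    return B

def lsq_cubic_spline(x, y, w, interior):
    """Least-squares cubic spline with the given interior knots;
    returns (full knot vector, coefficients c_i)."""
    # Four coincident exterior knots at each end, as the text describes.
    t = np.concatenate([[x[0]] * 4, interior, [x[-1]] * 4])
    A = np.array([wi * bspline_basis(t, 3, xi) for xi, wi in zip(x, w)])
    c, *_ = np.linalg.lstsq(A, w * y, rcond=None)
    return t, c
```

With one interior knot there are n = 2 intervals and n+3 = 5 coefficients, matching the count given above, and data drawn from a straight line is reproduced exactly.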
4. References
 [1] Unknown (1961) Chebyshev-series. Modern Computing Methods,
 Chapter 8. NPL Notes on Applied Science No. 16 (2nd Edition).
 HMSO.
+ [1] Cox M G (1972) The Numerical Evaluation of B-splines. J.
+ Inst. Math. Appl. 10 134-149.
 5. Parameters
+ [2] Cox M G (1974) A Data-fitting Package for the
+ Non-specialist User. Software for Numerical Mathematics.
+ (ed D J Evans) Academic Press.
 1: NP1  INTEGER Input
 On entry: n+1, where n is the degree of the given
 polynomial p(x). Thus NP1 is the number of coefficients in
 this polynomial. Constraint: NP1 >= 1.
+ [3] Cox M G (1975) Numerical methods for the interpolation and
+ approximation of data by spline functions. PhD Thesis. City
+ University, London.
 2: XMIN  DOUBLE PRECISION Input
+ [4] Cox M G and Hayes J G (1973) Curve fitting: a guide and
+ suite of algorithms for the non-specialist user. Report
+ NAC26. National Physical Laboratory.
 3: XMAX  DOUBLE PRECISION Input
 On entry: the lower and upper endpoints respectively of
 the interval [x_min, x_max]. The Chebyshev-series

+ [5] De Boor C (1972) On Calculating with B-splines. J.
+ Approx. Theory. 6 50-62.
 representation is in terms of the normalised variable xbar,
 where
 xbar = (2x - (x_max + x_min)) / (x_max - x_min).
 Constraint: XMAX > XMIN.
+ [6] Gentleman W M (1974) Algorithm AS 75. Basic Procedures
+ for Large Sparse or Weighted Linear Least-squares Problems.
+ Appl. Statist. 23 448-454.
 4: A(LA)  DOUBLE PRECISION array Input
 On entry: the Chebyshev coefficients of the polynomial
 p(xbar). Specifically, element 1+i*IA1 of A must contain the
 coefficient a_i, for i=0,1,...,n. Only these n+1 elements
 will be accessed.
+ [7] Gentleman W M (1973) Least-squares Computations by
+ Givens Transformations without Square Roots. J. Inst. Math.
+ Applic. 12 329-336.
 Unchanged on exit, but see AINT, below.
+ [8] Schoenberg I J and Whitney A (1953) On Polya Frequency
+ Functions III. Trans. Amer. Math. Soc. 74 246-259.
 5: IA1  INTEGER Input
 On entry: the index increment of A. Most frequently the
 Chebyshev coefficients are stored in adjacent elements of A,
 and IA1 must be set to 1. However, if, for example, they are
 stored in A(1),A(4),A(7),..., then the value of IA1 must be
 3. See also Section 8. Constraint: IA1 >= 1.
+ 5. Parameters
 6: LA  INTEGER Input
 On entry:
 the dimension of the array A as declared in the (sub)program
 from which E02AJF is called.
 Constraint: LA>=1+(NP11)*IA1.
+ 1: M  INTEGER Input
+ On entry: the number m of data points. Constraint: M >=
+ MDIST >= 4, where MDIST is the number of distinct x values
+ in the data.
+
+ 2: NCAP7  INTEGER Input
+
+
+ On entry: n+7, where n is the number of intervals of the
+ spline (which is one greater than the number of interior
+ knots, i.e., the knots strictly within the range x_1 to x_m)
+ over which the spline is defined. Constraint: 8 <= NCAP7 <=
+ MDIST + 4, where MDIST is the number of distinct x values in
+ the data.
+
+ 3: X(M)  DOUBLE PRECISION array Input
+ On entry: the values x_r of the independent variable
+ (abscissa), for r=1,2,...,m. Constraint:
+ x_1 <= x_2 <= ... <= x_m.
+
+ 4: Y(M)  DOUBLE PRECISION array Input
+ On entry: the values y_r of the dependent variable
+ (ordinate), for r=1,2,...,m.
 7: QATM1  DOUBLE PRECISION Input
 On entry: the value that the integrated polynomial is
 required to have at the lower endpoint of its interval of

+ 5: W(M)  DOUBLE PRECISION array Input
+ On entry: the values w_r of the weights, for r=1,2,...,m.
+ For advice on the choice of weights, see the Chapter
+ Introduction. Constraint: W(r) > 0, for r=1,2,...,m.
 definition, i.e., at xbar = -1, which corresponds to
 x = x_min. Thus, QATM1 is a constant of integration and will
 normally be set to zero by the user.
+ 6: LAMDA(NCAP7)  DOUBLE PRECISION array Input/Output
+ On entry: LAMDA(i) must be set to the (i-4)th (interior)
 8: AINT(LAINT)  DOUBLE PRECISION array Output
 On exit: the Chebyshev coefficients of the integral q(xbar).
 (The integration is with respect to the variable x, and the
 constant coefficient is chosen so that q(x_min) equals
 QATM1.) Specifically, element 1+i*IAINT1 of AINT contains
 the coefficient a'_i, for i=0,1,...,n+1. A call of the
 routine
+ knot, lambda_i, for i=5,6,...,n+3. Constraint: X(1) < LAMDA
 may have the array name AINT the same as A, provided that
 note is taken of the order in which elements are overwritten
 when choosing starting elements and increments IA1 and
 IAINT1: i.e., the coefficients a_0, a_1, ..., a_{i-2} must
 be intact after coefficient a'_i is stored. In particular it
 is possible to overwrite the a_i entirely by having
 IA1 = IAINT1, and the actual array for A and AINT identical.
+ (5) <= LAMDA(6) <= ... <= LAMDA(NCAP7-4) < X(M). On exit: the
+ input values are unchanged, and LAMDA(i), for i = 1, 2, 3,
+ 4, NCAP7-3, NCAP7-2, NCAP7-1, NCAP7, contains the additional
+ (exterior) knots introduced by the routine. For advice on
+ the choice of knots, see Section 3.3 of the Chapter
+ Introduction.
 9: IAINT1  INTEGER Input
 On entry: the index increment of AINT. Most frequently the
 Chebyshev coefficients are required in adjacent elements of
 AINT, and IAINT1 must be set to 1. However, if, for example,
 they are to be stored in AINT(1),AINT(4),AINT(7),..., then
 the value of IAINT1 must be 3. See also Section 8.
 Constraint: IAINT1 >= 1.
+ 7: WORK1(M)  DOUBLE PRECISION array Workspace
 10: LAINT  INTEGER Input
 On entry:
 the dimension of the array AINT as declared in the
 (sub)program from which E02AJF is called.
 Constraint: LAINT>=1+NP1*IAINT1.
+ 8: WORK2(4*NCAP7)  DOUBLE PRECISION array Workspace
+
+ 9: C(NCAP7)  DOUBLE PRECISION array Output
+ On exit: the coefficient c_i of the B-spline N_i(x), for
+ i=1,2,...,n+3. The remaining elements of the array are not
+ used.
+
+ 10: SS  DOUBLE PRECISION Output
+ On exit: the residual sum of squares, (theta).
11: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. For users not
@@ 86138,43 +88137,114 @@ have been determined.
Errors detected by the routine:
IFAIL= 1
 On entry NP1 < 1,
+ The knots fail to satisfy the condition
 or XMAX <= XMIN,
+ X(1) < LAMDA(5) <= LAMDA(6) <= ... <= LAMDA(NCAP7-4) < X(M).
+ Thus the knots are not in correct order or are not interior
+ to the data interval.
 or IA1 < 1,
+ IFAIL= 2
+ The weights are not all strictly positive.
 or LA<=(NP11)*IA1,
+ IFAIL= 3
+ The values of X(r), for r = 1,2,...,M are not in
+ non-decreasing order.
 or IAINT1 < 1,
+ IFAIL= 4
+ NCAP7 < 8 (so the number of interior knots is negative) or
+ NCAP7 > MDIST + 4, where MDIST is the number of distinct x
+ values in the data (so there cannot be a unique solution).
 or LAINT<=NP1*IAINT1.
+ IFAIL= 5
+ The conditions specified by Schoenberg and Whitney [8] fail
+ to hold for at least one subset of the distinct data
+ abscissae. That is, there is no subset of NCAP7-4 strictly
+ increasing values, X(R(1)),X(R(2)),...,X(R(NCAP7-4)), among
+ the abscissae such that
+ X(R(1)) < LAMDA(5) < X(R(5)),
+
+ X(R(2)) < LAMDA(6) < X(R(6)),
+
+ ...
+
+ X(R(NCAP7-8)) < LAMDA(NCAP7-4) < X(R(NCAP7-4)).
+ This means that there is no unique solution: there are
+ regions containing too many knots compared with the number
+ of data points.
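The Schoenberg-Whitney interleaving test behind IFAIL = 5 can be sketched with a greedy left-to-right scan. This is an illustrative Python check using a simplified half-open-support form of the condition (each coefficient needs a distinct abscissa inside the support of its B-spline); the name `schoenberg_whitney_ok` is hypothetical and this is not the test E02BAF itself performs.

```python
def schoenberg_whitney_ok(t, xs, k=3):
    """Greedy Schoenberg-Whitney feasibility check (illustrative sketch).

    t  : non-decreasing knot vector (coincident end knots assumed)
    xs : sorted distinct data abscissae
    Returns True if some strictly increasing choice x_{r_1} < ... < x_{r_q}
    has x_{r_i} in the support [t[i], t[i+k+1]) of the i-th B-spline.
    Greedy choice of the leftmost unused abscissa is optimal here because
    both interval endpoints are non-decreasing in i.
    """
    q = len(t) - k - 1          # number of B-spline coefficients, q = n+3
    pos = 0
    for i in range(q):
        # skip abscissae that lie to the left of this B-spline's support
        while pos < len(xs) and xs[pos] < t[i]:
            pos += 1
        if pos >= len(xs) or xs[pos] >= t[i + k + 1]:
            return False        # no usable abscissa: too many knots here
        pos += 1                # consume it; the r_i are strictly increasing
    return True
```

With too many data points crowded left of an interior knot, the check fails, mirroring the "regions containing too many knots" failure described above.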
7. Accuracy
 In general there is a gain in precision in numerical integration,
 in this case associated with the division by 2i in the formula
 quoted in Section 3.
+ The rounding errors committed are such that the computed
+ coefficients are exact for a slightly perturbed set of ordinates
+ y_r+(delta)y_r. The ratio of the root-mean-square value of the
+ (delta)y_r to the root-mean-square value of the y_r can be
+ expected to be less than a small multiple of (kappa)*m*machine
+ precision, where (kappa) is a condition number for the problem.
+ Values of (kappa) for 20-30 practical data sets all proved to lie
+ between 4.5 and 7.8 (see Cox [3]). (Note that for these data
+ sets, replacing the coincident end knots at the endpoints x_1 and
+ x_m used in the routine by various choices of non-coincident
+ exterior knots gave values of (kappa) between 16 and 180. Again
+ see Cox [3] for further details.) In general we would not expect
+ (kappa) to be large unless the choice of knots results in
+ near-violation of the Schoenberg-Whitney conditions.
+
+ A cubic spline which adequately fits the data and is free from
+ spurious oscillations is more likely to be obtained if the knots
+ are chosen to be grouped more closely in regions where the
+ function (underlying the data) or its derivatives change more
+ rapidly than elsewhere.
8. Further Comments
 The time taken by the routine is approximately proportional to
 n+1.
+
 The increments IA1, IAINT1 are included as parameters to give a
 degree of flexibility which, for example, allows a polynomial in
 two variables to be integrated with respect to either variable
 without rearranging the coefficients.
+ The time taken by the routine is approximately C*(2m+n+7)
+ seconds, where C is a machinedependent constant.
+
+ Multiple knots are permitted as long as their multiplicity does
+ not exceed 4, i.e., the complete set of knots must satisfy
+ (lambda)_i < (lambda)_(i+4), for i=1,2,...,n+3, (cf. Section 6).
+ At a knot of multiplicity one (the usual case), s(x) and its
+ first two derivatives are continuous. At a knot of multiplicity
+ two, s(x) and its first derivative are continuous. At a knot of
+ multiplicity three, s(x) is continuous, and at a knot of
+ multiplicity four, s(x) is generally discontinuous.
+
+ The routine can be used efficiently for cubic spline
+ interpolation, i.e., if m=n+3. The abscissae must then of course
+ satisfy x_1 < x_2 < ... < x_m.
+
+          q
+ s(x) =  >  c_i N_i(x)
+         i=1
 the first kind of degree j with argument x. It is assumed that

+
 the independent variable x in the interval [1,+1] was obtained
 from the user's original variable x in the interval [x ,x ]
 min max
 by the linear transformation
+ Here q=n+3, where n is the number of intervals of the spline, and
+ N_i(x) denotes the normalised B-spline of degree 3 defined upon
+ the knots (lambda)_i,(lambda)_(i+1),...,(lambda)_(i+4). The
+ prescribed argument x must satisfy
+ (lambda)_4 <= x <= (lambda)_(n+4).
 x = (2x-(x_max+x_min))/(x_max-x_min).
+
 The coefficients a may be supplied in the array A, with any
 i
 increment between the indices of array elements which contain
 successive coefficients. This enables the routine to be used in
 surface fitting and other applications, in which the array might
 have two or more dimensions.
+ It is assumed that (lambda)_j >= (lambda)_(j-1), for
+ j=2,3,...,n+7, and (lambda)_(n+4) > (lambda)_4.
 The method employed is based upon the threeterm recurrence
 relation due to Clenshaw [1], with modifications due to Reinsch
 and Gentleman (see [4]). For further details of the algorithm and
 its use see Cox [2] and Cox and Hayes [3].
+ The method employed is that of evaluation by taking convex
+ combinations due to de Boor [4]. For further details of the
+ algorithm and its use see Cox [1] and [3].
+
+ It is expected that a common use of E02BBF will be the evaluation
+ of the cubic spline approximations produced by E02BAF. A
+ generalization of E02BBF which also forms the derivative of s(x)
+ is E02BCF. E02BCF takes about 50% longer than E02BBF.
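The evaluation by convex combinations due to de Boor can be sketched in Python (an illustrative rendering of the algorithm, not the routine's Fortran), using the knots and coefficients of the Section 9 example:

```python
def deboor_eval(t, c, k, x):
    """Evaluate a degree-k spline (knots t, B-spline coefficients c) at x
    by de Boor's convex-combination recurrence."""
    # Locate the knot interval: t[j] <= x < t[j+1], clamped to k <= j <= len(t)-k-2.
    j = k
    while j < len(t) - k - 2 and x >= t[j + 1]:
        j += 1
    d = [c[i + j - k] for i in range(k + 1)]   # the k+1 active coefficients
    for r in range(1, k + 1):
        for i in range(k, r - 1, -1):
            alpha = (x - t[i + j - k]) / (t[i + 1 + j - r] - t[i + j - k])
            d[i] = (1.0 - alpha) * d[i - 1] + alpha * d[i]
    return d[k]

# Knots and coefficients from the Section 9 example for this routine:
lam = [1.0, 1.0, 1.0, 1.0, 3.0, 6.0, 8.0, 9.0, 9.0, 9.0, 9.0]
c = [1.0, 2.0, 4.0, 7.0, 6.0, 4.0, 3.0]
values = [deboor_eval(lam, c, 3, x) for x in range(1, 10)]
```

Each step forms convex combinations of neighbouring coefficients, which is why the scheme is unconditionally stable: every intermediate value lies between the coefficients it combines.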
4. References
 [1] Clenshaw C W (1955) A Note on the Summation of Chebyshev
 series. Math. Tables Aids Comput. 9 118120.
+ [1] Cox M G (1972) The Numerical Evaluation of B-splines. J.
+ Inst. Math. Appl. 10 134-149.
 [2] Cox M G (1973) A datafitting package for the nonspecialist
 user. Report NAC40. National Physical Laboratory.
+ [2] Cox M G (1978) The Numerical Evaluation of a Spline from its
+ B-spline Representation. J. Inst. Math. Appl. 21 135-143.
[3] Cox M G and Hayes J G (1973) Curve fitting: a guide and
suite of algorithms for the nonspecialist user. Report
NAC26. National Physical Laboratory.
 [4] Gentlemen W M (1969) An Error Analysis of Goertzel's
 (Watt's) Method for Computing Fourier Coefficients. Comput.
 J. 12 160165.
+ [4] De Boor C (1972) On Calculating with B-splines. J. Approx.
+ Theory. 6 50-62.
5. Parameters
 1: NP1  INTEGER Input
 On entry: n+1, where n is the degree of the given


 polynomial p(x). Constraint: NP1 >= 1.

 2: XMIN  DOUBLE PRECISION Input

 3: XMAX  DOUBLE PRECISION Input
 On entry: the lower and upper endpoints respectively of
 the interval [x ,x ]. The Chebyshevseries
 min max

+ 1: NCAP7  INTEGER Input
+
 representation is in terms of the normalised variable x,
 where
 2x(x +x )
 max min
 x= .
 x x
 max min
 Constraint: XMIN < XMAX.
+ On entry: n+7, where n is the number of intervals (one
+ greater than the number of interior knots, i.e., the knots
+ strictly within the range (lambda)_4 to (lambda)_(n+4)) over
+ which the spline is defined. Constraint: NCAP7 >= 8.
 4: A(LA)  DOUBLE PRECISION array Input
 On entry: the Chebyshev coefficients of the polynomial p(x).
 Specifically, element 1+i*IA1 must contain the coefficient
 a , for i=0,1,...,n. Only these n+1 elements will be
 i
 accessed.
+ 2: LAMDA(NCAP7)  DOUBLE PRECISION array Input
+ On entry: LAMDA(j) must be set to the value of the jth
+ member of the complete set of knots, (lambda)_j, for
+
 5: IA1  INTEGER Input
 On entry: the index increment of A. Most frequently, the
 Chebyshev coefficients are stored in adjacent elements of A,
 and IA1 must be set to 1. However, if, for example, they are
 stored in A(1),A(4),A(7),..., then the value of IA1 must be
 3. Constraint: IA1 >= 1.
+ j=1,2,...,n+7. Constraint: the LAMDA(j) must be in
+ non-decreasing order with LAMDA(NCAP7-3) > LAMDA(4).
 6: LA  INTEGER Input
 On entry:
 the dimension of the array A as declared in the (sub)program
 from which E02AKF is called.
 Constraint: LA>=(NP11)*IA1+1.
+ 3: C(NCAP7)  DOUBLE PRECISION array Input
+ On entry: the coefficient c_i of the B-spline N_i(x), for
+
 7: X  DOUBLE PRECISION Input
 On entry: the argument x at which the polynomial is to be
 evaluated. Constraint: XMIN <= X <= XMAX.
+ i=1,2,...,n+3. The remaining elements of the array are not
+ used.
 8: RESULT  DOUBLE PRECISION Output

+ 4: X  DOUBLE PRECISION Input
+ On entry: the argument x at which the cubic spline is to be
+ evaluated. Constraint: LAMDA(4) <= X <= LAMDA(NCAP7-3).
 On exit: the value of the polynomial p(x).
+ 5: S  DOUBLE PRECISION Output
+ On exit: the value of the spline, s(x).
 9: IFAIL  INTEGER Input/Output
+ 6: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, 1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ 86333,41 +88378,58 @@ have been determined.
Errors detected by the routine:
IFAIL= 1
 On entry NP1 < 1,

 or IA1 < 1,

 or LA<=(NP11)*IA1,
+ The argument X does not satisfy LAMDA(4) <= X <=
+ LAMDA(NCAP7-3).
 or XMIN >= XMAX.
+ In this case the value of S is set arbitrarily to zero.
IFAIL= 2
 X does not satisfy the restriction XMIN <= X <= XMAX.
+ NCAP7 < 8, i.e., the number of interior knots is negative.
7. Accuracy
 The rounding errors are such that the computed value of the
 polynomial is exact for a slightly perturbed set of coefficients
 a +(delta)a . The ratio of the sum of the absolute values of the
 i i
 (delta)a to the sum of the absolute values of the a is less
 i i
 than a small multiple of (n+1)*machine precision.
+ The computed value of s(x) has negligible error in most practical
+ situations. Specifically, this value has an absolute error
+ bounded in modulus by 18*c_max*machine precision, where c_max is
+ the largest in modulus of c_j, c_(j+1), c_(j+2) and c_(j+3), and
+ j is an integer such that (lambda)_(j+3) <= x <= (lambda)_(j+4).
+ If c_j, c_(j+1), c_(j+2) and c_(j+3) are all of the same sign,
+ then the computed value of s(x) has a relative error not
+ exceeding 20*machine precision in modulus. For further details
+ see Cox [2].
8. Further Comments
 The time taken by the routine is approximately proportional to
 n+1.
+
+
+ The time taken by the routine is approximately C*(1+0.1*log(n+7))
+ seconds, where C is a machinedependent constant.
+
+ Note: the routine does not test all the conditions on the knots
+ given in the description of LAMDA in Section 5, since to do this
+ would result in a computation time approximately linear in n+7
+ instead of log(n+7). All the conditions are tested in E02BAF,
+ however.
9. Example
 Suppose a polynomial has been computed in Chebyshevseries form
 to fit data over the interval [0.5,2.5]. The example program
 evaluates the polynomial at 4 equally spaced points over the
 interval. (For the purposes of this example, XMIN, XMAX and the
 Chebyshev coefficients are supplied in DATA statements. Normally
 a program would first read in or generate data and compute the
 fitted polynomial.)
+ Evaluate at 9 equally spaced points in the interval 1.0<=x<=9.0
+ the cubic spline with (augmented) knots 1.0, 1.0, 1.0, 1.0, 3.0,
+ 6.0, 8.0, 9.0, 9.0, 9.0, 9.0 and normalised cubic B-spline
+ coefficients 1.0, 2.0, 4.0, 7.0, 6.0, 4.0, 3.0.
+
+ The example program is written in a general form that will enable
+ a cubic spline with n intervals, in its normalised cubic B-spline
+ form, to be evaluated at m equally spaced points in the interval
+ LAMDA(4) <= x <= LAMDA(n+4). The program is self-starting in that
+ any number of data sets may be supplied.
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ 86375,8 +88437,8 @@ have been determined.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02  Curve and Surface Fitting E02BAF
 E02BAF  NAG Foundation Library Routine Document
+ E02  Curve and Surface Fitting E02BCF
+ E02BCF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementationdependent details.
@@ 86385,195 +88447,146 @@ have been determined.
1. Purpose
 E02BAF computes a weighted leastsquares approximation to an
 arbitrary set of data points by a cubic spline with knots
 prescribed by the user. Cubic spline interpolation can also be
 carried out.
+ E02BCF evaluates a cubic spline and its first three derivatives
+ from its B-spline representation.
2. Specification
 SUBROUTINE E02BAF (M, NCAP7, X, Y, W, LAMDA, WORK1, WORK2,
 1 C, SS, IFAIL)
 INTEGER M, NCAP7, IFAIL
 DOUBLE PRECISION X(M), Y(M), W(M), LAMDA(NCAP7), WORK1(M),
 1 WORK2(4*NCAP7), C(NCAP7), SS
+ SUBROUTINE E02BCF (NCAP7, LAMDA, C, X, LEFT, S, IFAIL)
+ INTEGER NCAP7, LEFT, IFAIL
+ DOUBLE PRECISION LAMDA(NCAP7), C(NCAP7), X, S(4)
3. Description
 This routine determines a leastsquares cubic spline
 approximation s(x) to the set of data points (x ,y ) with weights
 r r


 w , for r=1,2,...,m. The value of NCAP7 = n+7, where n is the
 r
 number of intervals of the spline (one greater than the number of
 interior knots), and the values of the knots
 (lambda) ,(lambda) ,...,(lambda) , interior to the data
 5 6 n+3
 interval, are prescribed by the user.

 s(x) has the property that it minimizes (theta), the sum of
 squares of the weighted residuals (epsilon) , for r=1,2,...,m,
 r
 where

 (epsilon) =w (y s(x )).
 r r r r
+ This routine evaluates the cubic spline s(x) and its first three
+ derivatives at a prescribed argument x. It is assumed that s(x)
+ is represented in terms of its B-spline coefficients c_i, for
+
 The routine produces this minimizing value of (theta) and the

+ i=1,2,...,n+3, and (augmented) ordered knot set (lambda)_i, for
+
 coefficients c ,c ,...,c , where q=n+3, in the Bspline
 1 2 q
 representation
+ i=1,2,...,n+7 (see E02BAF), i.e.,
q

 s(x)= > c N (x).
+ s(x) =  >  c_i N_i(x)
i=1
 Here N (x) denotes the normalised Bspline of degree 3 defined
 i
 upon the knots (lambda) ,(lambda) ,...,(lambda) .
 i i+1 i+4

 In order to define the full set of Bsplines required, eight
 additional knots (lambda) ,(lambda) ,(lambda) ,(lambda) and
 1 2 3 4
 (lambda) ,(lambda) ,(lambda) ,(lambda) are inserted
 n+4 n+5 n+6 n+7
 automatically by the routine. The first four of these are set
 equal to the smallest x and the last four to the largest x .
 r r

 The representation of s(x) in terms of Bsplines is the most
+
 compact form possible in that only n+3 coefficients, in addition

+ Here q=n+3, n is the number of intervals of the spline and N_i(x)
+ denotes the normalised B-spline of degree 3 (order 4) defined
+ upon the knots (lambda)_i,(lambda)_(i+1),...,(lambda)_(i+4). The
+ prescribed argument x must satisfy
 to the n+7 knots, fully define s(x).
+ (lambda)_4 <= x <= (lambda)_(n+4).
 The method employed involves forming and then computing the
 leastsquares solution of a set of m linear equations in the

+ At a simple knot (lambda)_i (i.e., one satisfying
+ (lambda)_(i-1) < (lambda)_i < (lambda)_(i+1)), the third
+ derivative of the spline is in general discontinuous. At a
+ multiple knot (i.e., two or more knots with the same value),
+ lower derivatives, and even the spline itself, may be
+ discontinuous. Specifically, at a point x=u where (exactly) r
+ knots coincide (such a point is termed a knot of multiplicity r),
+ the values of the derivatives of order 4-j, for j=1,2,...,r, are
+ in general discontinuous. (Here 1<=r<=4; r>4 is not meaningful.)
+ The user must specify whether the value at such a point is
+ required to be the left- or right-hand derivative.
 coefficients c (i=1,2,...,n+3). The equations are formed using a
 i
 recurrence relation for Bsplines that is unconditionally stable
 (Cox [1], de Boor [5]), even for multiple (coincident) knots. The
 leastsquares solution is also obtained in a stable manner by
 using orthogonal transformations, viz. a variant of Givens
 rotations (Gentleman [6] and [7]). This requires only one
 equation to be stored at a time. Full advantage is taken of the
 structure of the equations, there being at most four nonzero
 values of N (x) for any value of x and hence at most four
 i
 coefficients in each equation.
+ The method employed is based upon:
 For further details of the algorithm and its use see Cox [2], [3]
 and [4].
+ (i) carrying out a binary search for the knot interval
+ containing the argument x (see Cox [3]),
 Subsequent evaluation of s(x) from its Bspline representation
 may be carried out using E02BBF. If derivatives of s(x) are also
 required, E02BCF may be used. E02BDF can be used to compute the
 definite integral of s(x).
+ (ii) evaluating the nonzero Bsplines of orders 1,2,3 and
+ 4 by recurrence (see Cox [2] and [3]),
 4. References
+ (iii) computing all derivatives of the B-splines of order 4
+ by applying a second recurrence to these computed B-spline
+ values (see de Boor [1]),
 [1] Cox M G (1972) The Numerical Evaluation of Bsplines. J.
 Inst. Math. Appl. 10 134149.
+ (iv) multiplying the 4th-order B-spline values and their
+ derivatives by the appropriate B-spline coefficients, and
+ summing, to yield the values of s(x) and its derivatives.
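Steps (iii) and (iv) rest on the standard identity that the derivative of a degree-k B-spline expansion is a degree-(k-1) expansion with differenced coefficients. A small Python sketch of that identity (illustrative only, not the routine's actual recurrences; `nval` and `spline_derivative` are hypothetical names):

```python
def nval(i, k, t, x):
    """Normalised B-spline N_{i,k}(x) by the Cox-de Boor recurrence."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    v = 0.0
    if t[i + k] > t[i]:
        v += (x - t[i]) / (t[i + k] - t[i]) * nval(i, k - 1, t, x)
    if t[i + k + 1] > t[i + 1]:
        v += (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * nval(i + 1, k - 1, t, x)
    return v

def spline_derivative(t, c, k):
    """Knots and coefficients of s'(x) for s(x) = sum_i c_i N_{i,k}(x):
    d_i = k*(c[i+1]-c[i])/(t[i+k+1]-t[i+1]), taken as 0 where the knot
    gap vanishes; the derivative lives on the trimmed knot set t[1:-1]."""
    d = []
    for i in range(len(c) - 1):
        gap = t[i + k + 1] - t[i + 1]
        d.append(0.0 if gap == 0.0 else k * (c[i + 1] - c[i]) / gap)
    return t[1:-1], d
```

On the clamped knots [0,0,0,0,1,1,1,1] with coefficients [0,0,0,1] (so s(x) = x**3 on [0,1]), differencing yields the quadratic expansion of 3*x**2.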
 [2] Cox M G (1974) A Datafitting Package for the Nonspecialist
 User. Software for Numerical Mathematics. (ed D J Evans)
 Academic Press.
+ E02BCF can be used to compute the values and derivatives of cubic
+ spline fits and interpolants produced by E02BAF.
 [3] Cox M G (1975) Numerical methods for the interpolation and
 approximation of data by spline functions. PhD Thesis. City
 University, London.
+ If only values and not derivatives are required, E02BBF may be
+ used instead of E02BCF, which takes about 50% longer than E02BBF.
 [4] Cox M G and Hayes J G (1973) Curve fitting: a guide and
 suite of algorithms for the nonspecialist user. Report
 NAC26. National Physical Laboratory.
+ 4. References
 [5] De Boor C (1972) On Calculating with Bsplines. J. Approx.
+ [1] De Boor C (1972) On Calculating with B-splines. J. Approx.
Theory. 6 50-62.
 [6] Gentleman W M (1974) Algorithm AS 75. Basic Procedures for
 Large Sparse or Weighted Linear Leastsquares Problems.
 Appl. Statist. 23 448454.

 [7] Gentleman W M (1973) Leastsquares Computations by Givens
 Transformations without Square Roots. J. Inst. Math. Applic.
 12 329336.
+ [2] Cox M G (1972) The Numerical Evaluation of B-splines. J.
+ Inst. Math. Appl. 10 134-149.
 [8] Schoenberg I J and Whitney A (1953) On Polya Frequency
 Functions III. Trans. Amer. Math. Soc. 74 246259.
+ [3] Cox M G (1978) The Numerical Evaluation of a Spline from its
+ B-spline Representation. J. Inst. Math. Appl. 21 135-143.
5. Parameters
 1: M  INTEGER Input
 On entry: the number m of data points. Constraint: M >=
 MDIST >= 4, where MDIST is the number of distinct x values
 in the data.

 2: NCAP7  INTEGER Input
+ 1: NCAP7  INTEGER Input
On entry: n+7, where n is the number of intervals of the
spline (which is one greater than the number of interior
 knots, i.e., the knots strictly within the range x to x )
 1 m
 over which the spline is defined. Constraint: 8 <= NCAP7 <=
 MDIST + 4, where MDIST is the number of distinct x values in
 the data.

 3: X(M)  DOUBLE PRECISION array Input
 On entry: the values x of the independent variable
 r
 (abscissa), for r=1,2,...,m. Constraint: x <=x <=...<=x .
 1 2 m

 4: Y(M)  DOUBLE PRECISION array Input
 On entry: the values y of the dependent variable
 r
 (ordinate), for r=1,2,...,m.

 5: W(M)  DOUBLE PRECISION array Input
 On entry: the values w of the weights, for r=1,2,...,m.
 r
 For advice on the choice of weights, see the Chapter
 Introduction. Constraint: W(r) > 0, for r=1,2,...,m.

 6: LAMDA(NCAP7)  DOUBLE PRECISION array Input/Output
 On entry: LAMDA(i) must be set to the (i4)th (interior)
+ knots, i.e., the knots strictly within the range (lambda)_4
+ to (lambda)_(n+4)) over which the spline is defined.
+ Constraint: NCAP7 >= 8.
 knot, (lambda) , for i=5,6,...,n+3. Constraint: X(1) < LAMDA
 i
 (5) <= LAMDA(6) <=... <= LAMDA(NCAP74) < X(M). On exit: the
 input values are unchanged, and LAMDA(i), for i = 1, 2, 3,
 4, NCAP73, NCAP72, NCAP71, NCAP7 contains the additional
 (exterior) knots introduced by the routine. For advice on
 the choice of knots, see Section 3.3 of the Chapter
 Introduction.
+ 2: LAMDA(NCAP7)  DOUBLE PRECISION array Input
+ On entry: LAMDA(j) must be set to the value of the jth
+ member of the complete set of knots, (lambda)_j, for
+
 7: WORK1(M)  DOUBLE PRECISION array Workspace
+ j=1,2,...,n+7. Constraint: the LAMDA(j) must be in
+ non-decreasing order with
 8: WORK2(4*NCAP7)  DOUBLE PRECISION array Workspace
+ LAMDA(NCAP7-3) > LAMDA(4).
 9: C(NCAP7)  DOUBLE PRECISION array Output
 On exit: the coefficient c of the Bspline N (x), for
 i i

+ 3: C(NCAP7)  DOUBLE PRECISION array Input
+ On entry: the coefficient c_i of the B-spline N_i(x), for
i=1,2,...,n+3. The remaining elements of the array are not
used.
 10: SS  DOUBLE PRECISION Output
 On exit: the residual sum of squares, (theta).
+ 4: X  DOUBLE PRECISION Input
+ On entry: the argument x at which the cubic spline and its
+ derivatives are to be evaluated. Constraint: LAMDA(4) <= X
+ <= LAMDA(NCAP7-3).
 11: IFAIL  INTEGER Input/Output
+ 5: LEFT  INTEGER Input
+ On entry: specifies whether left- or right-hand values of
+ the spline and its derivatives are to be computed (see
+ Section 3). Left- or right-hand values are formed according
+ to whether LEFT is equal or not equal to 1. If x does not
+ coincide with a knot, the value of LEFT is immaterial. If x
+ = LAMDA(4), right-hand values are computed, and if x =
+ LAMDA(NCAP7-3), left-hand values are formed, regardless of
+ the value of LEFT.
+
+ 6: S(4)  DOUBLE PRECISION array Output
+ On exit: S(j) contains the value of the (j-1)th derivative
+ of the spline at the argument x, for j = 1,2,3,4. Note that
+ S(1) contains the value of the spline.
+
+ 7: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ 86586,114 +88599,77 @@ have been determined.
Errors detected by the routine:
IFAIL= 1
 The knots fail to satisfy the condition

 X(1) < LAMDA(5) <= LAMDA(6) <=... <= LAMDA(NCAP74) < X(M).
 Thus the knots are not in correct order or are not interior
 to the data interval.
+ NCAP7 < 8, i.e., the number of intervals is not positive.
IFAIL= 2
 The weights are not all strictly positive.

 IFAIL= 3
 The values of X(r), for r = 1,2,...,M are not in non
 decreasing order.

 IFAIL= 4
 NCAP7 < 8 (so the number of interior knots is negative) or
 NCAP7 > MDIST + 4, where MDIST is the number of distinct x
 values in the data (so there cannot be a unique solution).
+ Either LAMDA(4) >= LAMDA(NCAP7-3), i.e., the range over
+ which s(x) is defined is null or negative in length, or X is
+ an invalid argument, i.e., X < LAMDA(4) or X >
+ LAMDA(NCAP7-3).
 IFAIL= 5
 The conditions specified by Schoenberg and Whitney [8] fail
 to hold for at least one subset of the distinct data
 abscissae. That is, there is no subset of NCAP74 strictly
 increasing values, X(R(1)),X(R(2)),...,X(R(NCAP74)), among
 the abscissae such that
 X(R(1)) < LAMDA(1) < X(R(5)),
+ 7. Accuracy
 X(R(2)) < LAMDA(2) < X(R(6)),
+ The computed value of s(x) has negligible error in most practical
+ situations. Specifically, this value has an absolute error
+ bounded in modulus by 18*c_max*machine precision, where c_max is
+ the largest in modulus of c_j, c_(j+1), c_(j+2) and c_(j+3), and
+ j is an integer such that (lambda)_(j+3) <= x <= (lambda)_(j+4).
+ If c_j, c_(j+1), c_(j+2) and c_(j+3) are all of the same sign,
+ then the computed value of s(x) has relative error bounded by
+ 18*machine precision. For full details see Cox [3].
 ...
+ No complete error analysis is available for the computation of
+ the derivatives of s(x). However, for most practical purposes the
+ absolute errors in the computed derivatives should be small.
 X(R(NCAP78)) < LAMDA(NCAP78) < X(R(NCAP74)).
 This means that there is no unique solution: there are
 regions containing too many knots compared with the number
 of data points.
+ 8. Further Comments
 7. Accuracy
+ The time taken by this routine is approximately linear in
+
 The rounding errors committed are such that the computed
 coefficients are exact for a slightly perturbed set of ordinates
 y +(delta)y . The ratio of the rootmeansquare value for the
 r r
 (delta)y to the rootmeansquare value of the y can be expected
 r r
 to be less than a small multiple of (kappa)*m*machine precision,
 where (kappa) is a condition number for the problem. Values of
 (kappa) for 2030 practical data sets all proved to lie between
 4.5 and 7.8 (see Cox [3]). (Note that for these data sets,
 replacing the coincident end knots at the endpoints x and x
 1 m
 used in the routine by various choices of noncoincident exterior
 knots gave values of (kappa) between 16 and 180. Again see Cox
 [3] for further details.) In general we would not expect (kappa)
 to be large unless the choice of knots results in nearviolation
 of the SchoenbergWhitney conditions.
+ log(n+7).
 A cubic spline which adequately fits the data and is free from
 spurious oscillations is more likely to be obtained if the knots
 are chosen to be grouped more closely in regions where the
 function (underlying the data) or its derivatives change more
 rapidly than elsewhere.
+ Note: the routine does not test all the conditions on the knots
+ given in the description of LAMDA in Section 5, since to do this
+
 8. Further Comments
+ would result in a computation time approximately linear in n+7
+

+ instead of log(n+7). All the conditions are tested in E02BAF,
+ however.
 The time taken by the routine is approximately C*(2m+n+7)
 seconds, where C is a machinedependent constant.
+ 9. Example
 Multiple knots are permitted as long as their multiplicity does
 not exceed 4, i.e., the complete set of knots must satisfy

+ Compute, at the 7 arguments x = 0, 1, 2, 3, 4, 5, 6, the left-
+ and right-hand values and first 3 derivatives of the cubic spline
+ defined over the interval 0<=x<=6 having the 6 interior knots
+ (lambda) = 1, 3, 3, 3, 4, 4, the 8 additional knots 0, 0, 0, 0,
+ 6, 6, 6, 6, and the 10 B-spline coefficients 10, 12, 13, 15, 22,
+ 26, 24, 18, 14, 12.
 (lambda) <(lambda) , for i=1,2,...,n+3, (cf. Section 6). At a
 i i+4
 knot of multiplicity one (the usual case), s(x) and its first two
 derivatives are continuous. At a knot of multiplicity two, s(x)
 and its first derivative are continuous. At a knot of
 multiplicity three, s(x) is continuous, and at a knot of
 multiplicity four, s(x) is generally discontinous.
+ The input data items (using the notation of Section 5) comprise
+ the following values in the order indicated:
 The routine can be used efficiently for cubic spline

+
 interpolation, i.e.,if m=n+3. The abscissae must then of course
 satisfy x c N (x)
+ s(x)= > c N (x).
 i i
i=1


 Here q=n+3, where n is the number of intervals of the spline, and
 N (x) denotes the normalised Bspline of degree 3 defined upon
 i
 the knots (lambda) ,(lambda) ,...,(lambda) . The prescribed
 i i+1 i+4
 argument x must satisfy (lambda) <=x<=(lambda) .
 4 n+4


+
 It is assumed that (lambda) >=(lambda) , for j=2,3,...,n+7, and
 j j1
 (lambda) >(lambda) .
 4
 n+4
+ Here q=n+3, n is the number of intervals of the spline and N_i(x)
+ denotes the normalised B-spline of degree 3 (order 4) defined
+ upon the knots (lambda)_i,(lambda)_(i+1),...,(lambda)_(i+4).
 The method employed is that of evaluation by taking convex
 combinations due to de Boor [4]. For further details of the
 algorithm and its use see Cox [1] and [3].
+ The method employed uses the formula given in Section 3 of Cox
+ [1].
 It is expected that a common use of E02BBF will be the evaluation
 of the cubic spline approximations produced by E02BAF. A
 generalization of E02BBF which also forms the derivative of s(x)
 is E02BCF. E02BCF takes about 50% longer than E02BBF.
+ E02BDF can be used to determine the definite integrals of cubic
+ spline fits and interpolants produced by E02BAF.
4. References
 [1] Cox M G (1972) The Numerical Evaluation of Bsplines. J.
 Inst. Math. Appl. 10 134149.

 [2] Cox M G (1978) The Numerical Evaluation of a Spline from its
 Bspline Representation. J. Inst. Math. Appl. 21 135143.

 [3] Cox M G and Hayes J G (1973) Curve fitting: a guide and
 suite of algorithms for the nonspecialist user. Report
 NAC26. National Physical Laboratory.

 [4] De Boor C (1972) On Calculating with Bsplines. J. Approx.
 Theory. 6 5062.
+ [1] Cox M G (1975) An Algorithm for Spline Interpolation. J.
+ Inst. Math. Appl. 15 95-108.
5. Parameters
1: NCAP7  INTEGER Input
 On entry: n+7, where n is the number of intervals (one
 greater than the number of interior knots, i.e., the knots
 strictly within the range (lambda) to (lambda) ) over
 4
 n+4
 which the spline is defined. Constraint: NCAP7 >= 8.
+ On entry: n+7, where n is the number of intervals of the
+ spline (which is one greater than the number of interior
+ knots, i.e., the knots strictly within the range a to b)
+ over which the spline is defined. Constraint: NCAP7 >= 8.
2: LAMDA(NCAP7)  DOUBLE PRECISION array Input
On entry: LAMDA(j) must be set to the value of the jth
@@ 86797,7 +88758,11 @@ have been determined.
j=1,2,...,n+7. Constraint: the LAMDA(j) must be in non
 decreasing order with LAMDA(NCAP73) > LAMDA(4).
+ decreasing order with LAMDA(NCAP7-3) > LAMDA(4) and satisfy
+ LAMDA(1)=LAMDA(2)=LAMDA(3)=LAMDA(4)
+ and
+
+ LAMDA(NCAP7-3)=LAMDA(NCAP7-2)=LAMDA(NCAP7-1)=LAMDA(NCAP7).
3: C(NCAP7)  DOUBLE PRECISION array Input
On entry: the coefficient c of the Bspline N (x), for
@@ 86807,318 +88772,506 @@ have been determined.
i=1,2,...,n+3. The remaining elements of the array are not
used.
 4: X  DOUBLE PRECISION Input
 On entry: the argument x at which the cubic spline is to be
 evaluated. Constraint: LAMDA(4) <= X <= LAMDA(NCAP73).
+ 4: DEFINT  DOUBLE PRECISION Output
+ On exit: the value of the definite integral of s(x) between
+ the limits x=a and x=b, where a=(lambda)_4 and
+ b=(lambda)_(n+4).
 5: S  DOUBLE PRECISION Output
 On exit: the value of the spline, s(x).
+ 5: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, -1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
+
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
+
+ 6. Error Indicators and Warnings
+
+ Errors detected by the routine:
+
+ If on entry IFAIL = 0 or 1, explanatory error messages are
+ output on the current error message unit (as defined by X04AAF).
+
+ IFAIL= 1
+ NCAP7 < 8, i.e., the number of intervals is not positive.
+
+ IFAIL= 2
+ At least one of the following restrictions on the knots is
+ violated:
+ LAMDA(NCAP7-3) > LAMDA(4),
+
+ LAMDA(j) >= LAMDA(j-1),
+ for j = 2,3,...,NCAP7, with equality in the cases
+ j=2,3,4,NCAP7-2,NCAP7-1, and NCAP7.
+
+ 7. Accuracy
+
+ The rounding errors are such that the computed value of the
+ integral is exact for a slightly perturbed set of B-spline
+ coefficients c differing in a relative sense from those supplied
+ i
+ by no more than 2.2*(n+3)*machine precision.
+
+ 8. Further Comments
+
+ The time taken by the routine is approximately proportional to
+
+
+ n+7.
+
+ 9. Example
+
+ Determine the definite integral over the interval 0<=x<=6 of a
+ cubic spline having 6 interior knots at the positions (lambda)=1,
+ 3, 3, 3, 4, 4, the 8 additional knots 0, 0, 0, 0, 6, 6, 6, 6, and
+ the 10 B-spline coefficients 10, 12, 13, 15, 22, 26, 24, 18, 14,
+ 12.
+
+ The input data items (using the notation of Section 5) comprise
+ the following values in the order indicated:
+
+
+
+ n
+
+ LAMDA(j), for j = 1,2,...,NCAP7
+
+ C(j), for j = 1,2,...,NCAP7-3
+
+ The example program is written in a general form that will enable
+ the definite integral of a cubic spline having an arbitrary
+ number of knots to be computed. Any number of data sets may be
+ supplied. The only changes required to the program relate to the
+ dimensions of the arrays LAMDA and C.
+
+ The example program is not reproduced here. The source code for
+ all example programs is distributed with the NAG Foundation
+ Library software and should be available online.
+
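For order-4 (cubic) B-splines normalised to sum to one, the definite integral of each basis function over the whole range is (lambda(i+4)-lambda(i))/4, so the integral returned by a routine like E02BDF is a weighted sum of the coefficients. A minimal Python sketch (illustration only, not NAG code; the helper name is invented), applied to the example data above:

```python
# Sketch (not NAG code): the definite integral of a cubic spline in
# B-spline form over [lambda(4), lambda(n+4)] follows from the identity
#     integral of N_i  =  (lambda(i+4) - lambda(i)) / 4,
# so  integral of s    =  sum_i c_i * (lambda(i+4) - lambda(i)) / 4.

def spline_definite_integral(knots, coefs):
    """knots: full knot set lambda(1..n+7); coefs: c(1..n+3); 0-indexed here."""
    return sum(c * (knots[i + 4] - knots[i]) / 4.0
               for i, c in enumerate(coefs))

# Data from the worked example above: 6 interior knots 1, 3, 3, 3, 4, 4,
# additional knots 0 (x4) and 6 (x4), and 10 B-spline coefficients.
lam = [0, 0, 0, 0, 1, 3, 3, 3, 4, 4, 6, 6, 6, 6]
c = [10, 12, 13, 15, 22, 26, 24, 18, 14, 12]
defint = spline_definite_integral(lam, c)   # 100.0
```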
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+ E02  Curve and Surface Fitting E02BEF
+ E02BEF  NAG Foundation Library Routine Document
+
+ Note: Before using this routine, please read the Users' Note for
+ your implementation to check implementation-dependent details.
+ The symbol (*) after a NAG routine name denotes a routine that is
+ not included in the Foundation Library.
+
+ 1. Purpose
+
+ E02BEF computes a cubic spline approximation to an arbitrary set
+ of data points. The knots of the spline are located
+ automatically, but a single parameter must be specified to
+ control the trade-off between closeness of fit and smoothness of
+ fit.
+
+ 2. Specification
+
+ SUBROUTINE E02BEF (START, M, X, Y, W, S, NEST, N, LAMDA,
+ 1 C, FP, WRK, LWRK, IWRK, IFAIL)
+ INTEGER M, NEST, N, LWRK, IWRK(NEST), IFAIL
+ DOUBLE PRECISION X(M), Y(M), W(M), S, LAMDA(NEST), C(NEST),
+ 1 FP, WRK(LWRK)
+ CHARACTER*1 START
+
+ 3. Description
+
+ This routine determines a smooth cubic spline approximation s(x)
+ to the set of data points (x ,y ), with weights w , for
+ r r r
+ r=1,2,...,m.
+
+ The spline is given in the B-spline representation
+
+          n-4
+          --
+   s(x)=  >  c N (x)                                   (1)
+          --  i i
+          i=1
+
+ where N (x) denotes the normalised cubic B-spline defined upon
+ i
+ the knots (lambda) ,(lambda) ,...,(lambda) .
+ i i+1 i+4
+
+ The total number n of these knots and their values
+ (lambda) ,...,(lambda) are chosen automatically by the routine.
+ 1 n
+ The knots (lambda) ,...,(lambda) are the interior knots; they
+ 5 n-4
+ divide the approximation interval [x ,x ] into n-7 subintervals.
+ 1 m
+ The coefficients c ,c ,...,c are then determined as the
+ 1 2 n-4
+ solution of the following constrained minimization problem:
 6: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, -1 or 1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+ minimize
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+          n-4
+          --        2
+   (eta)= >  (delta)                                   (2)
+          --        i
+          i=5
 6. Error Indicators and Warnings
+ subject to the constraint
 Errors detected by the routine:
+            m
+            --          2
+   (theta)= >  (epsilon)  <= S                         (3)
+            --          r
+            r=1
 IFAIL= 1
 The argument X does not satisfy LAMDA(4) <= X <= LAMDA(
 NCAP7-3).
+ where: (delta) stands for the discontinuity jump in the third
+ i order derivative of s(x) at the interior knot
+ (lambda) ,
+ i
 In this case the value of S is set arbitrarily to zero.
+ (epsilon) denotes the weighted residual w (y -s(x )),
+ r r r r
 IFAIL= 2
 NCAP7 < 8, i.e., the number of interior knots is negative.
+ and S is a non-negative number to be specified by
+ the user.
 7. Accuracy
+ The quantity (eta) can be seen as a measure of the (lack of)
+ smoothness of s(x), while closeness of fit is measured through
+ (theta). By means of the parameter S, 'the smoothing factor', the
+ user will then control the balance between these two (usually
+ conflicting) properties. If S is too large, the spline will be
+ too smooth and signal will be lost (underfit); if S is too small,
+ the spline will pick up too much noise (overfit). In the extreme
+ cases the routine will return an interpolating spline ((theta)=0)
+ if S is set to zero, and the weighted least-squares cubic
+ polynomial ((eta)=0) if S is set very large. Experimenting with S
+ values between these two extremes should result in a good
+ compromise. (See Section 8.2 for advice on choice of S.)
 The computed value of s(x) has negligible error in most practical
 situations. Specifically, this value has an absolute error
 bounded in modulus by 18*c * machine precision, where c is
 max max
 the largest in modulus of c ,c ,c and c , and j is an
 j j+1 j+2 j+3
 integer such that (lambda) <=x<=(lambda) . If c ,c ,c
 j+3 j+4 j j+1 j+2
 and c are all of the same sign, then the computed value of
 j+3
 s(x) has a relative error not exceeding 20*machine precision in
 modulus. For further details see Cox [2].
+ The method employed is outlined in Section 8.3 and fully
+ described in Dierckx [1], [2] and [3]. It involves an adaptive
+ strategy for locating the knots of the cubic spline (depending on
+ the function underlying the data and on the value of S), and an
+ iterative method for solving the constrained minimization problem
+ once the knots have been determined.
 8. Further Comments
+ Values of the computed spline, or of its derivatives or definite
+ integral, can subsequently be computed by calling E02BBF, E02BCF
+ or E02BDF, as described in Section 8.4.
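As a point of reference outside the NAG library, scipy.interpolate.splrep wraps Dierckx's FITPACK smoothing routine, which follows the same scheme described above: knots are placed automatically and a smoothing factor s plays the role of the parameter S. A short sketch (assumes scipy is available; the data and values of s are invented for illustration):

```python
# Illustration only (assumes scipy is installed): smoothing-factor
# behaviour analogous to the description above.
import math
from scipy.interpolate import splrep

x = [i / 10.0 for i in range(50)]
y = [math.sin(xi) + 0.05 * math.sin(37.0 * xi) for xi in x]  # "noisy" data

t0, c0, k = splrep(x, y, s=0.0)   # s = 0: interpolating spline, m + 4 knots
t1, c1, _ = splrep(x, y, s=0.1)   # larger s: smoother fit with fewer knots
```

Decreasing s from a large value toward zero trades smoothness for closeness of fit, exactly as the text describes for S.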

+ 4. References
 The time taken by the routine is approximately C*(1+0.1*log(n+7))
 seconds, where C is a machinedependent constant.
+ [1] Dierckx P (1975) An Algorithm for Smoothing, Differentiating
+ and Integration of Experimental Data Using Spline Functions.
+ J. Comput. Appl. Math. 1 165-184.
 Note: the routine does not test all the conditions on the knots
 given in the description of LAMDA in Section 5, since to do this
 would result in a computation time approximately linear in n+7
 instead of log(n+7). All the conditions are tested in E02BAF,
 however.
+ [2] Dierckx P (1982) A Fast Algorithm for Smoothing Data on a
+ Rectangular Grid while using Spline Functions. SIAM J.
+ Numer. Anal. 19 1286-1304.
 9. Example
+ [3] Dierckx P (1981) An Improved Algorithm for Curve Fitting
+ with Spline Functions. Report TW54. Department of Computer
+ Science, Katholieke Universiteit Leuven.
 Evaluate at 9 equally-spaced points in the interval 1.0<=x<=9.0
 the cubic spline with (augmented) knots 1.0, 1.0, 1.0, 1.0, 3.0,
 6.0, 8.0, 9.0, 9.0, 9.0, 9.0 and normalised cubic B-spline
 coefficients 1.0, 2.0, 4.0, 7.0, 6.0, 4.0, 3.0.
+ [4] Reinsch C H (1967) Smoothing by Spline Functions. Num. Math.
+ 10 177-183.
 The example program is written in a general form that will enable

+ 5. Parameters
 a cubic spline with n intervals, in its normalised cubic Bspline
 form, to be evaluated at m equally-spaced points in the interval

+ 1: START  CHARACTER*1 Input
+ On entry: START must be set to 'C' or 'W'.
 LAMDA(4) <= x <= LAMDA(n+4). The program is self-starting in that
 any number of data sets may be supplied.
+ If START = 'C' (Cold start), the routine will build up the
+ knot set starting with no interior knots. No values need be
+ assigned to the parameters N, LAMDA, WRK or IWRK.
 The example program is not reproduced here. The source code for
 all example programs is distributed with the NAG Foundation
 Library software and should be available online.
+ If START = 'W' (Warm start), the routine will restart the
+ knot-placing strategy using the knots found in a previous
+ call of the routine. In this case, the parameters N, LAMDA,
+ WRK, and IWRK must be unchanged from that previous call.
+ This warm start can save much time in searching for a
+ satisfactory value of S. Constraint: START = 'C' or 'W'.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+ 2: M  INTEGER Input
+ On entry: m, the number of data points. Constraint: M >= 4.
 E02  Curve and Surface Fitting E02BCF
 E02BCF  NAG Foundation Library Routine Document
+ 3: X(M)  DOUBLE PRECISION array Input
+ On entry: the values x of the independent variable
+ r
+ (abscissa) x, for r=1,2,...,m. Constraint: x 0, for
+ r=1,2,...,m.
 E02BCF evaluates a cubic spline and its first three derivatives
 from its B-spline representation.
+ 6: S  DOUBLE PRECISION Input
+ On entry: the smoothing factor, S.
 2. Specification
+ If S=0.0, the routine returns an interpolating spline.
 SUBROUTINE E02BCF (NCAP7, LAMDA, C, X, LEFT, S, IFAIL)
 INTEGER NCAP7, LEFT, IFAIL
 DOUBLE PRECISION LAMDA(NCAP7), C(NCAP7), X, S(4)
+ If S is smaller than machine precision, it is assumed equal
+ to zero.
 3. Description
+ For advice on the choice of S, see Section 3 and Section 8.2.
+ Constraint: S >= 0.0.
 This routine evaluates the cubic spline s(x) and its first three
 derivatives at a prescribed argument x. It is assumed that s(x)
 is represented in terms of its B-spline coefficients c , for
 i

+ 7: NEST  INTEGER Input
+ On entry: an overestimate for the number, n, of knots
+ required. Constraint: NEST >= 8. In most practical
+ situations, NEST = M/2 is sufficient. NEST never needs to be
+ larger than M + 4, the number of knots needed for
+ interpolation (S = 0.0).
 i=1,2,...,n+3 and (augmented) ordered knot set (lambda) , for
 i

+ 8: N  INTEGER Input/Output
+ On entry: if the warm start option is used, the value of N
+ must be left unchanged from the previous call. On exit: the
+ total number, n, of knots of the computed spline.
 i=1,2,...,n+7, (see E02BAF), i.e.,
+ 9: LAMDA(NEST)  DOUBLE PRECISION array Input/Output
+ On entry: if the warm start option is used, the values
+ LAMDA(1), LAMDA(2),...,LAMDA(N) must be left unchanged from
+ the previous call. On exit: the knots of the spline, i.e.,
+ the positions of the interior knots LAMDA(5), LAMDA(6),...,
+ LAMDA(N-4) as well as the positions of the additional knots
+ LAMDA(1) = LAMDA(2) = LAMDA(3) = LAMDA(4) = x and
+ 1
            q
            --
     s(x)=  >  c N (x)
            --  i i
            i=1
+ LAMDA(N-3) = LAMDA(N-2) = LAMDA(N-1) = LAMDA(N) = x needed
+ m
+ for the B-spline representation.

+ 10: C(NEST)  DOUBLE PRECISION array Output
+ On exit: the coefficient c of the B-spline N (x) in the
+ i i
+ spline approximation s(x), for i=1,2,...,n-4.
 Here q=n+3, n is the number of intervals of the spline and N (x)
 i
 denotes the normalised B-spline of degree 3 (order 4) defined
 upon the knots (lambda) ,(lambda) ,...,(lambda) . The
 i i+1 i+4
 prescribed argument x must satisfy
+ 11: FP  DOUBLE PRECISION Output
+ On exit: the sum of the squared weighted residuals, (theta),
+ of the computed spline approximation. If FP = 0.0, this is
+ an interpolating spline. FP should equal S within a relative
+ tolerance of 0.001 unless n=8 when the spline has no
+ interior knots and so is simply a cubic polynomial. For
+ knots to be inserted, S must be set to a value below the
+ value of FP produced in this case.
 (lambda) <=x<=(lambda)
 4 n+4
+ 12: WRK(LWRK)  DOUBLE PRECISION array Workspace
+ On entry: if the warm start option is used, the values WRK
+ (1),...,WRK(n) must be left unchanged from the previous
+ call.
 At a simple knot (lambda) (i.e., one satisfying
 i
 (lambda) <(lambda) <(lambda) ), the third derivative of the
 i-1 i i+1
 spline is in general discontinuous. At a multiple knot (i.e., two
 or more knots with the same value), lower derivatives, and even
 the spline itself, may be discontinuous. Specifically, at a point
 x=u where (exactly) r knots coincide (such a point is termed a
 knot of multiplicity r), the values of the derivatives of order
 4-j, for j=1,2,...,r, are in general discontinuous. (Here
 1<=r<=4;r>4 is not meaningful.) The user must specify whether the
 value at such a point is required to be the left- or right-hand
 derivative.
+ 13: LWRK  INTEGER Input
+ On entry:
+ the dimension of the array WRK as declared in the
+ (sub)program from which E02BEF is called.
+ Constraint: LWRK>=4*M+16*NEST+41.
 The method employed is based upon:
+ 14: IWRK(NEST)  INTEGER array Workspace
+ On entry: if the warm start option is used, the values IWRK
+ (1), ..., IWRK(n) must be left unchanged from the previous
+ call.
 (i) carrying out a binary search for the knot interval
 containing the argument x (see Cox [3]),
+ This array is used as workspace.
 (ii) evaluating the non-zero B-splines of orders 1,2,3 and
 4 by recurrence (see Cox [2] and [3]),
+ 15: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, -1 or 1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 (iii) computing all derivatives of the B-splines of order 4
 by applying a second recurrence to these computed B-spline
 values (see de Boor [1]),
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
 (iv) multiplying the 4th-order B-spline values and their
 derivatives by the appropriate B-spline coefficients, and
 summing, to yield the values of s(x) and its derivatives.
+ 6. Error Indicators and Warnings
 E02BCF can be used to compute the values and derivatives of cubic
 spline fits and interpolants produced by E02BAF.
+ Errors detected by the routine:
 If only values and not derivatives are required, E02BBF may be
 used instead of E02BCF, which takes about 50% longer than E02BBF.
+ If on entry IFAIL = 0 or -1, explanatory error messages are
+ output on the current error message unit (as defined by X04AAF).
 4. References
+ IFAIL= 1
+ On entry START /= 'C' or 'W',
 [1] De Boor C (1972) On Calculating with B-splines. J. Approx.
 Theory. 6 50-62.
+ or M < 4,
 [2] Cox M G (1972) The Numerical Evaluation of B-splines. J.
 Inst. Math. Appl. 10 134-149.
+ or S < 0.0,
 [3] Cox M G (1978) The Numerical Evaluation of a Spline from its
 B-spline Representation. J. Inst. Math. Appl. 21 135-143.
+ or S = 0.0 and NEST < M + 4,
 5. Parameters
+ or NEST < 8,
 1: NCAP7  INTEGER Input

+ or LWRK<4*M+16*NEST+41.
 On entry: n+7, where n is the number of intervals of the
 spline (which is one greater than the number of interior
 knots, i.e., the knots strictly within the range (lambda)
 4
 to (lambda) over which the spline is defined).
 n+4
 Constraint: NCAP7 >= 8.
+ IFAIL= 2
+ The weights are not all strictly positive.
 2: LAMDA(NCAP7)  DOUBLE PRECISION array Input
 On entry: LAMDA(j) must be set to the value of the jth
 member of the complete set of knots, (lambda) , for
 j

+ IFAIL= 3
+ The values of X(r), for r=1,2,...,M, are not in strictly
+ increasing order.
 j=1,2,...,n+7. Constraint: the LAMDA(j) must be in non-
 decreasing order with
+ IFAIL= 4
+ The number of knots required is greater than NEST. Try
+ increasing NEST and, if necessary, supplying larger arrays
+ for the parameters LAMDA, C, WRK and IWRK. However, if NEST
+ is already large, say NEST > M/2, then this error exit may
+ indicate that S is too small.
 LAMDA(NCAP7-3) > LAMDA(4).
+ IFAIL= 5
+ The iterative process used to compute the coefficients of
+ the approximating spline has failed to converge. This error
+ exit may occur if S has been set very small. If the error
+ persists with increased S, consult NAG.
 3: C(NCAP7)  DOUBLE PRECISION array Input
 On entry: the coefficient c of the B-spline N (x), for
 i i
+ If IFAIL = 4 or 5, a spline approximation is returned, but it
+ fails to satisfy the fitting criterion (see (2) and (3) in
+ Section 3) - perhaps by only a small amount, however.
 i=1,2,...,n+3. The remaining elements of the array are not
 used.
+ 7. Accuracy
 4: X  DOUBLE PRECISION Input
 On entry: the argument x at which the cubic spline and its
 derivatives are to be evaluated. Constraint: LAMDA(4) <= X
 <= LAMDA(NCAP7-3).
+ On successful exit, the approximation returned is such that its
+ weighted sum of squared residuals FP is equal to the smoothing
+ factor S, up to a specified relative tolerance of 0.001 - except
+ that if n=8, FP may be significantly less than S: in this case
+ the computed spline is simply a weighted least-squares polynomial
+ approximation of degree 3, i.e., a spline with no interior knots.
 5: LEFT  INTEGER Input
 On entry: specifies whether left- or right-hand values of
 the spline and its derivatives are to be computed (see
 Section 3). Left- or right-hand values are formed according
 to whether LEFT is equal or not equal to 1. If x does not
 coincide with a knot, the value of LEFT is immaterial. If x
 = LAMDA(4), right-hand values are computed, and if x = LAMDA
 (NCAP7-3), left-hand values are formed, regardless of the
 value of LEFT.
+ 8. Further Comments
 6: S(4)  DOUBLE PRECISION array Output
 On exit: S(j) contains the value of the (j-1)th derivative
 of the spline at the argument x, for j = 1,2,3,4. Note that
 S(1) contains the value of the spline.
+ 8.1. Timing
 7: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, -1 or 1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+ The time taken for a call of E02BEF depends on the complexity of
+ the shape of the data, the value of the smoothing factor S, and
+ the number of data points. If E02BEF is to be called for
+ different values of S, much time can be saved by setting START = 'W'.
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+ 8.2. Choice of S
 6. Error Indicators and Warnings
+ If the weights have been correctly chosen (see Section 2.1.2 of
+ the Chapter Introduction), the standard deviation of w y would
+ r r
+ be the same for all r, equal to (sigma), say. In this case,
+ 2
+ choosing the smoothing factor S in the range (sigma) (m+-\/2m),
+ as suggested by Reinsch [4], is likely to give a good start in
+ the search for a satisfactory value. Otherwise, experimenting
+ with different values of S will be required from the start,
+ taking account of the remarks in Section 3.
 Errors detected by the routine:
+ In that case, in view of computation time and memory
+ requirements, it is recommended to start with a very large value
+ for S and so determine the least-squares cubic polynomial; the
+ value returned for FP, call it FP , gives an upper bound for S.
+ 0
+ Then progressively decrease the value of S to obtain closer fits
+ - say by a factor of 10 in the beginning, i.e., S=FP /10, S=FP
+ 0 0
+ /100, and so on, and more carefully as the approximation shows
+ more details.
 IFAIL= 1
 NCAP7 < 8, i.e., the number of intervals is not positive.
+ The number of knots of the spline returned, and their location,
+ generally depend on the value of S and on the behaviour of the
+ function underlying the data. However, if E02BEF is called with
+ START = 'W', the knots returned may also depend on the smoothing
+ factors of the previous calls. Therefore if, after a number of
+ trials with different values of S and START = 'W', a fit can
+ finally be accepted as satisfactory, it may be worthwhile to call
+ E02BEF once more with the selected value for S but now using
+ START = 'C'. Often, E02BEF then returns an approximation with the
+ same quality of fit but with fewer knots, which is therefore
+ better if data reduction is also important.
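Reinsch's suggested starting interval for S, sigma^2*(m +- sqrt(2m)), is simple arithmetic. A small Python sketch (the helper name is invented; illustration only):

```python
import math

def reinsch_s_range(m, sigma):
    # Reinsch's suggested range for the smoothing factor S when each
    # weighted data value w*y has standard deviation sigma:
    #     sigma**2 * (m - sqrt(2m))  <=  S  <=  sigma**2 * (m + sqrt(2m))
    half = math.sqrt(2.0 * m)
    return sigma**2 * (m - half), sigma**2 * (m + half)

# e.g. 200 data points whose weighted values have standard deviation 0.1:
s_lo, s_hi = reinsch_s_range(200, 0.1)   # (1.8, 2.2)
```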
 IFAIL= 2
 Either LAMDA(4) >= LAMDA(NCAP7-3), i.e., the range over
 which s(x) is defined is null or negative in length, or X is
 an invalid argument, i.e., X < LAMDA(4) or X >
 LAMDA(NCAP7-3).
+ 8.3. Outline of Method Used
 7. Accuracy
+ If S=0, the requisite number of knots is known in advance, i.e.,
+ n=m+4; the interior knots are located immediately as (lambda) =
+ i
+ x , for i=5,6,...,n-4. The corresponding least-squares spline
+ i-2
+ (see E02BAF) is then an interpolating spline and therefore a
+ solution of the problem.
 The computed value of s(x) has negligible error in most practical
 situations. Specifically, this value has an absolute error
 bounded in modulus by 18*c * machine precision, where c is
 max max
 the largest in modulus of c ,c ,c and c , and j is an
 j j+1 j+2 j+3
 integer such that (lambda) <=x<=(lambda) . If c ,c ,c
 j+3 j+4 j j+1 j+2
 and c are all of the same sign, then the computed value of
 j+3
 s(x) has relative error bounded by 18*machine precision. For full
 details see Cox [3].
+ If S>0, a suitable knot set is built up in stages (starting with
+ no interior knots in the case of a cold start but with the knot
+ set found in a previous call if a warm start is chosen). At each
+ stage, a spline is fitted to the data by least-squares (see
+ E02BAF) and (theta), the weighted sum of squares of residuals, is
+ computed. If (theta)>S, new knots are added to the knot set to
+ reduce (theta) at the next stage. The new knots are located in
+ intervals where the fit is particularly poor, their number
+ depending on the value of S and on the progress made so far in
+ reducing (theta). Sooner or later, we find that (theta)<=S and at
+ that point the knot set is accepted. The routine then goes on to
+ compute the (unique) spline which has this knot set and which
+ satisfies the full fitting criterion specified by (2) and (3).
+ The theoretical solution has (theta)=S. The routine computes the
+ spline by an iterative scheme which is ended when (theta)=S
+ within a relative tolerance of 0.001. The main part of each
+ iteration consists of a linear least-squares computation of
+ special form, done in a similarly stable and efficient manner as
+ in E02BAF.
 No complete error analysis is available for the computation of
 the derivatives of s(x). However, for most practical purposes the
 absolute errors in the computed derivatives should be small.
+ An exception occurs when the routine finds at the start that,
+ even with no interior knots (n=8), the least-squares spline
+ already has its weighted sum of squares of residuals <=S. In this
+ case, since this spline (which is simply a cubic polynomial) also
+ has an optimal value for the smoothness measure (eta), namely
+ zero, it is returned at once as the (trivial) solution. It will
+ usually mean that S has been chosen too large.
 8. Further Comments
+ For further details of the algorithm and its use, see Dierckx [3].
 The time taken by this routine is approximately linear in

+ 8.4. Evaluation of Computed Spline
 log(n+7).
+ The value of the computed spline at a given value X may be
+ obtained in the double precision variable S by the call:
 Note: the routine does not test all the conditions on the knots
 given in the description of LAMDA in Section 5, since to do this

 would result in a computation time approximately linear in n+7

+ CALL E02BBF(N,LAMDA,C,X,S,IFAIL)
 instead of log(n+7). All the conditions are tested in E02BAF,
 however.
+ where N, LAMDA and C are the output parameters of E02BEF.
 9. Example
+ The values of the spline and its first three derivatives at a
+ given value X may be obtained in the double precision array SDIF
+ of dimension at least 4 by the call:
 Compute, at the 7 arguments x = 0, 1, 2, 3, 4, 5, 6, the left-
 and right-hand values and first 3 derivatives of the cubic spline
 defined over the interval 0<=x<=6 having the 6 interior knots x =
 1, 3, 3, 3, 4, 4, the 8 additional knots 0, 0, 0, 0, 6, 6, 6, 6,
 and the 10 Bspline coefficients 10, 12, 13, 15, 22, 26, 24, 18,
 14, 12.
+ CALL E02BCF(N,LAMDA,C,X,LEFT,SDIF,IFAIL)
 The input data items (using the notation of Section 5) comprise
 the following values in the order indicated:
+ where if LEFT = 1, left-hand derivatives are computed and if LEFT
+ /= 1, right-hand derivatives are calculated. The value of LEFT is
+ only relevant if X is an interior knot.

+ The value of the definite integral of the spline over the
+ interval X(1) to X(M) can be obtained in the double precision
+ variable SINT by the call:
 n m
+ CALL E02BDF(N,LAMDA,C,SINT,IFAIL)
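For comparison, the evaluation that E02BBF performs can be sketched in pure Python with the standard de Boor recurrence (the helper name is invented; this is an illustration, not the NAG implementation), using the knots and coefficients of the worked example earlier in this document:

```python
# Sketch only: evaluate a cubic spline from its B-spline representation
# by the de Boor recurrence, the kind of computation E02BBF performs.

def de_boor(knots, coefs, x, degree=3):
    """Evaluate the spline at x; knots/coefs are 0-indexed Python lists."""
    last = len(coefs) - 1
    j = last                                   # default: rightmost span
    for i in range(degree, last + 1):
        if knots[i] <= x < knots[i + 1]:       # locate the knot span
            j = i
            break
    d = [float(coefs[j - degree + i]) for i in range(degree + 1)]
    for r in range(1, degree + 1):             # triangular recurrence
        for i in range(degree, r - 1, -1):
            left = knots[j - degree + i]
            right = knots[j + 1 + i - r]
            alpha = (x - left) / (right - left)
            d[i] = (1.0 - alpha) * d[i - 1] + alpha * d[i]
    return d[degree]

# Knots and coefficients from the worked example earlier in this section:
lam = [0, 0, 0, 0, 1, 3, 3, 3, 4, 4, 6, 6, 6, 6]
c = [10, 12, 13, 15, 22, 26, 24, 18, 14, 12]
s0 = de_boor(lam, c, 0.0)   # 10.0 at the left end (quadruple knot)
s6 = de_boor(lam, c, 6.0)   # 12.0 at the right end
```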
 LAMDA(j), for j= 1,2,...,NCAP7
+ 9. Example
 C(j), for j= 1,2,...,NCAP7-4
+ This example program reads in a set of data values, followed by a
+ set of values of S. For each value of S it calls E02BEF to
+ compute a spline approximation, and prints the values of the
+ knots and the B-spline coefficients c .
+ i
+
+ The program includes code to evaluate the computed splines, by
+ calls to E02BBF, at the points x and at points midway between
+ r
+ them. These values are not printed out, however; instead the
+ results are illustrated by plots of the computed splines,
+ together with the data points (indicated by *) and the positions
+ of the knots (indicated by vertical lines): the effect of
+ decreasing S can be clearly seen. (The plots were obtained by
+ calling NAG Graphical Supplement routine J06FAF(*).)
 x(i), for i=1,2,...,m
 The example program is written in a general form that will enable
 the values and derivatives of a cubic spline having an arbitrary
 number of knots to be evaluated at a set of arbitrary points. Any
 number of data sets may be supplied. The only changes required to
 the program relate to the dimensions of the arrays LAMDA and C.
+ Please see figures in printed Reference Manual
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ 87126,8 +89279,8 @@ have been determined.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02  Curve and Surface Fitting E02BDF
 E02BDF  NAG Foundation Library Routine Document
+ E02  Curve and Surface Fitting E02DAF
+ E02DAF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ 87136,97 +89289,261 @@ have been determined.
1. Purpose
 E02BDF computes the definite integral of a cubic spline from its
 B-spline representation.
+ E02DAF forms a minimal, weighted least-squares bicubic spline
+ surface fit with prescribed knots to a given set of data points.
2. Specification
 SUBROUTINE E02BDF (NCAP7, LAMDA, C, DEFINT, IFAIL)
 INTEGER NCAP7, IFAIL
 DOUBLE PRECISION LAMDA(NCAP7), C(NCAP7), DEFINT
+ SUBROUTINE E02DAF (M, PX, PY, X, Y, F, W, LAMDA, MU,
+ 1 POINT, NPOINT, DL, C, NC, WS, NWS, EPS,
+ 2 SIGMA, RANK, IFAIL)
+ INTEGER M, PX, PY, POINT(NPOINT), NPOINT, NC, NWS,
+ 1 RANK, IFAIL
+ DOUBLE PRECISION X(M), Y(M), F(M), W(M), LAMDA(PX), MU(PY),
+ 1 DL(NC), C(NC), WS(NWS), EPS, SIGMA
3. Description
 This routine computes the definite integral of the cubic spline
 s(x) between the limits x=a and x=b, where a and b are
 respectively the lower and upper limits of the range over which
 s(x) is defined. It is assumed that s(x) is represented in terms


 of its B-spline coefficients c , for i=1,2,...,n+3 and
 i

+ This routine determines a bicubic spline fit s(x,y) to the set of
+ data points (x ,y ,f ) with weights w , for r=1,2,...,m. The two
+ r r r r
+ sets of internal knots of the spline, {(lambda)} and {(mu)},
+ associated with the variables x and y respectively, are
+ prescribed by the user. These knots can be thought of as dividing
+ the data region of the (x,y) plane into panels (see diagram in
+ Section 5). A bicubic spline consists of a separate bicubic
+ polynomial in each panel, the polynomials joining together with
+ continuity up to the second derivative across the panel
+ boundaries.
 (augmented) ordered knot set (lambda) , for i=1,2,...,n+7, with
 i
 (lambda) =a, for i = 1,2,3,4 and (lambda) =b, for
 i i

+ s(x,y) has the property that (Sigma), the sum of squares of its
+ weighted residuals (rho) , for r=1,2,...,m, where
+ r
 i=n+4,n+5,n+6,n+7, (see E02BAF), i.e.,
+ (rho) =w (s(x ,y )-f ), (1)
+ r r r r r
            q
            --
     s(x)=  >  c N (x).
            --  i i
            i=1
+ is as small as possible for a bicubic spline with the given knot
+ sets. The routine produces this minimized value of (Sigma) and
+ the coefficients c in the B-spline representation of s(x,y) -
+ ij
+ see Section 8. E02DEF and E02DFF are available to compute values
+ of the fitted spline from the coefficients c .
+ ij

+ The least-squares criterion is not always sufficient to determine
+ the bicubic spline uniquely: there may be a whole family of
+ splines which have the same minimum sum of squares. In these
+ cases, the routine selects from this family the spline for which
+ the sum of squares of the coefficients c is smallest: in other
+ ij
+ words, the minimal least-squares solution. This choice, although
+ arbitrary, reduces the risk of unwanted fluctuations in the
+ spline fit. The method employed involves forming a system of m
+ linear equations in the coefficients c and then computing its
+ ij
+ least-squares solution, which will be the minimal least-squares
+ solution when appropriate. The basis of the method is described
+ in Hayes and Halliday [4]. The matrix of the equation is formed
+ using a recurrence relation for B-splines which is numerically
+ stable (see Cox [1] and de Boor [2] - the former contains the
+ more elementary derivation but, unlike [2], does not cover the
+ case of coincident knots). The least-squares solution is also
+ obtained in a stable manner by using orthogonal transformations,
+ viz. a variant of Givens rotation (see Gentleman [3]). This
+ requires only one row of the matrix to be stored at a time.
+ Advantage is taken of the stepped-band structure which the matrix
+ possesses when the data points are suitably ordered, there being
+ at most sixteen non-zero elements in any row because of the
+ definition of B-splines. First the matrix is reduced to upper
+ triangular form and then the diagonal elements of this triangle
+ are examined in turn. When an element is encountered whose
+ square, divided by the mean squared weight, is less than a
+ threshold (epsilon), it is replaced by zero and the rest of the
+ elements in its row are reduced to zero by rotations with the
+ remaining rows. The rank of the system is taken to be the number
+ of non-zero diagonal elements in the final triangle, and the non-
+ zero rows of this triangle are used to compute the minimal least-
+ squares solution. If all the diagonal elements are non-zero, the
+ rank is equal to the number of coefficients c and the solution
+ ij
+ obtained is the ordinary least-squares solution, which is unique
+ in this case.
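The row-by-row Givens scheme described above can be illustrated in a few lines of Python (names invented; this toy version assumes full rank and omits the threshold and rank handling that E02DAF performs):

```python
import math

# Sketch of incremental least-squares via Givens rotations: each data row
# [a1, ..., an, b] is rotated into an upper-triangular array one at a time,
# so only one row of the matrix need be held at once.

def givens_least_squares(rows, ncols):
    """Return the least-squares solution x of the stacked rows (full rank)."""
    R = [[0.0] * (ncols + 1) for _ in range(ncols)]
    for row in rows:
        row = list(row)
        for j in range(ncols):
            if row[j] == 0.0:
                continue
            if R[j][j] == 0.0:
                R[j] = row                      # row becomes pivot row j
                break
            r = math.hypot(R[j][j], row[j])     # rotate row[j] to zero
            cth, sth = R[j][j] / r, row[j] / r
            Rj, rw = R[j], row
            R[j] = [cth * a + sth * b for a, b in zip(Rj, rw)]
            row = [-sth * a + cth * b for a, b in zip(Rj, rw)]
    x = [0.0] * ncols                           # back-substitution
    for i in reversed(range(ncols)):
        x[i] = (R[i][ncols] - sum(R[i][j] * x[j]
                                  for j in range(i + 1, ncols))) / R[i][i]
    return x

# Fit y = c0 + c1*x through points lying exactly on the line y = 1 + 2x:
pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
sol = givens_least_squares([[1.0, px, py] for px, py in pts], 2)
# sol is approximately [1.0, 2.0]
```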
 Here q=n+3, n is the number of intervals of the spline and N (x)
 i
 denotes the normalised B-spline of degree 3 (order 4) defined
 upon the knots (lambda) ,(lambda) ,...,(lambda) .
 i i+1 i+4
+ 4. References
 The method employed uses the formula given in Section 3 of Cox
 [1].
+ [1] Cox M G (1972) The Numerical Evaluation of B-splines. J.
+ Inst. Math. Appl. 10 134-149.
 E02BDF can be used to determine the definite integrals of cubic
 spline fits and interpolants produced by E02BAF.
+ [2] De Boor C (1972) On Calculating with B-splines. J. Approx.
+ Theory. 6 50-62.
 4. References
+ [3] Gentleman W M (1973) Least-squares Computations by Givens
+ Transformations without Square Roots. J. Inst. Math. Applic.
+ 12 329-336.
 [1] Cox M G (1975) An Algorithm for Spline Interpolation. J.
 Inst. Math. Appl. 15 95-108.
+ [4] Hayes J G and Halliday J (1974) The Least-squares Fitting of
+ Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
+ Appl. 14 89-103.
5. Parameters
 1: NCAP7  INTEGER Input

+ 1: M  INTEGER Input
+ On entry: the number of data points, m. Constraint: M > 1.
 On entry: n+7, where n is the number of intervals of the
 spline (which is one greater than the number of interior
 knots, i.e., the knots strictly within the range a to b)
 over which the spline is defined. Constraint: NCAP7 >= 8.
+ 2: PX  INTEGER Input
 2: LAMDA(NCAP7)  DOUBLE PRECISION array Input
 On entry: LAMDA(j) must be set to the value of the jth
 member of the complete set of knots, (lambda) for
 j

+ 3: PY  INTEGER Input
+ On entry: the total number of knots (lambda) and (mu)
+ associated with the variables x and y, respectively.
+ Constraint: PX >= 8 and PY >= 8.
 j=1,2,...,n+7. Constraint: the LAMDA(j) must be in non
 decreasing order with LAMDA(NCAP73) > LAMDA(4) and satisfy
 LAMDA(1)=LAMDA(2)=LAMDA(3)=LAMDA(4)
 and
+ (They are such that PX-8 and PY-8 are the corresponding
+ numbers of interior knots.) The running time and storage
+ required by the routine are both minimized if the axes are
+ labelled so that PY is the smaller of PX and PY.
+
+ 4: X(M)  DOUBLE PRECISION array Input
+
+ 5: Y(M)  DOUBLE PRECISION array Input
+
+ 6: F(M)  DOUBLE PRECISION array Input
+ On entry: the coordinates of the data point (x ,y ,f ), for
+ r r r
+ r=1,2,...,m. The order of the data points is immaterial, but
+ see the array POINT, below.
+
+ 7: W(M)  DOUBLE PRECISION array Input
+ On entry: the weight w of the rth data point. It is
+ r
+ important to note the definition of weight implied by the
+ equation (1) in Section 3, since it is also common usage to
+ define weight as the square of this weight. In this routine,
+ each w should be chosen inversely proportional to the
+ r
+ (absolute) accuracy of the corresponding f , as expressed,
+ r
+ for example, by the standard deviation or probable error of
+ the f . When the f are all of the same accuracy, all the w
+ r r r
+ may be set equal to 1.0.
+
+ 8: LAMDA(PX)  DOUBLE PRECISION array Input/Output
+ On entry: LAMDA(i+4) must contain the ith interior knot
+ (lambda) associated with the variable x, for
+ i+4
+ i=1,2,...,PX-8. The knots must be in non-decreasing order
+ and lie strictly within the range covered by the data values
+ of x. A knot is a value of x at which the spline is allowed
+ to be discontinuous in the third derivative with respect to
+ x, though continuous up to the second derivative. This
+ degree of continuity can be reduced, if the user requires,
+ by the use of coincident knots, provided that no more than
+ four knots are chosen to coincide at any point. Two, or
+ three, coincident knots allow loss of continuity in,
+ respectively, the second and first derivative with respect
+ to x at the value of x at which they coincide. Four
+ coincident knots split the spline surface into two
+ independent parts. For choice of knots see Section 8. On
+ exit: the interior knots LAMDA(5) to LAMDA(PX-4) are
+ unchanged, and the segments LAMDA(1:4) and LAMDA(PX-3:PX)
+ contain additional (exterior) knots introduced by the
+ routine in order to define the full set of B-splines
+ required. The four knots in the first segment are all set
+ equal to the lowest data value of x and the other four
+ additional knots are all set equal to the highest value:
+ there is experimental evidence that coincident end knots are
+ best for numerical accuracy. The complete array must be left
+ undisturbed if E02DEF or E02DFF is to be used subsequently.
+
+ 9: MU(PY)  DOUBLE PRECISION array Input
+ On entry: MU(i+4) must contain the ith interior knot (mu)
+ i+4
+ associated with the variable y, i=1,2,...,PY-8. The same
+ remarks apply to MU as to LAMDA above, with Y replacing X,
+ and y replacing x.
+
+ 10: POINT(NPOINT)  INTEGER array Input
+ On entry: indexing information usually provided by E02ZAF
+ which enables the data points to be accessed in the order
+ which produces the advantageous matrix structure mentioned
+ in Section 3. This order is such that, if the (x,y) plane is
+ thought of as being divided into rectangular panels by the
+ two sets of knots, all data in a panel occur before data in
+ succeeding panels, where the panels are numbered from bottom
+ to top and then left to right with the usual arrangement of
+ axes, as indicated in the diagram.
+
+ Please see figure in printed Reference Manual
+
+ A data point lying exactly on one or more panel sides is
+ considered to be in the highest numbered panel adjacent to
+ the point. E02ZAF should be called to obtain the array
+ POINT, unless it is provided by other means.
+
+ 11: NPOINT  INTEGER Input
+ On entry:
+ the dimension of the array POINT as declared in the
+ (sub)program from which E02DAF is called.
+ Constraint: NPOINT >= M + (PX-7)*(PY-7).
+
+ 12: DL(NC)  DOUBLE PRECISION array Output
+ On exit: DL gives the squares of the diagonal elements of
+ the reduced triangular matrix, divided by the mean squared
+ weight. It includes those elements, less than (epsilon),
+ which are treated as zero (see Section 3).
+
+ 13: C(NC)  DOUBLE PRECISION array Output
+ On exit: C gives the coefficients of the fit. C((PY-4)*(i-1)+j)
+ is the coefficient c of Section 3 and Section 8 for
+ ij
+ i=1,2,...,PX-4 and j=1,2,...,PY-4. These coefficients are
+ used by E02DEF or E02DFF to calculate values of the fitted
+ function.
+
+ 14: NC  INTEGER Input
+ On entry: the value (PX-4)*(PY-4).
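The layout of the coefficients in C can be checked with a short sketch; PX and PY here are example values, not taken from a real call:

```python
# example grid of knot counts, not from a real E02DAF call
px, py = 12, 10

def c_index(i, j, py=py):
    """1-based position in C of the coefficient c_ij (see parameter 13)."""
    return (py - 4) * (i - 1) + j

# NC = (PX-4)*(PY-4) is the total number of coefficients
nc = (px - 4) * (py - 4)

# the first and last coefficients occupy the first and last slots of C
assert c_index(1, 1) == 1
assert c_index(px - 4, py - 4) == nc
```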
 LAMDA(NCAP73)=LAMDA(NCAP72)=LAMDA(NCAP71)=LAMDA(NCAP7).
+ 15: WS(NWS)  DOUBLE PRECISION array Workspace
 3: C(NCAP7)  DOUBLE PRECISION array Input
 On entry: the coefficient c of the Bspline N (x), for
 i i

+ 16: NWS  INTEGER Input
+ On entry:
+ the dimension of the array WS as declared in the
+ (sub)program from which E02DAF is called.
+ Constraint: NWS >= (2*NC+1)*(3*PY-6)-2.
 i=1,2,...,n+3. The remaining elements of the array are not
 used.
+ 17: EPS  DOUBLE PRECISION Input
+ On entry: a threshold (epsilon) for determining the
+ effective rank of the system of linear equations. The rank
+ is determined as the number of elements of the array DL (see
+ below) which are nonzero. An element of DL is regarded as
+ zero if it is less than (epsilon). Machine precision is a
+ suitable value for (epsilon) in most practical applications
+ which have only 2 or 3 decimals accurate in data. If some
+ coefficients of the fit prove to be very large compared with
+ the data ordinates, this suggests that (epsilon) should be
+ increased so as to decrease the rank. The array DL will give
+ a guide to appropriate values of (epsilon) to achieve this,
+ as well as to the choice of (epsilon) in other cases where
+ some experimentation may be needed to determine a value
+ which leads to a satisfactory fit.
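The rank rule described above (count the elements of DL that are not below the threshold) is easy to sketch; the DL values and eps below are hypothetical, not output from an actual call:

```python
import numpy as np

# hypothetical contents of DL (squared diagonal elements divided by the
# mean squared weight) and a hypothetical threshold epsilon
dl = np.array([4.1, 2.7, 1.3e-14, 0.9, 3.2e-16, 5.5])
eps = 1.0e-10

# an element of DL is regarded as zero if it is less than epsilon;
# the effective rank is the number of remaining non-zero elements
rank = int(np.count_nonzero(dl >= eps))
# two of the six entries fall below eps, so rank == 4 here
```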
 4: DEFINT  DOUBLE PRECISION Output
 On exit: the value of the definite integral of s(x) between
 the limits x=a and x=b, where a=(lambda) and b=(lambda) .
 4 n+4
+ 18: SIGMA  DOUBLE PRECISION Output
+ On exit: (Sigma), the weighted sum of squares of residuals.
+ This is not computed from the individual residuals but from
+ the right-hand sides of the orthogonally-transformed linear
+ equations. For further details see Hayes and Halliday [4]
+ page 97. The two methods of computation are theoretically
+ equivalent, but the results may differ because of rounding
+ error.
 5: IFAIL  INTEGER Input/Output
+ 19: RANK  INTEGER Output
+ On exit: the rank of the system as determined by the value
+ of the threshold (epsilon). When RANK = NC, the least-
+ squares solution is unique: in other cases the minimal
+ least-squares solution is computed.
+
+ 20: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ 87238,61 +89555,146 @@ have been determined.
Errors detected by the routine:
 If on entry IFAIL = 0 or 1, explanatory error messages are
 output on the current error message unit (as defined by X04AAF).

IFAIL= 1
 NCAP7 < 8, i.e., the number of intervals is not positive.
+ At least one set of knots is not in non-decreasing order, or
+ an interior knot is outside the range of the data values.
IFAIL= 2
 At least one of the following restrictions on the knots is
 violated:
 LAMDA(NCAP73) > LAMDA(4),
+ More than four knots coincide at a single point, possibly
+ because all data points have the same value of x (or y) or
+ because an interior knot coincides with an extreme data
+ value.
 LAMDA(j) >= LAMDA(j1),
 for j = 2,3,...,NCAP7, with equality in the cases
 j=2,3,4,NCAP72,NCAP71, and NCAP7.
+ IFAIL= 3
+ Array POINT does not indicate the data points in panel
+ order. Call E02ZAF to obtain a correct array.
+
+ IFAIL= 4
+ On entry M <= 1,
+
+ or PX < 8,
+
+ or PY < 8,
+
+ or NC /= (PX-4)*(PY-4),
+
+ or NWS is too small,
+
+ or NPOINT is too small.
+
+ IFAIL= 5
+ All the weights w are zero or rank determined as zero.
+ r
7. Accuracy
 The rounding errors are such that the computed value of the
 integral is exact for a slightly perturbed set of Bspline
 coefficients c differing in a relative sense from those supplied
 i
 by no more than 2.2*(n+3)*machine precision.
+ The computation of the B-splines and reduction of the observation
+ matrix to triangular form are both numerically stable.
8. Further Comments
 The time taken by the routine is approximately proportional to

+ The time taken by this routine is approximately proportional to
+ the number of data points, m, and to (3*(PY-4)+4)**2.
 n+7.
+ The B-spline representation of the bicubic spline is
+
+ --
+ s(x,y)= > c M (x)N (y)
+ -- ij i j
+ ij
+
+ summed over i=1,2,...,PX-4 and over j=1,2,...,PY-4. Here M (x)
+ i
+ and N (y) denote normalised cubic B-splines, the former defined
+ j
+ on the knots (lambda) ,(lambda) ,...,(lambda) and the latter on
+ i i+1 i+4
+ the knots (mu) ,(mu) ,...,(mu) . For further details, see
+ j j+1 j+4
+ Hayes and Halliday [4] for bicubic splines and de Boor [2] for
+ normalised B-splines.
+
+ The choice of the interior knots, which help to determine the
+ spline's shape, must largely be a matter of trial and error. It
+ is usually best to start with a small number of knots and,
+ examining the fit at each stage, add a few knots at a time at
+ places where the fit is particularly poor. In intervals of x or y
+ where the surface represented by the data changes rapidly, in
+ function value or derivatives, more knots will be needed than
+ elsewhere. In some cases guidance can be obtained by analogy with
+ the case of coincident knots: for example, just as three
+ coincident knots can produce a discontinuity in slope, three
+ close knots can produce rapid change in slope. Of course, such
+ rapid changes in behaviour must be adequately represented by the
+ data points, as indeed must the behaviour of the surface
+ generally, if a satisfactory fit is to be achieved. When there is
+ no rapid change in behaviour, equally-spaced knots will often
+ suffice.
+
+ In all cases the fit should be examined graphically before it is
+ accepted as satisfactory.
+
+ The fit obtained is not defined outside the rectangle
+
+ (lambda) <=x<=(lambda) , (mu) <=y<=(mu)
+ 4 PX-3 4 PY-3
+
+ The reason for taking the extreme data values of x and y for
+ these four knots is that, as is usual in data fitting, the fit
+ cannot be expected to give satisfactory values outside the data
+ region. If, nevertheless, the user requires values over a larger
+ rectangle, this can be achieved by augmenting the data with two
+ artificial data points (a,c,0) and (b,d,0) with zero weight,
+ where a<=x<=b, c<=y<=d defines the enlarged rectangle. In the
+ case when the data are adequate to make the least-squares
+ solution unique (RANK = NC), this enlargement will not affect the
+ fit over the original rectangle, except for possibly enlarged
+ rounding errors, and will simply continue the bicubic polynomials
+ in the panels bordering the rectangle out to the new boundaries:
+ in other cases the fit will be affected. Even using the original
+ rectangle there may be regions within it, particularly at its
+ corners, which lie outside the data region and where, therefore,
+ the fit will be unreliable. For example, if there is no data
+ point in panel 1 of the diagram in Section 5, the least-squares
+ criterion leaves the spline indeterminate in this panel: the
+ minimal spline determined by the subroutine in this case passes
+ through the value zero at the point ((lambda) ,(mu) ).
+ 4 4
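E02DAF's task, a weighted least-squares bicubic spline with user-chosen interior knots fitted to scattered data, has a close analogue in Dierckx's FITPACK code, which SciPy exposes as LSQBivariateSpline. The sketch below uses synthetic data to illustrate the idea; it is an analogy, not a call to the NAG routine:

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

# synthetic scattered data points (x_r, y_r, f_r) with equal weights
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 4.0, 400)
y = rng.uniform(0.0, 4.0, 400)
f = np.sin(x) * np.cos(y)
w = np.ones_like(f)

# interior knots only; the eight exterior knots (coincident with the
# data extremes) are supplied automatically, as E02DAF also does
tx = [1.0, 2.0, 3.0]
ty = [1.0, 2.0, 3.0]

fit = LSQBivariateSpline(x, y, f, tx, ty, w=w, kx=3, ky=3)

# weighted sum of squares of residuals, the analogue of SIGMA
resid = w * (f - fit(x, y, grid=False))
sigma = float(np.sum(resid ** 2))
```

With three interior knots per axis the fit has (3+4)*(3+4) = 49 coefficients, so 400 well-scattered data points comfortably determine a unique least-squares solution (the RANK = NC case above).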
9. Example
 Determine the definite integral over the interval 0<=x<=6 of a
 cubic spline having 6 interior knots at the positions (lambda)=1,
 3, 3, 3, 4, 4, the 8 additional knots 0, 0, 0, 0, 6, 6, 6, 6, and
 the 10 Bspline coefficients 10, 12, 13, 15, 22, 26, 24, 18, 14,
 12.
+ This example program reads a value for (epsilon), and a set of
+ data points, weights and knot positions. If there are more y
+ knots than x knots, it interchanges the x and y axes. It calls
+ E02ZAF to sort the data points into panel order, E02DAF to fit a
+ bicubic spline to them, and E02DEF to evaluate the spline at the
+ data points.
 The input data items (using the notation of Section 5) comprise
 the following values in the order indicated:
+ Finally it prints:

+ the weighted sum of squares of residuals computed from the
+ linear equations;
 n
+ the rank determined by E02DAF;
 LAMDA(j) for j = 1,2,...,NCAP7
 ,
+ data points, fitted values and residuals in panel order;
 C(j), for j = 1,2,...,NCAP73
+ the weighted sum of squares of the residuals;
 The example program is written in a general form that will enable
 the definite integral of a cubic spline having an arbitrary
 number of knots to be computed. Any number of data sets may be
 supplied. The only changes required to the program relate to the
 dimensions of the arrays LAMDA and C.
+ the coefficients of the spline fit.
+
+ The program is written to handle any number of data sets.
+
+ Note: the data supplied in this example is not typical of a
+ realistic problem: the number of data points would normally be
+ much larger (in which case the array dimensions and the value of
+ NWS in the program would have to be increased); and the value of
+ (epsilon) would normally be much smaller on most machines (see
+ Section 5; the relatively large value of 10**(-6) has been chosen
+ in order to illustrate a minimal least-squares solution when
+ RANK < NC; in this example NC = 24).
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ 87300,8 +89702,8 @@ have been determined.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02  Curve and Surface Fitting E02BEF
 E02BEF  NAG Foundation Library Routine Document
+ E02  Curve and Surface Fitting E02DCF
+ E02DCF  NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementationdependent details.
@@ 87310,108 +89712,122 @@ have been determined.
1. Purpose
 E02BEF computes a cubic spline approximation to an arbitrary set
 of data points. The knots of the spline are located
 automatically, but a single parameter must be specified to
 control the tradeoff between closeness of fit and smoothness of
 fit.
+ E02DCF computes a bicubic spline approximation to a set of data
+ values, given on a rectangular grid in the xy plane. The knots
+ of the spline are located automatically, but a single parameter
+ must be specified to control the trade-off between closeness of
+ fit and smoothness of fit.
2. Specification
 SUBROUTINE E02BEF (START, M, X, Y, W, S, NEST, N, LAMDA,
 1 C, FP, WRK, LWRK, IWRK, IFAIL)
 INTEGER M, NEST, N, LWRK, IWRK(NEST), IFAIL
 DOUBLE PRECISION X(M), Y(M), W(M), S, LAMDA(NEST), C(NEST),
 1 FP, WRK(LWRK)
+ SUBROUTINE E02DCF (START, MX, X, MY, Y, F, S, NXEST,
+ 1 NYEST, NX, LAMDA, NY, MU, C, FP, WRK,
+ 2 LWRK, IWRK, LIWRK, IFAIL)
+ INTEGER MX, MY, NXEST, NYEST, NX, NY, LWRK, IWRK
+ 1 (LIWRK), LIWRK, IFAIL
+ DOUBLE PRECISION X(MX), Y(MY), F(MX*MY), S, LAMDA(NXEST),
+ 1 MU(NYEST), C((NXEST-4)*(NYEST-4)), FP, WRK
+ 2 (LWRK)
CHARACTER*1 START
3. Description
 This routine determines a smooth cubic spline approximation s(x)
 to the set of data points (x ,y ), with weights w , for
 r r r
 r=1,2,...,m.
+ This routine determines a smooth bicubic spline approximation
+ s(x,y) to the set of data points (x ,y ,f ), for q=1,2,...,m
+ q r q,r x
+ and r=1,2,...,m .
+ y
The spline is given in the Bspline representation
 n4
 
 s(x)= > c N (x) (1)
  i i
 i=1

 where N (x) denotes the normalised cubic Bspline defined upon
 i
 the knots (lambda) ,(lambda) ,...,(lambda) .
 i i+1 i+4
+ n -4 n -4
+ x y
+ -- --
+ s(x,y)= > > c M (x)N (y), (1)
+ -- -- ij i j
+ i=1 j=1
 The total number n of these knots and their values
 (lambda) ,...,(lambda) are chosen automatically by the routine.
 1 n
 The knots (lambda) ,...,(lambda) are the interior knots; they
 5 n4
 divide the approximation interval [x ,x ] into n7 subintervals.
 1 m
 The coefficients c ,c ,...,c are then determined as the
 1 2 n4
+ where M (x) and N (y) denote normalised cubic Bsplines, the
+ i j
+ former defined on the knots (lambda) to (lambda) and the
+ i i+4
+ latter on the knots (mu) to (mu) . For further details, see
+ j j+4
+ Hayes and Halliday [4] for bicubic splines and de Boor [1] for
+ normalised B-splines.
+
+ The total numbers n and n of these knots and their values
+ x y
+ (lambda) ,...,(lambda) and (mu) ,...,(mu) are chosen
+ 1 n 1 n
+ x y
+ automatically by the routine. The knots (lambda) ,...,
+ 5
+ (lambda) and (mu) ,...,(mu) are the interior knots; they
+ n -4 5 n -4
+ x y
+ divide the approximation domain [x ,x ]*[y ,y ] into
+ 1 m 1 m
+ x y
+ (n -7)*(n -7) subpanels [(lambda) ,(lambda) ]*[(mu) ,(mu) ],
+ x y i i+1 j j+1
+ for i=4,5,...,n -4, j=4,5,...,n -4. Then, much as in the curve
+ x y
+ case (see E02BEF), the coefficients c are determined as the
+ ij
solution of the following constrained minimization problem:
minimize
 n4
  2
 (eta)= > (delta) (2)
  i
 i=5
+ (eta), (2)
subject to the constraint
 m
  2
 (theta)= > (epsilon) <=S (3)
  r
 r=1
+ m m
+ x y
+   2
+ (theta)= > > (epsilon) <=S, (3)
+   q,r
+ q=1 r=1
 where: (delta) stands for the discontinuity jump in the third
 i order derivative of s(x) at the interior knot
 (lambda) ,
 i
+ where (eta) is a measure of the (lack of) smoothness of s(x,y).
+ Its value depends on the discontinuity jumps in
+ s(x,y) across the boundaries of the subpanels. It is
+ zero only when there are no discontinuities and is
+ positive otherwise, increasing with the size of the
+ jumps (see Dierckx [2] for details).
 (epsilon) denotes the weighted residual w (y s(x )),
 r r r r
+ (epsilon) denotes the residual f -s(x ,y ),
+ q,r q,r q r
 and S is a nonnegative number to be specified by
 the user.
+ and S is a non-negative number to be specified by the user.
 The quantity (eta) can be seen as a measure of the (lack of)
 smoothness of s(x), while closeness of fit is measured through
 (theta). By means of the parameter S, 'the smoothing factor', the
 user will then control the balance between these two (usually
 conflicting) properties. If S is too large, the spline will be
 too smooth and signal will be lost (underfit); if S is too small,
 the spline will pick up too much noise (overfit). In the extreme
 cases the routine will return an interpolating spline ((theta)=0)
 if S is set to zero, and the weighted leastsquares cubic
 polynomial ((eta)=0) if S is set very large. Experimenting with S
 values between these two extremes should result in a good
 compromise. (See Section 8.2 for advice on choice of S.)
+ By means of the parameter S, 'the smoothing factor', the user
+ will then control the balance between smoothness and closeness of
+ fit, as measured by the sum of squares of residuals in (3). If S
+ is too large, the spline will be too smooth and signal will be
+ lost (underfit); if S is too small, the spline will pick up too
+ much noise (overfit). In the extreme cases the routine will
+ return an interpolating spline ((theta)=0) if S is set to zero,
+ and the least-squares bicubic polynomial ((eta)=0) if S is set
+ very large. Experimenting with S-values between these two
+ extremes should result in a good compromise. (See Section 8.3 for
+ advice on choice of S.)
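E02DCF is based on the same Dierckx grid-smoothing algorithm that SciPy wraps as RectBivariateSpline, so the two extremes of S described above can be seen on synthetic grid data (an illustration, not a call to the NAG routine):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# data on a rectangular grid: f[q, r] = sin(x_q) * cos(y_r)
x = np.linspace(0.0, 4.0, 25)
y = np.linspace(0.0, 4.0, 20)
f = np.sin(x)[:, None] * np.cos(y)[None, :]

# s=0 gives an interpolating spline (theta = 0); a large s gives a
# heavily smoothed fit approaching the least-squares bicubic polynomial
interp = RectBivariateSpline(x, y, f, kx=3, ky=3, s=0.0)
smooth = RectBivariateSpline(x, y, f, kx=3, ky=3, s=1.0)

theta_interp = float(np.sum((f - interp(x, y)) ** 2))  # essentially zero
theta_smooth = float(np.sum((f - smooth(x, y)) ** 2))  # close to s, or below it
```

The smoothed fit's residual sum lands at (or below) the requested S, within the algorithm's relative tolerance, exactly as the FP parameter in Section 5 describes.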
 The method employed is outlined in Section 8.3 and fully
 described in Dierckx [1], [2] and [3]. It involves an adaptive
 strategy for locating the knots of the cubic spline (depending on
 the function underlying the data and on the value of S), and an
 iterative method for solving the constrained minimization problem
 once the knots have been determined.
+ The method employed is outlined in Section 8.5 and fully
+ described in Dierckx [2] and [3]. It involves an adaptive
+ strategy for locating the knots of the bicubic spline (depending
+ on the function underlying the data and on the value of S), and
+ an iterative method for solving the constrained minimization
+ problem once the knots have been determined.
 Values of the computed spline, or of its derivatives or definite
 integral, can subsequently be computed by calling E02BBF, E02BCF
 or E02BDF, as described in Section 8.4.
+ Values of the computed spline can subsequently be computed by
+ calling E02DEF or E02DFF as described in Section 8.6.
4. References
 [1] Dierckx P (1975) An Algorithm for Smoothing, Differentiating
 and Integration of Experimental Data Using Spline Functions.
 J. Comput. Appl. Math. 1 165184.
+ [1] De Boor C (1972) On Calculating with B-splines. J. Approx.
+ Theory. 6 50-62.
[2] Dierckx P (1982) A Fast Algorithm for Smoothing Data on a
Rectangular Grid while using Spline Functions. SIAM J.
@@ 87421,7 +89837,11 @@ have been determined.
with Spline Functions. Report TW54. Department of Computer
Science, Katholieke Universiteit Leuven.
 [4] Reinsch C H (1967) Smoothing by Spline Functions. Num. Math.
+ [4] Hayes J G and Halliday J (1974) The Least-squares Fitting of
+ Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
+ Appl. 14 89-103.
+
+ [5] Reinsch C H (1967) Smoothing by Spline Functions. Num. Math.
10 177-183.
5. Parameters
@@ 87431,37 +89851,50 @@ have been determined.
If START = 'C' (Cold start), the routine will build up the
knot set starting with no interior knots. No values need be
 assigned to the parameters N, LAMDA, WRK or IWRK.
+ assigned to the parameters NX, NY, LAMDA, MU, WRK or IWRK.
If START = 'W' (Warm start), the routine will restart the
knotplacing strategy using the knots found in a previous
 call of the routine. In this case, the parameters N, LAMDA,
 WRK, and IWRK must be unchanged from that previous call.
 This warm start can save much time in searching for a
+ call of the routine. In this case, the parameters NX, NY,
+ LAMDA, MU, WRK and IWRK must be unchanged from that previous
+ call. This warm start can save much time in searching for a
satisfactory value of S. Constraint: START = 'C' or 'W'.
 2: M  INTEGER Input
 On entry: m, the number of data points. Constraint: M >= 4.
+ 2: MX  INTEGER Input
+ On entry: m , the number of grid points along the x axis.
+ x
+ Constraint: MX >= 4.
 3: X(M)  DOUBLE PRECISION array Input
 On entry: the values x of the independent variable
 r
 (abscissa) x, for r=1,2,...,m. Constraint: x = 4.
 5: W(M)  DOUBLE PRECISION array Input
 On entry: the values w of the weights, for r=1,2,...,m.
 r
 For advice on the choice of weights, see the Chapter
 Introduction, Section 2.1.2. Constraint: W(r) > 0, for
 r=1,2,...,m.
+ 5: Y(MY)  DOUBLE PRECISION array Input
+ On entry: Y(r) must be set to y , the y coordinate of the
+ r
+ rth grid point along the y axis, for r=1,2,...,m .
+ y
+ Constraint: y = 0.0.
 7: NEST  INTEGER Input
 On entry: an overestimate for the number, n, of knots
 required. Constraint: NEST >= 8. In most practical
 situations, NEST = M/2 is sufficient. NEST never needs to be
 larger than M + 4, the number of knots needed for
 interpolation (S = 0.0).
+ 8: NXEST  INTEGER Input
 8: N  INTEGER Input/Output
 On entry: if the warm start option is used, the value of N
+ 9: NYEST  INTEGER Input
+ On entry: an upper bound for the number of knots n and n
+ x y
+ required in the x and ydirections respectively.
+
+ In most practical situations, NXEST = m /2 and NYEST = m /2 is
+ x y
+ sufficient. NXEST and NYEST never need to be larger than
+ m +4 and m +4 respectively, the numbers of knots needed for
+ x y
+ interpolation (S=0.0). See also Section 8.4. Constraint:
+ NXEST >= 8 and NYEST >= 8.
+
+ 10: NX  INTEGER Input/Output
+ On entry: if the warm start option is used, the value of NX
must be left unchanged from the previous call. On exit: the
 total number, n, of knots of the computed spline.
+ total number of knots, n , of the computed spline with
+ x
+ respect to the x variable.
 9: LAMDA(NEST)  DOUBLE PRECISION array Input/Output
+ 11: LAMDA(NXEST)  DOUBLE PRECISION array Input/Output
On entry: if the warm start option is used, the values
 LAMDA(1), LAMDA(2),...,LAMDA(N) must be left unchanged from
 the previous call. On exit: the knots of the spline i.e.,
 the positions of the interior knots LAMDA(5), LAMDA(6),...
 ,LAMDA(N4) as well as the positions of the additional knots
 LAMDA(1) = LAMDA(2) = LAMDA(3) = LAMDA(4) = x and
 1
+ LAMDA(1), LAMDA(2),...,LAMDA(NX) must be left unchanged from
+ the previous call. On exit: LAMDA contains the complete set
+ of knots (lambda) associated with the x variable, i.e., the
+ i
+ interior knots LAMDA(5), LAMDA(6), ..., LAMDA(NX-4) as well
+ as the additional knots LAMDA(1) = LAMDA(2) = LAMDA(3) =
+ LAMDA(4) = X(1) and LAMDA(NX-3) = LAMDA(NX-2) = LAMDA(NX-1)
+ = LAMDA(NX) = X(MX) needed for the B-spline representation.
 LAMDA(N3) = LAMDA(N2) = LAMDA(N1) = LAMDA(N) = x needed
 m
 for the Bspline representation.
+ 12: NY  INTEGER Input/Output
+ On entry: if the warm start option is used, the value of NY
+ must be left unchanged from the previous call. On exit: the
+ total number of knots, n , of the computed spline with
+ y
+ respect to the y variable.
 10: C(NEST)  DOUBLE PRECISION array Output
 On exit: the coefficient c of the Bspline N (x) in the
 i i
 spline approximation s(x), for i=1,2,...,n4.
+ 13: MU(NYEST)  DOUBLE PRECISION array Input/Output
+ On entry: if the warm start option is used, the values MU
+ (1), MU(2),...,MU(NY) must be left unchanged from the
+ previous call. On exit: MU contains the complete set of
+ knots (mu) associated with the y variable, i.e., the
+ i
+ interior knots MU(5), MU(6),...,MU(NY-4) as well as the
+ additional knots MU(1) = MU(2) = MU(3) = MU(4) = Y(1) and
+ MU(NY-3) = MU(NY-2) = MU(NY-1) = MU(NY) = Y(MY) needed for
+ the B-spline representation.
 11: FP  DOUBLE PRECISION Output
 On exit: the sum of the squared weighted residuals, (theta),
 of the computed spline approximation. If FP = 0.0, this is
 an interpolating spline. FP should equal S within a relative
 tolerance of 0.001 unless n=8 when the spline has no
 interior knots and so is simply a cubic polynomial. For
+ 14: C((NXEST-4)*(NYEST-4))  DOUBLE PRECISION array Output
+ On exit: the coefficients of the spline approximation.
+ C((n -4)*(i-1)+j) is the coefficient c defined in Section 3.
+ y ij
+
+ 15: FP  DOUBLE PRECISION Output
+ On exit: the sum of squared residuals, (theta), of the
+ computed spline approximation. If FP = 0.0, this is an
+ interpolating spline. FP should equal S within a relative
+ tolerance of 0.001 unless NX = NY = 8, when the spline has
+ no interior knots and so is simply a bicubic polynomial. For
knots to be inserted, S must be set to a value below the
value of FP produced in this case.
 12: WRK(LWRK)  DOUBLE PRECISION array Workspace
+ 16: WRK(LWRK)  DOUBLE PRECISION array Workspace
On entry: if the warm start option is used, the values WRK
 (1),...,WRK(n) must be left unchanged from the previous
+ (1),...,WRK(4) must be left unchanged from the previous
call.
 13: LWRK  INTEGER Input
+ This array is used as workspace.
+
+ 17: LWRK  INTEGER Input
On entry:
the dimension of the array WRK as declared in the
 (sub)program from which E02BEF is called.
 Constraint: LWRK>=4*M+16*NEST+41.
+ (sub)program from which E02DCF is called.
+ Constraint:
+ LWRK>=4*(MX+MY)+11*(NXEST+NYEST)+NXEST*MY
 14: IWRK(NEST)  INTEGER array Workspace
+ +max(MY,NXEST)+54.
+
+ 18: IWRK(LIWRK)  INTEGER array Workspace
On entry: if the warm start option is used, the values IWRK
 (1), ..., IWRK(n) must be left unchanged from the previous
+ (1), ..., IWRK(3) must be left unchanged from the previous
call.
This array is used as workspace.
 15: IFAIL  INTEGER Input/Output
+ 19: LIWRK  INTEGER Input
+ On entry:
+ the dimension of the array IWRK as declared in the
+ (sub)program from which E02DCF is called.
+ Constraint: LIWRK >= 3 + MX + MY + NXEST + NYEST.
+
+ 20: IFAIL  INTEGER Input/Output
On entry: IFAIL must be set to 0, 1 or 1. For users not
familiar with this parameter (described in the Essential
Introduction) the recommended value is 0.
@@ 87547,29 +90017,40 @@ have been determined.
IFAIL= 1
On entry START /= 'C' or 'W',
 or M < 4,
+ or MX < 4,
+
+ or MY < 4,
or S < 0.0,
 or S = 0.0 and NEST < M + 4,
+ or S = 0.0 and NXEST < MX + 4,
 or NEST < 8,
+ or S = 0.0 and NYEST < MY + 4,
 or LWRK<4*M+16*NEST+41.
+ or NXEST < 8,
+
+ or NYEST < 8,
+
+ or LWRK < 4*(MX+MY)+11*(NXEST+NYEST)+NXEST*MY
+ +max(MY,NXEST)+54
+
+ or LIWRK < 3 + MX + MY + NXEST + NYEST.
IFAIL= 2
 The weights are not all strictly positive.
+ The values of X(q), for q = 1,2,...,MX, are not in strictly
+ increasing order.
IFAIL= 3
 The values of X(r), for r=1,2,...,M, are not in strictly
+ The values of Y(r), for r = 1,2,...,MY, are not in strictly
increasing order.
IFAIL= 4
 The number of knots required is greater than NEST. Try
 increasing NEST and, if necessary, supplying larger arrays
 for the parameters LAMDA, C, WRK and IWRK. However, if NEST
 is already large, say NEST > M/2, then this error exit may
 indicate that S is too small.
+ The number of knots required is greater than allowed by
+ NXEST and NYEST. Try increasing NXEST and/or NYEST and, if
+ necessary, supplying larger arrays for the parameters LAMDA,
+ MU, C, WRK and IWRK. However, if NXEST and NYEST are already
+ large, say NXEST > MX/2 and NYEST > MY/2, then this error
+ exit may indicate that S is too small.
IFAIL= 5
The iterative process used to compute the coefficients of
@@ 87579,148 +90060,175 @@ have been determined.
If IFAIL = 4 or 5, a spline approximation is returned, but it
fails to satisfy the fitting criterion (see (2) and (3) in
 Section 3)  perhaps by only a small amount, however.
+ Section 3) - perhaps by only a small amount, however.
7. Accuracy
On successful exit, the approximation returned is such that its
 weighted sum of squared residuals FP is equal to the smoothing
 factor S, up to a specified relative tolerance of 0.001  except
 that if n=8, FP may be significantly less than S: in this case
 the computed spline is simply a weighted leastsquares polynomial
 approximation of degree 3, i.e., a spline with no interior knots.
+ sum of squared residuals FP is equal to the smoothing factor S,
+ up to a specified relative tolerance of 0.001  except that if
+ n =8 and n =8, FP may be significantly less than S: in this case
+ x y
+ the computed spline is simply the least-squares bicubic
+ polynomial approximation of degree 3, i.e., a spline with no
+ interior knots.
8. Further Comments
8.1. Timing
 The time taken for a call of E02BEF depends on the complexity of
+ The time taken for a call of E02DCF depends on the complexity of
the shape of the data, the value of the smoothing factor S, and
 the number of data points. If E02BEF is to be called for
+ the number of data points. If E02DCF is to be called for
different values of S, much time can be saved by setting START =
 8.2. Choice of S
+ 8.2. Weighting of Data Points
 If the weights have been correctly chosen (see Section 2.1.2 of
 the Chapter Introduction), the standard deviation of w y would
 r r
 be the same for all r, equal to (sigma), say. In this case,
 2
 choosing the smoothing factor S in the range (sigma) (m+\/2m),
 as suggested by Reinsch [4], is likely to give a good start in
 the search for a satisfactory value. Otherwise, experimenting
 with different values of S will be required from the start,
 taking account of the remarks in Section 3.
+ E02DCF does not allow individual weighting of the data values. If
+ these were determined to widely differing accuracies, it may be
+ better to use E02DDF. The computation time would be very much
+ longer, however.
+
+ 8.3. Choice of S
+
+ If the standard deviation of f is the same for all q and r
+ q,r
+ (the case for which this routine is designed - see Section 8.2.)
+ and known to be equal, at least approximately, to (sigma), say,
+ then following Reinsch [5] and choosing the smoothing factor S in
+ 2
+ the range (sigma) (m+-\/2m), where m=m m , is likely to give a
+ x y
+ good start in the search for a satisfactory value. If the
+ standard deviations vary, the sum of their squares over all the
+ data points could be used. Otherwise experimenting with different
+ values of S will be required from the start, taking account of
+ the remarks in Section 3.
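This rule of thumb is easy to sketch outside the library; the following illustrative Python helper (not part of the NAG Library, and the name reinsch_s_range is invented here) computes the suggested starting range for S:

```python
import math

def reinsch_s_range(sigma, mx, my):
    # Reinsch's rule of thumb: with m = mx*my grid points whose common
    # (approximate) standard deviation is sigma, try S in the range
    # sigma**2 * (m - sqrt(2m)) ... sigma**2 * (m + sqrt(2m)).
    m = mx * my
    half_width = math.sqrt(2.0 * m)
    return sigma**2 * (m - half_width), sigma**2 * (m + half_width)

lo, hi = reinsch_s_range(0.1, 11, 9)  # e.g. an 11 by 9 grid, sigma = 0.1
print(lo, hi)
```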
In that case, in view of computation time and memory
requirements, it is recommended to start with a very large value
 for S and so determine the least-squares cubic polynomial; the
+ for S and so determine the least-squares bicubic polynomial; the
value returned for FP, call it FP_0, gives an upper bound for S.
Then progressively decrease the value of S to obtain closer fits
 - say by a factor of 10 in the beginning, i.e., S=FP_0/10,
 S=FP_0/100, and so on, and more carefully as the approximation
 shows more details.
+ - say by a factor of 10 in the beginning, i.e., S=FP_0/10,
+ S=FP_0/100, and so on, and more carefully as the approximation
+ shows more details.
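The trial schedule described above - divide the first returned FP by successive powers of 10 - can be sketched as follows (illustrative Python, names invented here):

```python
def s_trials(fp0, coarse=3):
    # After the very-large-S fit returned FP = fp0, the first coarse
    # trial values of the smoothing factor are fp0/10, fp0/100, ...
    return [fp0 / 10**k for k in range(1, coarse + 1)]

print(s_trials(1000.0))  # -> [100.0, 10.0, 1.0]
```

Once the approximation begins to show the wanted detail, smaller steps between successive S values would be used instead of whole powers of 10.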
The number of knots of the spline returned, and their location,
generally depend on the value of S and on the behaviour of the
 function underlying the data. However, if E02BEF is called with
+ function underlying the data. However, if E02DCF is called with
START = 'W', the knots returned may also depend on the smoothing
factors of the previous calls. Therefore if, after a number of
trials with different values of S and START = 'W', a fit can
finally be accepted as satisfactory, it may be worthwhile to call
 E02BEF once more with the selected value for S but now using
 START = 'C'. Often, E02BEF then returns an approximation with the
+ E02DCF once more with the selected value for S but now using
+ START = 'C'. Often, E02DCF then returns an approximation with the
same quality of fit but with fewer knots, which is therefore
better if data reduction is also important.
 8.3. Outline of Method Used
+ 8.4. Choice of NXEST and NYEST
+
+ The number of knots may also depend on the upper bounds NXEST
+ and NYEST. Indeed, if at a certain stage in E02DCF the number of
+ knots in one direction (say n_x) has reached the value of its
+ upper bound (NXEST), then from that moment on all subsequent
+ knots are added in the other (y) direction. Therefore the user
+ has the option of limiting the number of knots the routine
+ locates in any direction. For example, by setting NXEST = 8 (the
+ lowest allowable value for NXEST), the user can indicate that he
+ wants an approximation which is a simple cubic polynomial in the
+ variable x.
+
+ 8.5. Outline of Method Used
If S=0, the requisite number of knots is known in advance, i.e.,
 n=m+4; the interior knots are located immediately as
 (lambda)_i = x_{i-2}, for i=5,6,...,n-4. The corresponding
 least-squares spline (see E02BAF) is then an interpolating
 spline and therefore a solution of the problem.
+ n_x=m_x+4 and n_y=m_y+4; the interior knots are located
+ immediately as (lambda)_i = x_{i-2} and (mu)_j = y_{j-2}, for
+ i=5,6,...,n_x-4 and j=5,6,...,n_y-4. The corresponding
+ least-squares spline is then an interpolating spline and
+ therefore a solution of the problem.
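For one variable this knot construction can be sketched as follows (illustrative Python using 0-based lists; not library code):

```python
def interpolating_knots(x):
    # S = 0 case in one variable: n = m+4 knots in all, with four
    # coincident knots at each end of the data range and interior
    # knots lambda_i = x_(i-2) for i = 5,...,n-4 (1-based, as in
    # the text above).
    m = len(x)
    interior = list(x[2:m - 2])        # x_3 ... x_(m-2) in 1-based terms
    return [x[0]] * 4 + interior + [x[-1]] * 4

k = interpolating_knots([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
print(len(k))  # m + 4 = 10
```

The bivariate case applies the same construction independently to the x and y data to obtain the two knot sets.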
 If S>0, a suitable knot set is built up in stages (starting with
+ If S>0, suitable knot sets are built up in stages (starting with
no interior knots in the case of a cold start but with the knot
set found in a previous call if a warm start is chosen). At each
 stage, a spline is fitted to the data by least-squares (see
 E02BAF) and (theta), the weighted sum of squares of residuals,
 is computed. If (theta)>S, new knots are added to the knot set
 to reduce (theta) at the next stage. The new knots are located in
+ stage, a bicubic spline is fitted to the data by least-squares,
+ and (theta), the sum of squares of residuals, is computed. If
+ (theta)>S, new knots are added to one knot set or the other so as
+ to reduce (theta) at the next stage. The new knots are located in
intervals where the fit is particularly poor, their number
depending on the value of S and on the progress made so far in
reducing (theta). Sooner or later, we find that (theta)<=S and at
 that point the knot set is accepted. The routine then goes on to
 compute the (unique) spline which has this knot set and which
 satisfies the full fitting criterion specified by (2) and (3).
 The theoretical solution has (theta)=S. The routine computes the
 spline by an iterative scheme which is ended when (theta)=S
+ that point the knot sets are accepted. The routine then goes on
+ to compute the (unique) spline which has these knot sets and
+ which satisfies the full fitting criterion specified by (2) and
+ (3). The theoretical solution has (theta)=S. The routine computes
+ the spline by an iterative scheme which is ended when (theta)=S
within a relative tolerance of 0.001. The main part of each
iteration consists of a linear least-squares computation of
special form, done in a similarly stable and efficient manner as
 in E02BAF.
+ in E02BAF for least-squares curve fitting.
An exception occurs when the routine finds at the start that,
 even with no interior knots (n=8), the least-squares spline
 already has its weighted sum of squares of residuals <=S. In this
 case, since this spline (which is simply a cubic polynomial) also
 has an optimal value for the smoothness measure (eta), namely
 zero, it is returned at once as the (trivial) solution. It will
 usually mean that S has been chosen too large.

 For further details of the algorithm and its use, see Dierckx [3]

 8.4. Evaluation of Computed Spline

 The value of the computed spline at a given value X may be
 obtained in the double precision variable S by the call:


 CALL E02BBF(N,LAMDA,C,X,S,IFAIL)

 where N, LAMDA and C are the output parameters of E02BEF.

 The values of the spline and its first three derivatives at a
 given value X may be obtained in the double precision array SDIF
 of dimension at least 4 by the call:
+ even with no interior knots (n_x=n_y=8), the least-squares
+ spline already has its sum of squared residuals <=S. In this
+ case, since this spline (which is simply a bicubic polynomial)
+ also has an optimal value for the smoothness measure (eta),
+ namely zero, it is returned at once as the (trivial) solution.
+ It will usually mean that S has been chosen too large.
 CALL E02BCF(N,LAMDA,C,X,LEFT,SDIF,IFAIL)
+ For further details of the algorithm and its use see Dierckx [2].
 where if LEFT = 1, left-hand derivatives are computed and if LEFT
 /= 1, right-hand derivatives are calculated. The value of LEFT is
 only relevant if X is an interior knot.
+ 8.6. Evaluation of Computed Spline
 The value of the definite integral of the spline over the
 interval X(1) to X(M) can be obtained in the double precision
 variable SINT by the call:
+ The values of the computed spline at the points (TX(r),TY(r)),
+ for r = 1,2,...,N, may be obtained in the double precision array
+ FF, of length at least N, by the following code:
 CALL E02BDF(N,LAMDA,C,SINT,IFAIL)
+ IFAIL = 0
+ CALL E02DEF(N,NX,NY,TX,TY,LAMDA,MU,C,FF,WRK,IWRK,IFAIL)
 9. Example
+ where NX, NY, LAMDA, MU and C are the output parameters of
+ E02DCF, WRK is a double precision workspace array of length at
+ least NY-4, and IWRK is an integer workspace array of length at
+ least NY-4.
 This example program reads in a set of data values, followed by
 a set of values of S. For each value of S it calls E02BEF to
 compute a spline approximation, and prints the values of the
 knots and the B-spline coefficients c_i.
+ To evaluate the computed spline on a KX by KY rectangular grid
+ of points in the x-y plane, which is defined by the x
+ coordinates stored in TX(q), for q=1,2,...,KX, and the y
+ coordinates stored in TY(r), for r=1,2,...,KY, returning the
+ results in the double precision array FG which is of length at
+ least KX*KY, the following call may be used:
 The program includes code to evaluate the computed splines, by
 calls to E02BBF, at the points x_r and at points midway between
 them. These values are not printed out, however; instead the
 results are illustrated by plots of the computed splines,
 together with the data points (indicated by *) and the positions
 of the knots (indicated by vertical lines): the effect of
 decreasing S can be clearly seen. (The plots were obtained by
 calling NAG Graphical Supplement routine J06FAF(*).)
+ IFAIL = 0
+ CALL E02DFF(KX,KY,NX,NY,TX,TY,LAMDA,MU,C,FG,WRK,LWRK,
+ * IWRK,LIWRK,IFAIL)
+
+ where NX, NY, LAMDA, MU and C are the output parameters of
+ E02DCF, WRK is a double precision workspace array of length at
+ least LWRK = min(NWRK1,NWRK2), NWRK1 = KX*4+NX, NWRK2 = KY*4+NY,
+ and IWRK is an integer workspace array of length at least
+ LIWRK = KY + NY - 4 if NWRK1 >= NWRK2, or KX + NX - 4 otherwise.
+ The result of the spline evaluated at grid point (q,r) is
+ returned in element (KY*(q-1)+r) of the array FG.
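The FG storage rule can be checked with a small helper (illustrative Python, name invented here):

```python
def fg_index(q, r, ky):
    # 1-based position in FG of the spline value at grid point (q, r):
    # the y index r varies fastest, the x index q slowest.
    return ky * (q - 1) + r

# on a KX=3 by KY=2 grid, FG holds 6 values; point (q=2, r=1) -> FG(3)
print(fg_index(2, 1, 2))
```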
+ 9. Example
 Please see figures in printed Reference Manual
+ This example program reads in values of MX and MY, the x
+ coordinates x_q, for q=1,2,...,MX, the y coordinates y_r, for
+ r=1,2,...,MY, and the ordinates f_{q,r} defined at the grid
+ points (x_q,y_r). It then calls E02DCF to compute a bicubic
+ spline approximation for one specified value of S, and prints
+ the values of the computed knots and B-spline coefficients.
+ Finally it evaluates the spline at a small sample of points on
+ a rectangular grid.
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ 87728,8 +90236,8 @@ have been determined.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02 - Curve and Surface Fitting E02DAF
 E02DAF - NAG Foundation Library Routine Document
+ E02 - Curve and Surface Fitting E02DDF
+ E02DDF - NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementationdependent details.
@@ 87738,946 +90246,738 @@ have been determined.
1. Purpose
 E02DAF forms a minimal, weighted least-squares bicubic spline
 surface fit with prescribed knots to a given set of data points.
+ E02DDF computes a bicubic spline approximation to a set of
+ scattered data. The knots of the spline are located
+ automatically, but a single parameter must be specified to
+ control the trade-off between closeness of fit and smoothness
+ of fit.
2. Specification
 SUBROUTINE E02DAF (M, PX, PY, X, Y, F, W, LAMDA, MU,
 1 POINT, NPOINT, DL, C, NC, WS, NWS, EPS,
 2 SIGMA, RANK, IFAIL)
 INTEGER M, PX, PY, POINT(NPOINT), NPOINT, NC, NWS,
 1 RANK, IFAIL
 DOUBLE PRECISION X(M), Y(M), F(M), W(M), LAMDA(PX), MU(PY),
 1 DL(NC), C(NC), WS(NWS), EPS, SIGMA
+ SUBROUTINE E02DDF (START, M, X, Y, F, W, S, NXEST, NYEST,
+ 1 NX, LAMDA, NY, MU, C, FP, RANK, WRK,
+ 2 LWRK, IWRK, LIWRK, IFAIL)
+ INTEGER M, NXEST, NYEST, NX, NY, RANK, LWRK, IWRK
+ 1 (LIWRK), LIWRK, IFAIL
+ DOUBLE PRECISION X(M), Y(M), F(M), W(M), S, LAMDA(NXEST),
+ 1 MU(NYEST), C((NXEST4)*(NYEST4)), FP, WRK
+ 2 (LWRK)
+ CHARACTER*1 START
3. Description
 This routine determines a bicubic spline fit s(x,y) to the set
 of data points (x_r,y_r,f_r) with weights w_r, for r=1,2,...,m.
 The two sets of internal knots of the spline, {(lambda)} and
 {(mu)}, associated with the variables x and y respectively, are
 prescribed by the user. These knots can be thought of as
 dividing the data region of the (x,y) plane into panels (see
 diagram in Section 5). A bicubic spline consists of a separate
 bicubic polynomial in each panel, the polynomials joining
 together with continuity up to the second derivative across the
 panel boundaries.

 s(x,y) has the property that (Sigma), the sum of squares of its
 weighted residuals (rho)_r, for r=1,2,...,m, where

 (rho)_r = w_r(s(x_r,y_r)-f_r), (1)

 is as small as possible for a bicubic spline with the given knot
 sets. The routine produces this minimized value of (Sigma) and
 the coefficients c_ij in the B-spline representation of s(x,y) -
 see Section 8. E02DEF and E02DFF are available to compute
 values of the fitted spline from the coefficients c_ij.

 The least-squares criterion is not always sufficient to
 determine the bicubic spline uniquely: there may be a whole
 family of splines which have the same minimum sum of squares. In
 these cases, the routine selects from this family the spline for
 which the sum of squares of the coefficients c_ij is smallest:
 in other words, the minimal least-squares solution. This choice,
 although arbitrary, reduces the risk of unwanted fluctuations in
 the spline fit. The method employed involves forming a system of
 m linear equations in the coefficients c_ij and then computing
 its least-squares solution, which will be the minimal
 least-squares solution when appropriate. The basis of the method
 is described in Hayes and Halliday [4]. The matrix of the
 equation is formed using a recurrence relation for B-splines
 which is numerically stable (see Cox [1] and de Boor [2] - the
 former contains the more elementary derivation but, unlike [2],
 does not cover the case of coincident knots). The least-squares
 solution is also obtained in a stable manner by using orthogonal
 transformations, viz. a variant of Givens rotation (see
 Gentleman [3]). This requires only one row of the matrix to be
 stored at a time. Advantage is taken of the stepped-band
 structure which the matrix possesses when the data points are
 suitably ordered, there being at most sixteen non-zero elements
 in any row because of the definition of B-splines. First the
 matrix is reduced to upper triangular form and then the diagonal
 elements of this triangle are examined in turn. When an element
 is encountered whose square, divided by the mean squared weight,
 is less than a threshold (epsilon), it is replaced by zero and
 the rest of the elements in its row are reduced to zero by
 rotations with the remaining rows. The rank of the system is
 taken to be the number of non-zero diagonal elements in the
 final triangle, and the non-zero rows of this triangle are used
 to compute the minimal least-squares solution. If all the
 diagonal elements are non-zero, the rank is equal to the number
 of coefficients c_ij and the solution obtained is the ordinary
 least-squares solution, which is unique in this case.

 4. References

 [1] Cox M G (1972) The Numerical Evaluation of B-splines. J.
 Inst. Math. Appl. 10 134-149.

 [2] De Boor C (1972) On Calculating with B-splines. J. Approx.
 Theory. 6 50-62.

 [3] Gentleman W M (1973) Least-squares Computations by Givens
 Transformations without Square Roots. J. Inst. Math. Applic.
 12 329-336.

 [4] Hayes J G and Halliday J (1974) The Least-squares Fitting
 of Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
 Appl. 14 89-103.

 5. Parameters

 1: M  INTEGER Input
 On entry: the number of data points, m. Constraint: M > 1.

 2: PX  INTEGER Input

 3: PY  INTEGER Input
 On entry: the total number of knots (lambda) and (mu)
 associated with the variables x and y, respectively.
 Constraint: PX >= 8 and PY >= 8.

 (They are such that PX-8 and PY-8 are the corresponding
 numbers of interior knots.) The running time and storage
 required by the routine are both minimized if the axes are
 labelled so that PY is the smaller of PX and PY.

 4: X(M)  DOUBLE PRECISION array Input

 5: Y(M)  DOUBLE PRECISION array Input

 6: F(M)  DOUBLE PRECISION array Input
 On entry: the coordinates of the data point (x_r,y_r,f_r),
 for r=1,2,...,m. The order of the data points is immaterial,
 but see the array POINT, below.

 7: W(M)  DOUBLE PRECISION array Input
 On entry: the weight w_r of the rth data point. It is
 important to note the definition of weight implied by the
 equation (1) in Section 3, since it is also common usage to
 define weight as the square of this weight. In this routine,
 each w_r should be chosen inversely proportional to the
 (absolute) accuracy of the corresponding f_r, as expressed,
 for example, by the standard deviation or probable error of
 the f_r. When the f_r are all of the same accuracy, all the
 w_r may be set equal to 1.0.

 8: LAMDA(PX)  DOUBLE PRECISION array Input/Output
 On entry: LAMDA(i+4) must contain the ith interior knot
 (lambda)_{i+4} associated with the variable x, for
 i=1,2,...,PX-8. The knots must be in non-decreasing order
 and lie strictly within the range covered by the data values
 of x. A knot is a value of x at which the spline is allowed
 to be discontinuous in the third derivative with respect to
 x, though continuous up to the second derivative. This
 degree of continuity can be reduced, if the user requires,
 by the use of coincident knots, provided that no more than
 four knots are chosen to coincide at any point. Two, or
 three, coincident knots allow loss of continuity in,
 respectively, the second and first derivative with respect
 to x at the value of x at which they coincide. Four
 coincident knots split the spline surface into two
 independent parts. For choice of knots see Section 8. On
 exit: the interior knots LAMDA(5) to LAMDA(PX-4) are
 unchanged, and the segments LAMDA(1:4) and LAMDA(PX-3:PX)
 contain additional (exterior) knots introduced by the
 routine in order to define the full set of B-splines
 required. The four knots in the first segment are all set
 equal to the lowest data value of x and the other four
 additional knots are all set equal to the highest value:
 there is experimental evidence that coincident end-knots
 are best for numerical accuracy. The complete array must be
 left undisturbed if E02DEF or E02DFF is to be used
 subsequently.

 9: MU(PY)  DOUBLE PRECISION array Input
 On entry: MU(i+4) must contain the ith interior knot
 (mu)_{i+4} associated with the variable y, i=1,2,...,PY-8.
 The same remarks apply to MU as to LAMDA above, with Y
 replacing X, and y replacing x.

 10: POINT(NPOINT)  INTEGER array Input
 On entry: indexing information usually provided by E02ZAF
 which enables the data points to be accessed in the order
 which produces the advantageous matrix structure mentioned
 in Section 3. This order is such that, if the (x,y) plane is
 thought of as being divided into rectangular panels by the
 two sets of knots, all data in a panel occur before data in
 succeeding panels, where the panels are numbered from bottom
 to top and then left to right with the usual arrangement of
 axes, as indicated in the diagram.

 Please see figure in printed Reference Manual

 A data point lying exactly on one or more panel sides is
 considered to be in the highest numbered panel adjacent to
 the point. E02ZAF should be called to obtain the array
 POINT, unless it is provided by other means.

 11: NPOINT  INTEGER Input
 On entry:
 the dimension of the array POINT as declared in the
 (sub)program from which E02DAF is called.
 Constraint: NPOINT >= M + (PX-7)*(PY-7).

 12: DL(NC)  DOUBLE PRECISION array Output
 On exit: DL gives the squares of the diagonal elements of
 the reduced triangular matrix, divided by the mean squared
 weight. It includes those elements, less than (epsilon),
 which are treated as zero (see Section 3).

 13: C(NC)  DOUBLE PRECISION array Output
 On exit: C gives the coefficients of the fit.
 C((PY-4)*(i-1)+j) is the coefficient c_ij of Section 3 and
 Section 8 for i=1,2,...,PX-4 and j=1,2,...,PY-4. These
 coefficients are used by E02DEF or E02DFF to calculate
 values of the fitted function.
+ This routine determines a smooth bicubic spline approximation
+ s(x,y) to the set of data points (x_r,y_r,f_r) with weights
+ w_r, for r=1,2,...,m.
 14: NC  INTEGER Input
 On entry: the value (PX-4)*(PY-4).
+ The approximation domain is considered to be the rectangle
+ [x_min,x_max]*[y_min,y_max], where x_min (y_min) and x_max
+ (y_max) denote the lowest and highest data values of x (y).
 15: WS(NWS)  DOUBLE PRECISION array Workspace
+ The spline is given in the Bspline representation
 16: NWS  INTEGER Input
 On entry:
 the dimension of the array WS as declared in the
 (sub)program from which E02DAF is called.
 Constraint: NWS>=(2*NC+1)*(3*PY-6)-2.
+ s(x,y) = sum(i=1..n_x-4) sum(j=1..n_y-4) c_ij M_i(x) N_j(y), (1)
 17: EPS  DOUBLE PRECISION Input
 On entry: a threshold (epsilon) for determining the
 effective rank of the system of linear equations. The rank
 is determined as the number of elements of the array DL (see
 below) which are nonzero. An element of DL is regarded as
 zero if it is less than (epsilon). Machine precision is a
 suitable value for (epsilon) in most practical applications
 which have only 2 or 3 decimals accurate in data. If some
 coefficients of the fit prove to be very large compared with
 the data ordinates, this suggests that (epsilon) should be
 increased so as to decrease the rank. The array DL will give
 a guide to appropriate values of (epsilon) to achieve this,
 as well as to the choice of (epsilon) in other cases where
 some experimentation may be needed to determine a value
 which leads to a satisfactory fit.
+ where M_i(x) and N_j(y) denote normalised cubic B-splines, the
+ former defined on the knots (lambda)_i to (lambda)_{i+4} and
+ the latter on the knots (mu)_j to (mu)_{j+4}. For further
+ details, see Hayes and Halliday [4] for bicubic splines and de
+ Boor [1] for normalised B-splines.
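Representation (1) can be evaluated directly with the Cox-de Boor recurrence; the sketch below (illustrative Python, not the library's algorithm - E02DEF and E02DFF are the supported evaluators) sums the tensor product over all coefficients:

```python
def bspline_basis(i, k, t, x):
    # Normalised B-spline of order k (degree k-1) on the knot list t,
    # evaluated at x, via the Cox-de Boor recurrence (0-based index i;
    # conventions: half-open intervals, 0/0 treated as 0).
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k - 1] != t[i]:
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k] != t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

def bicubic_spline(x, y, tx, ty, c):
    # s(x,y) = sum_ij c[i][j] M_i(x) N_j(y) with cubic (order 4) B-splines.
    return sum(c[i][j] * bspline_basis(i, 4, tx, x) * bspline_basis(j, 4, ty, y)
               for i in range(len(tx) - 4) for j in range(len(ty) - 4))

# clamped knot vectors with 2 interior knots each: 6 basis functions per axis;
# with all coefficients 1 the cubic B-splines sum to 1 (partition of unity)
tx = ty = [0.0] * 4 + [1.0, 2.0] + [3.0] * 4
c = [[1.0] * 6 for _ in range(6)]
print(bicubic_spline(0.5, 1.5, tx, ty, c))
```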
 18: SIGMA  DOUBLE PRECISION Output
 On exit: (Sigma), the weighted sum of squares of residuals.
 This is not computed from the individual residuals but from
 the right-hand sides of the orthogonally-transformed linear
 equations. For further details see Hayes and Halliday [4]
 page 97. The two methods of computation are theoretically
 equivalent, but the results may differ because of rounding
 error.
+ The total numbers n_x and n_y of these knots and their values
+ (lambda)_1,...,(lambda)_{n_x} and (mu)_1,...,(mu)_{n_y} are
+ chosen automatically by the routine. The knots
+ (lambda)_5,...,(lambda)_{n_x-4} and (mu)_5,...,(mu)_{n_y-4}
+ are the interior knots; they divide the approximation domain
+ [x_min,x_max]*[y_min,y_max] into (n_x-7)*(n_y-7) subpanels
+ [(lambda)_i,(lambda)_{i+1}]*[(mu)_j,(mu)_{j+1}], for
+ i=4,5,...,n_x-4; j=4,5,...,n_y-4. Then, much as in the curve
+ case (see E02BEF), the coefficients c_ij are determined as the
+ solution of the following constrained minimization problem:
 19: RANK  INTEGER Output
 On exit: the rank of the system as determined by the value
 of the threshold (epsilon). When RANK = NC, the
 least-squares solution is unique: in other cases the minimal
 least-squares solution is computed.
+ minimize
 20: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, 1 or 1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+ (eta), (2)
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+ subject to the constraint
 6. Error Indicators and Warnings
+ (theta) = sum(r=1..m) (epsilon)_r^2 <= S (3)
 Errors detected by the routine:
+ where: (eta) is a measure of the (lack of) smoothness of s(x,y).
+ Its value depends on the discontinuity jumps in
+ s(x,y) across the boundaries of the subpanels. It is
+ zero only when there are no discontinuities and is
+ positive otherwise, increasing with the size of the
+ jumps (see Dierckx [2] for details).
 IFAIL= 1
 At least one set of knots is not in non-decreasing order, or
 an interior knot is outside the range of the data values.
+ (epsilon)_r denotes the weighted residual w_r(f_r-s(x_r,y_r)),
 IFAIL= 2
 More than four knots coincide at a single point, possibly
 because all data points have the same value of x (or y) or
 because an interior knot coincides with an extreme data
 value.
+ and S is a non-negative number to be specified by the user.
 IFAIL= 3
 Array POINT does not indicate the data points in panel
 order. Call E02ZAF to obtain a correct array.
+ By means of the parameter S, 'the smoothing factor', the user
+ will then control the balance between smoothness and closeness
+ of fit, as measured by the sum of squares of residuals in (3).
+ If S is too large, the spline will be too smooth and signal
+ will be lost (underfit); if S is too small, the spline will
+ pick up too much noise (overfit). In the extreme cases the
+ method would return an interpolating spline ((theta)=0) if S
+ were set to zero, and returns the least-squares bicubic
+ polynomial ((eta)=0) if S is set very large. Experimenting with
+ S-values between these two extremes should result in a good
+ compromise. (See Section 8.2 for advice on choice of S.) Note
+ however, that this routine, unlike E02BEF and E02DCF, does not
+ allow S to be set exactly to zero: to compute an interpolant to
+ scattered data, E01SAF or E01SEF should be used.
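The quantity (theta) constrained in (3) is simply a weighted sum of squared residuals; as an illustrative Python sketch (names invented here, with the fitted values supplied directly):

```python
def theta(w, f, s_vals):
    # Left-hand side of constraint (3):
    # theta = sum_r (w_r * (f_r - s(x_r, y_r)))**2,
    # where s_vals[r] stands for the fitted value s(x_r, y_r).
    return sum((wr * (fr - sr)) ** 2 for wr, fr, sr in zip(w, f, s_vals))

# a candidate fit is acceptable under (3) when theta(...) <= S
print(theta([1.0, 1.0, 2.0], [1.0, 2.0, 3.0], [1.1, 1.9, 3.0]))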
 IFAIL= 4
 On entry M <= 1,
+ The method employed is outlined in Section 8.5 and fully
+ described in Dierckx [2] and [3]. It involves an adaptive
+ strategy for locating the knots of the bicubic spline (depending
+ on the function underlying the data and on the value of S), and
+ an iterative method for solving the constrained minimization
+ problem once the knots have been determined.
 or PX < 8,
+ Values of the computed spline can subsequently be computed by
+ calling E02DEF or E02DFF as described in Section 8.6.
 or PY < 8,
+ 4. References
 or NC /= (PX-4)*(PY-4),
+ [1] De Boor C (1972) On Calculating with B-splines. J. Approx.
+ Theory. 6 50-62.
 or NWS is too small,
+ [2] Dierckx P (1981) An Algorithm for Surface Fitting with
+ Spline Functions. IMA J. Num. Anal. 1 267-283.
 or NPOINT is too small.
+ [3] Dierckx P (1981) An Improved Algorithm for Curve Fitting
+ with Spline Functions. Report TW54. Department of Computer
+ Science, Katholieke Universiteit Leuven.
 IFAIL= 5
 All the weights w_r are zero or rank determined as zero.
+ [4] Hayes J G and Halliday J (1974) The Least-squares Fitting
+ of Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
+ Appl. 14 89-103.
 7. Accuracy
+ [5] Peters G and Wilkinson J H (1970) The Least-squares Problem
+ and Pseudo-inverses. Comput. J. 13 309-316.
 The computation of the B-splines and reduction of the observation
 matrix to triangular form are both numerically stable.
+ [6] Reinsch C H (1967) Smoothing by Spline Functions. Num. Math.
+ 10 177-183.
 8. Further Comments
+ 5. Parameters
 The time taken by this routine is approximately proportional to
 the number of data points, m, and to (3*(PY-4)+4)^2.
+ 1: START  CHARACTER*1 Input
+ On entry: START must be set to 'C' or 'W'.
 The B-spline representation of the bicubic spline is
+ If START = 'C' (Cold start), the routine will build up the
+ knot set starting with no interior knots. No values need be
+ assigned to the parameters NX, NY, LAMDA, MU or WRK.
 
 s(x,y) = sum_ij c_ij M_i(x) N_j(y)
+ If START = 'W' (Warm start), the routine will restart the
+ knot-placing strategy using the knots found in a previous
+ call of the routine. In this case, the parameters NX, NY,
+ LAMDA, MU and WRK must be unchanged from that previous call.
+ This warm start can save much time in searching for a
+ satisfactory value of S. Constraint: START = 'C' or 'W'.
 summed over i=1,2,...,PX-4 and over j=1,2,...,PY-4. Here M_i(x)
 and N_j(y) denote normalised cubic B-splines, the former defined
 on the knots (lambda)_i,(lambda)_{i+1},...,(lambda)_{i+4} and
 the latter on the knots (mu)_j,(mu)_{j+1},...,(mu)_{j+4}. For
 further details, see Hayes and Halliday [4] for bicubic splines
 and de Boor [2] for normalised B-splines.
+ 2: M  INTEGER Input
+ On entry: m, the number of data points.
 The choice of the interior knots, which help to determine the
 spline's shape, must largely be a matter of trial and error. It
 is usually best to start with a small number of knots and,
 examining the fit at each stage, add a few knots at a time at
 places where the fit is particularly poor. In intervals of x or y
 where the surface represented by the data changes rapidly, in
 function value or derivatives, more knots will be needed than
 elsewhere. In some cases guidance can be obtained by analogy with
 the case of coincident knots: for example, just as three
 coincident knots can produce a discontinuity in slope, three
 close knots can produce rapid change in slope. Of course, such
 rapid changes in behaviour must be adequately represented by the
 data points, as indeed must the behaviour of the surface
 generally, if a satisfactory fit is to be achieved. When there is
 no rapid change in behaviour, equally-spaced knots will often
 suffice.
+ The number of data points with non-zero weight (see W below)
+ must be at least 16.
 In all cases the fit should be examined graphically before it is
 accepted as satisfactory.
+ 3: X(M)  DOUBLE PRECISION array Input
 The fit obtained is not defined outside the rectangle
+ 4: Y(M)  DOUBLE PRECISION array Input
 (lambda)_4 <= x <= (lambda)_{PX-3}, (mu)_4 <= y <= (mu)_{PY-3}
+ 5: F(M)  DOUBLE PRECISION array Input
+ On entry: X(r), Y(r), F(r) must be set to the coordinates
+ of (x_r,y_r,f_r), the rth data point, for r=1,2,...,m. The
+ order of the data points is immaterial.
 The reason for taking the extreme data values of x and y for
 these four knots is that, as is usual in data fitting, the fit
 cannot be expected to give satisfactory values outside the data
 region. If, nevertheless, the user requires values over a larger
 rectangle, this can be achieved by augmenting the data with two
 artificial data points (a,c,0) and (b,d,0) with zero weight,
 where a<=x<=b, c<=y<=d defines the enlarged rectangle. In the
 case when the data are adequate to make the least-squares
 solution unique (RANK = NC), this enlargement will not affect the
 fit over the original rectangle, except for possibly enlarged
 rounding errors, and will simply continue the bicubic polynomials
 in the panels bordering the rectangle out to the new boundaries:
 in other cases the fit will be affected. Even using the original
 rectangle there may be regions within it, particularly at its
 corners, which lie outside the data region and where, therefore,
 the fit will be unreliable. For example, if there is no data
 point in panel 1 of the diagram in Section 5, the least-squares
 criterion leaves the spline indeterminate in this panel: the
 minimal spline determined by the subroutine in this case passes
 through the value zero at the point ((lambda)_4,(mu)_4).
+ 6: W(M)  DOUBLE PRECISION array Input
+ On entry: W(r) must be set to w_r, the rth value in the
+ set of weights, for r=1,2,...,m. Zero weights are permitted
+ and the corresponding points are ignored, except when
+ determining x_min, x_max, y_min and y_max (see Section 8.4).
+ For advice on the choice of weights, see Section 2.1.2 of
+ the Chapter Introduction. Constraint: the number of data
+ points with non-zero weight must be at least 16.
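The role of zero weights can be sketched as follows (illustrative Python; the helper name is invented here):

```python
def domain_and_weight_count(x, y, w):
    # x_min, x_max, y_min, y_max are taken over ALL data points,
    # including zero-weight ones; only the count of non-zero
    # weights must reach 16 for E02DDF.
    nonzero = sum(1 for wr in w if wr != 0.0)
    return (min(x), max(x), min(y), max(y)), nonzero

corners, nz = domain_and_weight_count(
    [0.0, 2.0, 1.0], [5.0, 3.0, 4.0], [1.0, 0.0, 2.0])
print(corners, nz)  # the zero-weight point still widens the domain
```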
 9. Example
+ 7: S  DOUBLE PRECISION Input
+ On entry: the smoothing factor, S.
 This example program reads a value for (epsilon), and a set of
 data points, weights and knot positions. If there are more y
 knots than x knots, it interchanges the x and y axes. It calls
 E02ZAF to sort the data points into panel order, E02DAF to fit a
 bicubic spline to them, and E02DEF to evaluate the spline at the
 data points.
+ For advice on the choice of S, see Section 3 and Section
+ 8.2. Constraint: S > 0.0.
 Finally it prints:
+ 8: NXEST  INTEGER Input
 the weighted sum of squares of residuals computed from the
 linear equations;
 the rank determined by E02DAF;
+ 9: NYEST  INTEGER Input
+ On entry: an upper bound for the number of knots n_x and n_y
+ required in the x- and y-directions respectively.
+ In most practical situations, NXEST = NYEST = 4+sqrt(m/2) is
+ sufficient. See also Section 8.3. Constraint: NXEST >= 8 and
+ NYEST >= 8.
 data points, fitted values and residuals in panel order;
+ 10: NX  INTEGER Input/Output
+ On entry: if the warm start option is used, the value of NX
+ must be left unchanged from the previous call. On exit: the
+ total number of knots, n_x, of the computed spline with
+ respect to the x variable.
 the weighted sum of squares of the residuals;
+ 11: LAMDA(NXEST)  DOUBLE PRECISION array Input/Output
+ On entry: if the warm start option is used, the values LAMDA
+ (1), LAMDA(2),...,LAMDA(NX) must be left unchanged from the
+ previous call. On exit: LAMDA contains the complete set of
+ knots (lambda) associated with the x variable, i.e., the
+ i
+ interior knots LAMDA(5), LAMDA(6),...,LAMDA(NX-4) as well as
+ the additional knots LAMDA(1) = LAMDA(2) = LAMDA(3) = LAMDA
+ (4) = x and LAMDA(NX-3) = LAMDA(NX-2) = LAMDA(NX-1) =
+ min
+ LAMDA(NX) = x needed for the B-spline representation
+ max
+ (where x and x are as described in Section 3).
+ min max
 the coefficients of the spline fit.
+ 12: NY  INTEGER Input/Output
+ On entry: if the warm start option is used, the value of NY
+ must be left unchanged from the previous call. On exit: the
+ total number of knots, n , of the computed spline with
+ y
+ respect to the y variable.
 The program is written to handle any number of data sets.
+ 13: MU(NYEST)  DOUBLE PRECISION array Input/Output
+ On entry: if the warm start option is used, the values MU(1),
+ MU(2),...,MU(NY) must be left unchanged from the previous
+ call. On exit: MU contains the complete set of knots (mu)
+ i
+ associated with the y variable, i.e., the interior knots MU
+ (5), MU(6),...,MU(NY-4) as well as the additional knots MU
+ (1) = MU(2) = MU(3) = MU(4) = y and MU(NY-3) = MU(NY-2) =
+ min
+ MU(NY-1) = MU(NY) = y needed for the B-spline
+ max
+ representation (where y and y are as described in
+ min max
+ Section 3).
 Note: the data supplied in this example is not typical of a
 realistic problem: the number of data points would normally be
 much larger (in which case the array dimensions and the value of
 NWS in the program would have to be increased); and the value of
 (epsilon) would normally be much smaller on most machines (see
 Section 5; the relatively large value of 10**(-6) has been chosen
 in order to illustrate a minimal least-squares solution when
 RANK < NC; in this example NC = 24).
+ 14: C((NXEST-4)*(NYEST-4))  DOUBLE PRECISION array Output
+ On exit: the coefficients of the spline approximation. C(
+ (n -4)*(i-1)+j) is the coefficient c defined in Section 3.
+ y ij
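The flattening of the coefficient grid into the one-dimensional array C can be illustrated outside Fortran; a minimal Python sketch (the helper name is ours, not part of the NAG interface):

```python
# Illustrative only: how the 1-based Fortran index C((ny-4)*(i-1)+j)
# lays the coefficient grid c_ij out in the flat array C.
# The name coeff_index is ours, not part of the NAG interface.

def coeff_index(i, j, ny):
    """1-based position in C of coefficient c_ij (i=1..nx-4, j=1..ny-4)."""
    return (ny - 4) * (i - 1) + j

# With ny = 12 there are ny-4 = 8 coefficients per row of the grid:
assert coeff_index(1, 1, 12) == 1   # first coefficient
assert coeff_index(1, 8, 12) == 8   # end of the first row
assert coeff_index(2, 1, 12) == 9   # start of the second row
```

The j (y) index varies fastest, matching the ordering used by E02DEF and E02DFF below.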
 The example program is not reproduced here. The source code for
 all example programs is distributed with the NAG Foundation
 Library software and should be available online.
+ 15: FP  DOUBLE PRECISION Output
+ On exit: the weighted sum of squared residuals, (theta), of
+ the computed spline approximation. FP should equal S within
+ a relative tolerance of 0.001 unless NX = NY = 8, when the
+ spline has no interior knots and so is simply a bicubic
+ polynomial. For knots to be inserted, S must be set to a
+ value below the value of FP produced in this case.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+ 16: RANK  INTEGER Output
+ On exit: RANK gives the rank of the system of equations used
+ to compute the final spline (as determined by a suitable
+ machine-dependent threshold). When RANK = (NX-4)*(NY-4), the
+ solution is unique; otherwise the system is rank-deficient
+ and the minimum-norm solution is computed. The latter case
+ may be caused by too small a value of S.
 E02  Curve and Surface Fitting E02DCF
 E02DCF  NAG Foundation Library Routine Document
+ 17: WRK(LWRK)  DOUBLE PRECISION array Workspace
+ On entry: if the warm start option is used, the value of WRK
+ (1) must be left unchanged from the previous call.
 Note: Before using this routine, please read the Users' Note for
 your implementation to check implementationdependent details.
 The symbol (*) after a NAG routine name denotes a routine that is
 not included in the Foundation Library.
+ This array is used as workspace.
 1. Purpose
+ 18: LWRK  INTEGER Input
+ On entry:
+ the dimension of the array WRK as declared in the
+ (sub)program from which E02DDF is called.
+ Constraint: LWRK >= (7*u*v+25*w)*(w+1)+2*(u+v+4*M)+23*w+56,
 E02DCF computes a bicubic spline approximation to a set of data
 values, given on a rectangular grid in the xy plane. The knots
 of the spline are located automatically, but a single parameter
 must be specified to control the tradeoff between closeness of
 fit and smoothness of fit.
+ where
 2. Specification
+ u = NXEST-4, v = NYEST-4, and w = max(u,v).
 SUBROUTINE E02DCF (START, MX, X, MY, Y, F, S, NXEST,
 1 NYEST, NX, LAMDA, NY, MU, C, FP, WRK,
 2 LWRK, IWRK, LIWRK, IFAIL)
 INTEGER MX, MY, NXEST, NYEST, NX, NY, LWRK, IWRK
 1 (LIWRK), LIWRK, IFAIL
 DOUBLE PRECISION X(MX), Y(MY), F(MX*MY), S, LAMDA(NXEST),
 1 MU(NYEST), C((NXEST-4)*(NYEST-4)), FP, WRK
 2 (LWRK)
 CHARACTER*1 START
+ For some problems, the routine may need to compute the
+ minimal least-squares solution of a rank-deficient system of
+ linear equations (see Section 3). The amount of workspace
+ required to solve such problems will be larger than
+ specified by the value given above, which must be increased
+ by an amount, LWRK2 say. An upper bound for LWRK2 is given
+ by 4*u*v*w+2*u*v+4*w, where u, v and w are as above.
+ However, if there are enough data points, scattered
+ uniformly over the approximation domain, and if the
+ smoothing factor S is not too small, there is a good chance
+ that this extra workspace is not needed. A lot of memory
+ might therefore be saved by assuming LWRK2 = 0.
 3. Description
+ 19: IWRK(LIWRK)  INTEGER array Workspace
 This routine determines a smooth bicubic spline approximation
 s(x,y) to the set of data points (x ,y ,f ), for q=1,2,...,m
 q r q,r x
 and r=1,2,...,m .
 y
 The spline is given in the B-spline representation
+ 20: LIWRK  INTEGER Input
+ On entry:
+ the dimension of the array IWRK as declared in the
+ (sub)program from which E02DDF is called.
+ Constraint: LIWRK >= M+2*(NXEST-7)*(NYEST-7).
          n -4 n -4
           x    y
          --   --
  s(x,y)= >    >   c  M (x)N (y),                            (1)
          --   --   ij i    j
          i=1  j=1
+ 21: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, 1 or -1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 where M (x) and N (y) denote normalised cubic Bsplines, the
 i j
 former defined on the knots (lambda) to (lambda) and the
 i i+4
 latter on the knots (mu) to (mu) . For further details, see
 j j+4
 Hayes and Halliday [4] for bicubic splines and de Boor [1] for
 normalised B-splines.
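The representation (1) can be sketched outside the library. A minimal Python illustration (all names are ours, not NAG code) evaluates the tensor-product sum using normalised cubic B-splines computed by the Cox-de Boor recurrence:

```python
# A minimal sketch (not the NAG implementation) of representation (1):
# s(x,y) = sum_ij c_ij M_i(x) N_j(y), with normalised cubic B-splines
# evaluated by the Cox-de Boor recurrence. All names are illustrative.

def bspline(t, i, k, x):
    """Value at x of the normalised B-spline of order k (degree k-1)
    defined on knots t[i..i+k] (0-based)."""
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    v = 0.0
    if t[i + k - 1] > t[i]:
        v += (x - t[i]) / (t[i + k - 1] - t[i]) * bspline(t, i, k - 1, x)
    if t[i + k] > t[i + 1]:
        v += (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline(t, i + 1, k - 1, x)
    return v

def tensor_spline(lamda, mu, c, x, y):
    """Evaluate s(x,y); c[i][j] holds c_ij with 0-based i, j."""
    nx, ny = len(lamda), len(mu)
    return sum(c[i][j] * bspline(lamda, i, 4, x) * bspline(mu, j, 4, y)
               for i in range(nx - 4) for j in range(ny - 4))

# With all coefficients equal to 1, s is identically 1 inside the knot
# range (partition of unity of the normalised B-splines):
lamda = [0.0] * 4 + [1.0] * 4        # no interior knots, x in [0,1]
mu = [0.0] * 4 + [1.0] * 4
c = [[1.0] * 4 for _ in range(4)]
assert abs(tensor_spline(lamda, mu, c, 0.3, 0.7) - 1.0) < 1e-12
```

The quadruple end knots mirror the additional knots LAMDA(1)=...=LAMDA(4) and LAMDA(NX-3)=...=LAMDA(NX) described under the parameters below.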
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
 The total numbers n and n of these knots and their values
 x y
 (lambda) ,...,(lambda) and (mu) ,...,(mu) are chosen
 1 n 1 n
 x y
 automatically by the routine. The knots (lambda) ,...,
 5
 (lambda) and (mu) ,...,(mu) are the interior knots; they
 n -4 5 n -4
 x y
 divide the approximation domain [x ,x ]*[y ,y ] into (
 1 m 1 m
 m m
 n -7)*(n -7) subpanels [(lambda) ,(lambda) ]*[(mu) ,(mu) ],
 x y i i+1 j j+1
 for i=4,5,...,n -4, j=4,5,...,n -4. Then, much as in the curve
 x y
 case (see E02BEF), the coefficients c are determined as the
 ij
 solution of the following constrained minimization problem:
+ 6. Error Indicators and Warnings
 minimize
+ Errors detected by the routine:
 (eta), (2)
+ If on entry IFAIL = 0 or -1, explanatory error messages are
+ output on the current error message unit (as defined by X04AAF).
 subject to the constraint
+ IFAIL= 1
+ On entry START /= 'C' or 'W',
 m m
 x y
   2
 (theta)= > > (epsilon) <=S, (3)
   q,r
 q=1 r=1
+ or the number of data points with nonzero weight <
+ 16,
 where (eta) is a measure of the (lack of) smoothness of s(x,y).
 Its value depends on the discontinuity jumps in
 s(x,y) across the boundaries of the subpanels. It is
 zero only when there are no discontinuities and is
 positive otherwise, increasing with the size of the
 jumps (see Dierckx [2] for details).
+ or S <= 0.0,
 (epsilon) denotes the residual f s(x ,y ),
 q,r q,r q r
+ or NXEST < 8,
 and S is a nonnegative number to be specified by the user.
+ or NYEST < 8,
 By means of the parameter S, 'the smoothing factor', the user
 will then control the balance between smoothness and closeness of
 fit, as measured by the sum of squares of residuals in (3). If S
 is too large, the spline will be too smooth and signal will be
 lost (underfit); if S is too small, the spline will pick up too
 much noise (overfit). In the extreme cases the routine will
 return an interpolating spline ((theta)=0) if S is set to zero,
 and the least-squares bicubic polynomial ((eta)=0) if S is set
 very large. Experimenting with Svalues between these two
 extremes should result in a good compromise. (See Section 8.3 for
 advice on choice of S.)
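The role of S can be imitated with a toy one-dimensional smoother (our construction, not the NAG algorithm): penalising squared second differences with a weight p traces the same path from interpolation (p -> 0, so theta -> 0) to the smoothest least-squares fit (p -> infinity, so eta -> 0):

```python
# Toy 1-D analogue (ours, not the NAG algorithm) of the S trade-off:
# minimise ||f - y||^2 + p * ||D2 f||^2, where D2 f is the vector of
# second differences of f. Small p reproduces the data (interpolation,
# theta -> 0); large p forces the second differences toward zero
# (the smoothest, straight-line fit, eta -> 0).

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def smooth(y, p):
    """Return f minimising ||f - y||^2 + p * ||D2 f||^2."""
    n = len(y)
    D2 = [[0.0] * n for _ in range(n - 2)]
    for i in range(n - 2):
        D2[i][i], D2[i][i + 1], D2[i][i + 2] = 1.0, -2.0, 1.0
    A = [[(1.0 if i == j else 0.0) +
          p * sum(D2[r][i] * D2[r][j] for r in range(n - 2))
          for j in range(n)] for i in range(n)]
    return solve(A, y)

def theta(y, p):
    """Sum of squared residuals of the smoothed fit."""
    f = smooth(y, p)
    return sum((fi - yi) ** 2 for fi, yi in zip(f, y))

y = [0.0, 1.2, 1.8, 3.1, 4.0]
# The residual grows with the smoothing: small p is near interpolation.
assert theta(y, 1e-8) < theta(y, 10.0)
```

Choosing p so that theta equals a target S is the 1-D shadow of the constrained problem (2)-(3).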
+ or LWRK < (7*u*v+25*w)*(w+1)+2*(u+v+4*M)+23*w+56,
+ where u = NXEST-4, v = NYEST-4 and w = max(u,v),
 The method employed is outlined in Section 8.5 and fully
 described in Dierckx [2] and [3]. It involves an adaptive
 strategy for locating the knots of the bicubic spline (depending
 on the function underlying the data and on the value of S), and
 an iterative method for solving the constrained minimization
 problem once the knots have been determined.
+ or LIWRK < M+2*(NXEST-7)*(NYEST-7).
+
+ IFAIL= 3
+ The number of knots required is greater than allowed by NXEST
+ and NYEST. Try increasing NXEST and/or NYEST and, if necessary,
+ supplying larger arrays for the parameters LAMDA, MU, C, WRK
+ and IWRK. However, if NXEST and NYEST are already large, say
+ NXEST > 4+\/(M/2) and NYEST > 4+\/(M/2), then this error exit
+ may indicate that S is too small.
 [2] Dierckx P (1982) A Fast Algorithm for Smoothing Data on a
 Rectangular Grid while using Spline Functions. SIAM J.
 Numer. Anal. 19 1286-1304.
+ IFAIL= 4
+ No more knots can be added because the number of B-spline
+ coefficients (NX-4)*(NY-4) already exceeds the number of
+ data points M. This error exit may occur if either of S or M
+ is too small.
 [3] Dierckx P (1981) An Improved Algorithm for Curve Fitting
 with Spline Functions. Report TW54. Department of Computer
 Science, Katholieke Universiteit Leuven.
+ IFAIL= 5
+ No more knots can be added because the additional knot would
+ (quasi) coincide with an old one. This error exit may occur
+ if too large a weight has been given to an inaccurate data
+ point, or if S is too small.
 [4] Hayes J G and Halliday J (1974) The Least-squares Fitting of
 Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
 Appl. 14 89-103.
+ IFAIL= 6
+ The iterative process used to compute the coefficients of
+ the approximating spline has failed to converge. This error
+ exit may occur if S has been set very small. If the error
+ persists with increased S, consult NAG.
 [5] Reinsch C H (1967) Smoothing by Spline Functions. Num. Math.
 10 177-183.
+ IFAIL= 7
+ LWRK is too small; the routine needs to compute the minimal
+ least-squares solution of a rank-deficient system of linear
+ equations, but there is not enough workspace. There is no
+ approximation returned but, having saved the information
+ contained in NX, LAMDA, NY, MU and WRK, and having adjusted
+ the value of LWRK and the dimension of array WRK
+ accordingly, the user can continue at the point the program
+ was left by calling E02DDF with START = 'W'. Note that the
+ requested value for LWRK is only large enough for the
+ current phase of the algorithm. If the routine is restarted
+ with LWRK set to the minimum value requested, a larger
+ request may be made at a later stage of the computation. See
+ Section 5 for the upper bound on LWRK. On soft failure, the
+ minimum requested value for LWRK is returned in IWRK(1) and
+ the safe value for LWRK is returned in IWRK(2).
 5. Parameters
+ If IFAIL = 3,4,5 or 6, a spline approximation is returned, but it
+ fails to satisfy the fitting criterion (see (2) and (3) in
+ Section 3) - perhaps only by a small amount, however.
 1: START  CHARACTER*1 Input
 On entry: START must be set to 'C' or 'W'.
+ 7. Accuracy
 If START = 'C' (Cold start), the routine will build up the
 knot set starting with no interior knots. No values need be
 assigned to the parameters NX, NY, LAMDA, MU, WRK or IWRK.
+ On successful exit, the approximation returned is such that its
+ weighted sum of squared residuals FP is equal to the smoothing
+ factor S, up to a specified relative tolerance of 0.001 - except
+ that if n =8 and n =8, FP may be significantly less than S: in
+ x y
+ this case the computed spline is simply the least-squares bicubic
+ polynomial approximation of degree 3, i.e., a spline with no
+ interior knots.
 If START = 'W' (Warm start), the routine will restart the
 knotplacing strategy using the knots found in a previous
 call of the routine. In this case, the parameters NX, NY,
 LAMDA, MU, WRK and IWRK must be unchanged from that previous
 call. This warm start can save much time in searching for a
 satisfactory value of S. Constraint: START = 'C' or 'W'.
+ 8. Further Comments
 2: MX  INTEGER Input
 On entry: m , the number of grid points along the x axis.
 x
 Constraint: MX >= 4.
+ 8.1. Timing
 3: X(MX)  DOUBLE PRECISION array Input
 On entry: X(q) must be set to x , the x coordinate of the
 q
 qth grid point along the x axis, for q=1,2,...,m .
 x
 Constraint: X(1) < X(2) < ... < X(MX).

 4: MY  INTEGER Input
 On entry: m , the number of grid points along the y axis.
 y
 Constraint: MY >= 4.
+ 8.2. Choice of S
 5: Y(MY)  DOUBLE PRECISION array Input
 On entry: Y(r) must be set to y , the y coordinate of the
 r
 rth grid point along the y axis, for r=1,2,...,m .
 y
 Constraint: Y(1) < Y(2) < ... < Y(MY).

 6: F(MX*MY)  DOUBLE PRECISION array Input
 On entry: F(MY*(q-1)+r) must contain the data value f , for
 q,r
 q=1,2,...,m ; r=1,2,...,m .
 x y

 7: S  DOUBLE PRECISION Input
 On entry: the smoothing factor, S. For advice on the choice
 of S, see Section 3 and Section 8.3. Constraint: S >= 0.0.
+ 8.3. Choice of NXEST and NYEST
+ The number of knots may also depend on the upper bounds NXEST and
+ NYEST. Indeed, if at a certain stage in E02DDF the number of
+ knots in one direction (say n ) has reached the value of its
+ x
+ upper bound (NXEST), then from that moment on all subsequent
+ knots are added in the other (y) direction. This may indicate
+ that the value of NXEST is too small. On the other hand, it gives
+ the user the option of limiting the number of knots the routine
+ locates in any direction. For example, by setting NXEST = 8 (the
+ lowest allowable value for NXEST), the user can indicate that he
+ wants an approximation which is a simple cubic polynomial in the
+ variable x.
 8: NXEST  INTEGER Input
+ 8.4. Restriction of the approximation domain
 9: NYEST  INTEGER Input
 On entry: an upper bound for the number of knots n and n
 x y
 required in the x and ydirections respectively.
+ The fit obtained is not defined outside the rectangle
+ [(lambda) ,(lambda) ]*[(mu) ,(mu) ]. The reason for taking
+ 4 n 3 4 n 3
+ x y
+ the extreme data values of x and y for these four knots is that,
+ as is usual in data fitting, the fit cannot be expected to give
+ satisfactory values outside the data region. If, nevertheless,
+ the user requires values over a larger rectangle, this can be
+ achieved by augmenting the data with two artificial data points
+ (a,c,0) and (b,d,0) with zero weight, where [a,b]*[c,d] denotes
+ the enlarged rectangle.
 In most practical situations, NXEST = m /2 and NYEST = m /2 is
 x y
 sufficient. NXEST and NYEST never need to be larger than
 m +4 and m +4 respectively, the numbers of knots needed for
 x y
 interpolation (S=0.0). See also Section 8.4. Constraint:
 NXEST >= 8 and NYEST >= 8.
+ 8.5. Outline of method used
 10: NX  INTEGER Input/Output
 On entry: if the warm start option is used, the value of NX
 must be left unchanged from the previous call. On exit: the
 total number of knots, n , of the computed spline with
 x
 respect to the x variable.
+ First suitable knot sets are built up in stages (starting with no
+ interior knots in the case of a cold start but with the knot set
+ found in a previous call if a warm start is chosen). At each
+ stage, a bicubic spline is fitted to the data by least-squares
+ and (theta), the sum of squares of residuals, is computed. If
+ (theta)>S, a new knot is added to one knot set or the other so as
+ to reduce (theta) at the next stage. The new knot is located in
+ an interval where the fit is particularly poor. Sooner or later,
+ we find that (theta)<=S and at that point the knot sets are
+ accepted. The routine then goes on to compute a spline which has
+ these knot sets and which satisfies the full fitting criterion
+ specified by (2) and (3). The theoretical solution has (theta)=S.
+ The routine computes the spline by an iterative scheme which is
+ ended when (theta)=S within a relative tolerance of 0.001. The
+ main part of each iteration consists of a linear least-squares
+ computation of special form, done in a similarly stable and
+ efficient manner as in E02DAF. As there also, the minimal
+ least-squares solution is computed wherever the linear system
+ is found to be rank-deficient.
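The adaptive stage of the strategy can be sketched as control flow only (the misfit computation is a stub and every name is ours; the real routine fits a bicubic spline by least-squares at each stage):

```python
# Schematic of the adaptive knot-placement loop described above.
# Control flow only: theta_per_interval is a stub standing in for the
# least-squares misfit, and all names are illustrative, not NAG code.

def theta_per_interval(knots):
    """Stub: pretend the misfit in each interval grows with its width."""
    return [(b - a) ** 3 for a, b in zip(knots, knots[1:])]

def place_knots(S, lo=0.0, hi=1.0):
    """Insert a knot at the midpoint of the currently worst interval
    until the total (stubbed) misfit theta drops to S or below."""
    knots = [lo, hi]                       # cold start: no interior knots
    while sum(theta_per_interval(knots)) > S:
        worst = max(range(len(knots) - 1),
                    key=lambda i: knots[i + 1] - knots[i])
        knots.insert(worst + 1, (knots[worst] + knots[worst + 1]) / 2)
    return knots

knots = place_knots(S=0.05)
assert sum(theta_per_interval(knots)) <= 0.05
```

The real routine additionally solves the constrained problem (2)-(3) on the accepted knot sets, iterating until theta = S within a relative tolerance of 0.001.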
 11: LAMDA(NXEST)  DOUBLE PRECISION array Input/Output
 On entry: if the warm start option is used, the values
 LAMDA(1), LAMDA(2),...,LAMDA(NX) must be left unchanged from
 the previous call. On exit: LAMDA contains the complete set
 of knots (lambda) associated with the x variable, i.e., the
 i
 interior knots LAMDA(5), LAMDA(6), ..., LAMDA(NX-4) as well
 as the additional knots LAMDA(1) = LAMDA(2) = LAMDA(3) =
 LAMDA(4) = X(1) and LAMDA(NX-3) = LAMDA(NX-2) = LAMDA(NX-1)
 = LAMDA(NX) = X(MX) needed for the B-spline representation.
+ An exception occurs when the routine finds at the start that,
+ even with no interior knots (NX = NY = 8), the least-squares
+ spline already has its sum of squares of residuals <=S. In this case,
+ since this spline (which is simply a bicubic polynomial) also has
+ an optimal value for the smoothness measure (eta), namely zero,
+ it is returned at once as the (trivial) solution. It will usually
+ mean that S has been chosen too large.
 12: NY  INTEGER Input/Output
 On entry: if the warm start option is used, the value of NY
 must be left unchanged from the previous call. On exit: the
 total number of knots, n , of the computed spline with
 y
 respect to the y variable.
+ For further details of the algorithm and its use see Dierckx [2].
 13: MU(NYEST)  DOUBLE PRECISION array Input/Output
 On entry: if the warm start option is used, the values MU
 (1), MU(2),...,MU(NY) must be left unchanged from the
 previous call. On exit: MU contains the complete set of
 knots (mu) associated with the y variable, i.e., the
 i
 interior knots MU(5), MU(6),...,MU(NY-4) as well as the
 additional knots MU(1) = MU(2) = MU(3) = MU(4) = Y(1) and MU
 (NY-3) = MU(NY-2) = MU(NY-1) = MU(NY) = Y(MY) needed for the
 B-spline representation.
+ 8.6. Evaluation of computed spline
 14: C((NXEST-4)*(NYEST-4))  DOUBLE PRECISION array Output
 On exit: the coefficients of the spline approximation. C(
 (n -4)*(i-1)+j) is the coefficient c defined in Section 3.
 y ij
+ The values of the computed spline at the points (TX(r),TY(r)),
+ for r = 1,2,...,N, may be obtained in the double precision array
+ FF, of length at least N, by the following code:
 15: FP  DOUBLE PRECISION Output
 On exit: the sum of squared residuals, (theta), of the
 computed spline approximation. If FP = 0.0, this is an
 interpolating spline. FP should equal S within a relative
 tolerance of 0.001 unless NX = NY = 8, when the spline has
 no interior knots and so is simply a bicubic polynomial. For
 knots to be inserted, S must be set to a value below the
 value of FP produced in this case.
 16: WRK(LWRK)  DOUBLE PRECISION array Workspace
 On entry: if the warm start option is used, the values WRK
 (1),...,WRK(4) must be left unchanged from the previous
 call.
+ IFAIL = 0
+ CALL E02DEF(N,NX,NY,TX,TY,LAMDA,MU,C,FF,WRK,IWRK,IFAIL)
 This array is used as workspace.
 17: LWRK  INTEGER Input
 On entry:
 the dimension of the array WRK as declared in the
 (sub)program from which E02DCF is called.
 Constraint:
 LWRK>=4*(MX+MY)+11*(NXEST+NYEST)+NXEST*MY
+ where NX, NY, LAMDA, MU and C are the output parameters of
+ E02DDF, WRK is a double precision workspace array of length at
+ least NY-4, and IWRK is an integer workspace array of length at
+ least NY-4.
 +max(MY,NXEST)+54.
+ To evaluate the computed spline on a KX by KY rectangular grid of
+ points in the xy plane, which is defined by the x coordinates
+ stored in TX(q), for q=1,2,...,KX, and the y coordinates stored
+ in TY(r), for r=1,2,...,KY, returning the results in the double
+ precision array FG which is of length at least KX*KY, the
+ following call may be used:
 18: IWRK(LIWRK)  INTEGER array Workspace
 On entry: if the warm start option is used, the values IWRK
 (1), ..., IWRK(3) must be left unchanged from the previous
 call.
 This array is used as workspace.
+ IFAIL = 0
+ CALL E02DFF(KX,KY,NX,NY,TX,TY,LAMDA,MU,C,FG,WRK,LWRK,
+ * IWRK,LIWRK,IFAIL)
 19: LIWRK  INTEGER Input
 On entry:
 the dimension of the array IWRK as declared in the
 (sub)program from which E02DCF is called.
 Constraint: LIWRK >= 3 + MX + MY + NXEST + NYEST.
 20: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, 1 or -1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+ where NX, NY, LAMDA, MU and C are the output parameters of
+ E02DDF, WRK is a double precision workspace array of length at
+ least LWRK = min(NWRK1,NWRK2), NWRK1 = KX*4+NX, NWRK2 = KY*4+NY,
+ and IWRK is an integer workspace array of length at least
+ LIWRK = KY+NY-4 if NWRK1 >= NWRK2, or KX+NX-4 otherwise. The
+ result of the spline evaluated at grid point (q,r) is returned
+ in element (KY*(q-1)+r) of the array FG.
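The grid ordering stated above can be checked with a short sketch (the function name is ours):

```python
# Illustration (ours, not NAG code) of the grid ordering used by
# E02DFF: the value at grid point (q, r) lands in element
# KY*(q-1)+r of FG (1-based Fortran indexing).

def fg_index(q, r, ky):
    """1-based position in FG of the value at (TX(q), TY(r))."""
    return ky * (q - 1) + r

KX, KY = 3, 2
# FG holds the KX*KY values with the y index r varying fastest:
order = [fg_index(q, r, KY) for q in range(1, KX + 1)
                            for r in range(1, KY + 1)]
assert order == [1, 2, 3, 4, 5, 6]
```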
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+ 9. Example
 6. Error Indicators and Warnings
+ This example program reads in a value of M, followed by a set of
+ M data points (x ,y ,f ) and their weights w . It then calls
+ r r r r
+ E02DDF to compute a bicubic spline approximation for one
+ specified value of S, and prints the values of the computed knots
+ and B-spline coefficients. Finally it evaluates the spline at a
+ small sample of points on a rectangular grid.
 Errors detected by the routine:
+ The example program is not reproduced here. The source code for
+ all example programs is distributed with the NAG Foundation
+ Library software and should be available online.
 If on entry IFAIL = 0 or -1, explanatory error messages are
 output on the current error message unit (as defined by X04AAF).
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 IFAIL= 1
 On entry START /= 'C' or 'W',
+ E02  Curve and Surface Fitting E02DEF
+ E02DEF  NAG Foundation Library Routine Document
 or MX < 4,
+ Note: Before using this routine, please read the Users' Note for
+ your implementation to check implementationdependent details.
+ The symbol (*) after a NAG routine name denotes a routine that is
+ not included in the Foundation Library.
 or MY < 4,
+ 1. Purpose
 or S < 0.0,
+ E02DEF calculates values of a bicubic spline from its B-spline
+ representation.
 or S = 0.0 and NXEST < MX + 4,
+ 2. Specification
 or S = 0.0 and NYEST < MY + 4,
+ SUBROUTINE E02DEF (M, PX, PY, X, Y, LAMDA, MU, C, FF, WRK,
+ 1 IWRK, IFAIL)
+ INTEGER M, PX, PY, IWRK(PY-4), IFAIL
+ DOUBLE PRECISION X(M), Y(M), LAMDA(PX), MU(PY), C((PX-4)*
+ 1 (PY-4)), FF(M), WRK(PY-4)
 or NXEST < 8,
+ 3. Description
 or NYEST < 8,
+ This routine calculates values of the bicubic spline s(x,y) at
+ prescribed points (x ,y ), for r=1,2,...,m, from its augmented
+ r r
+ knot sets {(lambda)} and {(mu)} and from the coefficients c ,
+ ij
+ for i=1,2,...,PX-4; j=1,2,...,PY-4, in its B-spline
+ representation
 or LWRK < 4*(MX+MY)+11*(NXEST+NYEST)+NXEST*MY+
 +max(MY,NXEST)+54
+          --
+ s(x,y) = >   c  M (x)N (y).
+          --   ij i    j
+          i,j
 or LIWRK < 3 + MX + MY + NXEST + NYEST.
+ Here M (x) and N (y) denote normalised cubic B-splines, the
+ i j
+ former defined on the knots (lambda) to (lambda) and the
+ i i+4
+ latter on the knots (mu) to (mu) .
+ j j+4
 IFAIL= 2
 The values of X(q), for q = 1,2,...,MX, are not in strictly
 increasing order.
+ This routine may be used to calculate values of a bicubic spline
+ given in the form produced by E01DAF, E02DAF, E02DCF and E02DDF.
+ It is derived from the routine B2VRE in Anthony et al [1].
 IFAIL= 3
 The values of Y(r), for r = 1,2,...,MY, are not in strictly
 increasing order.
+ 4. References
 IFAIL= 4
 The number of knots required is greater than allowed by
 NXEST and NYEST. Try increasing NXEST and/or NYEST and, if
 necessary, supplying larger arrays for the parameters LAMDA,
 MU, C, WRK and IWRK. However, if NXEST and NYEST are already
 large, say NXEST > MX/2 and NYEST > MY/2, then this error
 exit may indicate that S is too small.
+ [1] Anthony G T, Cox M G and Hayes J G (1982) DASL - Data
+ Approximation Subroutine Library. National Physical
+ Laboratory.
 IFAIL= 5
 The iterative process used to compute the coefficients of
 the approximating spline has failed to converge. This error
 exit may occur if S has been set very small. If the error
 persists with increased S, consult NAG.
+ [2] Cox M G (1978) The Numerical Evaluation of a Spline from its
+ B-spline Representation. J. Inst. Math. Appl. 21 135-143.
 If IFAIL = 4 or 5, a spline approximation is returned, but it
 fails to satisfy the fitting criterion (see (2) and (3) in
 Section 3) - perhaps by only a small amount, however.
+ 5. Parameters
 7. Accuracy
+ 1: M  INTEGER Input
+ On entry: m, the number of points at which values of the
+ spline are required. Constraint: M >= 1.
 On successful exit, the approximation returned is such that its
 sum of squared residuals FP is equal to the smoothing factor S,
 up to a specified relative tolerance of 0.001 - except that if
 n =8 and n =8, FP may be significantly less than S: in this case
 x y
 the computed spline is simply the least-squares bicubic
 polynomial approximation of degree 3, i.e., a spline with no
 interior knots.
+ 2: PX  INTEGER Input
 8. Further Comments
+ 3: PY  INTEGER Input
+ On entry: PX and PY must specify the total number of knots
+ associated with the variables x and y respectively. They are
+ such that PX-8 and PY-8 are the corresponding numbers of
+ interior knots. Constraint: PX >= 8 and PY >= 8.
 8.1. Timing
+ 4: X(M)  DOUBLE PRECISION array Input
 The time taken for a call of E02DCF depends on the complexity of
 the shape of the data, the value of the smoothing factor S, and
 the number of data points. If E02DCF is to be called for
 different values of S, much time can be saved by setting START =
 'W' after the first call.
+ 5: Y(M)  DOUBLE PRECISION array Input
+ On entry: X and Y must contain x and y , for r=1,2,...,m,
+ r r
+ respectively. These are the coordinates of the points at
+ which values of the spline are required. The order of the
+ points is immaterial. Constraint: X and Y must satisfy
 8.2. Weighting of Data Points
+ LAMDA(4) <= X(r) <= LAMDA(PX-3)
 E02DCF does not allow individual weighting of the data values. If
 these were determined to widely differing accuracies, it may be
 better to use E02DDF. The computation time would be very much
 longer, however.
+ and
 8.3. Choice of S
+ MU(4) <= Y(r) <= MU(PY-3), for r=1,2,...,m.
 If the standard deviation of f is the same for all q and r
 q,r
 (the case for which this routine is designed  see Section 8.2.)
 and known to be equal, at least approximately, to (sigma), say,
 then following Reinsch [5] and choosing the smoothing factor S in
 2
 the range (sigma) (m +- \/(2m)), where m=m m , is likely to give a
 x y
 good start in the search for a satisfactory value. If the
 standard deviations vary, the sum of their squares over all the
 data points could be used. Otherwise experimenting with different
 values of S will be required from the start, taking account of
 the remarks in Section 3.
+ The spline representation is not valid outside these
+ intervals.
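The constraint can be mirrored by a small validity check (our sketch, using 0-based Python indexing for the 1-based Fortran arrays):

```python
# Hedged sketch (ours, not NAG code) of the domain check implied by the
# constraint above: every evaluation point must lie in the rectangle
# [LAMDA(4), LAMDA(PX-3)] x [MU(4), MU(PY-3)] (1-based Fortran indices).

def points_in_domain(xs, ys, lamda, mu):
    """True iff every (x, y) pair satisfies the E02DEF domain constraint."""
    xlo, xhi = lamda[3], lamda[-4]   # LAMDA(4), LAMDA(PX-3), 0-based here
    ylo, yhi = mu[3], mu[-4]         # MU(4), MU(PY-3)
    return all(xlo <= x <= xhi and ylo <= y <= yhi
               for x, y in zip(xs, ys))

lamda = [0.0] * 4 + [0.5] + [1.0] * 4    # PX = 9, one interior knot
mu = [0.0] * 4 + [1.0] * 4               # PY = 8, no interior knots
assert points_in_domain([0.2, 0.9], [0.1, 0.8], lamda, mu)
assert not points_in_domain([1.2], [0.5], lamda, mu)  # x out of range
```

A point failing this check is what triggers the IFAIL = 3 exit described in Section 6.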
 In that case, in view of computation time and memory
 requirements, it is recommended to start with a very large value
 for S and so determine the leastsquares bicubic polynomial; the
 value returned for FP, call it FP , gives an upper bound for S.
 0
 Then progressively decrease the value of S to obtain closer fits
  say by a factor of 10 in the beginning, i.e., S=FP /10,
 0
 S=FP /100, and so on, and more carefully as the approximation
 0
 shows more details.
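The two strategies above amount to simple arithmetic, sketched here (function names ours):

```python
# Arithmetic sketch of the guidance above (names ours, not NAG code):
# the Reinsch starting bracket for S when the noise level sigma is
# known, and the progressive schedule S = FP0/10, FP0/100, ... when a
# very large initial S has produced the upper bound FP0.

from math import sqrt

def reinsch_range(sigma, mx, my):
    """Suggested bracket for S when all f_qr have standard deviation
    sigma; m = mx*my is the total number of data points."""
    m = mx * my
    return (sigma ** 2 * (m - sqrt(2 * m)),
            sigma ** 2 * (m + sqrt(2 * m)))

def s_schedule(fp0, steps=3):
    """Progressively tighter smoothing factors below the bound FP0."""
    return [fp0 / 10 ** k for k in range(1, steps + 1)]

lo, hi = reinsch_range(sigma=0.1, mx=20, my=15)
assert 0 < lo < hi
assert s_schedule(1000.0) == [100.0, 10.0, 1.0]
```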
+ 6: LAMDA(PX)  DOUBLE PRECISION array Input
 The number of knots of the spline returned, and their location,
 generally depend on the value of S and on the behaviour of the
 function underlying the data. However, if E02DCF is called with
 START = 'W', the knots returned may also depend on the smoothing
 factors of the previous calls. Therefore if, after a number of
 trials with different values of S and START = 'W', a fit can
 finally be accepted as satisfactory, it may be worthwhile to call
 E02DCF once more with the selected value for S but now using
 START = 'C'. Often, E02DCF then returns an approximation with the
 same quality of fit but with fewer knots, which is therefore
 better if data reduction is also important.
+ 7: MU(PY)  DOUBLE PRECISION array Input
+ On entry: LAMDA and MU must contain the complete sets of
+ knots {(lambda)} and {(mu)} associated with the x and y
+ variables respectively. Constraint: the knots in each set
+ must be in non-decreasing order, with LAMDA(PX-3) > LAMDA(4)
+ and MU(PY-3) > MU(4).
 8.4. Choice of NXEST and NYEST
+ 8: C((PX-4)*(PY-4))  DOUBLE PRECISION array Input
+ On entry: C((PY-4)*(i-1)+j) must contain the coefficient
+ c described in Section 3, for i=1,2,...,PX-4;
+ ij
+ j=1,2,...,PY-4.
 The number of knots may also depend on the upper bounds NXEST and
 NYEST. Indeed, if at a certain stage in E02DCF the number of
 knots in one direction (say n ) has reached the value of its
 x
 upper bound (NXEST), then from that moment on all subsequent
 knots are added in the other (y) direction. Therefore the user
 has the option of limiting the number of knots the routine
 locates in any direction. For example, by setting NXEST = 8 (the
 lowest allowable value for NXEST), the user can indicate that he
 wants an approximation which is a simple cubic polynomial in the
 variable x.
+ 9: FF(M)  DOUBLE PRECISION array Output
+ On exit: FF(r) contains the value of the spline at the
+ point (x ,y ), for r=1,2,...,m.
+ r r
+
+ 10: WRK(PY-4)  DOUBLE PRECISION array Workspace
+
+ 11: IWRK(PY-4)  INTEGER array Workspace
+
+ 12: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, 1 or -1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 8.5. Outline of Method Used
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
 If S=0, the requisite number of knots is known in advance, i.e.,
 n =m +4 and n =m +4; the interior knots are located immediately
 x x y y
 as (lambda) = x and (mu) = y , for i=5,6,...,n -4 and
 i i-2 j j-2 x
 j=5,6,...,n -4. The corresponding least-squares spline is then an
 y
 interpolating spline and therefore a solution of the problem.
+ 6. Error Indicators and Warnings
 If S>0, suitable knot sets are built up in stages (starting with
 no interior knots in the case of a cold start but with the knot
 set found in a previous call if a warm start is chosen). At each
 stage, a bicubic spline is fitted to the data by least-squares,
 and (theta), the sum of squares of residuals, is computed. If
 (theta)>S, new knots are added to one knot set or the other so as
 to reduce (theta) at the next stage. The new knots are located in
 intervals where the fit is particularly poor, their number
 depending on the value of S and on the progress made so far in
 reducing (theta). Sooner or later, we find that (theta)<=S and at
 that point the knot sets are accepted. The routine then goes on
 to compute the (unique) spline which has these knot sets and
 which satisfies the full fitting criterion specified by (2) and
 (3). The theoretical solution has (theta)=S. The routine computes
 the spline by an iterative scheme which is ended when (theta)=S
 within a relative tolerance of 0.001. The main part of each
 iteration consists of a linear least-squares computation of
 special form, done in a similarly stable and efficient manner as
 in E02BAF for least-squares curve fitting.
+ Errors detected by the routine:
 An exception occurs when the routine finds at the start that,
 even with no interior knots (n =n =8), the leastsquares spline
 x y
 already has its sum of squares of residuals <=S. In this case, since this
 spline (which is simply a bicubic polynomial) also has an optimal
 value for the smoothness measure (eta), namely zero, it is
 returned at once as the (trivial) solution. It will usually mean
 that S has been chosen too large.
+ If on entry IFAIL = 0 or -1, explanatory error messages are
+ output on the current error message unit (as defined by X04AAF).
 For further details of the algorithm and its use see Dierckx [2].
+ IFAIL= 1
+ On entry M < 1,
 8.6. Evaluation of Computed Spline
+ or PY < 8,
 The values of the computed spline at the points (TX(r),TY(r)),
 for r = 1,2,...,N, may be obtained in the double precision array
 FF, of length at least N, by the following code:
+ or PX < 8.
 IFAIL = 0
 CALL E02DEF(N,NX,NY,TX,TY,LAMDA,MU,C,FF,WRK,IWRK,IFAIL)
+ IFAIL= 2
+ On entry the knots in array LAMDA, or those in array MU, are
+ not in non-decreasing order, or LAMDA(PX-3) <= LAMDA(4), or
+ MU(PY-3) <= MU(4).
 where NX, NY, LAMDA, MU and C are the output parameters of
 E02DCF, WRK is a double precision workspace array of length at
 least NY-4, and IWRK is an integer workspace array of length at
 least NY-4.
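The E02DEF call above has a rough analogue in SciPy's Dierckx-based spline routines. The sketch below is illustrative only (the data, the linear test surface and the use of RectBivariateSpline are assumptions, not NAG code): it evaluates a bicubic spline at a list of points (TX(r),TY(r)), one value per point, as E02DEF does.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Build a bicubic spline interpolant of f(x,y) = x + y on a grid;
# its knots and coefficients play the roles of LAMDA, MU and C.
xs = np.linspace(0.0, 1.0, 10)
ys = np.linspace(0.0, 1.0, 10)
spl = RectBivariateSpline(xs, ys, np.add.outer(xs, ys), kx=3, ky=3, s=0)

# Point-wise evaluation, analogous to the E02DEF call above:
# grid=False evaluates at the points (tx[r], ty[r]), not on a grid.
tx = np.array([0.2, 0.5, 0.8])
ty = np.array([0.3, 0.5, 0.9])
ff = spl(tx, ty, grid=False)     # FF analogue: one value per point
```

Since f is linear, a bicubic interpolant reproduces it essentially exactly, which makes the sketch easy to check.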
+ IFAIL= 3
+ On entry at least one of the prescribed points (x(r),y(r)) lies
+ outside the rectangle defined by LAMDA(4), LAMDA(PX-3) and
+ MU(4), MU(PY-3).
 To evaluate the computed spline on a KX by KY rectangular grid of
 points in the x-y plane, which is defined by the x coordinates
 stored in TX(q), for q=1,2,...,KX, and the y coordinates stored
 in TY(r), for r=1,2,...,KY, returning the results in the double
 precision array FG which is of length at least KX*KY, the
 following call may be used:
+ 7. Accuracy
 IFAIL = 0
 CALL E02DFF(KX,KY,NX,NY,TX,TY,LAMDA,MU,C,FG,WRK,LWRK,
 * IWRK,LIWRK,IFAIL)
+ The method used to evaluate the B-splines is numerically stable,
+ in the sense that each computed value of s(x(r),y(r)) can be
+ regarded as the value that would have been obtained in exact
+ arithmetic from slightly perturbed B-spline coefficients. See
+ Cox [2] for details.
 where NX, NY, LAMDA, MU and C are the output parameters of
 E02DCF, WRK is a double precision workspace array of length at
 least LWRK = min(NWRK1,NWRK2), NWRK1 = KX*4+NX, NWRK2 = KY*4+NY,
 and IWRK is an integer workspace array of length at least
 LIWRK = KY + NY - 4 if NWRK1 >= NWRK2, or KX + NX - 4 otherwise.
 The result of the spline evaluated at grid point (q,r) is
 returned in element (KY*(q-1)+r) of the array FG.
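For comparison, SciPy's Dierckx-based RectBivariateSpline can sketch the same grid evaluation and the flat FG storage convention described above. The data and test surface here are illustrative assumptions, not NAG code.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Bicubic interpolant of f(x,y) = x*y; knots/coefficients (spl.tck)
# correspond to LAMDA, MU and C in the NAG representation.
xs = np.linspace(0.0, 1.0, 12)
ys = np.linspace(0.0, 1.0, 12)
spl = RectBivariateSpline(xs, ys, np.outer(xs, ys), kx=3, ky=3, s=0)

tx = np.linspace(0.1, 0.9, 4)    # grid x coordinates (KX = 4)
ty = np.linspace(0.1, 0.9, 3)    # grid y coordinates (KY = 3)
fg = spl(tx, ty)                 # shape (KX, KY); fg[q, r] = s(tx[q], ty[r])

# NAG returns the same values in a flat array whose element
# KY*(q-1)+r (1-based) holds the value at grid point (q,r);
# a row-major ravel reproduces that layout 0-based.
ff = fg.ravel()
```

Because x*y is bilinear, the bicubic interpolant reproduces it to rounding error, so the grid values can be checked directly.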
+ 8. Further Comments
+
+ Computation time is approximately proportional to the number of
+ points, m, at which the evaluation is required.
9. Example
 This example program reads in values of MX, MY, x(q), for
 q = 1,2,...,MX, and y(r), for r = 1,2,...,MY, followed by the
 ordinates f(q,r) defined at the grid points (x(q),y(r)). It then
 calls E02DCF to compute a bicubic spline approximation for one
 specified value of S, and prints the values of the computed knots
 and B-spline coefficients. Finally it evaluates the spline at a
 small sample of points on a rectangular grid.
+ This program reads in knot sets LAMDA(1),...,LAMDA(PX) and
+ MU(1),...,MU(PY), and a set of bicubic spline coefficients c(ij).
+ Following these are a value for m and the coordinates (x(r),y(r)),
+ for r=1,2,...,m, at which the spline is to be evaluated.
The example program is not reproduced here. The source code for
all example programs is distributed with the NAG Foundation
@@ -88685,8 +90985,8 @@ have been determined.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 E02 -- Curve and Surface Fitting                              E02DDF
 E02DDF - NAG Foundation Library Routine Document
+ E02 -- Curve and Surface Fitting                              E02DFF
+ E02DFF - NAG Foundation Library Routine Document
Note: Before using this routine, please read the Users' Note for
your implementation to check implementation-dependent details.
@@ -88695,564 +90995,459 @@ have been determined.
1. Purpose
 E02DDF computes a bicubic spline approximation to a set of
 scattered data. The knots of the spline are located
 automatically, but a single parameter must be specified to
 control the trade-off between closeness of fit and smoothness of
 fit.
+ E02DFF calculates values of a bicubic spline from its B-spline
+ representation. The spline is evaluated at all points on a
+ rectangular grid.
2. Specification
 SUBROUTINE E02DDF (START, M, X, Y, F, W, S, NXEST, NYEST,
 1 NX, LAMDA, NY, MU, C, FP, RANK, WRK,
 2 LWRK, IWRK, LIWRK, IFAIL)
 INTEGER M, NXEST, NYEST, NX, NY, RANK, LWRK, IWRK
 1 (LIWRK), LIWRK, IFAIL
 DOUBLE PRECISION X(M), Y(M), F(M), W(M), S, LAMDA(NXEST),
 1 MU(NYEST), C((NXEST-4)*(NYEST-4)), FP, WRK
 2 (LWRK)
 CHARACTER*1 START
+ SUBROUTINE E02DFF (MX, MY, PX, PY, X, Y, LAMDA, MU, C, FF,
+ 1 WRK, LWRK, IWRK, LIWRK, IFAIL)
+ INTEGER MX, MY, PX, PY, LWRK, IWRK(LIWRK), LIWRK,
+ 1 IFAIL
+ DOUBLE PRECISION X(MX), Y(MY), LAMDA(PX), MU(PY), C((PX-4)*
+ 1 (PY-4)), FF(MX*MY), WRK(LWRK)
3. Description
 This routine determines a smooth bicubic spline approximation
 s(x,y) to the set of data points (x(r),y(r),f(r)) with weights
 w(r), for r=1,2,...,m.

 The approximation domain is considered to be the rectangle
 [xmin,xmax]*[ymin,ymax], where xmin (ymin) and xmax (ymax) denote
 the lowest and highest data values of x (y).

 The spline is given in the B-spline representation
+ This routine calculates values of the bicubic spline s(x,y) on a
+ rectangular grid of points in the x-y plane, from its augmented
+ knot sets {lambda} and {mu} and from the coefficients c(ij),
+ for i=1,2,...,PX-4; j=1,2,...,PY-4, in its B-spline
+ representation

            nx-4 ny-4
             --   --
    s(x,y) = >    >   c(ij) M(i)(x) N(j)(y),                 (1)
             --   --
            i=1  j=1

+            --
+   s(x,y) = >   c(ij) M(i)(x) N(j)(y).
+            --
+            ij

 where M(i)(x) and N(j)(y) denote normalised cubic B-splines, the
+ Here M(i)(x) and N(j)(y) denote normalised cubic B-splines, the
 former defined on the knots lambda(i) to lambda(i+4) and the
 latter on the knots mu(j) to mu(j+4). For further details, see
+ latter on the knots mu(j) to mu(j+4).
 Hayes and Halliday [4] for bicubic splines and de Boor [1] for
 normalised B-splines.

 The total numbers nx and ny of these knots and their values
 lambda(1),...,lambda(nx) and mu(1),...,mu(ny) are chosen
 automatically by the routine. The knots lambda(5),...,
 lambda(nx-4) and mu(5),...,mu(ny-4) are the interior knots; they
 divide the approximation domain [xmin,xmax]*[ymin,ymax] into
 (nx-7)*(ny-7) subpanels [lambda(i),lambda(i+1)]*[mu(j),mu(j+1)],
 for i=4,5,...,nx-4; j=4,5,...,ny-4. Then, much as in the curve
 case (see E02BEF), the coefficients c(ij) are determined as the
 solution of the following constrained minimization problem:

 minimize
 (eta), (2)

 subject to the constraint

             m
             --             2
   (theta) = >  (epsilon(r))  <= S                           (3)
             --
            r=1

 where: (eta) is a measure of the (lack of) smoothness of s(x,y).
 Its value depends on the discontinuity jumps in
 s(x,y) across the boundaries of the subpanels. It is
 zero only when there are no discontinuities and is
 positive otherwise, increasing with the size of the
 jumps (see Dierckx [2] for details).

 (epsilon(r)) denotes the weighted residual
 w(r)(f(r)-s(x(r),y(r))),
+ The points in the grid are defined by coordinates x(q), for
+ q=1,2,...,mx, along the x axis, and coordinates y(r), for
+ r=1,2,...,my, along the y axis.
 and S is a nonnegative number to be specified by the user.
+ This routine may be used to calculate values of a bicubic spline
+ given in the form produced by E01DAF, E02DAF, E02DCF and E02DDF.
+ It is derived from the routine B2VRE in Anthony et al [1].
 By means of the parameter S, 'the smoothing factor', the user
 will then control the balance between smoothness and closeness of
 fit, as measured by the sum of squares of residuals in (3). If S
 is too large, the spline will be too smooth and signal will be
 lost (underfit); if S is too small, the spline will pick up too
 much noise (overfit). In the extreme cases the method would
 return an interpolating spline ((theta)=0) if S were set to zero,
 and returns the least-squares bicubic polynomial ((eta)=0) if S
 is set very large. Experimenting with S-values between these two
 extremes should result in a good compromise. (See Section 8.2 for
 advice on choice of S.) Note however, that this routine, unlike
 E02BEF and E02DCF, does not allow S to be set exactly to zero: to
 compute an interpolant to scattered data, E01SAF or E01SEF should
 be used.
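The trade-off described above can be observed with SciPy's bisplrep, which wraps the Dierckx surface-fitting code that E02DCF/E02DDF are based on. The test data and the three s values in this sketch are illustrative assumptions, not part of the NAG documentation.

```python
import numpy as np
from scipy.interpolate import bisplrep

# Illustrative scattered data: a smooth surface plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = rng.uniform(0.0, 1.0, 200)
f = (np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
     + 0.05 * rng.standard_normal(200))

# Fit with three decreasing smoothing factors; fp is the sum of
# squared residuals (theta) actually achieved for each s.
fps = []
for s in (50.0, 5.0, 0.5):
    tck, fp, ier, msg = bisplrep(x, y, f, s=s, full_output=1)
    fps.append(fp)
# Smaller s forces a closer (less smooth) fit, so fp decreases.
```

A large s returns something close to the least-squares bicubic polynomial (underfit); a very small s chases the noise (overfit), exactly the balance the smoothing factor S controls here.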
+ 4. References
 The method employed is outlined in Section 8.5 and fully
 described in Dierckx [2] and [3]. It involves an adaptive
 strategy for locating the knots of the bicubic spline (depending
 on the function underlying the data and on the value of S), and
 an iterative method for solving the constrained minimization
 problem once the knots have been determined.
+ [1] Anthony G T, Cox M G and Hayes J G (1982) DASL - Data
+ Approximation Subroutine Library. National Physical
+ Laboratory.
 Values of the computed spline can subsequently be computed by
 calling E02DEF or E02DFF as described in Section 8.6.
+ [2] Cox M G (1978) The Numerical Evaluation of a Spline from its
+ B-spline Representation. J. Inst. Math. Appl. 21 135-143.
 4. References
+ 5. Parameters
 [1] De Boor C (1972) On Calculating with B-splines. J. Approx.
 Theory. 6 50-62.
+ 1: MX  INTEGER Input
 [2] Dierckx P (1981) An Algorithm for Surface Fitting with
 Spline Functions. IMA J. Num. Anal. 1 267-283.
+ 2: MY  INTEGER Input
+ On entry: MX and MY must specify m and m respectively,
+ x y
+ the number of points along the x and y axis that define the
+ rectangular grid. Constraint: MX >= 1 and MY >= 1.
 [3] Dierckx P (1981) An Improved Algorithm for Curve Fitting
 with Spline Functions. Report TW54. Department of Computer
 Science, Katholieke Universiteit Leuven.
+ 3: PX  INTEGER Input
 [4] Hayes J G and Halliday J (1974) The Least-squares Fitting of
 Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
 Appl. 14 89-103.
+ 4: PY  INTEGER Input
+ On entry: PX and PY must specify the total number of knots
+ associated with the variables x and y respectively. They are
+ such that PX-8 and PY-8 are the corresponding numbers of
+ interior knots. Constraint: PX >= 8 and PY >= 8.
 [5] Peters G and Wilkinson J H (1970) The Least-squares Problem
 and Pseudo-inverses. Comput. J. 13 309-316.
+ 5: X(MX)  DOUBLE PRECISION array Input
 [6] Reinsch C H (1967) Smoothing by Spline Functions. Num. Math.
 10 177-183.
+ 6: Y(MY)  DOUBLE PRECISION array Input
+ On entry: X and Y must contain x(q), for q=1,2,...,mx, and
+ y(r), for r=1,2,...,my, respectively. These are the x and y
+ coordinates that define the rectangular grid of points at
+ which values of the spline are required. Constraint: X and Y
+ must satisfy
 5. Parameters
+ LAMDA(4) <= X(q) < X(q+1) <= LAMDA(PX-3), for q=1,2,...,mx-1
+ and
 1: START  CHARACTER*1 Input
 On entry: START must be set to 'C' or 'W'.
+ MU(4) <= Y(r) < Y(r+1) <= MU(PY-3), for r=1,2,...,my-1.
 If START = 'C' (Cold start), the routine will build up the
 knot set starting with no interior knots. No values need be
 assigned to the parameters NX, NY, LAMDA, MU or WRK.
+ The spline representation is not valid outside these
+ intervals.
 If START = 'W' (Warm start), the routine will restart the
 knotplacing strategy using the knots found in a previous
 call of the routine. In this case, the parameters NX, NY,
 LAMDA, MU and WRK must be unchanged from that previous call.
 This warm start can save much time in searching for a
 satisfactory value of S. Constraint: START = 'C' or 'W'.
+ 7: LAMDA(PX)  DOUBLE PRECISION array Input
 2: M  INTEGER Input
 On entry: m, the number of data points.
+ 8: MU(PY)  DOUBLE PRECISION array Input
+ On entry: LAMDA and MU must contain the complete sets of
+ knots {lambda} and {mu} associated with the x and y
+ variables respectively. Constraint: the knots in each set
+ must be in non-decreasing order, with LAMDA(PX-3) > LAMDA(4)
+ and MU(PY-3) > MU(4).
 The number of data points with nonzero weight (see W below)
 must be at least 16.
+ 9: C((PX-4)*(PY-4))  DOUBLE PRECISION array Input
+ On entry: C((PY-4)*(i-1)+j) must contain the coefficient
+ c(ij) described in Section 3, for i=1,2,...,PX-4;
+ j=1,2,...,PY-4.
 3: X(M)  DOUBLE PRECISION array Input
+ 10: FF(MX*MY)  DOUBLE PRECISION array Output
+ On exit: FF(MY*(q-1)+r) contains the value of the spline at
+ the point (x(q),y(r)), for q=1,2,...,mx; r=1,2,...,my.
 4: Y(M)  DOUBLE PRECISION array Input
+ 11: WRK(LWRK)  DOUBLE PRECISION array Workspace
 5: F(M)  DOUBLE PRECISION array Input
 On entry: X(r), Y(r), F(r) must be set to the coordinates
 of (x ,y ,f ), the rth data point, for r=1,2,...,m. The
 r r r
 order of the data points is immaterial.
+ 12: LWRK  INTEGER Input
+ On entry:
+ the dimension of the array WRK as declared in the
+ (sub)program from which E02DFF is called.
+ Constraint: LWRK >= min(NWRK1,NWRK2), where NWRK1=4*MX+PX,
+ NWRK2=4*MY+PY.
 6: W(M)  DOUBLE PRECISION array Input
 On entry: W(r) must be set to w , the rth value in the set
 r
 of weights, for r=1,2,...,m. Zero weights are permitted and
 the corresponding points are ignored, except when
 determining xmin, xmax, ymin and ymax (see Section 8.4). For
 advice on the choice of weights, see Section 2.1.2 of the
 Chapter Introduction. Constraint: the number of data points
 with nonzero weight must be at least 16.
+ 13: IWRK(LIWRK)  INTEGER array Workspace
 7: S  DOUBLE PRECISION Input
 On entry: the smoothing factor, S.
+ 14: LIWRK  INTEGER Input
+ On entry:
+ the dimension of the array IWRK as declared in the
+ (sub)program from which E02DFF is called.
+ Constraint: LIWRK >= MY + PY - 4 if NWRK1 > NWRK2, or MX +
+ PX - 4 otherwise, where NWRK1 and NWRK2 are as defined in
+ the description of argument LWRK.
 For advice on the choice of S, see Section 3 and Section 8.2
 . Constraint: S > 0.0.
+ 15: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, 1 or -1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 8: NXEST  INTEGER Input
+ On exit: IFAIL = 0 unless the routine detects an error (see
+ Section 6).
+ 6. Error Indicators and Warnings
 9: NYEST  INTEGER Input
 On entry: an upper bound for the number of knots nx and ny
 required in the x- and y-directions respectively.
 In most practical situations, NXEST = NYEST = 4+sqrt(m/2) is
 sufficient. See also Section 8.3. Constraint: NXEST >= 8 and
 NYEST >= 8.
+ Errors detected by the routine:
 10: NX  INTEGER Input/Output
 On entry: if the warm start option is used, the value of NX
 must be left unchanged from the previous call. On exit: the
 total number of knots, nx, of the computed spline with
 respect to the x variable.
+ If on entry IFAIL = 0 or -1, explanatory error messages are
+ output on the current error message unit (as defined by X04AAF).
 11: LAMDA(NXEST)  DOUBLE PRECISION array Input/Output
 On entry: if the warm start option is used, the values LAMDA
 (1), LAMDA(2),...,LAMDA(NX) must be left unchanged from the
 previous call. On exit: LAMDA contains the complete set of
 knots lambda(i) associated with the x variable, i.e., the
 interior knots LAMDA(5), LAMDA(6),...,LAMDA(NX-4) as well as
 the additional knots LAMDA(1) = LAMDA(2) = LAMDA(3) = LAMDA(4)
 = xmin and LAMDA(NX-3) = LAMDA(NX-2) = LAMDA(NX-1) = LAMDA(NX)
 = xmax needed for the B-spline representation (where xmin and
 xmax are as described in Section 3).
+ IFAIL= 1
+ On entry MX < 1,
 12: NY  INTEGER Input/Output
 On entry: if the warm start option is used, the value of NY
 must be left unchanged from the previous call. On exit: the
 total number of knots, ny, of the computed spline with
 respect to the y variable.
+ or MY < 1,
 13: MU(NYEST)  DOUBLE PRECISION array Input/Output
 On entry: if the warm start option is used, the values MU(1),
 MU(2),...,MU(NY) must be left unchanged from the previous
 call. On exit: MU contains the complete set of knots mu(i)
 associated with the y variable, i.e., the interior knots
 MU(5), MU(6),...,MU(NY-4) as well as the additional knots
 MU(1) = MU(2) = MU(3) = MU(4) = ymin and MU(NY-3) = MU(NY-2)
 = MU(NY-1) = MU(NY) = ymax needed for the B-spline
 representation (where ymin and ymax are as described in
 Section 3).
+ or PY < 8,
 14: C((NXEST-4)*(NYEST-4))  DOUBLE PRECISION array Output
 On exit: the coefficients of the spline approximation.
 C((ny-4)*(i-1)+j) is the coefficient c(ij) defined in
 Section 3.
+ or PX < 8.
 15: FP  DOUBLE PRECISION Output
 On exit: the weighted sum of squared residuals, (theta), of
 the computed spline approximation. FP should equal S within
 a relative tolerance of 0.001 unless NX = NY = 8, when the
 spline has no interior knots and so is simply a bicubic
 polynomial. For knots to be inserted, S must be set to a
 value below the value of FP produced in this case.
+ IFAIL= 2
+ On entry LWRK is too small,
 16: RANK  INTEGER Output
 On exit: RANK gives the rank of the system of equations used
 to compute the final spline (as determined by a suitable
 machine-dependent threshold). When RANK = (NX-4)*(NY-4), the
 solution is unique; otherwise the system is rank-deficient
 and the minimum-norm solution is computed. The latter case
 may be caused by too small a value of S.
+ or LIWRK is too small.
 17: WRK(LWRK)  DOUBLE PRECISION array Workspace
 On entry: if the warm start option is used, the value of WRK
 (1) must be left unchanged from the previous call.
+ IFAIL= 3
+ On entry the knots in array LAMDA, or those in array MU, are
+ not in non-decreasing order, or LAMDA(PX-3) <= LAMDA(4), or
+ MU(PY-3) <= MU(4).
 This array is used as workspace.
+ IFAIL= 4
+ On entry the restriction LAMDA(4) <= X(1) < ... < X(MX) <=
+ LAMDA(PX-3), or the restriction MU(4) <= Y(1) < ... < Y(MY)
+ <= MU(PY-3), is violated.
 18: LWRK  INTEGER Input
 On entry:
 the dimension of the array WRK as declared in the
 (sub)program from which E02DDF is called.
 Constraint: LWRK >= (7*u*v+25*w)*(w+1)+2*(u+v+4*M)+23*w+56,
+ 7. Accuracy
 where
+ The method used to evaluate the B-splines is numerically stable,
+ in the sense that each computed value of s(x(r),y(r)) can be
+ regarded as the value that would have been obtained in exact
+ arithmetic from slightly perturbed B-spline coefficients. See
+ Cox [2] for details.
 u=NXEST-4, v=NYEST-4, and w=max(u,v).
+ 8. Further Comments
 For some problems, the routine may need to compute the
 minimal least-squares solution of a rank-deficient system of
 linear equations (see Section 3). The amount of workspace
 required to solve such problems will be larger than
 specified by the value given above, which must be increased
 by an amount, LWRK2 say. An upper bound for LWRK2 is given
 by 4*u*v*w+2*u*v+4*w, where u, v and w are as above.
 However, if there are enough data points, scattered
 uniformly over the approximation domain, and if the
 smoothing factor S is not too small, there is a good chance
 that this extra workspace is not needed. A lot of memory
 might therefore be saved by assuming LWRK2 = 0.
+ Computation time is approximately proportional to
+ mx*my+4(mx+my).
 19: IWRK(LIWRK)  INTEGER array Workspace
+ 9. Example
+ This program reads in knot sets LAMDA(1),...,LAMDA(PX) and
+ MU(1),...,MU(PY), and a set of bicubic spline coefficients c(ij).
+ Following these are values for mx and the x coordinates x(q), for
+ q=1,2,...,mx, and values for my and the y coordinates y(r), for
+ r=1,2,...,my, defining the grid of points on which the spline is
+ to be evaluated.
 20: LIWRK  INTEGER Input
 On entry:
 the dimension of the array IWRK as declared in the
 (sub)program from which E02DDF is called.
 Constraint: LIWRK >= M+2*(NXEST-7)*(NYEST-7).
+ The example program is not reproduced here. The source code for
+ all example programs is distributed with the NAG Foundation
+ Library software and should be available online.
 21: IFAIL  INTEGER Input/Output
 On entry: IFAIL must be set to 0, 1 or -1. For users not
 familiar with this parameter (described in the Essential
 Introduction) the recommended value is 0.
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 On exit: IFAIL = 0 unless the routine detects an error (see
 Section 6).
+ E02 -- Curve and Surface Fitting                              E02GAF
+ E02GAF - NAG Foundation Library Routine Document
 6. Error Indicators and Warnings
+ Note: Before using this routine, please read the Users' Note for
+ your implementation to check implementation-dependent details.
+ The symbol (*) after a NAG routine name denotes a routine that is
+ not included in the Foundation Library.
 Errors detected by the routine:
+ 1. Purpose
 If on entry IFAIL = 0 or 1, explanatory error messages are
 output on the current error message unit (as defined by X04AAF).
+ E02GAF calculates an l1 solution to an overdetermined system of
+ linear equations.
 IFAIL= 1
 On entry START /= 'C' or 'W',
+ 2. Specification
 or the number of data points with nonzero weight <
 16,
+ SUBROUTINE E02GAF (M, A, LA, B, NPLUS2, TOLER, X, RESID,
+ 1 IRANK, ITER, IWORK, IFAIL)
+ INTEGER M, LA, NPLUS2, IRANK, ITER, IWORK(M),
+ 1 IFAIL
+ DOUBLE PRECISION A(LA,NPLUS2), B(M), TOLER, X(NPLUS2),
+ 1 RESID
 or S <= 0.0,
+ 3. Description
 or NXEST < 8,
+ Given a matrix A with m rows and n columns (m>=n) and a vector b
+ with m elements, the routine calculates an l1 solution to the
+ overdetermined system of equations
 or NYEST < 8,
+ Ax=b.
 or LWRK < (7*u*v+25*w)*(w+1)+2*(u+v+4*M)+23*w+56,
 where u = NXEST - 4, v = NYEST - 4 and w=max(u,v),
+ That is to say, it calculates a vector x, with n elements, which
+ minimizes the l1 norm (the sum of the absolute values) of the
+ residuals
 or LIWRK < M+2*(NXEST-7)*(NYEST-7),
+            m
+            --
+            >  |r(i)|,
+            --
+           i=1
 IFAIL= 2
 On entry either all the X(r), for r = 1,2,...,M, are equal,
 or all the Y(r), for r = 1,2,...,M, are equal.
+ where the residuals r(i) are given by
 IFAIL= 3
 The number of knots required is greater than allowed by
 NXEST and NYEST. Try increasing NXEST and/or NYEST and, if
 necessary, supplying larger arrays for the parameters LAMDA,
 MU, C, WRK and IWRK. However, if NXEST and NYEST are already

+                  n
+                  --
+   r(i) = b(i) -  >  a(ij) x(j),   i=1,2,...,m.
+                  --
+                 j=1
 large, say NXEST, NYEST > 4 + sqrt(M/2), then this error exit
 may indicate that S is too small.
+ Here a(ij) is the element in row i and column j of A, b(i) is
+ the ith element of b and x(j) the jth element of x. The matrix A
+ need not be of full rank.
 IFAIL= 4
 No more knots can be added because the number of Bspline
 coefficients (NX-4)*(NY-4) already exceeds the number of
 data points M. This error exit may occur if either of S or M
 is too small.
+ Typically in applications to data fitting, data consisting of m
+ points with coordinates (t(i),y(i)) are to be approximated in
+ the l1 norm by a linear combination of known functions phi(j)(t),
 IFAIL= 5
 No more knots can be added because the additional knot would
 (quasi) coincide with an old one. This error exit may occur
 if too large a weight has been given to an inaccurate data
 point, or if S is too small.
+ alpha(1)phi(1)(t) + alpha(2)phi(2)(t) + ... + alpha(n)phi(n)(t).
 IFAIL= 6
 The iterative process used to compute the coefficients of
 the approximating spline has failed to converge. This error
 exit may occur if S has been set very small. If the error
 persists with increased S, consult NAG.
+ This is equivalent to fitting an l1 solution to the
+ overdetermined system of equations
 IFAIL= 7
 LWRK is too small; the routine needs to compute the minimal
 least-squares solution of a rank-deficient system of linear
 equations, but there is not enough workspace. There is no
 approximation returned but, having saved the information
 contained in NX, LAMDA, NY, MU and WRK, and having adjusted
 the value of LWRK and the dimension of array WRK
 accordingly, the user can continue at the point the program
 was left by calling E02DDF with START = 'W'. Note that the
 requested value for LWRK is only large enough for the
 current phase of the algorithm. If the routine is restarted
 with LWRK set to the minimum value requested, a larger
 request may be made at a later stage of the computation. See
 Section 5 for the upper bound on LWRK. On soft failure, the
 minimum requested value for LWRK is returned in IWRK(1) and
 the safe value for LWRK is returned in IWRK(2).
+            n
+            --
+            >  phi(j)(t(i)) alpha(j) = y(i),   i=1,2,...,m.
+            --
+           j=1
 If IFAIL = 3, 4, 5 or 6, a spline approximation is returned, but
 it fails to satisfy the fitting criterion (see (2) and (3) in
 Section 3) - perhaps only by a small amount, however.
+ Thus if, for each value of i and j, the element a(ij) of the
+ matrix A in the previous paragraph is set equal to the value of
+ phi(j)(t(i)) and b(i) is set equal to y(i), the solution vector
+ x will contain the required values of the alpha(j). Note that
+ the independent variable t above can, instead, be a vector of
+ several independent variables (this includes the case where each
+ phi(i) is a function of a different variable, or set of
+ variables).
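As a hypothetical illustration of this data-fitting use (not the E02GAF algorithm itself, which applies a modified simplex method to the primal l1 problem), the same fit can be posed as a linear program. The basis functions phi(1)(t)=1, phi(2)(t)=t, the data and the solver choice below are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import linprog

# Build A from the basis functions, as described above, and compute an
# l1 fit by LP: minimise sum(rp + rm) subject to A@alpha + rp - rm = b,
# rp, rm >= 0, so rp - rm = b - A@alpha are the residuals and
# sum(rp + rm) equals sum|r_i| at the optimum.
t = np.linspace(0.0, 1.0, 8)
b = 1.0 + 2.0 * t
b[3] += 1.0                            # one outlier; l1 fitting resists it
A = np.column_stack([np.ones_like(t), t])
m, n = A.shape

c = np.concatenate([np.zeros(n), np.ones(2 * m)])
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
bounds = [(None, None)] * n + [(0.0, None)] * (2 * m)
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")

alpha = res.x[:n]                      # fitted coefficients (X analogue)
resid = np.abs(A @ alpha - b).sum()    # sum of |residuals| (RESID analogue)
```

Seven of the eight points lie exactly on the line 1+2t, so the l1 fit passes through them and leaves all the error on the outlier, which is the robustness property that motivates l1 fitting.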
 7. Accuracy
+ The algorithm is a modification of the simplex method of linear
+ programming applied to the primal formulation of the l1 problem
+ (see Barrodale and Roberts [1] and [2]). The modification allows
+ several neighbouring simplex vertices to be passed through in a
+ single iteration, providing a substantial improvement in
+ efficiency.
 On successful exit, the approximation returned is such that its
 weighted sum of squared residuals FP is equal to the smoothing
 factor S, up to a specified relative tolerance of 0.001 - except
 that if nx=8 and ny=8, FP may be significantly less than S: in
 this case the computed spline is simply the least-squares bicubic
 polynomial approximation of degree 3, i.e., a spline with no
 interior knots.
+ 4. References
 8. Further Comments
+ [1] Barrodale I and Roberts F D K (1973) An Improved Algorithm
+ for Discrete l1 Linear Approximation. SIAM J. Numer.
+ Anal. 10 839-848.
 8.1. Timing
+ [2] Barrodale I and Roberts F D K (1974) Solution of an
+ Overdetermined System of Equations in the l1 Norm. Comm.
+ ACM. 17, 6 319-320.
 The time taken for a call of E02DDF depends on the complexity of
 the shape of the data, the value of the smoothing factor S, and
 the number of data points. If E02DDF is to be called for
 different values of S, much time can be saved by setting START =
 'W'. It should be noted that choosing S very small considerably
 increases computation time.
+ 5. Parameters
 8.2. Choice of S
+ 1: M  INTEGER Input
+ On entry: the number of equations, m (the number of rows of
+ the matrix A). Constraint: M >= n >= 1.
 If the weights have been correctly chosen (see Section 2.1.2 of
 the Chapter Introduction), the standard deviation of w(r)f(r)
 would be the same for all r, equal to sigma, say. In this case,
 choosing the smoothing factor S in the range
 sigma^2(m+/-sqrt(2m)), as suggested by Reinsch [6], is likely to
 give a good start in the search for a satisfactory value.
 Otherwise, experimenting with different values of S will be
 required from the start.
+ 2: A(LA,NPLUS2)  DOUBLE PRECISION array Input/Output
+ On entry: A(i,j) must contain a(ij), the element in the ith
+ row and jth column of the matrix A, for i=1,2,...,m and
+ j=1,2,...,n. The remaining elements need not be set. On
+ exit: A contains the last simplex tableau generated by the
+ simplex method.
 In that case, in view of computation time and memory
 requirements, it is recommended to start with a very large value
 for S and so determine the least-squares bicubic polynomial; the
 value returned for FP, call it FP0, gives an upper bound for S.
 Then progressively decrease the value of S to obtain closer fits
 - say by a factor of 10 in the beginning, i.e., S=FP0/10,
 S=FP0/100, and so on, and more carefully as the approximation
 shows more details.
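This search strategy can be sketched with SciPy's bisplrep, a Dierckx-based relative of E02DDF; the test surface, point count and the single factor-of-10 step below are illustrative assumptions, not NAG code.

```python
import numpy as np
from scipy.interpolate import bisplrep

# Step 1: fit with a very large smoothing factor to obtain the
# least-squares bicubic polynomial; its residual sum plays FP0.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 300)
y = rng.uniform(0.0, 1.0, 300)
f = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2))

_, fp0, _, _ = bisplrep(x, y, f, s=1.0e6, full_output=1)

# Step 2: refit with s decreased by a factor of 10 for a closer fit.
# A real search would continue (FP0/100, ...) while inspecting the
# fit, ideally reusing knots as E02DDF's START = 'W' warm start does.
tck, fp1, ier, msg = bisplrep(x, y, f, s=fp0 / 10.0, full_output=1)
```

Each refit can only lower the achieved residual sum, so fp1 sits below fp0, mirroring the FP0, FP0/10, FP0/100 progression described above.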
+ 3: LA  INTEGER Input
+ On entry:
+ the first dimension of the array A as declared in the
+ (sub)program from which E02GAF is called.
+ Constraint: LA >= M + 2.
 To choose S very small is strongly discouraged. This considerably
 increases computation time and memory requirements. It may also
 cause rankdeficiency (as indicated by the parameter RANK) and
 endanger numerical stability.
+ 4: B(M)  DOUBLE PRECISION array Input/Output
+ On entry: b(i), the ith element of the vector b, for
+ i=1,2,...,m. On exit: the ith residual r(i) corresponding to
+ the solution vector x, for i=1,2,...,m.
 The number of knots of the spline returned, and their location,
 generally depend on the value of S and on the behaviour of the
 function underlying the data. However, if E02DDF is called with
 START = 'W', the knots returned may also depend on the smoothing
 factors of the previous calls. Therefore if, after a number of
 trials with different values of S and START = 'W', a fit can
 finally be accepted as satisfactory, it may be worthwhile to call
 E02DDF once more with the selected value for S but now using
 START = 'C'. Often, E02DDF then returns an approximation with the
 same quality of fit but with fewer knots, which is therefore
 better if data reduction is also important.
+ 5: NPLUS2  INTEGER Input
+ On entry: n+2, where n is the number of unknowns (the
+ number of columns of the matrix A). Constraint: 3 <= NPLUS2
+ <= M + 2.
 8.3. Choice of NXEST and NYEST
+ 6: TOLER  DOUBLE PRECISION Input
+ On entry: a non-negative value. In general TOLER specifies
+ a threshold below which numbers are regarded as zero. The
+ recommended threshold value is (epsilon)^(2/3), where
+ (epsilon) is the machine precision. The recommended value
+ can be computed within the routine by setting TOLER to zero.
+ If premature termination occurs a larger value for TOLER may
+ result in a valid solution. Suggested value: 0.0.
 The number of knots may also depend on the upper bounds NXEST and
 NYEST. Indeed, if at a certain stage in E02DDF the number of
 knots in one direction (say nx) has reached the value of its
 upper bound (NXEST), then from that moment on all subsequent
 knots are added in the other (y) direction. This may indicate
 that the value of NXEST is too small. On the other hand, it gives
 the user the option of limiting the number of knots the routine
 locates in any direction. For example, by setting NXEST = 8 (the
 lowest allowable value for NXEST), the user can indicate that he
 wants an approximation which is a simple cubic polynomial in the
 variable x.
+ 7: X(NPLUS2)  DOUBLE PRECISION array Output
+ On exit: X(j) contains the jth element of the solution
+ vector x, for j=1,2,...,n. The elements X(n+1) and X(n+2)
+ are unused.
+
+ 8: RESID  DOUBLE PRECISION Output
+ On exit: the sum of the absolute values of the residuals
+ for the solution vector x.
 8.4. Restriction of the approximation domain
+ 9: IRANK  INTEGER Output
+ On exit: the computed rank of the matrix A.
 The fit obtained is not defined outside the rectangle
 [lambda(4),lambda(nx-3)]*[mu(4),mu(ny-3)]. The reason for taking
 the extreme data values of x and y for these four knots is that,
 as is usual in data fitting, the fit cannot be expected to give
 satisfactory values outside the data region. If, nevertheless,
 the user requires values over a larger rectangle, this can be
 achieved by augmenting the data with two artificial data points
 (a,c,0) and (b,d,0) with zero weight, where [a,b]*[c,d] denotes
 the enlarged rectangle.
+ 10: ITER  INTEGER Output
+ On exit: the number of iterations taken by the simplex
+ method.
 8.5. Outline of method used
+ 11: IWORK(M)  INTEGER array Workspace
 First suitable knot sets are built up in stages (starting with no
 interior knots in the case of a cold start but with the knot set
 found in a previous call if a warm start is chosen). At each
 stage, a bicubic spline is fitted to the data by least-squares
 and (theta), the sum of squares of residuals, is computed. If
 (theta)>S, a new knot is added to one knot set or the other so as
 to reduce (theta) at the next stage. The new knot is located in
 an interval where the fit is particularly poor. Sooner or later,
 we find that (theta)<=S and at that point the knot sets are
 accepted. The routine then goes on to compute a spline which has
 these knot sets and which satisfies the full fitting criterion
 specified by (2) and (3). The theoretical solution has (theta)=S.
 The routine computes the spline by an iterative scheme which is
 ended when (theta)=S within a relative tolerance of 0.001. The
 main part of each iteration consists of a linear least-squares
 computation of special form, done in a similarly stable and
 efficient manner as in E02DAF. As there also, the minimal
 least-squares solution is computed wherever the linear system is
 found to be rank-deficient.
+ 12: IFAIL  INTEGER Input/Output
+ On entry: IFAIL must be set to 0, 1 or -1. For users not
+ familiar with this parameter (described in the Essential
+ Introduction) the recommended value is 0.
 An exception occurs when the routine finds at the start that,
 even with no interior knots (N = 8), the least-squares spline
 already has its sum of squares of res