3 * =========== DOCUMENTATION ===========
5 * Online html documentation available at
6 * http://www.netlib.org/lapack/explore-html/
9 *> Download SGEJSV + dependencies
10 *> <a href="http://www.netlib.org/cgi-bin/netlibfiles.tgz?format=tgz&filename=/lapack/lapack_routine/sgejsv.f">
12 *> <a href="http://www.netlib.org/cgi-bin/netlibfiles.zip?format=zip&filename=/lapack/lapack_routine/sgejsv.f">
14 *> <a href="http://www.netlib.org/cgi-bin/netlibfiles.txt?format=txt&filename=/lapack/lapack_routine/sgejsv.f">
21 * SUBROUTINE SGEJSV( JOBA, JOBU, JOBV, JOBR, JOBT, JOBP,
22 * M, N, A, LDA, SVA, U, LDU, V, LDV,
23 * WORK, LWORK, IWORK, INFO )
25 * .. Scalar Arguments ..
27 * INTEGER INFO, LDA, LDU, LDV, LWORK, M, N
29 * .. Array Arguments ..
30 * REAL A( LDA, * ), SVA( N ), U( LDU, * ), V( LDV, * ),
33 * CHARACTER*1 JOBA, JOBP, JOBR, JOBT, JOBU, JOBV
42 *> SGEJSV computes the singular value decomposition (SVD) of a real M-by-N
43 *> matrix [A], where M >= N. The SVD of [A] is written as
45 *> [A] = [U] * [SIGMA] * [V]^t,
47 *> where [SIGMA] is an N-by-N (M-by-N) matrix which is zero except for its N
48 *> diagonal elements, [U] is an M-by-N (or M-by-M) orthonormal matrix, and
49 *> [V] is an N-by-N orthogonal matrix. The diagonal elements of [SIGMA] are
50 *> the singular values of [A]. The columns of [U] and [V] are the left and
51 *> the right singular vectors of [A], respectively. The matrices [U] and [V]
52 *> are computed and stored in the arrays U and V, respectively. The diagonal
53 *> of [SIGMA] is computed and stored in the array SVA.
54 *> SGEJSV can sometimes compute tiny singular values and their singular vectors much
55 *> more accurately than other SVD routines, see below under Further Details.
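*>
*> A minimal calling sketch (illustrative only: the dimensions, the job
*> options and the workspace size below are example choices based on the
*> argument descriptions that follow, not part of the formal interface):
*>
*>       INTEGER            M, N, LWORK
*>       PARAMETER          ( M = 4, N = 3 )
*>       PARAMETER          ( LWORK = 2*M + 6*N + 2*N*N )
*>       REAL               A( M, N ), SVA( N ), U( M, N ), V( N, N ),
*>      $                   WORK( LWORK )
*>       INTEGER            IWORK( M + 3*N ), INFO
*>       EXTERNAL           SGEJSV
*>       ... fill the M-by-N matrix A ...
*>       CALL SGEJSV( 'C', 'U', 'V', 'R', 'N', 'N', M, N, A, M, SVA,
*>      $             U, M, V, N, WORK, LWORK, IWORK, INFO )
*>
*> On successful exit (INFO = 0) the scaled singular values are in SVA;
*> see the descriptions of SVA and WORK below for how to recover them.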
63 *> JOBA is CHARACTER*1
64 *> Specifies the level of accuracy:
65 *> = 'C': This option works well (high relative accuracy) if A = B * D,
66 *> with well-conditioned B and arbitrary diagonal matrix D.
67 *> The accuracy cannot be spoiled by COLUMN scaling. The
68 *> accuracy of the computed output depends on the condition of
69 *> B, and the procedure aims at the best theoretical accuracy.
70 *> The relative error max_{i=1:N}|d sigma_i| / sigma_i is
71 *> bounded by f(M,N)*epsilon* cond(B), independent of D.
72 *> The input matrix is preprocessed with the QRF with column
73 *> pivoting. This initial preprocessing and preconditioning by
74 *> a rank revealing QR factorization is common for all values of
75 *> JOBA. Additional actions are specified as follows:
76 *> = 'E': Computation as with 'C' with an additional estimate of the
77 *> condition number of B. It provides a realistic error bound.
78 *> = 'F': If A = D1 * C * D2 with ill-conditioned diagonal scalings
79 *> D1, D2, and well-conditioned matrix C, this option gives
80 *> higher accuracy than the 'C' option. If the structure of the
81 *> input matrix is not known, and relative accuracy is
82 *> desirable, then this option is advisable. The input matrix A
83 *> is preprocessed with QR factorization with FULL (row and column) pivoting.
85 *> = 'G': Computation as with 'F' with an additional estimate of the
86 *> condition number of B, where A=D*B. If A has heavily weighted
87 *> rows, then using this condition number gives a too pessimistic error bound.
89 *> = 'A': Small singular values are the noise and the matrix is treated
90 *> as numerically rank deficient. The error in the computed
91 *> singular values is bounded by f(m,n)*epsilon*||A||.
92 *> The computed SVD A = U * S * V^t restores A up to
93 *> f(m,n)*epsilon*||A||.
94 *> This gives the procedure the licence to discard (set to zero)
95 *> all singular values below N*epsilon*||A||.
96 *> = 'R': Similar to 'A'. The rank revealing property of the initial
97 *> QR factorization is used to reveal (using the triangular factor)
98 *> a gap sigma_{r+1} < epsilon * sigma_r in which case the
99 *> numerical RANK is declared to be r. The SVD is computed with
100 *> absolute error bounds, but more accurately than with 'A'.
105 *> JOBU is CHARACTER*1
106 *> Specifies whether to compute the columns of U:
107 *> = 'U': N columns of U are returned in the array U.
108 *> = 'F': the full set of M left singular vectors is returned in the array U.
109 *> = 'W': U may be used as workspace of length M*N. See the description of U.
111 *> = 'N': U is not computed.
116 *> JOBV is CHARACTER*1
117 *> Specifies whether to compute the matrix V:
118 *> = 'V': N columns of V are returned in the array V; Jacobi rotations
119 *> are not explicitly accumulated.
120 *> = 'J': N columns of V are returned in the array V, but they are
121 *> computed as the product of Jacobi rotations. This option is
122 *> allowed only if JOBU .NE. 'N', i.e. in computing the full SVD.
123 *> = 'W': V may be used as workspace of length N*N. See the description of V.
125 *> = 'N': V is not computed.
130 *> JOBR is CHARACTER*1
131 *> Specifies the RANGE for the singular values. Issues the licence to
132 *> set to zero small positive singular values if they are outside the
133 *> specified range. If A .NE. 0 is scaled so that the largest singular
134 *> value of c*A is around SQRT(BIG), BIG=SLAMCH('O'), then JOBR issues
135 *> the licence to kill columns of A whose norm in c*A is less than
136 *> SQRT(SFMIN) (for JOBR.EQ.'R'), or less than SMALL=SFMIN/EPSLN,
137 *> where SFMIN=SLAMCH('S'), EPSLN=SLAMCH('E').
138 *> = 'N': Do not kill small columns of c*A. This option assumes that
139 *> BLAS and QR factorizations and triangular solvers are
140 *> implemented to work in that range. If the condition of A
141 *> is greater than BIG, use SGESVJ.
142 *> = 'R': RESTRICTED range for sigma(c*A) is [SQRT(SFMIN), SQRT(BIG)]
143 *> (roughly, as described above). This option is recommended.
144 *> ===========================
145 *> For computing the singular values in the FULL range [SFMIN,BIG], use SGESVJ.
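*>
*> For reference, the machine constants quoted above can be obtained from
*> SLAMCH; a small illustrative sketch (mirroring what SGEJSV queries
*> internally):
*>
*>       REAL               SLAMCH, BIG, SFMIN, EPSLN, SMALL
*>       EXTERNAL           SLAMCH
*>       BIG   = SLAMCH( 'O' )
*>       SFMIN = SLAMCH( 'S' )
*>       EPSLN = SLAMCH( 'E' )
*>       SMALL = SFMIN / EPSLN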
151 *> JOBT is CHARACTER*1
152 *> If the matrix is square then the procedure may determine to use
153 *> transposed A if A^t seems to be better with respect to convergence.
154 *> If the matrix is not square, JOBT is ignored. This is subject to
155 *> changes in the future.
156 *> The decision is based on two values of entropy over the adjoint
157 *> orbit of A^t * A. See the descriptions of WORK(6) and WORK(7).
158 *> = 'T': transpose if entropy test indicates possibly faster
159 *> convergence of Jacobi process if A^t is taken as input. If A is
160 *> replaced with A^t, then the row pivoting is included automatically.
161 *> = 'N': do not speculate.
162 *> This option can be used to compute only the singular values, or the
163 *> full SVD (U, SIGMA and V). For only one set of singular vectors
164 *> (U or V), the caller should provide both U and V, as one of the
165 *> matrices is used as workspace if the matrix A is transposed.
166 *> The implementer can easily remove this constraint and make the
167 *> code more complicated. See the descriptions of U and V.
172 *> JOBP is CHARACTER*1
173 *> Issues the licence to introduce structured perturbations to drown
174 *> denormalized numbers. This licence should be active if the
175 *> denormals are poorly implemented, causing slow computation,
176 *> especially in cases of fast convergence (!). For details see [1,2].
177 *> For the sake of simplicity, these perturbations are included only
178 *> when the full SVD or only the singular values are requested. The
179 *> implementer/user can easily add the perturbation for the cases of
180 *> computing one set of singular vectors.
181 *> = 'P': introduce perturbation
182 *> = 'N': do not perturb
188 *> The number of rows of the input matrix A. M >= 0.
194 *> The number of columns of the input matrix A. M >= N >= 0.
199 *> A is REAL array, dimension (LDA,N)
200 *> On entry, the M-by-N matrix A.
206 *> The leading dimension of the array A. LDA >= max(1,M).
211 *> SVA is REAL array, dimension (N)
213 *> - For WORK(1)/WORK(2) = ONE: The singular values of A. During the
214 *> computation SVA contains Euclidean column norms of the
215 *> iterated matrices in the array A.
216 *> - For WORK(1) .NE. WORK(2): The singular values of A are
217 *> (WORK(1)/WORK(2)) * SVA(1:N). This factored form is used if
218 *> sigma_max(A) overflows or if small singular values have been
219 *> saved from underflow by scaling the input matrix A.
220 *> - If JOBR='R' then some of the singular values may be returned
221 *> as exact zeros obtained by "set to zero" because they are
222 *> below the numerical rank threshold or are denormalized numbers.
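*>
*> A caller-side sketch of undoing the factored form described above,
*> following the convention stated here (SIGMA and the loop index i are
*> hypothetical, introduced only for illustration; note that forming the
*> products may overflow in exactly the extreme cases the factored form
*> is designed to protect against):
*>
*>       INTEGER            i
*>       REAL               SIGMA( N )
*>       DO 10 i = 1, N
*>          SIGMA( i ) = ( WORK(1) / WORK(2) ) * SVA( i )
*>    10 CONTINUE
*>
*> When WORK(1) = WORK(2) this reduces to SIGMA(i) = SVA(i).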
227 *> U is REAL array, dimension ( LDU, N )
228 *> If JOBU = 'U', then U contains on exit the M-by-N matrix of
229 *> the left singular vectors.
230 *> If JOBU = 'F', then U contains on exit the M-by-M matrix of
231 *> the left singular vectors, including an ONB
232 *> of the orthogonal complement of the Range(A).
233 *> If JOBU = 'W' .AND. (JOBV.EQ.'V' .AND. JOBT.EQ.'T' .AND. M.EQ.N),
234 *> then U is used as workspace if the procedure
235 *> replaces A with A^t. In that case, [V] is computed
236 *> in U as left singular vectors of A^t and then
237 *> copied back to the V array. This 'W' option is just
238 *> a reminder to the caller that in this case U is
239 *> reserved as workspace of length N*N.
240 *> If JOBU = 'N' U is not referenced, unless JOBT='T'.
246 *> The leading dimension of the array U, LDU >= 1.
247 *> IF JOBU = 'U' or 'F' or 'W', then LDU >= M.
252 *> V is REAL array, dimension ( LDV, N )
253 *> If JOBV = 'V', 'J' then V contains on exit the N-by-N matrix of
254 *> the right singular vectors;
255 *> If JOBV = 'W', AND (JOBU.EQ.'U' AND JOBT.EQ.'T' AND M.EQ.N),
256 *> then V is used as workspace if the procedure
257 *> replaces A with A^t. In that case, [U] is computed
258 *> in V as right singular vectors of A^t and then
259 *> copied back to the U array. This 'W' option is just
260 *> a reminder to the caller that in this case V is
261 *> reserved as workspace of length N*N.
262 *> If JOBV = 'N' V is not referenced, unless JOBT='T'.
268 *> The leading dimension of the array V, LDV >= 1.
269 *> If JOBV = 'V' or 'J' or 'W', then LDV >= N.
274 *> WORK is REAL array, dimension at least LWORK.
276 *> WORK(1) = Determines, together with WORK(2), the scaling factor
277 *>           SCALE = WORK(1) / WORK(2) such that SCALE*SVA(1:N) are the computed singular values
278 *> of A. (See the description of SVA().)
279 *> WORK(2) = See the description of WORK(1).
280 *> WORK(3) = SCONDA is an estimate for the condition number of
281 *> column equilibrated A. (If JOBA .EQ. 'E' or 'G')
282 *> SCONDA is an estimate of SQRT(||(R^t * R)^(-1)||_1).
283 *> It is computed using SPOCON. It holds
284 *> N^(-1/4) * SCONDA <= ||R^(-1)||_2 <= N^(1/4) * SCONDA
285 *> where R is the triangular factor from the QRF of A.
286 *> However, if R is truncated and the numerical rank is
287 *> determined to be strictly smaller than N, SCONDA is
288 *> returned as -1, thus indicating that the smallest
289 *> singular values might be lost.
291 *> If full SVD is needed, the following two condition numbers are
292 *> useful for the analysis of the algorithm. They are provided for
293 *> a developer/implementer who is familiar with the details of the method.
296 *> WORK(4) = an estimate of the scaled condition number of the
297 *> triangular factor in the first QR factorization.
298 *> WORK(5) = an estimate of the scaled condition number of the
299 *> triangular factor in the second QR factorization.
300 *> The following two parameters are computed if JOBT .EQ. 'T'.
301 *> They are provided for a developer/implementer who is familiar
302 *> with the details of the method.
304 *> WORK(6) = the entropy of A^t*A :: this is the Shannon entropy
305 *> of diag(A^t*A) / Trace(A^t*A) taken as a point in the
306 *> probability simplex.
307 *> WORK(7) = the entropy of A*A^t.
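*>
*> In formula form (restating the description above), the two entropies are
*>
*>    WORK(6) = - sum_{i=1:N} p_i * log(p_i) / log(N),
*>              with p_i = (A^t * A)_(i,i) / Trace(A^t * A),
*>
*> and WORK(7) is defined analogously from diag(A * A^t), normalized by log(M).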
313 *> Length of WORK to confirm proper allocation of work space.
314 *> LWORK depends on the job:
316 *> If only SIGMA is needed ( JOBU.EQ.'N', JOBV.EQ.'N' ) and
317 *> -> .. no scaled condition estimate required (JOBA.NE.'E' .AND. JOBA.NE.'G'):
318 *> LWORK >= max(2*M+N,4*N+1,7). This is the minimal requirement.
319 *> ->> For optimal performance (blocked code) the optimal value
320 *> is LWORK >= max(2*M+N,3*N+(N+1)*NB,7). Here NB is the optimal
321 *> block size for SGEQP3 and SGEQRF.
322 *> In general, optimal LWORK is computed as
323 *> LWORK >= max(2*M+N,N+LWORK(SGEQP3),N+LWORK(SGEQRF), 7).
324 *> -> .. an estimate of the scaled condition number of A is
325 *> required (JOBA='E', 'G'). In this case, LWORK is the maximum
326 *> of the above and N*N+4*N, i.e. LWORK >= max(2*M+N,N*N+4*N,7).
327 *> ->> For optimal performance (blocked code) the optimal value
328 *> is LWORK >= max(2*M+N,3*N+(N+1)*NB, N*N+4*N, 7).
329 *> In general, the optimal length LWORK is computed as
330 *> LWORK >= max(2*M+N,N+LWORK(SGEQP3),N+LWORK(SGEQRF),
331 *> N+N*N+LWORK(SPOCON),7).
333 *> If SIGMA and the right singular vectors are needed (JOBV.EQ.'V'),
334 *> -> the minimal requirement is LWORK >= max(2*M+N,4*N+1,7).
335 *> -> For optimal performance, LWORK >= max(2*M+N,3*N+(N+1)*NB,7),
336 *> where NB is the optimal block size for SGEQP3, SGEQRF, SGELQF,
337 *> SORMLQ. In general, the optimal length LWORK is computed as
338 *> LWORK >= max(2*M+N,N+LWORK(SGEQP3), N+LWORK(SPOCON),
339 *> N+LWORK(SGELQF), 2*N+LWORK(SGEQRF), N+LWORK(SORMLQ)).
341 *> If SIGMA and the left singular vectors are needed
342 *> -> the minimal requirement is LWORK >= max(2*M+N,4*N+1,7).
343 *> -> For optimal performance:
344 *> if JOBU.EQ.'U' :: LWORK >= max(2*M+N,3*N+(N+1)*NB,7),
345 *> if JOBU.EQ.'F' :: LWORK >= max(2*M+N,3*N+(N+1)*NB,N+M*NB,7),
346 *> where NB is the optimal block size for SGEQP3, SGEQRF, SORMQR.
347 *> In general, the optimal length LWORK is computed as
348 *> LWORK >= max(2*M+N,N+LWORK(SGEQP3),N+LWORK(SPOCON),
349 *> 2*N+LWORK(SGEQRF), N+LWORK(SORMQR)).
350 *> Here LWORK(SORMQR) equals N*NB (for JOBU.EQ.'U') or
351 *> M*NB (for JOBU.EQ.'F').
353 *> If the full SVD is needed: (JOBU.EQ.'U' or JOBU.EQ.'F') and
355 *> -> if JOBV.EQ.'V', the minimal requirement is LWORK >= max(2*M+N,6*N+2*N*N).
356 *> -> if JOBV.EQ.'J' the minimal requirement is
357 *> LWORK >= max(2*M+N, 4*N+N*N,2*N+N*N+6).
358 *> -> For optimal performance, LWORK should be additionally
359 *> larger than N+M*NB, where NB is the optimal block size for SORMQR.
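*>
*> A caller that does not want to track the individual cases above can use
*> a single conservative value that dominates all of the minimal
*> requirements listed (a sketch; optimal blocked performance may still
*> need more, see the NB-dependent formulas above):
*>
*>       LWORK = MAX( 7, 2*M+N, 4*N+1, N*N+4*N, 2*N+N*N+6, 6*N+2*N*N )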
365 *> IWORK is INTEGER array, dimension M+3*N.
367 *> IWORK(1) = the numerical rank determined after the initial
368 *> QR factorization with pivoting. See the descriptions of JOBA and JOBR.
370 *> IWORK(2) = the number of the computed nonzero singular values
371 *> IWORK(3) = if nonzero, a warning message:
372 *> If IWORK(3).EQ.1 then some of the column norms of A
373 *> were denormalized floats. The requested high accuracy
374 *> is not warranted by the data.
380 *> < 0 : if INFO = -i, then the i-th argument had an illegal value.
381 *> = 0 : successful exit;
382 *> > 0 : SGEJSV did not converge in the maximal allowed number
383 *> of sweeps. The computed values may be inaccurate.
389 *> \author Univ. of Tennessee
390 *> \author Univ. of California Berkeley
391 *> \author Univ. of Colorado Denver
396 *> \ingroup realGEsing
398 *> \par Further Details:
399 * =====================
403 *> SGEJSV implements a preconditioned Jacobi SVD algorithm. It uses SGEQP3,
404 *> SGEQRF, and SGELQF as preprocessors and preconditioners. Optionally, an
405 *> additional row pivoting can be used as a preprocessor, which in some
406 *> cases results in much higher accuracy. An example is matrix A with the
407 *> structure A = D1 * C * D2, where D1, D2 are arbitrarily ill-conditioned
408 *> diagonal matrices and C is a well-conditioned matrix. In that case, complete
409 *> pivoting in the first QR factorizations provides accuracy dependent on the
410 *> condition number of C, and independent of D1, D2. Such higher accuracy is
411 *> not completely understood theoretically, but it works well in practice.
412 *> Further, if A can be written as A = B*D, with well-conditioned B and some
413 *> diagonal D, then the high accuracy is guaranteed, both theoretically and
414 *> in software, independent of D. For more details see [1], [2].
415 *> The computational range for the singular values can be the full range
416 *> ( UNDERFLOW,OVERFLOW ), provided that the machine arithmetic and the BLAS
417 *> & LAPACK routines called by SGEJSV are implemented to work in that range.
418 *> If that is not the case, then the restriction for safe computation with
419 *> the singular values in the range of normalized IEEE numbers is that the
420 *> spectral condition number kappa(A)=sigma_max(A)/sigma_min(A) does not
421 *> overflow. This code (SGEJSV) is best used in this restricted range,
422 *> meaning that singular values of magnitude below ||A||_2 / SLAMCH('O') are
423 *> returned as zeros. See JOBR for details on this.
424 *> Further, this implementation is somewhat slower than the one described
425 *> in [1,2] due to replacement of some non-LAPACK components, and because
426 *> the choice of some tuning parameters in the iterative part (SGESVJ) is
427 *> left to the implementer on a particular machine.
428 *> The rank revealing QR factorization (in this code: SGEQP3) should be
429 *> implemented as in [3]. We have a new version of SGEQP3 under development
430 *> that is more robust than the current one in LAPACK, with a cleaner cut in
431 *> rank deficient cases. It will be available in the SIGMA library [4].
432 *> If M is much larger than N, it is obvious that the initial QRF with
433 *> column pivoting can be preprocessed by the QRF without pivoting. That
434 *> well known trick is not used in SGEJSV because in some cases heavy row
435 *> weighting can be treated with complete pivoting. The overhead in cases
436 *> M much larger than N is then only due to pivoting, but the benefits in
437 *> terms of accuracy have prevailed. The implementer/user can incorporate
438 *> this extra QRF step easily. The implementer can also improve data movement
439 *> (matrix transpose, matrix copy, matrix transposed copy) - this
440 *> implementation of SGEJSV uses only the simplest, naive data movement.
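*>
*> As an illustration of the matrix class discussed above, a test matrix of
*> the form A = D1 * C * D2 can be generated as in the following sketch
*> (D1, D2 and C are hypothetical arrays holding strongly varying diagonal
*> scalings and a well-conditioned factor, respectively); with JOBA = 'F'
*> or 'G', SGEJSV aims at high relative accuracy for such A (see JOBA):
*>
*>       DO 20 q = 1, N
*>          DO 10 p = 1, M
*>             A( p, q ) = D1( p ) * C( p, q ) * D2( q )
*>    10    CONTINUE
*>    20 CONTINUE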
443 *> \par Contributors:
446 *> Zlatko Drmac (Zagreb, Croatia) and Kresimir Veselic (Hagen, Germany)
453 *> [1] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm I.
454 *> SIAM J. Matrix Anal. Appl. Vol. 29, No. 4 (2008), pp. 1322-1342.
455 *> LAPACK Working note 169.
456 *> [2] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm II.
457 *> SIAM J. Matrix Anal. Appl. Vol. 29, No. 4 (2008), pp. 1343-1362.
458 *> LAPACK Working note 170.
459 *> [3] Z. Drmac and Z. Bujanovic: On the failure of rank-revealing QR
460 *> factorization software - a case study.
461 *> ACM Trans. Math. Softw. Vol. 35, No. 2 (2008), pp. 1-28.
462 *> LAPACK Working note 176.
463 *> [4] Z. Drmac: SIGMA - mathematical software library for accurate SVD, PSV,
464 *> QSVD, (H,K)-SVD computations.
465 *> Department of Mathematics, University of Zagreb, 2008.
468 *> \par Bugs, examples and comments:
469 * =================================
471 *> Please report all bugs and send interesting examples and/or comments to
472 *> drmac@math.hr. Thank you.
474 * =====================================================================
475 SUBROUTINE SGEJSV( JOBA, JOBU, JOBV, JOBR, JOBT, JOBP,
476 $ M, N, A, LDA, SVA, U, LDU, V, LDV,
477 $ WORK, LWORK, IWORK, INFO )
479 * -- LAPACK computational routine (version 3.6.1) --
480 * -- LAPACK is a software package provided by Univ. of Tennessee, --
481 * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd. --
484 * .. Scalar Arguments ..
486 INTEGER INFO, LDA, LDU, LDV, LWORK, M, N
488 * .. Array Arguments ..
489 REAL A( LDA, * ), SVA( N ), U( LDU, * ), V( LDV, * ),
492 CHARACTER*1 JOBA, JOBP, JOBR, JOBT, JOBU, JOBV
495 * ===========================================================================
497 * .. Local Parameters ..
499 PARAMETER ( ZERO = 0.0E0, ONE = 1.0E0 )
501 * .. Local Scalars ..
502 REAL AAPP, AAQQ, AATMAX, AATMIN, BIG, BIG1, COND_OK,
503 $ CONDR1, CONDR2, ENTRA, ENTRAT, EPSLN, MAXPRJ, SCALEM,
504 $ SCONDA, SFMIN, SMALL, TEMP1, USCAL1, USCAL2, XSC
505 INTEGER IERR, N1, NR, NUMRANK, p, q, WARNING
506 LOGICAL ALMORT, DEFR, ERREST, GOSCAL, JRACC, KILL, LSVEC,
507 $ L2ABER, L2KILL, L2PERT, L2RANK, L2TRAN,
508 $ NOSCAL, ROWPIV, RSVEC, TRANSP
510 * .. Intrinsic Functions ..
511 INTRINSIC ABS, ALOG, MAX, MIN, FLOAT, NINT, SIGN, SQRT
513 * .. External Functions ..
517 EXTERNAL ISAMAX, LSAME, SLAMCH, SNRM2
519 * .. External Subroutines ..
520 EXTERNAL SCOPY, SGELQF, SGEQP3, SGEQRF, SLACPY, SLASCL,
521 $ SLASET, SLASSQ, SLASWP, SORGQR, SORMLQ,
522 $ SORMQR, SPOCON, SSCAL, SSWAP, STRSM, XERBLA
527 * Test the input arguments
529 LSVEC = LSAME( JOBU, 'U' ) .OR. LSAME( JOBU, 'F' )
530 JRACC = LSAME( JOBV, 'J' )
531 RSVEC = LSAME( JOBV, 'V' ) .OR. JRACC
532 ROWPIV = LSAME( JOBA, 'F' ) .OR. LSAME( JOBA, 'G' )
533 L2RANK = LSAME( JOBA, 'R' )
534 L2ABER = LSAME( JOBA, 'A' )
535 ERREST = LSAME( JOBA, 'E' ) .OR. LSAME( JOBA, 'G' )
536 L2TRAN = LSAME( JOBT, 'T' )
537 L2KILL = LSAME( JOBR, 'R' )
538 DEFR = LSAME( JOBR, 'N' )
539 L2PERT = LSAME( JOBP, 'P' )
541 IF ( .NOT.(ROWPIV .OR. L2RANK .OR. L2ABER .OR.
542 $ ERREST .OR. LSAME( JOBA, 'C' ) )) THEN
544 ELSE IF ( .NOT.( LSVEC .OR. LSAME( JOBU, 'N' ) .OR.
545 $ LSAME( JOBU, 'W' )) ) THEN
547 ELSE IF ( .NOT.( RSVEC .OR. LSAME( JOBV, 'N' ) .OR.
548 $ LSAME( JOBV, 'W' )) .OR. ( JRACC .AND. (.NOT.LSVEC) ) ) THEN
550 ELSE IF ( .NOT. ( L2KILL .OR. DEFR ) ) THEN
552 ELSE IF ( .NOT. ( L2TRAN .OR. LSAME( JOBT, 'N' ) ) ) THEN
554 ELSE IF ( .NOT. ( L2PERT .OR. LSAME( JOBP, 'N' ) ) ) THEN
556 ELSE IF ( M .LT. 0 ) THEN
558 ELSE IF ( ( N .LT. 0 ) .OR. ( N .GT. M ) ) THEN
560 ELSE IF ( LDA .LT. M ) THEN
562 ELSE IF ( LSVEC .AND. ( LDU .LT. M ) ) THEN
564 ELSE IF ( RSVEC .AND. ( LDV .LT. N ) ) THEN
566 ELSE IF ( (.NOT.(LSVEC .OR. RSVEC .OR. ERREST).AND.
567 $ (LWORK .LT. MAX(7,4*N+1,2*M+N))) .OR.
568 $ (.NOT.(LSVEC .OR. RSVEC) .AND. ERREST .AND.
569 $ (LWORK .LT. MAX(7,4*N+N*N,2*M+N))) .OR.
570 $ (LSVEC .AND. (.NOT.RSVEC) .AND. (LWORK .LT. MAX(7,2*M+N,4*N+1)))
572 $ (RSVEC .AND. (.NOT.LSVEC) .AND. (LWORK .LT. MAX(7,2*M+N,4*N+1)))
574 $ (LSVEC .AND. RSVEC .AND. (.NOT.JRACC) .AND.
575 $ (LWORK.LT.MAX(2*M+N,6*N+2*N*N)))
576 $ .OR. (LSVEC .AND. RSVEC .AND. JRACC .AND.
577 $ LWORK.LT.MAX(2*M+N,4*N+N*N,2*N+N*N+6)))
585 IF ( INFO .NE. 0 ) THEN
587 CALL XERBLA( 'SGEJSV', - INFO )
591 * Quick return for void matrix (Y3K safe)
593 IF ( ( M .EQ. 0 ) .OR. ( N .EQ. 0 ) ) THEN
599 * Determine whether the matrix U should be M x N or M x M
603 IF ( LSAME( JOBU, 'F' ) ) N1 = M
606 * Set numerical parameters
608 *! NOTE: Make sure SLAMCH() does not fail on the target architecture.
610 EPSLN = SLAMCH('Epsilon')
611 SFMIN = SLAMCH('SafeMinimum')
612 SMALL = SFMIN / EPSLN
616 * Initialize SVA(1:N) = diag( ||A e_i||_2 )_1^N
618 *(!) If necessary, scale SVA() to protect the largest norm from
619 * overflow. It is possible that this scaling pushes the smallest
620 * column norm below the underflow threshold (extreme case).
622 SCALEM = ONE / SQRT(FLOAT(M)*FLOAT(N))
628 CALL SLASSQ( M, A(1,p), 1, AAPP, AAQQ )
629 IF ( AAPP .GT. BIG ) THEN
631 CALL XERBLA( 'SGEJSV', -INFO )
635 IF ( ( AAPP .LT. (BIG / AAQQ) ) .AND. NOSCAL ) THEN
639 SVA(p) = AAPP * ( AAQQ * SCALEM )
642 CALL SSCAL( p-1, SCALEM, SVA, 1 )
647 IF ( NOSCAL ) SCALEM = ONE
652 AAPP = MAX( AAPP, SVA(p) )
653 IF ( SVA(p) .NE. ZERO ) AAQQ = MIN( AAQQ, SVA(p) )
656 * Quick return for zero M x N matrix
658 IF ( AAPP .EQ. ZERO ) THEN
659 IF ( LSVEC ) CALL SLASET( 'G', M, N1, ZERO, ONE, U, LDU )
660 IF ( RSVEC ) CALL SLASET( 'G', N, N, ZERO, ONE, V, LDV )
663 IF ( ERREST ) WORK(3) = ONE
664 IF ( LSVEC .AND. RSVEC ) THEN
678 * Issue a warning if denormalized column norms are detected. Override the
679 * high relative accuracy request. Issue licence to kill columns
680 * (set them to zero) whose norm is less than sigma_max / BIG (roughly).
683 IF ( AAQQ .LE. SFMIN ) THEN
689 * Quick return for one-column matrix
694 CALL SLASCL( 'G',0,0,SVA(1),SCALEM, M,1,A(1,1),LDA,IERR )
695 CALL SLACPY( 'A', M, 1, A, LDA, U, LDU )
696 * computing all M left singular vectors of the M x 1 matrix
697 IF ( N1 .NE. N ) THEN
698 CALL SGEQRF( M, N, U,LDU, WORK, WORK(N+1),LWORK-N,IERR )
699 CALL SORGQR( M,N1,1, U,LDU,WORK,WORK(N+1),LWORK-N,IERR )
700 CALL SCOPY( M, A(1,1), 1, U(1,1), 1 )
706 IF ( SVA(1) .LT. (BIG*SCALEM) ) THEN
707 SVA(1) = SVA(1) / SCALEM
710 WORK(1) = ONE / SCALEM
712 IF ( SVA(1) .NE. ZERO ) THEN
714 IF ( ( SVA(1) / SCALEM) .GE. SFMIN ) THEN
724 IF ( ERREST ) WORK(3) = ONE
725 IF ( LSVEC .AND. RSVEC ) THEN
738 L2TRAN = L2TRAN .AND. ( M .EQ. N )
742 IF ( ROWPIV .OR. L2TRAN ) THEN
744 * Compute the row norms, needed to determine row pivoting sequence
745 * (in the case of heavily row weighted A, row pivoting is strongly
746 * advised) and to collect information needed to compare the
747 * structures of A * A^t and A^t * A (in the case L2TRAN.EQ..TRUE.).
753 CALL SLASSQ( N, A(p,1), LDA, XSC, TEMP1 )
754 * SLASSQ gets both the ell_2 and the ell_infinity norm
755 * in one pass through the vector
756 WORK(M+N+p) = XSC * SCALEM
757 WORK(N+p) = XSC * (SCALEM*SQRT(TEMP1))
758 AATMAX = MAX( AATMAX, WORK(N+p) )
759 IF (WORK(N+p) .NE. ZERO) AATMIN = MIN(AATMIN,WORK(N+p))
763 WORK(M+N+p) = SCALEM*ABS( A(p,ISAMAX(N,A(p,1),LDA)) )
764 AATMAX = MAX( AATMAX, WORK(M+N+p) )
765 AATMIN = MIN( AATMIN, WORK(M+N+p) )
771 * For a square matrix A, try to determine whether A^t would be a better
772 * input for the preconditioned Jacobi SVD, with faster convergence.
773 * The decision is based on an O(N) function of the vector of column
774 * and row norms of A, based on the Shannon entropy. This should give
775 * the right choice in most cases when the difference actually matters.
776 * It may fail and pick the slower converging side.
784 CALL SLASSQ( N, SVA, 1, XSC, TEMP1 )
789 BIG1 = ( ( SVA(p) / XSC )**2 ) * TEMP1
790 IF ( BIG1 .NE. ZERO ) ENTRA = ENTRA + BIG1 * ALOG(BIG1)
792 ENTRA = - ENTRA / ALOG(FLOAT(N))
794 * Now, SVA().^2/Trace(A^t * A) is a point in the probability simplex.
795 * It is derived from the diagonal of A^t * A. Do the same with the
796 * diagonal of A * A^t, compute the entropy of the corresponding
797 * probability distribution. Note that A * A^t and A^t * A have the same trace.
802 BIG1 = ( ( WORK(p) / XSC )**2 ) * TEMP1
803 IF ( BIG1 .NE. ZERO ) ENTRAT = ENTRAT + BIG1 * ALOG(BIG1)
805 ENTRAT = - ENTRAT / ALOG(FLOAT(M))
807 * Analyze the entropies and decide A or A^t. Smaller entropy
808 * usually means better input for the algorithm.
810 TRANSP = ( ENTRAT .LT. ENTRA )
812 * If A^t is better than A, transpose A.
815 * In an optimal implementation, this trivial transpose
816 * should be replaced with a faster transpose.
845 * Scale the matrix so that its maximal singular value remains less
846 * than SQRT(BIG) -- the matrix is scaled so that its maximal column
847 * has Euclidean norm equal to SQRT(BIG/N). The only reason to keep
848 * SQRT(BIG) instead of BIG is the fact that SGEJSV uses LAPACK and
849 * BLAS routines that, in some implementations, are not capable of
850 * working in the full interval [SFMIN,BIG] and that they may provoke
851 * overflows in the intermediate results. If the singular values spread
852 * from SFMIN to BIG, then SGESVJ will compute them. So, in that case,
853 * one should use SGESVJ instead of SGEJSV.
856 TEMP1 = SQRT( BIG / FLOAT(N) )
858 CALL SLASCL( 'G', 0, 0, AAPP, TEMP1, N, 1, SVA, N, IERR )
859 IF ( AAQQ .GT. (AAPP * SFMIN) ) THEN
860 AAQQ = ( AAQQ / AAPP ) * TEMP1
862 AAQQ = ( AAQQ * TEMP1 ) / AAPP
864 TEMP1 = TEMP1 * SCALEM
865 CALL SLASCL( 'G', 0, 0, AAPP, TEMP1, M, N, A, LDA, IERR )
867 * To undo scaling at the end of this procedure, multiply the
868 * computed singular values with USCAL2 / USCAL1.
874 * L2KILL enforces computation of nonzero singular values in
875 * the restricted range of condition number of the initial A,
876 * sigma_max(A) / sigma_min(A) approx. SQRT(BIG)/SQRT(SFMIN).
881 * Now, if the condition number of A is too big,
882 * sigma_max(A) / sigma_min(A) .GT. SQRT(BIG/N) * EPSLN / SFMIN,
883 * as a precautionary measure, the full SVD is computed using SGESVJ
884 * with accumulated Jacobi rotations. This provides numerically
885 * more robust computation, at the cost of slightly increased run
886 * time. Depending on the concrete implementation of BLAS and LAPACK
887 * (i.e. how they behave in the presence of extreme ill-conditioning) the
888 * implementor may decide to remove this switch.
889 IF ( ( AAQQ.LT.SQRT(SFMIN) ) .AND. LSVEC .AND. RSVEC ) THEN
894 IF ( AAQQ .LT. XSC ) THEN
896 IF ( SVA(p) .LT. XSC ) THEN
897 CALL SLASET( 'A', M, 1, ZERO, ZERO, A(1,p), LDA )
903 * Preconditioning using QR factorization with pivoting
906 * Optional row permutation (Bjoerck row pivoting):
907 * A result by Cox and Higham shows that Bjoerck's
908 * row pivoting combined with standard column pivoting
909 * has a similar effect to Powell-Reid complete pivoting.
910 * The ell-infinity norms of A are made nonincreasing.
912 q = ISAMAX( M-p+1, WORK(M+N+p), 1 ) + p - 1
916 WORK(M+N+p) = WORK(M+N+q)
920 CALL SLASWP( N, A, LDA, 1, M-1, IWORK(2*N+1), 1 )
923 * End of the preparation phase (scaling, optional sorting and
924 * transposing, optional flushing of small columns).
928 * If the full SVD is needed, the right singular vectors are computed
929 * from a matrix equation, and for that we need theoretical analysis
930 * of the Businger-Golub pivoting. So we use SGEQP3 as the first RR QRF.
931 * In all other cases the first RR QRF can be chosen by other criteria
932 * (e.g. speed, by replacing global with restricted window pivoting, such
933 * as in SGEQPX from TOMS # 782). Good results will be obtained using
934 * SGEQPX with properly (!) chosen numerical parameters.
935 * Any improvement of SGEQP3 improves the overall performance of SGEJSV.
937 * A * P1 = Q1 * [ R1^t 0]^t:
939 * .. all columns are free columns
942 CALL SGEQP3( M,N,A,LDA, IWORK,WORK, WORK(N+1),LWORK-N, IERR )
944 * The upper triangular matrix R1 from the first QRF is inspected for
945 * rank deficiency and possibilities for deflation, or possible
946 * ill-conditioning. Depending on the user specified flag L2RANK,
947 * the procedure explores possibilities to reduce the numerical
948 * rank by inspecting the computed upper triangular factor. If
949 * L2RANK or L2ABER are up, then SGEJSV will compute the SVD of
950 * A + dA, where ||dA|| <= f(M,N)*EPSLN.
954 * Standard absolute error bound suffices. All sigma_i with
955 * sigma_i < N*EPSLN*||A|| are flushed to zero. This is an
956 * aggressive enforcement of lower numerical rank by introducing a
957 * backward error of the order of N*EPSLN*||A||.
958 TEMP1 = SQRT(FLOAT(N))*EPSLN
960 IF ( ABS(A(p,p)) .GE. (TEMP1*ABS(A(1,1))) ) THEN
967 ELSE IF ( L2RANK ) THEN
968 * .. similar to the above, only slightly more gentle (less aggressive).
969 * A sudden drop on the diagonal of R1 is used as the criterion for
970 * closeness to rank deficiency.
973 IF ( ( ABS(A(p,p)) .LT. (EPSLN*ABS(A(p-1,p-1))) ) .OR.
974 $ ( ABS(A(p,p)) .LT. SMALL ) .OR.
975 $ ( L2KILL .AND. (ABS(A(p,p)) .LT. TEMP1) ) ) GO TO 3402
981 * The goal is high relative accuracy. However, if the matrix
982 * has a high scaled condition number, the relative accuracy is in
983 * general not feasible. Later on, a condition number estimator
984 * will be deployed to estimate the scaled condition number.
985 * Here we just remove the underflowed part of the triangular
986 * factor. This prevents the situation in which the code is
987 * working hard to get the accuracy not warranted by the data.
990 IF ( ( ABS(A(p,p)) .LT. SMALL ) .OR.
991 $ ( L2KILL .AND. (ABS(A(p,p)) .LT. TEMP1) ) ) GO TO 3302
999 IF ( NR .EQ. N ) THEN
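*           .. MAXPRJ is the smallest of the ratios |R1(p,p)| / SVA(IWORK(p)),
*           i.e. the diagonal of the triangular factor over the norm of the
*           corresponding pivoted column; if all ratios are close to ONE,
*           the columns of A are nearly orthogonal and ALMORT is set.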
1002 TEMP1 = ABS(A(p,p)) / SVA(IWORK(p))
1003 MAXPRJ = MIN( MAXPRJ, TEMP1 )
1005 IF ( MAXPRJ**2 .GE. ONE - FLOAT(N)*EPSLN ) ALMORT = .TRUE.
1014 IF ( N .EQ. NR ) THEN
1016 * .. V is available as workspace
1017 CALL SLACPY( 'U', N, N, A, LDA, V, LDV )
1019 TEMP1 = SVA(IWORK(p))
1020 CALL SSCAL( p, ONE/TEMP1, V(1,p), 1 )
1022 CALL SPOCON( 'U', N, V, LDV, ONE, TEMP1,
1023 $ WORK(N+1), IWORK(2*N+M+1), IERR )
1024 ELSE IF ( LSVEC ) THEN
1025 * .. U is available as workspace
1026 CALL SLACPY( 'U', N, N, A, LDA, U, LDU )
1028 TEMP1 = SVA(IWORK(p))
1029 CALL SSCAL( p, ONE/TEMP1, U(1,p), 1 )
1031 CALL SPOCON( 'U', N, U, LDU, ONE, TEMP1,
1032 $ WORK(N+1), IWORK(2*N+M+1), IERR )
1034 CALL SLACPY( 'U', N, N, A, LDA, WORK(N+1), N )
1036 TEMP1 = SVA(IWORK(p))
1037 CALL SSCAL( p, ONE/TEMP1, WORK(N+(p-1)*N+1), 1 )
1039 * .. the columns of R are scaled to have unit Euclidean lengths.
1040 CALL SPOCON( 'U', N, WORK(N+1), N, ONE, TEMP1,
1041 $ WORK(N+N*N+1), IWORK(2*N+M+1), IERR )
1043 SCONDA = ONE / SQRT(TEMP1)
1044 * SCONDA is an estimate of SQRT(||(R^t * R)^(-1)||_1).
1045 * N^(-1/4) * SCONDA <= ||R^(-1)||_2 <= N^(1/4) * SCONDA
1051 L2PERT = L2PERT .AND. ( ABS( A(1,1)/A(NR,NR) ) .GT. SQRT(BIG1) )
1052 * If there is no violent scaling, artificial perturbation is not needed.
1056 IF ( .NOT. ( RSVEC .OR. LSVEC ) ) THEN
1058 * Singular Values only
1060 * .. transpose A(1:NR,1:N)
1061 DO 1946 p = 1, MIN( N-1, NR )
1062 CALL SCOPY( N-p, A(p,p+1), LDA, A(p+1,p), 1 )
1065 * The following two DO-loops introduce small relative perturbation
1066 * into the strict upper triangle of the lower triangular matrix.
1067 * Small entries below the main diagonal are also changed.
1068 * This modification is useful if the computing environment does not
1069 * provide/allow FLUSH TO ZERO underflow, for it prevents many
1070 * annoying denormalized numbers in case of strongly scaled matrices.
1071 * The perturbation is structured so that it does not introduce any
1072 * new perturbation of the singular values, and it does not destroy
1073 * the job done by the preconditioner.
1074 * The licence for this perturbation is in the variable L2PERT, which
1075 * should be .FALSE. if FLUSH TO ZERO underflow is active.
1077 IF ( .NOT. ALMORT ) THEN
1081 XSC = EPSLN / FLOAT(N)
1083 TEMP1 = XSC*ABS(A(q,q))
1085 IF ( ( (p.GT.q) .AND. (ABS(A(p,q)).LE.TEMP1) )
1086 $ .OR. ( p .LT. q ) )
1087 $ A(p,q) = SIGN( TEMP1, A(p,q) )
1091 CALL SLASET( 'U', NR-1,NR-1, ZERO,ZERO, A(1,2),LDA )
1094 * .. second preconditioning using the QR factorization
1096 CALL SGEQRF( N,NR, A,LDA, WORK, WORK(N+1),LWORK-N, IERR )
1098 * .. and transpose upper to lower triangular
1099 DO 1948 p = 1, NR - 1
1100 CALL SCOPY( NR-p, A(p,p+1), LDA, A(p+1,p), 1 )
1105 * Row-cyclic Jacobi SVD algorithm with column pivoting
1107 * .. again some perturbation (a "background noise") is added
1108 * to drown denormals
1111 XSC = EPSLN / FLOAT(N)
1113 TEMP1 = XSC*ABS(A(q,q))
1115 IF ( ( (p.GT.q) .AND. (ABS(A(p,q)).LE.TEMP1) )
1116 $ .OR. ( p .LT. q ) )
1117 $ A(p,q) = SIGN( TEMP1, A(p,q) )
1121 CALL SLASET( 'U', NR-1, NR-1, ZERO, ZERO, A(1,2), LDA )
1124 * .. and one-sided Jacobi rotations are started on a lower
1125 * triangular matrix (plus perturbation which is ignored in
1126 * the part which destroys triangular form (confusing?!))
1128 CALL SGESVJ( 'L', 'NoU', 'NoV', NR, NR, A, LDA, SVA,
1129 $ N, V, LDV, WORK, LWORK, INFO )
1132 NUMRANK = NINT(WORK(2))
1135 ELSE IF ( RSVEC .AND. ( .NOT. LSVEC ) ) THEN
1137 * -> Singular Values and Right Singular Vectors <-
1141 * .. in this case NR equals N
1143 CALL SCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
1145 CALL SLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
1147 CALL SGESVJ( 'L','U','N', N, NR, V,LDV, SVA, NR, A,LDA,
1148 $ WORK, LWORK, INFO )
1150 NUMRANK = NINT(WORK(2))
1154 * .. two more QR factorizations ( one QRF is not enough, two require
1155 * accumulated product of Jacobi rotations, three are perfect )
1157 CALL SLASET( 'Lower', NR-1, NR-1, ZERO, ZERO, A(2,1), LDA )
1158 CALL SGELQF( NR, N, A, LDA, WORK, WORK(N+1), LWORK-N, IERR)
1159 CALL SLACPY( 'Lower', NR, NR, A, LDA, V, LDV )
1160 CALL SLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
1161 CALL SGEQRF( NR, NR, V, LDV, WORK(N+1), WORK(2*N+1),
1164 CALL SCOPY( NR-p+1, V(p,p), LDV, V(p,p), 1 )
1166 CALL SLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
1168 CALL SGESVJ( 'Lower', 'U','N', NR, NR, V,LDV, SVA, NR, U,
1169 $ LDU, WORK(N+1), LWORK-N, INFO )
1171 NUMRANK = NINT(WORK(N+2))
1172 IF ( NR .LT. N ) THEN
1173 CALL SLASET( 'A',N-NR, NR, ZERO,ZERO, V(NR+1,1), LDV )
1174 CALL SLASET( 'A',NR, N-NR, ZERO,ZERO, V(1,NR+1), LDV )
1175 CALL SLASET( 'A',N-NR,N-NR,ZERO,ONE, V(NR+1,NR+1), LDV )
1178 CALL SORMLQ( 'Left', 'Transpose', N, N, NR, A, LDA, WORK,
1179 $ V, LDV, WORK(N+1), LWORK-N, IERR )
1184 CALL SCOPY( N, V(p,1), LDV, A(IWORK(p),1), LDA )
1186 CALL SLACPY( 'All', N, N, A, LDA, V, LDV )
1189 CALL SLACPY( 'All', N, N, V, LDV, U, LDU )
1192 ELSE IF ( LSVEC .AND. ( .NOT. RSVEC ) ) THEN
1194 * .. Singular Values and Left Singular Vectors ..
1196 * .. second preconditioning step to avoid the need to accumulate
1197 * Jacobi rotations in the Jacobi iterations.
1199 CALL SCOPY( N-p+1, A(p,p), LDA, U(p,p), 1 )
1201 CALL SLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, U(1,2), LDU )
1203 CALL SGEQRF( N, NR, U, LDU, WORK(N+1), WORK(2*N+1),
1206 DO 1967 p = 1, NR - 1
1207 CALL SCOPY( NR-p, U(p,p+1), LDU, U(p+1,p), 1 )
1209 CALL SLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, U(1,2), LDU )
1211 CALL SGESVJ( 'Lower', 'U', 'N', NR,NR, U, LDU, SVA, NR, A,
1212 $ LDA, WORK(N+1), LWORK-N, INFO )
1214 NUMRANK = NINT(WORK(N+2))
1216 IF ( NR .LT. M ) THEN
1217 CALL SLASET( 'A', M-NR, NR,ZERO, ZERO, U(NR+1,1), LDU )
1218 IF ( NR .LT. N1 ) THEN
1219 CALL SLASET( 'A',NR, N1-NR, ZERO, ZERO, U(1,NR+1), LDU )
1220 CALL SLASET( 'A',M-NR,N1-NR,ZERO,ONE,U(NR+1,NR+1), LDU )
1224 CALL SORMQR( 'Left', 'No Tr', M, N1, N, A, LDA, WORK, U,
1225 $ LDU, WORK(N+1), LWORK-N, IERR )
1228 $ CALL SLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
1231 XSC = ONE / SNRM2( M, U(1,p), 1 )
1232 CALL SSCAL( M, XSC, U(1,p), 1 )
1236 CALL SLACPY( 'All', N, N, U, LDU, V, LDV )
1243 IF ( .NOT. JRACC ) THEN
1245 IF ( .NOT. ALMORT ) THEN
1247 * Second Preconditioning Step (QRF [with pivoting])
1248 * Note that the composition of TRANSPOSE, QRF and TRANSPOSE is
1249 * equivalent to an LQF CALL. Since in many libraries the QRF
1250 * seems to be better optimized than the LQF, we do an explicit
1251 * transpose and use the QRF. This is subject to changes in an
1252 * optimized implementation of SGEJSV.
1255 CALL SCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
1258 * .. the following two loops perturb small entries to avoid
1259 * denormals in the second QR factorization, where they are
1260 * as good as zeros. This is done to avoid painfully slow
1261 * computation with denormals. The relative size of the perturbation
1262 * is a parameter that can be changed by the implementer.
1263 * This perturbation device will be obsolete on machines with
1264 * properly implemented arithmetic.
1265 * To switch it off, set L2PERT=.FALSE. To remove it from the
1266 * code, remove the action under L2PERT=.TRUE., leave the ELSE part.
1267 * The following two loops should be blocked and fused with the
1268 * transposed copy above.
1273 TEMP1 = XSC*ABS( V(q,q) )
1275 IF ( ( p .GT. q ) .AND. ( ABS(V(p,q)) .LE. TEMP1 )
1276 $ .OR. ( p .LT. q ) )
1277 $ V(p,q) = SIGN( TEMP1, V(p,q) )
1278 IF ( p .LT. q ) V(p,q) = - V(p,q)
1282 CALL SLASET( 'U', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
1285 * Estimate the row scaled condition number of R1
1286 * (If R1 is rectangular, N > NR, then the condition number
1287 * of the leading NR x NR submatrix is estimated.)
1289 CALL SLACPY( 'L', NR, NR, V, LDV, WORK(2*N+1), NR )
1291 TEMP1 = SNRM2(NR-p+1,WORK(2*N+(p-1)*NR+p),1)
1292 CALL SSCAL(NR-p+1,ONE/TEMP1,WORK(2*N+(p-1)*NR+p),1)
1294 CALL SPOCON('Lower',NR,WORK(2*N+1),NR,ONE,TEMP1,
1295 $ WORK(2*N+NR*NR+1),IWORK(M+2*N+1),IERR)
1296 CONDR1 = ONE / SQRT(TEMP1)
1297 * .. here one would need a second opinion on the condition number;
1298 * .. for now, assume the worst-case scenario:
1299 * R1 is OK for inverse <=> CONDR1 .LT. FLOAT(N)
1300 * more conservative <=> CONDR1 .LT. SQRT(FLOAT(N))
1302 COND_OK = SQRT(FLOAT(NR))
1303 *[TP] COND_OK is a tuning parameter.
1305 IF ( CONDR1 .LT. COND_OK ) THEN
1306 * .. the second QRF without pivoting. Note: in an optimized
1307 * implementation, this QRF should be implemented as the QRF
1308 * of a lower triangular matrix.
1310 CALL SGEQRF( N, NR, V, LDV, WORK(N+1), WORK(2*N+1),
1314 XSC = SQRT(SMALL)/EPSLN
1316 DO 3958 q = 1, p - 1
1317 TEMP1 = XSC * MIN(ABS(V(p,p)),ABS(V(q,q)))
1318 IF ( ABS(V(q,p)) .LE. TEMP1 )
1319 $ V(q,p) = SIGN( TEMP1, V(q,p) )
1325 $ CALL SLACPY( 'A', N, NR, V, LDV, WORK(2*N+1), N )
1328 * .. this transposed copy should be better than naive
1329 DO 1969 p = 1, NR - 1
1330 CALL SCOPY( NR-p, V(p,p+1), LDV, V(p+1,p), 1 )
1337 * .. ill-conditioned case: second QRF with pivoting
1338 * Note that windowed pivoting would be equally good
1339 * numerically, and more run-time efficient. So, in
1340 * an optimal implementation, the next call to SGEQP3
1341 * should be replaced with e.g. CALL SGEQPX (ACM TOMS #782)
1342 * with properly (carefully) chosen parameters.
1344 * R1^t * P2 = Q2 * R2
1348 CALL SGEQP3( N, NR, V, LDV, IWORK(N+1), WORK(N+1),
1349 $ WORK(2*N+1), LWORK-2*N, IERR )
1350 ** CALL SGEQRF( N, NR, V, LDV, WORK(N+1), WORK(2*N+1),
1351 ** $ LWORK-2*N, IERR )
1355 DO 3968 q = 1, p - 1
1356 TEMP1 = XSC * MIN(ABS(V(p,p)),ABS(V(q,q)))
1357 IF ( ABS(V(q,p)) .LE. TEMP1 )
1358 $ V(q,p) = SIGN( TEMP1, V(q,p) )
1363 CALL SLACPY( 'A', N, NR, V, LDV, WORK(2*N+1), N )
1368 DO 8971 q = 1, p - 1
1369 TEMP1 = XSC * MIN(ABS(V(p,p)),ABS(V(q,q)))
1370 V(p,q) = - SIGN( TEMP1, V(q,p) )
1374 CALL SLASET( 'L',NR-1,NR-1,ZERO,ZERO,V(2,1),LDV )
1376 * Now, compute R2 = L3 * Q3, the LQ factorization.
1377 CALL SGELQF( NR, NR, V, LDV, WORK(2*N+N*NR+1),
1378 $ WORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR, IERR )
1379 * .. and estimate the condition number
1380 CALL SLACPY( 'L',NR,NR,V,LDV,WORK(2*N+N*NR+NR+1),NR )
1382 TEMP1 = SNRM2( p, WORK(2*N+N*NR+NR+p), NR )
1383 CALL SSCAL( p, ONE/TEMP1, WORK(2*N+N*NR+NR+p), NR )
1385 CALL SPOCON( 'L',NR,WORK(2*N+N*NR+NR+1),NR,ONE,TEMP1,
1386 $ WORK(2*N+N*NR+NR+NR*NR+1),IWORK(M+2*N+1),IERR )
1387 CONDR2 = ONE / SQRT(TEMP1)
1389 IF ( CONDR2 .GE. COND_OK ) THEN
1390 * .. save the Householder vectors used for Q3
1391 * (this overwrites the copy of R2, as it will not be
1392 * needed in this branch, but it does not overwrite the
1393 * Householder vectors of Q2).
1394 CALL SLACPY( 'U', NR, NR, V, LDV, WORK(2*N+1), N )
1395 * .. and the rest of the information on Q3 is in
1396 * WORK(2*N+N*NR+1:2*N+N*NR+N)
1404 TEMP1 = XSC * V(q,q)
1405 DO 4969 p = 1, q - 1
1406 * V(p,q) = - SIGN( TEMP1, V(q,p) )
1407 V(p,q) = - SIGN( TEMP1, V(p,q) )
1411 CALL SLASET( 'U', NR-1,NR-1, ZERO,ZERO, V(1,2), LDV )
1414 * Second preconditioning finished; continue with Jacobi SVD
1415 * The input matrix is lower triangular.
1417 * Recover the right singular vectors as solution of a well
1418 * conditioned triangular matrix equation.
1420 IF ( CONDR1 .LT. COND_OK ) THEN
1422 CALL SGESVJ( 'L','U','N',NR,NR,V,LDV,SVA,NR,U,
1423 $ LDU,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,INFO )
1424 SCALEM = WORK(2*N+N*NR+NR+1)
1425 NUMRANK = NINT(WORK(2*N+N*NR+NR+2))
1427 CALL SCOPY( NR, V(1,p), 1, U(1,p), 1 )
1428 CALL SSCAL( NR, SVA(p), V(1,p), 1 )
1431 * .. pick the right matrix equation and solve it
1433 IF ( NR .EQ. N ) THEN
1434 * :)) .. best case, R1 is inverted. The solution of this matrix
1435 * equation is Q2*V2 = the product of the Jacobi rotations
1436 * used in SGESVJ, premultiplied with the orthogonal matrix
1437 * from the second QR factorization.
1438 CALL STRSM( 'L','U','N','N', NR,NR,ONE, A,LDA, V,LDV )
1440 * .. R1 is well conditioned, but non-square. Transpose(R2)
1441 * is inverted to get the product of the Jacobi rotations
1442 * used in SGESVJ. The Q-factor from the second QR
1443 * factorization is then built in explicitly.
1444 CALL STRSM('L','U','T','N',NR,NR,ONE,WORK(2*N+1),
1446 IF ( NR .LT. N ) THEN
1447 CALL SLASET('A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV)
1448 CALL SLASET('A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV)
1449 CALL SLASET('A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV)
1451 CALL SORMQR('L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
1452 $ V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR)
1455 ELSE IF ( CONDR2 .LT. COND_OK ) THEN
1457 * :) .. the input matrix A is very likely a relative of
1458 * the Kahan matrix :)
1459 * The matrix R2 is inverted. The solution of the matrix equation
1460 * is Q3^T*V3 = the product of the Jacobi rotations (applied to
1461 * the lower triangular L3 from the LQ factorization of
1462 * R2=L3*Q3), pre-multiplied with the transposed Q3.
1463 CALL SGESVJ( 'L', 'U', 'N', NR, NR, V, LDV, SVA, NR, U,
1464 $ LDU, WORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR, INFO )
1465 SCALEM = WORK(2*N+N*NR+NR+1)
1466 NUMRANK = NINT(WORK(2*N+N*NR+NR+2))
1468 CALL SCOPY( NR, V(1,p), 1, U(1,p), 1 )
1469 CALL SSCAL( NR, SVA(p), U(1,p), 1 )
1471 CALL STRSM('L','U','N','N',NR,NR,ONE,WORK(2*N+1),N,U,LDU)
1472 * .. apply the permutation from the second QR factorization
1475 WORK(2*N+N*NR+NR+IWORK(N+p)) = U(p,q)
1478 U(p,q) = WORK(2*N+N*NR+NR+p)
1481 IF ( NR .LT. N ) THEN
1482 CALL SLASET( 'A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV )
1483 CALL SLASET( 'A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV )
1484 CALL SLASET( 'A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV )
1486 CALL SORMQR( 'L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
1487 $ V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
1489 * Last line of defense.
1490 * #:( This is a rather pathological case: no scaled condition
1491 * improvement after two pivoted QR factorizations. Another
1492 * possibility is that the rank revealing QR factorization
1493 * or the condition estimator has failed, or the COND_OK
1494 * is set very close to ONE (which is unnecessary). Normally,
1495 * this branch should never be executed, but in rare cases of
1496 * failure of the RRQR or condition estimator, the last line of
1497 * defense ensures that SGEJSV completes the task.
1498 * Compute the full SVD of L3 using SGESVJ with explicit
1499 * accumulation of Jacobi rotations.
1500 CALL SGESVJ( 'L', 'U', 'V', NR, NR, V, LDV, SVA, NR, U,
1501 $ LDU, WORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR, INFO )
1502 SCALEM = WORK(2*N+N*NR+NR+1)
1503 NUMRANK = NINT(WORK(2*N+N*NR+NR+2))
1504 IF ( NR .LT. N ) THEN
1505 CALL SLASET( 'A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV )
1506 CALL SLASET( 'A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV )
1507 CALL SLASET( 'A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV )
1509 CALL SORMQR( 'L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
1510 $ V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
1512 CALL SORMLQ( 'L', 'T', NR, NR, NR, WORK(2*N+1), N,
1513 $ WORK(2*N+N*NR+1), U, LDU, WORK(2*N+N*NR+NR+1),
1514 $ LWORK-2*N-N*NR-NR, IERR )
1517 WORK(2*N+N*NR+NR+IWORK(N+p)) = U(p,q)
1520 U(p,q) = WORK(2*N+N*NR+NR+p)
1526 * Permute the rows of V using the (column) permutation from the
1527 * first QRF. Also, scale the columns to make them unit in
1528 * Euclidean norm. This applies to all cases.
1530 TEMP1 = SQRT(FLOAT(N)) * EPSLN
1533 WORK(2*N+N*NR+NR+IWORK(p)) = V(p,q)
1536 V(p,q) = WORK(2*N+N*NR+NR+p)
1538 XSC = ONE / SNRM2( N, V(1,q), 1 )
1539 IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1540 $ CALL SSCAL( N, XSC, V(1,q), 1 )
1542 * At this moment, V contains the right singular vectors of A.
1543 * Next, assemble the left singular vector matrix U (M x N).
1544 IF ( NR .LT. M ) THEN
1545 CALL SLASET( 'A', M-NR, NR, ZERO, ZERO, U(NR+1,1), LDU )
1546 IF ( NR .LT. N1 ) THEN
1547 CALL SLASET('A',NR,N1-NR,ZERO,ZERO,U(1,NR+1),LDU)
1548 CALL SLASET('A',M-NR,N1-NR,ZERO,ONE,U(NR+1,NR+1),LDU)
1552 * The Q matrix from the first QRF is built into the left singular
1553 * matrix U. This applies to all cases.
1555 CALL SORMQR( 'Left', 'No_Tr', M, N1, N, A, LDA, WORK, U,
1556 $ LDU, WORK(N+1), LWORK-N, IERR )
1558 * The columns of U are normalized. The cost is O(M*N) flops.
1559 TEMP1 = SQRT(FLOAT(M)) * EPSLN
1561 XSC = ONE / SNRM2( M, U(1,p), 1 )
1562 IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1563 $ CALL SSCAL( M, XSC, U(1,p), 1 )
1566 * If the initial QRF is computed with row pivoting, the left
1567 * singular vectors must be adjusted.
1570 $ CALL SLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
1574 * .. the initial matrix A has almost orthogonal columns and
1575 * the second QRF is not needed
1577 CALL SLACPY( 'Upper', N, N, A, LDA, WORK(N+1), N )
1581 TEMP1 = XSC * WORK( N + (p-1)*N + p )
1582 DO 5971 q = 1, p - 1
1583 WORK(N+(q-1)*N+p)=-SIGN(TEMP1,WORK(N+(p-1)*N+q))
1587 CALL SLASET( 'Lower',N-1,N-1,ZERO,ZERO,WORK(N+2),N )
1590 CALL SGESVJ( 'Upper', 'U', 'N', N, N, WORK(N+1), N, SVA,
1591 $ N, U, LDU, WORK(N+N*N+1), LWORK-N-N*N, INFO )
1593 SCALEM = WORK(N+N*N+1)
1594 NUMRANK = NINT(WORK(N+N*N+2))
1596 CALL SCOPY( N, WORK(N+(p-1)*N+1), 1, U(1,p), 1 )
1597 CALL SSCAL( N, SVA(p), WORK(N+(p-1)*N+1), 1 )
1600 CALL STRSM( 'Left', 'Upper', 'NoTrans', 'No UD', N, N,
1601 $ ONE, A, LDA, WORK(N+1), N )
1603 CALL SCOPY( N, WORK(N+p), N, V(IWORK(p),1), LDV )
1605 TEMP1 = SQRT(FLOAT(N))*EPSLN
1607 XSC = ONE / SNRM2( N, V(1,p), 1 )
1608 IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1609 $ CALL SSCAL( N, XSC, V(1,p), 1 )
1612 * Assemble the left singular vector matrix U (M x N).
1614 IF ( N .LT. M ) THEN
1615 CALL SLASET( 'A', M-N, N, ZERO, ZERO, U(N+1,1), LDU )
1616 IF ( N .LT. N1 ) THEN
1617 CALL SLASET( 'A',N, N1-N, ZERO, ZERO, U(1,N+1),LDU )
1618 CALL SLASET( 'A',M-N,N1-N, ZERO, ONE,U(N+1,N+1),LDU )
1621 CALL SORMQR( 'Left', 'No Tr', M, N1, N, A, LDA, WORK, U,
1622 $ LDU, WORK(N+1), LWORK-N, IERR )
1623 TEMP1 = SQRT(FLOAT(M))*EPSLN
1625 XSC = ONE / SNRM2( M, U(1,p), 1 )
1626 IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1627 $ CALL SSCAL( M, XSC, U(1,p), 1 )
1631 $ CALL SLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
1635 * end of the >> almost orthogonal case << in the full SVD
1639 * This branch deploys a preconditioned Jacobi SVD with explicitly
1640 * accumulated rotations. It is included as an option, mainly for
1641 * experimental purposes. It performs well, and can also be used.
1642 * In this implementation, this branch will be automatically activated
1643 * if the condition number sigma_max(A) / sigma_min(A) is predicted
1644 * to be greater than the overflow threshold. This is because the
1645 * a posteriori computation of the singular vectors assumes robust
1646 * implementation of BLAS and some LAPACK procedures, capable of working
1647 * in the presence of extreme values. Since that is not always the case, ...
1650 CALL SCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
1654 XSC = SQRT(SMALL/EPSLN)
1656 TEMP1 = XSC*ABS( V(q,q) )
1658 IF ( ( p .GT. q ) .AND. ( ABS(V(p,q)) .LE. TEMP1 )
1659 $ .OR. ( p .LT. q ) )
1660 $ V(p,q) = SIGN( TEMP1, V(p,q) )
1661 IF ( p .LT. q ) V(p,q) = - V(p,q)
1665 CALL SLASET( 'U', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
1668 CALL SGEQRF( N, NR, V, LDV, WORK(N+1), WORK(2*N+1),
1670 CALL SLACPY( 'L', N, NR, V, LDV, WORK(2*N+1), N )
1673 CALL SCOPY( NR-p+1, V(p,p), LDV, U(p,p), 1 )
1677 XSC = SQRT(SMALL/EPSLN)
1679 DO 9971 p = 1, q - 1
1680 TEMP1 = XSC * MIN(ABS(U(p,p)),ABS(U(q,q)))
1681 U(p,q) = - SIGN( TEMP1, U(q,p) )
1685 CALL SLASET('U', NR-1, NR-1, ZERO, ZERO, U(1,2), LDU )
1688 CALL SGESVJ( 'L', 'U', 'V', NR, NR, U, LDU, SVA,
1689 $ N, V, LDV, WORK(2*N+N*NR+1), LWORK-2*N-N*NR, INFO )
1690 SCALEM = WORK(2*N+N*NR+1)
1691 NUMRANK = NINT(WORK(2*N+N*NR+2))
1693 IF ( NR .LT. N ) THEN
1694 CALL SLASET( 'A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV )
1695 CALL SLASET( 'A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV )
1696 CALL SLASET( 'A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV )
1699 CALL SORMQR( 'L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
1700 $ V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
1702 * Permute the rows of V using the (column) permutation from the
1703 * first QRF. Also, scale the columns to make them unit in
1704 * Euclidean norm. This applies to all cases.
1706 TEMP1 = SQRT(FLOAT(N)) * EPSLN
1709 WORK(2*N+N*NR+NR+IWORK(p)) = V(p,q)
1712 V(p,q) = WORK(2*N+N*NR+NR+p)
1714 XSC = ONE / SNRM2( N, V(1,q), 1 )
1715 IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1716 $ CALL SSCAL( N, XSC, V(1,q), 1 )
1719 * At this moment, V contains the right singular vectors of A.
1720 * Next, assemble the left singular vector matrix U (M x N).
1722 IF ( NR .LT. M ) THEN
1723 CALL SLASET( 'A', M-NR, NR, ZERO, ZERO, U(NR+1,1), LDU )
1724 IF ( NR .LT. N1 ) THEN
1725 CALL SLASET( 'A',NR, N1-NR, ZERO, ZERO, U(1,NR+1),LDU )
1726 CALL SLASET( 'A',M-NR,N1-NR, ZERO, ONE,U(NR+1,NR+1),LDU )
1730 CALL SORMQR( 'Left', 'No Tr', M, N1, N, A, LDA, WORK, U,
1731 $ LDU, WORK(N+1), LWORK-N, IERR )
1734 $ CALL SLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
1739 * .. swap U and V because the procedure worked on A^t
1741 CALL SSWAP( N, U(1,p), 1, V(1,p), 1 )
1746 * end of the full SVD
1748 * Undo scaling, if necessary (and possible)
1750 IF ( USCAL2 .LE. (BIG/SVA(1))*USCAL1 ) THEN
1751 CALL SLASCL( 'G', 0, 0, USCAL1, USCAL2, NR, 1, SVA, N, IERR )
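*        .. SLASCL multiplies SVA(1:NR) by USCAL2/USCAL1, undoing the
*        scaling introduced in the preconditioning phase (see the earlier
*        comment on USCAL1 and USCAL2).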
1756 IF ( NR .LT. N ) THEN
1762 WORK(1) = USCAL2 * SCALEM
1764 IF ( ERREST ) WORK(3) = SCONDA
1765 IF ( LSVEC .AND. RSVEC ) THEN