*  =========== DOCUMENTATION ===========
*
* Online html documentation available at
*            http://www.netlib.org/lapack/explore-html/
*
*> Download SGGHD3 + dependencies
*> <a href="http://www.netlib.org/cgi-bin/netlibfiles.tgz?format=tgz&filename=/lapack/lapack_routine/sgghd3.f">
*> [TGZ]</a>
*> <a href="http://www.netlib.org/cgi-bin/netlibfiles.zip?format=zip&filename=/lapack/lapack_routine/sgghd3.f">
*> [ZIP]</a>
*> <a href="http://www.netlib.org/cgi-bin/netlibfiles.txt?format=txt&filename=/lapack/lapack_routine/sgghd3.f">
*> [TXT]</a>
*       SUBROUTINE SGGHD3( COMPQ, COMPZ, N, ILO, IHI, A, LDA, B, LDB, Q,
*                          LDQ, Z, LDZ, WORK, LWORK, INFO )
*
*       .. Scalar Arguments ..
*       CHARACTER          COMPQ, COMPZ
*       INTEGER            IHI, ILO, INFO, LDA, LDB, LDQ, LDZ, N, LWORK
*       ..
*       .. Array Arguments ..
*       REAL               A( LDA, * ), B( LDB, * ), Q( LDQ, * ),
*      $                   Z( LDZ, * ), WORK( * )
*> SGGHD3 reduces a pair of real matrices (A,B) to generalized upper
*> Hessenberg form using orthogonal transformations, where A is a
*> general matrix and B is upper triangular.  The form of the
*> generalized eigenvalue problem is
*>    A*x = lambda*B*x,
*> and B is typically made upper triangular by computing its QR
*> factorization and moving the orthogonal matrix Q to the left side
*> of the equation.
*>
*> This subroutine simultaneously reduces A to a Hessenberg matrix H:
*>    Q**T*A*Z = H
*> and transforms B to another upper triangular matrix T:
*>    Q**T*B*Z = T
*> in order to reduce the problem to its standard form
*>    H*y = lambda*T*y,
*> where y = Z**T*x.
*>
*> The orthogonal matrices Q and Z are determined as products of Givens
*> rotations.  They may either be formed explicitly, or they may be
*> postmultiplied into input matrices Q1 and Z1, so that
*>    Q1 * A * Z1**T = (Q1*Q) * H * (Z1*Z)**T
*> and
*>    Q1 * B * Z1**T = (Q1*Q) * T * (Z1*Z)**T
*>
*> If Q1 is the orthogonal matrix from the QR factorization of B in the
*> original equation A*x = lambda*B*x, then SGGHD3 reduces the original
*> problem to generalized Hessenberg form.
*>
*> This is a blocked variant of SGGHRD, using matrix-matrix
*> multiplications for parts of the computation to enhance performance.
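*>
*> A typical call sequence, sketched here for orientation only (the
*> argument lists are abbreviated and the array name Q1 is illustrative;
*> see SGEQRF, SORMQR, SORGQR and SGGBAL for the exact interfaces):
*>
*>    CALL SGEQRF( ... )   compute B = Q1*R, making B upper triangular
*>    CALL SORMQR( ... )   apply Q1**T to A from the left
*>    CALL SORGQR( ... )   form Q1 explicitly if it is wanted
*>    CALL SGGHD3( 'V', 'I', N, ILO, IHI, A, LDA, B, LDB, Q1, LDQ,
*>   $             Z, LDZ, WORK, LWORK, INFO )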
*>          COMPQ is CHARACTER*1
*>          = 'N': do not compute Q;
*>          = 'I': Q is initialized to the unit matrix, and the
*>                 orthogonal matrix Q is returned;
*>          = 'V': Q must contain an orthogonal matrix Q1 on entry,
*>                 and the product Q1*Q is returned.
*>
*>          COMPZ is CHARACTER*1
*>          = 'N': do not compute Z;
*>          = 'I': Z is initialized to the unit matrix, and the
*>                 orthogonal matrix Z is returned;
*>          = 'V': Z must contain an orthogonal matrix Z1 on entry,
*>                 and the product Z1*Z is returned.
*>          The order of the matrices A and B.  N >= 0.
*>
*>          ILO and IHI mark the rows and columns of A which are to be
*>          reduced.  It is assumed that A is already upper triangular
*>          in rows and columns 1:ILO-1 and IHI+1:N.  ILO and IHI are
*>          normally set by a previous call to SGGBAL; otherwise they
*>          should be set to 1 and N respectively.
*>          1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0.
*>          A is REAL array, dimension (LDA, N)
*>          On entry, the N-by-N general matrix to be reduced.
*>          On exit, the upper triangle and the first subdiagonal of A
*>          are overwritten with the upper Hessenberg matrix H, and the
*>          rest is set to zero.
*>
*>          The leading dimension of the array A.  LDA >= max(1,N).
*>
*>          B is REAL array, dimension (LDB, N)
*>          On entry, the N-by-N upper triangular matrix B.
*>          On exit, the upper triangular matrix T = Q**T B Z.  The
*>          elements below the diagonal are set to zero.
*>
*>          The leading dimension of the array B.  LDB >= max(1,N).
*>          Q is REAL array, dimension (LDQ, N)
*>          On entry, if COMPQ = 'V', the orthogonal matrix Q1,
*>          typically from the QR factorization of B.
*>          On exit, if COMPQ='I', the orthogonal matrix Q, and if
*>          COMPQ = 'V', the product Q1*Q.
*>          Not referenced if COMPQ='N'.
*>
*>          The leading dimension of the array Q.
*>          LDQ >= N if COMPQ='V' or 'I'; LDQ >= 1 otherwise.
*>
*>          Z is REAL array, dimension (LDZ, N)
*>          On entry, if COMPZ = 'V', the orthogonal matrix Z1.
*>          On exit, if COMPZ='I', the orthogonal matrix Z, and if
*>          COMPZ = 'V', the product Z1*Z.
*>          Not referenced if COMPZ='N'.
*>
*>          The leading dimension of the array Z.
*>          LDZ >= N if COMPZ='V' or 'I'; LDZ >= 1 otherwise.
*>          WORK is REAL array, dimension (LWORK)
*>          On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*>
*>          The length of the array WORK.  LWORK >= 1.
*>          For optimum performance LWORK >= 6*N*NB, where NB is the
*>          optimal blocksize.
*>
*>          If LWORK = -1, then a workspace query is assumed; the routine
*>          only calculates the optimal size of the WORK array, returns
*>          this value as the first entry of the WORK array, and no error
*>          message related to LWORK is issued by XERBLA.
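*>
*>          A workspace query can be sketched as follows (illustrative
*>          code, not part of the interface):
*>
*>             LWORK = -1
*>             CALL SGGHD3( COMPQ, COMPZ, N, ILO, IHI, A, LDA, B, LDB,
*>            $             Q, LDQ, Z, LDZ, WORK, LWORK, INFO )
*>             LWORK = INT( WORK( 1 ) )
*>             ... reallocate WORK to length LWORK, then call again ...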
*>          = 0:  successful exit.
*>          < 0:  if INFO = -i, the i-th argument had an illegal value.
*> \author Univ. of Tennessee
*> \author Univ. of California Berkeley
*> \author Univ. of Colorado Denver
*> \author NAG Ltd.
*
*> \date January 2015
*
*> \ingroup realOTHERcomputational
*
*> \par Further Details:
*  =====================
*>
*>  This routine reduces A to Hessenberg form and maintains B in
*>  triangular form using a blocked variant of Moler and Stewart's
*>  original algorithm, as described by Kagstrom, Kressner,
*>  Quintana-Orti, and Quintana-Orti (BIT 2008).
*
*  =====================================================================
      SUBROUTINE SGGHD3( COMPQ, COMPZ, N, ILO, IHI, A, LDA, B, LDB, Q,
     $                   LDQ, Z, LDZ, WORK, LWORK, INFO )
*
*  -- LAPACK computational routine (version 3.6.1) --
*  -- LAPACK is a software package provided by Univ. of Tennessee,    --
*  -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd. --
*
*     .. Scalar Arguments ..
      CHARACTER          COMPQ, COMPZ
      INTEGER            IHI, ILO, INFO, LDA, LDB, LDQ, LDZ, N, LWORK
*     ..
*     .. Array Arguments ..
      REAL               A( LDA, * ), B( LDB, * ), Q( LDQ, * ),
     $                   Z( LDZ, * ), WORK( * )
*     ..
*
*  =====================================================================
*
*     .. Parameters ..
      REAL               ZERO, ONE
      PARAMETER          ( ZERO = 0.0E+0, ONE = 1.0E+0 )
*     .. Local Scalars ..
      LOGICAL            BLK22, INITQ, INITZ, LQUERY, WANTQ, WANTZ
      CHARACTER*1        COMPQ2, COMPZ2
      INTEGER            COLA, I, IERR, J, J0, JCOL, JJ, JROW, K,
     $                   KACC22, LEN, LWKOPT, N2NB, NB, NBLST, NBMIN,
     $                   NH, NNB, NX, PPW, PPWO, PW, TOP, TOPQ
      REAL               C, C1, C2, S, S1, S2, TEMP, TEMP1, TEMP2, TEMP3
*     ..
*     .. External Functions ..
      LOGICAL            LSAME
      INTEGER            ILAENV
      EXTERNAL           ILAENV, LSAME
*     ..
*     .. External Subroutines ..
      EXTERNAL           SGGHRD, SLARTG, SLASET, SORM22, SROT, XERBLA
*     ..
*     .. Intrinsic Functions ..
      INTRINSIC          MAX, MIN, MOD, REAL
*     ..
*     .. Executable Statements ..
*
*     Decode and test the input parameters.
*
      INFO = 0
      NB = ILAENV( 1, 'SGGHD3', ' ', N, ILO, IHI, -1 )
      LWKOPT = MAX( 6*N*NB, 1 )
      WORK( 1 ) = REAL( LWKOPT )
      INITQ = LSAME( COMPQ, 'I' )
      WANTQ = INITQ .OR. LSAME( COMPQ, 'V' )
      INITZ = LSAME( COMPZ, 'I' )
      WANTZ = INITZ .OR. LSAME( COMPZ, 'V' )
      LQUERY = ( LWORK.EQ.-1 )
*
      IF( .NOT.LSAME( COMPQ, 'N' ) .AND. .NOT.WANTQ ) THEN
         INFO = -1
      ELSE IF( .NOT.LSAME( COMPZ, 'N' ) .AND. .NOT.WANTZ ) THEN
         INFO = -2
      ELSE IF( N.LT.0 ) THEN
         INFO = -3
      ELSE IF( ILO.LT.1 ) THEN
         INFO = -4
      ELSE IF( IHI.GT.N .OR. IHI.LT.ILO-1 ) THEN
         INFO = -5
      ELSE IF( LDA.LT.MAX( 1, N ) ) THEN
         INFO = -7
      ELSE IF( LDB.LT.MAX( 1, N ) ) THEN
         INFO = -9
      ELSE IF( ( WANTQ .AND. LDQ.LT.N ) .OR. LDQ.LT.1 ) THEN
         INFO = -11
      ELSE IF( ( WANTZ .AND. LDZ.LT.N ) .OR. LDZ.LT.1 ) THEN
         INFO = -13
      ELSE IF( LWORK.LT.1 .AND. .NOT.LQUERY ) THEN
         INFO = -15
      END IF
      IF( INFO.NE.0 ) THEN
         CALL XERBLA( 'SGGHD3', -INFO )
         RETURN
      ELSE IF( LQUERY ) THEN
         RETURN
      END IF
*
*     Initialize Q and Z if desired.
*
      IF( INITQ )
     $   CALL SLASET( 'All', N, N, ZERO, ONE, Q, LDQ )
      IF( INITZ )
     $   CALL SLASET( 'All', N, N, ZERO, ONE, Z, LDZ )
*
*     Zero out lower triangle of B.
*
      IF( N.GT.1 )
     $   CALL SLASET( 'Lower', N-1, N-1, ZERO, ZERO, B(2, 1), LDB )
*
*     Quick return if possible
*
      NH = IHI - ILO + 1
      IF( NH.LE.1 ) THEN
         WORK( 1 ) = ONE
         RETURN
      END IF
*
*     Determine the blocksize.
*
      NBMIN = ILAENV( 2, 'SGGHD3', ' ', N, ILO, IHI, -1 )
      IF( NB.GT.1 .AND. NB.LT.NH ) THEN
*
*        Determine when to use unblocked instead of blocked code.
*
         NX = MAX( NB, ILAENV( 3, 'SGGHD3', ' ', N, ILO, IHI, -1 ) )
         IF( NX.LT.NH ) THEN
*
*           Determine if workspace is large enough for blocked code.
*
            IF( LWORK.LT.LWKOPT ) THEN
*
*              Not enough workspace to use optimal NB:  determine the
*              minimum value of NB, and reduce NB or force use of
*              unblocked code.
*
               NBMIN = MAX( 2, ILAENV( 2, 'SGGHD3', ' ', N, ILO, IHI,
     $                      -1 ) )
               IF( LWORK.GE.6*N*NBMIN ) THEN
                  NB = LWORK / ( 6*N )
               ELSE
                  NB = 1
               END IF
            END IF
         END IF
      END IF
*
      IF( NB.LT.NBMIN .OR. NB.GE.NH ) THEN
*
*        Use unblocked code below
*
         JCOL = ILO
*
      ELSE
*
*        Use blocked code
*
         KACC22 = ILAENV( 16, 'SGGHD3', ' ', N, ILO, IHI, -1 )
         BLK22 = KACC22.EQ.2
         DO JCOL = ILO, IHI-2, NB
            NNB = MIN( NB, IHI-JCOL-1 )
*
*           Initialize small orthogonal factors that will hold the
*           accumulated Givens rotations in workspace.
*           N2NB   denotes the number of 2*NNB-by-2*NNB factors
*           NBLST  denotes the (possibly smaller) order of the last
*                  factor.
*
            N2NB = ( IHI-JCOL-1 ) / NNB - 1
            NBLST = IHI - JCOL - N2NB*NNB
            CALL SLASET( 'All', NBLST, NBLST, ZERO, ONE, WORK, NBLST )
            PW = NBLST * NBLST + 1
            DO I = 1, N2NB
               CALL SLASET( 'All', 2*NNB, 2*NNB, ZERO, ONE,
     $                      WORK( PW ), 2*NNB )
               PW = PW + 4*NNB*NNB
            END DO
*
*           Reduce columns JCOL:JCOL+NNB-1 of A to Hessenberg form.
*
            DO J = JCOL, JCOL+NNB-1
*
*              Reduce Jth column of A. Store cosines and sines in Jth
*              column of A and B, respectively.
*
               DO I = IHI, J+2, -1
                  TEMP = A( I-1, J )
                  CALL SLARTG( TEMP, A( I, J ), C, S, A( I-1, J ) )
                  A( I, J ) = C
                  B( I, J ) = S
               END DO
*
*              Accumulate Givens rotations into workspace array.
*
               PPW = ( NBLST + 1 )*( NBLST - 2 ) - J + JCOL + 1
               LEN  = 2 + J - JCOL
               JROW = J + N2NB*NNB + 2
               DO I = IHI, JROW, -1
                  C = A( I, J )
                  S = B( I, J )
                  DO JJ = PPW, PPW+LEN-1
                     TEMP = WORK( JJ + NBLST )
                     WORK( JJ + NBLST ) = C*TEMP - S*WORK( JJ )
                     WORK( JJ ) = S*TEMP + C*WORK( JJ )
                  END DO
                  PPW = PPW - NBLST - 1
               END DO
*
               PPWO = NBLST*NBLST + ( NNB+J-JCOL-1 )*2*NNB + NNB
               J0 = JROW - NNB
               DO JROW = J0, J+2, -NNB
                  DO I = JROW+NNB-1, JROW, -1
                     C = A( I, J )
                     S = B( I, J )
                     DO JJ = PPW, PPW+LEN-1
                        TEMP = WORK( JJ + 2*NNB )
                        WORK( JJ + 2*NNB ) = C*TEMP - S*WORK( JJ )
                        WORK( JJ ) = S*TEMP + C*WORK( JJ )
                     END DO
                     PPW = PPW - 2*NNB - 1
                  END DO
                  PPWO = PPWO + 4*NNB*NNB
               END DO
*
*              TOP denotes the number of top rows in A and B that will
*              not be updated during the next steps.
*
*              Propagate transformations through B and replace stored
*              left sines/cosines by right sines/cosines.
*
               DO JJ = N, J+1, -1
*
*                 Update JJth column of B.
*
                  DO I = MIN( JJ+1, IHI ), J+2, -1
                     C = A( I, J )
                     S = B( I, J )
                     TEMP = B( I, JJ )
                     B( I, JJ ) = C*TEMP - S*B( I-1, JJ )
                     B( I-1, JJ ) = S*TEMP + C*B( I-1, JJ )
                  END DO
*
*                 Annihilate B( JJ+1, JJ ).
*
                  IF( JJ.LT.IHI ) THEN
                     TEMP = B( JJ+1, JJ+1 )
                     CALL SLARTG( TEMP, B( JJ+1, JJ ), C, S,
     $                            B( JJ+1, JJ+1 ) )
                     B( JJ+1, JJ ) = ZERO
                     CALL SROT( JJ-TOP, B( TOP+1, JJ+1 ), 1,
     $                          B( TOP+1, JJ ), 1, C, S )
                     A( JJ+1, J ) = C
                     B( JJ+1, J ) = -S
                  END IF
               END DO
*
*              Update A by transformations from right.
*              Explicit loop unrolling provides better performance
*              compared to
*              CALL SLASR( 'Right', 'Variable', 'Backward', IHI-TOP,
*    $                     IHI-J, A( J+2, J ), B( J+2, J ),
*    $                     A( TOP+1, J+1 ), LDA )
*
               JJ = MOD( IHI-J-1, 3 )
               DO I = IHI-J-3, JJ+1, -3
                  TEMP1 = A( K, J+I+1 )
                  TEMP2 = A( K, J+I+2 )
                  TEMP3 = A( K, J+I+3 )
                  A( K, J+I+3 ) = C2*TEMP3 + S2*TEMP2
                  TEMP2 = -S2*TEMP3 + C2*TEMP2
                  A( K, J+I+2 ) = C1*TEMP2 + S1*TEMP1
                  TEMP1 = -S1*TEMP2 + C1*TEMP1
                  A( K, J+I+1 ) = C*TEMP1 + S*TEMP
                  A( K, J+I ) = -S*TEMP1 + C*TEMP
               END DO
*
               DO I = JJ, 1, -1
                  CALL SROT( IHI-TOP, A( TOP+1, J+I+1 ), 1,
     $                       A( TOP+1, J+I ), 1, A( J+1+I, J ),
     $                       -B( J+1+I, J ) )
               END DO
*
*              Update (J+1)th column of A by transformations from left.
*
               IF ( J .LT. JCOL + NNB - 1 ) THEN
                  LEN  = 1 + J - JCOL
*
*                 Multiply with the trailing accumulated orthogonal
*                 matrix, which takes the form
*
*                           [  U11  U12  ]
*                       U = [            ],
*                           [  U21  U22  ]
*
*                 where U21 is a LEN-by-LEN matrix and U12 is lower
*                 triangular.
*
                  JROW = IHI - NBLST + 1
                  CALL SGEMV( 'Transpose', NBLST, LEN, ONE, WORK,
     $                        NBLST, A( JROW, J+1 ), 1, ZERO,
     $                        WORK( PW ), 1 )
                  PPW = PW + LEN
                  DO I = JROW, JROW+NBLST-LEN-1
                     WORK( PPW ) = A( I, J+1 )
                     PPW = PPW + 1
                  END DO
                  CALL STRMV( 'Lower', 'Transpose', 'Non-unit',
     $                        NBLST-LEN, WORK( LEN*NBLST + 1 ), NBLST,
     $                        WORK( PW+LEN ), 1 )
                  CALL SGEMV( 'Transpose', LEN, NBLST-LEN, ONE,
     $                        WORK( (LEN+1)*NBLST - LEN + 1 ), NBLST,
     $                        A( JROW+NBLST-LEN, J+1 ), 1, ONE,
     $                        WORK( PW+LEN ), 1 )
                  PPW = PW
                  DO I = JROW, JROW+NBLST-1
                     A( I, J+1 ) = WORK( PPW )
                     PPW = PPW + 1
                  END DO
*
*                 Multiply with the other accumulated orthogonal
*                 matrices, which take the form
*
*                           [  U11  U12   0  ]
*                           [                ]
*                       U = [  U21  U22   0  ],
*                           [   0    0    I  ]
*
*                 where I denotes the (NNB-LEN)-by-(NNB-LEN) identity
*                 matrix, U21 is a LEN-by-LEN upper triangular matrix
*                 and U12 is an NNB-by-NNB lower triangular matrix.
*
                  PPWO = 1 + NBLST*NBLST
                  J0 = JROW - NNB
                  DO JROW = J0, JCOL+1, -NNB
                     PPW = PW + LEN
                     DO I = JROW, JROW+NNB-1
                        WORK( PPW ) = A( I, J+1 )
                        PPW = PPW + 1
                     END DO
                     PPW = PW
                     DO I = JROW+NNB, JROW+NNB+LEN-1
                        WORK( PPW ) = A( I, J+1 )
                        PPW = PPW + 1
                     END DO
                     CALL STRMV( 'Upper', 'Transpose', 'Non-unit', LEN,
     $                           WORK( PPWO + NNB ), 2*NNB, WORK( PW ),
     $                           1 )
                     CALL STRMV( 'Lower', 'Transpose', 'Non-unit', NNB,
     $                           WORK( PPWO + 2*LEN*NNB ),
     $                           2*NNB, WORK( PW + LEN ), 1 )
                     CALL SGEMV( 'Transpose', NNB, LEN, ONE,
     $                           WORK( PPWO ), 2*NNB, A( JROW, J+1 ), 1,
     $                           ONE, WORK( PW ), 1 )
                     CALL SGEMV( 'Transpose', LEN, NNB, ONE,
     $                           WORK( PPWO + 2*LEN*NNB + NNB ), 2*NNB,
     $                           A( JROW+NNB, J+1 ), 1, ONE,
     $                           WORK( PW+LEN ), 1 )
                     PPW = PW
                     DO I = JROW, JROW+LEN+NNB-1
                        A( I, J+1 ) = WORK( PPW )
                        PPW = PPW + 1
                     END DO
                     PPWO = PPWO + 4*NNB*NNB
                  END DO
               END IF
            END DO
*
*           Apply accumulated orthogonal matrices to A.
*
            COLA = N - JCOL - NNB + 1
            J = IHI - NBLST + 1
            CALL SGEMM( 'Transpose', 'No Transpose', NBLST,
     $                  COLA, NBLST, ONE, WORK, NBLST,
     $                  A( J, JCOL+NNB ), LDA, ZERO, WORK( PW ),
     $                  NBLST )
            CALL SLACPY( 'All', NBLST, COLA, WORK( PW ), NBLST,
     $                   A( J, JCOL+NNB ), LDA )
            PPWO = NBLST*NBLST + 1
            J0 = J - NNB
            DO J = J0, JCOL+1, -NNB
               IF ( BLK22 ) THEN
*
*                 Exploit the structure of
*
*                           [  U11  U12  ]
*                       U = [            ],
*                           [  U21  U22  ]
*
*                 where all blocks are NNB-by-NNB, U21 is upper
*                 triangular and U12 is lower triangular.
*
                  CALL SORM22( 'Left', 'Transpose', 2*NNB, COLA, NNB,
     $                         NNB, WORK( PPWO ), 2*NNB,
     $                         A( J, JCOL+NNB ), LDA, WORK( PW ),
     $                         LWORK-PW+1, IERR )
               ELSE
*
*                 Ignore the structure of U.
*
                  CALL SGEMM( 'Transpose', 'No Transpose', 2*NNB,
     $                        COLA, 2*NNB, ONE, WORK( PPWO ), 2*NNB,
     $                        A( J, JCOL+NNB ), LDA, ZERO, WORK( PW ),
     $                        2*NNB )
                  CALL SLACPY( 'All', 2*NNB, COLA, WORK( PW ), 2*NNB,
     $                         A( J, JCOL+NNB ), LDA )
               END IF
               PPWO = PPWO + 4*NNB*NNB
            END DO
*
*           Apply accumulated orthogonal matrices to Q.
*
            IF( WANTQ ) THEN
               J = IHI - NBLST + 1
               IF ( INITQ ) THEN
                  TOPQ = MAX( 2, J - JCOL + 1 )
                  NH  = IHI - TOPQ + 1
               ELSE
                  TOPQ = 1
                  NH = N
               END IF
               CALL SGEMM( 'No Transpose', 'No Transpose', NH,
     $                     NBLST, NBLST, ONE, Q( TOPQ, J ), LDQ,
     $                     WORK, NBLST, ZERO, WORK( PW ), NH )
               CALL SLACPY( 'All', NH, NBLST, WORK( PW ), NH,
     $                      Q( TOPQ, J ), LDQ )
               PPWO = NBLST*NBLST + 1
               J0 = J - NNB
               DO J = J0, JCOL+1, -NNB
                  IF ( INITQ ) THEN
                     TOPQ = MAX( 2, J - JCOL + 1 )
                     NH  = IHI - TOPQ + 1
                  END IF
                  IF ( BLK22 ) THEN
*
*                    Exploit the structure of U.
*
                     CALL SORM22( 'Right', 'No Transpose', NH, 2*NNB,
     $                            NNB, NNB, WORK( PPWO ), 2*NNB,
     $                            Q( TOPQ, J ), LDQ, WORK( PW ),
     $                            LWORK-PW+1, IERR )
                  ELSE
*
*                    Ignore the structure of U.
*
                     CALL SGEMM( 'No Transpose', 'No Transpose', NH,
     $                           2*NNB, 2*NNB, ONE, Q( TOPQ, J ), LDQ,
     $                           WORK( PPWO ), 2*NNB, ZERO, WORK( PW ),
     $                           NH )
                     CALL SLACPY( 'All', NH, 2*NNB, WORK( PW ), NH,
     $                            Q( TOPQ, J ), LDQ )
                  END IF
                  PPWO = PPWO + 4*NNB*NNB
               END DO
            END IF
*
*           Accumulate right Givens rotations if required.
*
            IF ( WANTZ .OR. TOP.GT.0 ) THEN
*
*              Initialize small orthogonal factors that will hold the
*              accumulated Givens rotations in workspace.
*
               CALL SLASET( 'All', NBLST, NBLST, ZERO, ONE, WORK,
     $                      NBLST )
               PW = NBLST * NBLST + 1
               DO I = 1, N2NB
                  CALL SLASET( 'All', 2*NNB, 2*NNB, ZERO, ONE,
     $                         WORK( PW ), 2*NNB )
                  PW = PW + 4*NNB*NNB
               END DO
*
*              Accumulate Givens rotations into workspace array.
*
               DO J = JCOL, JCOL+NNB-1
                  PPW = ( NBLST + 1 )*( NBLST - 2 ) - J + JCOL + 1
                  LEN  = 2 + J - JCOL
                  JROW = J + N2NB*NNB + 2
                  DO I = IHI, JROW, -1
                     C = A( I, J )
                     S = B( I, J )
                     DO JJ = PPW, PPW+LEN-1
                        TEMP = WORK( JJ + NBLST )
                        WORK( JJ + NBLST ) = C*TEMP - S*WORK( JJ )
                        WORK( JJ ) = S*TEMP + C*WORK( JJ )
                     END DO
                     PPW = PPW - NBLST - 1
                  END DO
*
                  PPWO = NBLST*NBLST + ( NNB+J-JCOL-1 )*2*NNB + NNB
                  DO JROW = J0, J+2, -NNB
                     DO I = JROW+NNB-1, JROW, -1
                        C = A( I, J )
                        S = B( I, J )
                        DO JJ = PPW, PPW+LEN-1
                           TEMP = WORK( JJ + 2*NNB )
                           WORK( JJ + 2*NNB ) = C*TEMP - S*WORK( JJ )
                           WORK( JJ ) = S*TEMP + C*WORK( JJ )
                        END DO
                        PPW = PPW - 2*NNB - 1
                     END DO
                     PPWO = PPWO + 4*NNB*NNB
                  END DO
               END DO
*
               CALL SLASET( 'Lower', IHI - JCOL - 1, NNB, ZERO, ZERO,
     $                      A( JCOL + 2, JCOL ), LDA )
               CALL SLASET( 'Lower', IHI - JCOL - 1, NNB, ZERO, ZERO,
     $                      B( JCOL + 2, JCOL ), LDB )
*
*              Apply accumulated orthogonal matrices to A and B.
*
               IF ( TOP.GT.0 ) THEN
                  J = IHI - NBLST + 1
                  CALL SGEMM( 'No Transpose', 'No Transpose', TOP,
     $                        NBLST, NBLST, ONE, A( 1, J ), LDA,
     $                        WORK, NBLST, ZERO, WORK( PW ), TOP )
                  CALL SLACPY( 'All', TOP, NBLST, WORK( PW ), TOP,
     $                         A( 1, J ), LDA )
                  PPWO = NBLST*NBLST + 1
                  J0 = J - NNB
                  DO J = J0, JCOL+1, -NNB
                     IF ( BLK22 ) THEN
*
*                       Exploit the structure of U.
*
                        CALL SORM22( 'Right', 'No Transpose', TOP, 2*NNB,
     $                               NNB, NNB, WORK( PPWO ), 2*NNB,
     $                               A( 1, J ), LDA, WORK( PW ),
     $                               LWORK-PW+1, IERR )
                     ELSE
*
*                       Ignore the structure of U.
*
                        CALL SGEMM( 'No Transpose', 'No Transpose', TOP,
     $                              2*NNB, 2*NNB, ONE, A( 1, J ), LDA,
     $                              WORK( PPWO ), 2*NNB, ZERO,
     $                              WORK( PW ), TOP )
                        CALL SLACPY( 'All', TOP, 2*NNB, WORK( PW ), TOP,
     $                               A( 1, J ), LDA )
                     END IF
                     PPWO = PPWO + 4*NNB*NNB
                  END DO
*
                  J = IHI - NBLST + 1
                  CALL SGEMM( 'No Transpose', 'No Transpose', TOP,
     $                        NBLST, NBLST, ONE, B( 1, J ), LDB,
     $                        WORK, NBLST, ZERO, WORK( PW ), TOP )
                  CALL SLACPY( 'All', TOP, NBLST, WORK( PW ), TOP,
     $                         B( 1, J ), LDB )
                  PPWO = NBLST*NBLST + 1
                  J0 = J - NNB
                  DO J = J0, JCOL+1, -NNB
                     IF ( BLK22 ) THEN
*
*                       Exploit the structure of U.
*
                        CALL SORM22( 'Right', 'No Transpose', TOP, 2*NNB,
     $                               NNB, NNB, WORK( PPWO ), 2*NNB,
     $                               B( 1, J ), LDB, WORK( PW ),
     $                               LWORK-PW+1, IERR )
                     ELSE
*
*                       Ignore the structure of U.
*
                        CALL SGEMM( 'No Transpose', 'No Transpose', TOP,
     $                              2*NNB, 2*NNB, ONE, B( 1, J ), LDB,
     $                              WORK( PPWO ), 2*NNB, ZERO,
     $                              WORK( PW ), TOP )
                        CALL SLACPY( 'All', TOP, 2*NNB, WORK( PW ), TOP,
     $                               B( 1, J ), LDB )
                     END IF
                     PPWO = PPWO + 4*NNB*NNB
                  END DO
               END IF
*
*              Apply accumulated orthogonal matrices to Z.
*
               IF( WANTZ ) THEN
                  J = IHI - NBLST + 1
                  IF ( INITZ ) THEN
                     TOPQ = MAX( 2, J - JCOL + 1 )
                     NH  = IHI - TOPQ + 1
                  ELSE
                     TOPQ = 1
                     NH = N
                  END IF
                  CALL SGEMM( 'No Transpose', 'No Transpose', NH,
     $                        NBLST, NBLST, ONE, Z( TOPQ, J ), LDZ,
     $                        WORK, NBLST, ZERO, WORK( PW ), NH )
                  CALL SLACPY( 'All', NH, NBLST, WORK( PW ), NH,
     $                         Z( TOPQ, J ), LDZ )
                  PPWO = NBLST*NBLST + 1
                  J0 = J - NNB
                  DO J = J0, JCOL+1, -NNB
                     IF ( INITZ ) THEN
                        TOPQ = MAX( 2, J - JCOL + 1 )
                        NH  = IHI - TOPQ + 1
                     END IF
                     IF ( BLK22 ) THEN
*
*                       Exploit the structure of U.
*
                        CALL SORM22( 'Right', 'No Transpose', NH, 2*NNB,
     $                               NNB, NNB, WORK( PPWO ), 2*NNB,
     $                               Z( TOPQ, J ), LDZ, WORK( PW ),
     $                               LWORK-PW+1, IERR )
                     ELSE
*
*                       Ignore the structure of U.
*
                        CALL SGEMM( 'No Transpose', 'No Transpose', NH,
     $                              2*NNB, 2*NNB, ONE, Z( TOPQ, J ), LDZ,
     $                              WORK( PPWO ), 2*NNB, ZERO, WORK( PW ),
     $                              NH )
                        CALL SLACPY( 'All', NH, 2*NNB, WORK( PW ), NH,
     $                               Z( TOPQ, J ), LDZ )
                     END IF
                     PPWO = PPWO + 4*NNB*NNB
                  END DO
               END IF
            END IF
         END DO
      END IF
*
*     Use unblocked code to reduce the rest of the matrix
*     Avoid re-initialization of modified Q and Z.
*
      COMPQ2 = COMPQ
      COMPZ2 = COMPZ
      IF ( JCOL.NE.ILO ) THEN
         IF ( WANTQ )
     $      COMPQ2 = 'V'
         IF ( WANTZ )
     $      COMPZ2 = 'V'
      END IF
*
      IF( JCOL.LT.IHI )
     $   CALL SGGHRD( COMPQ2, COMPZ2, N, JCOL, IHI, A, LDA, B, LDB, Q,
     $                LDQ, Z, LDZ, IERR )
      WORK( 1 ) = REAL( LWKOPT )
*
      RETURN
*
*     End of SGGHD3
*
      END