Project: Building Efficient Tools for Sparse, Block-Symmetric Matrices
Student Researchers: Annemarie Dahm, Hong Fang
Advisor: Don Morton
Institution: University of Montana, Missoula


Results


This project has significantly increased our interest in academic research and provided us with incredible opportunities in the field of computer science. Both of us attended the Supercomputing 2000 Conference in Dallas as volunteers, met other researchers in the field, and learned about the interaction of mathematics and scientific computing. However, even after making several simplifications, our results did not advance general knowledge of the field; rather, the project deepened our own understanding of research methods throughout the entire experience.

A year ago, Dr. Morton proposed this project, which involved finding the inverse of large sparse block-symmetric matrices. The problem with any large matrix in its expanded form is its computational time and storage requirements: if the entire matrix does not fit in main memory, there is a significant slowdown as parts of the matrix are swapped in and out. Because the upper triangle is repeated in the lower triangle and many of the entries are zero and need no storage, the sparse symmetric property implies that less than half of the matrix needs to be stored. Block symmetry refers to a type of matrix in which a block in the upper triangular portion also appears in the lower triangular portion (not its transpose, as with ordinary symmetry), while the blocks along the diagonal are symmetric within themselves. Thus the symmetric counterpart of the element in the i, j entry depends on its location and the block size, and will not necessarily appear in the j, i entry.
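
To make this storage convention concrete, here is a minimal Python/NumPy sketch (our illustration, not code from the project; the function name and block layout are our own) that expands a full matrix from stored upper-triangular blocks:

    import numpy as np

    def expand_block_symmetric(blocks, nblocks, b):
        # blocks: dict mapping block coordinates (I, J), I <= J, to a
        # b-by-b array; diagonal blocks (I, I) are themselves symmetric.
        n = nblocks * b
        A = np.zeros((n, n))
        for (I, J), B in blocks.items():
            A[I*b:(I+1)*b, J*b:(J+1)*b] = B
            if I != J:
                # Block symmetry: the lower block repeats the upper block
                # verbatim, not its transpose as with ordinary symmetry.
                A[J*b:(J+1)*b, I*b:(I+1)*b] = B
        return A

With b = 2, for example, the value stored at entry (0, 3), inside the upper off-diagonal block, reappears at entry (2, 1) rather than at entry (3, 0), which is exactly the dependence on block size and position described above.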

We started work in September by studying matrices and different methods for finding the inverse. These include the Gauss-Jordan method, and computing an L-U factorization of the original matrix and then inverting the L-U factors. Both of these methods are computationally expensive and produce a high number of fill-ins, zero elements which become non-zero through the procedure. It is true that the inverse of a sparse matrix is often a dense matrix. But in some cases, as explained by Yu. B. Gol'dshtein (1992), the inverse of a sparse matrix is not necessarily dense. He uses graph theory to draw the matrix and show whether the inverse would be sparse. Knowing that the inverse may be sparse allows us to consider the large sparse matrix directly: if the initial matrix is sparse and the final matrix is sparse, then we want to find a method through which the intermediate matrices are also sparse.
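
The fill-in problem is easy to demonstrate. The sketch below (our illustration, using SciPy's SuperLU interface rather than anything from the project) factors an "arrowhead" matrix, a very sparse matrix whose natural-order factorization nonetheless fills in almost completely, and whose explicit inverse is fully dense:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    # Arrowhead matrix: nonzero diagonal plus a dense first row and column.
    n = 10
    A = sp.lil_matrix((n, n))
    A.setdiag(4.0)
    A[0, :] = 1.0
    A[:, 0] = 1.0
    A[0, 0] = float(n)
    A = A.tocsc()

    # Factor without reordering so the fill-in is visible.
    lu = splu(A, permc_spec='NATURAL')
    print('nonzeros in A:      ', A.nnz)                 # 3n - 2
    print('nonzeros in L + U:  ', lu.L.nnz + lu.U.nnz)   # roughly n^2

    # The explicit inverse, by contrast, has no zeros at all here.
    Ainv = np.linalg.inv(A.toarray())
    print('nonzeros in inv(A): ', np.count_nonzero(Ainv))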

During this time we were also debating whether the inverse of a block-symmetric matrix is itself block symmetric. It is known that the inverse of a symmetric matrix is symmetric; however, with block symmetry, in which any one entry depends on the block size and the entry's position within its block, it was uncertain whether this would carry over to the inverse. By expanding the matrix and examining each iteration of the Gauss-Jordan method, we concluded that the inverse of a block-symmetric matrix is not necessarily block symmetric. Consequently, both the lower and upper triangles must be stored in the final matrix. Because this is the case, and believing that the Gauss-Jordan method is the best method for these matrices, we simplified the problem to ordinary symmetric matrices, in which the value at the i, j entry is also the value at the j, i entry.
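
A small numerical check makes the conclusion concrete. The following sketch (our illustration; the particular blocks are arbitrary) builds a 4x4 block-symmetric matrix from 2x2 blocks and shows that its inverse is neither block symmetric nor ordinarily symmetric:

    import numpy as np

    b = 2
    D0 = np.array([[4.0, 1.0], [1.0, 5.0]])   # symmetric diagonal block
    D1 = np.array([[6.0, 2.0], [2.0, 7.0]])   # symmetric diagonal block
    B  = np.array([[1.0, 3.0], [0.0, 2.0]])   # off-diagonal block

    # Block symmetry: B appears verbatim below the diagonal.
    A = np.block([[D0, B], [B, D1]])
    Ainv = np.linalg.inv(A)

    upper = Ainv[:b, b:]
    lower = Ainv[b:, :b]
    print('inverse block symmetric?', np.allclose(upper, lower))   # False
    print('inverse symmetric?      ', np.allclose(Ainv, Ainv.T))   # False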

Finding the inverse of symmetric matrices has already been accomplished, for example by the LAPACK routines, but the question of sparsity remains, and thus whether more storage than the initial requirement is needed (Anderson 1999). Midyear, we studied sparse matrices and methods used to reduce fill-ins. These include the Markowitz reordering and minimum degree reordering algorithms (Dongarra 1998). Minimum degree reordering is the symmetric analog of Markowitz reordering, whereby rows and columns are reordered so that the non-zero elements concentrate in the lower right-hand corner, where they have less effect on the zero elements during elimination (Amestoy 1996). This creates a new difficulty: whether it is appropriate to reorder a matrix without prior knowledge of the matrix. Matrices often represent the coefficients in systems of equations; if the rows and columns are moved, then the user needs to know how they were moved in order to adjust both the solution vector and the variable vector. A symmetric matrix can also represent a graph composed of vertices and edges, where the entries of the matrix record adjacencies between pairs of vertices, and the user would want to know which rows and columns were swapped in order to preserve the original graph. Therefore, the user needs to know the final position of every row and column; a sketch of this bookkeeping follows.
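
SciPy does not expose a minimum degree ordering directly, so the sketch below (our illustration) uses the reverse Cuthill-McKee ordering from scipy.sparse.csgraph as a stand-in; the bookkeeping, permuting the right-hand side along with the rows and then un-permuting the solution, is the same for any symmetric reordering:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee
    from scipy.sparse.linalg import spsolve

    # The same arrowhead matrix as before, with a right-hand side.
    n = 8
    A = sp.lil_matrix((n, n))
    A.setdiag(4.0); A[0, :] = 1.0; A[:, 0] = 1.0; A[0, 0] = float(n)
    A = A.tocsr()
    rhs = np.arange(1.0, n + 1.0)

    perm = reverse_cuthill_mckee(A, symmetric_mode=True)
    Ap = A[perm, :][:, perm]       # symmetric reordering P A P^T
    rp = rhs[perm]                 # the right-hand side follows the rows

    xp = spsolve(Ap.tocsc(), rp)   # solve the permuted system
    x = np.empty_like(xp)
    x[perm] = xp                   # undo the column permutation
    print(np.allclose(A @ x, rhs)) # True: original system recovered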

By January, it became apparent that we were a long way from writing our own code and finding a new method for taking the inverse of large sparse symmetric matrices. The next step was to gather as much information as possible on sparse symmetric matrices. For several weeks we reviewed and discussed recently published articles while talking about the direction the project was taking. As most of the articles contained complicated terminology and subject matter, this stage proved slower than planned. One article of particular interest found approximate L-D-U factors by dropping elements whose absolute value was smaller than a preset threshold: if an element was numerically close to zero, it was ignored and assumed to be zero. The inverses of the factors then yielded an approximate inverse of the original matrix (Benzi and Tuma 2000). We determined that the number of operations needed was still computationally significant and comparable to the other methods; however, this approach did decrease the storage requirements of the final matrix.
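
To illustrate the dropping idea (this is our toy dense sketch, not Benzi and Tuma's actual algorithm, which works on sparse storage and is considerably more sophisticated), the factorization below simply zeroes any multiplier or fill-in entry whose absolute value falls below a threshold:

    import numpy as np

    def ldu_with_dropping(A, tol):
        # Gaussian elimination that drops entries smaller than tol,
        # yielding approximate factors with A ~ L D U.
        n = A.shape[0]
        U = A.astype(float).copy()
        L = np.eye(n)
        for k in range(n - 1):
            for i in range(k + 1, n):
                m = U[i, k] / U[k, k]
                if abs(m) < tol:
                    m = 0.0                    # drop small multiplier
                L[i, k] = m
                U[i, k:] -= m * U[k, k:]
                U[i, k] = 0.0
            U[k, k+1:][np.abs(U[k, k+1:]) < tol] = 0.0  # drop small fill
        D = np.diag(np.diag(U))
        U = U / np.diag(U)[:, None]            # unit upper-triangular factor
        return L, D, U

    # Diagonally dominant test matrix; inverting the (sparser) factors
    # gives an approximate inverse of A.
    rng = np.random.default_rng(0)
    A = rng.random((6, 6)) * 0.1 + np.eye(6) * 3.0
    L, D, U = ldu_with_dropping(A, tol=0.02)
    Ainv = np.linalg.inv(U) @ np.linalg.inv(D) @ np.linalg.inv(L)
    print(np.linalg.norm(np.eye(6) - A @ Ainv))   # small residual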

By late March and early April, we discussed storage schemes, such as those used for the Harwell-Boeing matrices, and their effects on known algorithms. Because the algorithms iterate over both zero and nonzero entries, one obstacle was locating the next element that the algorithm specified (Duff 2000). With the idea of first simplifying to large symmetric matrices and only later narrowing to storage schemes for sparse systems, we then researched methods for full symmetric matrices. This meant reviewing the LAPACK routines, looking in particular at Cholesky factorization and the subsequent computation of the inverse.
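
Two short sketches (both our illustrations) tie these threads together. The first prints the three arrays of the compressed-column storage scheme that the Harwell-Boeing format is built on, which shows why stepping to "the next element" of a row is awkward; the second inverts a full symmetric positive definite matrix the LAPACK way, DPOTRF (Cholesky factorization) followed by DPOTRI (inverse from the factor), through SciPy's LAPACK bindings:

    import numpy as np
    import scipy.sparse as sp
    from scipy.linalg import lapack

    # Compressed-column storage: values, row indices, column pointers.
    # Traversing a column is direct; traversing a row is not.
    A = sp.csc_matrix(np.array([[4.0, 1.0, 0.0],
                                [1.0, 4.0, 2.0],
                                [0.0, 2.0, 4.0]]))
    print('values:         ', A.data)
    print('row indices:    ', A.indices)
    print('column pointers:', A.indptr)

    # LAPACK-style inversion of a full symmetric positive definite
    # matrix: DPOTRF factors A = U^T U; DPOTRI builds the inverse from
    # the factor, filling only the upper triangle, so mirror it after.
    c, info = lapack.dpotrf(A.toarray(), lower=0)
    assert info == 0
    ainv, info = lapack.dpotri(c, lower=0)
    assert info == 0
    ainv = np.triu(ainv) + np.triu(ainv, 1).T
    print(np.allclose(A.toarray() @ ainv, np.eye(3)))   # True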

With both of us carrying full academic loads, the research ended without the expected final result. We have discussed what we would change if we did this project again. First, we should have spent less time on the introduction and preparation. While it was necessary for everyone involved to understand the previous classical solutions and their associated problems, we could have accomplished more if the preparatory work had been done during the summer months. Second, time was a factor; there should have been three meetings per week: a conference with the advisor, a meeting with each other for group research where we could immediately collate our ideas, and a final meeting to discuss outside research. Increasing the number of meetings would have shortened the turnaround time for ideas. As it was, with two meetings, we presented ideas and new concepts twice a week, once with each other and once with the advisor. An associated problem was the number of difficult classes we were both taking this semester; time management increasingly became an issue as the project continued. It probably would have been better to do the project during a less academically stressful year.

As much as we now wish we had changed these things, this project was still a complete success in terms of our own interest in the field. We both now plan, without hesitation, to attend graduate school, have been awakened to an entirely new realm of research, and are more confident in our ability to learn and discover. Furthermore, this research brought new opportunities, such as attending the Supercomputing 2000 conference and an invitation to intern in Alaska. The project also increased our understanding of the field: we grappled with issues such as memory size and storage schemes, and we applied knowledge from our previous mathematics and computer science classes.

Although we did not complete a program for finding the inverse of large sparse block-symmetric matrices, the above personal gains enable us to pursue original research in our future endeavors. We wish to thank the CREW Program and especially our advisor, Dr. Morton, for this opportunity.

Bibliography

Amestoy, Patrick R., et al. "An Approximate Minimum Degree Ordering Algorithm." Siam Journal on Matrix Analysis and Applications. vol. 17, no. 4. Oct 01, 1996. p. 886.

Amestoy, Patrick R., et al. "J. Matrix Analysis and Applications." CISC Dept, University of Florida. Available at: http://www.cisc.ufl.edu/~davis .

Anderson, E., et al. LAPACK Users' Guide, Third Edition. Available at: http://www.netlib.org/lapack/lug/lapack_lug.html . Last updated: August 22, 1999.

Benzi, Michele, and Tuma, Miroslav. "Orderings for Factorized Sparse Approximate Inverse Preconditioners." SIAM Journal on Scientific Computing, vol. 21, no. 5, 2000, pp. 1851-1868.

Chow, Edmund. "A Priori Sparsity Patterns for Parallel Sparse Approximate Inverse Preconditioners." SIAM Journal on Scientific Computing, vol. 21, no. 5, 2000, pp. 1804-1822.

Dongarra, Jack J., et al. Numerical Linear Algebra for High Performance Computers. Society for Industrial and Applied Mathematics, 1998.

Dongarra, Jack J., et al. "Tutorial M7: High Speed Numerical Linear Algebra: Algorithms and Research Directions." Tutorial presented at the Supercomputing 2000 conference, Dallas, November 7, 2000.

Duff, Iain S., et al. "Users' Guide for the Harwell-Boeing Sparse Matrices." HSL 2000. Available at: http://www.cse.ac.uk/Activity/HSL/ . Last updated: September 30, 2000.

Flannery, Brian P., et al. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, New York, 1996, pp. 19-76, 96-98.

George, Alan, and Liu, Joseph W. H. "The Evolution of the Minimum Degree Ordering Algorithm." SIAM Review, vol. 31, no. 1, March 1989, pp. 1-19.

Gol'dshtein, Yu. B. "Portrait of the Inverse of a Sparse Matrix." Cybernetics and Systems Analysis, vol. 28, no. 4, July 1992.

Hua, Dai. "Completing a Symmetric 2x2 Block Matrix and Its Inverse." Linear Algebra and Its Applications, vol. 235, March 1996, p. 235.

Jones, Mark T., and Plassmann, Paul E. "BlockSolve v2.0: Scalable Library Software for the Parallel Solution of Sparse Linear Systems." ANL Report 92-46, October 1994.

Sherman, Andrew H. "Algorithm 533: NSPIV, A Fortran Subroutine for Sparse Gaussian Elimination with Partial Pivoting." ACM Transactions on Mathematical Software, vol. 4, no. 4, December 1978, pp. 391-398.

Van der Vorst, Henk A. "Parallel Iterative Solution Methods for Linear Systems Arising from Discretized PDEs." Available at: http://citeseer.nj.nec.com/update/41005 . Accessed: February 27, 2001.