Representing sparse binary matrices as straight-line programs for fast matrix-vector multiplication
Authors
Abstract
Sparse matrix-vector multiplication dominates the performance of many scientific and industrial problems. For example, iterative methods for solving linear systems rely on the performance of this critical operation. The particular case of binary matrices shows up in many important areas of computing, such as graph theory and cryptography. Unfortunately, irregular memory access patterns cause poor memory throughput, slowing down this operation. To address this, we transform the matrix into a straight-line program that takes full advantage of the instruction cache. The regular, loop-free structure of the program minimizes cache misses, thus reducing the latency of most instructions. We focus on the widely used {\tt x86\_64} architecture and on binary matrices, exploring several possible tradeoffs regarding memory access policies and code size. When compared to a Compressed Row Storage (CRS) implementation, we obtain a 20\% performance improvement on a sparse binary matrix with $5426753$ rows and weight (number of nonzero entries) $370909586$.
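To make the idea of the abstract concrete, the following is a minimal sketch (not the paper's implementation) of how a sparse binary matrix stored in CRS form could be compiled into a loop-free, straight-line matrix-vector product: one statement per row, referencing only the column indices of that row's nonzero entries. Arithmetic over GF(2) (XOR) is an assumption here; an ordinary integer sum would be emitted analogously. The type and function names ({\tt crs\_binary}, {\tt emit\_slp}) are hypothetical.

\begin{verbatim}
/* Illustrative sketch: emit a straight-line SpMV routine as C source
 * from a binary matrix in CRS form. GF(2) (XOR) arithmetic is assumed. */
#include <stdio.h>
#include <stddef.h>

/* CRS representation of a binary matrix: only column indices are stored,
 * since every nonzero entry equals 1. */
typedef struct {
    size_t n_rows;
    const size_t *row_ptr;  /* length n_rows + 1 */
    const size_t *col_idx;  /* length row_ptr[n_rows] */
} crs_binary;

/* Print a loop-free function computing y = A * x, one statement per row. */
static void emit_slp(const crs_binary *A, FILE *out)
{
    fprintf(out, "void matvec(const unsigned long *x, unsigned long *y)\n{\n");
    for (size_t i = 0; i < A->n_rows; ++i) {
        fprintf(out, "    y[%zu] = ", i);
        if (A->row_ptr[i] == A->row_ptr[i + 1]) {
            fprintf(out, "0");  /* empty row */
        } else {
            for (size_t k = A->row_ptr[i]; k < A->row_ptr[i + 1]; ++k) {
                if (k > A->row_ptr[i])
                    fprintf(out, " ^ ");
                fprintf(out, "x[%zu]", A->col_idx[k]);
            }
        }
        fprintf(out, ";\n");
    }
    fprintf(out, "}\n");
}

int main(void)
{
    /* Toy 3x5 binary matrix:
     * row 0: nonzeros in columns 1, 4
     * row 1: nonzeros in columns 0, 2, 3
     * row 2: nonzero in column 4        */
    const size_t row_ptr[] = {0, 2, 5, 6};
    const size_t col_idx[] = {1, 4, 0, 2, 3, 4};
    const crs_binary A = {3, row_ptr, col_idx};

    /* The generated code is compiled once and then reused for every
     * input vector x, replacing the CRS traversal loop entirely. */
    emit_slp(&A, stdout);
    return 0;
}
\end{verbatim}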