# How does BLAS get such extreme performance?

Out of curiosity I decided to benchmark my own matrix multiplication function against the BLAS implementation... I was, to say the least, surprised at the result:

Custom Implementation, 10 trials of 1000x1000 matrix multiplication:

`Took: 15.76542 seconds.`

BLAS Implementation, 10 trials of 1000x1000 matrix multiplication:

`Took: 1.32432 seconds.`

This is using single precision floating point numbers.

My Implementation:

```cpp
#include <cstring>    // std::memset
#include <stdexcept>  // std::runtime_error

// Naive column-major matrix product: C = A*B,
// where A is ADim1 x ADim2 and B is BDim1 x BDim2.
template<class ValT>
void mmult(const ValT* A, int ADim1, int ADim2,
           const ValT* B, int BDim1, int BDim2, ValT* C)
{
    if (ADim2 != BDim1)
        throw std::runtime_error("Error sizes off");
    std::memset((void*)C, 0, sizeof(ValT)*ADim1*BDim2);
    for (int cc2 = 0; cc2 < BDim2; ++cc2)
        for (int cc1 = 0; cc1 < ADim2; ++cc1)
            for (int cr1 = 0; cr1 < ADim1; ++cr1)
                C[cc2*ADim2+cr1] += A[cc1*ADim1+cr1] * B[cc2*BDim1+cc1];
}
```

I have two questions:

- A matrix-matrix multiplication of an n×m matrix by an m×n matrix requires n*n*m multiplications, so in the case above 1000^3 or 1e9 operations. How is it possible for BLAS to do 10*1e9 operations in 1.32 seconds on my 2.6 GHz processor? Even if each multiplication were a single cycle and nothing else were being done, it should take ~4 seconds.
- Why is my implementation so much slower?

# Solution

A good starting point is the great book The Science of Programming Matrix Computations by Robert A. van de Geijn and Enrique S. Quintana-Ortí. They provide a free download version.

BLAS is divided into three levels:

Level 1 defines a set of linear algebra functions that operate on vectors only. These functions benefit from vectorization (e.g. from using SIMD such as SSE).

Level 2 functions are matrix-vector operations, e.g. some matrix-vector product. These functions could be implemented in terms of Level 1 functions. However, you can boost the performance of these functions if you can provide a dedicated implementation that makes use of some multiprocessor architecture with shared memory.

Level 3 functions are operations like the matrix-matrix product. Again, you could implement them in terms of Level 2 functions. But Level 3 functions perform O(N^3) operations on O(N^2) data, so if your platform has a cache hierarchy you can boost performance by providing a dedicated implementation that is *cache optimized/cache friendly*. This is nicely described in the book. The main boost of Level 3 functions comes from cache optimization; this boost significantly exceeds the second boost from parallelism and other hardware optimizations.

By the way, most (or even all) of the high-performance BLAS implementations are NOT implemented in Fortran. ATLAS is implemented in C. GotoBLAS/OpenBLAS is implemented in C, with its performance-critical parts in assembler. Only the reference implementation of BLAS is implemented in Fortran. However, all these BLAS implementations provide a Fortran interface so that they can be linked against LAPACK (LAPACK gains all its performance from BLAS).

Optimized compilers play a minor role in this respect (and for GotoBLAS/OpenBLAS the compiler does not matter at all).

IMHO no BLAS implementation uses algorithms like the Coppersmith–Winograd algorithm or the Strassen algorithm. The likely reasons are:

- Maybe it's not possible to provide a cache-optimized implementation of these algorithms (i.e. you would lose more than you would gain)
- These algorithms are numerically not stable. As BLAS is the computational kernel of LAPACK this is a no-go.
- Although these algorithms have a nice time complexity on paper, the Big O notation hides a large constant, so it only starts to become viable for extremely large matrices.

Edit/Update:

The new and groundbreaking papers for this topic are the BLIS papers. They are exceptionally well written. For my lecture "Software Basics for High Performance Computing" I implemented the matrix-matrix product following their paper. Actually I implemented several variants of the matrix-matrix product. The simplest variant is entirely written in plain C and has less than 450 lines of code. All the other variants merely optimize the loops

```c
for (l=0; l<MR*NR; ++l) {
    AB[l] = 0;
}
for (l=0; l<kc; ++l) {
    for (j=0; j<NR; ++j) {
        for (i=0; i<MR; ++i) {
            AB[i+j*MR] += A[i]*B[j];
        }
    }
    A += MR;
    B += NR;
}
```

The overall performance of the matrix-matrix product *only* depends on these loops. About 99.9% of the time is spent here. In the other variants I used intrinsics and assembler code to improve the performance. You can see the tutorial going through all the variants here:

ulmBLAS: Tutorial on GEMM (Matrix-Matrix Product)

Together with the BLIS papers it becomes fairly easy to understand how libraries like Intel MKL can gain such performance, and why it does not matter whether you use row-major or column-major storage!

The final benchmarks are here (we called our project ulmBLAS):

Benchmarks for ulmBLAS, BLIS, MKL, openBLAS and Eigen

Another Edit/Update:

I also wrote some tutorials on how BLAS is used for numerical linear algebra problems like solving a system of linear equations:

High Performance LU Factorization

(This LU factorization is for example used by Matlab for solving a system of linear equations.)

~~I hope to find time~~ to extend the tutorial to describe and demonstrate how to realise a highly scalable parallel implementation of the LU factorization like in PLASMA.

Ok, here you go: Coding a Cache Optimized Parallel LU Factorization

P.S.: I also did some experiments on improving the performance of uBLAS. It actually is pretty simple to boost (yeah, play on words :) ) the performance of uBLAS:

Here is a similar project with BLAZE:
