Tags: c++, caching, profiling, vectorization, papi

Increased number of cache misses when vectorizing code


I vectorized the dot product between two vectors with SSE 4.2 and AVX 2, as you can see below. The code was compiled with GCC 4.8.4 and the -O2 optimization flag. As expected, performance improved with both (and AVX 2 was faster than SSE 4.2), but when I profiled the code with PAPI, I found that the total number of cache misses (mainly L1 and L2) increased a lot:

Without Vectorization:

PAPI_L1_TCM: 784,112,091
PAPI_L2_TCM: 195,315,365
PAPI_L3_TCM: 79,362

With SSE 4.2:

PAPI_L1_TCM: 1,024,234,171
PAPI_L2_TCM: 311,541,918
PAPI_L3_TCM: 68,842

With AVX 2:

PAPI_L1_TCM: 2,719,959,741
PAPI_L2_TCM: 1,459,375,105
PAPI_L3_TCM: 108,140

Might there be something wrong with my code, or is this kind of behavior normal?
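(For context, counter values like the ones above can be collected with PAPI's classic high-level counter API. The harness below is only a minimal sketch of that approach - the measure wrapper and the kernel callback are placeholders, not the exact measurement code used for these numbers.)

#include <papi.h>
#include <cstdio>
#include <cstdlib>

// Hypothetical harness: read total cache-miss counters around one call to the
// code under test (kernel stands in for any of the dot-product versions below).
static void measure(void (*kernel)()) {
    int events[3] = { PAPI_L1_TCM, PAPI_L2_TCM, PAPI_L3_TCM };
    long long counts[3];

    if (PAPI_start_counters(events, 3) != PAPI_OK) {
        std::fprintf(stderr, "PAPI_start_counters failed\n");
        std::exit(EXIT_FAILURE);
    }

    kernel();   // run the code being measured

    if (PAPI_stop_counters(counts, 3) != PAPI_OK) {
        std::fprintf(stderr, "PAPI_stop_counters failed\n");
        std::exit(EXIT_FAILURE);
    }

    std::printf("PAPI_L1_TCM: %lld\n", counts[0]);
    std::printf("PAPI_L2_TCM: %lld\n", counts[1]);
    std::printf("PAPI_L3_TCM: %lld\n", counts[2]);
}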

AVX 2 code:

double vec_dotProduct(const vec& vecs, const unsigned int& start_a, const unsigned int& start_b, const int& n) {
    double dot = 0;
    register int i = 0;
    const int loopBound = n-3;   // main loop consumes 4 doubles per iteration

    __m256d vsum, vecPi, vecCi, vecQCi;

    vsum = _mm256_set1_pd(0);

    double * const pA = vecs.x+start_a ;
    double * const pB = vecs.x+start_b ;

    // multiply 4 pairs of doubles at a time and accumulate into vsum
    for( ; i<loopBound ;i+=4){
        vecPi  = _mm256_loadu_pd(&(pA)[i]);
        vecCi  = _mm256_loadu_pd(&(pB)[i]);
        vecQCi = _mm256_mul_pd(vecPi,vecCi);
        vsum   = _mm256_add_pd(vsum,vecQCi);
    }

    // horizontal reduction: after hadd, lanes 0 and 2 hold the two partial sums
    vsum = _mm256_hadd_pd(vsum, vsum);

    dot = ((double*)&vsum)[0] + ((double*)&vsum)[2];

    // scalar tail for the remaining 0-3 elements
    for( ; i<n; i++)
        dot += pA[i] * pB[i];

    return dot;
}

SSE 4.2 code:

double vec_dotProduct(const vec& vecs, const unsigned int& start_a, const unsigned int& start_b, const int& n) {
    double dot = 0;
    register int i = 0;

    const int loopBound = n-1;   // main loop consumes 2 doubles per iteration

    __m128d vsum, vecPi, vecCi, vecQCi;

    vsum = _mm_set1_pd(0);

    double * const pA = vecs.x+start_a ;
    double * const pB = vecs.x+start_b ;

    // multiply 2 pairs of doubles at a time and accumulate into vsum
    // (note: _mm_load_pd requires 16-byte aligned data, unlike the unaligned
    // loads used in the AVX 2 version)
    for( ; i<loopBound ;i+=2){
        vecPi  = _mm_load_pd(&(pA)[i]);
        vecCi  = _mm_load_pd(&(pB)[i]);
        vecQCi = _mm_mul_pd(vecPi,vecCi);
        vsum   = _mm_add_pd(vsum,vecQCi);
    }

    // horizontal add leaves the total in both halves of vsum
    vsum = _mm_hadd_pd(vsum, vsum);

    _mm_storeh_pd(&dot, vsum);

    // scalar tail for the last element when n is odd
    for( ; i<n; i++)
        dot += pA[i] * pB[i];

    return dot;
}

Non-vectorized code:

double dotProduct(const vec& vecs, const unsigned int& start_a, const unsigned int& start_b, const int& n) {
    double dot = 0;
    register int i = 0;

    for (i = 0; i < n; ++i)
    {
        dot += vecs.x[start_a+i] * vecs.x[start_b+i];
    }
    return dot;
}
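
For reference, all three functions above assume a vec type whose member x points to a single contiguous array of doubles holding both input vectors (indexed via start_a and start_b). The actual definition is not shown in the question; a minimal layout consistent with the code would be:

// Assumed minimal layout of vec - only the x member is used by the code above.
struct vec {
    double* x;   // contiguous storage for the vector data
};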

Edit: Assembly of the non-vectorized code:

   0x000000000040f9e0 <+0>:     mov    (%rcx),%r8d
   0x000000000040f9e3 <+3>:     test   %r8d,%r8d
   0x000000000040f9e6 <+6>:     jle    0x40fa1d <dotProduct(vec const&, unsigned int const&, unsigned int const&, int const&)+61>
   0x000000000040f9e8 <+8>:     mov    (%rsi),%eax
   0x000000000040f9ea <+10>:    mov    (%rdi),%rcx
   0x000000000040f9ed <+13>:    mov    (%rdx),%edi
   0x000000000040f9ef <+15>:    vxorpd %xmm0,%xmm0,%xmm0
   0x000000000040f9f3 <+19>:    add    %eax,%r8d
   0x000000000040f9f6 <+22>:    sub    %eax,%edi
   0x000000000040f9f8 <+24>:    nopl   0x0(%rax,%rax,1)
   0x000000000040fa00 <+32>:    mov    %eax,%esi
   0x000000000040fa02 <+34>:    lea    (%rdi,%rax,1),%edx
   0x000000000040fa05 <+37>:    add    $0x1,%eax
   0x000000000040fa08 <+40>:    vmovsd (%rcx,%rsi,8),%xmm1
   0x000000000040fa0d <+45>:    cmp    %r8d,%eax
   0x000000000040fa10 <+48>:    vmulsd (%rcx,%rdx,8),%xmm1,%xmm1
   0x000000000040fa15 <+53>:    vaddsd %xmm1,%xmm0,%xmm0
   0x000000000040fa19 <+57>:    jne    0x40fa00 <dotProduct(vec const&, unsigned int const&, unsigned int const&, int const&)+32>
   0x000000000040fa1b <+59>:    repz retq 
   0x000000000040fa1d <+61>:    vxorpd %xmm0,%xmm0,%xmm0
   0x000000000040fa21 <+65>:    retq   

Edit 2: Below you can find a comparison of L1 cache misses between the vectorized and the non-vectorized code for larger N (N on the x-axis and L1 cache misses on the y-axis). Basically, for larger N there are still more misses in the vectorized version than in the non-vectorized version.

[Plot: L1 cache misses vs. N for the vectorized and non-vectorized versions]


Solution

  • Rostislav is right that the compiler is auto-vectorizing. From the GCC documentation on -O2:

    "-O2 Optimize even more. GCC performs nearly all supported optimizations that do not involve a space-speed tradeoff." (from here: https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html)

    With the -O2 flag, GCC attempts to generate efficient code without favoring either code size or speed.

    So the -O2 auto-vectorized code is a compromise: it will not be the fastest or the smallest possible code, but it tends to be economical in CPU cycles and power, which is the right trade-off for code running on mobile devices and multi-user systems. If you want absolute maximum speed regardless of power consumption, try -O3 or -Ofast if your version of GCC supports them, or go with your hand-optimized, faster versions.

    The increase in cache misses is likely a combination of two factors.

    First, faster code generates more memory/cache requests in the same amount of time, which puts more pressure on the prefetch predictors. The L1 cache is small, typically around 32 KB of data cache per core, and it is shared by everything running on that core, so previously prefetched lines can be evicted before they are reused. When the code runs faster there is less time to prefetch ahead of the loads, and in code that pipelines well, more cache misses are outstanding before the core finally stalls waiting for the pending fetches to complete. (One way to probe this is to add explicit software prefetch hints to the hand-written loop; see the sketch after this answer.)

    And second, modern operating systems typically divide single-threaded processes among multiple cores by adjusting thread affinity dynamically, in order to make use of the extra cache across multiple cores, even though it cannot run any of the code in parallel - e.g. fill core 0's cache with your data and then run it while filling core 1's cache, then run on core 1 while refilling core 0's cache, round-robin until completed. This pseudo-parallelism improves the overall speed of single-threaded processes and should greatly reduce cache misses, but can only be done in very specific circumstances... specific circumstances for which good compilers will generate code whenever possible.