I am using Visual Studio 2022 on Windows 10. My processor: Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz 1.80 GHz. Here is the code:
#include <vector>
#include <iostream>
#include <cstdlib>   // rand()
#include <time.h>
using namespace std;

void func(int* A, int na)
{
    for (int k = 0; k < na; k++)
        for (int i = 0; i < na; i++)
            for (int j = 0; j < na; j++)
                A[j] = A[j] + 1;
}

int main()
{
    int na = 5000;
    int* aint = new int[na]();   // value-initialize so we don't read uninitialized memory
    func(aint, na);
    cout << aint[rand() % na];   // print one element so the work can't be optimized away
    delete[] aint;
}
MSVC options: cl /c /Zi /W3 /WX- /diagnostics:column /sdl /O2 /Oi /GL /D NDEBUG /D _CONSOLE /D _UNICODE /D UNICODE /Gm- /EHsc /MD /GS /Gy /arch:AVX2 /Zc:wchar_t /Zc:forScope /Zc:inline /permissive- /Fo"x64\Release\" /Fd"x64\Release\vc143.pdb" /external:W3 /Gd /TP /FC /errorReport:prompt /Qvec-report:2 Source.cpp
MSVC does not vectorize the j-loop. The log says the loop is not vectorized, reason code 1300: "Loop body contains little or no computation". MSVC CPU time is 17 seconds, and according to Intel Advisor the j-loop is not vectorized. With the Intel 2024 C/C++ compiler, CPU time is 7 seconds and Intel Advisor shows the j-loop vectorized with AVX2 instructions. Am I missing something I should set in MSVC options, or is MSVC 2022 just that bad at automatic vectorization? Could you please give an example where MSVC does use auto-vectorization? For all the examples I've tried, MSVC does not vectorize.
I tried specifying different enhanced instruction sets, but none of them helped.
Your loop is weird; the best optimization would be to unroll over the outer loops so there's one loop that does A[j] += na*na. If you wrote the source that way, I'd expect compilers to auto-vectorize.

With your current source, Clang auto-vectorizes without rearranging the loops, and only adds 1 at a time. GCC does a mix of both for the stand-alone func where na is a runtime variable.
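For concreteness, here's the kind of rewrite I mean (a sketch with the same signature; it assumes na*na doesn't overflow int, which holds for na = 5000):

void func(int* A, int na)
{
    // In the original triple loop every element ends up incremented na*na times,
    // so one pass over the array with a precomputed addend is equivalent.
    int add = na * na;              // 25,000,000 for na = 5000; still fits in int
    for (int j = 0; j < na; j++)
        A[j] += add;
}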
GCC doesn't auto-vectorize when inlining func into your main, where na = 5000 is a compile-time constant. But it interchanges some loops so it's doing add edx, 4 in the inner loop, with sub eax, 2 / jnz as the loop condition, starting with eax = 2500 as the inner loop counter. It loads and stores the array only in the middle loop, with the load hoisted and the store sunk out of the inner loop. (The middle loop runs 5000 iterations, like the source.) Only the outer loop actually increments the pointer, so there are 5000 loads/stores of the same array element between pointer increments. (The outer loop condition is a pointer compare against a pointer to one-past-the-end of the array, from lea rdi, [rax+20000].)
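In source terms, my reading of that inlined asm is roughly the following shape (an interpretation for illustration; the function name is mine, not anything GCC produces):

// Source-level picture of GCC's inlined code, reconstructed from the asm above.
void inlined_shape_sketch(int* A)              // hypothetical; A points to 5000 ints
{
    int* end = A + 5000;                       // from lea rdi, [rax+20000]
    for (int* p = A; p != end; p++)            // outer loop: pointer compare vs. one-past-the-end
    {
        for (int mid = 0; mid < 5000; mid++)           // middle loop: 5000 iterations, like the source
        {
            int tmp = *p;                              // load, only in the middle loop
            for (int cnt = 2500; cnt != 0; cnt -= 2)   // sub eax, 2 / jnz
                tmp += 4;                              // add edx, 4
            *p = tmp;                                  // store sunk out of the inner loop
        }
    }
}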
In the stand-alone non-inline version of func, GCC unrolls k or i by 2 and vectorizes over j, without interchanging any loops, so it's doing A[j + 0..7] += 2; in nested loops, with the inner loop running over the whole array each time through the outer loops.
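Scalar sketch of what that computes (my illustration; assumes an even trip count and omits the cleanup code GCC also emits):

void func_standalone_sketch(int* A, int na)   // hypothetical name
{
    for (int k = 0; k < na; k++)
        for (int i = 0; i < na; i += 2)       // k or i unrolled by 2
            for (int j = 0; j < na; j++)
                A[j] += 2;                    // done 8 ints at a time with vpaddd: A[j + 0..7] += 2
}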
GCC gives main an implicit __attribute__((cold)), but we get the same code when inlining into a void foo() as into int main(), so the key difference appears to be that na is a compile-time constant, not the cold attribute (which makes GCC optimize less and/or favour size a bit more than speed).
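If you want to reproduce that comparison, the experiment is just the same body in a non-main function in the same translation unit, something like this (hypothetical harness; foo is my name for it):

void func(int* A, int na);   // paste the question's definition here, same file so it can inline

void foo()                   // not main: no implicit __attribute__((cold)) from GCC
{
    int na = 5000;           // still a compile-time constant, exactly as in main
    int* aint = new int[na]();
    func(aint, na);
    delete[] aint;
}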
MSVC does auto-vectorize with size_t j (with -O2 -arch:AVX2), so perhaps it failed to prove something about 32-bit j sign-extending to 64-bit pointer width? Hopefully MSVC does know that the j < na condition makes the loop definitely non-infinite, so it can avoid sign-extension inside the inner loop. On Godbolt with MSVC 19.40, the only movsxd instruction in func is at the top of the function to sign-extend na, and the inner loop uses 64-bit rcx as a counter (for no reason; the count can't be higher than 2^31 - 1, so that REX prefix is a waste of code size). I linked the int version; uncomment the size_t line instead to see the asm for that. (And scroll to the bottom; MSVC spews a lot of stuff due to including <iostream>. I also included GCC and Clang -O2 -march=x86-64-v3, which is equivalent to your MSVC command: full optimization with AVX2+FMA3+BMI1/2.)
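The size_t variant I mean only changes the inner loop's index type, something like this (a sketch of the change, not the exact source from my Godbolt link):

#include <cstddef>   // size_t

void func(int* A, int na)
{
    for (int k = 0; k < na; k++)
        for (int i = 0; i < na; i++)
            for (size_t j = 0; j < (size_t)na; j++)   // size_t index: this version MSVC does vectorize
                A[j] = A[j] + 1;
}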
Other compilers don't care about int vs. size_t j, unless there's a difference I didn't notice in the asm.