I am trying to generate code coverage for a big C++ project using LLVM tools on Windows, but all I can get is one-level-deep coverage. Is that normal behaviour, or am I doing something wrong? Here's the setup. I have two classes:
```cpp
class A
{
public:
    bool foo() { return true; }
};

class B
{
private:
    A kung;

public:
    bool bar() { return kung.foo(); }
};
```
And I use Google Test for unit testing everything:
```cpp
TEST(Suite, Name)
{
    B ba;
    EXPECT_TRUE(ba.bar());
}
```
The project is configured and generated with CMake/Ninja and compiled with clang-cl with the following flags: `-fprofile-instr-generate -fcoverage-mapping`. Compilation is done in Release, with all optimisations off (`-Xclang -O0`).
Running the test creates the expected `.profraw` file, which I then feed into grcov to generate HTML and Markdown reports:

```shell
grcov ./bin --llvm --branch --llvm-path <MS_LLVM_PATH> -b ./bin -s ./Sources -t html,markdown --ignore-not-existing -o .
```
The generated report shows only 50% coverage: `B::bar()` is reported as called, but `A::foo()` is not.
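To rule out grcov itself, the same `.profraw` data can be inspected with the bare LLVM tools. A sketch, assuming the default profile file name and a hypothetical test binary name `unit_tests.exe`:

```shell
# Merge the raw profile into an indexed profile
llvm-profdata merge -sparse default.profraw -o default.profdata

# Line-by-line source annotation with execution counts
llvm-cov show ./bin/unit_tests.exe -instr-profile=default.profdata

# Per-file/per-function coverage summary
llvm-cov report ./bin/unit_tests.exe -instr-profile=default.profdata
```

If `llvm-cov` also reports `A::foo()` as uncovered, the problem is in the instrumentation or build setup rather than in the report generator.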
Is this behavior correct and expected? Am I doing something wrong? Is there a bug in the tools I am using?
PS: I also tried the `--coverage` compiler option, which generates `.gcno` files at compile time and `.gcda` files after running the tests. But that gives even weirder results in the report, with a lot of blank lines in the code browser (so neither covered nor uncovered!).
Turns out I am an idiot :)
I am building both a static and a dynamic version of all my libraries, and I was only generating coverage information for the dynamic libraries, while most of the unit tests currently link against the static ones.
All is well now.
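For anyone hitting the same thing: one way to make sure every target (static and dynamic alike) gets instrumented is to pass the flags at configure time instead of attaching them to individual targets. A minimal sketch, assuming an out-of-source configure from a `build` directory (paths, generator, and the cmd-style `^` line continuations are assumptions):

```shell
cmake -G Ninja ^
  -DCMAKE_CXX_COMPILER=clang-cl ^
  -DCMAKE_BUILD_TYPE=Release ^
  -DCMAKE_CXX_FLAGS="-fprofile-instr-generate -fcoverage-mapping -Xclang -O0" ^
  ..
```

Because `CMAKE_CXX_FLAGS` applies to every C++ target in the build, the static libraries emit coverage mappings too, and functions reached through them (like `A::foo()` above) show up in the report.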