Although the Zig compiler doesn't have built-in support for generating code coverage information, it is still possible to generate it (on Linux, at least). There may be other options, but this post will focus on two tools:
- kcov, which adds breakpoints on each line of code to generate coverage information
- grindcov, which uses Valgrind's Callgrind tool to instrument code at runtime to generate coverage information
(note: other tools that similarly don't rely on compile-time instrumentation can likely be used/integrated in the same way as detailed in this post)
Coverage for compiled executables
This is the simplest case. As long as the executable is compiled with debug information, it's as simple as running the executable with the chosen coverage tool. For example, if you had a main.zig consisting of:
const std = @import("std");

pub fn main() !void {
    var args_it = std.process.args();
    std.debug.assert(args_it.skip());
    const arg = args_it.nextPosix() orelse "goodbye";
    if (std.mem.eql(u8, arg, "hello")) {
        std.debug.print("hello!\n", .{});
    } else {
        std.debug.print("goodbye!\n", .{});
    }
}
Then generating coverage information would be done by:
$ zig build-exe main.zig
# with kcov
$ kcov kcov-output ./main hello
hello!
# or with grindcov
$ grindcov -- ./main hello
Results for 1 source files generated in directory 'coverage'
File             Covered LOC    Executable LOC    Coverage
--------------   -----------    --------------    --------
main.zig         6              7                 85.71%
--------------   -----------    --------------    --------
Total            6              7                 85.71%
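grindcov prints its summary to the terminal (as shown above) and writes per-file results into the 'coverage' directory, while kcov produces an HTML report inside its output directory. As a rough sketch (assuming kcov's usual output layout, which may differ between versions), viewing the kcov report could look like:

$ xdg-open kcov-output/index.html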
Coverage for tests using zig test
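For the examples in this section, assume a test.zig along these lines (a hypothetical, minimal file; any file containing tests will do):

const std = @import("std");

test "example" {
    std.debug.assert(1 + 1 == 2);
}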
Tests in Zig are handled a bit differently, since the actual test binary is considered temporary and only lives in zig-cache. There are two options here:
The first (and more manual) option is to use the --enable-cache flag to get the path to the directory that the test binary is created in, and then use that path to construct the arguments to pass to the coverage tool. A couple of things to note:
- The test executable itself is named test
- To run the test executable, you need to pass the path to the zig binary to it
Using this option would therefore look something like this:
$ zig test test.zig --enable-cache
zig-cache/o/ac1029e39986cc8cf3d732585f5a8060
All 1 tests passed.
# with kcov
$ kcov kcov-output ./zig-cache/o/ac1029e39986cc8cf3d732585f5a8060/test zig
# or with grindcov
$ grindcov -- ./zig-cache/o/ac1029e39986cc8cf3d732585f5a8060/test zig
The second (and preferred) option is to use the --test-cmd and --test-cmd-bin options to set a coverage generator as the 'test executor'. With this, Zig handles passing the necessary arguments to the coverage tool for you. The --test-cmd-bin flag is necessary to tell zig to append the test binary path to the executor's arguments.
Using this option would instead look like this:
# with kcov
$ zig test test.zig --test-cmd kcov --test-cmd kcov-output --test-cmd-bin
# or with grindcov
$ zig test test.zig --test-cmd grindcov --test-cmd -- --test-cmd-bin
Integrating test coverage with build.zig
To generate coverage for a test step in build.zig, the idea is the same as with zig test, but instead of passing --test-cmd yourself, you can use LibExeObjStep.setExecCmd to make the Zig build system pass those arguments to zig test for you.
For example, if you had a test step in your build.zig like:
var tests = b.addTest("test.zig");
const test_step = b.step("test", "Run all tests");
test_step.dependOn(&tests.step);
then you could add an option to run the tests with a coverage tool by updating the code to:
const coverage = b.option(bool, "test-coverage", "Generate test coverage") orelse false;

var tests = b.addTest("test.zig");
if (coverage) {
    // with kcov
    tests.setExecCmd(&[_]?[]const u8{
        "kcov",
        //"--path-strip-level=3", // any kcov flags can be specified here
        "kcov-output", // output dir for kcov
        null, // to get zig to use the --test-cmd-bin flag
    });
    // or with grindcov
    //tests.setExecCmd(&[_]?[]const u8{
    //    "grindcov",
    //    //"--keep-out-file", // any grindcov flags can be specified here
    //    "--",
    //    null, // to get zig to use the --test-cmd-bin flag
    //});
}

const test_step = b.step("test", "Run all tests");
test_step.dependOn(&tests.step);
And test coverage information could then be generated by doing:
$ zig build test -Dtest-coverage
A caveat worth noting
Since the Zig compiler only compiles functions that are actually called/referenced, completely unused functions don't contribute to the 'executable lines' total in either tool's coverage results. Because of this, a file with one used function and many unused functions could potentially show up as 100% covered.
In other words, the results are only indicative of the coverage of used functions.
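As a hypothetical illustration, in a file like the following only the lines of used (and the test) count as executable, so the file could report 100% coverage even though unused is never exercised:

const std = @import("std");

fn used() u32 {
    return 1;
}

// never referenced anywhere, so it is never analyzed/compiled and
// its lines don't show up as executable in the coverage results
fn unused() u32 {
    return 2;
}

test "only calls used" {
    std.debug.assert(used() == 1);
}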
kcov vs grindcov, which should you use?
grindcov has a lot of shortcomings that kcov doesn't, so kcov is almost certainly the better option for most use cases (unfortunately, I wasn't aware of kcov when I was writing grindcov). kcov is more mature, has support for more use cases (like dynamic libraries), and is way faster to execute.
To give an idea of the speed difference, when generating coverage for the Zig standard library tests for a single target:
- Running normally took ~5 seconds
- Running with kcov took ~9 seconds
- Running with grindcov took ~2 minutes (and roughly the same amount of time was taken when running with valgrind --tool=callgrind directly, so this may not be improvable without patches to callgrind)
However, if you have a straightforward use case, execution speed isn't too important, and you prefer the output format of grindcov's results, then grindcov would be a fine choice as well.
Oldest comments (3)
That header image, chef's kiss
Thanks for this article! It's really straightforward to use kcov, and when you can merge the results of several runs, the output is actually useful!
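(For reference, kcov has a merge mode for combining multiple runs into one report; a sketch, assuming two hypothetical output directories from earlier runs:)

$ kcov --merge merged-output kcov-output-run1 kcov-output-run2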
Same with comptime logic. At first, I thought it would be fairly simple to write a tool that updates the kcov results to add unused functions, but thinking about it, we can't tell, just by looking at the coverage result file, whether comptime code was used or not.
It's no longer 'find every function declaration that doesn't have coverage info and count the function as uncovered'. We have to tell whether each line of code is comptime or not and check if it was used. Basically, rewrite parts of the compiler.
There is no way we get accurate coverage reports without implementing a bit of the logic in the compiler itself.
EDIT: that said, thanks for the post, I use this in my projects, with great benefits since I read it months ago :).