I have a test tool (roughly, a diffing tool) that takes two inputs and returns both an output (the difference between the two inputs) and a return code (0 if the two inputs match, 1 otherwise). It's built in Kotlin and available at `//java/fr/enoent/phosphorus` in my repo.
I want to write a rule that tests that a file generated by something is identical to a reference file already present in the repository. I tried something with `ctx.actions.run`, the problem being that my rule, having `test = True` set, needs to return an executable built by the rule itself (not a tool provided to the rule). I then tried to wrap it in a shell script following the example, like this:
```
def _phosphorus_test_impl(ctx):
    output = ctx.actions.declare_file("{name}.phs".format(name = ctx.label.name))
    script = phosphorus_compare(
        ctx,
        reference = ctx.file.reference,
        comparison = ctx.file.comparison,
        out = output,
    )
    ctx.actions.write(
        output = ctx.outputs.executable,
        content = script,
        is_executable = True,
    )
    runfiles = ctx.runfiles(files = [
        ctx.executable._phosphorus_tool,
        ctx.file.reference,
        ctx.file.comparison,
    ])
    return [DefaultInfo(runfiles = runfiles)]

phosphorus_test = rule(
    _phosphorus_test_impl,
    attrs = {
        "comparison": attr.label(
            allow_single_file = [".phs"],
            doc = "File to compare to the reference",
            mandatory = True,
        ),
        "reference": attr.label(
            allow_single_file = [".phs"],
            doc = "Reference file",
            mandatory = True,
        ),
        "_phosphorus_tool": attr.label(
            default = "//java/fr/enoent/phosphorus",
            executable = True,
            cfg = "host",
        ),
    },
    doc = "Compares two files, and fails if they are different.",
    test = True,
)
```
(`phosphorus_compare` is just a macro that generates the actual command.)
However, this approach has two issues. First, the wrapped tool fails at runtime because it can't find its own runfiles (here, the JDK it needs):

```
java/fr/enoent/phosphorus/phosphorus: line 359: /home/kernald/.cache/bazel/_bazel_kernald/58c025fbb926eac6827117ef80f7d2fa/sandbox/linux-sandbox/1979/execroot/fr_enoent/bazel-out/k8-fastbuild/bin/tools/phosphorus/tests/should_pass.runfiles/remotejdk11_linux/bin/java: No such file or directory
```

Second, using a shell script feels like it's just adding an unnecessary indirection and losing some context (e.g. tools' runfiles). Ideally, I would just use `ctx.actions.run` and rely on its return code, but that doesn't seem to be an option, as a test apparently needs to generate an executable. What would be the correct approach to writing such a rule?
It turns out that generating a script is the correct approach: as far as I understood, it's impossible to return some kind of handle to a `ctx.actions.run` action. A test rule needs to have an executable output.
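For reference, here is a minimal sketch of that shape (the rule name and the trivial script are mine, purely for illustration): the implementation writes `ctx.outputs.executable` and hands it back through `DefaultInfo`.

```
# Minimal sketch of an executable test rule, for illustration only.
def _minimal_test_impl(ctx):
    ctx.actions.write(
        output = ctx.outputs.executable,
        # The script's exit code decides whether the test passes.
        content = "#!/usr/bin/env bash\nexit 0\n",
        is_executable = True,
    )
    return [DefaultInfo(executable = ctx.outputs.executable)]

minimal_test = rule(
    _minimal_test_impl,
    test = True,
)
```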
Regarding the output file that the tool generates: there's no need to declare it at all. I just need to make sure it's written under `$TEST_UNDECLARED_OUTPUTS_DIR`. Every single file in this directory is added by Bazel to an archive called `outputs.zip`. This is (partly) documented here.
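`phosphorus_compare` itself isn't shown here, but to illustrate the idea, it could emit a script along these lines (the tool's command-line interface below is an assumption, not the real one):

```
# Hypothetical sketch of phosphorus_compare; the real tool's flags and
# output handling may differ.
def phosphorus_compare(ctx, reference, comparison, out):
    return """#!/usr/bin/env bash
# Anything written under $TEST_UNDECLARED_OUTPUTS_DIR ends up in
# outputs.zip next to the test logs, with no declare_file() needed.
exec "{tool}" "{reference}" "{comparison}" \\
    > "$TEST_UNDECLARED_OUTPUTS_DIR/{out}"
""".format(
        tool = ctx.executable._phosphorus_tool.short_path,
        reference = reference.short_path,
        comparison = comparison.short_path,
        out = out,
    )
```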
Concerning the runfiles: I had the tool's binary in them, but not the tool's own runfiles. Here is the fixed rule:
```
def _phosphorus_test_impl(ctx):
    script = phosphorus_compare(
        ctx,
        reference = ctx.file.reference,
        comparison = ctx.file.comparison,
        out = "%s.phs" % ctx.label.name,
    )
    ctx.actions.write(
        output = ctx.outputs.executable,
        content = script,
        is_executable = True,
    )
    return [
        DefaultInfo(
            # Merge in the tool's own runfiles (e.g. the JDK it needs),
            # which were missing from the first attempt.
            runfiles = ctx.runfiles(
                files = [
                    ctx.executable._phosphorus_tool,
                    ctx.file.reference,
                    ctx.file.comparison,
                ],
            ).merge(ctx.attr._phosphorus_tool[DefaultInfo].default_runfiles),
            executable = ctx.outputs.executable,
        ),
    ]

def phosphorus_test(size = "small", **kwargs):
    _phosphorus_test(size = size, **kwargs)

_phosphorus_test = rule(
    _phosphorus_test_impl,
    attrs = {
        "comparison": attr.label(
            allow_single_file = [".phs"],
            doc = "File to compare to the reference",
            mandatory = True,
        ),
        "reference": attr.label(
            allow_single_file = [".phs"],
            doc = "Reference file",
            mandatory = True,
        ),
        "_phosphorus_tool": attr.label(
            default = "//java/fr/enoent/phosphorus",
            executable = True,
            # The tool runs as part of the test itself, so it must be
            # built for the target configuration, not the host one.
            cfg = "target",
        ),
    },
    doc = "Compares two files, and fails if they are different.",
    test = True,
)
```
The key part is the `.merge(ctx.attr._phosphorus_tool[DefaultInfo].default_runfiles)` call in the returned `DefaultInfo`.
I also made a small mistake with the configuration: this test is meant to run in the target configuration, not the host one, so `cfg` has been fixed accordingly.
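For completeness, using the rule then looks like any other test target; here is a hypothetical BUILD file (the load path is made up):

```
load("//tools/phosphorus:defs.bzl", "phosphorus_test")

phosphorus_test(
    name = "should_pass",
    reference = "reference.phs",
    # Typically the output of another rule, e.g. a genrule.
    comparison = ":generated.phs",
)
```

Running it with `bazel test` then passes or fails on the tool's return code, and whatever the tool wrote under `$TEST_UNDECLARED_OUTPUTS_DIR` is available in the test's `outputs.zip`.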