I have a project that I was working on for some time. I came back to it yesterday to find that the project didn't load anymore. I'm now getting the following error:
Uncaught (in promise) GPUPipelineError: Texture binding (group:0, binding:1) is TextureSampleType::Depth but used statically with a sampler (group:0, binding:0) that's SamplerBindingType::Filtering
I researched a bit and found that, in Chromium 135,
It's no longer possible to use a filtering sampler to sample a depth texture. As a reminder, a depth texture can only be used with a non-filtering or a comparison sampler. source
This was my code:
Shader:
...
@group(0) @binding(0) var env_sampler: sampler;
@group(0) @binding(1) var depth_texture: texture_depth_2d;
@group(0) @binding(2) var normal_texture: texture_2d<f32>;
@group(0) @binding(3) var specular_texture: texture_2d<f32>;
@group(0) @binding(4) var ssao_texture: texture_2d<f32>;
...
fn VSPositionFromDepth(uv: vec2f) -> vec3f {
// Get the depth value for this pixel
var z = textureSample(depth_texture, env_sampler, uv); // <-- This line is causing the error
// Get x/w and y/w from the viewport position
var x = uv.x * 2.0 - 1.0;
var y = (1.0 - uv.y) * 2.0 - 1.0;
var vProjectedPos = vec4f(x, y, z, 1.0);
// Transform by the inverse projection matrix
var vPositionVS = env.proj_inverse * vProjectedPos;
// Divide by w to get the view-space position
return vPositionVS.xyz / vPositionVS.w;
}
And the bindings:
...
private readonly _sampler = device.createSampler({
minFilter: 'linear',
magFilter: 'linear',
mipmapFilter: 'linear',
});
...
private createTexturesBindGroup(pool: RenderResourcePool) {
return device.createBindGroup({
label: 'environment texture bind group',
layout: this._pipeline.getBindGroupLayout(EnvironmentShader.BINDING_GROUPS.TEXTURES),
entries: [
{ binding: 0, resource: this._sampler },
{ binding: 1, resource: pool.depthTextureView },
{ binding: 2, resource: pool.normalTextureView },
{ binding: 3, resource: pool.specularTextureView },
{ binding: 4, resource: pool.ssaoTextureViewBlurred },
],
});
}
...
render(pool: RenderResourcePool) {
pool.commandEncoder.pushDebugGroup('Environment Renderer');
const texturesBindGroup = this.createTexturesBindGroup(pool);
this.updateVariablesBuffer(pool);
this.setRenderTexture(pool.hdrBufferChain.current.view);
const rpe = pool.commandEncoder.beginRenderPass(this._renderPassDescriptor);
rpe.setPipeline(this._pipeline);
rpe.setBindGroup(
EnvironmentShader.BINDING_GROUPS.SCENE,
pool.scene.info.getBindGroup(this._pipeline, this._sceneBindGroupOptions),
);
rpe.setBindGroup(EnvironmentShader.BINDING_GROUPS.TEXTURES, texturesBindGroup);
rpe.setBindGroup(EnvironmentShader.BINDING_GROUPS.VARIABLES, this._variablesBindGroup);
rpe.draw(6);
rpe.end();
pool.commandEncoder.popDebugGroup();
}
It seemed pretty straightforward: all I needed to do was change the sampler or use a new one. However, neither changing every filter type to nearest
nor removing the filter definitions altogether made any difference.
I'm still getting the same error with these definitions:
private readonly _sampler = device.createSampler({
minFilter: 'nearest',
magFilter: 'nearest',
mipmapFilter: 'nearest',
});
// or
private readonly _sampler = device.createSampler();
If you want to see the full code (+ a few spelling mistakes that I fixed before posting it here), here it is:
I found the solution after reading the W3C WebGPU and WGSL specs.
The WGSL spec's Sampler Types section states:
A sampler mediates access to a sampled texture or a depth texture [...]
- for a sampled texture, optionally filtering retrieved texel values.
- for a depth texture, determining the comparison function applied to the retrieved texel.
The sampler types are:
- sampler: Mediates access to a sampled texture.
- sampler_comparison: Mediates access to a depth texture.
It seems that, while Chromium supported sampling a depth texture, it was actually against the spec.
Reading a bit further into the spec, under Depth Texture:
A depth texture is capable of being accessed in conjunction with a sampler_comparison. It can also be accessed without the use of a sampler. [...]
That confirms it: I'll need to use a sampler_comparison
or no sampler at all to read the depth textures.
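For reference, the sampler_comparison route would look roughly like this (a minimal sketch; the shadow_sampler binding and depth_test helper are just illustrative, since I'm not going to use them):
// Hypothetical comparison sampler (would replace env_sampler for depth reads;
// on the API side it would be created with a `compare` function set)
@group(0) @binding(0) var shadow_sampler: sampler_comparison;
fn depth_test(uv: vec2f, reference_depth: f32) -> f32 {
// Returns the comparison result (0.0 or 1.0 per tap), not the stored depth value
return textureSampleCompare(depth_texture, shadow_sampler, uv, reference_depth);
}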
As I want to read depth instead of testing against it, I decided to use no sampler and access the texture using the texel position with textureLoad.
Instead of calculating z
as
var z = textureSample(depth_texture, env_sampler, uv);
I'm now doing:
var depth_dimensions = textureDimensions(depth_texture);
var texel_coords = vec2u(uv * vec2f(depth_dimensions));
var z = textureLoad(depth_texture, texel_coords, 0);
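With that change, the whole function looks roughly like this (same as before, just with the textureLoad lines swapped in):
fn VSPositionFromDepth(uv: vec2f) -> vec3f {
// Read the depth value for this pixel directly, without a sampler
var depth_dimensions = textureDimensions(depth_texture);
var texel_coords = vec2u(uv * vec2f(depth_dimensions));
var z = textureLoad(depth_texture, texel_coords, 0);
// Get x/w and y/w from the viewport position
var x = uv.x * 2.0 - 1.0;
var y = (1.0 - uv.y) * 2.0 - 1.0;
var vProjectedPos = vec4f(x, y, z, 1.0);
// Transform by the inverse projection matrix
var vPositionVS = env.proj_inverse * vProjectedPos;
// Divide by w to get the view-space position
return vPositionVS.xyz / vPositionVS.w;
}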
I'll link the diff too if anyone wants to see it in the actual codebase.
Querying the texture size for every texel, multiple times each frame, may not be the smartest approach, but it works.
I don't know whether textureDimensions
is a slow operation; the spec doesn't go into that level of detail, as I guess it's implementation-specific. But at the end of the day it's extra computation. I'll probably just pass the size as a uniform in the future.
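If I do that, it would look something like this (a sketch; the binding index and uniform name are made up):
// Hypothetical uniform holding the depth texture size, written from the CPU side (e.g. on resize)
@group(0) @binding(5) var<uniform> depth_texture_size: vec2f;
fn load_depth(uv: vec2f) -> f32 {
// No per-texel textureDimensions call; the size comes from the uniform instead
var texel_coords = vec2u(uv * depth_texture_size);
return textureLoad(depth_texture, texel_coords, 0);
}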