I'm trying to simulate diffusion in GLSL (not the Gray-Scott reaction-diffusion equation), and I'm having trouble getting it to work quite right. In all of my tests so far, diffusion stops at a certain point and reaches an equilibrium long before I expect it to.
My GLSL code:
#version 460
#define KERNEL_SIZE 9
float kernel[KERNEL_SIZE];
vec2 offset[KERNEL_SIZE];
uniform float width;
uniform float height;
uniform sampler2D current_concentration_tex;
uniform float diffusion_constant; // rate of diffusion of U
in vec2 vTexCoord0;
out vec4 fragColor;
void main(void)
{
    float w = 1.0 / width;
    float h = 1.0 / height;

    // 3x3 Laplacian kernel; diagonal taps weighted by 1/sqrt(2),
    // center weight chosen so the weights sum to zero
    kernel[0] = 0.707106781;
    kernel[1] = 1.0;
    kernel[2] = 0.707106781;
    kernel[3] = 1.0;
    kernel[4] = -6.82842712;
    kernel[5] = 1.0;
    kernel[6] = 0.707106781;
    kernel[7] = 1.0;
    kernel[8] = 0.707106781;

    offset[0] = vec2( -w, -h);
    offset[1] = vec2(0.0, -h);
    offset[2] = vec2(  w, -h);
    offset[3] = vec2( -w, 0.0);
    offset[4] = vec2(0.0, 0.0);
    offset[5] = vec2(  w, 0.0);
    offset[6] = vec2( -w,  h);
    offset[7] = vec2(0.0,  h);
    offset[8] = vec2(  w,  h);

    float chemical_density = texture( current_concentration_tex, vTexCoord0 ).r; // concentration from last frame

    float laplacian = 0.0; // must be initialized before accumulating into it
    for( int i = 0; i < KERNEL_SIZE; i++ ){
        float tmp = texture( current_concentration_tex, vTexCoord0 + offset[i] ).r;
        laplacian += tmp * kernel[i];
    }

    float du = diffusion_constant * laplacian; // diffusion equation: du/dt = D * laplacian(u)
    chemical_density += du; // diffuse (dt folded into diffusion_constant)
    //chemical_density *= 0.9999; // decay

    fragColor = vec4( clamp(chemical_density, 0.0, 1.0), 0.0, 0.0, 1.0 );
}
If the diffusion simulation were working properly, I would expect to be able to draw a "source" each frame and have the chemical slowly diffuse away from the source into the rest of the frame. However, diffusion seems to "stop" early and doesn't fade any further. For example, when I draw a constant ring in the center as my source:
I also tried it with a one-time initial condition of chemicals in the center, with no additional chemicals added -- again, I would expect the chemical to spread until it is evenly distributed (effectively zero) across the entire frame, but instead the diffusion stalls much sooner:
Is there something wrong with my simulation code in glsl? Or is this more of a numerical methods kind of issue? Would expanding from a 3x3 kernel to a larger one improve the situation?
(Moved from comment)
This is most probably a precision issue -- are you rendering it into an 8-bit target? You probably want, at least, a 32-bit float.
Since you use modern OpenGL, it is best to create your textures with glTextureStorage2D and specify GL_RGBA32F for the internalformat. There's no such thing as a "default format for my GL texture was 8-bit RGBA", unless you use a legacy API, a non-standard extension, or render directly to the screen.
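A minimal host-side sketch of that setup, assuming an OpenGL 4.5+ context with direct state access; the variable names and the framebuffer attachment for the ping-pong pass are illustrative:

```c
/* Requires an active OpenGL 4.5+ context (e.g. via GLFW + GLAD);
   width and height are assumed to match the simulation grid. */
GLuint tex, fbo;

glCreateTextures(GL_TEXTURE_2D, 1, &tex);
/* Immutable storage with a full 32-bit float per channel, so small
   per-frame updates are not quantized away on write. */
glTextureStorage2D(tex, 1, GL_RGBA32F, width, height);
glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTextureParameteri(tex, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* Attach the texture as the render target for the diffusion pass. */
glCreateFramebuffers(1, &fbo);
glNamedFramebufferTexture(fbo, GL_COLOR_ATTACHMENT0, tex, 0);
```

In practice you would create two such textures and alternate which one is sampled (current_concentration_tex) and which one is rendered into each frame.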