postgresql docker blazor

Blazor App in Docker Occasionally Fails to Connect to PostgreSQL Database


I'm working on a Blazor application that connects to a PostgreSQL database, with both the Blazor app and PostgreSQL running in separate Docker containers.

Here are the steps I’ve taken so far:

  • Created a Docker network so the two containers can communicate.
  • Verified that the PostgreSQL container is reachable from other tools (e.g., pgAdmin).

The Blazor app usually runs fine, but occasionally it fails to connect to the PostgreSQL database for no clear reason.
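
For reference, this is roughly how the connection is wired up; it's a minimal sketch only, assuming the Npgsql EF Core provider (Npgsql.EntityFrameworkCore.PostgreSQL). The host name "postgres", the database name, the credentials, and AppDbContext are placeholders; the relevant detail is that the host is the PostgreSQL container's name on the shared Docker network, not localhost.

// Program.cs — minimal sketch, placeholder names throughout.
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// "postgres" stands in for the PostgreSQL container's service name on the
// shared Docker network; database and credentials are placeholders.
var connectionString =
    "Host=postgres;Port=5432;Database=appdb;Username=appuser;Password=secret";

builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(connectionString));

var app = builder.Build();
app.Run();

// Placeholder DbContext so the sketch is self-contained.
public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
}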

Problem:

While the app usually connects to the PostgreSQL database without issues, there are times when it randomly fails to connect and throws an error like:

An error occurred using the connection to database '{database}' on server '{server}'.

What could be causing these intermittent connection issues between my Blazor app and PostgreSQL? Are there any best practices or configurations I should check to avoid these occasional failures?
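
For what it's worth, one configuration that looks relevant is the retry support in the Npgsql EF Core provider. A sketch of how it could be enabled is below, with the same placeholder names and assumptions as the sketch above; I'm not sure whether it would fix the root cause or just paper over it, which is partly why I'm asking.

// Sketch only — this would replace the AddDbContext call from the earlier sketch.
// EnableRetryOnFailure turns on EF Core's retrying execution strategy, so
// transient connection failures are retried instead of failing immediately.
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(connectionString, npgsqlOptions =>
        npgsqlOptions.EnableRetryOnFailure(maxRetryCount: 5)));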

I’ve checked the PostgreSQL logs, but there’s nothing unusual or relevant that indicates why the connection is being lost.

94200.log 
2024-09-03 19:42:00.392 -04 [1] LOG:  starting PostgreSQL 16.3 (Debian 16.3-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2024-09-03 19:42:00.408 -04 [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2024-09-03 19:42:00.414 -04 [1] LOG:  could not create IPv6 socket for address "::": Address family not supported by protocol
2024-09-03 19:42:00.417 -04 [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-09-03 19:42:00.426 -04 [30] LOG:  database system was shut down at 2024-09-03 19:41:52 -04
2024-09-03 19:42:00.446 -04 [1] LOG:  database system is ready to accept connections
2024-09-03 19:47:00.500 -04 [28] LOG:  checkpoint starting: time
2024-09-03 19:47:00.545 -04 [28] LOG:  checkpoint complete: wrote 2 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.034 s, sync=0.002 s, total=0.046 s; sync files=3, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB; lsn=3/2C044728, redo lsn=3/2C0446F0
2024-09-03 19:52:00.644 -04 [28] LOG:  checkpoint starting: time
2024-09-03 19:53:10.575 -04 [28] LOG:  checkpoint complete: wrote 699 buffers (0.2%); 0 WAL file(s) added, 0 removed, 0 recycled; write=69.920 s, sync=0.004 s, total=69.932 s; sync files=8, longest=0.002 s, average=0.001 s; distance=5396 kB, estimate=5396 kB; lsn=3/2C591A10, redo lsn=3/2C589930
2024-09-03 19:57:00.630 -04 [28] LOG:  checkpoint starting: time
2024-09-03 19:57:09.160 -04 [28] LOG:  checkpoint complete: wrote 85 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=8.487 s, sync=0.034 s, total=8.531 s; sync files=8, longest=0.008 s, average=0.005 s; distance=577 kB, estimate=4914 kB; lsn=3/2C619E88, redo lsn=3/2C619E18
2024-09-03 20:02:00.235 -04 [28] LOG:  checkpoint starting: time
2024-09-03 20:02:12.026 -04 [28] LOG:  checkpoint complete: wrote 118 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=11.774 s, sync=0.007 s, total=11.791 s; sync files=17, longest=0.002 s, average=0.001 s; distance=775 kB, estimate=4500 kB; lsn=3/2C6DBC70, redo lsn=3/2C6DBC00
2024-09-03 20:07:00.096 -04 [28] LOG:  checkpoint starting: time
2024-09-03 20:07:14.588 -04 [28] LOG:  checkpoint complete: wrote 145 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=14.478 s, sync=0.004 s, total=14.493 s; sync files=11, longest=0.002 s, average=0.001 s; distance=56 kB, estimate=4056 kB; lsn=3/2C6E9DD8, redo lsn=3/2C6E9DA0
2024-09-03 20:12:00.687 -04 [28] LOG:  checkpoint starting: time
2024-09-03 20:13:18.920 -04 [28] LOG:  checkpoint complete: wrote 781 buffers (0.2%); 0 WAL file(s) added, 0 removed, 0 recycled; write=78.220 s, sync=0.004 s, total=78.234 s; sync files=9, longest=0.002 s, average=0.001 s; distance=262 kB, estimate=3676 kB; lsn=3/2C72B818, redo lsn=3/2C72B7E0
2024-09-03 20:17:01.007 -04 [28] LOG:  checkpoint starting: time
2024-09-03 20:17:01.748 -04 [28] LOG:  checkpoint complete: wrote 7 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.728 s, sync=0.003 s, total=0.741 s; sync files=4, longest=0.002 s, average=0.001 s; distance=13 kB, estimate=3310 kB; lsn=3/2C72ED80, redo lsn=3/2C72ED48


Solution

  • It turned out the timeout setting on the reverse proxy was too short, causing the connection between the client and the database to drop. After increasing the timeout, the connection stabilized, and I haven't had any issues since. (A related keepalive sketch follows below.)

    Thanks
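
For anyone who lands here with the same symptoms: besides raising the proxy timeout, an Npgsql-level keepalive can reduce the chance of idle pooled connections being silently dropped along the way. This is a sketch only; the value is illustrative and the connection string placeholders match the sketch in the question. Keepalive is the Npgsql connection-string setting, in seconds.

// Sketch: same placeholder connection string as in the question, plus an
// Npgsql keepalive so idle pooled connections are pinged every 30 seconds
// and are less likely to be dropped by an intermediate proxy or firewall.
var connectionString =
    "Host=postgres;Port=5432;Database=appdb;Username=appuser;Password=secret;" +
    "Keepalive=30";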