We are trying to deploy a Next.js standalone server to AWS AppRunner, using a Docker image built and pushed to ECR.
It worked perfectly about a month ago when I set it up manually, but since then we migrated our infrastructure to Terraform. Now, something is broken.
Whenever we deploy our app (or even a minimal Next.js app) with `output: "standalone"`, AppRunner fails to reach the healthcheck route. It does not throw any error, either. The only log we see is that the server starts normally:
```
2025-07-01T18:32:49.527Z  ▲ Next.js 14.2.14
2025-07-01T18:32:49.528Z  - Local:   http://ip-[some-generic-ip].aws-region.compute.internal:3000
2025-07-01T18:32:49.528Z  - Network: http://[some-generic-ip]:3000
2025-07-01T18:32:49.528Z  ✓ Starting...
2025-07-01T18:32:49.592Z  ✓ Ready in 78ms
```
…and then nothing.
Yet, if I run the exact same Docker image locally (`docker run -p 3000:3000 ...`), it works perfectly: the healthcheck route is reachable, logs appear, and responses are fine.
We confirmed the container is built for the `linux/amd64` architecture, and even tried deploying into a public VPC with fully open security groups, but nothing changed. The AppRunner endpoint is publicly accessible, yet the healthcheck still fails.
What’s odd is that a minimal reproduction with `output: "default"` works flawlessly on AppRunner: the healthcheck passes and routes work. Only `output: "standalone"` consistently fails.
Our Dockerfile (a pnpm monorepo; the Next.js app lives in `apps/api`):

```dockerfile
FROM node:20-alpine3.20 AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN npm install -g corepack
RUN corepack enable
RUN corepack prepare pnpm@latest --activate

# install dependencies
FROM base AS deps
RUN apk add --no-cache libc6-compat python3 make g++
WORKDIR /app
COPY . .
RUN pnpm i --frozen-lockfile

# build
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/ ./
RUN pnpm run --filter=database generate
RUN pnpm run --filter=api build

# production
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder --chown=nextjs:nodejs /app/apps/api/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/apps/api/.next/server ./.next/server
COPY --from=builder --chown=nextjs:nodejs /app/apps/api/.next/static ./apps/api/.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME=0.0.0.0
CMD ["node", "apps/api/server.js"]
```
AppRunner healthcheck configuration:

• Protocol: HTTP
• Path: `/api/healthcheck`
• Timeout: 2 seconds
• Interval: 5 seconds
• Unhealthy threshold: 5
• Healthy threshold: 1
What we have observed and already tried:

✅ Container stays up and stable (no crash loops, no exit codes, no restarts)
✅ Shows only the startup message from Next.js, then never responds to healthcheck requests
✅ Fails on both HTTP and TCP checks (we switched the AppRunner healthcheck from HTTP to TCP to rule out the route)
✅ Verified the container runs fine with `docker run -p 3000:3000 ...` locally
✅ Confirmed AppRunner security groups allow inbound traffic
✅ Confirmed the platform is `linux/amd64`
✅ Rebuilt the images with no caching
✅ Confirmed a minimal `output: "default"` Next.js app works fine
✅ Removed the VPC altogether; it still fails
✅ Tried setting `HOSTNAME` to `0.0.0.0`
• In standalone mode, Next.js copies only the required files listed in `required-server-files.json`. Could AppRunner’s routing or its load balancer break something in that structure?
• Could AppRunner’s load balancer somehow fail to forward requests to the Node server when using the standalone output?
• Did something change recently in Next.js’s standalone routing logic that affects AWS AppRunner?
• Is there a known issue with Next.js 14’s standalone mode on AppRunner?
After consulting with a colleague, the cause became obvious: AWS AppRunner injects its own `HOSTNAME` environment variable at runtime, which overrides the `ENV HOSTNAME=0.0.0.0` baked into the Docker image. The standalone `server.js` binds to whatever `process.env.HOSTNAME` contains, so on AppRunner the server listens on the instance’s internal hostname (the `ip-….compute.internal` address visible in the `Local:` log line above), a host the healthcheck can never reach. Setting `HOSTNAME=0.0.0.0` explicitly in the AppRunner service’s environment variables takes precedence over the Dockerfile value and resolves the issue.
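Since our setup is managed by Terraform, the override belongs in the service’s image configuration. Here is a sketch assuming the AWS provider’s `aws_apprunner_service` resource; the resource names, image URL, and IAM role reference are placeholders for illustration:

```hcl
resource "aws_apprunner_service" "api" {
  service_name = "api"

  source_configuration {
    image_repository {
      image_identifier      = "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest"
      image_repository_type = "ECR"

      image_configuration {
        port = "3000"

        # AppRunner-level env vars take precedence over the Dockerfile's ENV,
        # so this forces the standalone server.js to bind to all interfaces.
        runtime_environment_variables = {
          HOSTNAME = "0.0.0.0"
        }
      }
    }

    # Assumes an access role that lets AppRunner pull from ECR (placeholder).
    authentication_configuration {
      access_role_arn = aws_iam_role.apprunner_ecr_access.arn
    }
  }

  health_check_configuration {
    protocol            = "HTTP"
    path                = "/api/healthcheck"
    interval            = 5
    timeout             = 2
    healthy_threshold   = 1
    unhealthy_threshold = 5
  }
}
```

The failure can also be reproduced locally by mimicking the injected variable, e.g. `docker run -e HOSTNAME=some-unreachable-host -p 3000:3000 ...`, which makes the standalone server bind to the wrong host just as it does on AppRunner.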