502 Bad Gateway — What It Means and How to Fix It

Updated April 2026

Quick Answer: A 502 Bad Gateway means the server acting as a proxy or gateway (Nginx, Cloudflare, a load balancer) received an invalid response from the upstream server behind it. The proxy is working — the application server behind it is not. Check whether your app server is running, check the Nginx upstream config, and check the error logs.
What the error actually means

Client → [Nginx / Cloudflare / Load Balancer] → [Your App Server]
                         ↑                              ↑
              this layer returned 502          because this layer failed

502 is always the middle layer reporting a problem with the layer behind it. The gateway is working — what it's proxying to is not.
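One quick way to see which layer produced the 502 is the Server header on the error response itself: it usually names the proxy that generated the page. A small sketch with sample headers inlined (against a live site you would pipe `curl -sI https://yoursite.com/` instead):

```shell
# Sample 502 response headers, inlined for illustration.
# The Server header names the layer that generated the error page,
# so everything behind that layer is the suspect.
printf 'HTTP/1.1 502 Bad Gateway\nServer: nginx/1.24.0\nContent-Type: text/html\n' |
  grep -i '^server:'
# → Server: nginx/1.24.0  (Nginx produced the 502, so look behind Nginx)
```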

Most common causes

| Cause | Symptom |
| --- | --- |
| App server crashed or not running | 502 immediately on all requests |
| App server overloaded, queue full | 502 under load, works on low traffic |
| Wrong upstream address in Nginx config | 502 immediately, "connection refused" in logs |
| App server taking too long (timeout) | 504 Gateway Timeout, sometimes reported as 502 |
| App crashed mid-response | 502 on specific endpoints only |
| Cloudflare can't reach your origin | Cloudflare's own 502 error page |
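The Nginx error log usually tells you which of these causes you have. A sketch mapping the most common log phrases to the causes above (the sample log line is illustrative; in practice you would feed in lines from /var/log/nginx/error.log):

```shell
# Illustrative nginx error-log line; replace with a real line from your log.
LINE='2026/04/01 12:00:00 [error] 123#0: *1 connect() failed (111: Connection refused) while connecting to upstream'

# Map the common phrases to likely causes.
case "$LINE" in
  *"Connection refused"*)            echo "upstream not running, or wrong upstream address" ;;
  *"prematurely closed connection"*) echo "app crashed mid-response" ;;
  *"timed out"*)                     echo "upstream too slow, adjust timeouts or fix the endpoint" ;;
  *)                                 echo "unrecognised, read the full log line" ;;
esac
# → upstream not running, or wrong upstream address
```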

Diagnosing — first steps

# Check if your app server is running
systemctl status your-app # or pm2 status, docker ps, etc.

# Check Nginx error logs
tail -50 /var/log/nginx/error.log

# Test the upstream directly (bypass Nginx)
curl -v http://localhost:3000/your-endpoint

# Check if the port is listening
ss -tlnp | grep 3000
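The checks above can be collapsed into a one-shot triage: if nothing is listening on the upstream port, the app is down; if something is, the problem is more likely the Nginx upstream config or a hanging app. A sketch, assuming the app should be on port 3000 (adjust PORT to match your proxy_pass target):

```shell
# Quick triage: is anything listening on the upstream port?
PORT=3000   # assumption: match this to your proxy_pass target
if ss -tln 2>/dev/null | grep -q ":$PORT "; then
  echo "upstream port $PORT is listening - suspect the nginx upstream address or a hanging app"
else
  echo "upstream port $PORT is not listening - the app server is down, restart it"
fi
```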

Nginx — fixing common 502 causes

Wrong upstream address

# Wrong — app is on port 3000, not 8080
proxy_pass http://localhost:8080;

# Correct
proxy_pass http://localhost:3000;

Upstream timeout too short

# Nginx defaults (60s) may be too short for slow backends
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;

# Increase for slow APIs or heavy processing
proxy_read_timeout 120s;
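Putting the directives together, a location block with adjusted timeouts might look like this. It is a sketch, not a drop-in config: the path, upstream address, and values are examples to tune for your backend.

```nginx
# Example location block with tuned timeouts (values are illustrative)
location /api/ {
    proxy_pass http://localhost:3000;   # assumed upstream address

    proxy_connect_timeout 10s;    # fail fast if the app is down entirely
    proxy_send_timeout 120s;      # time allowed to send the request upstream
    proxy_read_timeout 120s;      # time allowed for the upstream to respond
}
```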

App crashes under load — upstream keepalive

upstream backend {
    server localhost:3000;
    keepalive 32;  # Reuse connections instead of creating new ones
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Required for keepalive
    }
}

Cloudflare — 502 causes

Cloudflare shows its own branded 502 when it cannot reach your origin. Check:

# Test origin directly (get your origin IP from Cloudflare DNS settings)
curl -v --resolve yoursite.com:443:ORIGIN_IP https://yoursite.com/

# Check Cloudflare can reach origin
# Cloudflare Dashboard → Support → Diagnostics
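Comparing the status code through Cloudflare with the status code direct to the origin tells you which side is failing. A sketch with hardcoded sample codes; in practice you would capture them with the curl commands shown in the comments:

```shell
# Hardcoded sample codes for illustration. In practice:
#   VIA_CF=$(curl -s -o /dev/null -w '%{http_code}' https://yoursite.com/)
#   DIRECT=$(curl -s -o /dev/null -w '%{http_code}' --resolve yoursite.com:443:ORIGIN_IP https://yoursite.com/)
VIA_CF=502
DIRECT=200

if [ "$DIRECT" = 200 ] && [ "$VIA_CF" != 200 ]; then
  echo "origin healthy - problem is between Cloudflare and the origin (firewall, TLS mode, stale origin IP)"
else
  echo "origin itself is failing - debug the origin first"
fi
```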

Docker / containerised apps

# Check container is running
docker ps | grep your-app

# Check container logs for crashes
docker logs your-app --tail 50

# Check if port is mapped correctly
docker inspect your-app | grep HostPort

# Nginx inside Docker — use service name not localhost
proxy_pass http://app:3000;  # app = docker-compose service name
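For context, a minimal docker-compose layout that matches the proxy_pass above might look like this. The service names ("app", "nginx") and ports are assumptions; the point is that Nginx reaches the app by its service name on the compose network, not by localhost.

```yaml
# Sketch of a compose file matching proxy_pass http://app:3000;
services:
  app:
    build: .
    expose:
      - "3000"          # reachable as app:3000 from other services
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - app
```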

502 vs 503 vs 504

| Code | Meaning | Fix direction |
| --- | --- | --- |
| 502 Bad Gateway | Invalid response from upstream | Check the upstream server is running and responding correctly |
| 503 Service Unavailable | Server intentionally refusing (maintenance, overload) | Check upstream capacity; add a Retry-After header |
| 504 Gateway Timeout | Upstream took too long | Increase the proxy timeout, optimise the slow endpoint |
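The table above reduces to a small lookup you can keep in a debugging script. A sketch (the messages are summaries of the table, not official descriptions):

```shell
# Map a gateway status code to its fix direction
classify() {
  case "$1" in
    502) echo "invalid upstream response - check the app server is up and responding correctly" ;;
    503) echo "service unavailable - check upstream capacity or maintenance mode" ;;
    504) echo "upstream timeout - raise proxy_read_timeout or speed up the endpoint" ;;
    *)   echo "not a gateway error" ;;
  esac
}

classify 502
# → invalid upstream response - check the app server is up and responding correctly
```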