How to Fix the "upstream sent too big header" 502 Bad Gateway Error in Kubernetes

Posted by admin on January 10, 2024

First, the TL;DR:

Annotate your Nginx Ingress controller to increase the proxy buffer size for your upstream server. The default buffers are small enough that large response headers overflow them, and you end up with "upstream sent too big header" errors in your reverse proxy logs.
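As a minimal sketch, the key annotation sits under metadata.annotations on your Ingress resource (the name and namespace below are placeholders; the full set of annotations I used is shown later in the post):

```yaml
# Minimal sketch: just the buffer-size annotation on a hypothetical Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app            # placeholder: your Ingress name
  namespace: my-namespace # placeholder: your namespace
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffer-size: 256k
```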

For the following symptoms

  • You access your web application, service, or API and get a 502 Bad Gateway even though you're certain the operation should work.
  • You enable verbose logging but see no log messages or errors that would indicate what the problem might be.

The more involved answer follows.

Getting a 502 Bad Gateway for OrchardCore Running in Kubernetes

Talk about deep troubleshooting. This one was several days in the making.

I was setting up a new website in my OrchardCore deployment. The tenant loaded just fine, but I was getting a 502 Bad Gateway when trying to log in.

This was strange because the authentication code is all native Orchard Core (and ASP.NET) default stuff. But I had recently upgraded to OrchardCore 1.8.0, so maybe there was a problem with that.

I enabled verbose application logging and found not even a mention of any kind of problem or error.

WTF.

Check Kubernetes Nginx Ingress Controller Logs

I run OrchardCore in a Kubernetes cluster.

This cluster uses an Nginx Ingress controller to handle incoming requests for this particular client application.

After seeing nothing interesting in OrchardCore's application logs, and nothing in the Kestrel host logs either, I decided to check my reverse proxy (ingress controller) logs.

For my MicroK8s setup, the ingress pods live in their own ingress namespace. There are multiple pods and I wanted to see the logs for all of them. Luckily, they share a common label, name=nginx-ingress-microk8s.

Identify the Upstream Sent Too Big Header Error

Run the following (or similar) command to see your ingress logs.

kubectl logs -l name=nginx-ingress-microk8s -n ingress 

Then search for "upstream sent too big header", as in the following log output.

2024/01/10 15:30:05 [error] 711#711: *4185444 upstream sent too big header while reading response header from upstream, client: 10.1.40.193, server: www.*********.com, request: "POST /Login HTTP/2.0", upstream: "http://10.1.x.x:80/Login", host: "www.*********.com"
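To pull just these lines out of a busy log, pipe through grep. The sketch below runs against a sample log line so it's self-contained; in a real cluster you'd pipe the kubectl logs command above into the same grep instead:

```shell
# Self-contained sketch: grep for the error in a sample log line.
# In a real cluster, replace the printf with:
#   kubectl logs -l name=nginx-ingress-microk8s -n ingress | grep ...
sample='2024/01/10 15:30:05 [error] 711#711: *4185444 upstream sent too big header while reading response header from upstream'
printf '%s\n' "$sample" | grep -o 'upstream sent too big header'
```

This prints just the matching phrase; drop the -o flag to see the full log lines instead.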

Boom, found it. A little web searching reveals that Nginx's proxy buffers are quite small by default (proxy_buffer_size defaults to one memory page, typically 4k or 8k) and need to be increased for upstreams that send large response headers.

Add Ingress Annotations Configuration to Increase Nginx Proxy Buffer Size

The solution was to add these annotations to the ingress configuration.

    nginx.ingress.kubernetes.io/proxy-body-size: 30m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 256k
    nginx.ingress.kubernetes.io/proxy-buffering: 'on'
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: 1024m
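In context, these annotations go under metadata.annotations on the Ingress resource. Here's a sketch of a complete Ingress with them in place; the name, namespace, host, service, and port are placeholders for your own application:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                  # placeholder: your Ingress name
  namespace: my-namespace       # placeholder: your namespace
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 30m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 256k
    nginx.ingress.kubernetes.io/proxy-buffering: 'on'
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: 1024m
spec:
  ingressClassName: nginx       # on MicroK8s, typically "public"
  rules:
    - host: www.example.com     # placeholder: your hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app    # placeholder: your backend service
                port:
                  number: 80
```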

And that's it. After I added these annotations to the Nginx Ingress for my application and redeployed, I was able to log in with no problem.

Incidentally, this also fixed my OrchardCore "file exceeds the maximum upload size" error, which you can read about here.

If you found value in this post, consider following me on X @davidpuplava for more valuable information about Game Dev, OrchardCore, C#/.NET and other topics.