HTTPS and HSTS with Varnish, thanks to HAProxy - Bigdinosaur Blog

The unencrypted web is on the way out. We made the switch here at BigDino Central to all-HTTPS a few weeks ago, but doing so brought with it a problem: how can we keep using Varnish cache with HTTPS traffic? The answer turned out to be adding another layer into the web stack—and now we’re using HAProxy to terminate SSL. It wasn’t difficult to set up, and it works for all the different sites we host on our one physical web server.

Keeping Varnish in the mix felt important, because we’ve been using it for a few years (wow, has it been that long?), and it’s a neat tool that’s helped the site bear up under some crazy Reddit- and Ars-driven traffic storms. But the nature of Varnish means that out of the box it won’t help you with HTTPS traffic. Because the HTTPS negotiation happens between the end user and Nginx—which sits below Varnish in the stack—all Varnish sees of HTTPS traffic is the encrypted side, and you can’t cache what looks like an unending string of unique, encrypted nonsense.

Read the rest of this blog entry…


Hi Lee,

A few comments. First, I released 1.6.1 last night, which fixes the issue you met with the two crt statements on the same bind line. Second, with haproxy 1.6 and Varnish, you should really remove “option http-server-close” to enable keep-alive between haproxy and varnish, and add “http-reuse always” to enable connection multiplexing between haproxy and varnish. This will significantly speed up the communication between the two, reduce the number of concurrent connections there, and reduce overall CPU usage.
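For reference, a minimal sketch of what that backend change might look like in haproxy 1.6 — the backend name and Varnish address are illustrative, not taken from anyone's actual config:

```haproxy
# Backend pointing at a local Varnish instance. Leaving out
# "option http-server-close" keeps the haproxy->varnish connections
# alive, and "http-reuse always" lets idle server-side connections
# be shared across client sessions.
backend varnish
    mode http
    http-reuse always
    server varnish1 127.0.0.1:6081 check
```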

Great article overall!

Thanks, @WillyTarreau! Sorry I missed this post when you made it last month. I should probably check my own damn forum more often :smile:

I’ve made the changes locally and updated the blog post. Thanks for stopping by—always great to hear from the experts on stuff like this!!

@Lee_Ars: Why do you introduce HAProxy in the stack when you could have solved it with Nginx as the TLS proxy endpoint in front of Varnish and with an Nginx webserver instance as the backend?

Are there performance differences between these setups?
HAPROXY (TLS) -> Varnish -> Nginx
Nginx (TLS) -> Varnish -> Nginx
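The all-Nginx variant in the second stack would look roughly like this — a minimal sketch, assuming Varnish listens on 127.0.0.1:6081; the server name, ports, and certificate paths are illustrative:

```nginx
# Front Nginx instance: terminates TLS and passes plain HTTP to Varnish,
# which in turn proxies to a second (backend) Nginx instance.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:6081;   # Varnish
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```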

Here is a blogpost about such a setup with only Nginx and Varnish

It took a long time to come to a solution I was happy with, after a lot of napkin doodling.

Initially, I rejected the nginx-varnish-nginx sandwich because I didn’t want to introduce a separate physical server into the mix, and I wasn’t sure if there was a way to have two separate nginx instances on a single box, one proxying to the other (or a single split-brained instance proxying to itself). Then I thought about perhaps using stud because there’s a huge amount of example documentation out there, but the problem there seemed to be that stud has been abandoned and doesn’t appear to support TLS1.2 or wildcard SSL/TLS certificates (though there’s a relatively new fork called hitch that does—I should look at that!).

So then it came down to a choice between stunnel and HAProxy, and it was easier to find HAProxy examples for what I wanted to do (SSL termination for Varnish and Nginx across multiple domains with wildcard certificates). Plus, a few sites seemed to indicate that stunnel introduced more overhead into connection negotiation. Finally, I wasn’t sure whether or not stunnel could support reverse-proxying websockets. So, HAProxy it was.
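A sketch of the HAProxy side of that choice — a frontend terminating SSL/TLS for multiple domains via SNI. The paths and backend name are assumptions for illustration, not the actual production config:

```haproxy
frontend https-in
    # Point crt at a directory of combined key+cert PEM files;
    # haproxy selects the matching (wildcard) certificate per
    # request via SNI, so one bind line covers every hosted domain.
    bind *:443 ssl crt /etc/haproxy/certs/
    mode http
    http-request set-header X-Forwarded-Proto https
    default_backend varnish
```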

But, once I started walking through the actual setup, I realized that testing and implementation were going to be a hell of a lot easier if I did use a second physical server, because trying to do all the testing on my main prod web server while keeping it in production was going to be a giant nightmare pain in the ass. So I built my HAProxy config, tested it, fired it up, and then cut it over into production by changing my firewall’s incoming rules to point ports 80 and 443 at it.

There are a few scattered bits of benchmarking out there showing how nginx and haproxy compare when doing SSL termination (like this one), and they tend to show that the two solutions are pretty much even in performance. Given that, the choice for me to keep HAProxy or swap it out for nginx simply becomes one of configuration complexity, and HAProxy is simpler to configure as an SSL/TLS-terminating reverse proxy.

Long answer, but I hope it’s helpful! The tl;dr to your original question is that I initially thought about using nginx, rejected it because I didn’t want a second physical server, decided to use HAProxy, then went ahead and used a second physical server anyway :smile: