Tweaking nginx for serving static content
First of all, the setup is worth mentioning. Our staging environment had just one virtual machine running nginx: an Azure Standard A2 VM with 2 cores and 3.5 GB of RAM. That is not a lot, and in a real production environment you would definitely go for more web servers, with some kind of load balancing. But the goal here was mainly to experiment with the many tuning options nginx provides. The virtual machine was located in the same region where the project will be deployed. We also used a similarly sized machine, in that same region, to put load on our web server.
- `worker_processes`: as a guide, go for the number of CPU cores you have available. As opposed to e.g. Apache HTTPD, nginx is not multi-threaded: each worker uses just one thread to handle incoming traffic. It makes no sense to start more workers than you have CPU cores; it would only force the operating system into needless context switching.
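As a sketch, recent nginx versions can even detect the core count themselves:

```nginx
# One worker per CPU core; "auto" lets nginx detect the core count.
# On older versions, set the number explicitly, e.g. 2 for the A2 VM.
worker_processes auto;
```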
- `worker_rlimit_nofile`: a limit on the number of open file handles each worker process is allowed to have. This also includes the sockets a worker has opened. On the aforementioned Azure VM, we could easily reach 400000.
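A minimal example; the 400000 figure is simply what the staging VM handled comfortably, so treat it as a starting point rather than a recommendation:

```nginx
# Raise the per-worker open-file limit (files and sockets combined).
worker_rlimit_nofile 400000;
```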
- `worker_connections`: the number of connections nginx is allowed to keep open. This is particularly interesting if your web server doubles as a reverse proxy for e.g. one or more APIs. This parameter is also constrained by `worker_rlimit_nofile` (see above). On the aforementioned Azure VM, we could easily reach 300000.
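The directive lives in the `events` block; again, the number below is the one from our staging VM, not a universal value:

```nginx
events {
    # Upper bound on simultaneous connections per worker; effectively
    # capped by worker_rlimit_nofile, since each connection needs a handle.
    worker_connections 300000;
}
```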
- `multi_accept`: allows worker processes to accept multiple connections at the same time. As long as the actual request processing doesn't need to be done by nginx itself, this can be quite interesting. If, for example, your requests are in fact handled by upstream application servers, the worker would normally be waiting for the response to come in; this parameter allows it to handle another connection in the meantime.
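This is a simple on/off switch inside the `events` block:

```nginx
events {
    # Let a worker accept all pending new connections at once instead
    # of one per event-loop iteration (off by default).
    multi_accept on;
}
```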
- `open_file_cache`: allows nginx to keep a cache of file descriptors. This is particularly interesting for static or semi-static content. Note that this directive does not make nginx cache the contents of the files, just the file descriptors.
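A sketch of the related directives; the sizes and timings here are illustrative assumptions, not values from the original experiment:

```nginx
# Cache up to 10000 descriptors; entries unused for 30s are dropped.
open_file_cache max=10000 inactive=30s;
open_file_cache_valid 60s;      # re-validate cached entries after 60s
open_file_cache_min_uses 2;     # only cache after 2 uses within "inactive"
open_file_cache_errors on;      # also cache lookup errors (e.g. missing files)
```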
- `gzip`: makes nginx compress response bodies using gzip before sending them back. It will make the actual response smaller, so theoretically it should be quicker to get it loaded on the client side. A small warning, though: compression will cost some extra CPU cycles. If your machine is not up to that task, these directives will actually slow down your server. Ancient browsers that mishandle gzip can be excluded with `gzip_disable "MSIE [1-6]\.";`.
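An illustrative configuration; the compression level and MIME types are assumptions you should tune for your own content:

```nginx
gzip on;
gzip_comp_level 5;             # trade-off: higher = smaller bodies, more CPU
gzip_min_length 256;           # skip tiny responses where gzip barely helps
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_disable "MSIE [1-6]\.";   # old Internet Explorer mishandles gzip
```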
The general saying “to measure is to know” certainly applies here.
Make a simple script for yourself; it can be as simple as repeatedly fetching a page and timing the result. Run your script a couple of times from a server close to your “target” and compare the outcomes.
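A minimal sketch of such a script, assuming POSIX shell; the URL and the use of `curl` are placeholders, so substitute whatever fetches your page:

```shell
#!/bin/sh
# Run a command N times and report total wall time (second resolution,
# since %N is not portable). Usage: bench <runs> <command...>
bench() {
    runs=$1; shift
    start=$(date +%s)
    i=0
    while [ "$i" -lt "$runs" ]; do
        "$@" > /dev/null 2>&1
        i=$((i + 1))
    done
    end=$(date +%s)
    echo "$runs runs took $((end - start))s"
}

# Example: fetch the front page 100 times (placeholder URL).
# bench 100 curl -s http://staging.example.com/
```

Running it a few times in a row also shows you how much natural variance there is before you attribute any difference to a config change.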