July 18th, 2017
In Technology
If you enjoy this article, see the other most popular articles
When Nginx becomes the bottleneck
(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com, or follow me on Twitter.
This makes me sad. I love Nginx as a reverse proxy, so long as it is invisible and I never have to think about it. Realizing that it, too, can be a source of problems really is discouraging.
47,135 connections in TIME_WAIT! Moreover, ss indicates that they are all closed connections. This suggests the server is burning through a large portion of the available port range, which implies that it is allocating a new port for each connection it’s handling. Tweaking the networking settings helped firefight the problem a bit, but the socket range was still getting saturated.
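A quick way to reproduce the kind of check described above on a Linux box is a one-liner with ss, plus a look at the ephemeral port range the kernel draws from. This is a minimal diagnostic sketch, not the exact commands from the article:

```shell
# Count sockets sitting in TIME_WAIT -- the state the author saw
# tens of thousands of when the port range was getting saturated.
ss -tan state time-wait | wc -l

# Show the local (ephemeral) port range available for outbound
# connections; each proxied request without keepalive burns one.
cat /proc/sys/net/ipv4/ip_local_port_range
```

If the first number approaches the width of the second range, the box is close to ephemeral-port exhaustion, which is exactly the firefighting scenario the quote describes.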
After some digging around, I uncovered some documentation about an upstream keepalive directive. The docs state:
Sets the maximum number of idle keepalive connections to upstream servers that are retained in the cache per one worker process
This is interesting. In theory, this should minimise connection wastage by pumping requests down connections that have already been established and cached. The documentation also states that the proxy_http_version directive should be set to "1.1" and the "Connection" header cleared. On further research, this is clearly a good idea, since HTTP/1.1 reuses TCP connections far more efficiently than HTTP/1.0, which is Nginx's default when proxying.
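Putting those directives together, a minimal sketch of the setup described above might look like this (the upstream name and backend address are illustrative, not from the article):

```nginx
upstream backend_nodes {
    server 127.0.0.1:3000;  # illustrative backend address
    keepalive 64;           # idle keepalive connections cached per worker
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_nodes;
        proxy_http_version 1.1;          # proxying defaults to HTTP/1.0
        proxy_set_header Connection "";  # clear the Connection header so
                                         # upstream connections stay open
    }
}
```

With this in place, each worker reuses up to 64 established connections to the backends instead of opening (and later leaving in TIME_WAIT) a fresh one per request.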
Post external references
- 1
https://engineering.gosquared.com/optimising-nginx-node-js-and-networking-for-heavy-workloads