
denji / nginx-tuning

NGINX tuning for best performance


Top Related Projects


The official NGINX Open Source repository.

Nginx HTTP server boilerplate configs

How to improve NGINX performance, security, and other important things.

⚙️ NGINX config generator on steroids 💉

A collection of resources covering Nginx, Nginx + Lua, OpenResty and Tengine

A curated list of awesome Nginx distributions, 3rd party modules, Active developers, etc. :octocat:

Quick Overview

The denji/nginx-tuning repository provides a collection of configuration files and scripts to optimize the performance and security of the Nginx web server. It includes various tuning parameters and best practices to enhance the server's efficiency and resilience.

Pros

  • Comprehensive Configuration: The repository offers a wide range of configuration files covering various aspects of Nginx, including performance, security, and optimization.
  • Ease of Use: The project provides clear instructions and documentation, making it easy for users to understand and apply the recommended settings.
  • Community Contributions: The project has received contributions from the community, ensuring that the configurations are up-to-date and address common Nginx-related issues.
  • Adaptability: The configurations can be easily customized and adapted to fit the specific needs of different environments and use cases.

Cons

  • Potential Compatibility Issues: Some of the configurations may not be compatible with older versions of Nginx or specific server environments, requiring additional testing and adjustments.
  • Complexity: The extensive configuration options and settings may be overwhelming for users who are new to Nginx optimization.
  • Maintenance Overhead: Keeping the configurations up-to-date with the latest Nginx versions and security patches may require ongoing maintenance and monitoring.
  • Limited Documentation: While the project provides good documentation, some users may desire more detailed explanations and examples for certain configuration settings.

Code Examples

This project is not a code library, so there are no code examples to provide.

Getting Started

Since this project is not a code library, there are no quick start instructions to include.

Competitor Comparisons


The official NGINX Open Source repository.

Pros of nginx

  • Official repository with comprehensive documentation and support
  • Regular updates and releases with new features and security patches
  • Larger community and ecosystem for plugins and modules

Cons of nginx

  • Less focused on performance optimization out-of-the-box
  • Requires more manual configuration for advanced tuning
  • May have unnecessary features enabled by default for some use cases

Code comparison

nginx:

worker_processes auto;
events {
    worker_connections 1024;
}
http {
    # Default configuration
}

nginx-tuning:

worker_processes auto;
worker_rlimit_nofile 100000;
events {
    worker_connections 4000;
    use epoll;
}
http {
    # Optimized configuration
}

Summary

nginx is the official repository for the popular web server, offering comprehensive features and regular updates. nginx-tuning focuses on performance optimization, providing pre-configured settings for improved efficiency. While nginx offers a broader range of features and support, nginx-tuning may be more suitable for users seeking immediate performance enhancements without extensive manual configuration.

Nginx HTTP server boilerplate configs

Pros of server-configs-nginx

  • More comprehensive and well-documented configuration examples
  • Regularly updated with contributions from a larger community
  • Includes specific configurations for various web applications and frameworks

Cons of server-configs-nginx

  • May be overwhelming for beginners due to its extensive nature
  • Requires more manual customization to fit specific use cases
  • Some configurations might be overkill for simple deployments

Code Comparison

server-configs-nginx:

location ~* \.(?:manifest|appcache|html?|xml|json)$ {
    expires -1;
}

location ~* \.(?:rss|atom)$ {
    expires 1h;
}

nginx-tuning:

location ~* \.(jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
    expires 1M;
    access_log off;
    add_header Cache-Control "public";
}

The server-configs-nginx example shows more granular control over caching different file types, while nginx-tuning focuses on aggressive caching for static assets. Both approaches have their merits depending on the specific requirements of the project.

How to improve NGINX performance, security, and other important things.

Pros of nginx-admins-handbook

  • More comprehensive coverage of NGINX configuration and best practices
  • Includes detailed explanations and examples for various NGINX features
  • Regularly updated with new content and community contributions

Cons of nginx-admins-handbook

  • May be overwhelming for beginners due to its extensive content
  • Requires more time to navigate and find specific information

Code Comparison

nginx-admins-handbook:

server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}

nginx-tuning:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

Both repositories provide NGINX configuration examples, but nginx-admins-handbook offers more detailed explanations and covers a wider range of topics. nginx-tuning focuses primarily on performance optimization, while nginx-admins-handbook provides a more comprehensive guide to NGINX administration.

nginx-admins-handbook is better suited for users seeking in-depth knowledge and best practices, while nginx-tuning may be more appropriate for those specifically interested in performance tuning. The choice between the two depends on the user's needs and level of expertise with NGINX.

⚙️ NGINX config generator on steroids 💉

Pros of nginxconfig.io

  • User-friendly web interface for generating NGINX configurations
  • Provides a visual approach to configuring NGINX, making it accessible to beginners
  • Offers real-time preview and downloadable configuration files

Cons of nginxconfig.io

  • Limited to predefined configuration options
  • May not cover all advanced NGINX tuning scenarios
  • Requires an internet connection to use the web interface

Code Comparison

nginxconfig.io generates configuration files based on user input:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    # ... (generated configuration)
}

nginx-tuning provides manual tuning recommendations:

worker_processes auto;
worker_rlimit_nofile 100000;
events {
    worker_connections 4000;
}

Summary

nginxconfig.io offers a user-friendly approach to NGINX configuration with its web interface, making it accessible to beginners. However, it may lack advanced tuning options. nginx-tuning, on the other hand, provides detailed manual tuning recommendations for performance optimization but requires more technical knowledge to implement. The choice between the two depends on the user's expertise level and specific configuration needs.

A collection of resources covering Nginx, Nginx + Lua, OpenResty and Tengine

Pros of nginx-resources

  • Comprehensive collection of Nginx resources, including articles, books, and tools
  • Well-organized structure with categorized sections for easy navigation
  • Regularly updated with new resources and contributions from the community

Cons of nginx-resources

  • Lacks specific configuration examples or tuning recommendations
  • May overwhelm users with the sheer amount of information provided
  • Requires users to explore external links for detailed information

Code comparison

nginx-tuning provides specific configuration examples:

worker_processes auto;
worker_rlimit_nofile 100000;
events {
    worker_connections 2048;
    use epoll;
    multi_accept on;
}

nginx-resources doesn't include code snippets but offers links to resources:

## Tutorials
- [NGINX Fundamentals](https://github.com/g0t4/course-nginx-fundamentals)
- [Nginx Admin's Handbook](https://github.com/trimstray/nginx-admins-handbook)

Summary

nginx-resources is a comprehensive collection of Nginx-related resources, offering a wide range of information for users of all levels. It excels in providing a curated list of external resources but lacks specific configuration examples. On the other hand, nginx-tuning focuses on providing concrete configuration recommendations and performance optimization techniques, making it more suitable for users looking for immediate implementation guidance.

A curated list of awesome Nginx distributions, 3rd party modules, Active developers, etc. :octocat:

Pros of awesome-nginx

  • Comprehensive collection of NGINX resources, including tutorials, modules, and tools
  • Well-organized structure with categorized sections for easy navigation
  • Regularly updated with contributions from the community

Cons of awesome-nginx

  • Lacks specific configuration examples for performance tuning
  • May be overwhelming for beginners due to the large amount of information

Code Comparison

nginx-tuning provides specific configuration examples for performance optimization:

worker_processes auto;
worker_rlimit_nofile 100000;
events {
    worker_connections 2048;
    use epoll;
    multi_accept on;
}

awesome-nginx doesn't include direct configuration examples but links to resources:

## Modules

- [Official NGINX modules](https://www.nginx.com/resources/wiki/modules/)
- [3rd party NGINX modules](https://www.nginx.com/resources/wiki/modules/third_party_modules/)

Summary

nginx-tuning focuses on providing specific configuration examples for NGINX performance optimization, making it ideal for users looking to fine-tune their NGINX setup. awesome-nginx, on the other hand, offers a broader range of resources and information about NGINX, including modules, tutorials, and tools. While it may not provide direct configuration examples, it serves as a comprehensive reference for NGINX users of all levels.


README

NGINX Tuning For Best Performance

For this configuration you can use any web server you like; I decided to use nginx because it is what I work with most.

Generally, a properly configured nginx can handle up to 400K to 500K requests per second (clustered). The most I have personally seen is 50K to 80K requests per second (non-clustered) at 30% CPU load; granted, that was on 2 x Intel Xeon machines with HyperThreading enabled, but it works without problems on slower hardware too.

You must understand that this config is used in a testing environment, not in production, so you will need to adapt most of these settings to what works best on your own servers.

First, you will need to install nginx:

yum install nginx   # RHEL/CentOS
apt install nginx   # Debian/Ubuntu

Back up your original configs before you start reconfiguring. Open /etc/nginx/nginx.conf with your favorite editor.

# set worker processes based on your CPU cores; nginx does not benefit from setting more than that
worker_processes auto; # recent versions calculate it automatically

# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FDs, the OS settings will be used (2000 by default)
worker_rlimit_nofile 100000;

# only log critical errors
error_log /var/log/nginx/error.log crit;

# provides the configuration file context in which the directives that affect connection processing are specified.
events {
    # determines how many clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimized to serve many clients with each thread, essential for linux -- for testing environment
    use epoll;

    # accept as many connections as possible, may flood worker connections if set too low -- for testing environment
    multi_accept on;
}

http {
    # cache information about FDs, frequently accessed files
    # can boost performance, but you need to test those values
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # to boost I/O on HDD we can disable access logs
    access_log off;

    # copies data between one FD and other from within the kernel
    # faster than read() + write()
    sendfile on;

    # send headers in one piece, it is better than sending them one by one
    tcp_nopush on;

    # don't buffer data sent, good for small data bursts in real time
    # https://brooker.co.za/blog/2024/05/09/nagle.html
    # https://news.ycombinator.com/item?id=10608356
    #tcp_nodelay on;

    # reduce the data that needs to be sent over network -- for testing environment
    gzip on;
    # gzip_static on;
    gzip_min_length 10240;
    gzip_comp_level 1;
    gzip_vary on;
    gzip_disable msie6;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types
        # text/html is always compressed by HttpGzipModule
        text/css
        text/javascript
        text/xml
        text/plain
        text/x-component
        application/javascript
        application/x-javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;

    # allow the server to close connections with non-responding clients; this frees up memory
    reset_timedout_connection on;

    # request timed out -- default 60
    client_body_timeout 10;

    # if client stop responding, free up memory -- default 60
    send_timeout 2;

    # server will close connection after this time -- default 75
    keepalive_timeout 30;

    # number of requests client can make over keep-alive -- for testing environment
    keepalive_requests 100000;
}

Now you can save the configuration and run one of the commands below:

nginx -s reload
/etc/init.d/nginx start|restart

If you wish to test the configuration first, you can run:

nginx -t
/etc/init.d/nginx configtest

Just For Security Reasons

server_tokens off;
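The directive can live in the http, server, or location context; a minimal sketch at the http level:

```nginx
http {
    # hide the nginx version number in the Server header and on error pages
    server_tokens off;
}
```

Note this only hides the version; removing the Server header entirely requires a third-party module such as headers-more.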

NGINX Simple DDoS Defense

This is far from a complete DDoS defense, but it can slow down some small-scale attacks. This configuration is for a testing environment and you should use your own values.

# limit the number of connections per single IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

# limit the number of requests for a given session
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

# the zone in which we apply the limits above; here we limit the whole server
server {
    limit_conn conn_limit_per_ip 10;
    limit_req zone=req_limit_per_ip burst=10 nodelay;
}

# if the request body size is more than the buffer size, then the entire (or partial)
# request body is written into a temporary file
client_body_buffer_size  128k;

# buffer size for reading client request header -- for testing environment
client_header_buffer_size 3m;

# maximum number and size of buffers for large headers to read from client request
large_client_header_buffers 4 256k;

# read timeout for the request body from client -- for testing environment
client_body_timeout   3m;

# how long to wait for the client to send a request header -- for testing environment
client_header_timeout 3m;

Now you can test the configuration again

nginx -t # /etc/init.d/nginx configtest

And then reload or restart your nginx

nginx -s reload
/etc/init.d/nginx reload|restart

You can load-test this configuration with tsung and, when you are satisfied with the result, hit Ctrl+C, because it can run for hours.

Increase The Maximum Number Of Open Files (nofile limit) – Linux

There are two ways to raise the nofile (max open files / file descriptors / file handles) limit for NGINX on RHEL/CentOS 7+. With NGINX running, check the current limit on the master process:

$ cat /proc/$(cat /var/run/nginx.pid)/limits | grep open.files
Max open files            1024                 4096                 files

And on the worker processes:

ps --ppid $(cat /var/run/nginx.pid) -o %p|sed '1d'|xargs -I{} cat /proc/{}/limits|grep open.files

Max open files            1024                 4096                 files
Max open files            1024                 4096                 files

Setting the worker_rlimit_nofile directive in {,/usr/local}/etc/nginx/nginx.conf alone fails, because the SELinux policy doesn't allow setrlimit. This shows up in /var/log/nginx/error.log:

2015/07/24 12:46:40 [alert] 12066#0: setrlimit(RLIMIT_NOFILE, 2342) failed (13: Permission denied)

And in /var/log/audit/audit.log

type=AVC msg=audit(1437731200.211:366): avc:  denied  { setrlimit } for  pid=12066 comm="nginx" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:httpd_t:s0 tclass=process

Raising the nofile limit without systemd

# /etc/security/limits.conf
# /etc/default/nginx (ULIMIT)
$ nano /etc/security/limits.d/nginx.conf
nginx   soft    nofile  65536
nginx   hard    nofile  65536
# limits.d changes take effect at the next login session or service restart

Raising the nofile limit with systemd

$ mkdir -p /etc/systemd/system/nginx.service.d
$ nano /etc/systemd/system/nginx.service.d/nginx.conf
[Service]
LimitNOFILE=30000
$ systemctl daemon-reload
$ systemctl restart nginx.service

Setting the SELinux boolean httpd_setrlimit to true (1)

This lets the worker processes raise their own fd limits. Leave the worker_rlimit_nofile directive in {,/usr/local}/etc/nginx/nginx.conf and run the following as root:

setsebool -P httpd_setrlimit 1

DoS HTTP/1.1 and above: Range Requests

By default, max_ranges is not limited. A DoS attack can issue many Range requests, impacting I/O and stability.
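One mitigation, assuming clients only need simple range use, is to cap the number of ranges per request:

```nginx
# allow at most one byte range per request;
# max_ranges 0; would disable byte-range support entirely
max_ranges 1;
```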

Socket Sharding in NGINX 1.9.1+ (DragonFly BSD and Linux 3.9+)

Socket type        Latency (ms)   Latency stdev (ms)   CPU Load
Default            15.65          26.59                0.3
accept_mutex off   15.59          26.48                10
reuseport          12.35          3.15                 0.3
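Sharding is enabled per listening socket with the reuseport parameter of the listen directive (a sketch; requires NGINX 1.9.1+ and kernel SO_REUSEPORT support):

```nginx
http {
    server {
        # the kernel distributes incoming connections across per-worker sockets
        listen 80 reuseport;
    }
}
```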

Thread Pools in NGINX Boost Performance 9x! (Linux)

Multi-threaded sending of files is currently supported only on Linux. Without a sendfile_max_chunk limit, one fast connection may seize the worker process entirely.
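A sketch of wiring up a thread pool for blocking file I/O; the pool name and sizes are illustrative:

```nginx
# main (top-level) context: define a pool of 32 threads with a bounded task queue
thread_pool default threads=32 max_queue=65536;

http {
    # offload blocking reads and sends to the pool
    aio threads=default;
    sendfile on;
    # cap each sendfile() call so one fast connection cannot monopolize a worker
    sendfile_max_chunk 512k;
}
```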

Selecting an upstream based on SSL protocol version

map $ssl_preread_protocol $upstream {
    ""        ssh.example.com:22;
    "TLSv1.2" new.example.com:443;
    default   tls.example.com:443;
}

# ssh and https on the same port; these blocks belong in the stream {} context
# (requires ngx_stream_ssl_preread_module)
server {
    listen      192.168.0.1:443;
    proxy_pass  $upstream;
    ssl_preread on;
}

Happy Hacking!

Reference links

  • Static analyzers
  • Syntax highlighting
  • NGINX config formatter
  • NGINX configuration tools

BBR (Linux 4.9+)

modprobe tcp_bbr && echo 'tcp_bbr' >> /etc/modules-load.d/bbr.conf
echo 'net.ipv4.tcp_congestion_control=bbr' >> /etc/sysctl.d/99-bbr.conf
# the fq qdisc is recommended for production; since Linux v4.13rc1+, BBR does its own pacing and no longer strictly requires fq
echo 'net.core.default_qdisc=fq' >> /etc/sysctl.d/99-bbr.conf
sysctl --system
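To confirm the kernel picked up the change, read the values back (a quick check, assuming a Linux host):

```shell
# congestion control algorithm in use for new connections
sysctl -n net.ipv4.tcp_congestion_control
# default queueing discipline
sysctl -n net.core.default_qdisc
```

If bbr is not listed in net.ipv4.tcp_available_congestion_control, the tcp_bbr module did not load.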