Serving your Phoenix app with Nginx

By Jake Morrison in DevOps on Wed 16 May 2018

It's common to run web apps behind a proxy such as Nginx or HAProxy. Nginx listens on port 80, then forwards traffic to the app on another port, e.g. 4000.

Here is an example nginx.conf:

user nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

worker_rlimit_nofile 65536;

events {
  worker_connections 65536;
  use epoll;
  multi_accept on;
}

http {
  # Take the client IP from the X-Forwarded-For header.
  # Note: set_real_ip_from 0.0.0.0/0 trusts that header from any source;
  # narrow it to your load balancer's addresses in production.
  real_ip_header X-Forwarded-For;
  set_real_ip_from 0.0.0.0/0;
  server_tokens off;

  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;

  log_format  main '$remote_addr - $remote_user [$time_local] $host "$request" '
                   '$status $body_bytes_sent "$http_referer" '
                   '"$http_user_agent" "$http_x_forwarded_for" $request_time';

  access_log  /var/log/nginx/access.log main;

  limit_req_zone $binary_remote_addr zone=foo:10m rate=1r/s;
  limit_req_status 429;

  include /etc/nginx/conf.d/*.conf;
}

Here is a vhost for the app, e.g. /etc/nginx/conf.d/foo.conf:

server {
  listen       80 default_server;
  # server_name  example.com;
  root        /opt/foo/current/priv/static;

  access_log  /var/log/nginx/foo.access.log main;
  error_log   /var/log/nginx/foo.error.log;

  location / {
    index   index.html;
    # first attempt to serve request as file, then fall back to app
    try_files $uri @app;
    # expires max;
    # access_log off;
  }

  location @app {
    proxy_set_header Host               $host;
    proxy_set_header X-Real-IP          $remote_addr;
    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header Referer            $http_referer;
    proxy_set_header User-Agent         $http_user_agent;

    limit_req zone=foo burst=5 nodelay;

    proxy_pass http://127.0.0.1:4000;
  }
}
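
If your app uses Phoenix channels, the proxy also needs to pass WebSocket upgrade requests through. A minimal sketch, assuming the socket is mounted at the default /socket path:

location /socket {
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";
  proxy_set_header Host $host;
  proxy_pass http://127.0.0.1:4000;
}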

Proxy settings

The main setting that does the forwarding is proxy_pass.

You can set additional options depending on usage. For an API endpoint, for example, you can tune the buffers and timeouts to give better response times than the defaults, which are sized for more generic web serving:

# Let Nginx handle upstream error responses via its error_page handlers
proxy_intercept_errors on;
# Buffer responses from the app so slow clients don't tie it up
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 256 16k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
# Never spool responses to temp files on disk
proxy_max_temp_file_size 0;
# Allow up to five minutes between successive reads from the app
proxy_read_timeout 300;

Running multiple applications together

It's common to migrate parts of an existing Rails app to Phoenix in order to improve performance. The first step is configuring Nginx to route certain URL prefixes to Phoenix, keeping the rest on Rails, e.g. http://api.example.com/ or /api.
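
A sketch of that routing, assuming Rails listens on port 3000 and Phoenix on port 4000 (the paths and ports here are illustrative):

location /api {
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_pass http://127.0.0.1:4000;
}

location / {
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_pass http://127.0.0.1:3000;
}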

Beyond that, we need to integrate the applications, e.g. sharing a login session between Phoenix and Rails. This depends on the specific authentication frameworks used by each app.

We can also implement the UI and navigation on Phoenix to match a Rails app, allowing users to seamlessly work between both apps. The only thing the user will notice is that the Phoenix pages are 10x faster :-)

See this blog post on migrating legacy apps or this presentation for details.

High load

Once you start pushing Nginx hard, you will see issues. One of the first things you will hit is the limit on the number of open sockets. A symptom of this is that you see delays on the client side, but your app looks fine: it might take 10 ms to respond, yet the client sees a five-second delay or a 503 error.

What is happening is that the client talks to Nginx, then Nginx talks to your app, but there are not enough file handles available, so Nginx queues the request. The default is often 1024, which is pitifully small. You will need to raise that limit at each step in the chain, e.g. the systemd unit file, Nginx, and the Erlang VM.

Create /etc/systemd/system/nginx.service.d/override.conf with the following contents:

[Service]
LimitNOFILE=65536

Then reload systemd so it picks up the override:

systemctl daemon-reload

In the Nginx config file, set worker_rlimit_nofile to a value less than or equal to LimitNOFILE:

worker_rlimit_nofile 65536;

Then restart Nginx:

systemctl restart nginx

You can verify that it works with cat /proc/<nginx-pid>/limits.
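
The same limits apply to your app. A sketch, assuming the release also runs under systemd: raise LimitNOFILE in its unit file the same way, and set the Erlang VM's port limit to match in vm.args:

# vm.args
+Q 65536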

Running out of TCP ports

After that, you may run into a lack of TCP ports. In TCP/IP, a connection is defined by the combination of source IP + source port + destination IP + destination port. In this proxy situation, all but the source port are fixed: 127.0.0.1 + random + 127.0.0.1 + 4000. There are only 64K ports. The TCP/IP stack won't reuse a port for 2x the maximum segment lifetime, which by default is two minutes.

Doing the math: with ~64K ports and a two-minute wait before reuse, that is only about 65536 / 120 ≈ 550 new connections per second before you run out of ports.

You can also tune the kernel to recycle sockets faster, e.g.:

# Decrease the default timeout for sockets in the FIN-WAIT-2 state
net.ipv4.tcp_fin_timeout = 15

# Allow reuse of TIME_WAIT sockets for new outgoing connections
net.ipv4.tcp_tw_reuse = 1
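
These settings go in /etc/sysctl.conf (or a file under /etc/sysctl.d/) and can be applied without a reboot:

sysctl -p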

The HTTP client may keep the connection open, assuming that there will be another request. Depending on your use case (e.g. an API endpoint), that may not be needed. Shut the connection down immediately by sending the "Connection: close" HTTP header. This is particularly useful when you are dealing with abuse, e.g. DDoS attacks.
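
With Nginx, one way to do this is to disable keep-alive in the API server or location block; a zero timeout makes Nginx close each connection and send "Connection: close" in the response:

keepalive_timeout 0;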

See rate limiting Nginx requests for details about the rate limiting config in this example.

Nginx also has some complex behavior when it runs into errors while proxying.

It can be hard to figure out what is going on, as you don't get much visibility into the proxying. The Nginx business model is to hide the detailed proxy metrics unless you buy their Nginx Plus product, which costs thousands of dollars per server per year. A dedicated proxy server like HAProxy gives more visibility and control over the process.

At a certain point, you have to wonder what value you are getting from the local proxy. If you are only running a single app on your instance, which is common in cloud deployments, you can have Phoenix listen for HTTP traffic directly. That gives you lower latency and less overall complexity.
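
A minimal sketch of the endpoint config for that, assuming an app named foo with a FooWeb.Endpoint module (the names here are illustrative):

# config/prod.exs
config :foo, FooWeb.Endpoint,
  http: [ip: {0, 0, 0, 0}, port: 4000],
  url: [host: "example.com", port: 80]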

In order to listen on a TCP port less than 1024, an app traditionally needs to be started as root. Over the years this has resulted in many security problems. A better solution is to run the application on a normal port such as 4000, and redirect traffic from port 80 to 4000 in the firewall using iptables.
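
A typical rule for that redirect (IPv4, run as root; persist it with your distribution's usual mechanism):

iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 4000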
