By DevOps on Fri 18 May 2018
Just for fun, I decided to benchmark the performance of the elixir deploy template running on a $5/month Digital Ocean Droplet.
Following Saša Jurić's post, I set up wrk on my Mac and on some Digital Ocean instances in the same data center.
I made a simple request function in wrk.lua:
request = function()
    wrk.method = "GET"
    return wrk.format(nil, "/")
end
This just reads the default Phoenix home page; there is no application logic and, more importantly, no database calls.
With zero tuning of Phoenix, from my Mac in Taiwan going to Digital Ocean in Singapore, I got the following results:
wrk -t12 -c12 -d60s --latency -s wrk.lua "http://159.89.197.173"
Running 1m test @ http://159.89.197.173
  12 threads and 12 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    73.06ms   78.84ms   1.18s    96.43%
    Req/Sec    15.63      5.09    20.00     63.53%
  Latency Distribution
     50%   60.49ms
     75%   62.98ms
     90%   72.32ms
     99%  454.03ms
  11099 requests in 1.00m, 24.06MB read
Requests/sec:    184.77
Transfer/sec:    410.20KB
The response time is driven almost entirely by network latency:
ping 159.89.197.173
PING 159.89.197.173 (159.89.197.173): 56 data bytes
64 bytes from 159.89.197.173: icmp_seq=0 ttl=49 time=56.037 ms
64 bytes from 159.89.197.173: icmp_seq=1 ttl=49 time=55.475 ms
64 bytes from 159.89.197.173: icmp_seq=2 ttl=49 time=55.455 ms
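That lines up with a bit of arithmetic: 12 connections each waiting roughly 60 ms per round trip gives a ceiling of about 12 / 0.060 s ≈ 200 requests/sec, right in line with the measured 184.77 requests/sec.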
The actual processing time of Phoenix is in the microsecond range:
May 16 10:07:05 elixir-deploy-template deploy-template[29275]: 10:07:05.777 request_id=5d6bcb89q0jv834pu7d39fmaaq828opg [info] GET /
May 16 10:07:05 elixir-deploy-template deploy-template[29275]: 10:07:05.777 request_id=5d6bcb89q0jv834pu7d39fmaaq828opg [info] Sent 200 in 142µs
May 16 10:07:05 elixir-deploy-template deploy-template[29275]: 10:07:05.781 request_id=mbrp97u5btbvike3057ho7h4ao9j4jfd [info] GET /
May 16 10:07:05 elixir-deploy-template deploy-template[29275]: 10:07:05.781 request_id=mbrp97u5btbvike3057ho7h4ao9j4jfd [info] Sent 200 in 251µs
The machine itself is not working hard at all. The worst-case latency is driven by occasional network glitches, e.g. lost packets.
I did a bit of tuning, changing the log level so that we are not writing each request to the log twice, and also changing the max_keepalive so that we run multiple requests on the same connection. Reusing connections is realistic if you have users doing multiple things on your site. It mainly affects the max latency stats as compared to the average latency.
git diff
diff --git a/config/prod.exs b/config/prod.exs
index 1cf2f5a..7acf2d7 100644
--- a/config/prod.exs
+++ b/config/prod.exs
@@ -14,12 +14,16 @@ use Mix.Config
 # manifest is generated by the mix phx.digest task
 # which you typically run after static files are built.
 config :deploy_template, DeployTemplateWeb.Endpoint,
+  http: [port: 4001,
+    protocol_options: [max_keepalive: 5_000_000]
+  ],
   load_from_system_env: true,
   url: [host: "example.com", port: 80],
   cache_static_manifest: "priv/static/cache_manifest.json"

 # Do not print debug messages in production
-config :logger, level: :info
+#config :logger, level: :info
+config :logger, level: :warn
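As an aside, wrk reuses connections by default (HTTP/1.1 keep-alive), which is why max_keepalive shows up in these numbers at all. If you wanted to measure the opposite extreme, a variant of the script (just a sketch, not something I ran for the results here) can ask the server to close the connection after every response:

-- Variant of wrk.lua: send "Connection: close" so every request
-- pays the cost of setting up a fresh TCP connection
request = function()
    wrk.headers["Connection"] = "close"
    wrk.method = "GET"
    return wrk.format(nil, "/")
end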
Cranking up the concurrency makes things more interesting:
wrk -t200 -c200 -d60s --latency -s wrk.lua "http://159.89.197.173"
Running 1m test @ http://159.89.197.173
  200 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    90.91ms  106.45ms   1.12s    95.02%
    Req/Sec    14.06      5.31    20.00     73.44%
  Latency Distribution
     50%   67.35ms
     75%   73.49ms
     90%   83.67ms
     99%  681.16ms
  162932 requests in 1.00m, 353.22MB read
Requests/sec:   2710.91
Transfer/sec:      5.88MB
Now we are actually making the server work, and it's CPU bound during the run.
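A rough sanity check on the numbers: 200 connections at the ~67 ms median round trip would cap out around 200 / 0.067 s ≈ 3,000 requests/sec, and the measured 2,710 requests/sec is getting close to that, so the bottleneck is shifting from the network round trip to the server itself.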
Next I ran the same test from another Droplet in the same data center:
wrk -t12 -c12 -d60s --latency -s wrk.lua "http://159.89.197.173"
Running 1m test @ http://159.89.197.173
  12 threads and 12 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.54ms    0.91ms  21.62ms   82.76%
    Req/Sec   284.03     29.73   414.00     74.49%
  Latency Distribution
     50%    3.35ms
     75%    3.95ms
     90%    4.48ms
     99%    6.53ms
  203770 requests in 1.00m, 441.75MB read
Requests/sec:   3393.59
Transfer/sec:      7.36MB
ping 159.89.197.173
PING 159.89.197.173 (159.89.197.173) 56(84) bytes of data.
64 bytes from 159.89.197.173: icmp_seq=1 ttl=61 time=1.36 ms
64 bytes from 159.89.197.173: icmp_seq=2 ttl=61 time=0.439 ms
64 bytes from 159.89.197.173: icmp_seq=3 ttl=61 time=0.431 ms
64 bytes from 159.89.197.173: icmp_seq=4 ttl=61 time=0.457 ms
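The same arithmetic applies: 12 connections at a ~3.5 ms average round trip works out to about 12 / 0.0035 s ≈ 3,400 requests/sec, which is almost exactly the measured 3,393.59 requests/sec. Even inside the data center, throughput at this concurrency is still bounded by the round-trip time rather than by Phoenix.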
On the whole, it's quite impressive performance for such a cheap instance.