Our latest project plans to use Sanic, which according to the official benchmarks is considerably faster than Flask. Below are concurrency tests under several deployment setups.
1. Test environment:
CentOS 7, 1 CPU, 1 GB RAM
Python 3.7
2. The page uses a Jinja2 template; for simplicity it renders a single string:
<html>
<head>
<title>Test</title>
</head>
<body>
Hello, {{ name }}
</body>
</html>
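The template can be sanity-checked with plain Jinja2, loading the body from a string instead of a file (a minimal sketch; the `name` value matches what the benchmark handlers pass):

```python
from jinja2 import Environment

# Same substitution as index.html, loaded from a string for a quick check.
env = Environment()
template = env.from_string("Hello, {{ name }}")

# Render with the same value the benchmark handlers use.
rendered = template.render(name="zhanheng")
print(rendered)  # → Hello, zhanheng
```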
3. Flask server code, with a single route that renders the page above:
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    return render_template("index.html", name="zhanheng")

# app.run(host='0.0.0.0', port=5000)  # not used; Gunicorn serves the app
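Before benchmarking, the route can be exercised with Flask's built-in test client. A self-contained sketch that writes the template into a temporary folder, so no particular project layout is assumed:

```python
import pathlib
import tempfile

from flask import Flask, render_template

# Write the template into a temporary folder standing in for templates/.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "index.html").write_text("Hello, {{ name }}")

app = Flask(__name__, template_folder=str(tmp))

@app.route("/")
def index():
    return render_template("index.html", name="zhanheng")

# Exercise the route without starting a server.
with app.test_client() as client:
    body = client.get("/").get_data(as_text=True)
print(body)  # → Hello, zhanheng
```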
4. Sanic server code; like the Flask version it has a single route that renders the page above:
from sanic import Sanic
from sanic import response
from jinja2 import Environment, PackageLoader, select_autoescape

app = Sanic(__name__)
app.config.ACCESS_LOG = False

template_env = Environment(
    loader=PackageLoader('sn', 'templates'),
    autoescape=select_autoescape(['html']),
    enable_async=True
)
template = template_env.get_template("index.html")

@app.route("/")
async def index(request):
    html = await template.render_async(name="zhanheng")
    return response.html(html)

if __name__ == "__main__":  # guard so Gunicorn can import sn:app without starting this server
    app.run(host="0.0.0.0", port=8000, debug=False, access_log=False)
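The `enable_async=True` / `render_async` combination can be tried on its own, without Sanic. A minimal sketch using a string template in place of the `templates/` package:

```python
import asyncio

from jinja2 import Environment

# enable_async=True makes render_async available on compiled templates.
env = Environment(enable_async=True)
template = env.from_string("Hello, {{ name }}")

async def main():
    # Awaitable rendering, as used inside the Sanic handler.
    return await template.render_async(name="zhanheng")

result = asyncio.run(main())
print(result)  # → Hello, zhanheng
```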
5. Flask + Gunicorn (Flask's built-in server is not suited for production, so Gunicorn serves the app):
gunicorn -w 1 -b 127.0.0.1:5000 fn:app
wrk runs with 1 thread (matching the single CPU core); the commands follow the pattern ./wrk -t1 -c100 http://127.0.0.1:5000/ (wrk's run length defaults to 10 s). Watch Requests/sec (requests handled per second) and how the latency Avg changes. Results:
(1) Holding 100 connections open:
Running 10s test @ http://127.0.0.1:5000/
1 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 76.97ms 19.47ms 175.58ms 89.70%
Req/Sec 1.29k 269.92 1.62k 82.11%
12909 requests in 10.05s, 3.36MB read
Requests/sec: 1284.11
Transfer/sec: 342.35KB
(2) Holding 300 connections open:
Running 10s test @ http://127.0.0.1:5000/
1 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 158.29ms 195.86ms 1.68s 92.21%
Req/Sec 1.14k 381.03 1.74k 78.46%
7898 requests in 10.08s, 2.06MB read
Socket errors: connect 0, read 0, write 0, timeout 7
Requests/sec: 783.65
Transfer/sec: 208.92KB
(3) Holding 500 connections open:
Running 10s test @ http://127.0.0.1:5000/
1 threads and 500 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 146.69ms 180.47ms 1.73s 93.73%
Req/Sec 1.16k 374.31 1.59k 80.00%
8091 requests in 10.03s, 2.11MB read
Socket errors: connect 0, read 0, write 0, timeout 8
Requests/sec: 806.81
Transfer/sec: 215.10KB
6. Sanic's built-in server, run directly:
(1) Holding 100 connections open:
Running 10s test @ http://127.0.0.1:8000/
1 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 13.83ms 5.28ms 39.64ms 75.36%
Req/Sec 7.07k 1.57k 9.82k 70.10%
70705 requests in 10.06s, 15.58MB read
Requests/sec: 7029.11
Transfer/sec: 1.55MB
(2) Holding 300 connections open:
Running 10s test @ http://127.0.0.1:8000/
1 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 46.87ms 14.94ms 119.14ms 76.18%
Req/Sec 6.16k 1.66k 9.89k 68.04%
61582 requests in 10.08s, 13.57MB read
Requests/sec: 6110.70
Transfer/sec: 1.35MB
(3) Holding 500 connections open:
Running 10s test @ http://127.0.0.1:8000/
1 threads and 500 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 74.40ms 31.05ms 686.18ms 88.14%
Req/Sec 6.40k 2.03k 10.24k 68.04%
63907 requests in 10.08s, 14.08MB read
Requests/sec: 6339.30
Transfer/sec: 1.40MB
7. Sanic + Gunicorn deployment:
gunicorn sn:app --bind 0.0.0.0:8000 --worker-class sanic.worker.GunicornWorker
(1) Holding 100 connections open:
Running 10s test @ http://127.0.0.1:8000/
1 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 14.88ms 6.87ms 50.60ms 78.31%
Req/Sec 6.66k 1.90k 9.35k 57.29%
66214 requests in 10.00s, 14.59MB read
Requests/sec: 6620.40
Transfer/sec: 1.46MB
(2) Holding 300 connections open:
Running 10s test @ http://127.0.0.1:8000/
1 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 40.62ms 13.18ms 100.96ms 72.12%
Req/Sec 6.94k 1.74k 10.61k 68.75%
68984 requests in 10.01s, 15.20MB read
Requests/sec: 6888.82
Transfer/sec: 1.52MB
(3) Holding 500 connections open:
Running 10s test @ http://127.0.0.1:8000/
1 threads and 500 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 84.19ms 29.41ms 568.45ms 82.97%
Req/Sec 5.61k 1.70k 9.20k 67.01%
56219 requests in 10.09s, 12.38MB read
Requests/sec: 5570.61
Transfer/sec: 1.23MB
8. Sanic behind an Nginx reverse proxy:
server {
    listen 9000;
    location / {
        proxy_pass http://localhost:8000;
    }
}
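The minimal proxy above opens a new upstream connection for every request. A common refinement (a sketch only, not part of the measured setup, so the numbers below do not reflect it) is to enable upstream keepalive:

```
upstream sanic_backend {
    server 127.0.0.1:8000;
    keepalive 64;          # reuse connections to the Sanic worker
}

server {
    listen 9000;
    location / {
        proxy_pass http://sanic_backend;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # drop the default "close" header
    }
}
```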
(1) Holding 100 connections open:
Running 10s test @ http://127.0.0.1:9000/
1 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 25.58ms 8.11ms 85.36ms 84.91%
Req/Sec 3.87k 738.39 5.09k 72.16%
38630 requests in 10.03s, 10.13MB read
Requests/sec: 3850.73
Transfer/sec: 1.01MB
(2) Holding 300 connections open:
Running 10s test @ http://127.0.0.1:9000/
1 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 185.45ms 278.54ms 1.34s 85.19%
Req/Sec 3.33k 827.69 4.85k 61.86%
33460 requests in 10.09s, 8.77MB read
Socket errors: connect 0, read 0, write 0, timeout 33
Requests/sec: 3315.21
Transfer/sec: 0.87MB
(3) Holding 500 connections open:
Running 10s test @ http://127.0.0.1:9000/
1 threads and 500 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 226.41ms 321.52ms 1.51s 83.23%
Req/Sec 3.36k 0.89k 5.25k 69.79%
33529 requests in 10.08s, 8.79MB read
Socket errors: connect 0, read 0, write 0, timeout 232
Requests/sec: 3324.72
Transfer/sec: 0.87MB
Note:
The bare Sanic and Sanic + Gunicorn tests all ran with access logging disabled (enabling it cuts performance sharply; see the appendix).
If you can live without access logs, use Sanic + Gunicorn or run Sanic's built-in server directly.
If you need access logs, use Sanic + Nginx: rely on the Nginx access log and keep Sanic's own access logging disabled.
Appendix:
Sanic with access logging enabled, at 100, 300, and 500 connections:
Running 10s test @ http://127.0.0.1:8000/
1 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 112.04ms 154.61ms 649.85ms 85.08%
Req/Sec 2.45k 1.07k 4.16k 60.87%
18330 requests in 10.08s, 4.04MB read
Requests/sec: 1817.94
Transfer/sec: 410.10KB
./wrk -t1 -c300 http://127.0.0.1:8000/
Running 10s test @ http://127.0.0.1:8000/
1 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 154.51ms 110.71ms 623.44ms 85.63%
Req/Sec 2.47k 1.12k 5.72k 71.23%
20598 requests in 10.04s, 4.54MB read
Requests/sec: 2050.97
Transfer/sec: 462.67KB
./wrk -t1 -c500 http://127.0.0.1:8000/
Running 10s test @ http://127.0.0.1:8000/
1 threads and 500 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 241.63ms 119.20ms 579.68ms 80.16%
Req/Sec 2.33k 1.32k 5.82k 65.71%
19782 requests in 10.05s, 4.36MB read
Requests/sec: 1967.86
Transfer/sec: 443.92KB
With logging enabled, Sanic is still somewhat faster than Flask, but slower than the Sanic + Nginx setup.