FastAPI vs. Fastify vs. Spring Boot vs. Gin Benchmark

01.10.2022

In a previous article, I benchmarked FastAPI, Express.js, Flask, and Nest.js to verify FastAPI’s claim of being on par with Node.js. In this article, I am pitting the champion, FastAPI, against a new set of faster competitors. For each framework, I created an API endpoint that returns 100 rows of data from a PostgreSQL database, serialized as JSON.

Here are the results:

FastAPI + asyncpg + orjson + gunicorn

Running 10s test @ http://localhost:8000/orjson
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.29ms    0.93ms  10.28ms   55.43%
    Req/Sec     2.19k   568.66     3.25k    60.50%
  43575 requests in 10.01s, 333.28MB read
Requests/sec:   4355.30
Transfer/sec:     33.31MB

Fastify + pg

Running 10s test @ http://localhost:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.62ms    1.49ms  20.94ms   87.98%
    Req/Sec     1.10k   165.28     1.31k    76.50%
  21860 requests in 10.01s, 172.30MB read
Requests/sec:   2184.83
Transfer/sec:     17.22MB

Spring Boot + JDBC

Running 10s test @ http://localhost:8080
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.37ms    1.95ms  73.33ms   98.92%
    Req/Sec     3.98k   361.02     5.78k    76.12%
  79653 requests in 10.10s, 609.25MB read
Requests/sec:   7886.63
Transfer/sec:     60.32MB

Spring Boot + JPA

Running 10s test @ http://localhost:8080/posts
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.52ms   17.42ms 134.96ms   90.44%
    Req/Sec    424.58    117.08   737.00    75.50%
  8473 requests in 10.03s, 55.25MB read
Requests/sec:    844.82
Transfer/sec:      5.51MB

Gin + database/sql + lib/pq

Running 10s test @ http://localhost:8080/loadtest
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.31ms    5.76ms  33.29ms   80.44%
    Req/Sec     1.49k   209.14     2.00k    68.50%
  29687 requests in 10.01s, 182.53MB read
Requests/sec:   2966.86
Transfer/sec:     18.24MB

The Rankings

The rankings below also include benchmarks from part 1 of this series.

  1. Spring Boot + JDBC (7886 req/sec)
  2. FastAPI + asyncpg + ujson + gunicorn 4w (4401 req/sec)
  3. FastAPI + asyncpg + gunicorn 4w + orjson (4193 req/sec)
  4. Gin + database/sql + lib/pq (2966 req/sec)
  5. Fastify + pg (2184 req/sec)
  6. Express.js + pg (1931 req/sec)
  7. FastAPI + asyncpg + uvicorn + orjson (1885 req/sec)
  8. FastAPI + asyncpg + uvicorn + ujson (1711 req/sec)
  9. Flask + psycopg2 + gunicorn 4w (1478 req/sec)
  10. Nest.js + Prisma (1184 req/sec)
  11. FastAPI + psycopg2 + gunicorn 4w (989 req/sec)
  12. FastAPI + asyncpg + gunicorn 4w (952 req/sec)
  13. Spring Boot + JPA (844 req/sec)
  14. FastAPI + psycopg2 + uvicorn + orjson (827 req/sec)
  15. Flask + psycopg2 + flask run (705 req/sec)
  16. FastAPI + SQLModel + gunicorn 4w (569 req/sec)
  17. Flask + psycopg2 + gunicorn 1w (536 req/sec)
  18. FastAPI + asyncpg + uvicorn (314 req/sec)
  19. FastAPI + psycopg2 + uvicorn (308 req/sec)
  20. FastAPI + databases + uvicorn (267 req/sec)
  21. FastAPI + SQLModel + uvicorn (182 req/sec)

Conclusion

I initially ran these benchmarks to verify FastAPI’s claims of being on par with Node.js. Along the way, I also wanted to figure out why I wasn’t getting great performance out of FastAPI in a typical scenario involving a database request. I learned a few things from these benchmarks:

  1. FastAPI is not fast out of the box. You have to use an async database driver such as asyncpg to fully take advantage of FastAPI’s speed.
  2. Even with asyncpg, you still have to swap in a faster JSON library (such as orjson or ujson) to push FastAPI’s performance up to Node.js levels.
  3. Serializing raw SQL query results straight to JSON is significantly faster than going through an ORM, which makes sense since you skip the object mapping step.
  4. I had always heard that compiled languages were faster than interpreted ones, though I had never verified it myself. Java and Go were indeed faster than comparable setups in the interpreted languages.

Thanks for reading.