Hacker News

You'd be surprised. I've had countless battles with (junior-wannabe-senior) devs who wanted to use a different framework simply because it is "fast". When you point out that this project will be a huge success if it has 10 req/s, the usual answer is "well it doesn't hurt", when in truth it does - if nothing else, because it diverts discussion from important matters (like consistency of the company's tech stack) to irrelevant ones.

Framework makers are well aware of this. An otherwise great Python framework (based on Starlette) is even named "FastAPI" in an attempt to use this to their advantage (it is great for other reasons, not because of its speed).

Unfortunately, lots of devs are looking for silver bullets when it comes to speed, instead of finding, investigating and removing the actual bottlenecks.



I've always read the FastAPI name as "fast to get going with" rather than raw speed - it's Python, after all.


Also what I thought … and indeed turns out to be the case in actual usage. It's quite fast to get a usable API up and running with FastAPI, starting from zero to the point of making useful requests to the API and getting back useful data. The actual speed of API access itself (the response times for the requests, etc) has never really been an issue I've wrestled with (me not being Twitter, etc. and not needing zillions of requests per second).


That's my take on it as well. FastAPI does use asyncio, which may have speed advantages in some circumstances. But the main takeaway, and the killer feature, is that you build your API just by declaring function signatures.


Is it really that big of a deal that you can do @app.post("/foo") instead of @app.route("/foo", methods=["POST"])?


That's not what anyone is talking about.

Take this example from their docs:

    from typing import Union

    @app.get("/items/{item_id}")
    async def read_item(item_id: int, q: Union[str, None] = None):
        return {"item_id": item_id, "q": q}
You are declaring the types of the parameters, and FastAPI parses and enforces them for you. This saves quite a bit of code, and lets you focus on your business logic. Writing the code this way also allows FastAPI to generate a meaningful description of your API, which can be a schema (such as OpenAPI/Swagger) that tools can use to generate clients, or it can be documentation pages for humans to read. Any good documentation will still require you to write things, but this gets you further, faster than using something like Flask.
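For contrast, here is a rough stdlib-only sketch of the parsing and validation work that declaration replaces. This is a hypothetical helper written by hand, not FastAPI's actual internals:

```python
# Stdlib-only sketch of the parsing/validation work FastAPI's declarative
# handler saves you from writing by hand. Hypothetical helper names.
from typing import Union


def read_item_manually(raw_item_id: str, raw_q: Union[str, None] = None) -> dict:
    try:
        item_id = int(raw_item_id)  # FastAPI infers this from `item_id: int`
    except ValueError:
        # FastAPI would generate a 422 with a structured error body here.
        return {"status": 422, "detail": "item_id must be an integer"}
    return {"item_id": item_id, "q": raw_q}
```

With FastAPI, a request to `/items/abc` gets that 422 response automatically, with no hand-written coercion code.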

This example[0] takes these concepts even further.

Or just glance at the summary and see that it has nothing to do with @app.post.[1]

FastAPI has also properly supported async routes for longer than Flask, from what I understand.

(I've never personally used FastAPI for anything serious, since I have rarely used Python for anything other than machine learning for the past 5+ years, preferring to use Go, Rust, or TypeScript for most things, but I am aware of it, and seeing its claims misrepresented like that is mildly annoying. FastAPI is far more appealing to me than any other Python web framework I've ever seen, and I've only heard good things about it. Based on my experiences in other languages, their approach to writing APIs is absolutely a good one.)

[0]: https://fastapi.tiangolo.com/#example-upgrade

[1]: https://fastapi.tiangolo.com/#recap


It seems we are talking past each other.

Typing is the base for many reasons I love FastAPI, so yes it is useful. And I speak as someone who has used it in production on multiple projects, and even converted some from Flask (but not because of speed).

Far from "fast to get going", starting with FastAPI is actually slower, but it takes you further (as you pointed out). The train of thought that "typing" leads to "fast to get going", which leads to "fast" in the name... let's say I don't buy it.

The link to "performance" is difficult to miss, I don't think that is a coincidence. And it's OK. If this is what matters to devs that much, they would be stupid not to highlight it. I'm actually happy people are using it, even if for the "wrong" reasons.


Properly supporting async seems like the main reason why it's taken off.


You think no one cares about the convenience of having the type system do a lot of the work for you? Or being able to autogenerate client libraries? I find that position confusing.

Proper async support is decently important to me in any language or framework, but in the real world, I haven't often run into other developers who care much about that.


Async in Python is a huge deal, as it is in any language, yes. For reasons ranging from the very real (cutting down on incredible amounts of confusing boilerplate) to the very lame (it's been memed into developer consciousness enough that it becomes a primary yes/no gate for development teams).


I assure you, a large part of using FastAPI at my company was the integration with pydantic for easy validation.


It's not that. FastAPI comes with a way to declare Python types and get, for free:

- URL params, forms and query string parsing and validation

- (de)serialization format choice

- dependency injection for complex data retrieval (this one is very underrated and yet is amazing for identification, authentication, session and so on)

- output consistency guarantees

- API doc generation
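A stdlib-only sketch of the kind of type-driven validation pydantic provides (pydantic itself does far more - coercion, nested models, detailed error reports - this just illustrates the principle with a hypothetical model):

```python
# Minimal sketch of type-driven validation, in the spirit of what pydantic
# does when FastAPI parses a request body. Hypothetical `Item` model;
# stdlib dataclasses only, not pydantic's actual behavior.
from dataclasses import dataclass, fields


@dataclass
class Item:
    name: str
    price: float

    def __post_init__(self):
        # Enforce the declared types at construction time.
        for f in fields(self):
            value = getattr(self, f.name)
            if not isinstance(value, f.type):
                raise TypeError(f"{f.name} must be {f.type.__name__}")
```

Declaring the types once, and having the framework enforce them at every boundary, is what the list above buys you.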


If the latter requires you to build your own mini router to handle each verb separately, then yes: having one handler per path × verb combination greatly improves readability and reduces boilerplate.
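The "mini router" being avoided looks roughly like this in a single-handler style (a sketch with hypothetical names, not any framework's actual API):

```python
# Sketch of the verb-dispatch boilerplate a single catch-all route handler
# needs, versus one function per path-and-verb pair. Hypothetical names.

def items_handler(method: str, payload=None) -> dict:
    # Every handler on this path repeats this dispatch ladder.
    if method == "GET":
        return {"action": "list items"}
    elif method == "POST":
        return {"action": "create item", "data": payload}
    else:
        return {"status": 405, "detail": "method not allowed"}
```

With one handler per verb, each function body contains only its own logic, and the dispatch ladder disappears.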


Maybe - in that case I'm judging them wrongly. But Flask (the king they dethroned) was faster to get going (with fewer features, granted), and their docs feature "Performance" [0] rather prominently.

Note that I don't hold this against them. They simply understand what makes the devs pick them and adjusted their market strategy accordingly. It would be nice if they didn't have to though.

[0] https://fastapi.tiangolo.com/#performance


Flask is faster to get going in what way? Writing types instantly saves you a ton of time and effort right out of the gate.[0]

And if your framework is faster, then of course you're going to mention it. Do you really think Flask or Django wouldn't point out that they were fast, if they were? I'm quite sure they would, since it's not shameful to educate the reader on what your framework offers compared to the competition, but they can't, because they're not.

Your link goes to the very bottom of that page, so is it really prominently featured compared to everything else they're trying to sell you on? It really doesn't seem like it. More convincing would be pointing to their list of "key features" at the top, which does mention performance first, but then quickly focuses back on "Fast to code" and "Fewer bugs".

[0]: https://news.ycombinator.com/item?id=33224324


Flask took the approach of only providing basic functionality and relying on third-party packages for things like OpenAPI docs and the like. FastAPI has a lot more included, which makes common tasks like documenting your API easier and more consistent.


Kinda devil's advocate, but I've met several senior-should-be-junior cargo-culting devs who swear by popular tools that are objectively slower than alternatives, instead of actually taking the time to evaluate the less popular alternatives to see if the lack of surrounding ecosystem will actually affect their project. The result is a death by a thousand cuts, because they auto-pilot to "what is everyone else using?"


Putting aside who-wants-to-be-what, a change in a common tech stack is a serious change with all sorts of implications. Arguments for and against should be carefully considered, and yes "going faster can't hurt" is not an argument.


Mostly agree; if you're a Java shop, maybe stick to Java instead of confusing all of your engineers just because you read on HN how much faster Golang can be. But again, YMMV.


> to see if the lack of surrounding ecosystem will actually affect their project

for projects of any significant size, that answer is a resounding "yes".


I think engineering is more nuanced than that, personally.


it's really only nuanced when you have a small team and a very tightly scoped project.

for anything that can plausibly grow in scope and team size (which, let's be honest here, is most complex projects), it almost never makes sense to go without an existing ecosystem. it becomes difficult to hire, difficult to train, difficult to pass off maintenance, slows down velocity of shipping, makes your team gradually re-invent a worse version of the framework/tooling you initially tried to avoid, etc.

i've been on both kinds of projects. when i build something solo it's a work of art in code size, API consistency, and performance...and that feels truly amazing. but unfortunately it's not something that is feasible with bigger and more diverse-skillset teams. ever-growing scope and shipping features quickly usually means giving up performance and well thought out design.


I agree. Preact, for instance, comes to mind. But it's faster! And? Is it really, in a business web app, and not just a printf("hello world")? Does it matter that much? Do they have as many devs working on it? What about edge cases? More than anything, for this type of change, does your magic new faster thing even have an ecosystem?


The answer to all your questions is "it depends". Context matters. To be honest, Preact would probably be a better fit for most projects using React.


It doesn't help that many tools, frameworks, etc. advertise these kinds of numbers.

I find the reason for using a tool often isn’t what they list first in their technical documentation.

I can understand why someone might think those really are the reasons to use that tool.


> You'd be surprised. I've had countless battles with (junior-wannabe-senior) devs who wanted to use a different framework simply because it is "fast". When you point out that this project will be a huge success if it has 10 req/s, the usual answer is "well it doesn't hurt", when in truth it does - if nothing else, because it diverts discussion from important matters (like consistency of the company's tech stack) to irrelevant ones.

I think that we as an industry don't have the best hindsight.

I'll use enterprise Java as an example of a few common situations:

  - sometimes we go for Spring (or Spring Boot) as a framework, because that's what we know, but buy into a lot of complexity that actually slows us down
  - other times we might look in the direction of something like Quarkus or Vert.X in the name of performance, but have to deal with a lack of maturity
  - there's also something like Dropwizard which stitches together various idiomatic packages, yet doesn't have the popularity and tutorials we'd like
  - people still end up being limited by ORMs, which can speed up development and make it convenient, but have hard to debug issues like over-eager fetching
  - regardless of how fancy and "enterprise" your framework is, people still make data structure mistakes (e.g. iterating over a list instead of using a map)
  - if you've written a singleton app (runs just on a single instance) that's monolithic, your background processes will still slow everything down
And then, people wonder why it's hard to change their enterprise codebase, and wave their hands around helplessly when their app needs at least 2 GB of RAM just to run locally, answering a simple REST request takes close to 20 seconds, and the app issues about 2000 database queries to return a relatively simple list with some data.
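That "2000 queries for one list" failure mode is usually the classic N+1 pattern an over-eager ORM loop produces. A sqlite3 sketch of the difference, with a hypothetical two-table schema:

```python
# Sketch of the N+1 query pattern vs a single JOIN, using an in-memory
# SQLite database. Hypothetical schema for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'A'), (2, 'B');
    INSERT INTO books VALUES (1, 1, 'X'), (2, 1, 'Y'), (3, 2, 'Z');
""")

# N+1: one query for the list, then one more per row -- what a lazy-loading
# ORM loop ends up issuing behind your back.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
queries_issued = 1  # the list query
for author_id, _name in authors:
    conn.execute("SELECT title FROM books WHERE author_id = ?", (author_id,))
    queries_issued += 1  # one extra round trip per author

# Single JOIN: the same data in one round trip.
rows = conn.execute("""
    SELECT a.name, b.title
    FROM authors a JOIN books b ON b.author_id = a.id
""").fetchall()
```

With 2 authors this is 3 queries instead of 1; with 1999 rows it is the 2000 queries described above.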

When people should think about performance, they're instead busy "getting things done", when people should think about "getting things done" they're busy bikeshedding about which new framework would look best on their CV. And we even pick the wrong problems to solve, given that many (but not all) of the systems out there won't really have that stringent load requirements and just writing decent code should be our priority, regardless of the framework/technology/language.

I remember load testing a Ruby API that I wrote: on a small VPS (1 CPU core, 4 GB of RAM), it could consistently (over 30 minutes) serve around 200 requests/second with database interaction for each, which is probably enough for almost any smaller project out there. Doubling those resources by scaling horizontally almost doubled that number, with the database eventually being the limiting factor (which could have been scaled vertically as well). And that is even considering that Ruby is slow, when compared to other options. But even "slow" can be enough, when your code is decently written (and the scale at which you operate doesn't force your hand).


If you cache correctly I think 200 requests per second comes out to something like tens of thousands of active users
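Back-of-envelope, assuming each active user fires a request roughly once a minute (a common rule of thumb, not a measurement):

```python
# Rough steady-state capacity estimate. The 60-second think time per user
# is an assumption for illustration, not a measured figure.
requests_per_second = 200
seconds_between_requests_per_user = 60  # assumed user think time

concurrent_users = requests_per_second * seconds_between_requests_per_user
print(concurrent_users)  # 12000
```

So 200 req/s supports on the order of ten thousand simultaneously active users, before caching even enters the picture.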


Read requests, sure. In that particular instance, I was testing a write heavy workload for my Master's Degree with K6 at the time: https://k6.io/

The idea was to see how performance intensive COVID contact tracking would be, if GPS positions were to be sent by all of the devices using an app, which would later allow generating heat maps from this data, instead of just contact tracing. Of course, I talked about the privacy implications of this as well (the repository was called "COVID 1984"), but it was a nice exercise to demonstrate horizontal scaling, the benefits of containerization and to compare the overhead of Docker Swarm with lightweight Kubernetes (K3s).

So yes, write-heavy workloads are viable with Ruby on limited hardware, and read-heavy workloads can be even easier (depending on who can access what data).


Bet they, the junior devs, can’t even optimise their SQL queries.


Maybe they should use a faster db like nosql

/S


You would think by now that SQL databases would be pretty good at optimizing any query they receive.
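They are pretty good, but only within the physical design you give them - the planner can't use an index you never created. A quick stdlib sqlite3 sketch of the same query before and after adding an index (hypothetical table):

```python
# Sketch: SQLite's query planner falls back to a full scan without an
# index, and switches to an index search once one exists. Hypothetical
# `users` table; stdlib sqlite3 only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# Without an index on `email`, the only available plan is a full scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
# plan detail mentions a SCAN of the table here

conn.execute("CREATE INDEX idx_users_email ON users(email)")
plan_indexed = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
# now the detail mentions the idx_users_email index
```

The optimizer is doing its job in both cases; the junior dev's job is to give it something to work with.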


The last sentence kind of contradicts the preceding ones.

I agree that it’s harmful to distract from what actually matters: the core goal and competencies of the team.

But therefore we should hope devs look for silver bullets to address performance without having to be distracted by it. “It’s out of the box pretty good so I don’t have to think about caching or CDNs or load balancing until later” is deeply valuable.



