
There are no open issues or pull requests on Flask


I really feel like Python has an embarrassment of riches in terms of web frameworks. Django, Flask, and even FastAPI are master classes in how to build great open source projects that will survive the test of time. In my opinion they're a huge reason why Python continues to be a popular language for backend development.


Why not the other way around?

What did Python have relative to other languages that inspired these developers to devote so much energy into their projects?

They could have chosen PHP, Ruby, Node, Go, Java, etc.

If I think back to ~2012 I remember that compared to other languages Python had:

1) Decent package management and "virtualization" (via virtualenv)

2) Nice balance of expressiveness and strictness

3) Was fun to write; hard to explain, but it was the vibe

4) Decent enough performance for most things

5) Lots of developer tooling for debugging and stuff

It felt like the least messy of the scripting languages.

- Everyone knows all the issues PHP had, no need to list them all here

- Ruby was cool but there were a thousand ways to do the same thing

- Node was inheriting that weird web language with all its quirks

- Go was just in its infancy

- Java was the kitchen sink


"Back in 2012", Django was already 7 years old and had taken most of the market share of the first generation of Python web libraries and frameworks. This included that strange beast that was once considered the "standard" Python web system, Zope. Bluntly, Django had a much better development experience overall than the older web options.

(Mind you, the likes of Zope and CherryPy are still out there and under development. You just don't hear about them much.)

Up until about 2010, web dev in Python was converging hard on Django. It took Flask and similar small frameworks to bring back variety, and all of those had to meet the requirement of being better than Django in at least some narrow sense, if only by being much lighter, to get any uptake.


Don't leave Pyramid out of the picture; it powers key infrastructure like PyPI.


Python had PyPI and scientific packages.

It’s an attractive language for people who want to integrate existing packages into a solution instead of creating everything from scratch.

Poorly written software that correctly implements science/physics tends to be more valuable than expertly written software that encodes a science/physics defect.


People who aren’t software engineers don’t really care about exotic language features. They care more about packages, developer experience, documentation, tutorials, and how difficult it is to share code with teammates.


I mentioned this in another comment below, but scientific packages were often ignored when building web services and APIs and that's the context in which Django/Flask were built.

At the time the scientific packages were extremely heavy and best avoided in production environments. Some people did it anyway but removed it once they needed to scale. Simply installing these packages was a huge pain and blocked one's ability to provision new machines, for example.

PyPI was indeed a big benefit. Ruby Gems are also what made Ruby so competitive at the time.


Ruby had all of your points 1-5. I don't think the difference was 'a thousand ways to do the same thing' (Python has many ways to do the same thing also, and Ruby's approach made it 'more fun'), but some of the core libraries - especially things like numpy, pandas, scipy, etc. - that increased performance, simplified a wide variety of tasks, and attracted a broader range of users (mostly the 'scientific community').


This is anecdotal, but my experience was that Ruby was fashionable in the wrong kind of ways.

For example...

Ruby doesn't have primitive data types. Everything, including strings, is just an object. This level of purity is quite nice and "cleaner" in a conceptual sense. Until you encounter a large codebase with hundreds of monkey patches. Or, something I saw a lot, the extending of "base" data types such as String.

I must admit that my memory is a little hazy here, but I remember that it was impossible to find language tooling that allowed me to "jump to definition" of anything I encountered. Ruby was so free-form with so much ambiguity that at the time it just didn't exist. I'm sure this has been resolved by now. In Python we had "Jedi" and other tools that worked really well for this purpose.

You'd be browsing a Ruby codebase, see something like `some_string.foo`, and wonder:

1) Is foo a method or a property? In Ruby a function call doesn't need parentheses if it takes no arguments.

2) Where is foo? I don't remember it being part of the standard library. Where is it defined?

3) If I do find foo, did something else monkey patch it? How would I know?

4) If I need to pass an argument into foo, should I factor it out of String? At what point am I overloading a base class like String with too much functionality?

These are very real questions and the answers are important. Come across enough of these scenarios (remember this is just one example w/ Ruby) and you eventually give up trying to understand the codebase at that level. Everything becomes a black box, everything is magic.

Do these footguns exist in Python? Sort of. You can't extend primitive types and monkey patching doesn't fit so cleanly into a normal program (think of it as "friction"). There was less ambiguity in Python's syntax. And the language community promoted a list of idioms which was helpful for discouraging bad practices.
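To make that "friction" concrete, here's a hypothetical sketch (the `shout` method name is invented for illustration): Python flatly refuses the Ruby-style reopening of built-in types, so any extension has to be a visible subclass.

```python
# Python won't let you attach new attributes to built-in types,
# so the "reopen String" pattern from Ruby isn't available.
try:
    str.shout = lambda self: self.upper() + "!"
except TypeError as exc:
    print("rejected:", exc)

# The idiomatic route is an explicit subclass, which keeps the
# extension visible at every call site that uses it.
class ShoutyStr(str):
    def shout(self) -> str:
        return self.upper() + "!"

print(ShoutyStr("hello").shout())  # HELLO!
```

Because `ShoutyStr` has to be imported and constructed explicitly, "jump to definition" has somewhere to land, unlike a monkey patch applied from who-knows-where.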

These things may seem subtle but they made a pretty big difference at the time.

> especially things like numpy, pandas, scipy, etc

These libraries were generally avoided when building web services, APIs, etc. The context of this discussion is Flask, and the engineers who built it were generally working in the world of live services. Dropbox, Reddit, and other YC companies made heavy use of Python for live-service-type software. Data analytics stuff definitely existed, but in a different context.

The reason numpy and friends were avoided was due to the complexity of installing them. The dependency list was enormous and some of those dependencies needed to be compiled on the fly. Scientific packages also typically shipped with very large datasets to support whatever complex computing they were doing (think training data).

The 'scientific community' played a small part in the creation of the frameworks discussed here. If it played any part, it was more that Python was one of the first languages the creators picked up at university.


Ruby has comparatively crappy documentation (still does, arguably worse now that the Pickaxe is no longer updated), and also had a very weak non-Rails English-language community in comparison to Python (by rumor, it had a much stronger Japanese-language community, but I can't attest to that).

If you were doing something nontrivial where the interesting parts weren't the kind of things Rails/ActiveSupport addressed, those factors alone (before even considering language/library features) made it a lot more of an uphill climb with Ruby.


Go and Node were infants back then

Java was (and still is) insanely heavyweight

Ruby was joined at the hip with Rails, despite being more than capable on its own and much more expressive than Python (IMO). Python is faster, though.

PHP was (and still is) PHP


We were shipping Go services all the way back in 2011! A bunch of my friends were working at other well known companies (of the time) and shipping Go stuff. Service based architecture was pretty popular and people were experimenting a lot.

Walmart picked up Node.js in 2012. Not for their core stuff obviously but they were running a bunch of internal services with it.

LinkedIn was evaluating it alongside Ruby (EventMachine) and Python (Twisted). They ended up picking Node for some of their services.

Highly concurrent green-thread-ish type stuff was all the rage. Go and Node were the front runners. Python was the conservative choice. Ruby was the hipster choice :P


Love Ruby method chaining and .tap(). Ruby was the language I could most "think in": write a lot of stuff, then have it just work the way I figured it would. Python makes me roll my eyes at having to explicitly handle a bunch of random small errors, like it's being overly pedantic. I'm also not a fan of significant whitespace, which is another reason I prefer JSON to YAML (along with always having to look up whether I need a hyphen on a line or not).


Ruby has Sinatra, which is much closer to Flask than to Rails.


> What did Python have relative to other languages that inspired these developers to devote so much energy into their projects?

Not a popular opinion, but Python had Google.


Google App Engine was ahead of its time, but unfortunately not very good if you weren't trying to write a webapp with the intention of scaling it on Google hosting.


Sadly Python has stood still in the interim, focusing mostly on completing the migration from 2 to 3, by which point the competition was much farther along and continuing to innovate at a rapid pace (performance, package management, type system, tooling, etc).


Honestly, I'm having a hard time buying this opinion. Python3 has changed a lot in the last 10 years, and much of those changes have been for the better.


At the time, Python was the best choice. And nowadays, since Python has all these frameworks that solve the problem, there's less demand for an alternative in a different language when you can use the thing that already exists and has over a decade of battle-testing.


Core Python didn't include pip until a few years after that; package management with Python in 2012 was a confusing mess.


Something not often mentioned: WSGI and ASGI do a great job at providing a common, well-thought abstraction to library writers.


I disagree about FastAPI being in this list of frameworks. Django and Flask have great documentation; FastAPI does not. There is no API reference for FastAPI, so often you have to read the source code to understand what options are available and how to use them. Also, the FastAPI explanation about concurrency is terrible: it's filled with emojis and seems to be written for children.


It’s the worst thing I’ve ever seen in documentation. It made it really hard to take the entire project seriously.


I built a production app in Django last year, and I loved a lot of things about it, but I felt some important shortcomings.

Async support was clunky at best... The Django way of doing things has traditionally been to either use a queueing system like Celery or multi-threading, but I find first-class async like in Node is much easier for many typical web tasks (database I/O and calling external services).

I also faced issues with libraries for menial tasks. My setup was Django + React and JWT was poorly supported - the most popular auth library needed quite a bit of manual patching in code to make it all work. This was very surprising... I walked away with the feeling that some of the libs for what should be common use cases nowadays are not "production-grade".

DRF, while powerful and well-documented, also had a decent number of quirks, even for things not that far out at the edges.

Raw deployment on a PaaS like Heroku or Elastic Beanstalk (tried both during the project, ended up settling on the latter) without Docker or K8s was also a bit messier than I'd have liked at times, but ultimately OK.

Things I really liked: models and the ORM (again the occasional niggle at the edges but good 98% of the time), most of DRF, Python itself, Python types, the admin panel, management commands, the custom commands "framework", testing.

But as it stands, with my qualms above, I think for the next project that deserves a full-batteries framework, I might give something like .Net or Spring + Kotlin a chance.


Just a few observations:

- Async... The Django way of doing things... Celery -> it's a background task system which is like bringing a bazooka to kill a fly for what you wanted to do (I assume proxying the request to an external API).

- database I/O and calling external services -> Use gunicorn's gevent worker type, which brings up pseudo-threads.

- JWT was poorly supported -> There's at least 5 battle tested libraries that you can use to implement JWT alone (simplejwt comes to mind first)

- the most popular auth library needed quite a bit of manual patching in code to make it all work -> Which one? There are a few that are standard, and most are easy to customize via global settings.

- Raw deployment on PaaS like Heroku and Elastic Beanstalk -> that's more on the platform rather than Django itself

- DRF -> I agree, DRF has its own way of doing things, and deviating from it can give you massive headaches.


> it's a background task system which is like bringing a bazooka to kill a fly for what you wanted to do

We actually used it for things that were close to its purpose. Mainly scheduled background tasks. However, from what I could see in the Django world, there's sometimes a temptation to use it for heavy CPU- or network-bound tasks.

> Use gunicorn gevent worker type that brings up the pseudo threads

My point still stands, more complex than async in Node.js, C# or Spring for example.

> There's at least 5 battle tested libraries that you can use to implement JWT alone (simplejwt comes to mind first)

Exactly what we used: dj-rest-auth (the supported one) with simplejwt plugin. And we encountered problems on a very simple use-case: simply sending the JWT to the client. I had to patch some code found in an obscure GitHub issue for the repo, which I cannot find right now, otherwise I would link to the issue itself.

> that's more on the platform rather than Django itself

Yes, mostly, I don't blame Django itself, but support for deployment is important for me and at the end of the day I do factor it in.

> DRF -> I agree, DRF has its own way of doing things and deviating from them can give you massive headaches.

I would add, sometimes the way of doing things was not that clear... It was not always clear whether the convention was to annotate some data or do some intermediate computations in the models or in the serializers, and opinions on the internet varied wildly. But overall that comes with the territory in software development, again I don't think DRF itself was to blame, it does many things extremely well, and does offer many clear conventions.


> the most popular auth library needed quite a bit of manual patching in code to make it all work

Which one did you use?


It was dj-rest-auth, the version that continues to be supported, with the jwt plugin.


Definitely, the biggest hiccup has been the py3 migration. But generally, once a framework is stable on a major version, it seems to need very little upkeep (though changes for new features can definitely improve the functionality).

As to Flask, I would love to see it reach the milestone Openbox has achieved of "being done": only really updated for bug and security fixes or base-language feature updates. As a "micro framework", it has perfectly filled its niche.


Sadly, if it were “done”, a lot of devs would complain that it was dead and stop using it.


> Sadly, if it were “done”, a lot of devs would complain that it was dead and stop using it

Both pessimistic and incorrect.

You can observe the vast number of libraries across all ecosystems that are in use, as reliable libraries, well into functional obsolescence. Not to say some niche software doesn't suffer the fate of "nothing new, so it's dead" but that's not the norm.

Web frameworks are never feature-complete, per se. There are always new technologies and workflows to support. Frameworks are usually considered dead when the development halts in exhaustion or in transition to another project (as individuals, the development team members drift to these modalities), not when they are "done" by some finite collection of features.


Django was the first web framework I took the time to learn about a decade ago.

I've learned a lot of different languages and done various types of projects, but whenever I get free time to start a personal project that has a web component in my favorite language at the time, I'm always a little bit sad I inevitably have to build out something that was just included in Django a decade ago.


Why not build the web component in Django, and the rest in your language of choice?


Node.js is still in bad shape in this regard, in my opinion. There are many frameworks, but most of them don't work stably or their features are not fully settled. Maybe the problem is directly related to JavaScript; I don't know.


Not JavaScript but the way Node was originally designed.

When your business application reaches a point of "non-triviality" (for lack of a better term) you'd (hopefully) realize that you need to ditch Node ASAP or face growing pains.

To understand why requires some historical context.

During the time when Node was becoming popular most scripting languages had weird bottlenecks at the request level.

PHP: When a request was received, your whole application would load into memory. You know how an application might read a config file on startup? Yea, every single request would load that file. There was no concept of "global variables across all requests". That's one OS process per request. Sometimes process forking was used to make that faster but it didn't matter that much because your entire application needed to load for every request to be handled.

Python: You'd use a toolkit like uWSGI to launch N processes when your server started. The processes would stay up and continue to serve requests, so you avoided the overhead of initializing your app on every request, but each in-flight request still tied up a whole process (and the GIL meant threads didn't buy you much).

Ruby: Basically the same as Python.

Java, C++, Go, etc: Too slow to develop in, too complicated for rapid iteration. Type systems scary. Weak dynamic typing fun. Productivity was king and you needed some time in the hard languages to git gud.

Okay, so the fundamental problem with PHP/Python/Ruby was that you needed to tie up a single OS process per request. Let's say you wrote an HTTP API that fetched the current time from some external service. While your HTTP handler was waiting on that fetch, the OS process would be STUCK. It's sitting around doing nothing, while holding your whole application state in memory, just for that one request you're serving.

This wasn't actually a problem, not really. You could serve a ridiculous number of web requests using this model. Computers are fast! Until... you needed concurrency... like a chat app. Because in a chat app you need to keep a TCP connection open for every single user in the system. 500k active users? 500k OS processes just sitting there doing nothing, with all of your application state in memory. Not a good fit for PHP/Python/Ruby, where every open request was tying up all those resources.

So how would you solve it in C++/Java/etc? Easy, you'd use non-blocking IO. With some fancy connection pooling, threading, and use of the epoll_wait syscall (or whatever) you could handle all 500k users with a SINGLE OS process. That's because your chat app is really just routing bytes between client applications. Most of the time is spent in I/O, not in your application.
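The same model is reachable from Python's stdlib via the `selectors` module (which wraps epoll/kqueue). A sketch of the idea, an echo server where one process multiplexes every connection:

```python
import selectors
import socket

# One process, one selector: the OS tells us which of the many
# registered sockets are readable, so idle connections cost almost
# nothing while they wait.
sel = selectors.DefaultSelector()

def accept(listener: socket.socket) -> None:
    conn, _addr = listener.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn: socket.socket) -> None:
    data = conn.recv(4096)
    if data:
        conn.sendall(data)  # route the bytes straight back
    else:
        sel.unregister(conn)
        conn.close()

def serve(listener: socket.socket) -> None:
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, accept)
    while True:
        for key, _mask in sel.select():
            key.data(key.fileobj)  # dispatch to accept() or echo()
```

The loop only does work when the kernel reports a ready socket; the other 499,999 connections are just entries in the selector's table.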

Except non-blocking IO is not easy. Threading is not easy. Connection pooling is not easy. Type systems, code compilation, etc etc are not easy.

Enter NodeJS.

The premise is simple. Node is a single-threaded runtime environment that does all IO using non-blocking IO. All accessible via a well-known language called JavaScript. No memory management, no types, and functions are first-class primitives. Nice. Oh, and don't try to calculate the Nth prime number, because that'll block your whole single-threaded application even if it's serving 500k requests.

All of the early demos of Node (that I saw) were basically chat apps. Applications with very little business logic that spend most of their time doing IO. It was more of a glorified router of bytes...

But then people realized that CPUs are actually really fast and you could start modeling basic web apps. All you're doing in most web apps is pulling some data out of a DB, decorating that data, and pushing it out of NodeJS. Single threading is no problem. So people started building on it... and building on it... until they needed to scale for real.

Some unfortunate souls didn't realize any of this and built complex business applications in Node. They struggled a lot with the single-threaded nature of the environment. That's when you started seeing things like Node's cluster module, which forks extra processes under the hood to distribute some of that CPU load. Over time better alternatives came along.

So the reason you don't see big complex frameworks in Node is because Node doesn't need them. Using a big complex framework means you're shoving way too much business logic into a technology that wasn't designed for it.

Express and Koa are basically peak Node "frameworks". They're effectively just syntax sugar for cleanly routing your requests somewhere else. Need something more than that? Look elsewhere or you'll regret it later!

EDIT: Fixed a bunch of typos. Did not expect to write so much.


The larger the codebase, the greater the need for stronger typing.

When dealing with JS, this typing has to live in the developer's head. Too hard and too unreliable.

TypeScript seems to have its heart in the right place, though.


Call me old, but I remember in the early 00s just how unbelievably fragmented and difficult it was to use Python on the web.

And then you had PHP, where you didn’t need any middleware and could just drop a file into your webroot and it just worked.


After a cursory browse of the last few months of issues, this looks legitimate and not even a case of "ornery maintainers close everything as won't fix." (Not that it isn't their right to do so, but I did wonder.)


Yeah, 21 PRs (from 8-10 different contributors) and 32 issues closed during the last month: pretty great stats for a FOSS project that seems to be maintained mainly by just one or two developers.


It speaks to organization.

When I see a project this well kept I'm likely to assume there's a plan and it is being efficiently executed on.

When a project is a huge backlog of unattended issues and PRs, it is much more likely that progress is slow and there's duplication of effort.


I've used Flask for years and it's been the least troublesome / biggest thing in my stack through everything. Every time I'm like, "I Wish Flask did that," I go to the docs and find out someone already thought about it, implemented it, and documented it. Coding around Flask is so boring I get excited every time.


I had pitchforks at the ready if I saw any appearance of stale-bot, but you're right, it's legitimate.


Just echoing what a lot of people here have said – I love Flask. It was the first time I ever could relax and just enjoy programming. It was beautiful, simple, easy, thoughtful. I had been programming for years, and it just felt like a breath of fresh air at the time.


The only problem I have now is FastAPI is so much better I cannot justify continuing with flask :(


What ORM do you use with FastAPI?


SQLAlchemy; I just follow the pattern recommended in the FastAPI tutorial.


I've been working with it lately and I like that it (mostly) gets out of your way. There are a few ways to get yourself into a hole as a project grows, but for the most part it's tough to find a more concise way to build a simple and approachable REST app.


Coming from Java, Flask felt effortless. Import the package, tack a decorator onto whatever function you want, and voila, you have a REST API? Fantastic! It kind of helps that a lot of the complexity is shoved into the application server, like gunicorn (which is roughly equivalent to Java's Tomcat?).
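For anyone who hasn't tried it, the whole experience really is about this small (a sketch assuming Flask is installed; the route and payload are invented for illustration):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# One decorator turns a plain function into an HTTP endpoint; the
# process/worker model lives in the server (gunicorn etc.), not here.
@app.route("/api/ping")
def ping():
    return jsonify(status="ok")

# Run with:  flask --app this_module run   (or point gunicorn at app)
```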


Ironically, what you've written there applies to modern Java frameworks too.


You should also take a look at Bottle. It's like mini-Flask, except the library is a single .py file. Great for doing one-off web dashboards, embedded web UI etc.


Fully developed Flask projects tend to be significantly more hairballish and custom compared to something like Django, but it's extremely hard to kick the habit and convenience once something has initially been prototyped in Flask (or Bottle, same deal). Commercially I'd much rather encounter an existing Django project, but at home 90% of the time I'll reach for Flask first, the rest being cases where there are some off-the-shelf Django components that will definitely save a ton of time, and the ceremony is worth paying.


I'm not sure it's that hard. I had a commercial app based on Flask, about 20 endpoints and 1MB of source code, and it took me all of one day to port it to FastAPI. IMHO Flask is quite elegant if used properly.


Flask is tight and elegant; the parent's point is that unlike Django, which is far more batteries-included, large Flask projects end up half-implementing Django, but worse.

And it's no dig at Flask specifically; you can play this game with Django too: large Django projects end up half-implementing ActiveJob and ActiveSupport, but worse.


I've seen enough "hairball" Django projects in my time. That's not to knock either Flask or Django (or FastAPI for that matter), they all have their strengths and weaknesses, but there is little defence against a developer determined to avoid best practices and reading the documentation even for a framework as mature and well-documented as Django.


I've been doing similar projects (ecommerce apps) with both Django and Flask. I would choose Django over Flask anytime. Much less fiddling.


I love Flask so much. Along with requests, it is one of the two most useful things about Python for me, and among the things I typically reach for Python for.


Fully seconded, I just want to add for those of you in the asyncio world, aiohttp and FastAPI provide nearly identical APIs to requests and Flask (respectively), and for those of you familiar with these tools, they're a great way to be productive with asyncio almost immediately.


What's the advantage over using Flask's built-in support for async described here?

This is an honest question; I don't know much about python async yet.


I wasn't aware of this support! That would certainly be a more natural choice if you were already familiar with Flask.

To me the biggest advantage of FastAPI is the excellent integration with type hinting. It permeates the entire framework and makes things really productive.

When I saw how you could use Pydantic models to define the schema of your request and response body, and then have that autogenerated into documentation - there was no going back.

But I would like to stress that, though it has displaced Flask in my workflow, I think of FastAPI as a spiritual successor to Flask, and would never say an ill word about Flask.


Another advantage of FastAPI is the pydantic integration for data validation. That being said, I think FastAPI needs to find a community governance model because there are a lot of long-standing issues that the community has tried to fix only to find those fixes languishing with no official response.


Flask's async support still uses individual workers for its "async views". So suppose your DB has a bad moment and freezes for 20 seconds. All your Flask worker threads become frozen on this IO; say you have 50 of them. Your backend will time out on those 50 requests and won't even accept the 51st. Depending on how you're hosted, you may begin to autoscale aggressively because the (limited) worker threads on every node are locked. If you get lots of requests, you'll soon find yourself running hundreds of nodes, all of them waiting for the DB to get unstuck, and costing you money.

With a proper async framework, you will keep accepting all the requests as they come -- a single node can take thousands of those requests and time out on them gracefully, as they are just lightweight entries in the event loop, and you can have lots of those compared to # of worker threads.

Of course, a fully async framework requires all your code to be fully async, which in practice means minimizing the number of dependencies, since it's hard to trust them. And there are problems with SDKs, like on GCP, where Google keeps lacking async support (!) for most of their things. So with the "hybrid" async you get from Flask you can still choose to use sync code for some things and async for others.
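The "lightweight entries in the event loop" point can be sketched with stdlib asyncio alone (the frozen DB is simulated here with a 30-second sleep; all numbers are invented for illustration):

```python
import asyncio

async def handle_request(i: int) -> str:
    try:
        # Simulate a query against a database that has frozen; the
        # coroutine just waits, costing no thread or process.
        await asyncio.wait_for(asyncio.sleep(30), timeout=0.05)
        return "ok"
    except asyncio.TimeoutError:
        return "timed out gracefully"

async def main() -> list[str]:
    # A single process "accepts" 10,000 concurrent requests at once.
    return await asyncio.gather(*(handle_request(i) for i in range(10_000)))

results = asyncio.run(main())
print(results.count("timed out gracefully"))  # 10000
```

All 10,000 requests are accepted and time out cleanly in well under a second of wall time, with no worker pool to exhaust; the equivalent with 50 blocking worker threads would stall at request 51.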


Flask still uses WSGI while other options use ASGI or other async interfaces.


Try httpx if you want an async web client with an API nearly identical to requests'.


I learned about this thanks to you. Looks great.


Interesting. I have used Quart in the past for “Flask with asyncio support” and it was a great experience. Everything even played well with an asyncio MODBUS library that was part of the same project.


Agreed with both of these. Flask and HTMX has been an awesome combination.


This is the first time I hear of HTMX. Thanks for the heads up on it.


HTMX has been great because I've just added a single endpoint in Flask that HTMX pings and makes it feel and respond like a single page application. You can have a flask endpoint, use Jinja to create the HTML and plug it into your page with HTMX. really nice and simple.


Flask + HTMX is the new hot sauce.


Really anything + htmx. That's part of the beauty of it.


Add SQLAlchemy to that list!


It's just the right level of being able to get a lot of stuff done but without too much "magic".


Agreed. Personally, I find that too much magic impedes my learning process. So when I was first learning Python, I was able to understand how Flask works fairly quickly. The documentation is also excellent imo.


Flask ain't going anywhere, but I'm a serious FastAPI devotee now.

pydantic validation throwing a 422 with what’s wrong before my endpoint ever gets hit is a fucking superpower.


I'm so spoiled by pydantic/fastapi spelling out exactly where my payload schema is wrong. I was interfacing with some google calendar API and it was constantly giving me a super vague "Bad Request" and I'm messing with the fields until finally I figure out I need a strict datetime string with four-digit timezone, no Z allowed. Like, it's delicious to be told exactly what you need to tweak to get it to accept.


Really awesome feat.

As a side note, not sure if it's just me, but I feel a sort of "tip jar effect" with projects that have no open PRs/issues. I feel like I'd be far less willing to submit one if there's no others there. Like all eyes would be on me if I were to do so. Something about adding your issue to the pile just feels a little more welcoming. Just me?


Well, you can look at history. The jar is never empty.


I've been similarly impressed by the quality of Sequel [1], a database toolkit for Ruby that accomplishes a similar feat. 0 open / 1150 closed issues, 0 open / 672 closed PRs.



My go-to has been a simple Flask app with HTMX to make my sites seem and feel more dynamic, and then deploying the whole thing with Zappa as an AWS Lambda function. It's super simple to add a new endpoint in Flask and ping it with HTMX.


I've been using Flask for 10+ years now, mostly for prototyping webapps, and it's cool to see that the projects which never grew large enough to need something more scalable, or better suited to complexity, are still up and running with minimal maintenance. I think most of the problems I've encountered in Flask programs are really problems with the language it's written in, and the tooling tradeoffs you make in a dynamically typed language, but those are just things you get used to as a developer wielding double-edged swords all over the place.

It just does an incredible job of staying out of the way and never having become some bloated beast which ends up causing problems due to some misplaced voracious appetite for eating as many batteries to include as possible.


I'm not a big Python user but Flask keeps making my radar for being a good product.


Read the code some time! It is a remarkably well done project.