
Grafana Mimir – Horizontally scalable long-term storage for Prometheus

nosequel

Grafana Labs needs to make a convincing comparison chart of some kind between Mimir, Thanos, and Cortex. Thanos and Cortex are both mature projects and both CNCF Incubating projects. Why would anyone switch from those to a new Prometheus long-term storage solution?

*EDIT*: I see from another reply that there is a basic comparison to Cortex here: https://grafana.com/blog/2022/03/30/announcing-grafana-mimir... To the Mimir folks, I'd love to see something similar for Mimir vs. Thanos.

sciurus

It looks like this is a fork of Cortex driven by the maintainers employed by Grafana Labs, done so they can change the license to one that will prevent cloud providers like Amazon from offering it without contributing changes back.

This is interesting, since Amazon offers both hosted Grafana and Cortex today. I was under the impression Amazon and Grafana Labs were successfully collaborating (unlike e.g. AWS and Elastic), but seems like that's not the case.

WraithM

Does AWS provide managed Cortex? Is that just a part of the AWS managed prometheus thing?

sciurus

Yes, Amazon's managed Prometheus is based on Cortex. See the first question at https://aws.amazon.com/prometheus/faqs/

_msw_

Disclosure: I work for AWS, but I don't work on the Amazon Managed Service for Prometheus. I have my own very long held opinions about Free and Open Source software, and I am only speaking for myself.

To me, the AGPLv3 license isn't about forcing software users to "give changes back" to a project. It is about giving users of software the permissions that are necessary for Software Freedom [1] when they access a program over a network. In practice, that means changes often flow "upstream" to copyleft-licensed programs one way or another, but it was never about obligating changes to be "given back" upstream. In my personal opinion, you should be "free to fork" Free and Open Source Software (FOSS). Indeed, the Grafana folks seem to have decided to do that with Grafana Mimir.

Personally, I hope that they accept contributions under the AGPLv3 license, and hold themselves to the same obligations that others are held to with regard to providing corresponding source code of derivative works when it is made available to users over a network. In my personal opinion, too often companies use a contributor agreement that excuses them from those obligations, and also allows them to sell the software to others under licenses that do not carry copyleft obligations. See [2] for a blog post that goes into some detail about this.

If you look at the Cortex project MAINTAINERS file [3], you will see that two of the folks listed currently work at AWS, and that no company other than AWS and Grafana Labs is represented today. I would love to see more diversity among maintainers for a project like this, as I think having too many maintainers from any one company isn't the best for long-term project sustainability.

I think if you look at the Cortex Community Meeting minutes [4], you can see that AWS folks are regularly "showing up" in healthy numbers, and working collaboratively with anyone who accepts the open invitation to participate. There have been some pretty big improvements to Cortex merged lately, like some of the work on parallel compaction [5, 6].

TL;DR: I think it is easy to jump to conclusions about how things are going in a FOSS project that don't hold water once you do some cursory exploration. I think the best way to know what's going on in a project is to get involved!

--

[1] the rights needed to: run the program for any purpose; to study how the program works, and modify it; to redistribute copies; to distribute copies of modified versions to others

[2] https://meshedinsights.com/2021/06/14/legally-ignoring-the-l...

[3] https://github.com/cortexproject/cortex/blob/master/MAINTAIN...

[4] https://docs.google.com/document/d/1shtXSAqp3t7fiC-9uZcKkq3m...

[5] https://aws.amazon.com/blogs/opensource/scaling-cortex-with-...

[6] https://github.com/cortexproject/cortex/pull/4624

justincormack

Their other AGPL projects all have a CLA, and they state you can buy them as part of Grafana Enterprise without the AGPL license (https://grafana.com/blog/2022/03/30/qa-with-our-ceo-about-gr...), so they are not offering symmetric terms to themselves.

fishpen0

> Cortex is used by some of the world’s largest cloud providers and ISVs, who are able to offer Cortex at a lower cost because they do not invest the same amount in developing the project.

> ...

> All CNCF projects must be Apache 2.0-licensed. This restriction also prevents us from contributing our improvements back to Cortex.

I read this as "Amazon has destroyed the CNCF by not playing nice"

CameronNemo

Holy crap I did not know CNCF discriminated against copyleft software.

This really discredits the Linux Foundation as an institution.

netingle

I agree! Which is why I put one in the blog post ;-) https://grafana.com/blog/2022/03/30/announcing-grafana-mimir...

krnlpnc

I'm not seeing a comparison to Thanos

alrlroipsp

Why would you? Parent says it's a comparison of Mimir and Cortex.

mekster

You're forgetting VictoriaMetrics that's presumably the best choice for Prometheus long term storage.

Such a solid solution exists and yet another competitor? Not sure why they didn't just buy VictoriaMetrics and possibly rebrand it.

daviziko9

Agree with you that VictoriaMetrics works like a charm: fast, easy to configure, easy to recover from component crashes (last time I checked Cortex, it was a nightmare to recover the ingesters). For me, it is the best solution for Prometheus long-term storage if you are starting from a clean slate.

But Grafana Labs employs lots of people who have worked on Cortex since its inception at Weaveworks, and has developed strong in-house knowledge about it. So Grafana is fully committed to Cortex (now Mimir) and has developed derivatives for logs (Loki) and traces (Tempo) heavily based on the Cortex model.

smw

Seems like people should throw VictoriaMetrics into comparisons like this, as well?

deepsun

Yea, although making benchmarks properly is no easy task and can be pretty time-consuming, especially if you involve all the contestants for fairness. Vendors are not interested in releasing a benchmark if they don't look good in it.

mr-karan

Folks looking for a solution for storing Prometheus metrics from multiple places should definitely consider exploring VictoriaMetrics.

I'm running a single VictoriaMetrics instance which has 230bn metrics, consuming ~4GB of memory and barely 200m of CPU (it only spikes to ~1.5 cores when it flushes datapoints from RAM to disk). I've previously[1] shared my experience of setting up VictoriaMetrics for long-term Prometheus storage back in 2020, and since then the product has just kept getting better.

Over time, I switched to `vmagent` and `vmalert` as well, which offer some nice little things (for example, did you know you can't break up Prometheus' scrape config into multiple files? `vmagent` does that happily). The whole setup is very easy to manage for an Ops person (as compared to Thanos/Cortex; yet to check out Mimir though!). I've barely had to tweak any of the default configs that come with VictoriaMetrics, and I even increased the retention of metrics from a month to multiple months after gaining confidence in prod.

[1]: https://zerodha.tech/blog/infra-monitoring-at-zerodha/
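To make the multi-file point above concrete, here's a minimal, hypothetical sketch of the workaround Prometheus users typically script themselves: merging per-team scrape-config fragments into the single config file Prometheus expects. The names and structure are mine (not Prometheus' or vmagent's code), and configs are shown as Python dicts standing in for parsed YAML files.

```python
# Hypothetical sketch: merge per-team scrape-config fragments into one
# Prometheus-style config, since Prometheus reads a single config file.

def merge_scrape_configs(base, fragments):
    """Return a new config whose scrape_configs list is the base's jobs
    followed by every fragment's jobs, rejecting duplicate job names."""
    merged = dict(base)
    jobs = list(base.get("scrape_configs", []))
    seen = {job["job_name"] for job in jobs}
    for frag in fragments:
        for job in frag.get("scrape_configs", []):
            if job["job_name"] in seen:
                raise ValueError("duplicate job_name: " + job["job_name"])
            seen.add(job["job_name"])
            jobs.append(job)
    merged["scrape_configs"] = jobs
    return merged

base = {"global": {"scrape_interval": "15s"}, "scrape_configs": []}
team_a = {"scrape_configs": [{"job_name": "api", "static_configs": [{"targets": ["api:9100"]}]}]}
team_b = {"scrape_configs": [{"job_name": "db", "static_configs": [{"targets": ["db:9100"]}]}]}

config = merge_scrape_configs(base, [team_a, team_b])
```

With `vmagent`'s multi-file support, this merge step simply disappears, which is exactly the convenience being described.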

halfmatthalfcat

How does this stack up against https://github.com/thanos-io/thanos, which I've used with pretty good success?

The only criticism I have of Thanos is the number of moving pieces to maintain.

netingle

(Tom here; I started the Cortex project on which Mimir is based and lead the team behind Mimir)

Thanos is an awesome piece of software, and the Thanos team have done a great job building a vibrant community. I'm a big fan - so much so that we used Thanos' storage in Cortex.

Mimir builds on this and makes it even more scalable and performant (with a sharded compactor and query engine). Mimir is multi-tenant from day 1, whereas this is a relatively new thing in Thanos, I believe. Mimir has a slightly different deployment model to Thanos, but honestly even this is converging.

Generally: choosing Thanos is always going to be a good choice, but IMO choosing Mimir is an even better one :-p

AndyNemmity

Okay, but why? I am using Thanos today. It works, it's complex, when it breaks, it's a bit of a challenge to fix, but it happens. It doesn't break often.

It does the job. With Mimir, which is based on Cortex - using either Mimir or Cortex - what benefit am I getting?

I get asked every few months about moving off of Thanos to Cortex - and now Mimir - and I don't have any substantial reason to do so. It feels like moving for the sake of moving.

I need to see some real reasoning as to why I am going to add value to move everything to Mimir.

netingle

Sounds like Thanos is working well for you, so in your position I wouldn't change anything.

There are a bunch of other reasons why people might choose Mimir; perhaps they have outgrown some of the scalability limits, or perhaps they want faster high-cardinality queries, or a different take on multi-tenancy.

Do remember Cortex (on which Mimir is based) predates Thanos as a project; Thanos was started to pursue a different architecture and storage concept. Thanos storage was clearly the way forward, so we adopted it. The architectures are still different: Thanos is "edge"-style IMO, Mimir is more centralised. Some people have a preference for one over the other.

daviziko9

We were struggling with Cortex a couple of years ago, then we tried VictoriaMetrics and haven't looked back. It runs pretty much unattended, with us just monitoring disk space to make sure we still have room to continue pouring in metrics. When a component crashes (not often), it recovers pretty much without us noticing.

notacoward

Multi-tenancy is something that shouldn't be underestimated. A lot of people think it's just a checklist item until (a) they need it or (b) they try to implement it in an existing system. Kudos for making it a day-one feature.

vladvasiliu

While I agree with your point in the general case, would you mind elaborating on the specific case of Prometheus?

My understanding is that the recommended best-practice for Prometheus is to deploy as many of them as necessary, as close to the monitored infrastructure as possible.

What use case would require deploying a single Mimir (and presumably Prometheus) cluster to serve multiple tenants? Why not just deploy a dedicated Prometheus / Mimir stack per client?

witcher

(Bartek here: I co-started Thanos and maintain it with other companies)

Thanks for this - it's good feedback. It's funny you mention that, because we actively try to reduce the number of running pieces, e.g. while we design our query sharding (parallelization) and pushdown features.

As Cortex/Mimir shows, it's hard - if you want to scale out every tiny piece of functionality in your system, you end up with twenty different microservices. But it's an interesting challenge to have - eventually it comes down to trade-offs: in Thanos we aim for simplicity, reliability, and cost, versus the maximum-performance approach of Mimir/Cortex.

pracucci

Mimir has a microservices-based design. However, it supports two deployment modes: monolithic and microservices.

In monolithic mode you deploy Mimir as a single process, and all microservices (Mimir components) run inside the same process. Then you scale it out by running more replicas. Deployment modes are documented here: https://grafana.com/docs/mimir/latest/operators-guide/archit...

eatonphil

There isn't a link to the project on the page (that I could find) so it almost looked like it's not open source. But here it is: https://github.com/grafana/mimir.

notamy

You have to find the "Download" button and click it, it's very non-obvious :< The entire page seems to be designed to funnel you into signing up for their paid service, which makes sense, but still doesn't feel great...

candiddevmike

Recently switched from their cloud service back to on-premise. The cloud version wasn't being updated, and the entire setup experience left a lot to be desired with how you connect their on-premise Grafana agent, especially if you aren't using their easy-button deployment stuff. Also, billing for metrics is insane, as on any given day my metric load may vary between 5-7k or more. This caused some operational overhead, as I was constantly tweaking scrapers to reduce useless metrics.

For $50/mo, you can self host everything easier, cheaper and with more control IMO.

maccard

> For $50/mo, you can self host everything easier, cheaper and with more control IMO.

Can you give an example as to how you could self host a grafana stack for $50/month? On AWS that buys you 4 cores, 8GB memory and 0 storage, and it's certainly not easier than clicking one button on the grafana website.

dewey

The first CTA button on the page "Tutorial" links to a tutorial where the first step is to run the project with Docker. Doesn't really feel like an overly forced funnel to their paid service.

mdaniel

Still AGPL, which I guess makes sense given the rest of their stack is too: https://github.com/grafana/mimir/blob/mimir-2.0.0/LICENSE

MindTooth

How does this compare to https://www.timescale.com/promscale

I’m looking into choosing a backend for my metrics and always open for suggestions.

vineeth0297

Hey!

Promscale PM here :)

Promscale is the open source observability backend for metrics and traces powered by SQL.

Whereas Mimir/Cortex is designed only for metrics.

Key differences:

1. Promscale is light in architecture: all you need is the Promscale connector + TimescaleDB to store and analyse metrics and traces, whereas Cortex comes with a highly scalable microservices architecture that requires deploying tens of services like the ingester, distributor, querier, etc.

2. Promscale offers storage for metrics, traces, and logs (in the future) - one system for all observability data - whereas Mimir/Cortex is purpose-built for metrics.

3. Promscale supports querying metrics using PromQL and SQL, and traces using the Jaeger query interface and SQL, whereas in Cortex/Mimir all you can use is PromQL for metrics querying.

4. The observability data in Cortex/Mimir is stored in an object store like S3 or GCS, whereas in Promscale the data is stored in a relational database, i.e. TimescaleDB. This means that Promscale can support more complex analytics via SQL, but Cortex is better for horizontal scalability at really large scales.

5. Promscale offers per-metric retention, whereas Cortex/Mimir offers a global retention policy across all metrics.

I hope this answers your question!
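Point 5 above (per-metric vs. global retention) can be sketched in a few lines. This is hypothetical illustration code - not Promscale's or Mimir's actual implementation - just to make the distinction concrete: with per-metric retention, each metric's samples age out on their own schedule, falling back to a global default.

```python
# Illustrative sketch of per-metric retention with a global fallback.

def enforce_retention(samples, per_metric_retention, default_retention, now):
    """Keep a sample only while it is younger than its metric's retention
    period (in seconds), falling back to the global default."""
    kept = []
    for metric, ts, value in samples:
        retention = per_metric_retention.get(metric, default_retention)
        if now - ts <= retention:
            kept.append((metric, ts, value))
    return kept

now = 1_000_000
samples = [
    ("debug_requests", now - 7200, 1.0),  # 2h old
    ("debug_requests", now - 60, 2.0),    # 1 minute old
    ("cpu_usage", now - 7200, 0.5),       # 2h old
]
# Keep debug_requests for 1h only; everything else for 30 days.
kept = enforce_retention(samples, {"debug_requests": 3600}, 30 * 86400, now)
```

Under a single global policy, the first sample would survive as long as the others; per-metric retention lets high-churn debug metrics expire early.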

pracucci

Hi. I'm a Mimir maintainer. I don't have hands-on/production experience with Promscale, so I can't speak about it. I'm chiming in just to add a note about the Mimir deployment modes.

> Cortex comes with highly scalable micro-services architecture this requires deploying 10's of services like ingestor, distributor, querier, etc.

Mimir also supports the monolithic deployment mode. It's about deploying the whole of Mimir as a single unit (e.g. a Kubernetes StatefulSet), which you then scale out by adding more replicas.

More details here: https://grafana.com/docs/mimir/latest/operators-guide/archit...

tarun_anand

Thanks... how do we do reporting/dashboards/alerts with Promscale?

Also, any performance benchmarks?

vineeth0297

Promscale supports ingestion of data using Prometheus remote-write for metrics and OTLP (OpenTelemetry Protocol) for traces.

For dashboards, you can use Promscale as a Prometheus datasource for PromQL-based querying and visualising, as a Jaeger datasource for querying and visualising traces, and as a PostgreSQL datasource to query both metrics and traces using SQL. If you are interested in visualising data using SQL, we recently published a blog post on visualising traces with SQL (https://www.timescale.com/blog/learn-opentelemetry-tracing-w...)

Alerts need to be configured on the Prometheus end; Promscale doesn't support alerting at the moment, but expect native alerting from Promscale in upcoming releases.

We have internally tested Promscale at 1 million samples/sec. Here is the resource recommendation guide for Promscale: https://docs.timescale.com/promscale/latest/installation/rec...

If you are interested in evaluating or setting up Promscale, reach out to us in the Timescale community Slack (http://slack.timescale.com/) in the #promscale channel.

SuperQue

One interesting question I have regards global availability.

With our current Thanos deployment, we can tie single geo-regional deployments together with a tiered query engine.

Basically like this:

"Global Query Layer" -> "Zone Cluster Query Layer" -> "Prom Sidecar / Thanos Store"

We can duplicate the "Global Query Layer" in multiple geo regions, each with its own replicated Grafana instances. If a single region/zone has trouble, we can still access metrics in the other regions/zones. This avoids Thanos having any SPoFs for large multi-user (Dev/SRE) orgs.
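The tiered fan-out described above can be sketched as a toy model. The names here are hypothetical (Thanos' real Store API speaks gRPC): a global query layer fans out to per-zone query layers, skips an unreachable zone rather than failing the whole query, and deduplicates series replicated across zones.

```python
# Toy model of a tiered query fan-out: global layer -> zone layers.

class ZoneQuerier:
    def __init__(self, name, series, healthy=True):
        self.name, self.series, self.healthy = name, series, healthy

    def query(self, metric):
        if not self.healthy:
            raise ConnectionError("zone " + self.name + " unreachable")
        return [s for s in self.series if s[0] == metric]

def global_query(zones, metric):
    """Fan out to every zone; skip unreachable zones and dedup series
    that are replicated in more than one zone."""
    results, failed = set(), []
    for zone in zones:
        try:
            results.update(zone.query(metric))
        except ConnectionError:
            failed.append(zone.name)
    return sorted(results), failed

us = ZoneQuerier("us-east", [("up", "host=a", 1.0)])
replica = ZoneQuerier("us-east-replica", [("up", "host=a", 1.0)])
eu = ZoneQuerier("eu-west", [("up", "host=b", 1.0)], healthy=False)

# eu-west is down, but the query still returns the surviving zones' data.
series, failed = global_query([us, replica, eu], "up")
```

This is the property being praised: losing one zone degrades coverage, not availability, of the global query layer.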

ddreier

This is one of my favorite things about Thanos. We run Prometheus in multiple private datacenters, multiple AWS regions across multiple AWS accounts, and multiple Azure regions across multiple subscriptions. We have three global labels: cloud, region, and environment. With Thanos's Store/Querier architecture we have a single Datasource in Grafana where we can quickly query any metric from any environment across the breadth of our infrastructure.

It's really a shame that Loki in particular doesn't share this kind of architecture. Seems like Mimir, frustratingly, will share this deficiency.

bboreham

The typical way to run Mimir is centralised, with different regions/datacenters feeding metrics into one place. You can run that central system across multiple AZs.

If you run Mimir with an object store (e.g. S3) that supports replication then you can have copies in multiple geographies and query them, but the copies will not have the most recent data.

(Note I work on Mimir)

dikei

Sad news for Cortex. With most of the maintainers moving on to Mimir, I fear it's pretty much dead in the water.

netingle

We tried to address this question on the Q&A blog post: https://grafana.com/blog/2022/03/30/qa-with-our-ceo-about-gr...

It doesn't have to mean the end for Cortex, but others will have to step up to lead the project. We've tried to put other maintainers in place to kick-start this.

sciurus

I was going to ask what the migration path was from Cortex to Mimir, but I see you've documented that at https://grafana.com/docs/mimir/latest/migration-guide/migrat... . Thanks for the work you've done to make this easy.

pracucci

This video also shows a live migration from Cortex to Mimir (running in Kubernetes): https://www.youtube.com/watch?v=aaGxTcJmzBw&ab_channel=Grafa...

AndyNemmity

If anything, this makes me less interested in moving from Thanos.

Thaxll

So many solutions to the same problem - how does it compare to VictoriaMetrics?

hagen1778

VictoriaMetrics co-founder here.

There are many similar features between Mimir and VictoriaMetrics: multi-tenancy, horizontal and vertical scalability, high availability. Features like Graphite and Influx protocol ingestion and a Graphite query engine are already supported by VictoriaMetrics. I didn't find references to downsampling in Mimir's docs, but I believe it supports that too.

There are architectural differences. For example, Mimir stores the last 2h of data in the local filesystem (and mmaps it, I assume) and once every 2h uploads it to object storage (long-term storage). VictoriaMetrics doesn't support object storage and prefers to use the local filesystem for the sake of query performance. Both VictoriaMetrics and Mimir can be used as a single binary ("monolithic mode" in Mimir's docs) and in cluster mode ("microservices mode" in Mimir's docs). The set of cluster components (microservices) is different, though.
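The 2h-block model mentioned above can be sketched roughly like this. The structure is hypothetical - not Mimir's actual TSDB code - it just illustrates the idea: recent samples accumulate in a local head block, and once the block spans the full range it is cut and shipped to the object store.

```python
# Illustrative sketch of cutting 2h blocks and "uploading" them.

BLOCK_RANGE = 2 * 60 * 60  # 2h, in seconds

class Ingester:
    def __init__(self):
        self.head = []          # recent samples; local disk in reality
        self.object_store = []  # completed blocks; S3/GCS in reality

    def append(self, ts, value):
        self.head.append((ts, value))
        # Cut and upload the head once it spans the full 2h range.
        block_start = self.head[0][0]
        if ts - block_start >= BLOCK_RANGE:
            self.object_store.append(self.head)
            self.head = []

ing = Ingester()
for ts in (0, 3600, 7200, 7300):
    ing.append(ts, 1.0)
# One completed 2h block has been "uploaded"; the 7300 sample starts the next.
```

The trade-off the comment describes follows from this split: queries over the most recent window hit fast local state, while older data lives in cheap object storage.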

It is hard to say anything about ingestion and query performance or resource usage so far. Since benchmarks from project owners can't be 100% objective, I hope the community will perform unbiased tests soon.

outsb

Given VictoriaMetrics is the only solution I've seen that makes data comparing it to other systems easily accessible as part of its official documentation, it's the only one I pay attention to.

I knew from reading the docs what VM excelled at and the areas it was weak in, long before I ever ran it (and my experience running it matched the documentation). I hate aspirational, marketing-saturated campaigns for deep tech projects, where standards should obviously be higher; it speaks more about the intended audience than the solution, and that's why in this respect VM is automatically a cut above the rest.

cip01

Cortex, Thanos and Mimir all support the "remote-read" protocol (documented in Prometheus: https://prometheus.io/docs/prometheus/latest/storage/#remote...), so external systems (e.g. Prometheus) can read data from them easily.

valyala

It would be great if you could provide a few practical examples for the "Prometheus remote-read" protocol, given its restrictions [1].

[1] https://github.com/prometheus/prometheus/issues/4456

cett

Presumably AGPLv3 is why Grafana would rather develop this than Cortex?

pracucci

Hi. I'm Marco, I work at Grafana Labs and I'm a Grafana Mimir maintainer. We just published a couple of blog posts about the project, including more details on your question: https://grafana.com/blog/2022/03/30/announcing-grafana-mimir... and https://grafana.com/blog/2022/03/30/qa-with-our-ceo-about-gr...

cett

Thank you for your answer. That seems like a reasonable strategy.

mgarciaisaia

The thing I need most right now is a confirmation that it's named after this tweet: https://twitter.com/mmoriqomm/status/1272552214658117638

bbu

i don't get why there's so much hate here.

cortex is a pain to configure and maintain. would be awesome to have mimir address these issues!

jaigupta

This is about Prometheus but Mimir makes it interesting. I can't find any other open source time series database except Mimir/Cortex which allows this much scale (clustering options in their open source version). Our use case will have high cardinality and Mimir seems to fit very well.

Can we use Prometheus/Mimir as a general-purpose time series database? Prometheus is built for monitoring purposes and may not suit general-purpose use the way InfluxDB does (I am hoping to be wrong). What are the disadvantages/limitations of using Prometheus/Mimir as a general-purpose time series database?

valyala

> I can't find any other open source time series database except Mimir/Cortex which allows this much scale (clustering options in their open source version)

The following open source time series databases can also scale horizontally to many nodes:

- Thanos - https://github.com/thanos-io/thanos/

- M3 - https://github.com/m3db/m3

- Cluster version of VictoriaMetrics - https://docs.victoriametrics.com/Cluster-VictoriaMetrics.htm... (I'm CTO at VictoriaMetrics)

> Can we use Prometheus/Mimir as general purpose time series database?

This depends on what you mean by "general purpose time series database". Prometheus/Mimir are optimized for storing (timestamp, value) series, where the timestamp is a unix timestamp in milliseconds and the value is a floating-point number. Each series has a name and can have an arbitrary set of additional (label=value) labels. Prometheus/Mimir aren't optimized for storing and processing series of other value types such as strings (aka logs) or complex data structures (aka events and traces).
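A minimal sketch of that data model (hypothetical code, just to make it concrete): every series is identified by its label set - with the metric name conventionally stored under the `__name__` label, as in Prometheus - and holds (timestamp, float) samples.

```python
# Toy labeled time-series store mirroring the (timestamp, value) + labels
# model described above.

class TinyTSDB:
    def __init__(self):
        self.series = {}  # frozenset of (label, value) pairs -> sample list

    def append(self, labels, ts, value):
        key = frozenset(labels.items())
        self.series.setdefault(key, []).append((ts, float(value)))

    def query(self, matchers):
        """Return samples from every series whose labels include all matchers."""
        want = set(matchers.items())
        out = []
        for key, samples in self.series.items():
            if want <= key:
                out.extend(samples)
        return sorted(out)

db = TinyTSDB()
db.append({"__name__": "http_requests_total", "job": "api"}, 1000, 42)
db.append({"__name__": "http_requests_total", "job": "db"}, 1000, 7)
```

Note the restriction the answer is pointing at: the value slot is a float, which is why logs and traces (strings, nested structures) don't fit this model.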

So, if you need to store time series with floating-point values, then Prometheus/Mimir may be a good fit. Otherwise, take a look at ClickHouse [1] - it can efficiently store and process time series with values of arbitrary types.

[1] https://clickhouse.com/

jaigupta

I meant all Prometheus-based solutions, including Thanos, M3, and VictoriaMetrics. Thank you for your answer.