There are ways to scale Prometheus (look at Thanos), but none of the solutions is really bug-free.
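For context, the usual starting point is the Thanos sidecar running next to each Prometheus instance (thanos sidecar --prometheus.url http://localhost:9090 --tsdb.path /prometheus --objstore.config-file bucket.yml), which uploads TSDB blocks to object storage. A minimal sketch of that bucket.yml - bucket name, endpoint and credentials are placeholders:

    # bucket.yml - Thanos object storage config (S3-compatible)
    type: S3
    config:
      bucket: metrics-blocks        # placeholder bucket name
      endpoint: s3.example.com      # placeholder endpoint
      access_key: "<ACCESS_KEY>"
      secret_key: "<SECRET_KEY>"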

See this PR for example (https://github.com/prometheus/prometheus/pull/18364) - it used to impact a production deployment I worked on. Prometheus, Thanos and even OpenTelemetry are full of these kinds of problems - but at the same time they're the best we have, and we should be grateful they're free and open source.

I'd still choose an open source stack (and contribute to it) rather than go for a proprietary solution - we've all seen what happens with DataDog & co.

Please don't take my words lightly: my team and I worked on a large-scale observability platform, and scalability should not be underestimated - at the same time, DataDog / Splunk prices are simply unjustified. It's ironically cheaper to build a team of engineers that will maintain a sane observability stack instead of feeding the monster(s).

reply
> It's ironically cheaper to build a team of engineers that will maintain a sane observability stack instead of feeding the monster(s).

Can you show the math here? This is a very bold claim, and I’m super curious.

reply
Well, I am running the stack in production right now, but everyone has a different understanding of what that actually means...

Do you have concrete examples of these catastrophic failures? I personally haven't experienced any over the years, but I'm doing very boring and typical stuff, so it wouldn't surprise me if there were still hard edges.

reply
There's a difficult distinction here, you're right.

Technically, even a single server running LAMP as root but taking frontend traffic meets the definition of "in production", but I think we all recognise that it's not the right idea.

What I'm referring to is: should the disk start to have issues, what does Prometheus do? If the scrapers start to stall due to connection timeouts, what does Prometheus do? And if you're doing linear interpolation of data and have massive gaps because you're polling opportunistically, what does Prometheus do?

I'm all about boring technology, but Prometheus assumes too much happy path. It assumes that a single node is enough for time-series data that is used for alerting.

Which it is - at very small scale and with best-effort reliability.

It's not acceptable as soon as lost data could be critically important in diagnosing major issues in billing systems, in actually billing users, or in inferring issues that need to be correlated across multiple systems.

reply
> should the disk start to have issues

If that happens, is Prometheus really the biggest of your worries here? Software breaks left and right when disks disappear from under it - I'm not sure this is either unexpected or unique to Prometheus.

> If the scrapers start to stall due to connection timeouts, what does Prometheus do?

I'm having this "issue" all the time: some of my WiFi-connected (less important) cameras are just barely within WiFi range, and I'm using Prometheus to scrape metrics from them. It seems like a request times out, then the next time it doesn't, and everything just works? What's the issue you're experiencing with this, exactly?
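For what it's worth, what Prometheus does on a timed-out scrape is well defined: the target's up metric is 0 for that interval, the samples are simply missing, and the next scrape happens on schedule. My config for those cameras is roughly this (job name and targets made up for illustration):

    scrape_configs:
      - job_name: cameras              # hypothetical job name
        scrape_interval: 60s
        scrape_timeout: 10s            # must be <= scrape_interval
        static_configs:
          - targets: ['camera-1:9100', 'camera-2:9100']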

> It's not acceptable as soon as lost data could be critically important in diagnosing major issues in billing systems, in actually billing users, or

Wait, what? Billing systems? That stuff would go into your proper database, wouldn't it? Sure, if Prometheus/node_exporter fails or whatever, you won't get metrics out of the host, but again, if those things start failing on that host, the host has bigger issues than "Prometheus sucks at scale".

I was eagerly waiting to be educated about potential gaps in my understanding of Prometheus; instead it seems like you simply don't happen to like the way they do things? I was under the impression they did something wrong or something was broken, but these things just seem like the typical stuff you have to think about for any service you deploy.

reply
Yes, my monitoring system not alerting me when the systems it runs on are failing is the entire problem.

That's not a general "software breaks when disks fail" situation: that's a monitoring system failing at its one job.

Your monitoring system failing silently when your infrastructure is under stress is precisely the failure mode that monitoring exists to prevent.

Zabbix solves this with native HA and self-checks. Prometheus makes it your problem to solve with external tooling, and most people don't, until they need it.
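To be concrete, that "external tooling" usually means the documented HA pattern: two identical Prometheus replicas feeding the same Alertmanager, which deduplicates the alerts. A sketch of the relevant prometheus.yml bits - replica names and the Alertmanager address are placeholders:

    global:
      external_labels:
        replica: prom-a                # prom-b on the second instance
    alerting:
      alert_relabel_configs:
        # drop the replica label so Alertmanager sees identical
        # alerts from both replicas and deduplicates them
        - action: labeldrop
          regex: replica
      alertmanagers:
        - static_configs:
            - targets: ['alertmanager:9093']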

reply
Why wouldn't your monitoring system alert you when metrics suddenly disappear? Sounds like you need a better monitoring system; Prometheus is not gonna magically solve that problem for you. No wonder you were having issues with Prometheus...
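To spell it out, the usual pattern is a couple of rules along these lines (group name and the example metric are placeholders), plus a "dead man's switch": an always-firing alert routed to an external service that pages you when the alert stops arriving - which covers the "monitoring system dies silently" case too:

    groups:
      - name: meta-monitoring           # hypothetical group name
        rules:
          - alert: TargetDown
            expr: up == 0               # scrape failed or timed out
            for: 5m
          - alert: MetricAbsent
            # fires when a series you rely on disappears entirely
            expr: absent(node_cpu_seconds_total)
            for: 10m
          - alert: Watchdog
            # always firing; route to an external dead man's switch
            expr: vector(1)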
reply