No one; you pull an engineer off the production issue to debug the log server, because you need the log server to debug the production servers.
See the problem?
Edit: to be clear, I’m no fan of Datadog and I wish self-hosting were an option. I want this path for our company, but at least on our team we just don’t have enough (redundant) expertise to deploy and manage these systems. We’d have to hire an extra FTE.
If you mean you’re experiencing two totally unrelated issues at the same time, then I don’t think that’s a scenario worth assigning much weight to, since it’s incredibly unlikely.
Half of $30k/mo easily pays for an engineer hired solely to manage such a cluster, someone who works an hour a week unless a pager goes off, if you truly need that level of peace of mind. If you’re hiring for such a position, I have a few rock-star-level folks who would love that job.
The hypothetical problems people imagine for on-prem infrastructure seem really strange to me. I could come up with the same sorts of scenarios for cloud-based SaaS infrastructure just as easily.
In my experience, the systems/tools you need to debug production issues tend to sit unused until the moment you actually need them.
Which means you now need health and uptime monitoring on your log server, since without that it can break silently and no one notices until you need it.
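Even something dead simple covers the basic case: a cron-able probe that alerts through a channel that doesn’t depend on the log server itself. A rough sketch in Python, where the /healthz endpoint and the alert webhook URL are made-up placeholders for whatever your setup actually exposes:

```python
#!/usr/bin/env python3
"""Cron-able liveness probe for a self-hosted log server (illustrative sketch only)."""
import json
import sys
import urllib.request

# Made-up placeholders -- substitute whatever your log server and alerting actually expose.
LOG_SERVER_HEALTH = "http://logs.internal:9000/healthz"
ALERT_WEBHOOK = "https://alerts.example.com/hook"


def main() -> int:
    try:
        with urllib.request.urlopen(LOG_SERVER_HEALTH, timeout=5) as resp:
            if resp.status == 200:
                return 0  # healthy, nothing to do
            reason = f"health endpoint returned {resp.status}"
    except Exception as exc:  # connection refused, timeout, DNS failure, non-2xx, ...
        reason = str(exc)

    # Page someone through a channel that does NOT depend on the log server itself.
    body = json.dumps({"text": f"log server unhealthy: {reason}"}).encode()
    req = urllib.request.Request(
        ALERT_WEBHOOK, data=body, headers={"Content-Type": "application/json"}
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except Exception:
        pass  # best effort; the non-zero exit still surfaces via cron mail / exit-code checks
    return 1


if __name__ == "__main__":
    sys.exit(main())
```

The point isn’t the script; it’s that the watcher has to live outside the thing it watches.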
> The hypothetical problems people imagine for on-prem infrastructure get really strange to me
It really comes down to the people: whether you have the expertise on the team, and whether the team can realistically manage the system long term. It’s typically safer to spend more money on the managed service.
(It’s a safer decision, not necessarily better)
More importantly, with a third-party service I'd be very surprised if both went down at the same time and it wasn't a further-upstream issue like AWS. If it's my own logging service and it went down during a prod outage, I likely didn't properly isolate my logging service in the first place.
Beyond that, and I'm aware this is very much application/company dependent, there are plenty of SaaS companies that offer horrendous or no support no matter what you pay. We used to use Splunk for monitoring and logging. We paid a ton of money because we were handling financial data and needed traceability and reliability. We constantly had to put out fires caused by their unreliable platform. It was not a good experience.
Ultimately, we jumped ship to Prometheus. We pay a fraction of the price and spend less time on it.
The problem is that all these SaaS companies have cut costs so much that their support has been reduced to useless offshore teams at best and a chatbot at worst. They do go down and stop working, and often there's simply nothing you can do. The worst offenders will seize the moment and force you to upgrade your support plan before they'll even talk to you, even if the issue is of their own making.
Unless you're a huge customer already paying them tons of money, expect to receive no support. Your only line of defense, if something happens and you're not a whale, is that some whale is also upset and they actually have their people working on the problem. If you're a small company, a startup, or even mid-size, good luck getting them to care. You'll probably be sent a survey when you don't renew, and you may eventually figure into their risk calculus at some point in the distant future, but only if you represent a meaningful mass of lost customers.
How do you actually calculate the time spent on an internal tool like this? (I’ve never been in management.) Realistically your team will inevitably have some downtime, so maybe some internal tool maintenance can fit in there? It obviously isn’t fully “free”, but it also shouldn’t be “billed” at their full salary, right?
In broad strokes, there are two ways. You can count it as an operational expense, or you can count it as capital (which takes more work but can have some advantages). If you count it as operations, it's just a big red pit you're throwing money into and hoping it offsets a larger operational cost somewhere (which can be hard to quantify). If you count it as capital, you're basically storing all of those hours as an "asset" that then loses value over time (it's kind of like the charge in a battery). The problem is you have to be able to show that this internal tool would, in the case of an acquisition or liquidation, be valued by the new owner at the value you're setting.
The problem there is that people are even more hesitant to trust somebody else's internal tool than their own, so I've seen multiple managers think "I sunk a million dollars into this, so it must be worth something" when in fact they were just running a jobs program for their team.
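To make the capitalization route concrete, here's a toy calculation. Every number is made up, and real capitalization rules are more involved than this:

```python
# Toy illustration of capitalizing internal-tool work -- every number here is made up.
hours_spent = 400            # engineer hours put into the tool this year
loaded_hourly_rate = 120     # salary + benefits + overhead, dollars per hour

capitalized_cost = hours_spent * loaded_hourly_rate         # $48,000 booked as an "asset"

useful_life_years = 3        # how long you claim the tool keeps its value
annual_amortization = capitalized_cost / useful_life_years  # $16,000/year hits the books

# The opex route instead expenses the full $48,000 this year, and the question becomes
# whether it offsets a larger operational cost (say, a $30k/mo SaaS bill) somewhere else.
print(f"capitalized: ${capitalized_cost:,.0f}, amortized: ${annual_amortization:,.0f}/yr")
```

Either way, the caveat above still applies: the number on the books only means something if a new owner would actually value the tool at that.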
What? My team wouldn't have any downtime even if we had 10x the number of people.
If you work at a company where you have times where you don't have work to do, you should polish your resume because it means the company will go under.
I think most software companies need to be doing less. Deleting code, refining, and making their product genuinely useful as opposed to "able to technically contort to client needs".
Using an open-source, self-hosted solution should be the industry standard, the encouraged default position. Our industry does not gain overall from using DataDog, only from truly open source solutions under AGPL licenses that let everyone move forward together, share lessons together, and contribute together toward a common goal of better observability.
Why are we acting like it's hard to set up? This isn't the 1990s, it's 2026. Tooling has gotten quite good over the last decade.
Also, corporations stupidly spend money all the time, and they overspend too. I recently left a company that was paying Salesforce $10 million a year in licenses when only 8 people in the entire 3,000-person company were using it. I doubt that was the only instance across our industry, either. There is a massive amount of waste and graft in enterprise sales.
I honestly doubt that if you swapped Grafana in for 10,000 DataDog customers, they would notice the difference.
Because the current generation of “full stack” engineers is great at spinning up React apps but struggles with infrastructure and systems management. It’s really not any more complicated than that.
On a typical 8-person engineering team, maybe 1 or 2 people will know how to deploy anything to the cloud, if you’re lucky.
The expertise just isn’t there at most companies.