> Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.
There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
I read the above as "avoid development that increases complexity needlessly" — and often, there is a desire to overcomplicate something that can be much simpler because the understanding is lacking.
"As much as they can" does not mean trying not to do any work, but trying to simplify the work to the point where it just barely achieves the desired outcomes. This frequently means doing the improvement today.
This is what I was thinking - I'd say the biggest step up a developer can make is to recognize that sometimes you need a bit of one approach, sometimes a bit of another one.
Sometimes minimalism is the way, and you need to ask whether the pain, workload, or missing capabilities and features are actually problematic. Or sometimes adding the smallest possible thing is a good way, as long as we don't paint ourselves into a corner, and it lets us learn and accumulate information about what we actually need.
Sometimes buying a thing is a good way, if you can find a good vendor and a tool fitting your use case, and especially if the effort of doing it on your own is high. This commonly occurs in security, because keeping up to date with the ongoing vulnerability and threat landscape can be a full-time job on its own.
And sometimes adding something bigger is the way, if the effort of maintaining it is less than the effort and pain incurred by not having it. Or if we can ramp up the investment incrementally, while reaping benefits along the way. This can often be validated by doing a small thing first.
What AI will do, in my opinion, is push the bar further in this direction. Cozily hacking together CRUD code in a web server most likely won't be enough in a year or two for the average development job.
Now combing through analytics to determine whether or not what we did was actually good? Less warm and cozy.
Is the improvement likely to reduce maintenance overhead (and thus cost)? Or improve performance, allowing for fewer services running (and thus reducing cost)? Or reduce bugs that force people out of a workflow (e.g. in an online shop, where fixing them increases sales)?
Or, if it's just tech debt, then use Jira (etc.) to your advantage and talk about the number of tickets you can close off this sprint thanks to this engineering initiative.
If the development team's and product team's goals are largely aligned, then the problem with engineering initiatives is just how you explain them to the product team.
Preventing unnecessary changes can help you earn the political capital in your org to push through the changes that really need to happen.
I bet there's money to be made in building a drop-in replacement for either of those two that requires less memory. It would save companies a bundle, and make other companies a bundle as well.
That's because it's much, MUCH faster to do it that way. Though if you can accept certain latency trade-offs in exchange for throughput, something like turbopuffer can do wonders for your costs.
> why would you not want to index?
Because if you don't need an index it wastes RAM, as you've learned. Maintaining indices also has a cost. Index only what you need.
In the sense of the blog post: A senior with decent DB experience would have told you. ;)
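To make the trade-off concrete, here's a minimal sketch using SQLite (table and column names are made up for illustration): an index speeds up reads on the indexed column, but every index also costs storage/RAM and extra work on each write, so index only the columns you actually query by.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, note TEXT)")
conn.executemany(
    "INSERT INTO orders (customer, note) VALUES (?, ?)",
    [(f"cust{i % 100}", "x") for i in range(10_000)],
)

# Without an index, the lookup scans the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust42'"
).fetchall()
print(plan[0][3])  # e.g. "SCAN orders"

# Index the column we actually filter on -- and nothing else.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust42'"
).fetchall()
print(plan[0][3])  # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer=?)"
```

The same reasoning applies to MongoDB or any other DB: each extra index is a data structure that must be kept in memory and updated on every write, so "index everything" is as much a mistake as "index nothing".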
In all fairness, this was my first job as a developer a few years ago. I deep-dove into MongoDB, but I was also one of the only devs using it at this place.
My previous experience with MongoDB had been in college and more limited.
I am not experienced with MongoDB, so I don't know whether the problems reported in previous comments were the user's fault or MongoDB's. But one thing is clear to me: complaining that it uses too much RAM without knowing the reasons for it is a user problem. A common mistake is to set up a DB and expect it to just magically work. DBs are complicated beasts; you have to know how to deal with them.
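For instance (illustrative, from MongoDB's own configuration options): the WiredTiger storage engine defaults to using roughly half of system RAM for its internal cache, which is one common reason "MongoDB eats all my RAM". That cache can be capped in `mongod.conf`:

```yaml
# mongod.conf fragment -- cap WiredTiger's internal cache
# (default is roughly 50% of RAM minus 1 GB)
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
```

Whether capping it is a good idea depends on your workload; the point is that the RAM usage is a documented, tunable behavior, not a mystery.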
I think these are realistic expectations for most apps. Obviously the likes of Netflix and Uber get orders of magnitude more, but 99.9% of apps aren't a Netflix or an Uber, and you don't have to optimize for scaling until your app is on a trajectory to become one. Putting your database on an SSD already lets you handle several thousand concurrent users with ease.
Of course, everything depends on the use case and constraints. I'm highlighting the extremes here; the initial confusion was about why DBs require so much RAM. Traditional DBs are optimized around RAM; that's where they perform best. You can push against that, but then you won't get the best they can offer in terms of latency, predictability, and stability.
At some point they added the per-field docValues configuration option to do the transformation during indexing and store the result on disk instead, so none of it has to live in the heap. Instead, you're supposed to rely on the OS disk cache, which handles eviction automatically. That way you can run with significantly less memory, yet still get performance improvements by adding memory, without having to change any further configuration.
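For reference, this is roughly what that looks like in a Solr schema; the field name and type here are made up for illustration:

```xml
<!-- Hypothetical schema.xml fragment: docValues="true" stores the
     column-oriented data for sorting/faceting on disk, where the OS
     page cache manages it, instead of building it in the JVM heap. -->
<field name="price" type="pfloat" indexed="true" stored="false" docValues="true"/>
```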
This doesn't mean we don't develop new products and services; it just means that when we do, we find the path of least overall entropy. The same applies to operations and tech-debt reduction.
Premature optimization is the root of all evil.
The qualities were highlighted because they can all lead to better stability.
Innovation can reduce pain though, if the current pain is strong enough. A stable stream of failures in production can be the kind of "stability" you want to disrupt.
Complete stability is death.
> Yes, yes, of course this is simplistic.
It's an example, taken to the extreme, to clearly communicate the idea. As with all things, the golden mean applies, which is what I understand the article to be arguing:
> the design of the 'Scale' version is influenced by what worked and what doesn’t work in the 'Speed' version of the system.