Similar to the "code should be self-documenting - ergo: we don't write any comments, ever" attitude.
1) Most bugs are integration bugs: multiple systems are glued together, but there's something about the API contract that the developers of each system understand differently.
2) Most performance issues are architectural. Unnecessary round trips, doing work synchronously, fetching too much data.
Debuggers and profilers don’t really help with those problems.
I personally know how to use those tools and I do for personal projects. It just doesn’t come up in my enterprise job.
He stopped me and said he was just looking to see if I knew what an INT 3 was. He said few engineers he interviewed had any idea.
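(For anyone who hasn't met it: INT 3 is the one-byte x86 breakpoint instruction, opcode 0xCC, that debuggers patch over an instruction to stop execution there. A minimal sketch of triggering one yourself, assuming GCC or Clang on x86; outside a debugger it just raises SIGTRAP and kills the process:)

```c
#include <stdio.h>

int main(void) {
    puts("about to hit a breakpoint");
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
    /* Emit the INT 3 instruction directly; a debugger attached to the
       process stops here, otherwise the process dies with SIGTRAP. */
    __asm__ volatile("int3");
#endif
    puts("only reached if a debugger swallowed the trap");
    return 0;
}
```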
(Then, shortly afterward I also tried to find a new job, realized the entire industry had changed, and was fortunate enough to decide it wasn't worth the trouble.)
That's likely thanks to C, which goes to great pains not to specify the size of the basic types. For example, on 64-bit architectures, "long" is 32 bits on Windows and 64 bits everywhere else.
The net result of that is I never use C "long", instead using "int" and "long long".
This mess is why D has 32-bit ints and 64-bit longs, whether it's running on a 32-bit machine or a 64-bit one. The result is that we haven't had porting problems with integer sizes.
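A quick way to see the mess for yourself; the exact numbers depend on the platform's data model (LP64 on Linux/macOS, LLP64 on 64-bit Windows):

```c
#include <stdio.h>

int main(void) {
    /* On LP64 (Linux, macOS): 4 / 8 / 8.  On LLP64 (Win64): 4 / 4 / 8. */
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));
    return 0;
}
```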
I've met very few folks who understand the overheads involved, and how extreme the benefits can be from avoiding those.
Like the sort of insane stuff I've seen on the dotnet repo, where people try to tear apart the entire type system just because they think they've cracked some secret performance code.
You mean the .net compiler/runtime itself? I haven't looked at it, but isn't that the one place you'd expect to see weirdly low-level C# code?
And you have a frame with an operand stack whose slots can each hold at least a 32-bit value; a `double` just fills two adjacent slots.
And references are just pointers (possibly not using the whole value as an address; some bits may serve as flags for e.g. the GC) to objects whose internal structure is an implementation detail, but which usually consists of a header followed by the fields (which can themselves be reference types).
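For readers who haven't poked at a VM, here's a hypothetical C sketch of the shape being described; the field layout and flag bits are made up, and real runtimes (HotSpot, the CLR) differ in the details:

```c
#include <stdint.h>

/* Hypothetical object layout in a managed runtime: a header word,
   then the fields. */
typedef struct {
    uintptr_t header;        /* type info plus GC/lock flag bits        */
} ObjectHeader;

typedef struct {
    ObjectHeader hdr;
    int32_t   an_int_field;  /* value-typed fields are stored inline    */
    uintptr_t a_ref_field;   /* reference fields are just pointers,     */
                             /* possibly with low bits reused as flags  */
} ExampleObject;
```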
Pretty standard stuff; heap allocation is common in C as well.
And unlike C, it will run the exact same way on every platform.
If you ask a typical grad the size of a bool, they will inevitably say one bit. But CPUs, RAM, etc. don't work like that: they typically expect WORD-sized chunks of memory, meaning that a one-bit boolean ends up occupying a WORD-sized chunk, assuming it hasn't been packed.
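Easy to check in C; the exact numbers are ABI-dependent, but on mainstream compilers a bare bool is a byte (and may pad out further in structs), and packing is something you have to do yourself:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct loose {
    bool flags[32];     /* one byte per flag on mainstream ABIs: 32 bytes */
};

struct packed {
    uint32_t flags;     /* one bit per flag: 4 bytes                      */
};

int main(void) {
    printf("sizeof(bool)          = %zu\n", sizeof(bool));          /* usually 1, not 1/8 */
    printf("sizeof(struct loose)  = %zu\n", sizeof(struct loose));  /* usually 32 */
    printf("sizeof(struct packed) = %zu\n", sizeof(struct packed)); /* 4 */
    return 0;
}
```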
To be fair, though, I come up short on a lot of things comp sci graduates know.
It's why Andrei Alexandrescu and I made a good team. I was the engineer, and he the scientist. The yin and the yang, so to speak.
And yet even more of a fun time with porting pointer code was going from the various x86 memory models[0] to 32-bit. Depending on the program, the pain was either near, far, or huge... :-D
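For the youngsters: 16-bit DOS compilers (Borland, Microsoft) had non-standard keywords for the different pointer flavors. None of this is valid modern C, and it only meant anything to those compilers, but roughly:

```c
char near *np;  /* 16-bit offset within the current segment       */
char far  *fp;  /* 32-bit segment:offset pair, no normalization   */
char huge *hp;  /* like far, but normalized so pointer arithmetic */
                /* works across 64 KB segment boundaries          */
```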
The integer representation wasn't always two's complement in the early days of computing, so you couldn't even assume that. C++ only required integer representations to be two's complement as of C++20, since the last architectures that didn't work this way had effectively been dead for decades.
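The canonical example is the identity below, which holds on two's-complement machines but not on ones'-complement or sign-magnitude ones; that's exactly the sort of thing portable code couldn't lean on until the representation was pinned down:

```c
#include <stdio.h>

int main(void) {
    int x = 5;
    /* Two's-complement identity: -x == ~x + 1.  On ones'-complement
       or sign-magnitude hardware this does not hold. */
    printf("-x = %d, ~x + 1 = %d\n", -x, ~x + 1);
    return 0;
}
```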
In that context, an 'int' was supposed to be the native integer word size of a given architecture. A long time ago, 'int' was an abstraction over the dozen different bit-widths used in real hardware, and in that sense it was an aid to portability.
I suggested to him that he'd have a hard time finding any existing C code that ran correctly on it. After all, how are you going to write a byte to memory if you've only got 32 bit operations?
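(To make the point concrete: on such a machine the only way to store a byte is a read-modify-write of the whole containing word, which is slow and breaks the usual assumption that adjacent char stores are independent. A hypothetical sketch, assuming little-endian byte numbering:)

```c
#include <stdint.h>

/* Store one byte on a machine that can only load/store aligned
   32-bit words. */
void store_byte(uint32_t *mem, uint32_t byte_addr, uint8_t value) {
    uint32_t word  = mem[byte_addr >> 2];   /* fetch the containing word */
    uint32_t shift = (byte_addr & 3) * 8;   /* byte position within it   */
    word &= ~(UINT32_C(0xFF) << shift);     /* clear the old byte        */
    word |= (uint32_t)value << shift;       /* splice in the new one     */
    mem[byte_addr >> 2] = word;
}
```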
Anyhow, after 20 years of programming C, I took what I learned and applied it to D. The integral types have specified sizes and are 2's complement.
One might ask: what about 16-bit machines? Instead of trying to define how this would work in official D, I suggested a variant of D where the language rules were adapted to 16 bits. This is not objectively worse than what C does, it works fine, and the advantage is that there is no false pretense of portability.
If the number of bits isn't actually included right in the type name, then be very sure you know what you're doing.
The senior engineer answer to "How many bits are there in an int?" is "No, stop, put that down before you put your eye out!" Which, to be fair, is the senior engineer answer to a lot of things.
On the other hand, the right answer is 16 or 32. It's not the correct answer, strictly speaking, but it is the right one.
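The practical C answer these days, if the width actually matters, is to put the bit count right in the type name via C99's <stdint.h>:

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int32_t  a = INT32_MIN;   /* exactly 32 bits, guaranteed */
    uint64_t b = UINT64_MAX;  /* exactly 64 bits, guaranteed */
    printf("a = %" PRId32 ", b = %" PRIu64 "\n", a, b);
    return 0;
}
```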
I haven't used a debugger much at work for years because it's all Docker (I know it's possible, but there are lots of hoops to jump through; plus my current job has everything in AWS, i.e. no local dev).
It should be to the greatest extent possible. Strive to write literate code before writing a comment. Comments should be how and why, not what.
> - ergo: We don't write any comments, ever"
Indeed this does not logically follow. Writing fluent, idiomatic code with real names for symbols and obvious control flow beats writing brain teasers riddled with comments that are necessary because of the difficulty in parsing a 15-line statement with triply-nested closures and single-letter variable names. There's a wide middle ground where comments are leveraged, not made out of necessity.
My counterpoint: Code can be self-documenting, reality isn't. You can have a perfectly clear method that does something nobody will ever understand unless you have plenty of documentation about why that specific thing needs to be done, and why it can't be simpler. Like having special-casing for DST in Arizona, which no other state seems to need:
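Something like this (a made-up sketch; the function and the details are only illustrative):

```c
#include <stdbool.h>
#include <string.h>

bool observes_dst(const char *us_state_code) {
    /* WHY this special case exists: Arizona has not observed DST
       since 1968 (though the Navajo Nation within it does), so
       applying the standard US DST rules here yields times that are
       an hour off for half the year.  The code is trivial; the
       reason isn't. */
    if (strcmp(us_state_code, "AZ") == 0)
        return false;
    return true;  /* simplified: Hawaii etc. omitted */
}
```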
I know it may be hard for me to understand the need to write in English what is obvious (to me) in code. I also know I have read a stupid amount of code.
My rule is simple: if a comment repeats verbatim the name of the variable or function it sits on, it has to go. Anything else we can talk about.
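Concretely (made-up names), the first comment goes, the second stays:

```c
/* The user count. */       /* <- repeats the name: delete it */
int user_count;

/* Includes soft-deleted accounts until the end of the billing
   cycle, so it can exceed the "active users" dashboard figure. */
int billable_users;         /* <- says what the name can't: keep */
```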
Even 'grug brained' isn't about not thinking, it's about keeping capacity in reserve for when the shit hits the fan. Proper Grug Brain is fully compatible with Kernighan's Law.
I'm still salty about that time a colleague suggested adding a 500 KB general-purpose JS library to a webapp that was already taking 12 seconds on initial load, in order to fix a tiny corner case, when we could have written our own micro-utility in 20 lines. I had to spend so much time advocating to management for my choice to spend time writing that utility myself, because of that kind of garbage opinion that is way too acceptable in our industry today. The insufferable bastard kept saying I had to do measurements to make sure I wasn't prematurely optimizing. Adding 500 KB of JS when you need 1 KB of it is obviously a horrible idea, especially when you're already way over the performance budget. Asshat. I'm still salty he got so much airtime for that shitty opinion and that I had to spend so much energy defending myself.
OR, perhaps it's the case that different contexts call for different levels of effort. Running a spike can be an important way to promote new ideas across an org and show how things can be done differently. It can be a political tool with positive impact, because there's a lot more to a business than simply writing good code. However, if your org is horrible then it can backfire in the way that was described. Maybe the business is too aggressive and tramples on dev, maybe dev doesn't have a spine, maybe nobody spoke up about what a fucking disaster it was going to be, maybe they did and nobody listened. Those are all organisational issues, akin to an exploitable code base but embedded into the org instead of the code.
These issues are not the direct fault of the spike; it's the fault of the org, just like the idiot who took your poorly formatted comment and put it on the front page of Vogue.
I mean, I could take a toddler's tricycle and try to ride it onto the motorway. Can we blame the toy company for that? It has wheels, it goes forward, it's basically a car, right? In the same way, a spike is basically something we can ship right now.
"You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is."
What's more, in my personal experience I've seen speed hacks cause incorrect behavior on more than one occasion.
Parent is talking about building software that is inherently non-performant due to its abstractions or architecture, under the wrong assumption that it can be optimized later if needed.
The analogy is trying to convert a garbage truck into a race car. A race car is built as a race car. You don't start building a garbage truck and then optimize it on the race course. There are obvious principles and understanding that first go into the building of a race car, assuming one is needed, and the optimization happens from that basis in testing on and off the track.
I’m being a bit provocative here, just to make two points:
a) Software development back in the day, especially when it comes to service, reach, security, etc., was completely different from today. Black Friday, millions of users, SLAs, 24-hour service... these didn’t exist back then.
b) Because of so many conditions, some mentioned in point (a), optimization stops being "premature" once the code is live in production. End.
Which is pretty close to just saying "don't do anything unless you have a good reason for doing it."
Yeah, like NOT indexing any fields in a database; that'll become a problem very quickly. ;)