Did someone ask about Intel processor history? :-) The Intel 8080 (1974) didn't use microcode, and neither did many later processors. For instance, the 8085 (1976) didn't, and Intel's microcontrollers, such as the 8051 (1980), didn't either. The RISC i860 (1989) didn't use microcode (I assume). The completely unrelated i960 (1988) didn't use microcode in the base version, but the floating-point version used microcode for the math, and the bonkers MX version used microcode to implement objects, capabilities, and garbage collection. The RISC StrongARM (1997) presumably didn't use microcode either.

As far as x86 goes, the 8086 (1978) through the Pentium (1993) used microcode. The Pentium Pro (1995) introduced an out-of-order, speculative architecture with micro-ops instead of microcode. Micro-ops are kind of like microcode, but different: with microcode, the CPU executes an instruction by sequentially running a microcode routine made up of strange micro-instructions. With micro-ops, an instruction is broken up into "RISC-like" micro-ops, which are tossed into the out-of-order engine; the engine runs the micro-ops in whatever order it wants and sorts things out at the end so you get the right answer. Thus, micro-ops provide a whole new layer of abstraction, since you don't know what the processor is actually doing.
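
To make the micro-op idea concrete, here's a rough sketch in C (my own illustration, not taken from any manual; the instruction named and the load/add/store split in the comments are assumptions about a typical out-of-order core, and the real decomposition varies by microarchitecture):

    #include <stdio.h>

    /* A read-modify-write on memory. A compiler might emit a single x86
       instruction for the increment (something like  add [rdi], esi  ),
       but an out-of-order core would typically crack that one instruction
       into separate micro-ops, roughly:
         1. load   tmp <- [rdi]        read the memory operand
         2. add    tmp <- tmp + esi    the actual ALU work
         3. store  [rdi] <- tmp        write the result back
       The exact split is microarchitecture-specific; the point is that
       these micro-ops are what the scheduler actually juggles, possibly
       interleaved with micro-ops from neighboring instructions. */
    static void bump(int *counter, int delta) {
        *counter += delta;
    }

    int main(void) {
        int hits = 0;
        bump(&hits, 3);
        printf("%d\n", hits); /* prints 3 */
        return 0;
    }

So even the "one instruction" you see in a disassembly is itself an abstraction over what the core executes.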

My personal view is that if you're running C code on a non-superscalar processor, the abstractions are fairly transparent; the CPU is doing what you tell it to. But once you get to C++ or a processor with speculative execution, you lose sight of what's really going on under the abstractions.

reply
You can learn JavaScript and code for life. You can’t learn React and code for life.

Yeah, JavaScript is an illusion (to be exact, a concept). But it’s the one that we accept as fundamental. People need fundamentals to rely upon.

reply
> You can’t learn React and code for life.

Sure you can; why not? Even if it's deprecated in 20 years, you can still run it, use it, even fork it and build on it, because at the end of the day it's still JS, which by your own earlier statement you can code for life with.

reply
deleted
reply
A good abstraction relieves you of concern for the particulars it abstracts away. A bad abstraction hides the particulars until the worst possible moment, at which point everything spills out in a messy heap and you have to confront all the details. Bad abstractions existed long before React and long before LLMs.
reply
deleted
reply
Are you seriously saying that you can't understand the concept of different abstractions having different levels of usefulness? That's the law of averages taken to cosmic proportions.

If that were true, why have more than one abstraction at all?

reply
I just think everyone who says they don't like magic should be forced to give an extemporaneous explanation of paging.
reply
Are you seriously saying you can’t understand the parallel being drawn here?

If you “don’t like magic”, you can’t use a compiler.

reply
Is a compiler magic? Did it come down from some electronic heaven? There are plenty of books, papers, and courses that explain how compilers work. When people talk about "magic", they usually mean choosing a complex solution over a simple one, wrapped in an ill-fitting abstraction, and then using words like "user-friendly" and "easy to install with curl|bash" to lure us into adopting it.
reply