Yes, but that need not scale linearly with the number of bits. https://en.wikipedia.org/wiki/Carry-lookahead_adder:
“A carry-lookahead adder (CLA) or fast adder is a type of electronics adder used in digital logic. A carry-lookahead adder […] can be contrasted with the simpler, but usually slower, ripple-carry adder (RCA), for which the carry bit is calculated alongside the sum bit, and each stage must wait until the previous carry bit has been calculated to begin calculating its own sum bit and carry bit. The carry-lookahead adder calculates one or more carry bits before the sum, which reduces the wait time to calculate the result of the larger-value bits of the adder.
[…]
Already in the mid-1800s, Charles Babbage recognized the performance penalty imposed by the ripple-carry used in his difference engine, and subsequently designed mechanisms for anticipating carriage for his never-built analytical engine.[1][2] Konrad Zuse is thought to have implemented the first carry-lookahead adder in his 1930s binary mechanical computer, the Zuse Z1.”
I think most, if not all, current ALUs implement such adders.
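To make the lookahead idea concrete, here is a 4-bit slice modeled in C (my own sketch, not any particular chip's circuit). The key property: every carry is a flat two-level AND/OR expression over the per-bit generate/propagate signals, so no carry waits for a previous one the way a ripple-carry adder's does.

    #include <stdio.h>

    /* 4-bit carry-lookahead: c1..c4 each come straight from g, p and cin. */
    unsigned cla4(unsigned a, unsigned b, unsigned cin, unsigned *cout)
    {
        unsigned g = a & b;   /* generate:  bit i creates a carry itself   */
        unsigned p = a ^ b;   /* propagate: bit i passes an incoming carry */
        unsigned g0 = g & 1, g1 = (g >> 1) & 1, g2 = (g >> 2) & 1, g3 = (g >> 3) & 1;
        unsigned p0 = p & 1, p1 = (p >> 1) & 1, p2 = (p >> 2) & 1, p3 = (p >> 3) & 1;
        unsigned c1 = g0 | (p0 & cin);
        unsigned c2 = g1 | (p1 & g0) | (p1 & p0 & cin);
        unsigned c3 = g2 | (p2 & g1) | (p2 & p1 & g0) | (p2 & p1 & p0 & cin);
        *cout       = g3 | (p3 & g2) | (p3 & p2 & g1) | (p3 & p2 & p1 & g0)
                         | (p3 & p2 & p1 & p0 & cin);
        unsigned carries = (cin & 1) | (c1 << 1) | (c2 << 2) | (c3 << 3);
        return (p ^ carries) & 0xF;   /* sum bit i = p_i XOR carry into bit i */
    }

    int main(void)
    {
        unsigned cout, s = cla4(0x9, 0x7, 0, &cout);
        printf("%u %X\n", cout, s);   /* 9 + 7 = 0x10, so prints "1 0" */
        return 0;
    }

In hardware those wide OR terms cost gates and fan-in, which is why real designs do lookahead over 4-bit groups and then apply the same trick between groups; that's the "need not scale linearly" part.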
It could also be a result of most people working in assembly being aware of the properties of logic gates, carrying the understanding that under the hood XOR might somehow be better.
I think this might just be due to not realizing just how far back in CPU history this goes.
The former just seems way more practical.
The 8080/Z80 is probably where XOR A got its lead over SUB A, though the two take the same number of cycles there as well.
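For reference, the usual timings there (Z80 mnemonics, from the standard timing tables and worth double-checking; the 8080's XRA A / SUB A / MVI A,0 line up the same way):

    XOR A      ; 1 byte, 4 T-states
    SUB A      ; 1 byte, 4 T-states
    LD  A,0    ; 2 bytes, 7 T-states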
On the 68000, for .b and .w sizes CLR, EOR, and SUB are all identical; for .l, MOVEQ #0 is the winner.
A tangent, but what is Obvious depends on what you know.
Often experts don't explain the things they think are Obvious, but those things are only Obvious to them, because they are the expert.
We should all be kind, and explain the Obvious things too, for those who do not know them.
E.g. on the Z80 and the 6502, both have the same cycle count.
A real-world CPU example is the Cray-1, where S-Register Scalar Operations (64-bit) take 3 cycles for ADD/SUB but still only 1 cycle for XOR. [1]
[1] https://ed-thelen.org/comp-hist/CRAY-1-HardRefMan/CRAY-1-HRM...
EOR and SBC still have the same cycle counts though.
Not scalar, but still sub vs xor. Though you’d use vmov immediate for zeroing anyway.
According to Claude, on the IBM 360 SUB A,A was actually the preferred way, since that's what IBM programmers were familiar with (earlier IBM mainframes didn't support XOR); it's more readable, and XOR A,A offered no advantage.
Edit: Looked at the comments; it seems like x86 and the major 8-bit CPUs had the same speed. Pondering if this might be a remnant from the 4-bit ALU times.
I think CPUs of that era used a single 1-bit circuit capable of doing add, sub, xor, etc. They'd have 8 of them in a row, with the carry signal propagating through them. I think this page explains the situation on the 6502: https://c74project.com/card-b-alu-cu/
And this one for the ARM 1: https://daveshacks.blogspot.com/2015/12/inside-alu-of-armv1-...
But I'm a software engineer speculating about how hardware works. You might want to ask a hardware engineer instead.
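Still, to make the bit-slice idea concrete, here's a toy model in C of what those pages describe (my own sketch, not the actual 6502 circuit): one set of gates per bit handles add, sub, and xor, selected by a control input, and only the carry travels from slice to slice.

    #include <stdio.h>
    #include <stdint.h>

    enum op { OP_ADD, OP_SUB, OP_XOR };

    /* One 1-bit ALU slice: the same gates serve every operation, and the
       control input picks which result is used. Only ADD/SUB consume and
       produce a carry, which is what ripples from slice to slice. */
    static unsigned slice(enum op op, unsigned a, unsigned b,
                          unsigned cin, unsigned *cout)
    {
        if (op == OP_SUB) b ^= 1;         /* subtract = add inverted b ...    */
        unsigned half = a ^ b;            /* shared XOR gate                  */
        if (op == OP_XOR) { *cout = 0; return half; }
        *cout = (a & b) | (half & cin);   /* carry out of this bit            */
        return half ^ cin;                /* sum bit                          */
    }

    static uint8_t alu8(enum op op, uint8_t a, uint8_t b)
    {
        unsigned carry = (op == OP_SUB);  /* ... plus 1, for two's complement */
        uint8_t r = 0;
        for (int i = 0; i < 8; i++)       /* 8 slices in a row; this loop is
                                             the carry ripple                 */
            r |= slice(op, (a >> i) & 1, (b >> i) & 1, carry, &carry) << i;
        return r;
    }

    int main(void)
    {
        printf("%02X %02X %02X\n",
               alu8(OP_ADD, 0x35, 0x0B),   /* 40 */
               alu8(OP_SUB, 0x35, 0x0B),   /* 2A */
               alu8(OP_XOR, 0x35, 0x0B));  /* 3E */
        return 0;
    }

Note how the XOR path never touches the carry chain, so in principle it could finish early; but since the same row of slices handles everything, the cycle time is set by the slowest (rippled) operation, which is the point made elsewhere in the thread.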
In any ALU the speed is determined by the slowest operation, so XOR is never faster. The width of the ALU does not matter; all that matters is that an ALU performs many kinds of operations, including XOR and subtraction, with the operation selected by some control bits.
I have explained in another comment that the only CPUs where XOR can be faster than subtraction are the so-called superpipelined CPUs. Superpipelined CPUs have been made only after 1990, and there were very few of them. Even if in a superpipelined CPU it is possible for XOR to be faster than subtraction, it is very unlikely that this feature was implemented in any of the few superpipelined CPU models ever made, because it would not have been worthwhile.
For general-purpose computers, there have never been "4-bit ALU times".
The first monolithic general-purpose processor was the Intel 8008 (i.e. a monolithic version of the Datapoint 2200's CPU), with an 8-bit ISA.
Intel claims that the Intel 4004 was the first "microprocessor" (in order to push its claimed priority one year earlier), but that was not a processor for a general-purpose computer; it was a calculator IC. Its only relevance for the history of personal computers is that the Intel team which designed the 4004 gained a lot of experience with it and established a logic design methodology with PMOS transistors, which they then used for designing the Intel 8008 processor.
Intel 4004, its successors and similar 4-bit processors introduced later by Rockwell, TI and others, were suitable only for calculators or for industrial controllers, never for general-purpose computers.
The first computers with monolithic processors, a.k.a. microcomputers, used 8-bit processors, and then 16-bit processors, and so on.
For cost reduction, it is possible for an 8-bit ISA to use a 4-bit ALU or even just a serial 1-bit ALU, but this is transparent to the programmer, and for general-purpose computers there never were 4-bit instruction sets.
> It encodes to the same number of bytes, executes in the same number of cycles.
> The predominance of these idioms as a way to zero out a register led Intel to add special xor r, r-detection and sub r, r-detection in the instruction decoding front-end and rename the destination to an internal zero register, bypassing the execution of the instruction entirely.
When you do XOR together with many other operations in an ALU (arithmetic-logical unit), the speed is determined by the slowest operation, so the speed of any faster operation does not matter.
This means that in almost all CPUs XOR and addition and subtraction have the same speed, despite the fact that XOR could be done faster.
In a modern pipelined CPU, the clock frequency is normally chosen so that a 64-bit addition can be done in 1 clock cycle, when including all the overheads caused by registers, multiplexers and other circuitry outside the ALU stages.
Operations more complex than 64-bit addition/subtraction have a latency greater than 1 clock cycle, even if one such operation can be initiated every clock cycle in one of the execution pipelines.
The operations less complex than 64-bit addition/subtraction, like XOR, are still executed in 1 clock cycle, so they do not have any speed advantage.
There have existed so-called superpipelined CPUs, where the clock frequency is increased, so that even addition/subtraction has a latency of 2 or more clock cycles.
Only in superpipelined CPUs would it be possible to have an XOR instruction that is faster than subtraction, but I do not know whether this has ever been implemented in a real superpipelined CPU, because it would complicate the execution pipeline for a negligible performance improvement.
Initially superpipelining was promoted by DEC as a supposedly better alternative to the superscalar processors promoted by IBM. Later, however, superpipelining was abandoned, because the superscalar approach provides better energy efficiency at the same performance. (I.e. even if for a few years it was thought that a Speed Demon beats a Brainiac, eventually it was proven that a Brainiac beats a Speed Demon, as shown by the Apple CPUs.)
While mainstream CPUs do not use superpipelining, there have been some relatively recent IBM POWER CPUs that were superpipelined, but for a different reason than originally proposed. Those POWER CPUs were intended to perform well only in multi-threaded workloads using SMT, not in single-thread applications, so by running simultaneous threads on the same ALU, the multi-cycle latency of addition/subtraction was masked. This technique gave IBM a simpler implementation of a CPU intended to run at 5 GHz or more, degrading only single-thread performance without affecting SMT performance. Because a faster XOR would not have provided any advantage under SMT, I assume that in those POWER CPUs XOR was not made faster than subtraction, even though it would theoretically have been possible.
I mean, not for zeroing because we know from the TFA that it's special-cased anyway. But maybe if you test on different registers?
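Something like this would test it (a rough sketch for GCC/Clang on x86-64, assuming rdtsc is steady enough for a comparison; both loop bodies are dependency chains through eax, with a distinct source register so the zero-idiom shortcut can't kick in):

    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>

    #define N 100000000ULL

    int main(void)
    {
        uint64_t t0 = __rdtsc();
        /* each sub reads the eax written by the previous one: latency chain */
        asm volatile("mov %0, %%rcx\n"
                     "1: sub %%edx, %%eax\n"
                     "   dec %%rcx\n"
                     "   jnz 1b\n"
                     : : "i"(N) : "rax", "rcx", "rdx", "cc");
        uint64_t t1 = __rdtsc();
        asm volatile("mov %0, %%rcx\n"
                     "1: xor %%edx, %%eax\n"
                     "   dec %%rcx\n"
                     "   jnz 1b\n"
                     : : "i"(N) : "rax", "rcx", "rdx", "cc");
        uint64_t t2 = __rdtsc();
        printf("sub: %.2f cycles/op, xor: %.2f cycles/op\n",
               (double)(t1 - t0) / N, (double)(t2 - t1) / N);
        return 0;
    }

On anything remotely recent I'd expect both to come out at about 1 cycle per iteration, which is rather the thread's point.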
> The predominance of these idioms as a way to zero out a register led Intel to add special xor r, r-detection and sub r, r-detection in the instruction decoding front-end and rename the destination to an internal zero register, bypassing the execution of the instruction entirely. You can imagine that the instruction, in some sense, “takes zero cycles to execute”.

That would be quite late then: 1997 and the Pentium II for the general population.
It used to be not only faster but also smaller. And back then this mattered.
Say you had a computer running at 33 MHz: you had 33 million cycles per second to do your stuff. A 60 Hz game? 33 million / 60, and suddenly you only have about 500,000 cycles per frame. 200 scanlines? Suddenly you're left with only about 2,500 cycles per scanline to do your stuff. And 2,500 cycles really isn't that much.
So every cycle counted back then. We'd use the official docs to see how many cycles each instruction took, and we'd then verify in code that this was correct. And memory mattered too.
XOR was both faster and smaller (fewer bytes) than a MOV ..., 0.
Full stop.
And when those CPUs first got caches, the caches were really tiny: holding a ridiculously low number of CPU instructions. We could actually measure the size of the cache manually (for example by filling it with NOP instructions, then modifying them to, say, add one, and checking which result we got at the end).
XOR, being smaller, also allowed more instructions to fit in the cache.
Now people may lament that it persisted long after our x86 CPUs stopped being real x86 CPUs under the hood, but that is another topic.
But there's a reason XOR was used and people should deal with it.
We zero with XOR EAX,EAX and that's it.
<op> (1-byte opcode), <Registers> (1-byte ModR/M), <immediate value> (4 bytes for a 32-bit register like eax)
While xor eax, eax only uses 2 bytes: since there are only 8 registers, meaning they can be encoded with 3 bits, you can pack two register numbers into the <Registers> field (ModR/M).
Making mov eax, 0 take only two bytes would require significant changes to the ISA to allow immediate values in the ModR/M byte (or similar), but there would be little benefit, since zeroing can already be done in 2 bytes and I doubt the other cases are anywhere near frequent enough for this to be a net win overall. An actual improvement would be a dedicated 1-byte set-rax-to-0 instruction, but that obviously comes at a tradeoff where some other operation has to be encoded differently (probably with more bytes), and you can't zero anything else with it.
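For concreteness, the actual byte encodings (easy to check with any assembler):

    31 C0             xor eax, eax    ; 2 bytes
    29 C0             sub eax, eax    ; 2 bytes
    B8 00 00 00 00    mov eax, 0      ; 5 bytes: the B8+r form folds the
                                      ; register into the opcode, but the
                                      ; imm32 still costs 4 bytes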
Of course, many of the RISC processors also have fixed-length instructions, with small literal values encoded as part of the instruction, so "mov reg, #0" and "mov reg, zero" would both be the same length.
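For example (sizes per the respective ISA manuals):

    mov  w0, #0       // AArch64: 4 bytes, like every other instruction
    addi a0, x0, 0    // RISC-V "mv a0, zero": 4 bytes, or 2 as c.li a0, 0
                      // with the compressed extension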
One-byte INC/DEC was dropped with x86-64, and PUSH/POP are almost obsolete in APX due to its addition of PUSH2/POP2, leaving only the least useful of the five in the most recent incarnation of the instruction set.
The only minor difference between the two on x86, really, is that SUB sets OF and CF according to the result while XOR always clears them.
(But this does not discount the fact that basically all CPUs treat them both as one cycle)
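You can watch that flag difference from C with a couple of inline-asm scraps (GCC/Clang syntax; SETC just copies CF into a byte register):

    #include <stdio.h>

    int main(void)
    {
        unsigned char cf_sub, cf_xor;
        /* 1 - 2 borrows, so SUB sets CF = 1 ... */
        asm("movl $1, %%eax\n\t"
            "subl $2, %%eax\n\t"
            "setc %0"
            : "=r"(cf_sub) : : "eax", "cc");
        /* ... while XOR clears CF regardless of its operands */
        asm("movl $1, %%eax\n\t"
            "xorl $2, %%eax\n\t"
            "setc %0"
            : "=r"(cf_xor) : : "eax", "cc");
        printf("CF after sub: %d, after xor: %d\n", cf_sub, cf_xor);
        return 0;   /* prints: CF after sub: 1, after xor: 0 */
    }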