Define off-tune? 12-TET? Just intonation? Bohlen-Pierce (13 equal steps per tritave)?
The "in tune" notes are as much a function of culture as physics.
Huh? Pitch ratios are not a social construct; they're just arithmetic.
But in the case of sound, I would have expected the skew to be less of a problem. Also surprised the prof instantly knew; it took me a while to figure out. How did you fix it? Cool story!
Proof is left as an exercise to the student. ;-)
That is not really true. You usually have a couple of clock sources on an MCU, but the clock gets propagated down the clock tree from the source, and most of the time the PWM has the same source clock as the CPU. If you tap off before the PLL the clock is more accurate in the sense that you get less jitter, but the overall drift is the same. You can have distinct clock sources, but that requires specific hardware and a specific configuration.
This worked well in 1980s microcomputers, which used an accurate crystal-oscillator clock. ICs like the MOS 6502 or Intel 8086 don't have built-in clocking. The boards were large and costly enough to afford a crystal, and it was often dual-purposed: in Apple II machines, the master oscillator from which the NTSC colorburst clock was derived also supplied the CPU clock.
These processors had no caches, so instructions executed with predictable timing. Every data access or instruction fetch was a real cycle on the bus, taking the same time every time.
Code that arranged not to be interrupted could generate precise signals.
Some microcomputers, lacking a UART chip, used software loops to drive serial lines. You could do that well enough to communicate at up to around 1200 baud.
This sounds like they were most likely bit banging square waves into a speaker directly via a GPIO on a microcontroller (or maybe using a PWM output if they were fancy about it). In that case, the audio frequency will be derived directly from the microcontroller's clock speed, and the tolerance of an internal oscillator on a microcontroller can be as bad as 10%.