I had an interesting conversation with a co-worker today: how one learns programming.

If you go to school for programming today you’ll likely learn a scripting language like Ruby or Python and one of the high-level languages like Java or C#.

This gets you off to a pretty good start, admittedly, but a lot of what goes on behind the scenes is lost. If nothing else you lose a proper respect for real pointers and memory management. These days of nearly infinite memory and elastic computing can make you lazy. “Not fast enough? Spin up another instance.”

Even with a lower-level language like C or C++ you miss out on a lot of what makes a computer compute. But at least you get a good gut feel for pointers and indirection.

But things go far deeper than that.

Only when you get to the OS level do you start to pull back the curtain on things like the translation lookaside buffer (TLB) and how memory actually works. All the things that make a process function in a modern system.

Below that there’s more machinery. What about the cache and how to use it efficiently? Inter-processor coherency? Further down are things like speculative execution and how that affects register allocation and all that fun stuff.

I started before all of that.

The Commodore 64, the first computer I really programmed, had a 6510 processor, essentially a 6502 with an added I/O port. The 6502, built by a different company from ex-Motorola engineers, had only around 3,500 transistors. It was the low-cost answer to the Motorola 6800, which itself started off life with only about 4,000 transistors.

Back then you could understand the entire processor. As computing grew — and as I grew — blocks could be bolted on and each piece could be understood as a unit before moving onto the next.

Now, on the other hand, you’re handed a Core i7, and the manual for just the software-visible parts runs 3,796 pages.

Think about that for a second. The manual for a modern processor has close to a page for every transistor on the processor I cut my teeth on.