I had an interesting conversation with my brother yesterday. He had acquired a couple of Arduino kits (I think from Fry’s, but that doesn’t matter) and started to play around with some of the example projects.
He’s been programming “normal” computers for the better part of two decades now, most of it in Java and PL/SQL. (Need a programmer just north of LA? Hit me up!) He had the realization that the microcontroller only does one thing at a time.
When programming in more modern environments, you take the idea of multi-tasking and multi-threading for granted. You can set up event listeners, and things get triggered asynchronously, and all is right with the world.
The problem is that modern computers have elevated our concept of lazy to new heights. It wasn’t until around ten years ago, give or take, that consumer CPUs started to ship with multiple cores. But even before then you had the same general toolset that you have today. Threads, listeners, asynchronous calls — nothing new.
But then you take a step back to microcontrollers.
Instead of something running at 1GHz, you might have something that runs at a whopping 16MHz, like the Arduino, or even slower if you’re starved for power. All of the things you’ve taken, literally, for granted are now brought to the fore. You need to start looking at clocks and how long things take; you can’t simply throw more CPU at the problem, because a faster CPU could blow past the power envelope you need to work within.
Ahh… good times.