I was working on the firmware for the USB charger. I started off with the standard Arduino sketch and hacked away with little bits of code just to make things work.
I could see the commands go over the bus, and things were working.
But the code was growing in ways that would be bad down the road. It would be hard to maintain, and on top of that it would waste the limited resources I have on the microcontroller.
Then I started going back in time, all the way to 1984, when I was coding on the Commodore 64, a machine even less powerful than this microcontroller. As I was coding, it really felt like I was working on that old system. It felt downright decadent to be writing C++ instead of the MOS 6510's assembler.
The little tricks I was using, like modifying the configs in place, are so out of place in modern systems, where you program as though you have infinite processor and memory.
All this got me thinking. Even on modern computers, the resources seem infinite, but they still aren't.
What I was coding for the microcontroller wouldn’t pass a modern code review. What would pass modern code reviews wouldn’t run properly on the microcontroller.
But why is that so? Some of it is maintainability and reducing the coupling between subsystems. But much of it seems to be inertia about how things should be done. Sometimes you have to ask why... and ask what you can do that's fast, efficient, and maintainable all at once.