Yesterday I posted a picture of some daffodils. Here’s a crop of an enlargement of the tree:

If you look at the twigs, you see a bunch of colorful crap. This isn’t a flaw in either the lens or the sensor. It’s simply that the lens is way sharper than the sensor can resolve.

The way a sensor works in a typical digital camera is that there’s a grid of light-sensitive pixels that respond to all visible light, and in front of it there’s a matching grid of color filters, so that each individual pixel receives only red, green, or blue light. This arrangement is called a Bayer filter.

Image CC-licensed by Wikipedia user en:User:Cburnett
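To make the mosaic concrete, here’s a small Python sketch (using numpy) of sampling a full-color image through an RGGB Bayer pattern. The function name and the RGGB layout are my illustration, not anything specific to a particular camera:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an HxWx3 RGB image through an RGGB Bayer pattern,
    keeping only one color channel per pixel (illustrative helper)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites: even row, even col
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites: even row, odd col
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites: odd row, even col
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites: odd row, odd col
    return mosaic
```

Note that half the sites are green, matching the 2:1:1 green/red/blue ratio of the real filter.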

After you take a picture, the camera’s processing (demosaicing) takes the single-color samples from the filtered pixels and interpolates them into a true-color final picture. This works almost all of the time. (Incidentally, the reason there are more green pixels is that the human eye is more sensitive to green light than to any other color. Most of the luminance information comes from green.)

The problem is when something covers only one pixel. You get that with fine detail, such as a distant twig, that the lens resolves down to a single photosite. When that happens you get artifacting like the crop above. If a dark twig covers only a red photosite, the demosaicing sees a dark red sample but bright green and blue samples from the neighbors, so the pixel comes out blue-green; land on a blue site instead and the tint flips the other way.
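To see concretely how a one-pixel detail becomes a colored speck, here’s a toy demosaicer in Python. The 3×3 averaging scheme and the RGGB layout are my simplifying assumptions for illustration, not a real camera’s algorithm:

```python
import numpy as np

def site_color(i, j):
    """Channel measured at photosite (i, j) in an RGGB layout:
    0 = red, 1 = green, 2 = blue."""
    if i % 2 == 0 and j % 2 == 0:
        return 0
    if i % 2 == 1 and j % 2 == 1:
        return 2
    return 1

def demosaic_pixel(mosaic, r, c):
    """Naively reconstruct one RGB pixel: keep the channel actually
    measured at (r, c) and fill in the other two by averaging the
    sites of each color in the surrounding 3x3 window. A toy sketch,
    not a production pipeline."""
    h, w = mosaic.shape
    sums = np.zeros(3)
    counts = np.zeros(3)
    for i in range(max(0, r - 1), min(h, r + 2)):
        for j in range(max(0, c - 1), min(w, c + 2)):
            ch = site_color(i, j)
            sums[ch] += mosaic[i, j]
            counts[ch] += 1
    rgb = sums / counts
    rgb[site_color(r, c)] = mosaic[r, c]  # trust the measured sample
    return rgb

# Bright, neutral "sky" everywhere, except one dark twig sample that
# happens to land on a red photosite at (2, 2).
sky = np.full((5, 5), 0.9)
sky[2, 2] = 0.1
print(demosaic_pixel(sky, 2, 2))  # [0.1, 0.9, 0.9]: a cyan speck, not gray
```

The reconstructed pixel is dark in red but bright in green and blue, so instead of a gray twig you get a blue-green fleck, which is exactly the colorful fringing visible in the crop.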

But I suppose that’s a good problem to have.

Thankfully the post-processing software I use, Lightroom, has a local-adjustment feature for cleaning up this kind of color aliasing, so I can brush the correction onto just the bits of a photo where the lens is too sharp for the sensor.