B&W From the Fuji S5 Sensor
I’ve always had a soft spot for the underdog. It’s been that way as long as I can remember. Back in 1966, when I was 8, I thought – no, I knew – Muhammad Ali was both a great fighter and an even greater human being, and my holier-than-thou aunts and uncles and teachers and friends, who hated him because he wouldn’t go kill Vietnamese, were all full of shit. His explanation for why he wouldn’t be inducted into the US Army cut through all the patriotic war-mongering nonsense and made perfect sense to me – no Vietnamese had ever called him nigger, which was more than he could say about his fellow Americans who insisted it was his duty to go kill them. An unassailably elegant and irrefutable answer. I remember lying under the covers with my AM radio, listening to Ali fight Ernie Terrell – the guy who refused to call him by his chosen name, calling him “Cassius Clay” instead – strangely satisfied when Ali gave Terrell the savage beating he deserved, asking him “What’s my name?” each time he hit him. As a grade school kid in ’68, while the proto-fascists I went to church school with brainlessly wore the Nixon buttons their parents pinned on them, I was all in for Eugene McCarthy and then Hubert Humphrey. I even preferred the Stones to the Beatles, thinking “Paint it Black” was the greatest thing I’d ever heard.
Ten years later, I was buying an M5 as my first Leica. Enough said. I’ve carried that contrarian attitude into the digital age. I’m enamored of oddball cameras – specifically, the Ricoh GXR with its swappable sensors and M-mount, and the Fuji S5 Pro with its 12 MP “extended dynamic range” SuperCCD SR sensor, which to my mind produces the nicest digital B&W files I’ve ever seen (this includes files from the Leica MM). While I’ve bought and sold any number of high-resolution full-frame D800s and the like, the Ricoh and the S5 Pro are the cameras I most often grab, even today. They’re cheap as dirt too. While the consumerist herd chases marginal technological gains at maximum cost (just what camera makers and their online shills tell them they should be doing), you can feast on their throwaways at minimum cost.
Lately, I’ve gone full-bore contrarian, having developed a thing for Sigma’s Foveon cameras; I now own the DP2, the DP1 Merrill and the sd Quattro. The Foveons are the oddest of odd-duck digital technology – slow, clunky, limited to daylight capture – but they produce remarkable files when done right, the sd Quattro and DP1 Merrill turning out stunningly detailed color files easily better than what I was getting from my D800E. The Merrill, with its dedicated, tack-sharp 28mm Sigma lens, I bought for $350. That’s crazy.
*************
Sidewalk Mosaic, Omaha, Nebraska. Sigma DP1 Merrill
In 1907, the Lumière brothers of France introduced the first commercial color photography process, called Autochrome. The Autochrome process used a filter of potato-starch grains dyed red, blue and green (the primary colors of light). This starch filter was spread over a glass plate, and the colors were recorded horizontally, i.e. all at the same surface level. In later years, color film evolved toward a method in which three layers of photosensitive material were stacked vertically, and processes using a horizontal orientation, like Autochrome, were forgotten. It’s this vertically stacked RGB capture that gives the classic color film look.
Ironically, with the advent of digital photography, vertically stacked RGB capture disappeared and horizontally oriented, Autochrome-like color capture again became standard (absent the potato starch). Apart from Sigma’s Foveon sensors, digital cameras use monochrome “Bayer” sensors that capture red, green and blue light intensities horizontally, i.e. all on the same sensor level. Because these sensors do not capture color data but only luminosity values, a color filter array with a mosaic of the three primary colors – red, green and blue (RGB) – is mounted on top so that color can be represented. But each light-sensing photodiode (a “pixel”) sits under a single-color filter, which means that each pixel can only record one color, the data for the other two colors being discarded. A color “interpolation” process known as demosaicing is then performed on the image, estimating the colors each pixel never measured.
Digital sensors are monochromatic – they measure luminosity, not color. Pixel sites on a digital sensor are light meters: they measure the brightness of the light. That’s it; they don’t register color. What produces the color is the filter placed over each pixel and the interpolation algorithm that guesses the missing values by analyzing neighboring pixels and adding them back in.
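To make that concrete, here’s a rough sketch in Python of the principle (a toy, not any real camera’s pipeline): simulate an RGGB Bayer mosaic that keeps only one color per photosite, then guess each pixel’s two missing colors by simple bilinear averaging of the neighbors that actually measured them. Real demosaicing algorithms are far more sophisticated, but the basic move – inventing two-thirds of the color data from adjacent pixels – is the same.

    # Toy sketch of Bayer capture and bilinear demosaicing (illustrative only).
    # Assumes an RGGB pattern; real in-camera pipelines are far more elaborate.
    import numpy as np

    def mosaic_rggb(rgb):
        """Simulate a Bayer sensor: keep only one color value per photosite."""
        h, w, _ = rgb.shape
        raw = np.zeros((h, w), dtype=float)
        raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red sites
        raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green sites on red rows
        raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green sites on blue rows
        raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue sites
        return raw

    def box3(a):
        """3x3 neighborhood sum with edge padding (a tiny stand-in for a convolution)."""
        p = np.pad(a, 1, mode='edge')
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    def demosaic_bilinear(raw):
        """Guess each pixel's two missing colors from the neighbors that measured them."""
        h, w = raw.shape
        measured = np.zeros((h, w, 3))          # which color each site actually saw
        measured[0::2, 0::2, 0] = 1
        measured[0::2, 1::2, 1] = 1
        measured[1::2, 0::2, 1] = 1
        measured[1::2, 1::2, 2] = 1
        out = np.zeros((h, w, 3))
        for c in range(3):
            known = raw * measured[..., c]       # zero out sites that didn't measure this color
            out[..., c] = box3(known) / np.maximum(box3(measured[..., c]), 1e-9)
        return out                               # two of every three values here are guesses

Run a detailed test image through mosaic_rggb and then demosaic_bilinear and you can watch where the softness and false-color artifacts described above creep in.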
Having been continuously improved over an extended period, this image-processing method is now good enough for most folks. But because colors are interpolated from neighboring pixels, the subtle color nuances of the original subject are too often lost. Color filter arrays also generate color artifacts – colors not found in the original subject – during demosaicing. This happens when the subject contains fine detail (high-frequency areas) too small for the color filter mosaic (generally a Bayer filter) to sample properly, and the demosaicing process misreads that detail as color. Conventional digital cameras using a Bayer color filter therefore also have an optical low pass filter, interposed between the lens and the sensor, to suppress color artifacts. Its job is to blur away, immediately before the light reaches the sensor, the finest detail the lens resolves – the detail above a certain spatial frequency that is most likely to generate color artifacts. It effectively suppresses those artifacts.
The bottom line of all these workarounds to produce color – color filter arrays, optical low pass filters to suppress the artifacts the arrays cause, interpolation algorithms – is that each of them, separately and together, diminishes the fine detail the monochrome sensor actually records. In other words, we pay a price to transform a natively monochrome sensor into a color sensor, and that price is lost resolution and compromised color fidelity.
*************
DP1 Merrill
Sigma’s Foveon sensor takes a different approach. Rather than a single layer of pixels, the Foveon has three layers, which take advantage of the fact that different colors of light possess different wavelengths. Pixels on the top layer of the Foveon sensor can see every color of visible light. The second layer sees only the green and red parts of the spectrum, since the thickness of the top layer serves to filter out the short-wavelength blue light. The third layer sees only the red part of the spectrum, since the thickness of the top two layers filters out the mid-wavelength green light, allowing only the long-wavelength red light to reach the bottom. An algorithm then examines the three readings and, by analyzing the relative proportion of luminosity reported by each layer, determines the actual color at each pixel location.
As such, the Foveon sensor doesn’t add or discard color. Different wavelengths of light (i.e. different colors) penetrate the sensor to different depths, achieving full-color capture at a single pixel site. No color filter array is required. Like modern color film, it captures all the colors vertically. Because it needs neither color interpolation nor a low-pass filter, the Foveon produces natively sharp images. It’s why Foveon images have a truly nuanced, sharp feel and are visibly superior to Bayer images of the same resolution.
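A toy illustration of that difference, under invented numbers (my own sketch of the idea, not Sigma’s actual processing): treat each photosite as three stacked luminosity readings, each a different mixture of red, green and blue, and recover the color with a single 3x3 “unmixing” matrix. No neighboring pixels are consulted. The layer responses below are made up for the example; the real ones are proprietary and messier.

    # Toy sketch of per-pixel color recovery from three stacked layer readings.
    # The response matrix is invented for illustration, not Sigma's real data.
    import numpy as np

    # Hypothetical sensitivity of each silicon layer (rows: top, middle, bottom)
    # to red, green and blue light (columns).
    LAYER_RESPONSE = np.array([
        [0.2, 0.3, 0.9],   # top layer: sees everything, weighted toward blue
        [0.3, 0.8, 0.1],   # middle layer: mostly green, some red
        [0.9, 0.2, 0.0],   # bottom layer: essentially red only
    ])

    UNMIX = np.linalg.inv(LAYER_RESPONSE)   # 3x3 matrix that undoes the mixing

    def foveon_to_rgb(layers):
        """layers: (H, W, 3) array of top/middle/bottom luminosities per photosite.
        Returns an (H, W, 3) RGB image: full color at every site, no interpolation."""
        return layers @ UNMIX.T

The point of the sketch is structural: color comes from solving a tiny per-pixel equation, not from borrowing data from the pixels next door.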
The benefits of Foveon tech: 1) Increased color purity — The camera is able to determine the color of light on every stacked photosite, rather than approximating each color based on the relative luminosities of several neighboring photosites; 2) Increased resolution — Each pixel in the final image contains accurate luminance information, as measured by the sensor’s photosites (in contrast to a color filter array, which must interpolate a luminance value for each pixel); and, 3) Less noise (at low ISO settings) — Because each photosite doesn’t have a colored filter in front of it (nor, possibly, an anti-aliasing filter to alleviate the moiré patterns inherent in the demosaicing process), the top sensor layer requires far less signal amplification than a Bayer-type sensor, meaning less noise.
I can attest to one thing. The Foveon sensor produces very subtle color files, photos that look a lot like traditional film color, without the artificial saturated effect often produced by Bayer sensors.
*************
Omaha. DP1 Merrill
However, I’m a B&W photographer, and my interest in camera technology is typically limited to B&W capture. For a B&W photographer, Bayer filters and interpolation algorithms are useless. A sensor can record an unadulterated monochromatic version of the scene before it without interpolation or filtration. And yet the Bayer sensor holds us hostage to its colorization scheme, which we then convert back to greyscale as unneeded. Unfortunately, all the image degradation remains. Not so with the Foveon. Shouldn’t there also be some advantage when shooting monochrome?
Somewhere Over the Midwest. Sigma DP2
So I’ve embarked on a new learning curve – figuring out how to maximize the Foveon sensor for B&W. My sense is that it has the potential to produce a unique B&W look, super-sharp and detailed as opposed to the luscious creaminess of the S5 Pro’s B&W output. Digital Panatomic-X. Now the goal is to figure out how. The RGB layers open up intriguing possibilities and make all sorts of things common in the film age theoretically possible again (the trade-off, unfortunately, being ISO limited to the 100-400 range, which to me is a small price to pay for what you get in return – in the DPM series, 6×9 format quality in a pocket camera). Back to glass filters maybe?
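As a first, hedged stab at what “glass filters in software” might look like: because every photosite in a Foveon file carries full RGB, the traditional contrast filters – yellow, red, green – can be approximated by weighting the channels before collapsing to greyscale, much as the glass versions weighted the light before it hit the film. The weights below are illustrative guesses, not calibrated to any real filter, and the names are mine.

    # Rough sketch of filter-style B&W conversion from an RGB file.
    # Channel weights are illustrative only, not calibrated values.
    import numpy as np

    FILTER_WEIGHTS = {
        "none":   (0.30, 0.59, 0.11),   # plain luminance-style mix
        "yellow": (0.45, 0.45, 0.10),   # mild sky darkening
        "red":    (0.85, 0.10, 0.05),   # dramatic skies, roughly a #25 red
        "green":  (0.20, 0.70, 0.10),   # lighter foliage, darker lips and skin
    }

    def to_bw(rgb, filter_name="none"):
        """rgb: (H, W, 3) float array in [0, 1]; returns an (H, W) greyscale image."""
        r, g, b = FILTER_WEIGHTS[filter_name]
        grey = rgb[..., 0] * r + rgb[..., 1] * g + rgb[..., 2] * b
        return np.clip(grey, 0.0, 1.0)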
Omaha Zoo. DP1 Merrill