I was trying to put it in simple terms: the firmware performs a set of adjustments on the RAW data when one shoots JPEG. That serves to explain the difference in appearance for the average hobbyist. Only a few people here know what a Bayer matrix interpolation or a quantization space is, but I thank you for the technical details.
On May 22, 2005, at 12:39 AM, Godfrey DiGiorgi wrote:

On May 21, 2005, at 8:20 PM, Paul Stenquist wrote:

If he were shooting jpegs, the camera probably
would have compensated with more brightness and a bit more exposure.

The camera compensate? You mean the photographer?

No, I mean the camera. When you shoot jpegs, the camera does some processing of the image. That's why you have to set sharpness, saturation, etc. When you shoot RAW, the camera leaves the data alone. That's what I mean when I say the camera would have compensated if he had been shooting jpegs. In other words, it would have responded differently to that meter reading.

But since he was shooting RAW, the meter cut things off at the point where the highlights
wouldn't be clipped.

Do you really mean meter?

Yes, or the camera's firmware that makes decisions based on the meter. I'm guessing now, but I think it probably runs a different exposure program for RAW than it does for jpegs.

Paul, forgive me for saying this, but I feel what you've written is confusing and not necessarily correct.

RAW format files from the camera contain the un-rendered sensor data captured at exposure time, along with enclosures of RGB-rendered JPEG thumbnail and preview images, as well as the camera metadata ... what the camera's user settings were with regard to quality, size, contrast, saturation, etc. at the time of the exposure. RAW data, on any current DSLR, has a 12-bit per pixel quantization space; in other words, RAW data is the full sensor array of pixels, each of which has a 12-bit tonal range, one quarter of which are captured through red filters, one quarter through blue filters, and one half through green filters.
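To make those proportions concrete, here is a small sketch (not any real camera's sensor, just an illustrative RGGB mosaic) showing the filter layout and the 12-bit value range:

```python
import numpy as np

# Hypothetical 4x4 patch of an RGGB Bayer mosaic: each photosite records
# one 12-bit value (0-4095) through a single color filter. Half the sites
# are green-filtered, a quarter red, a quarter blue.
pattern = np.array([["R", "G"],
                    ["G", "B"]])
mosaic = np.tile(pattern, (2, 2))  # repeat the 2x2 cell into a 4x4 patch

greens = np.count_nonzero(mosaic == "G")
print(greens / mosaic.size)  # 0.5 -> half the photosites are green
print(2 ** 12 - 1)           # 4095 -> maximum 12-bit tonal value
```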

The camera's metering programs do not change whether you have the camera set to store exposures as JPEG or RAW format. Exposures saved as JPEG format have been rendered into RGB with a Bayer matrix interpolation and gamma correction ... In doing so, the 12 bits of tonal information in each pixel have been integrated with the interpolated chrominance values and then interpolated again into an 8-bit-per-channel rendering. The constants used in this integration process are fixed by the camera's rendering algorithm and the user settings for colorspace, contrast, saturation, sharpness and white balance.
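A minimal sketch of the last step described above, the gamma-corrected reduction from 12 bits to 8 bits per channel. This is not any camera's actual pipeline; the 1/2.2 gamma is an assumed, sRGB-style constant standing in for the fixed rendering constants mentioned:

```python
# Assumed display-style gamma; real cameras use their own fixed curves.
GAMMA = 1.0 / 2.2

def render_8bit(raw12: int) -> int:
    """Render one linear 12-bit sensor value to an 8-bit output value."""
    linear = raw12 / 4095.0        # normalize 12-bit value to 0..1
    corrected = linear ** GAMMA    # gamma-encode the linear value
    return round(corrected * 255)  # quantize to the 8-bit JPEG range

print(render_8bit(0))     # 0   -> black stays black
print(render_8bit(4095))  # 255 -> full scale maps to full scale
print(render_8bit(409))   # a ~10% linear value is lifted well above 10% of 255
```

Note that sixteen distinct 12-bit input levels collapse onto each 8-bit output level on average, which is where tonal information is discarded.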

So, what happens is that this RGB rendering might produce a sparkly crisp JPEG image, because people like sparkly crisp images: it has been clipped and fitted into an 8-bit-per-channel space according to parameters that the manufacturer felt would be pleasing to the majority of its customers. The RAW data it came from is the same regardless: given a scene's dynamics and the exposure values set, it matches what a RAW format file expressing the same capture would contain.
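The clipping mentioned above can be sketched like this. The fixed gain is a hypothetical stand-in for a manufacturer's "pleasing" tone curve, not any real camera's value:

```python
def boost_contrast(value8: int, gain: float = 1.2) -> int:
    """Apply a hypothetical contrast gain to an 8-bit value, clipping at 255."""
    return min(255, round(value8 * gain))

print(boost_contrast(230))  # 255 -> clipped
print(boost_contrast(240))  # 255 -> clipped: two distinct highlights merge
print(boost_contrast(150))  # 180 -> a midtone is merely brightened
```

Two bright values that were distinct in the RAW data become identical pure white in the rendered JPEG, which is exactly the highlight behavior being discussed in this thread.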

What's different when you look at an image in a RAW processor is that a totally different implementation of the RGB rendering engine is being used AND the data is being presented as a 16-bit (from 12-bit) per channel image. It varies from the in-camera rendering according to how the two RGB rendering engines differ and the effect of having the additional bit depth available to render an image to the display.
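One common way (assumed here for illustration, not a claim about any particular RAW processor) to present 12-bit data in a 16-bit container is a simple left shift, which preserves every distinct tonal level rather than discarding them as the 8-bit rendering does:

```python
def to_16bit(raw12: int) -> int:
    """Scale a 12-bit value (0-4095) into a 16-bit range by bit-shifting."""
    return raw12 << 4  # 0..4095 -> 0..65520

print(to_16bit(4095))  # 65520 -> full 12-bit scale near the top of 16 bits
print(to_16bit(1))     # 16    -> adjacent 12-bit levels remain distinct
```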

Godfrey

