On Mon, May 23, 2005 at 10:46:11AM -0500, [EMAIL PROTECTED] wrote:
> I don't think that's exactly what's being said -- "accurate" exposure
> /does/ matter, it's just that accurate exposure can be defined as
> "capturing the entire range of the scene".
> 
> Suppose I've got a histogram with four segments and my exposure is
> entirely contained in the second segment (counting from the left).  If
> I'd have kept all other things equal but increased my exposure time by a
> couple of stops or so and ended up with the scene entirely contained in
> the third segment, I'd have taken the same picture, only with a longer
> shutter speed -- either of those exposures could be "converted" to the
> other just by dragging the exposure slider in Photoshop on import of the
> pictures.  One would likely be the "better" shot, though, due to having
> more or less motion blur, camera shake, whatever.  Of course, this
> doesn't take into account non-linear response from the sensor, etc. or
> that many scenes have a range that exceeds the dynamic range of the
> sensor.
> 
> That's my understanding, at least; someone please correct me if I'm
> wrong.

You're (sort of) right, as far as it goes.
A couple of observations:

If you increased the exposure to get the right-hand edge of your
histogram at the end of the third segment, rather than at the end
of the second segment, you wouldn't end up with a histogram that
was entirely in the third segment; it would go roughly from half-
way along the second segment to the end of the third segment.
(i.e. instead of going from 1.0 to 2.0, it would go from 1.5 to 3.0)
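In code, that multiplicative scaling looks like this (a minimal Python sketch, using the illustrative segment units from above — exposure scales every linear value by the same factor):

```python
# Histogram spans 1.0..2.0 (the second of four segments).  To push
# the right edge out to 3.0 we multiply every value by 3.0 / 2.0,
# so the left edge moves too: the data never sits wholly in segment 3.
low, high = 1.0, 2.0
factor = 3.0 / high          # 1.5, a bit over half a stop of extra exposure
print(low * factor, high * factor)   # 1.5 3.0 -- not 2.0 3.0
```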

Real histograms don't sit in a single segment like that, though - they
almost always extend all the way to the left edge (unless you're
shooting a scene with absolutely no black areas, which is extremely
uncommon).


One further correction: under any normal conditions, the sensors
don't suffer from non-linear response.  In fact the linearity of
their response is one of the problems; they respond directly to
the amount of light falling on them, rather than responding in
a fashion more like the logarithmic response of the eye.

This means that of the 4096 intensity levels that can be used by
a 12-bit sensor such as that in the *ist-D, 2048 are used in the
brightest part of the image (highlights, etc.).  1024 levels are
used for the next brightest range, then 512, 256, 128, 64, 32 ...
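The halving sequence is easy to check (a quick sketch; 4096 is just the 12-bit level count from above, and eight stops down is as far as the arithmetic needs to go here):

```python
# Linear 12-bit sensor: each stop down from saturation has half as
# many distinct raw values available as the stop above it.
total = 2 ** 12                                # 4096 levels
levels = [total // 2 ** s for s in range(1, 9)]
print(levels)   # [2048, 1024, 512, 256, 128, 64, 32, 16]
```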

If we consider a scene that has eight stops of dynamic range,
we'll be trying to map those levels into a logarithmically-
encoded representation.  To produce an eight-bit JPEG we'll
want about 256/8 = 32 levels of brightness for each stop.
That's fine for the brightest part of the scene, but as we
can see by the time we get down to the darkest part of the
image we're trying to do a non-linear mapping from 32 input
levels to 32 output levels.  This is going to introduce some
quantisation errors (visible as posterisation in the shadows).
If we don't start off with the full range of recorded values,
but instead under-expose by a stop (so the RAW values run from
zero to 2047) we'll only have 16 sensor values to map to those
32 output values, which increases the quantisation error.
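A rough sketch of that shadow quantisation, using a simple log encoding with roughly 32 output codes per stop as a stand-in for the real JPEG tone curve (the exact curve differs, but the conclusion doesn't):

```python
import math

# Log encoding over an 8-stop range: ~255/8 = 32 output codes per stop.
# This is an illustrative stand-in, not any camera's actual tone curve.
def encode(raw, white=4095, stops=8):
    return round(255 * (math.log2(raw / white) + stops) / stops)

# A dark scene stop, exposed fully: raw values 32..63 (32 levels).
full = {encode(v) for v in range(32, 64)}

# The same stop shot one stop under (raw 16..31), then "pushed" back
# a stop in conversion by doubling: only 16 distinct raw values remain,
# so at most 16 of the ~32 output codes for that stop can be reached.
pushed = {encode(2 * v) for v in range(16, 32)}

print(len(full), len(pushed))   # the pushed shot fills fewer codes
```

The gaps between the codes the pushed shot can reach are what show up as posterisation in the shadows.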

That's why it's important to expose properly in-camera, and to
get the histogram to stretch as far to the right as possible
without totally blowing out the highlights; if you don't do
that, you're going to end up with (more) noise in the shadows.
