I don't really think so.
First of all, shifting can be done in Photoshop quite easily.
Secondly, the sensor image isn't visible until the frame has already been
exposed.
This means you'd be guessing how much shifting is needed. Even if you had a
"shiftable" viewfinder, the image would be too small to preadjust correctly
anyway.
I guess a shift lens plus Photoshop later is the most affordable way to go
right now.
Who would pay 300-500 USD more for the body if it had a shift feature
that only a few people would actually use?
Regards
Jens Bladt
Arkitekt MAA
http://hjem.get2net.dk/bladt


-----Original Message-----
From: Glen [mailto:[EMAIL PROTECTED]
Sent: 22 September 2005 09:18
To: [email protected]
Subject: Sensors That Shift?


Since the current sensors are smaller than 24 x 36 mm, could a camera be
built in such a way that it could shift the sensor up and down, and from
side to side? This would serve a similar purpose to a shift lens. Of
course, you would have to use lenses made to cover the full 24 x 36 format.
The current DA lenses wouldn't work with a moveable sensor.

I think this might be a cool feature for some people, especially for those
who photograph architecture.
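
To get a feel for how much movement would even be available, here's a
rough sketch in Python. The sensor dimensions are an assumption (the
~23.5 x 15.7 mm size of the current 6 MP Pentax DSLRs); the shift range
is just the leftover margin inside the 24x36 mm frame.

    # How far an APS-C-sized sensor could shift inside a 24x36 mm frame.
    FRAME_W, FRAME_H = 36.0, 24.0     # full-frame coverage, mm
    SENSOR_W, SENSOR_H = 23.5, 15.7   # assumed APS-C sensor size, mm

    max_shift_x = (FRAME_W - SENSOR_W) / 2   # sideways shift each way
    max_shift_y = (FRAME_H - SENSOR_H) / 2   # rise/fall each way

    print(f"Horizontal shift: +/- {max_shift_x:.2f} mm")
    print(f"Vertical shift:   +/- {max_shift_y:.2f} mm")
    # -> about +/- 6.25 mm sideways and +/- 4.15 mm of rise/fall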

There is also one other potential use for a moveable sensor. When
photographing stationary objects and using a tripod, the sensor could be
used to take more than one image of the subject. Each image would be taken
with the sensor shifted a sub-pixel distance vertically and horizontally
between images. Let's say that instead of a single image, we capture 9
images, arranged in a grid pattern centered around what would have been the
normal single image. We then use this grid of 9 images to create a
higher-resolution image than a single image capture would have produced.
This should be a way to quadruple the effective number of pixels, while
using the same sensor. Unfortunately, it would only work for situations
where the camera and subject were kept stationary with respect to each
other. Still, I bet a lot of photographers would benefit from such a boost
in resolution under such circumstances. Also, you could possibly develop
some enhanced noise reduction techniques by analyzing the extra exposures
and tossing out any pixels which seemed abnormally bright.
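
The interleaving step itself would be simple. Here's a minimal sketch,
assuming four perfectly registered captures shifted by half a pixel.
(Strictly speaking, four such captures are already enough to quadruple
the pixel count; the extra captures in a nine-image grid would buy
redundant samples that the outlier-rejection idea could use, e.g. by
taking the median of them.) Registration error and the Bayer mosaic are
ignored here, and the function name is just illustrative.

    import numpy as np

    def interleave(c00, c01, c10, c11):
        """Weave four half-pixel-shifted captures into one 2x-denser grid.

        c00 is the unshifted capture; c01, c10 and c11 are assumed to be
        shifted half a pixel right, down, and down-and-right.
        """
        h, w = c00.shape
        out = np.empty((2 * h, 2 * w), dtype=c00.dtype)
        out[0::2, 0::2] = c00
        out[0::2, 1::2] = c01
        out[1::2, 0::2] = c10
        out[1::2, 1::2] = c11
        return out

    # Each capture has ~6.1 million pixels, so the woven image has ~24.4
    # million sample sites (though not quite 4x the optical resolution).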

Would this idea of shifting the sensor in sub-pixel amounts actually help
yield higher resolutions? My intuition tells me that it would give images
with higher effective resolution than the single images we currently have,
but perhaps not quite as nice as a sensor with a truly quadrupled pixel
count. However, it should be a lot cheaper to build than a sensor with a
quadrupled pixel count.  ;)

Of course, the sub-pixel shift was thought up as a way to boost effective
resolution, while maintaining full compatibility with DA lenses. If the
sub-pixel shift idea doesn't work, I have a second idea for increasing the
total resolution of the captured image. Once again, it only works for
stationary subjects. For each image, make 4 captures. Shift the position of
the sensor to the top-left corner of the 24x36mm frame for the 1st capture,
then to the top-right corner, then the bottom-right, and finally to the
bottom-left corner of the 24x36mm frame. This way, you have covered the
full 24x36mm frame in 4 tiles. Then, seamlessly stitch these tiles together
in software to create a single full-frame image with much more than our
normal 6.1 megapixels. Of course, you lose compatibility with DA lenses,
and would have to use lenses designed to cover the full 24x36mm frame. I'm
not sure how many extra megapixels you would gain from simply extending the
effective sensor coverage to 24x36 mm. Maybe someone on the list knows how
to calculate this?
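
One back-of-the-envelope way to calculate it: assuming the current
sensor is the ~23.5 x 15.7 mm, 6.1-megapixel part and the pixel pitch
stays the same, just scale the pixel count by the ratio of the areas:

    # Scale the current pixel count by the ratio of sensor areas,
    # assuming a 23.5 x 15.7 mm, 6.1 MP sensor and unchanged pixel pitch.
    current_mp = 6.1
    current_area = 23.5 * 15.7        # mm^2
    full_frame_area = 24.0 * 36.0     # mm^2

    print(f"{current_mp * full_frame_area / current_area:.1f} MP")
    # -> about 14.3 megapixels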


take care,
Glen

