Sunday, June 24, 2007

DSLRs and Photography

Photography has become a passion for a lot of people these days, with a good number of them going for DSLRs. With a DSLR comes the interest to shoot in manual mode. Manual shooting involves, as a basic step, adjusting the aperture, shutter speed and ISO settings in the camera to get a proper exposure. The camera has a pointer indicating the current exposure on a scale ranging from -2 through 0 to +2. If the needle points to -2, the photo is going to be underexposed; 0 means proper exposure and +2 overexposure. This again depends on the kind of metering selected for light evaluation. Let me put down some points on the usage of each of these features:
1. Aperture: The wider you open the aperture, the more blurred objects appear at depths away from the plane of focus. Closing the aperture makes objects over a wider range around this plane appear to be in focus. This is because closing the aperture makes the light cone narrower.
2. Shutter Speed: This tells the camera how long the sensor needs to be exposed to light. The higher the shutter speed, the shorter the exposure time. All other parameters being the same, the shutter speed should be lower under low light and higher when there is more light in the surroundings.
3. ISO: This tells the sensor how sensitive it has to be to light. The higher the value, the greater its sensitivity to light. So even under low light conditions the shutter speed can be kept high by selecting a higher ISO. But this comes at a price: increasing the ISO causes random electrical noise in the sensor, as it shifts away from its normal linear region of operation.
Most people feel that it is enough to keep the exposure marker at 0, be it by adjusting the aperture, shutter speed or ISO, if they are not too keen on controlling the depth of field. Shutter speed plays an important role while capturing moving objects: if you want them to appear relatively static, choose a higher shutter speed. But with respect to the exposure to light, shutter speed gives an almost linear relationship; I mean the amount of light collected by the sensor in x seconds will be almost half of the amount collected in 2x seconds (unless, of course, the sensor is saturated). I don't want to talk about the ISO parameter simply because I started writing this article to explain the consequences of aperture variation on a photograph, and so would like to stick to that. Will come up with some sample images in my next post to explain the effects of aperture variation.
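The aperture/shutter trade-off above can be sketched with the standard exposure-value formula, EV = log2(N^2 / t), where N is the f-number and t the shutter time in seconds (the function name and the sample f/8 settings are just my illustration):

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """Standard exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# Doubling the shutter time admits twice the light, i.e. one stop more
# exposure, so the EV drops by exactly 1 -- the near-linear relation
# between shutter time and collected light mentioned above.
ev_fast = exposure_value(8.0, 1 / 250)   # f/8 at 1/250 s
ev_slow = exposure_value(8.0, 1 / 125)   # f/8 at 1/125 s
print(round(ev_fast - ev_slow, 3))       # prints 1.0
```

The same one-stop step can instead come from the aperture: opening up from f/8 to f/5.6 halves N^2 and lowers the EV by about one as well.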

Wednesday, June 20, 2007

Photography and Travel: Bhimeshwari and Shivanasamudra

Had been to Bhimeshwari and Shivanasamudra last weekend. The fishing camp and rafting are actually conducted by JLR, and they don't allow you inside their fenced campus without prior permission, which we didn't have. On my way to Shivanasamudra from Bhimeshwari we found Cauvery water flowing close to the road, and so got down to take some snaps. I found this strange creature on one of the rocks and wondered why it had this strange look! When I went to photograph it, it escaped into one of the bushes, and then I realised the importance of its dressing as a disguise. Wow! How does nature correlate similar patterns in two totally different forms of life, one falling under the category of insects and the other being a plant? It's simply amazing!

Thursday, June 14, 2007

Photography, Programming and Algorithms: Merging Multishot Motion Images

I was busy working on a few algorithms these days, so could not post on a regular basis. So what was the algorithm about? Recently I was watching the movie The Matrix on TV, especially the scene where Neo dodges the bullets. It was excellent! In this particular scene, multiple images of him can be seen at a single instant of time. I started thinking whether this was possible with a regular camera using the multishot capability or long-exposure kind of stuff. I also searched for techniques that are currently available to create such an effect and came across one called the "stroboscopic technique", wherein flashes of light are used to illuminate a moving object over a dark background. But this cannot be used in our day-to-day life in the actual/real environment or surroundings. Long exposures cannot create this sharp effect either. So, at first I tried to capture my Matrix kind of motion using the multishot capability in DCs (3 fps on my Canon 350D) and merged the frames to get the above effect. It looked wonderful, but the drawback was that the camera was held static during the capture, and so the merge did not require any special software; I just used Matlab to get this done. But what I want is something far more flexible. Assume I go to watch a motion sport, something like a 20 m diving competition (in swimming). I would want to capture the motion of this person from the start to the end, till he drops off into the water, and merge the complete set of images into a single one. Why would I want this? Simply because it is a motion sport, and so, to get its complete effect, I need to capture a motion sequence of it, something like a video. But suppose I want a poster of his motion, or some of his important moves along his dive path, in a single frame; today there is nothing I can do! Single picture frames don't give me the complete story, so I am not interested in them. I can't get a poster from the video I have captured either.
Simple image averaging and differencing can create such effects if and only if the camera remained static during the capture. But unfortunately I don't want to put the extra burden of carrying a tripod on the person who wants to capture such a shot. This means that the software should be able to merge images with a little offset between them. Also, I can't expect the person to have very stable hands, which means the images would also have undergone a little bit of rotation. The software should take care of even this case :( Memory comes at a cost, and to create a single motion shot we would have captured tens of stills; tens of motion shots would require hundreds of stills, which will quickly fill up the memory. So I would want this to be embedded-compatible, in-camera software, which means it should make use of very few resources (both memory and time). I tried my best to come up with software that would closely match the above requirements. I do not know what other requirements you people might have. If it is something that impresses me and is practical, I will definitely try to incorporate it in the coming versions (this one is still alpha!). The software will be out shortly for you to test and create some jazzy stuff. Multishots of your own stunts, your favorite sports, etc. Let me see how creative you people can get!

Tuesday, June 5, 2007

Computer Vision (29): Motion Segmentation

Motion segmentation is another concept that comes out of motion detection. As a newborn kid, all you see around you is colors; colors that make no sense to you, and you don't even know what colors they are. One way our brain can start segmenting objects is through stereo correspondence. But again, the process of stereo correspondence can be mechanical or knowledge based. If it is mechanical, then we have to find out how it can be done (I will discuss this later); if it is knowledge based, our brain has to first learn how to correspond. So how does our brain start to segment objects? If you observe the point of vision of newborn kids, they seem to be looking at some far-off place, which is the relaxed state of our eyes. We need to interrupt the brain so that the visual system starts to concentrate on different things. This is the reason we get colorful toys that make interesting sounds and play with them in front of the kids. Bright colors capture the sight of these kids and draw their attention. But if these objects are held static, the interrupts stop, and so does the concentration. In order to keep up the interrupts and the concentration, you need to keep swaying the toy in front of them. This not only draws the kid's attention but also helps it catch up with the object through motion segmentation. You can now see that its eyes are actually pointing at the object you are playing with. After repeating this procedure quite a few times, you will see that the kid will fix its sight on the object even if it is held static. It has now started to update its knowledge! This knowledge helps it segment objects from the background as the days pass by, and finally it will start to grasp them. This is the onset of the perception of depth.
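The mechanical route to this kind of segmentation can be sketched with plain frame differencing: pixels whose intensity changes between two consecutive views belong to the moving object, and everything else is background. A minimal sketch (the toy frames, the threshold, and the helper names are my own illustration):

```python
def motion_mask(prev, curr, thresh=25):
    """Binary mask of pixels whose intensity changed between two frames
    (both given as equal-size 2-D lists of values)."""
    return [[abs(curr[y][x] - prev[y][x]) > thresh
             for x in range(len(prev[0]))] for y in range(len(prev))]

def bounding_box(mask):
    """Tight bounding box (top, left, bottom, right) of the changed
    region, or None if nothing moved."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    return (min(ys), min(xs), max(ys), max(xs)) if ys else None

# A bright toy swaying from column 1 to column 4 over a flat background.
prev = [[10] * 8 for _ in range(6)]
curr = [[10] * 8 for _ in range(6)]
prev[2][1] = prev[3][1] = 200
curr[2][4] = curr[3][4] = 200

print(bounding_box(motion_mask(prev, curr)))  # prints (2, 1, 3, 4)
```

Once the toy stops moving, the difference vanishes and the box disappears; keeping track of the object then requires the learned knowledge described above rather than raw motion.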