Saturday, July 21, 2007

Computer Vision (30): Why wasn't our face designed like this?

I have already touched upon the reasons for having two sensors and their advantages (1). We now know why we were born with two ears, two nostrils and two eyes but only one mouth. What I failed to discuss at that point in time is their placement. I will concentrate on hearing and vision (the placement of the ears and eyes), which are better developed in electronics than smell.
Our eyes, as any layman will be aware, are 2D sensors similar to the sensor in a camera (not design-wise, of course!). They capture a 2D projection of the 3D environment we live in. To perceive depth (the third dimension), our designers came up with the concept of two eyes, determining depth through triangulation (2). For triangulation to work, the eyes only needed to be at an offset; horizontal, vertical or crossed, anything would do. So the so-called creator of humans (God for some, Nature for others) probably decided to place them horizontally, to give a more symmetric look with respect to our body. But why weren't they placed at the sides of our head, in place of our ears? (Hmmm, then where would our ears sit?)
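To make the triangulation idea concrete, here is a minimal sketch in Python. The numbers are made-up illustration values, not anything from this post: for a rectified stereo pair, depth works out to Z = f * B / d, where f is the focal length, B the baseline between the two "eyes" and d the disparity, i.e. how far the same point shifts between the left and right images.

```python
# A minimal sketch of depth from two horizontally offset "eyes"
# (a rectified stereo pair). All values are assumptions for
# illustration: f in pixels, baseline in metres, disparity in pixels.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        # the point must be visible in both views; zero disparity
        # means it is infinitely far away (no depth information)
        raise ValueError("point must appear in both views with positive disparity")
    return f_px * baseline_m / disparity_px

# Example: a camera with f = 800 px and a 6.5 cm "inter-eye" baseline
# sees a point shifted 20 px between the left and right images.
print(depth_from_disparity(800.0, 0.065, 20.0))  # -> 2.6 (metres away)
```

Notice that depth only exists where the two views overlap, which is exactly the point the next paragraph makes about why the eyes face the same way.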
Light, as we know from high-school physics, does not bend around corners (a bulb in one room cannot light the room beside it), and from my earlier posts we know that to perceive depth our eyes need to capture some common region (the common region is where we perceive depth). So the eyes have to sit on the same plane, facing the same direction. That plane happened to be the plane of our frontal face, and so our eyes came to the front, where they are today.
Let’s move on to hearing now! When we hear a sound, our brain can determine its direction (listen to some stereo sound), but cannot pinpoint the exact location of the source (in terms of depth, or distance from us). That is because our ears, unlike our eyes, are one-dimensional sensors, able to detect only the intensity of sound at any point in time (of course they can separate out the frequencies, but that is in no way related to the direction of the sound). To derive the direction a sound came from, our creators/designers probably thought of reusing the concept they had come up with for our sight, and so gave us two ears, deriving the direction of a sound from the difference in the time at which it arrives at each ear. To derive the direction, our ears only needed to be at an offset; horizontal, vertical or crossed would all do, so to keep the symmetric look they probably placed them at a horizontal offset. But why were they placed at the sides of our face and not on top, like a rabbit's, a dog's or any other creature's?
Again from high-school physics we know that sound can bend around corners, pass through tiny gaps and spread out. So you can enjoy your music irrespective of where you are and where it is playing in your house (well, if you can put up with its varying intensity). So our ears never demanded to be on the same plane, facing the same direction! The common signals required to perceive direction would reach them regardless of placement, since sound can bend. Secondly, our ears are required to isolate direction over the full 360-degree space, unlike our eyes, which only image the frontal 180 degrees. Probably the best place to keep them was at the sides of our face, as the sketch below illustrates.
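Here is a minimal sketch of the timing idea, again in Python and with assumed values (an ear separation of roughly 0.18 m and a speed of sound of 343 m/s; neither figure is from the post). For a distant source, the extra path to the far ear is roughly d * sin(theta), so the bearing follows from the inter-ear delay.

```python
import math

# A minimal sketch of direction-of-arrival from the interaural time
# difference (ITD). Assumed constants, not from the post:
SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
EAR_SEPARATION = 0.18    # m, a rough human head width

def direction_from_itd(delay_s: float) -> float:
    """Bearing in degrees (0 = straight ahead) from the inter-ear delay.

    For a far-away source the extra path to the far ear is about
    d * sin(theta), so theta = arcsin(c * dt / d).
    """
    s = SPEED_OF_SOUND * delay_s / EAR_SEPARATION
    s = max(-1.0, min(1.0, s))  # clamp measurement noise at the extremes
    return math.degrees(math.asin(s))

# Example: sound reaches the right ear 0.3 ms before the left ear.
print(direction_from_itd(0.0003))  # ~34.9 degrees to the right
```

Note that a single pair of delays leaves a front/back ambiguity (a source at theta in front and its mirror behind give the same delay), which is consistent with the post's point that one pair of 1D sensors gives direction, not a full 3D location.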
Our 2D vision was now capable of perceiving depth, and our 1D hearing could locate direction. Since our visual processing is one of the most complex design elements in our body, and already very well developed for perceiving depth, the designers never thought of giving any extra features to our other sensors. But Nature has in fact produced creatures with designs different from ours, with ears at a vertical offset, on top of the head, and so on, which I will discuss in my next post.

References:
1. http://puneethbc.blogspot.com/2007/03/computer-vision-4.html
2. http://puneethbc.blogspot.com/2007/04/computer-vision-13.html
