vickybat
I am the night...I am...
*i.imgur.com/x8IQL.jpg
Seriously, I’m no enthusiast photographer and, in reality, a complete noob to the world of photography. Although I own a typical digital point and shoot camera, it’s used as a general purpose clicker with no insight into actual photography. This all changed recently when my elder cousin bought a new Nikon D7000 DSLR and showed off a bit as the only one in our family to own a professional grade camera. He of course gave me full access, and I started clicking abstract pictures through its viewfinder, which really intrigued me. Taking pictures using the viewfinder is in a totally different league compared to P/S cameras; the pictures I took were as accurate as what I saw through the viewfinder. That’s where my love for DSLRs grew, and I started to delve deeper into their mechanism and their advantages over regular cameras. This is what I’m going to share with you guys, and I hope it will be an interesting read for those who want to really understand the device that makes photography so amazing. Our forum member SUJOYP has been an inspiration for a noob like me and has regularly answered all my queries. He played a major part in kindling the photographer’s interest in me.
Before going into a DSLR’s internals, let’s find out what a camera actually does. I’m going to skip the film based counterparts, as they’ve largely given way to digital sensors.
Optics, image formation & focus
*i.imgur.com/fhSsx.gif
We must all have read about optics and lenses at some point in our education. Time to rekindle those memories. Two types of lenses exist, convex and concave. Here we are more interested in the convex type, which converges the rays of light from an object passing through it to a certain point, forming a real image. Now how exactly does this happen?
Let me give an example: consider a cart travelling along a concrete pavement. Further ahead is a patch of grass, a surface completely different from the concrete. Now suppose the cart enters the grass at an angle rather than head-on. One of the wheels, left or right, enters the grass first while the other is still on concrete. Let’s assume the left wheel enters the grass and slows down due to friction. The right wheel, still being on concrete at that point, doesn’t lose speed and continues. This difference in speed makes the cart turn: it deviates from its original path because its left and right wheels are momentarily moving at different speeds.
Now consider a wave of light striking a transparent lens at an angle. One part of the wavefront strikes the lens before the other and slows down, while the other continues at its original speed. This difference makes the light wave deviate from its original path, which is what we call the bending (refraction) of light through a lens. Have a look at the diagram below for a brief idea:
Now let’s concentrate on how an image is actually formed through a lens, again considering a convex lens, since its converging nature produces a real image. Let’s say we place an object, such as a candle, before the lens. Rays of light from the candle strike the lens at different angles. The ones that hit the lens head-on don’t deviate, because both parts of that wavefront strike the lens at the same time. The rays that strike the lens at an angle do deviate, for the reason described above.
The rays of light from the candle deviate and converge on the other side, forming a real and inverted image. It’s called a real image because it is formed where the converging rays actually meet. This is nothing different from what we learnt at school and college level.
Now consider moving the object farther from and closer to the lens, with a screen placed on the other side of the lens so we can see the image clearly. We notice that as we move the object farther from the lens, the image gets blurred, and the same happens when we move it too close. Let me explain the exact phenomenon behind it:
When we move the object away from the lens, the rays from any point on it arrive at the lens spread over a narrower angle, almost parallel. After refraction they converge steeply and meet at a point closer to the lens. Moving the object closer gives the opposite phenomenon: the rays arrive strongly diverging, leave the lens converging only gently, and meet at a point farther from it.
*i.imgur.com/ZdExQ.gif
*i.imgur.com/HXaAn.gif
So in the candle experiment above, the image originally fell exactly on the screen, giving a clear real image. When we moved the candle closer, the rays converged farther away, so the image formed behind the screen; when we moved it farther, the image formed in front of the screen. Either way, it blurred. Since the screen (or the sensor, in a camera) is fixed, we have to move the lens instead to bring the real image back onto it, depending on the distance of the object. This is called focusing, and it’s exactly what happens in a camera when we focus on an object to take a picture.
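For the numerically inclined, this focusing behaviour is captured by the thin-lens equation, 1/f = 1/do + 1/di (the standard textbook relation; it isn’t quoted anywhere above). Here’s a minimal Python sketch, with an assumed 50 mm focal length, showing the image plane creeping toward the lens as the object recedes:

```python
# Minimal sketch of the thin-lens equation, 1/f = 1/d_o + 1/d_i.
# The 50 mm focal length and the object distances are illustrative values.

def image_distance(focal_length_mm: float, object_distance_mm: float) -> float:
    """Return the lens-to-image distance d_i for a given object distance d_o."""
    if object_distance_mm <= focal_length_mm:
        raise ValueError("Object inside the focal length forms no real image.")
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

f = 50.0  # assumed focal length in mm
for d_o in (200.0, 1_000.0, 10_000.0):
    print(f"object at {d_o:>8.0f} mm -> image at {image_distance(f, d_o):6.2f} mm")

# object at      200 mm -> image at  66.67 mm
# object at     1000 mm -> image at  52.63 mm
# object at    10000 mm -> image at  50.25 mm
```

As the object recedes, the image plane approaches the focal length itself, which is why the lens only needs to move a little to refocus on distant subjects.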
Lenses
*i.imgur.com/B1cPL.jpg
Now let’s talk about lenses. The converging power of a convex lens depends on its shape. The more the bulge, the stronger the converging nature: the lens bends the rays more sharply and brings them to a point much closer to itself. Basically, curving the lens out increases the path difference between different points on the wavefront; one part of the light wave moves faster than another for longer, so the light makes a sharper turn. We get a diminished image in this case. A flatter lens has less converging power; the rays converge more gently and the image forms farther away, magnified. This works much like a projector: move the projector farther from the screen and you get a larger image, because the light rays have spread more by the time they arrive. The same thing happens with a flatter lens.
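The “more bulge, more converging power” idea can also be put in numbers via the lensmaker’s equation for a thin lens in air, 1/f = (n − 1)(1/R1 − 1/R2). This is a standard optics formula, not something from the post, and the glass index and radii below are purely illustrative:

```python
# Lensmaker's equation sketch: smaller surface radii (more "bulge")
# give a shorter focal length, i.e. stronger convergence.
# n = 1.5 is a typical glass index, assumed for illustration.

def focal_length(n: float, r1_mm: float, r2_mm: float) -> float:
    """Thin-lens focal length; sign convention: R1 > 0, R2 < 0 for biconvex."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

n = 1.5
print(focal_length(n, 100.0, -100.0))  # gently curved -> f = 100 mm
print(focal_length(n, 40.0, -40.0))    # more bulged   -> f =  40 mm
```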
Now let’s connect this with a real camera. We learnt that increasing the distance between the lens and the plane where the real image forms increases the size of that image, while the sensor that actually captures it stays a fixed size. A faraway object casts its image close to the lens; if the lens sits too far forward, that image falls in front of the sensor and looks blurred, or out of focus. Moving the lens away from the sensor pushes the image back onto it for proper focus, and also magnifies it, so only a part of the scene fits on the fixed-size sensor. On the contrary, a nearby object casts its image farther behind the lens, so we move the lens closer to the sensor, and we get a diminished but complete image that fits the entire sensor.
What we deduce from here is that the distance between the lens and the image it forms of a distant object is the focal length of the lens. The longer the focal length, the greater the lens’s ability to zoom in, producing a magnified image of a specific part of the scene. A shorter focal length gives a wider view: the diminished image fits the whole sensor, giving a wide angled or complete view. In a zoom lens, we can move different lens elements back and forth; by changing the distance between particular elements, we adjust the magnification power, i.e. the focal length, of the lens as a whole.
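To see how focal length translates into a wide or narrow view, here’s a small sketch using the standard angle-of-view relation; the 23.6 mm sensor width is that of an APS-C body like the D7000 and is only an assumed example figure:

```python
import math

# Angle of view: AOV = 2 * atan(sensor_width / (2 * f)).
# The 23.6 mm width is an assumed APS-C figure, used for illustration.

def angle_of_view_deg(focal_length_mm: float, sensor_width_mm: float = 23.6) -> float:
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

for f in (18, 50, 200):
    print(f"{f:>3} mm lens -> {angle_of_view_deg(f):5.1f} deg horizontal view")

#  18 mm -> ~66.5 deg (wide: the whole scene fits the sensor)
#  50 mm -> ~26.6 deg
# 200 mm -> ~ 6.8 deg (narrow: a magnified crop of the scene)
```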
Chromatic aberration
Now, a typical camera lens is not a single piece of glass but several lens elements combined into a single unit. The reason is that light from an object consists of several colors with different wavelengths (the distance between successive crests of the wave), and a simple lens converges each wavelength at a slightly different angle. With a single fixed lens, the differently colored rays converge at different points, and in the image we see several colored, unaligned instances of the subject, as if several images were overlapped. This is called an aberration, and it’s chromatic because it involves the colors of light.
*i.imgur.com/E0qku.gif
*i.imgur.com/xgFbp.gif
To align the images into one, several lens elements are used which converge and align the different wavelengths to a common point, eradicating chromatic aberration. Cameras compensate for this using several lenses made of different materials; each handles colors differently, and combined in a certain way, the colors are realigned. This is actually a very important aspect in choosing a digital camera or SLR. Mathematically, chromatic aberration can be corrected by pairing lenses with suitable Abbe numbers, as follows:
Considering two thin lenses in contact with focal lengths f1 and f2 respectively, the corrective condition is:
f1·V1 + f2·V2 = 0, where V1 and V2 are the Abbe numbers, which describe each lens material’s dispersion relative to its refractive index. Here,
1/f1 + 1/f2 = 1/f, where f is the net focal length of the two lenses.
Since Abbe numbers are positive, one of the focal lengths must be negative, i.e. belong to a diverging element. Thus a chromatic aberration correcting pair must combine a concave lens with a convex one, since the concave lens diverges light rays. So a camera lens consists of a series of convex and concave elements to correct chromatic aberration.
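Putting the two formulas above together, here’s a minimal sketch that solves the achromat condition for the second element; the Abbe numbers are typical crown/flint glass values I’ve assumed for illustration, not figures from the post:

```python
# Achromatic doublet sketch: solve f1*V1 + f2*V2 = 0 for f2,
# then combine via 1/f = 1/f1 + 1/f2 (both formulas as given above).
# V1 ~ 64 (crown) and V2 ~ 36 (flint) are assumed typical glass values.

def achromat(f1_mm: float, v1: float, v2: float) -> tuple[float, float]:
    f2 = -f1_mm * v1 / v2               # comes out negative: a diverging element
    f = 1.0 / (1.0 / f1_mm + 1.0 / f2)  # net focal length of the pair
    return f2, f

f2, f = achromat(f1_mm=50.0, v1=64.0, v2=36.0)
print(f"flint element f2 = {f2:.1f} mm, combined f = {f:.1f} mm")
# flint element f2 = -88.9 mm, combined f = 114.3 mm
```

Note how the math forces the second element to be diverging (negative f2), which is exactly why real camera lenses mix convex and concave elements.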
*i.imgur.com/pUvBi.jpg
Since we now have a brief idea of image formation and other important aspects, let’s move on to the real topic, the DSLR:
Digital Single Lens Reflex (DSLR)
As the name suggests, SLR stands for single lens reflex, and the prefix D stands for digital: today’s SLR cameras are filmless and use digital image sensors, either CCD or CMOS, the latter being the more widely used now.
So what is really the hype about these SLR cameras, and why are they so significant to every enthusiast photographer out there? It’s not only because an SLR has an interchangeable lens mechanism, but also because of the reflecting mirror that resides within the body of the camera, and hence the word “Reflex”.
A DSLR is the only widely used professional and general camera type to sport an optical viewfinder (OVF), as opposed to the electronic viewfinder (EVF) found in bridge and point & shoot cameras. The OVF is what makes a DSLR so special: it lets the user look through the lens of the DSLR and view the actual image that is to be taken. Now, what is the actual use of an OVF, and why do we need it in the first place? The answer is that we get to preview the actual image to be captured at any instant of time, without any image latency. The picture will become clearer as we discuss the mechanism of the OVF in a DSLR below:
A DSLR has an interchangeable lens attached to its body. Within the body resides an image sensor (CCD or CMOS) that captures the image. What separates a DSLR from regular P/S cameras is a reflecting mirror just in front of the shutter and image sensor. Its job is to reflect the light coming through the camera lens vertically upwards, preventing it from reaching the sensor as long as the shutter isn’t released.
The mirror reflects the light vertically upwards into a special five-faced prism known as a pentaprism. The job of this pentaprism is to reflect the light from the mirror twice through 90 degrees and send it out horizontally into the viewfinder eyepiece, through which the person looks at the object he/she wants to capture. Basically, the prism flips the image so it appears right side up again and redirects it to the viewfinder. This mechanism involves no electronics; it is purely optical. The real image formed by the converging light rays is reflected by the mirror and prism directly into the viewer’s eye. So at every instant, the viewer sees the scene, whether still or moving, and can take the picture of exactly what he/she previews through the viewfinder. Since actual light from the object falls on the viewer’s eye, there is effectively zero lag.
The pentaprism, although a solid piece of glass like normal prisms, doesn’t reflect light by total internal reflection, because the light doesn’t strike its faces beyond the critical angle needed for that to happen. In reality, the two faces where the light gets reflected carry a reflective coating and thus act as mirrors, and light falling on them obeys the ordinary laws of reflection. Pentamirrors are also used instead of prisms; Canon first introduced them in its range of DSLRs.
*i.imgur.com/gQYmM.jpg *i.imgur.com/oARO0.png
This mechanism helps us take pictures of fast moving subjects, like a rabbit or a swift bird in motion, without any preview lag. When the shutter is released, the mirror flips up, the light strikes the image sensor at the back of the camera directly, and the image is recorded instantly. This preview mechanism is still unmatched by any technology and thus still exists in current DSLRs. Refer to the image below to understand better:
*i.imgur.com/3xgfA.jpg
Now, normal point and shoot or bridge cameras have no mirror/prism setup and thus can’t show the user the real image. What we see on the LCD or through the EVF isn’t the real image but a preview generated by the sensor itself. In a P/S camera, light from the lens falls directly on the sensor even in idle mode, and the sensor electronically previews the image on the LCD or on the eyepiece present in bridge cameras. It’s also called live view, since the sensor reproduces the image electronically at every instant. But this has a significant disadvantage: image latency. The sensor takes a certain amount of time to build the preview, and if the subject is in motion, the sync between the movement and the preview isn’t 100%; there is noticeable lag.
Consider a rabbit heading for its hole, and a user who wants to capture it just as it’s about to jump in. He previews the scene in live view, and when the rabbit reaches that particular instant, he releases the shutter. But when he looks at the photo, he finds the rabbit had already entered the hole by the time the shot was taken. While the sensor was busy building the live view image of the jumping rabbit, the rabbit entered the hole, so the captured image differs from the previewed one. Taken with a DSLR, the user would have got the exact shot of the rabbit. So latency plagues EVFs; it is constantly being improved, but hasn’t reached the zero latency a DSLR achieves through pure optics. Nowadays even SLR cameras provide live view alongside the OVF with the help of electronics and sensors; it’s used in situations where the OVF can’t be, such as underwater photography. In some SLRs, the light is redirected to a separate sensor that generates the electronic preview.
Phase Detection Autofocus
DSLRs incorporate a unique focusing system known as phase detection autofocus. What it does is simple: the mechanism automatically focuses on the subject with the help of an AF sensor. It’s a passive focusing system. A beam splitter, implemented as a small semi-transparent area of the main reflex mirror and coupled with a small secondary mirror, diverts light coming from the subject down to an AF sensor at the bottom of the camera, where the phase shift between the resulting images is measured and the focusing parameters are calculated.
*i.imgur.com/J7dnV.jpg
In this simplified schematic, we can see what happens to the image cast by the light passing through the left (blue dotted line) and right (red dotted line) sides of the lens.
When in focus, the light from both sides of the lens converges to create a focused image. However, when not in focus, the images projected by two sides of the lens do not overlap (they are out of phase with one another).
Of course this is a massively simplified diagram with a single, vertical straight line as the subject (and no inversion of the image as it passes through the lens). The point is that we can derive information about focus if we can separately view light coming from opposite sides of the lens.
So we see that the key to phase detection is the ability to capture light separately from different parts of the lens, so that at least two images can be formed and compared. This information makes it possible to calculate exactly where the lens needs to move for accurate focusing. It also makes continuous AF easier, since the system can quickly calculate a new focusing distance, and even assess the subject's rate of movement, as long as two distinct phase shifted images reach the sensor.
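As a toy illustration of the idea (not how a real AF module is implemented), the sketch below measures the shift between two 1-D intensity profiles by cross-correlation, which is the essence of turning the phase shift into a focus correction:

```python
import numpy as np

# Toy phase-detection sketch: the AF sensor sees two 1-D intensity
# profiles of the same subject, one from each side of the lens; the
# lag between them says how far, and in which direction, focus is off.
# All values here are invented for illustration.

def phase_shift(left: np.ndarray, right: np.ndarray) -> int:
    """Return the lag (in samples) at which the two profiles line up best."""
    corr = np.correlate(left - left.mean(), right - right.mean(), mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

base = np.zeros(64)
base[30:34] = 1.0                 # the "subject": a bright vertical edge
left = np.roll(base, -3)          # out of focus: the two images separate
right = np.roll(base, +3)

print(phase_shift(left, right))   # -6: the images are 6 samples apart
print(phase_shift(base, base))    #  0: in focus, the profiles coincide
```

The sign and size of the measured lag are what let the camera drive the lens straight to the correct position instead of hunting back and forth.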
Exposure
Exposure describes the amount of light reaching the sensor from the scene when the shutter is released, and also the duration for which the sensor is exposed to that light. Light entering a camera is controlled by the aperture, which is an opening in the lens that works much like the iris of a human eye. It consists of metal blades that shrink and expand the diameter of the opening. In bright environments a smaller diameter lets in less light, and vice versa in dark environments.
*i.imgur.com/Vav9Z.jpg
SLR cameras use a different kind of shutter, known as a focal plane shutter. The mechanism is very simple: it basically consists of two "curtains" between the lens and the sensor. Before we take a picture, the first curtain is closed, so the sensor isn't exposed to light. When we take the picture, this curtain slides open; after a set amount of time, the second curtain slides in from the other side to end the exposure. Current DSLRs use sensors and electronics to control exposure, and thus offer better performance than point and shoots in varied lighting conditions.
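How aperture and shutter time trade off can be summarized with the standard exposure-value relation, EV = log2(N²/t), where N is the f-number and t the shutter time in seconds. The settings below are illustrative examples, not figures from the post:

```python
import math

# Exposure value sketch: EV = log2(N^2 / t). Settings with the same EV
# admit the same total light; these example pairs are assumed values.

def exposure_value(f_number: float, shutter_s: float) -> float:
    return math.log2(f_number ** 2 / shutter_s)

print(exposure_value(8.0, 1 / 125))  # ~13.0
print(exposure_value(5.6, 1 / 250))  # ~12.9 (wider aperture, shorter time)
print(exposure_value(4.0, 1 / 500))  # ~13.0 (an equivalent exposure)
```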
Sensors
DSLRs incorporate large sensors of both CCD and CMOS types. They are much larger than conventional P/S sensors and thus have far better sensitivity and add less noise to the image. The largest, matching a 35 mm film frame, are referred to as full frame sensors. There is a connection between sensor size and image quality: in general, a larger sensor gives lower noise and higher sensitivity. There is also a connection between sensor size and depth of field, with a larger sensor producing a shallower depth of field at a given aperture.
As I said, DSLR sensors, although large, come in two types: CCD (charge coupled device) and CMOS (complementary metal oxide semiconductor). The function of both is the same, to convert light energy into electrical signals, but it's achieved differently. A simplified way to think about these sensors is as a 2-D array of thousands or millions of tiny solar cells.
Once the sensor converts the light into electric charge, the camera reads the value of the charge in each cell of the image. This is where the similarity between a CCD and a CMOS sensor ends; below we look at how each performs its work:
CCD - *i.imgur.com/fOLj2.jpg In a CCD device, the charge is physically transported across the chip and read at one corner of the array, where an analog-to-digital converter turns each pixel's value into a digital value. Using a special manufacturing process, the sensor can transport the built-up charge across itself without compromising image quality. The first row of the array is read into an output register, which in turn feeds an amplifier and an analog-to-digital converter. After the first row has been read, it is dumped from the readout register and the next row of the array is read in. The charges of each row are 'coupled', so as each row moves down and out, the successive rows follow in turn. The digital data is then stored as a file that can be viewed and manipulated. Refer to the schematic diagram below:
*i.imgur.com/52dth.jpg
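Here's a toy simulation of that row-by-row, coupled readout; the charge values and the 8-bit ADC stand-in are invented for illustration. A CMOS sensor, by contrast, could read any pixel directly:

```python
import numpy as np

# Toy CCD readout: each edge row of accumulated charge shifts into a
# readout register and is digitized pixel by pixel, while the rows
# above march down in lockstep -- the "coupling" in "charge coupled".

def ccd_readout(charge: np.ndarray) -> np.ndarray:
    frame = charge.copy()
    rows_out = []
    while len(frame):
        register = frame[0]                 # edge row moves into the readout register
        frame = frame[1:]                   # all remaining rows shift down one step
        rows_out.append([int(q * 255) for q in register])  # amplifier + ADC stand-in
    return np.array(rows_out, dtype=np.int32)

image = np.random.default_rng(0).random((4, 6))  # fake accumulated charge, 0..1
print(ccd_readout(image))
```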
CMOS - *i.imgur.com/3Pyhw.jpg In most CMOS devices, there are several transistors at each pixel that amplify and move the charge over more traditional wires. This is more flexible, as each pixel is read and digitized individually. The transistors allow processing to be done right at the photosite, and each pixel/photosite can be accessed independently. Because the transistors occupy space on the array, some of the incoming light hits the transistors rather than the photosites, which contributes to picture noise. CMOS sensors have also traditionally operated at very low gain, which may add noise. Refer to the schematic diagram below:
*i.imgur.com/y7ZmX.jpg
Usually CCD sensors offer lower noise, but CMOS sensors consume much less power and are far more efficient. Today's DSLRs use CMOS sensors almost exclusively, as they have evolved to match or surpass CCDs in quality and overall noise.
Conclusion
So that was more like it, guys. This guide is meant for all members here who want to know more about DSLRs and their basic functioning. It covers all the important aspects, and suggestions from expert members for making the content more accurate and useful are most welcome. I have tried to make this guide as simple as possible and hope it delivers. My love for DSLRs grew when I used my brother's Nikon D7000 for the first time, to take a picture of him and my bhabi. After looking through the OVF for the first time, I got exactly the shot I saw through it and instantly realized its significance. That photo is still praised by my family members and will always be the fuel that sparked the photographer in me, who is still in the making. And of course there's Austin Stevens, the world famous herpetologist and wildlife photographer; I had seen him work with dangerous venomous snakes, clicking pictures up close using DSLRs, and he has been my idol all along. I will be adding Micro Four Thirds and mirrorless cameras sometime later. Please do post your valuable feedback, guys.