How Digital Cameras Work

What exactly is going on behind the lens of a digital camera? The image sensor employed by most digital cameras is a charge-coupled device (CCD). Some low-end cameras use complementary metal oxide semiconductor (CMOS) technology. While CMOS sensors will almost certainly improve and become more popular in the future, they probably won't replace CCD sensors in higher-end digital cameras. Throughout the rest of this article, we will mostly focus on the CCD. For the purpose of understanding how a digital camera works, you can think of the two as nearly identical devices; most of what you learn will also apply to CMOS cameras.

The CCD is a collection of tiny light-sensitive diodes, which convert photons (light) into electrons (electrical charge). These diodes are called photosites. In a nutshell, each photosite is sensitive to light -- the brighter the light that hits a single photosite, the greater the electrical charge that will accumulate at that site.
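The brightness-to-charge relationship can be sketched as a toy model. The full-well capacity, quantum efficiency and all numbers below are illustrative assumptions, not real sensor specifications:

```python
# Toy model of a photosite: charge accumulates in proportion to the
# light striking it, up to a saturation point ("full well capacity").
# All numbers here are illustrative assumptions, not real sensor specs.

FULL_WELL = 50_000  # assumed maximum electrons one photosite can hold

def accumulate_charge(photons_per_sec, exposure_sec, quantum_efficiency=0.5):
    """Return the electrons collected by one photosite during an exposure."""
    electrons = photons_per_sec * exposure_sec * quantum_efficiency
    return min(electrons, FULL_WELL)  # very bright light saturates the site

dim = accumulate_charge(10_000, 1.0)
bright = accumulate_charge(80_000, 1.0)
print(dim, bright)  # brighter light -> more accumulated charge
```

The `min()` cap models why extremely bright areas of a scene stop gaining detail: once a photosite is full, extra photons add no more charge.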

One of the drivers behind the falling prices of digital cameras has been the introduction of CMOS image sensors. CMOS sensors are much less expensive to manufacture than CCD sensors.


A CMOS image sensor

Both CCD and CMOS image sensors start at the same point -- they have to convert light into electrons at the photosites. If you've read How Solar Cells Work, you already understand one of the pieces of technology used to perform the conversion. A simplified way to think about the sensor used in a digital camera (or camcorder) is to imagine a 2-D array of thousands or millions of tiny solar cells, each of which transforms the light from one small portion of the image into electrons. Both CCD and CMOS devices perform this task using a variety of technologies.

CCD vs. CMOS Sensors

How big are the sensors?

The current generation of digital sensors is smaller than film. Typical film emulsions exposed in a film-based camera measure 24mm x 36mm. If you look at the specifications of a typical 1.3-megapixel camera, you'll find that it has a CCD sensor that measures 4.4mm x 6.6mm. As you'll see in a later section, a smaller sensor means smaller lenses.

Once the light is converted into electrons, the differences between the two main sensor types kick in. The next step is to read the value (accumulated charge) of each cell in the image. In a CCD device, the charge is actually transported across the chip and read at one corner of the array. An analog-to-digital converter turns each pixel's value into a digital value. In most CMOS devices, there are several transistors at each pixel that amplify and move the charge using more traditional wires. The CMOS approach is more flexible because each pixel can be read individually.
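The two readout styles can be contrasted with a sketch over a small 2-D charge array. The serial-shift model of the CCD and the function names are simplifications for illustration:

```python
# Sketch contrasting CCD and CMOS readout on a tiny 2-D charge array.
# The shift-register model and function names are illustrative only.

grid = [
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
]

def ccd_readout(charges):
    """CCD-style: every charge is shifted toward one corner and read serially
    through a single output stage, so the whole array comes out in order."""
    values = []
    for row in charges:            # rows shift toward the readout register...
        for charge in row:         # ...then shift out one pixel at a time
            values.append(charge)
    return values

def cmos_readout(charges, row, col):
    """CMOS-style: per-pixel transistors let any pixel be addressed directly."""
    return charges[row][col]

print(ccd_readout(grid))         # the full array, read serially
print(cmos_readout(grid, 1, 2))  # one pixel, read individually
```

The key contrast the text describes is visible in the interfaces: the CCD sketch can only produce the whole array in a fixed order, while the CMOS sketch takes a row and column and returns a single pixel.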

CCDs use a special manufacturing process to create the ability to transport charge across the chip without distortion. This process leads to very high-quality sensors in terms of fidelity and light sensitivity. CMOS chips, on the other hand, use completely standard manufacturing processes to create the chip -- the same processes used to make most microprocessors. Because of the manufacturing differences, there are several noticeable differences between CCD and CMOS sensors.

  • CCD sensors, as mentioned above, create high-quality, low-noise images. CMOS sensors, traditionally, are more susceptible to noise.
  • Because each pixel on a CMOS sensor has several transistors located next to it, the light sensitivity of a CMOS chip is lower. Many of the photons hitting the chip hit the transistors instead of the photodiode.
  • CMOS sensors traditionally consume little power. Implementing a sensor in CMOS yields a low-power sensor. CCDs, on the other hand, use a process that consumes lots of power. CCDs consume as much as 100 times more power than an equivalent CMOS sensor.
  • CMOS chips can be fabricated on just about any standard silicon production line, so they tend to be extremely inexpensive compared to CCD sensors.
  • CCD sensors have been mass produced for a longer period of time, so they are more mature. They tend to have higher quality pixels, and more of them.

Based on these differences, you can see that CCDs tend to be used in cameras that focus on high-quality images with lots of pixels and excellent light sensitivity. CMOS sensors usually have lower quality, lower resolution and lower sensitivity. However, CMOS cameras are less expensive and have great battery life.

Resolution

The amount of detail that the camera can capture is called the resolution, and it is measured in pixels. The more pixels your camera has, the more detail it can capture. The more detail you have, the more you can blow up a picture before it becomes "grainy" and starts to look out-of-focus.

Some typical resolutions that you find in digital cameras today include:

  • 256x256 pixels - You find this resolution on very cheap cameras. This resolution is so low that the picture quality is almost always unacceptable. This is about 65,000 total pixels.
  • 640x480 pixels - This is the low end on most "real" cameras. This resolution is great if you plan to e-mail most of your pictures to friends or post them on a Web site. This is about 307,000 total pixels.
  • 1216x912 pixels - If you are planning to print your images, this is a good resolution. This is a "megapixel" image size -- about 1,109,000 total pixels.
  • 1600x1200 pixels - This is "high resolution." Images taken with this resolution can be printed in larger sizes, such as 8x10 inches, with good results. This is almost 2 million total pixels. You can find cameras today with up to 10.2 million pixels.
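The totals in the list above come from simple multiplication of width by height; this short sketch reproduces them:

```python
# Total pixel counts behind the figures quoted above.

def total_pixels(width, height):
    """Resolution in pixels is simply width times height."""
    return width * height

for width, height in [(256, 256), (640, 480), (1216, 912), (1600, 1200)]:
    count = total_pixels(width, height)
    print(f"{width}x{height}: {count:,} pixels ({count / 1e6:.2f} megapixels)")
```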

Resolution: Printing Pictures
There are many different technologies used in printers. Here we'll talk about
inkjet printers, which themselves employ a variety of methods. In general, printer manufacturers advertise printer resolution in dots per inch (dpi). However, all dots are not created equal. One printer may place more drops of ink (black, cyan, magenta or yellow) per dot than another.

For instance, printers made by Hewlett Packard that use PhotoREt III technology can layer a combination of up to 29 drops of ink per dot, yielding about 3,500 possible colors per dot. This may sound like a lot, but most cameras can capture 16.8 million colors per pixel. So these printers cannot replicate the exact color of a pixel with a single dot. Instead, they must create a grouping of dots that when viewed from a distance blend together to form the color of a single pixel.

The rule of thumb is that you divide your printer's color resolution by about four to get the actual maximum picture quality of your printer. So for a 1200 dpi printer, a resolution of 300 pixels per inch would be just about the best quality that printer is capable of. This means that with a 1200x900 pixel image, you could print a 4-inch by 3-inch print. In practice, though, lower resolutions than this usually provide adequate quality. To make a reasonable print that comes close to the quality of a traditionally developed photograph, you need about 150 to 200 pixels per inch of print size.
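The divide-by-four rule of thumb can be turned into a small calculation. The function name is an illustrative choice, but the arithmetic follows the text exactly:

```python
# Rule of thumb from the text: the effective pixels-per-inch of a printer
# is roughly its dpi divided by four; maximum print size at best quality
# is the image dimensions divided by that effective ppi.

def max_print_size(width_px, height_px, printer_dpi):
    """Return (width_in, height_in) of the best-quality print, in inches."""
    effective_ppi = printer_dpi / 4   # e.g. a 1200 dpi printer -> 300 ppi
    return width_px / effective_ppi, height_px / effective_ppi

print(max_print_size(1200, 900, 1200))  # the 4x3-inch example from the text
```

As the text notes, 150 to 200 pixels per inch is usually adequate in practice, so real-world prints can be somewhat larger than this conservative figure.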

Kodak recommends the following minimum resolutions for different print sizes:

Print Size     Megapixels   Image Resolution
Wallet         0.3          640x480 pixels
4x5 inches     0.4          768x512 pixels
5x7 inches     0.8          1152x768 pixels
8x10 inches    1.6          1536x1024 pixels

Capturing Color
Unfortunately, each photosite is colorblind. It only keeps track of the total intensity of the light that strikes its surface. In order to get a full-color image, most sensors use filtering to look at the light in its three primary colors. Once all three colors have been recorded, they can be added together to create the full spectrum of colors that you've grown accustomed to seeing on computer monitors and color printers.
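Assuming the common 8-bits-per-primary encoding (an assumption, since the text doesn't specify a bit depth), combining the three filtered readings yields the "16.8 million colors" figure mentioned in the printing section:

```python
# Each photosite records only intensity; filtered red, green and blue
# readings are combined per pixel into one full-color value.
# The 8-bit-per-channel depth is an assumed (but common) encoding.

BITS_PER_CHANNEL = 8
levels = 2 ** BITS_PER_CHANNEL  # 256 intensity levels per primary

def combine_rgb(red, green, blue):
    """Merge three filtered intensity readings (0-255 each) into one pixel."""
    return (red, green, blue)

print(levels ** 3)                # 256^3 = 16,777,216 possible colors
print(combine_rgb(200, 120, 40))  # one reconstructed full-color pixel
```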
