Photography Notes:

Exposure


Aperture


Shutter speed

Lens

Gathers light, bends it by refraction, and focuses it onto the film or image sensor.

Focal length

Angles of view


Short (wide-angle) lenses

Normal lenses

Usually faster lenses

Long lenses

Zoom lens

Focus

To increase depth of field, stop down to a smaller aperture (higher f-number); a shorter focal length or more subject distance also helps.

How the image sensor works

Like film, the image sensor responds to the light cast upon it by the lens element. But the mechanics of this response are as different as night and day:

• Film responds chemically. The active ingredient in film is a layer of gelatinous emulsion filled with light-sensitive crystals. The crystals contain traces of silver. When light hits the film, impurities in the film crystals attract the silver atoms into microscopic clumps. Stronger light results in larger clumps (though still microscopic). The development process enlarges the clumps further, making them visible. Every step is irreversible, meaning that the film can be exposed and developed only once. From then on, the film serves as a storage medium.

• An image sensor responds electronically. The sensor is composed of a layer of silicon covered with a grid of square electrodes. The silicon is rife with negatively charged particles, or electrons. When light passes through the electrodes, it sends the electrons scattering. Voltage applied to the electrodes attracts the free electrons into clusters called photosites. Stronger light and higher voltage at a specific electrode translate to more electrons per site. A digital converter counts the electrons at each site and sends the data out to the logic board for processing. The electrons are then released back into the silicon and the image sensor is ready to use all over again.

Admittedly, the last thing you want to worry about when shooting a photograph is the behavior of free electrons. But the unique proclivities of electrodes and silicon have a distinct effect on the performance of digital cameras.

Consider the aspect of speed. All this collecting and counting of electrons takes time, particularly if you're used to the immediate response of film. With film, the shutter opens, the shutter closes, the film advances, and you're ready for the next picture. The whole process can take as little as 1/8000 second! This makes film cameras ideally suited to rapid-fire shooting.


Capturing Color

Like film emulsion, free electrons respond to the intensity of light, but not to its color. In that regard, both film and image sensors see the world in shades of black and white. Film gets around this by combining three layers of emulsion, each sensitive to a different part of the color spectrum. The emulsion layers are colorized with dyes built into the film.

Image sensors capture color using red, green, and blue filters. These filters are nothing more than dabs of translucent plastic applied directly to the electrodes, as you can see in the microscope photograph featured in Color Plate 3-1 on page C5. A red filter removes all nonred light, creating a red view of the world, as if you were wearing bright red sunglasses. The green and blue filters remove all nongreen and nonblue light, respectively. Red, green, and blue light mix to form white, which contains all the colors in the visible spectrum. So the layers of red, green, and blue pixels mix to form most (if not quite all) of the colors our eyes can see.

Although all digital cameras rely on red, green, and blue filters, the ways in which these filters are used vary, especially among professional-level cameras.

Far and away the most popular color filtering method is a simplified option called the single-array system. One image sensor is equipped with red, green, and blue filters, much like the piezo system. But instead of moving, the filters are fixed to the electrodes. A single electrode can respond to just one filter, so each electrode is filtered independently. One electrode is filtered red, its neighbor is green, and the next is blue, as in Color Plate 3-2 on page C5. The exact filtering pattern varies among sensors, but the upshot is the same—one chip captures a full-color photograph.

But while the single-array system is the standard in digital photography (including even $10,000-and-up professional models like the Kodak DCS560), it does have one drawback. The camera has to interpolate each set of filtered pixels to make a continuous picture—one in red, one in green, and one in blue. These colored pictures are called channels, and each channel must be complete before the camera can construct a full-color photograph.

The green channel is the easiest to interpolate, since there are usually twice as many green pixels as red or blue. Every other pixel is green, so the camera has to interpolate only half the pixels to finish off the green channel. But only one out of every four pixels in the red and blue channels is captured by the image sensor. The other 75 percent of the channel is interpolated, as demonstrated in Color Plate 3-3 on page C6.
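The counting works out exactly as described, which is easy to verify in code. Below is a hypothetical sketch of a Bayer-style filter pattern (an illustration, not any camera's actual firmware):

    import numpy as np

    # A tiny 4 x 4 sensor: green on every other photosite, red and
    # blue on one photosite in four apiece.
    H, W = 4, 4
    pattern = np.empty((H, W), dtype='<U1')
    pattern[0::2, 0::2] = 'G'; pattern[0::2, 1::2] = 'R'
    pattern[1::2, 1::2] = 'G'; pattern[1::2, 0::2] = 'B'

    for color in 'RGB':
        print(color, (pattern == color).mean())   # R 0.25, G 0.5, B 0.25

    # Half the green channel is measured directly; 75 percent of the
    # red and blue channels must be interpolated from neighbors.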

And yet, as Color Plate 3-4 on page C6 shows, interpolated color channels come out looking much better than, say, an image that's been interpolated up from a lower resolution (Figure 3-13). There are three reasons for this:

• The channels compensate for each other when mixed together. Point to any pixel, and that's exactly how it looked to a specific electrode.

• The camera draws information from all three channels when interpolating any one of them. This means the colors of the green and blue pixels are mathematically factored in when the missing red ones are calculated. This is called color interpolation.

• The image sensor mimics our eyes. Most image sensors favor green filters for a very good reason. Our eyes are more sensitive to green light than to red or blue. A highly detailed green channel tricks us into perceiving a highly detailed color photograph.

A few cameras permit you to shoot color and black-and-white photographs. If you'll print the final image in black and white, then shoot the image in black and white. Why? Because with a black-and-white photo, a digital camera ignores the filters, resulting in one channel of image data with no interpolation (Figure 3-14). It's all darks and lights, which the electrodes see without filtering.

The Math Behind the Pixels

You see a digital image as a collection of pixels. But to the computer, the image is a file full of numbers. Because a computer's brain is merely a succession of switches—albeit an exceedingly long succession of switches—each digit in a file can be either a 0 or a 1, off or on. This special digit is called a bit, short for binary digit. In an image file, 0 indicates a black pixel, 1 indicates white. Hence, many computer artists refer to digital images as bitmaps.

The problem with equating a pixel with a single bit of data is that you don't have any intermediate colors to work with. You have black, you have white, end of story. As shown in Figure 4-2, black and white pixels are fine for representing line art, but they can't measure up to the task of digital photography.

The solution is to string a series of bits together. For example, if you use 2 bits to define a pixel, then you have room for two gray values. The value 00 is black, 01 is dark gray, 10 is light gray, and 11 is white. That's a total of 2 x 2 = 4 variations.

If 2 bits are better than 1, why not add more? The standard for grayscale imagery is 8 bits (or 1 byte) per pixel. If you multiply 2 by itself 8 times (2 to the 8th power), you get a total of 256 possible gray values in an image—black, white, and 254 others. That's why a grayscale photo is sometimes called an 8-bit image.

NOTE: As it so happens, a word processor uses 8 bits to express a character of type. For example, 01000001 is a capital A, 01100001 is a lowercase a. This means that each pixel in a grayscale image consumes the same space in your computer's memory as a character of type. The image at the bottom of Figure 4-1 contains 480,000 pixels, about as many letters as in this entire book. If you disregard all the formatting, the text in this book takes up about as much room in RAM as that single figure. And you thought that bit about a picture being worth a thousand words was just a cliché.
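Both the bit counting above and the NOTE's byte math are easy to verify. A quick sketch (the 480,000-pixel count is the figure cited in the NOTE; the rest is plain arithmetic):

    # Each added bit doubles the number of values a pixel can hold.
    for bits in (1, 2, 8):
        print(bits, 'bits ->', 2 ** bits, 'values per pixel')
    # 1 bits -> 2 values per pixel   (black and white)
    # 2 bits -> 4 values per pixel   (black, two grays, white)
    # 8 bits -> 256 values per pixel (black, white, and 254 grays)

    # At 8 bits (1 byte) per pixel, a grayscale image costs one byte
    # per pixel, the same as one character of type.
    pixels = 480_000                  # Figure 4-1, per the NOTE above
    print(pixels / 1024, 'KB')        # 468.75 KB uncompressed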

To human eyes, a photograph with 256 colors produces roughly the same visual effect as a grayscale image with a mere 6 shades of gray. Compare Figure 4-4 to Color Plate 4-1 and you'll see some striking similarities. This effect of harsh transitions between one color and the next is called posterization.

Figure 4-4 Equipped with a paltry 6 shades of gray, this boat suffers no worse posterization than it does when rendered with 256 colors (compare to Color Plate 4-1).

Although posterization may be an interesting effect, it isn't photographic. You need more colors. The obvious way to add more colors is to pile on more bits. The problem is, how? In a grayscale image, larger numbers are lighter. But when you add color to the mix, that simple rule doesn't work. You have to distinguish not only light from dark, but also vivid from drab, yellow from purple, and so on.

The solution is to divide color into its root components. There are several recipes for color, but by far the most popular is RGB, employed by all digital cameras, camcorders, scanners, and a host of other devices. The initials RGB stand for red, green, and blue—the primary colors of light. The idea is based on the behavior of light: If you shine three spotlights at the same point on a stage—one brilliant red, another bright green, and a third deep blue—you'll get a circle of neutral white. By alternately reducing the amount of light cast from one of the three spotlights, you can produce nearly all the colors in the visible spectrum, running the gamut from red to violet. As you dim a spotlight, the color grows darker; as you turn it up, the color grows lighter. This is why RGB is also known as the additive color model—you add light to get brighter colors.

Now imagine that instead of shining spotlights, you have three slide projectors. One contains a red slide, another a green version of that same image, and the third a blue version of the image. Shine the projectors at the same spot on the screen and—assuming the three slides were shot properly—you get the full-color photograph in all its glory.

This is precisely how digital images work, except that in place of slides, you have channels. Each channel is an independent 8-bit image, as shown in Figure 4-5. To generate the full-color photograph, the computer colorizes the channels and mixes them together, as illustrated in Color Plate 4-2 on page C7. Where the red channel is light—as along the top of the boat—red predominates. Where the green channel is light—the bottom of the boat—green comes through loud and clear. To see how other RGB combinations work, see Color Plate 4-3 on page C8, which shows how combinations of red, green, and blue mix to form yellow, violet, and white.

Three channels of 8-bit data means it takes 24 bits (or 3 bytes) to define each pixel. Hence, an RGB photograph is called a 24-bit image.
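The three-projector experiment translates directly into code. A sketch with made-up pixel values (not the book's figures), stacking three 8-bit channels into a single 24-bit image:

    import numpy as np

    # One 8-bit channel per primary; stacking them gives 3 bytes
    # (24 bits) per pixel, i.e. a 24-bit RGB image.
    red   = np.array([[255, 255], [255, 0]], dtype=np.uint8)
    green = np.array([[  0, 255], [255, 0]], dtype=np.uint8)
    blue  = np.array([[  0, 255], [  0, 0]], dtype=np.uint8)

    rgb = np.dstack([red, green, blue])   # shape (2, 2, 3)
    print(rgb[0, 0])   # [255   0   0] -> red alone
    print(rgb[0, 1])   # [255 255 255] -> all three at full: white
    print(rgb[1, 0])   # [255 255   0] -> red + green mix to yellow
    print(rgb[1, 1])   # [  0   0   0] -> no light: black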

Figure 4-5 Computers produce full-color images by combining three channels—one for red (top), one for green (middle), and one for blue (bottom).

TIP: The quality of an individual color channel varies from photograph to photograph. But as a general rule of thumb, the red channel contains much of the color information, with the widest contrast between darks and lights. The green channel typically excels in image detail, which means sharp focus and clarity. Our eyes are least sensitive to blue light, so the blue channel is often dark and in relatively poor condition, as demonstrated in Figure 4-7. Of the three, the green channel is usually the best suited for conversion to grayscale.
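Acting on that tip takes only a few lines with an imaging library. A sketch assuming Pillow and an RGB file (photo.jpg is a stand-in name):

    from PIL import Image

    img = Image.open('photo.jpg')   # hypothetical RGB file
    r, g, b = img.split()           # the three 8-bit channels
    g.save('gray.png')              # the green channel usually makes
                                    # the cleanest grayscale version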

TIP: When downsampling dramatically, it's a good idea to reduce the image in multiple steps. As a rule of thumb, avoid reducing the height or width of an image by more than 50 percent at a time. For example, to reduce an image to 30 percent of its original width, first downsample the image to 50 percent, then to 60 percent. (50% x 60% = 30%.) This way, you make sure that every pixel is calculated when interpolating the smaller image. Otherwise you run the risk of simply throwing away pixels, which may harm fine details in your image.
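With Pillow, the two-step reduction from the tip might look like this (again with a stand-in file name):

    from PIL import Image

    img = Image.open('photo.jpg')
    w, h = img.size

    # Step 1: reduce to 50 percent of the original size.
    img = img.resize((w // 2, h // 2), Image.LANCZOS)
    # Step 2: reduce that result to 60 percent (50% x 60% = 30% overall).
    img = img.resize((int(w * 0.3), int(h * 0.3)), Image.LANCZOS)
    img.save('small.jpg')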

Preparing Color Images for Print

1. Spotting, retouching, dust and scratch removal

2. Global tonal correction

3. Global color correction

4. Selective tonal and/or color correction

5. Targeting (sharpening, handling out-of-gamut colors, compressing tonal range, converting to CMYK).

Unsharp masking, or USM, is a traditional film compositing technique used to sharpen edges in an image. The Unsharp Mask filter corrects blurring introduced during photographing, scanning, resampling, or printing. It is useful for images intended for both print and online viewing.

The Unsharp Mask filter locates pixels that differ from surrounding pixels by the threshold you specify and increases the pixels’ contrast by the amount you specify. In addition, you specify the radius of the region to which each pixel is compared.

The effects of the Unsharp Mask filter are far more pronounced on-screen than in high-resolution output. If your final destination is print, experiment to determine what dialog box settings work best for your image.

To sharpen an image using the Unsharp Mask filter:

1 Choose Filter > Sharpen > Unsharp Mask.

2 Sharpen the image:

For Amount, drag the slider or enter a value to determine how much to increase the contrast of pixels. For high-resolution printed images, an amount between 150% and 200% is recommended.

For Radius, drag the slider or enter a value to determine the number of pixels surrounding the edge pixels that affects the sharpening. For high-resolution images, a Radius between 1 and 2 is recommended.

A lower value sharpens only the edge pixels, whereas a higher value sharpens a wider band of pixels. This effect is much less noticeable in print than on-screen, because a 2-pixel radius represents a smaller area in a high-resolution printed image.

For Threshold, drag the slider or enter a value to determine how different the sharpened pixels must be from the surrounding area before they are considered edge pixels and sharpened by the filter. To avoid introducing noise (in images with fleshtones, for example), experiment with Threshold values between 2 and 20. The default Threshold value (0) sharpens all pixels in the image.

If applying the Unsharp Mask filter makes already bright colors appear overly saturated, convert the image to Lab mode and apply the filter to the L channel only. This technique sharpens the image without affecting the color components.
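The Amount, Radius, and Threshold behavior described above can be modeled in a few lines of NumPy. This is a simplified sketch of the arithmetic under the hood, not Adobe's actual implementation:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, amount=1.5, radius=1.5, threshold=4):
        # img: 2-D float array, 0-255; amount=1.5 matches 150% in the dialog.
        blurred = gaussian_filter(img, sigma=radius)
        detail = img - blurred                  # the 'unsharp mask' itself
        # Threshold: skip pixels that don't differ enough from their
        # surroundings to count as edges (avoids sharpening noise).
        detail[np.abs(detail) < threshold] = 0
        return np.clip(img + amount * detail, 0, 255)

Pillow also ships a ready-made version as PIL.ImageFilter.UnsharpMask(radius, percent, threshold) if you would rather not hand-roll it.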

 

Correction Notes

Use the Measure tool to calculate the rotation angle.

Use the Front Image command to crop one image to the size of another.

Use Levels on each channel to correct color (a code sketch follows these notes).

Use Hue/Saturation to correct individual colors; use the eyedropper to target them.
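The Levels note above boils down to remapping each channel's black and white points. A hedged sketch in NumPy (the black and white inputs stand in for whatever the Levels eyedroppers would pick):

    import numpy as np

    def levels(channel, black, white):
        # Stretch one 8-bit channel so 'black' maps to 0 and 'white'
        # maps to 255 -- the core move of a per-channel Levels fix.
        out = (channel.astype(float) - black) / (white - black) * 255
        return np.clip(out, 0, 255).astype(np.uint8)

    # e.g., to cut a cyan cast, re-stretch the red channel:
    # rgb[..., 0] = levels(rgb[..., 0], black=10, white=230)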

To Correct Artifacts

To increase saturation in a JPEG and avoid artifacts (sketched in code after the steps):

1. Open Original

2. Copy to a new layer

3. Adjust saturation

4. Apply the Median filter (8-pixel radius)

5. Set the layer to Color blend mode

6. Merge layers
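The recipe above can be approximated outside Photoshop. A rough Pillow sketch (the file name, saturation boost, and filter size are placeholders, and the HSV recombination only approximates the Color blend mode, which keeps the original's luminosity while taking the copy's hue and saturation):

    from PIL import Image, ImageEnhance, ImageFilter

    orig = Image.open('photo.jpg')                   # step 1

    # Steps 2-4: work on a copy, boost saturation, then median-filter
    # the copy so JPEG block artifacts aren't amplified with the color.
    color_layer = ImageEnhance.Color(orig).enhance(1.5)
    color_layer = color_layer.filter(ImageFilter.MedianFilter(size=9))

    # Steps 5-6: hue and saturation from the adjusted copy, brightness
    # from the original; then flatten and save.
    h, s, _ = color_layer.convert('HSV').split()
    _, _, v = orig.convert('HSV').split()
    Image.merge('HSV', (h, s, v)).convert('RGB').save('out.jpg')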

Color Temperature

from Real World Digital Photography, p. 188