How is a picture represented when it is converted to digital (explain what a pixel is)?
Explain what RGB is. When is this color scheme used? Why are 32 bits often used for color when standard RGB takes only 24?
When the last 8 bits are used, what are they used for?
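As a sketch relevant to the questions above: one common convention packs 8 bits each of alpha (opacity), red, green, and blue into a single 32-bit integer. The ARGB byte order below is an assumption for illustration; other orders such as RGBA are also used in practice.

```python
# Pack 8-bit R, G, B, A channels into one 32-bit integer (ARGB order,
# one common convention among several).
def pack_argb(r, g, b, a=255):
    return (a << 24) | (r << 16) | (g << 8) | b

# Recover the four channels from a packed 32-bit pixel.
def unpack_argb(pixel):
    return ((pixel >> 16) & 0xFF,   # red
            (pixel >> 8) & 0xFF,    # green
            pixel & 0xFF,           # blue
            (pixel >> 24) & 0xFF)   # alpha

p = pack_argb(255, 128, 0)                      # a fully opaque orange
assert unpack_argb(p) == (255, 128, 0, 255)     # round-trips exactly
```

The "extra" high byte here carries the alpha channel, which is one common answer to what the last 8 bits are used for.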
Explain what CMYK is. When is this color scheme used? Why do we need the ‘K’ in addition to CMY?
What is a ‘layer’ in a graphics program? Why are they important?
What is lost when an analog phenomenon is converted to digital?
What is sampling?
What is quantization?
What is bit depth? What do we gain or lose if we increase or decrease bit depth?
What effect would changing the bit depth have on the quantization performed on an image when converting it from analog to digital?
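A toy sketch of the sampling/quantization/bit-depth questions above: quantizing an analog value to an n-bit level and mapping it back shows how fewer bits give coarser reconstructions. The functions below are illustrative, not any particular standard's formula.

```python
# Quantize an analog value in [0.0, 1.0] to an n-bit integer level.
def quantize(x, bits):
    levels = (1 << bits) - 1        # e.g. 255 levels for 8 bits
    return round(x * levels)

# Map the integer level back to the [0.0, 1.0] range.
def dequantize(q, bits):
    return q / ((1 << bits) - 1)

# The same input reconstructs less accurately as bit depth drops.
x = 0.3
for bits in (8, 4, 2):
    q = quantize(x, bits)
    print(bits, q, dequantize(q, bits))
```

At 8 bits the reconstruction is close to 0.3; at 2 bits the nearest available level is noticeably off, which is the quantization error the question asks about.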
What is the importance of structure?
Which is better: analog or digital?
Explain the concept of “good enough” when evaluating a compression method.
Why is a JPEG image not useful in a graphics print shop?
What is the defining characteristic of lossless compression? How does lossless compression actually make an image file smaller?
What is the defining characteristic of lossy compression? What does lossy compression do to make an image even smaller than lossless can?
Which would be more effective on a printed page from a book, lossy or lossless compression?
Which would be better for a picture of a jungle, lossy or lossless compression?
Explain why repeatedly compressing, editing, and uncompressing an image with lossless compression is okay, while doing the same with lossy compression can (and probably will) ruin it. Why does the repetition in this process make things worse?
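A toy model of this "generational loss": here the lossy save rounds to the nearest multiple of 10 (a stand-in for real quantization), and each edit brightens a value by 7. With a lossless save, three cycles would yield exactly 123 + 3·7 = 144; each lossy save adds its own rounding error on top.

```python
# A lossy "save" that quantizes to the nearest multiple of 10
# (standing in for the quantization a format like JPEG performs).
def lossy_save(x):
    return round(x / 10) * 10

value = 123
for _ in range(3):
    value = lossy_save(value + 7)   # edit (+7), then save lossily
print(value)                        # 150, drifting away from the true 144
```

The errors compound because every cycle quantizes an already-quantized value, so each generation starts from a slightly wrong baseline.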
Why is HSL useful for editing images for viewing?
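A sketch of why HSL-style editing is convenient, using Python's standard `colorsys` module (which uses the closely related HLS ordering): changing only the lightness channel brightens a color without shifting its hue or saturation.

```python
import colorsys

# Brighten a color by editing only the lightness channel, leaving hue
# and saturation alone -- the kind of perceptual edit HSL makes easy.
# The 'amount' parameter and 0-255 channel range are assumptions for
# this illustration.
def brighten(r, g, b, amount=0.2):
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    l = min(1.0, l + amount)
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

print(brighten(128, 0, 0))   # a dark red becomes a lighter red, same hue
```

Doing the same edit directly in RGB would require scaling all three channels carefully to avoid tinting the color, which is why HSL is the natural space for this kind of adjustment.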