Comparison of Color Appearance Models

Cherry Zhang, Winter 2010

Motivation

When viewing images on an LCD display, we adapt to the illumination of the surrounding environment. When the illumination of the surround changes, our state of adaptation changes, causing a noticeable shift in the color appearance of the displayed image. My research area is the development of a dynamic display that senses the illumination condition of the surround and automatically adjusts the color of the displayed image, providing viewers with a consistent viewing experience. This project is a sub-area of that research: it compares different mathematical models that can be used to adjust the color appearance of an image.


Goal

The project is based on a recent study of color appearance in high-dynamic-range imaging [1], which examines only a particular model called CIECAM02. In this project, I implemented CIECAM02, another existing model by Fairchild [2], and a model of my own derived from von Kries' theory [3]. I also extended the application of the models to regular (low-dynamic-range) images, and implemented a graphical user interface to help visualize the differences between output images.


Related Research

(1) Chromatic Adaptation

Chromatic adaptation is the major phenomenon explaining why our perception of color changes when the illumination of the surround changes. It is our visual system's capability to adjust to varying illumination so as to preserve the appearance of object colors. The theory proposed by von Kries [3] is the most rudimentary and widely accepted explanation of chromatic adaptation. In short, it states that if the surround is illuminated by a strongly chromatic illuminant, say, blue light, then our visual system lowers the gain of that particular color. As a result, the perceived image contains less of that color (less blue, or equivalently more yellow, in this case). To counter the effect of chromatic adaptation, we must shift the color of the image towards the color of the surround illuminant to achieve a "boost-up" effect.
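The gain-control idea above can be sketched as a diagonal (von Kries-style) transform. The snippet below is a minimal illustration, not the project's exact implementation: it applies the per-channel scaling directly in RGB for brevity, whereas a full implementation would first transform into cone (LMS) space, and the yellow-ish illuminant used in the example is an invented set of gains, not a real standard illuminant.

```python
import numpy as np

def von_kries_adapt(rgb, illum_src, illum_dst):
    """Diagonal (von Kries-style) adaptation: divide out the source
    illuminant's per-channel intensity and multiply in the target's."""
    gains = np.asarray(illum_dst, float) / np.asarray(illum_src, float)
    return np.asarray(rgb, float) * gains

# Example: to counter adaptation to a yellow-ish surround, a neutral
# grey (as seen under equal-energy light) is boosted toward yellow.
grey = np.array([100.0, 100.0, 100.0])
E = np.array([1.0, 1.0, 1.0])        # equal-energy illuminant
yellowish = np.array([1.2, 1.0, 0.6])  # illustrative gains only
adapted = von_kries_adapt(grey, E, yellowish)  # -> [120., 100., 60.]
```

The adjusted grey contains more red and less blue, i.e. it has been shifted toward the illuminant color, which is exactly the "boost-up" effect described above.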


(2) Color Appearance Models

Much research has been done on seeking models that can counter chromatic adaptation and preserve the color appearance of images. Color appearance models are mathematical models that predict how colors will appear to viewers under different viewing conditions, based on a parameterized description of the surround. The most widely used color appearance model is the CIECAM02 model mentioned above, which combines many other models developed prior to 1997. CIECAM02 strives to best fit the LUTCHI [4] visual experiment data on chromatic adaptation and color appearance.

After the Commission Internationale de l'Eclairage (CIE) standardized CIECAM02, many researchers attempted to refine the model. For instance, Mark D. Fairchild [2] modified the model to better account for humans' ability to discount the illuminant (i.e., color constancy). Boo-song Kim [5] modified the model so that it fits the LUTCHI data better.

CIECAM02 and its derivatives are mathematically complicated in nature. From an application point of view, it is unrealistic to implement such models in LCD displays. I am therefore looking for a simpler model that can be easily implemented in displays while still capturing the concepts of chromatic adaptation. Since von Kries' theory explains the effect of chromatic adaptation very well, I believe that a simple model derived directly from it should suffice for the purpose of color preservation. In this project, I therefore studied my own von Kries-based model as well.


Implementation

(1) Algorithms

In this project, I implemented the three color appearance models mentioned in the previous section. The algorithms for these models can be found in the respective PDF files.

- CIECAM02 [pdf]

- Fairchild [pdf]

- von Kries [pdf]

These models cover a wide range of illumination conditions. To keep the study simple and concrete, I made the following assumptions:

- For all the models, the initial/original illuminant of the surround is CIE standard illuminant E, the equal-energy illuminant. Illuminant E is a normalized reference illuminant with tristimulus values X = Y = Z = 100.0. Most white-balancing algorithms strive to recover the "ground truth" of an image by computing what the image would look like under illuminant E, so it is a reasonable reference point for comparing the color changes of images.

- For CIECAM02, all the viewing parameters take their "average" values by default. I made this assumption because the other two models work best under "average" conditions.
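For reference, the constants behind this "average" assumption can be written down as a small table. The F, c, and Nc values below are the standard CIECAM02 surround parameters; the dictionary layout itself is just an illustrative sketch.

```python
# CIECAM02 viewing-condition constants for the three standard surrounds.
#   F  - factor determining the degree of adaptation
#   c  - impact of the surround
#   Nc - chromatic induction factor
# "average" is the default assumed throughout this project.
CIECAM02_SURROUNDS = {
    "average": {"F": 1.0, "c": 0.69,  "Nc": 1.0},
    "dim":     {"F": 0.9, "c": 0.59,  "Nc": 0.9},
    "dark":    {"F": 0.8, "c": 0.525, "Nc": 0.8},
}
```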


(2) GUI

The main purpose of the GUI is to help me visualize, in real time, what each color appearance model does to an image, as well as the differences between the models. The GUI has the following features:

- Load: An image of size 150×150 pixels can be imported as the source image. If the image is larger than 150×150 pixels, it is automatically cropped.

- Image Viewing Area: The result of applying each color appearance model is shown in the respective area. The resulting images are updated in real time as the user changes the new illumination condition.

- New Illumination Condition Sliders: The user can change the color of the new illuminant by changing its RGB values.

- Additional Parameters: These parameters are used by CIECAM02. The user can change them and observe the different outcomes. The meanings of the parameters are outlined in the PDF file for CIECAM02.

- Color Patches for Original and New Illuminants: These patches show the colors of the original and new illuminants, giving users a better sense of the illuminant color.

Here is a screenshot of the GUI:




Result and Analysis

(1) Color patch under illuminant A

Illuminant A is the CIE standard illuminant for incandescent light. It has a fairly yellow color. The following figure shows the shift in color from illuminant E to illuminant A.



First, I examined how single-color images change under the shift in illumination from E to A. The color patches (from left to right) have RGB values [100,100,100], [50,50,240], [240,50,50], and [50,240,50].




The top left image shows the original state of the color, which is what the color should look like under the equal-energy illuminant. The top right, bottom left, and bottom right images are the adjusted images produced by the von Kries, Fairchild, and CIECAM02 models, respectively. For example, von Kries' model says that if an image appears to be a natural grey under the equal-energy illuminant, it should be adjusted to a "yellow-ish grey" if we want viewers to perceive a natural grey under illuminant A.


I noticed that different color appearance models adjust the image differently. For example, CIECAM02 tends to shift the color more than von Kries, and Fairchild produces the slightest color shift of all. The shift may also depend on the original color: all the models shift blue colors more significantly than green colors.


(2) Real image under illuminant A




Although the difference in output images is noticeable in the color patch examples, it is less noticeable in real images. This may be because real images contain complex color patterns, making it hard for viewers to detect the shift of any single color; we can only notice that the adjusted image looks yellow-ish overall. Also, physiologically, we are more sensitive to natural grey than to complex color patterns: we can notice the color shift in natural grey caused by even a small change in R, G, B values.


(3) Real image under illuminant D65

I also examined how the models adjust images when the new illumination condition is daylight. Illuminant D65 is the CIE standard illuminant for average daylight. It has a tint of green-blue. The following figure shows the adjusted images under illuminant D65.





Since the shift in illumination condition is not as significant as in the previous example, the adjusted images look fairly similar to the original image.


(4) Theoretical analysis

To examine the shifts in color more closely, I produced some vector fields on the CIE chromaticity diagram.




The left figure shows the shift of the sRGB gamut and 32 randomly sampled colors from illuminant E to illuminant A predicted by von Kries' model. The middle and right figures illustrate the shift of the sRGB gamut and 10 randomly sampled colors from illuminant E to illuminant A predicted by Fairchild's model and the CIECAM02 model, respectively. In all three models, the predicted color shifts agree with both the direction and the amount of the shift in illuminant (between the black dots). This agrees with our intuition: if the illumination of the surround causes our eyes to perceive less yellow, we must boost the yellow signals to counter the effect.
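Shift vectors like these can be produced by converting each sample's tristimulus values to xy chromaticity before and after adaptation. The sketch below uses the standard chromaticity formulas and the tristimulus values of illuminant A for the 2-degree observer (X=109.85, Y=100.0, Z=35.58); as before, it applies the diagonal scaling directly in XYZ for brevity, whereas the actual models adapt in other spaces.

```python
import numpy as np

def xy_chromaticity(XYZ):
    """CIE xy chromaticity: x = X/(X+Y+Z), y = Y/(X+Y+Z)."""
    XYZ = np.asarray(XYZ, float)
    s = XYZ.sum(axis=-1, keepdims=True)
    return XYZ[..., :2] / s

E_white = np.array([100.0, 100.0, 100.0])    # illuminant E
A_white = np.array([109.85, 100.0, 35.58])   # illuminant A (2-deg observer)

# Shift vector for one sample color: chromaticity before vs. after a
# diagonal adaptation from E to A.
color = np.array([50.0, 60.0, 80.0])         # arbitrary sample XYZ
adapted = color * (A_white / E_white)
shift = xy_chromaticity(adapted) - xy_chromaticity(color)
```

For this sample both components of the shift are positive, i.e. the chromaticity moves toward the yellow-ish illuminant A, matching the direction of the arrows in the figures.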


(5) Comparison of models

The following figure shows the color shift of the sRGB gamut and 10 randomly sampled colors predicted by all three models. von Kries is shown in red, Fairchild in magenta, and CIECAM02 in yellow; the illuminant is in black. The result agrees with the color patches discussed earlier: all three models make very close predictions in the red-green zone while disagreeing slightly in the blue zone.



Conclusions and Future Work

My project achieved its goal of producing both a theoretical and a practical comparison of three color appearance models. The analysis shows that the adjusted images are fairly similar across all three models. Due to the time limit, I was only able to implement the models and the GUI and examine the results in a rough manner. In the continuation of my Master's research, I will extend the project to address the following issues:

(1) Include quantitative analysis on the data

This involves plotting the predicted colors on a u'v' diagram instead of the xy chromaticity diagram and computing the distances between points. The L*u*v* space is the standard way to study differences in color.
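The planned distance computation would convert XYZ to CIE 1976 L*u*v* and take the Euclidean difference. A sketch of that computation, using the standard CIE formulas relative to a reference white (Xn, Yn, Zn):

```python
import numpy as np

def xyz_to_luv(XYZ, white):
    """Convert CIE XYZ to CIE 1976 L*u*v* relative to a reference white."""
    X, Y, Z = XYZ
    Xn, Yn, Zn = white

    def u_prime(X, Y, Z):
        return 4.0 * X / (X + 15.0 * Y + 3.0 * Z)

    def v_prime(X, Y, Z):
        return 9.0 * Y / (X + 15.0 * Y + 3.0 * Z)

    yr = Y / Yn
    if yr > (6.0 / 29.0) ** 3:
        L = 116.0 * yr ** (1.0 / 3.0) - 16.0
    else:
        L = (29.0 / 3.0) ** 3 * yr   # linear branch for very dark colors
    u = 13.0 * L * (u_prime(X, Y, Z) - u_prime(Xn, Yn, Zn))
    v = 13.0 * L * (v_prime(X, Y, Z) - v_prime(Xn, Yn, Zn))
    return np.array([L, u, v])

def delta_e_luv(XYZ1, XYZ2, white):
    """Euclidean color difference between two XYZ colors in L*u*v*."""
    return float(np.linalg.norm(xyz_to_luv(XYZ1, white) - xyz_to_luv(XYZ2, white)))
```

With the illuminant E white point (100, 100, 100), the white itself maps to L* = 100, u* = v* = 0, and identical colors give a difference of zero, as expected.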

(2) Show the validity of von Kries' model

Since the von Kries-based model is the one I proposed as another valid color appearance model, I will need to show its validity mathematically. The proof will be included in my Master's thesis and is outside the scope of this project.

(3) Compare the algorithmic complexity of the models

The von Kries model is significantly simpler than any existing color appearance model. In my thesis, I will include an analysis of the runtime of the different color appearance models, to show that von Kries is far more practical to implement in both hardware and software.
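A runtime comparison like the one planned could be set up with a harness such as the following. The model function here is a placeholder (only the toy diagonal multiply is shown), not any of the project's actual implementations; the image size matches the 150×150 sources used in the GUI.

```python
import timeit
import numpy as np

# Placeholder for a model: a von Kries-style adjustment is a single
# diagonal multiply per pixel, which is why it is so cheap.
def von_kries_model(img, gains=np.array([1.2, 1.0, 0.6])):
    return img * gains

img = np.random.rand(150, 150, 3)  # same size as the GUI's source images

for name, fn in [("von Kries", von_kries_model)]:
    t = timeit.timeit(lambda: fn(img), number=100)
    print(f"{name}: {t / 100 * 1e3:.3f} ms per image")
```

In the real comparison, the Fairchild and CIECAM02 implementations would be added to the list and timed on the same input.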

(4) Show high-dynamic-range examples

Due to the time limit, I was not able to take photos to produce HDR images. Since the algorithms work in tristimulus space, I believe it would be straightforward to take RAW images with XYZ information and work on them directly.

(5) Experimental evaluation

All the work done here is theoretical. Very few well-designed, large-scale visual experiments have been conducted since the LUTCHI data were collected, because of the complexity of such experiments: the subjects usually have to be well trained and sensitive to color changes.


Demo

A demo clip can be downloaded [here].


References

[1] Akyuz, O.A., and Reinhard, E. 2006. Color appearance in high-dynamic-range imaging. Journal of Electronic Imaging 15, 033001.

[2] Fairchild, M.D. 1998. Color Appearance Models. Addison-Wesley, Reading, MA.

[3] von Kries, J. 1970. Influence of adaptation on the effects produced by luminous stimuli. In Sources of Color Science, pages 109-119. The MIT Press, Cambridge, MA.

[4] Luo, M.R., Clarke, A.A., Rhodes, P.A., Schappo, A., Scrivener, S.A.R., and Tait, C.J. 1991. Quantifying colour appearance. Part I. LUTCHI colour appearance data. Color Research and Application 16(3):166-180.

[5] Kim, B., and Kim, I. 2006. Chromatic Adaptation Model Using the Relationship between the Cone Responses under Change in Illuminant. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 6:1717-1719.