High-Fidelity Colour Reproduction for High-Dynamic-Range - VCLAB

Loading...
High-Fidelity Colour Reproduction for High-Dynamic-Range Imaging Min Hyuk Kim

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy of the University College London.

Department of Computer Science University College London

2010

ii I, [Min Hyuk Kim], confirm that the work presented in this thesis is my own. Where information has been derived from other sources, I confirm that this has been indicated in the thesis.

Signed:

Copyright � c 2010 Min H. Kim All rights reserved.

Abstract

iii

Abstract The aim of this thesis is to develop a colour reproduction system for high-dynamic-range (HDR) imaging. Classical colour reproduction systems fail to reproduce HDR images because current characterisation methods and colour appearance models fail to cover the dynamic range of luminance present in HDR images. HDR tone-mapping algorithms have been developed to reproduce HDR images on low-dynamic-range media such as LCD displays. However, most of these models have only considered luminance compression from a photographic point of view and have not explicitly taken into account colour appearance. Motivated by the idea to bridge the gap between crossmedia colour reproduction and HDR imaging, this thesis investigates the fundamentals and the infrastructure of cross-media colour reproduction. It restructures cross-media colour reproduction with respect to HDR imaging, and develops a novel cross-media colour reproduction system for HDR imaging. First, our HDR characterisation method enables us to measure HDR radiance values to a high accuracy that rivals spectroradiometers. Second, our colour appearance model enables us to predict human colour perception under high luminance levels. We first built a high-luminance display in order to establish a controllable high-luminance viewing environment. We conducted a psychophysical experiment on this display device to measure perceptual colour attributes. A novel numerical model for colour appearance was derived from our experimental data, which covers the full working range of the human visual system. Our appearance model predicts colour and luminance attributes under high luminance levels. In particular, our model predicts perceived lightness and colourfulness to a significantly higher accuracy than other appearance models. Finally, a complete colour reproduction pipeline is proposed using our novel HDR characterisation and colour appearance models. Results indicate that our reproduction system outperforms other reproduction methods with statistical significance. Our colour reproduction system provides high-fidelity colour reproduction for HDR imaging, and successfully bridges the gap between cross-media colour reproduction and HDR imaging.

Acknowledgements

iv

Acknowledgements I sincerely appreciate my PhD supervisor, Dr. Jan Kautz. His encouragement, support, mentoring, and friendship were essential to completing this PhD thesis. I also am grateful to Dr. Celine Loscos for offering me a doctorate opportunity at University College London (UCL). Further, I would like to thank Dr. Simon Julier for his helpful advice as an assessor. I also am grateful to Prof. Zhaopeng Li for kindly allowing me to use her Vision Laboratory. I am very grateful to Prof. Stuart Robson and Dr. Erik Reinhard for kindly agreeing to serve as internal and external examiners. Without the help of the following friends in the computer graphics group at UCL, it would have been impossible to complete this thesis. I would like to thank James Tompkin and Martin Parsley for their untiring help and support in proofreading. I also would like to thank my friends (in alphabetical order): Jeren Chen, Andrew Cox, and Harsha Sri-Narayana. Thanks also to my colleagues in the computer science department for their generous help (in alphabetical order): Jania Aghajanian, Frederic Besse, Dr. Gabriel Brostow, Yun Fu, Oscar Kozlowski, Dr. Peng Li, Soo Ling Lim, Umar Mohammed, Dr. Wole Oyekoya, Dr. Xueni Pan, Dr. Simon Prince, Aitor Rovira, Prof. Anthony Steed, William Steptoe, Fotios Tzellos, Sara Vicente, Dr. Jonathan Warrell, Dr. Tim Weyrich, and Insu Yu. In addition, I would like to thank these people for mentoring my research through their excellent publications (in alphabetical order): Dr. Roy Berns, Dr. Paul Debevec. Dr. Mark Fairchild, Dr. Robert Hunt, Dr. Youngshin Kwak, Dr. Ronnier Luo, Prof. Lindsay MacDonald, Dr. Jan Morovic, Dr. Jack Tumblin, and Dr. Günter Wyszecki. Finally, I would like to thank my parents and parents-in-law: Hun Kim, Mi Lim Kim, Hae Duck Jang, Mae Ja Park, my sister: Hey Lee Kim, and my own family: Jung Hyun Kim, Sue Hyun Kim, and Jin Hee Jang for understanding and untiring supporting during this doctorate.

Contents

v

Contents

1 Introduction

1

1.1 Motivation and Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1

1.2 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

3

1.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

4

1.4 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

6

2 Background and Previous Work

7

2.1 Colour Reproduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

7

2.2 Characterisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

8

2.2.1 Measuring Optical Radiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

8

2.2.2 Colorimetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

10

2.2.3 Camera Optics for Capturing Radiance . . . . . . . . . . . . . . . . . . . . . . . .

13

2.2.4 Sensing Radiance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

14

2.2.5 Device Characterisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

17

2.2.6 White Balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

21

2.3 Colour Appearance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

22

2.3.1 Human Colour Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

22

2.3.2 Quantifying Perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

25

2.3.3 Colour Appearance Phenomena . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

26

2.3.4 Colour Appearance Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

30

2.3.5 Colour Difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

46

2.3.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

47

2.4 Gamut Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

48

2.5 High-Dynamic-Range Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

51

2.5.1 High-Dynamic-Range Image Acquisition . . . . . . . . . . . . . . . . . . . . . . .

52

2.5.2 High-Dynamic-Range Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

56

2.5.3 Tone Reproduction in High-Dynamic-Range Imaging . . . . . . . . . . . . . . . .

57

2.5.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

72

2.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

73

Contents 3 Characterisation for High-Dynamic-Range Imaging

vi 75

3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

75

3.2 Acquisition of High-Dynamic-Range Radiance Maps . . . . . . . . . . . . . . . . . . . . .

76

3.2.1 Response of Digital Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

76

3.2.2 Camera Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

77

3.2.3 Low-Dynamic-Range Source Images . . . . . . . . . . . . . . . . . . . . . . . . . .

78

3.2.4 High-Dynamic-Range Image Acquisition . . . . . . . . . . . . . . . . . . . . . . .

79

3.3 High-Dynamic-Range Characterisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

79

3.3.1 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

81

3.3.2 Characterisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

83

3.3.3 Characterisation Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

85

3.4 White Balancing of High-Dynamic-Range Radiance Maps . . . . . . . . . . . . . . . . .

85

3.4.1 Estimating the Scene Illumination . . . . . . . . . . . . . . . . . . . . . . . . . . .

86

3.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

88

3.5.1 Colour Accuracy of High-Dynamic-Range Characterisation . . . . . . . . . . . .

88

3.5.2 Illuminant Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

95

3.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

96

3.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

97

4 High-Luminance Colour Experiments 4.1 High-Luminance Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

99 99

4.1.1 Design and Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 4.1.2 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 4.2 Stimuli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 4.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 4.3.1 Experimental Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 4.3.2 Colour Appearance Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 4.3.3 Inter-phase Colourfulness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 4.3.4 Observer Repeatability and Variation . . . . . . . . . . . . . . . . . . . . . . . . . 109 4.3.5 Differences to Previous Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . 110 4.4 Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 4.5 Colour Appearance Phenomena . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 4.5.1 Luminance Effect on Lightness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 4.5.2 Luminance Effect on Colourfulness . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 4.5.3 Luminance Effect on Hue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 4.5.4 Background Effect on Lightness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 4.5.5 Background Effect on Colourfulness . . . . . . . . . . . . . . . . . . . . . . . . . . 117 4.5.6 Background Effect on Hue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 4.5.7 Colour Temperature Effect on Colour Appearance . . . . . . . . . . . . . . . . . . 117

Contents

vii

4.5.8 Surround Effect on Colour Appearance . . . . . . . . . . . . . . . . . . . . . . . . 121 4.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 4.6.1 Perceived Lightness Appearance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 4.6.2 Perceived Colourfulness Appearance . . . . . . . . . . . . . . . . . . . . . . . . . . 123 4.6.3 Perceived Hue Appearance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 4.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 5 A Colour Appearance Model for Extended Luminance Levels

125

5.1 Data Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 5.2 Forward Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 5.2.1 Chromatic Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 5.2.2 Cone Responses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 5.2.3 Achromatic Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 5.2.4 Chromatic Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 5.3 Inverse Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 5.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 5.4.1 Estimations under High Luminances . . . . . . . . . . . . . . . . . . . . . . . . . . 139 5.4.2 Estimations on Different Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 5.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152 5.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 6 Colour Reproduction in High-Dynamic-Range Imaging

155

6.1 Image Reproduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 6.1.1 Reproduction Pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 6.1.2 Colour Connection Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157 6.1.3 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 6.1.4 Qualitative Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 6.2 Experimental Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 6.2.1 Stimuli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 6.2.2 Experimental Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 6.2.3 Quantitative Results and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 6.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 6.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 7 Discussion and Future Work

181

7.1 High-Dynamic-Range Characterisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 7.2 High-Luminance Colour Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182 7.3 Colour Appearance Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 7.4 High-Dynamic-Range Colour Reproduction . . . . . . . . . . . . . . . . . . . . . . . . . . 184

Contents

viii

8 Conclusion

187

A Supplementals

189

A.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 A.2 Relative Camera Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 A.3 Physical Measurements in High-Dynamic-Range Characterisation . . . . . . . . . . . . 190 A.4 Physical Measurements of the High-Luminance Display . . . . . . . . . . . . . . . . . . . 222 A.5 Instruction for Colour Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 A.6 Colour Appearance Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 A.7 Similarity Experimental Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245 Bibliography

247

Index

259

List of Figures

ix

1.1 Comparison of dynamic ranges in low-/high-dynamic-range imaging . . . . . . . . . .

2

2.1 Five-stage colour reproduction system . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

7

2.2 Schematic diagram of illumination laws . . . . . . . . . . . . . . . . . . . . . . . . . . . .

9

2.3 CIE 1931 colour matching functions vs. cone spectral sensitivity curves . . . . . . . . .

10

2.4 CIE-recommended illuminating and viewing geometries . . . . . . . . . . . . . . . . . .

12

2.5 Quantum efficiency of a solid-state-based sensor . . . . . . . . . . . . . . . . . . . . . . .

15

2.6 Average responsivity of solid-state imaging . . . . . . . . . . . . . . . . . . . . . . . . . .

17

2.7 Measured opto-electronic transfer functions of a digital camera and an LCD display .

18

2.8 Spectral responsivity of a digital camera and an LCD display . . . . . . . . . . . . . . .

20

2.9 Schematic illustration of human colour vision based on the zone model . . . . . . . .

23

2.10 Cone response (V ) vs. intensity (log I) curves . . . . . . . . . . . . . . . . . . . . . . . .

24

2.11 Specification of components of the viewing field . . . . . . . . . . . . . . . . . . . . . . .

27

2.12 Four-stage structure of modern colour appearance models . . . . . . . . . . . . . . . . .

30

2.13 Gamut boundary comparison between a digital camera and an LCD display . . . . . .

49

2.14 Gamut boundary comparison between the real-world gamut and sRGB colour space .

50

2.15 Mosaic neutral-density filter for high-dynamic-range imaging . . . . . . . . . . . . . . .

55

2.16 Design of a high-dynamic-range display . . . . . . . . . . . . . . . . . . . . . . . . . . . .

56

2.17 Schematic diagram for a tone reproduction operator . . . . . . . . . . . . . . . . . . . .

58

2.18 Range of the dynamic scale factor k2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

63

2.19 Comparison between frequency and gradient decomposition . . . . . . . . . . . . . . .

64

3.1 Characteristic curves of ordinary and RAW responses of a digital camera . . . . . . . .

77

3.2 Correlated colour temperature estimates from a digital camera . . . . . . . . . . . . . .

78

3.3 Channel separation from RAW response to RGB channels . . . . . . . . . . . . . . . . .

78

3.4 Characteristic responses curves of a digital camera . . . . . . . . . . . . . . . . . . . . .

80

3.5 Comparison of measured gamut boundaries . . . . . . . . . . . . . . . . . . . . . . . . . .

82

3.6 Setup for training/testing high-dynamic-range characterisation models . . . . . . . . .

82

3.7 Setup of high-dynamic-range characterisation . . . . . . . . . . . . . . . . . . . . . . . .

83

3.8 Traditional characterisation setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

84

3.9 Measuring geometry setup for high-dynamic-range characterisation . . . . . . . . . . .

84

List of Figures

List of Figures

x

3.10 Examples of the training images for our white balancing . . . . . . . . . . . . . . . . . .

87

3.11 Overall results of accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

90

3.12 Comparison of colour difference (test set, patches sorted by chromaticity) . . . . . . .

91

3.13 Test scene consisting of GretagMacbeth charts under halogen light . . . . . . . . . . . .

91

3.14 Each step of the high-dynamic-range characterisation method . . . . . . . . . . . . . . .

92

3.15 Before and after comparison of high-dynamic-range characterisation . . . . . . . . . .

93

3.16 Comparison of high-dynamic-range characterisation models . . . . . . . . . . . . . . . .

94

3.17 Result of temperature estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

95

4.1 A custom-built high-luminance display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 4.2 Design of the high-luminance display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 4.3 Compartments of the high-luminance display . . . . . . . . . . . . . . . . . . . . . . . . . 101 4.4 Colour gamut and spectral power distribution of the high-luminance display . . . . . 102 4.5 Viewing pattern observed by participants . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 4.6 Chromaticity coordinates of colour samples . . . . . . . . . . . . . . . . . . . . . . . . . . 104 4.7 Measuring geometry setup for colour experiments . . . . . . . . . . . . . . . . . . . . . . 105 4.8 Viewing pattern observed by participants . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 4.9 Perceptual colour primaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 4.10 Perceived reference colourfulness for different luminances and backgrounds . . . . . . 109 4.11 Qualitative comparison of observer repeatability . . . . . . . . . . . . . . . . . . . . . . . 110 4.12 Qualitative comparison between LUTCHI and our appearance data . . . . . . . . . . . 112 4.13 Lightness perception for different luminance levels . . . . . . . . . . . . . . . . . . . . . 114 4.14 Colourfulness perception for different luminance levels . . . . . . . . . . . . . . . . . . . 115 4.15 Hue perception for different luminance levels . . . . . . . . . . . . . . . . . . . . . . . . . 116 4.16 Lightness perception for different background levels . . . . . . . . . . . . . . . . . . . . 118 4.17 Colourfulness perception for different background levels . . . . . . . . . . . . . . . . . . 119 4.18 Hue perception for different background levels . . . . . . . . . . . . . . . . . . . . . . . . 120 4.19 Colour perception for different colour temperatures . . . . . . . . . . . . . . . . . . . . . 121 4.20 Colour perception for different surrounds . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 5.1 Testing chromatic adaptation transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 5.2 Testing degree of adaptation parameter D in CIECAT02 . . . . . . . . . . . . . . . . . . . 129 5.3 Testing cone response function of a dynamic cone response function in CIECAM02 . . 130 5.4 Comparison of cone response in a power function and our hyperbolic function . . . . 131 5.5 Comparison of results of lightness predictions . . . . . . . . . . . . . . . . . . . . . . . . 133 5.6 Media dependency in lightness predictions . . . . . . . . . . . . . . . . . . . . . . . . . . 133 5.7 Relationship between brightness and lightness with respect to luminance . . . . . . . 134 5.8 Relationship between colourfulness and chroma with respect to luminance . . . . . . 135 5.9 Comparison of results of colourfulness predictions . . . . . . . . . . . . . . . . . . . . . . 136

List of Figures

xi

5.10 Comparison of results of hue predictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 5.11 Results of estimations in luminance-varying phases . . . . . . . . . . . . . . . . . . . . . 141 5.12 Results of estimations in background-varying phases . . . . . . . . . . . . . . . . . . . . 142 5.13 Overall results of estimations in variation of luminance and background . . . . . . . . 143 5.14 Results of estimations in colour temperature-varying phases . . . . . . . . . . . . . . . . 144 5.15 Results of estimations in surround-varying phases . . . . . . . . . . . . . . . . . . . . . . 145 5.16 Overall results of estimations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 5.17 Results of estimations in a validation set . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 5.18 Overall results of estimations with validation phases . . . . . . . . . . . . . . . . . . . . 147 5.19 Results of predicting colour appearance under high luminance . . . . . . . . . . . . . . 149 5.20 Quantitative comparison of the prediction of colours in LUTCHI data set . . . . . . . . 150 5.21 Quantitative comparison of the average prediction of colours in LUTCHI data set . . . 151 6.1 High-fidelity colour reproduction pipeline for high-dynamic-range imaging . . . . . . 156 6.2 Appearance matching with respect to the background effect . . . . . . . . . . . . . . . . 160 6.3 Appearance matching with respect to media dependency . . . . . . . . . . . . . . . . . . 160 6.4 Qualitative comparison of perceptual predictions of colour appearance models . . . . 161 6.5 Qualitative comparison of visual predictions (1/5) . . . . . . . . . . . . . . . . . . . . . . 163 6.6 Qualitative comparison of visual predictions (2/5) . . . . . . . . . . . . . . . . . . . . . . 164 6.7 Qualitative comparison of visual predictions (3/5) . . . . . . . . . . . . . . . . . . . . . . 165 6.8 Qualitative comparison of visual predictions (4/5) . . . . . . . . . . . . . . . . . . . . . . 166 6.9 Qualitative comparison of visual predictions (5/5) . . . . . . . . . . . . . . . . . . . . . . 167 6.10 Schematic diagram of psychophysical evaluation experiments . . . . . . . . . . . . . . . 168 6.11 Screen capture of a reproduction stimuli. . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 6.12 Comparison of perceptual predictions with a real scene (scene one) (1/2) . . . . . . . 170 6.13 Comparison of perceptual predictions with a real scene (scene one) (2/2) . . . . . . . 171 6.14 Comparison of perceptual predictions with a real scene (scene two) (1/2) . . . . . . . 172 6.15 Comparison of perceptual predictions with a real scene (scene two) (2/2) . . . . . . . 173 6.16 An example of a linear least-squares fit from LG to z-score . . . . . . . . . . . . . . . . . 174 6.17 Overall quantitative comparison of visual predictions and significance test . . . . . . . 175 6.18 Quantitative comparison of perceptual predictions with a real scene (scene one) . . . 176 6.19 Quantitative comparison of perceptual predictions with a real scene (scene two) . . . 177

List of Tables

xii

2.1 Transform from sRGB into CIEXYZ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

12

2.2 Transform from sRGB into D50-adapted CIEXYZ . . . . . . . . . . . . . . . . . . . . . . .

12

2.3 Hue angle conversion to hue composition in the RLAB model . . . . . . . . . . . . . . .

33

2.4 Surround parameters in the Hunt94 model . . . . . . . . . . . . . . . . . . . . . . . . . .

34

2.5 Hue eccentricity parameters in the Hunt94 model . . . . . . . . . . . . . . . . . . . . . .

37

2.6 Surround parameters in the LLAB model . . . . . . . . . . . . . . . . . . . . . . . . . . . .

38

2.7 Hue angle conversion to hue composition in the LLAB model . . . . . . . . . . . . . . .

39

2.8 Surround parameters in the CIECAM97s model . . . . . . . . . . . . . . . . . . . . . . . .

40

2.9 Surround parameters in the CIECAM02 model . . . . . . . . . . . . . . . . . . . . . . . .

43

3.1 Transformation matrices from high-dynamic-range camera signals into CIEXYZ . . . .

85

3.2 Colour accuracy error of high-dynamic-range characterisation . . . . . . . . . . . . . .

89

List of Tables

4.1 Summary of the 19 phases of our experiment . . . . . . . . . . . . . . . . . . . . . . . . . 106 4.2 Observers repeatability and overall variation . . . . . . . . . . . . . . . . . . . . . . . . . 110 5.1 Hue eccentricity parameters for unique hues . . . . . . . . . . . . . . . . . . . . . . . . . 137 6.1 Summary of our evaluation experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 A.1 Relative camera characterisation for Canon 350D . . . . . . . . . . . . . . . . . . . . . . 189 A.2 Radiometric and camera measurements (Canon 350D) of training colours (1/15) . . 190 A.3 Radiometric and camera measurements (Canon 350D) of training colours (2/15) . . 191 A.4 Radiometric and camera measurements (Canon 350D) of training colours (3/15) . . 192 A.5 Radiometric and camera measurements (Canon 350D) of training colours (4/15) . . 193 A.6 Radiometric and camera measurements (Canon 350D) of training colours (5/15) . . 194 A.7 Radiometric and camera measurements (Canon 350D) of training colours (6/15) . . 195 A.8 Radiometric and camera measurements (Canon 350D) of training colours (7/15) . . 196 A.9 Radiometric and camera measurements (Canon 350D) of training colours (8/15) . . 197 A.10 Radiometric and camera measurements (Canon 350D) of training colours (9/15) . . 198 A.11 Radiometric and camera measurements (Canon 350D) of training colours (10/15) . 199 A.12 Radiometric and camera measurements (Canon 350D) of training colours (11/15) . 200 A.13 Radiometric and camera measurements (Canon 350D) of training colours (12/15) . 201

List of Tables

xiii

A.14 Radiometric and camera measurements (Canon 350D) of training colours (13/15) . 202 A.15 Radiometric and camera measurements (Canon 350D) of training colours (14/15) . 203 A.16 Radiometric and camera measurements (Canon 350D) of training colours (15/15) . 204 A.17 Camera measurements (Nikon D100 and D40) of training colour samples (1/15) . . 205 A.18 Camera measurements (Nikon D100 and D40) of training colour samples (2/15) . . 206 A.19 Camera measurements (Nikon D100 and D40) of training colour samples (3/15) . . 207 A.20 Camera measurements (Nikon D100 and D40) of training colour samples (4/15) . . 208 A.21 Camera measurements (Nikon D100 and D40) of training colour samples (5/15) . . 209 A.22 Camera measurements (Nikon D100 and D40) of training colour samples (6/15) . . 210 A.23 Camera measurements (Nikon D100 and D40) of training colour samples (7/15) . . 211 A.24 Camera measurements (Nikon D100 and D40) of training colour samples (8/15) . . 212 A.25 Camera measurements (Nikon D100 and D40) of training colour samples (9/15) . . 213 A.26 Camera measurements (Nikon D100 and D40) of training colour samples (10/15) . . 214 A.27 Camera measurements (Nikon D100 and D40) of training colour samples (11/15) . . 215 A.28 Camera measurements (Nikon D100 and D40) of training colour samples (12/15) . . 216 A.29 Camera measurements (Nikon D100 and D40) of training colour samples (13/15) . . 217 A.30 Camera measurements (Nikon D100 and D40) of training colour samples (14/15) . . 218 A.31 Camera measurements (Nikon D100 and D40) of training colour samples (15/15) . . 219 A.32 Radiometric and camera measurements (Canon 350D) of test colour samples . . . . . 220 A.33 Camera measurements (Nikon D100 and D40) of test colour samples . . . . . . . . . . 221 A.34 Device signals and corresponding radiometric measurements of our display (1/2) . . 222 A.35 Device signals and corresponding radiometric measurements of our display (2/2) . . 223 A.36 Summary of viewing conditions for all 19 phases. . . . . . . . . . . . . . . . . . . . . . . 225 A.37 Physical measurements, perceptual estimates, and our model’s predictions (Phase 1). 226 A.38 Physical measurements, perceptual estimates, and our model’s predictions (Phase 2). 227 A.39 Physical measurements, perceptual estimates, and our model’s predictions (Phase 3). 228 A.40 Physical measurements, perceptual estimates, and our model’s predictions (Phase 4). 229 A.41 Physical measurements, perceptual estimates, and our model’s predictions (Phase 5). 230 A.42 Physical measurements, perceptual estimates, and our model’s predictions (Phase 6). 231 A.43 Physical measurements, perceptual estimates, and our model’s predictions (Phase 7). 232 A.44 Physical measurements, perceptual estimates, and our model’s predictions (Phase 8). 233 A.45 Physical measurements, perceptual estimates, and our model’s predictions (Phase 9). 234 A.46 Physical measurements, perceptual estimates, and our model’s predictions (Phase 10). 235 A.47 Physical measurements, perceptual estimates, and our model’s predictions (Phase 11). 236 A.48 Physical measurements, perceptual estimates, and our model’s predictions (Phase 12). 237 A.49 Physical measurements, perceptual estimates, and our model’s predictions (Phase 13). 238 A.50 Physical measurements, perceptual estimates, and our model’s predictions (Phase 14). 239 A.51 Physical measurements, perceptual estimates, and our model’s predictions (Phase 15). 240

List of Tables

xiv

A.52 Physical measurements, perceptual estimates, and our model’s predictions (Phase 16). 241 A.53 Physical measurements, perceptual estimates, and our model’s predictions (Phase 17). 242 A.54 Physical measurements, perceptual estimates, and our model’s predictions (Phase 18). 243 A.55 Physical measurements, perceptual estimates, and our model’s predictions (Phase 19). 244 A.56 Physical measurements of perceived similarity of a real scene (scene one). . . . . . . . 245 A.57 Physical measurements of perceived similarity of a real scene (scene two). . . . . . . . 246

1

Chapter 1

Introduction This chapter provides a brief introduction to motivate the thesis and describes its principal contributions. It summarises the main structure of this document with a short overview of the methodology and results.

1.1

Motivation and Objective

We live in a world of image-driven media. On a computer, a television, or in a newspaper, we look at reproduced images every day. We are communicating and archiving visual information of the real world through image reproduction. Faithfulness is the most important factor in this visual communication. If the original and reproduction were different, our visual communication would be deteriorated, introducing miscommunication. In order to achieve high fidelity in reproducing an image, the image data captured by a camera should match the original scene, and the captured image should be displayed on a monitor or in a photograph as faithfully as recorded in the image data. The image in visual communication comprises various information, e.g., colour, texture, and visual story. Among them, colours form a fundamental base of visual communication. It is important to achieve high fidelity in reproducing colours for visual communication. This topic has been broadly researched as the study of cross-media colour reproduction [Morovic, 2008]. In the past decade, imaging technology has leaped into a new era by significantly extending the dynamic range in capturing real-world luminance. The working range of common imaging devices is limited by the capacity of the hardware. For instance, a common digital camera captures luminances by using a solid-state sensor, which yields 12-bit depth of signals as integers (e.g., Nikon D100). If a scene that we need to capture with the camera contains a wider range of luminances, such as ten orders of magnitude, we would only be able to capture partial luminance information due to the bleaching and saturation of sensor signals [see Figure 1.1(a)]. This problem was first addressed by Mann [1993]. To overcome the saturation problem in sensing real-world luminance, Mann introduced an innovative capture technology called high-dynamic-range (HDR) imaging. Instead of taking only one picture, Mann captured the scene (that may have high-dynamic-range luminances) as multiple images, scanning the required dynamic range with various exposure settings with a lowdynamic-range (LDR) camera. The multiple exposures were then concatenated into an HDR image.

1.1. Motivation and Objective

2

As a result, HDR imaging can cover most of the dynamic range of real-world luminance, solving the sensor saturation problem of the camera [see Figure 1.1(b)]. HDR imaging was a sensational innovation in capturing the real world and has been broadly used in the graphics and electronic engineering fields. However, even though HDR imaging solves the sensing problem when capturing, it introduces another problem in reproducing the HDR image data. As shown in Figure 1.1(b), the dynamic range of the captured HDR image exceeds that of the displays significantly. Simple scaling methods are not enough to compress the range of the HDR data. Consequently, most of the interesting information in the HDR image is lost by discretisation of the display signal resolution. Hence, Tumblin and Rushmeier [1993] addressed this reproduction problem. They proposed a non-linear mapping to reproduce the HDR image with a similar appearance to that observed by the human visual system, called a tone reproduction operator or tone-mapping algorithm. In fact, HDR imaging [Mann, 1993; Debevec and Malik, 1997; Mitsunaga and Nayar, 1999] and tone reproduction operators [Tumblin and Rushmeier, 1993; Fattal et al., 2002; Durand and Dorsey, 2002; Reinhard et al., 2002] can be understood as advanced colour reproduction methods. However, the state of the art in HDR imaging has focused on the extendibility of the dynamic range from a tone-reproduction point of view and has not yet approached classical cross-media colour reproduction. For example, the state of the art in HDR imaging does not have infrastructure such as a modulated colour reproduction pipeline. As shown in Figure 1.1, the data flows in LDR and

High-dynamic-range imaging

Low-dynamic-range imaging

(a)

35

~33 bits

Dynamic range of luminance [unit: bit]

Dynamic range of luminance [unit: bit]

35 30

Caturing stage

25 20 15

12-14 bits

10

6-8 bits

5 0

Reproduction stage

Real world

LDR camera

LCD display

(b)

~33 bits

~33 bits

30 25

Caturing stage Reproduction stage

20 15 10

6-8 bits

5 0

Real world

HDR camera

LCD display

Figure 1.1: These two plots compare dynamic-range changes in low-/high-dynamic-range image reproduction. Imagine that we capture a real-world scene on a bright sunny day. In both plots, the real-world scene is represented as the highest grey-scales bar on the left-hand side. We assume that the luminance ranges ten orders of magnitude. We recalculate the intensity as a bit depth to compare with digital signal depth (33 ⇡ log2 1010 cd/m2 ). The middle bars in both plots represent dynamic ranges of camera

data. The middle bar in Plot (a) shows ⇠12 bits signal depth. This means the sensor in LDR imaging is able to capture only a partial range of the real-world luminance. The middle bar in Plot (b) shows

the dynamic range of HDR image data, which is almost identical to that of the real world. Finally, the bars on the right-hand side show the dynamic range of a typical display (about 8 bits of signal depth). While the dynamic range of the display shows a minor difference to the LDR camera, the display range shows a significant difference to that of the camera data for HDR imaging.

1.2. Scope

3

HDR imaging are significantly different; hence, current cross-media colour reproduction technology is not compatible. Historically, there have been efforts to bridge the gap between classical reproduction technology and HDR imaging. Göesele et al. [2001] utilised a colour management profile to build an HDR image. Johnson and Fairchild [2003], Akyüz and Reinhard [2006], and Kuang et al. [2007] attempted to combine a tone-mapping algorithm with a colour appearance model. However, without radical restructuring of the colour reproduction system, such hybrid solutions have struggled with performance. With motivation to bridge the gap between cross-media colour reproduction and HDR imaging, this thesis investigates fundamentals and infrastructure of cross-media colour reproduction. It restructures cross-media colour reproduction with respect to HDR imaging, aiming to develop a novel cross-media colour reproduction system for HDR imaging.

1.2

Scope

Classical cross-media colour reproduction has been understood as a set of reproduction chains that have three elements: device characterisation, colour appearance modelling, and gamut mapping [MacDonald, 1993].

Device characterisation describes a set of transforms to convert in-

put/output device signals to physically-meaningful device-independent signals, e.g., CIEXYZ coordinates. Colour appearance modelling interprets these physically-meaningful device-independent signals to perceptually-meaningful coordinates by taking the viewing environmental conditions into account. Finally, gamut mapping is a visual enhancement procedure to minimise the perceived gamut differences between the target and source media, aiming for plausible reproductions. In this thesis, these fundamentals were investigated in the context of HDR imaging, resulting in the development of a high-fidelity colour reproduction system for HDR imaging. First, the capturing stage in HDR imaging was researched with respect to device characterisation [see Figure 1.1(b)]. We suggest a novel device characterisation for HDR imaging. HDR characterisation converts the colour specifications of device-dependent HDR images into highly accurate and physically-meaningful radiance values in the form of absolute CIEXYZ. This thesis focuses on generating physically accurate HDR radiance maps of static scenes, whereas constructing HDR images of moving objects or transforming LDR images to HDR images is not handled in this thesis. Acquiring physically-meaningful radiance maps is not sufficient for HDR colour communication as the given physical colours under high luminance levels are perceived differently depending on their viewing conditions (see Chapter 4 on more details of our experimental findings). Therefore, perceptual attributes, e.g., lightness, colourfulness, and hue, of the given physical colour stimuli under high luminances were measured experimentally and modelled as a novel colour appearance model. Our colour appearance model links the description of physically-meaningful HDR radiance maps to perceptually-uniform appearance attributes under extended luminance levels. In theory, these two elements, HDR device characterisation and colour appearance modelling for high luminances, are sufficient for colour image reproduction unless the size of the colour gamuts of the input/output media are significantly different [Morovic, 2008]. According to our measurements (see Section 2.4), the gamut size of the input device is smaller. Especially the input gamut is smaller

1.3. Contributions

4

regarding highly saturated colours. Aiming to achieve the highest fidelity of perceived colour reproduction, we directly mapped perceived colour attributes (input gamut) into perceived output colour attributes (output gamut) with a direct 1:1 gamut mapping, similar to relative colorimetric intent (see Section 2.4 for more details). Plausible aspects in user preference (e.g., gamut mapping study) are not handled in this thesis. In summary, this thesis focuses on accuracy in both physical acquisition (device characterisation) and perceptual prediction (colour appearance modelling) in HDR colour reproduction. Finally, this thesis provides a complete colour reproduction system for HDR imaging as an application at the end. Possible applications for this system may be as a high-fidelity reproduction pipeline in an HDR broadcasting system (from HDR input to home displays) or as a measuring device for physical radiance and its corresponding perceptual response.

1.3

Contributions

In the context of this thesis, the following contributions have been made. • Device characterisation for HDR camera systems. A novel characterisation method is intro-

duced in Chapter 3. A novel colour reference target was built, specifically designed for HDR imaging. The reference target has a larger gamut and higher dynamic range than common camera calibration targets. It enables highly accurate calibration of an HDR camera system. The proposed method yields physically-meaningful HDR radiance maps to a high accuracy from digital cameras. See Chapter 3 for more details on HDR characterisation.

• Colour constancy algorithm. A novel colour constancy algorithm is proposed to reproduce colour constant hues on output media. This technique produces the estimated white point of the scene illumination that is used for white balancing of the calibrated HDR radiance map and can be used to estimate the white point as input to our CAM. See Chapter 3 for more details on white balancing. • Colour appearance data under high luminance levels. A novel high-luminance display device was built to yield a controllable high-luminance viewing environment, where a series of

psychophysical experiments were conducted to produce colour appearance data under high luminance levels (up to 16 860 cd/m2 ). This data set provides novel measurements of human colour perception in the full working range of the human visual system (five orders of magnitude). See Chapter 4 for more details on the experiments and analysis of the data set. The appearance data set can be found in Appendix A. • Colour appearance model for high luminance levels. A novel colour appearance model

was developed from our experimental data set (see Chapter 4 for the experiments), which

enables us to model the human visual system under high luminance levels. The model covers a larger range of luminance than existing colour appearance models, and it is directly applicable to HDR imaging. Owing to the proposed colour appearance model, no extra tone-mapping

1.3. Contributions

5

algorithm is required to complete colour reproduction in HDR imaging. Chapter 5 describes the development of our colour appearance model. • Cross-media colour reproduction system for HDR imaging. A complete colour reproduc-

tion pipeline is introduced in Chapter 6. This system is built using the HDR characterisation (in Chapter 3) and our colour appearance model (in Chapter 5). It enables reproduction of human observations of a real-world scene onto an output display device. Chapter 6 describes the organisation of the novel elements for colour reproduction in HDR imaging. Results indicates that the proposed colour reproduction system produces high fidelity on output media.

Most of these contributions have been presented in the following publications: 1. Min H. Kim, Tim Weyrich, and Jan Kautz. 2009. Modeling Human Color Perception under Extended Luminance Levels. ACM Transactions on Graphics (Proc. SIGGRAPH 2009), 28(3):27:1-9. 2. Min H. Kim and Jan Kautz. 2008. Characterization for High Dynamic Range Imaging. Computer Graphics Forum (Proc. EUROGRAPHICS 2008), 27(2):691-697. 3. Min H. Kim and Jan Kautz. 2009. Consistent Scene Illumination using a Chromatic Flash. In Proc. Eurographics Workshop on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe 2009), pages 83-89, British Columbia. Eurographics Association. 4. Min H. Kim and Jan Kautz. 2008. Consistent Tone Reproduction. In Proc. IASTED Conference on Computer Graphics and Imaging (CGIM 2008), pages 152-159, Innsbruck. IASTED/ACTA Press. 5. Min H. Kim and Lindsay W. MacDonald. 2006. Rendering High Dynamic Range Images. In Proc. EVA 2006 London Conference, EVA Conferences International, pages 22.1–11, Middlesex. EVA Conference International (ECI). Other publications during this doctorate: 6. Tobias Ritschel, Thorsten Grosch, Min H. Kim, Hans-Peter Seidel, Carsten Dachsbacher, and Jan Kautz. 2008. Imperfect Shadow Maps for Efficient Computation of Indirect Illumination. ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2008), 27(5):129:1-8. 7. Insu Yu, Andrew Cox, Min H. Kim, Tobias Ritschel, Thorsten Grosch, Carsten Dachsbacher, and Jan Kautz. 2009. Perceptual Influence of Approximate Visibility in Indirect Illumination. ACM Transactions on Applied Perception (presented at Symposium on Applied Perception in Graphics and Visualization, APGV 2009), 6(4):24:1-14.

1.4. Thesis Outline

1.4

6

Thesis Outline

Chapter 2 presents the fundamentals of colour reproduction, device characterisation, colour appearance modelling, and HDR imaging in general. It also provides an overview of the state of the art in colour appearance modelling and HDR imaging. In Chapter 3, we present a novel reference target designed for HDR camera systems and a novel technique to build physically-meaningful HDR radiance maps with significant accuracy, called HDR characterisation. We also introduce an efficient and accurate method to estimate the scene illumination for white balancing. Chapter 4 describes the high-luminance colour experiments, conducted with a high-luminance display device that was specifically designed and build for producing high-luminance colour stimuli. A novel colour appearance model for high luminance levels is presented in Chapter 5. It is derived from the acquired experimental data in Chapter 4. Chapter 6 describes an HDR colour reproduction pipeline using our novel fundamentals. Chapter 7 summaries this thesis and discusses potential directions for future work, and Chapter 8 concludes this thesis. Appendix A lists experimental data.

7

Chapter 2

Background and Previous Work This chapter introduces the background to this thesis and discusses related work. Section 2.1 introduces colour reproduction. In Section 2.2, fundamentals of device characterisation are presented. Section 2.3 describes human colour vision and the state of the art in modelling colour appearance. The fundamentals of gamut mapping are presented in Section 2.4. Section 2.5 describes the related work in high-dynamic-range imaging with respect to colour reproduction. Section 2.6 discusses this chapter.

2.1

Colour Reproduction

Cross-media colour reproduction can be presented as a process which comprises three essential elements: device characterisation, colour appearance modelling, and gamut mapping. A set of

Device characterisation

Colour appearance

Original medium profile

Original image

Forward device transform

Gamut mapping Original viewing conditions

Forward appearance model Gamut mapping algorithm

Equivalent colour appearance Reproduction image

Inverse device transform

Reproduction medium profile

Inverse appearance model

Original and reproduction colour gamuts

Reproduction viewing conditions

Figure 2.1: Five-stage colour reproduction system. Procedures for reproducing a source image on a target medium can be described as a set of five different stages: (1) forward device characterisation, e.g., a camera or a scanner, (2) forward colour appearance model, e.g., CIECAM02, (3) perceptual gamut mapping, (4) inverse colour appearance model, and (5) inverse device characterisation. Adapted from [MacDonald, 1993; Morovic, 1985].

2.2. Characterisation

8

these elements can be interpreted as a five-stage transform [MacDonald, 1993] from the point of view of reproducing a source image on a target medium (see Figure 2.1). Initially, the original image is specific on the source medium. For instance, the RGB 8-bit signals of a camera. At the target device the image is also shown in a medium-dependent way. In order to match the colour appearance on the two different media, it important to describe the different media in some medium-independent way. Device characterisation describes colour reproduction devices, e.g., a camera, a scanner, and a printer, by relating their device-dependent colour specification to device-independent coordinates, e.g., physically-meaningful tristimulus values such as CIEXYZ. However, this is not sufficient for colour reproduction as a given physical stimuli can be perceived differently depending on its viewing conditions. Perceptual attributes, e.g., the lightness, chroma, and hue of a physical colour stimulus, need to be communicated instead of physical stimuli values. Hence, a colour appearance model links the description of the physical stimuli to the perceptual appearance attributes, considering a given viewing environment. Technically, these two elements, device characterisation and colour appearance modelling, are sufficient for colour image reproduction unless the size of the colour gamuts of input/output media is different [Morovic, 2008]. However, if there is a considerable difference between the colour gamuts, it is necessary to map the input colour gamut into the output in an intelligent way, so-called gamut mapping.

2.2

Characterisation

Colours on imaging devices are specific to their media. Device characterisation converts the devicedependent colour specification to device-independent coordinates. It bridges the meaningless imaging device signals to physically-meaningful values. The following sections present the physical background and technical details of device characterisation.

2.2.1

Measuring Optical Radiation

Imaging devices like digital or film cameras sense a certain range of optical radiation to yield images. Radiometry is the measurement of the optical radiation, which is an electromagnetic radiation within the frequency range from 3⇥1011 to 3⇥1016 Hz [CIE, 1983]. In contrast, photometry is the measurement of light, which is defined as electromagnetic radiation detectable by the human eye within the wavelength range from 380nm to 780nm. It is defined as the CIE V (λ) function [CIE, 1986]. Therefore, radiometric units include infrared, visible, and ultraviolet wavelengths without specific consideration of the human visual system, and luminous units account for the perceptual aspect of the radiation on the human eye. There are various ways to quantify the optical radiation in physics. The quantification units are described here. Suppose there is a tungsten light, which emits a beam of light on subjects in a room. The beam contains a certain amount of light. When it is near the lamp, it occupies a small area; when it is further away, it occupies a larger area (like a spot light). However, the amount of light in the beam is the same. Its beam looks like a circular cone (see Figure 2.2). The total amount of light visible in the beam is called luminous flux [unit: lumen] F . It is a summation of the products of

2.2. Characterisation

9

the power per unit wavelength interval P(λ), the spectral luminous efficiency function V (λ) [CIE, 1986], and the width of each wavelength band ∆λ. To obtain a physically-meaningful scale, it is scaled by a constant relating units of flux to units of power (683 lumens per watt) Km : F = Km

X

P(λ)V (λ)∆λ .

(2.1)

λ

The only difference between calculating radiometric units and calculating photometric units is to exclude the CIE V (λ) function of luminance. The calculation of radiant flux excludes the V (λ) function in Equation (2.1) and uses the watt unit instead. Luminous flux measures the visible light in passage from one place to another. Illuminance is the amount of luminous flux falling on a unit area of a surface. Its unit is lux, which means one lumen falls on an area of one square metre. For irradiance, the unit is w/m2 . There are two interesting laws related to illumination. Illumination E is inversely proportional to the square of the distance between the light and the surface d,

E1 E2

=

d22 d12

, called Inverse Square

Law of Illumination. The illumination E on an inclined surface E at distance d is proportional to the cosine of the angle ✓ of incident light and the surface normal, E =

I cos✓ , d2

where I is luminous

intensity, called the Lambertian Cosine Law of Illumination (see Figure 2.2). On the light emitting surface, the amount of light leaving a light source can be measured. It is called the luminous intensity, and is measured in candela. One candela occurs when a source radiates one lumen into a solid angle of one steradian (sr). The unit for radiant intensity is w/sr. Luminance describes a measure of the light leaving a surface, equal to the luminous intensity per unit area. The unit of luminance is cd/m2 ; the unit for radiance is w/(m2 · sr). In particular, the

iterative travel of radiance L at a certain solid angle (a steradian w is an area A per squared radius r: w = A/r 2 ) !o can be modelled mathematically like Equation (2.2), so-called the rendering equation

[Kajiya, 1986]. It is a summation of emitted radiance L e (p,!o ) at a point p and the integral of reflected light in hemisphere ⌦: Z f (p,!i ,!o )L(p0 ,−!i )cos✓i d!i ,

L(p,!o ) = L e (p,!o ) +

(2.2)



Luminous flux d1

E1 d2  E2

Figure 2.2: Schematic diagram of illumination laws. E1 surface illuminated by a near light source; E2 surface illuminated by a more distance of light source.

2.2. Characterisation

10

where f (p,!i ,!o ) is the reflectance property (a scalar function from zero to one) at point p in the incoming direction !i and outgoing direction !o . This is the bi-directional reflectance distribution function (BRDF). L(p0 ,−!i ) is the incoming radiance from direction (−!i ) in one dimensional angle ✓ from the surface p normal. In practice, a perfect diffusion assumption is often used for mathematical convenience, called a Lambertian surface. Theoretically, a Lambertian surface provides uniform diffusion of the incident radiation so that its luminance is the same in all directions from which it can be measured. For instance, if the Lambertian surface is illuminated uniformly with an illuminance of 3.1416 (⇡) lux, then the measure of its luminance on that surface will be 1.0 cd/m2 in 100% reflectance.

2.2.2

Colorimetry

Colorimetry is the measurement of human colour perception, concerned with reducing spectra to the physical correlates of colour perception. To perform colorimetry, we need three essential elements: a light source (illuminant), an object (with standard measuring geometry), and a standard observer. In 1931, Commission Internationale de l’Eclairage (CIE) conducted psychophysical experiments, the CIE 1931 standard colorimetric observation, for quantifying trichromatic colour perception of humans to yield colour matching functions (CMF). In the experiment, two colours are shown to normal colour vision observers who are asked to adjust one of the stimuli colours to match the appearance of the other colour. They used red, green, and blue lights that produced a metameric match. The transform has since been updated by Stiles and Burch [1959] and Vos [1978]. These functions became the official standard for the transform from visible spectrum to trichromatic colour coordinates, the so-called CIE tristimulus values, CIEXYZ. However, the physiological long-/middle-/short-wave (LMS) cone responses were discovered to be different from these psychophysical colour matching functions [Estévez, 1979; Hunt and Pointer, 1985]. A transform for cone response was suggested by Estévez [1979], which is broadly 1.8 CIE x() CIE y() CIE z() L-cone M-cone S-cone

1.6

Spectral sensitivity

1.4 1.2 1.0 0.8 0.6 0.4 0.2 0.0

380 405 430 455 480 505 530 555 580 605 630 655 680 705 Wavelength [nm]

Figure 2.3: CIE 1931 colour matching functions vs. physiological cone spectral sensitivity curves. Solid R/G/B coloured lines present CIE 1931 colour matching functions (version: Vos [1978] modification), broken R/G/B coloured lines show the physiological cone responses originated by Estévez [1979]. In particular, the red colour response (L-cone) appears significantly different from CIE x(λ) function.

2.2. Characterisation

11

used as a fundamental transform for computational cone responses (see Figure 2.3 for comparison between the CIEXYZ and LMS cone responses). See Section 2.3.4 for more details on colour spaces. Radiation that raises the colour sensation is measurable by a photo-detector. Such devices comprise of a diffraction grating and light-detecting diodes; for instance, a colorimeter, spectroradiometer, or spectrophotometer. The measured energy on each band of wavelengths is recorded as a spectrum. The spectrum can be converted to tristimulus values (CIEXYZ). Depending on the type of measuring device, there are two different types of tristimulus values. Spectroradiometers normally yield tristimulus values X Y Z by the summation of products of spectral radiance distributions L e,λ [unit: W/(sr·m2 ·nm)] and CIE colour matching functions x(λ), y(λ), and z(λ), scaled by maxi-

mum photographic luminous efficacy, Km , 683lm/W, where the Y value corresponds to luminance L v (unit: cd/m2 ) [Ohta and Robertson, 2005]: X = Km

X

L e,λ x(λ)∆λ ,

λ

Y = Km

X

L e,λ y(λ)∆λ = L v ,

(2.3)

λ

Z = Km

X

L e,λ z(λ)∆λ .

λ

In contrast, spectrophotometers yield the CIEXYZ by the normalised (usually Y = 100) summation of products of reference viewing illumination P(λ) (CIE standard illumination, D50), which is the spectral power distribution normalised to 100 at 560nm wavelength, surface reflectance basis S(λ) [unit: W/(sr·m2 ·nm)], and CIE CMFs [CIE, 1986] [Hunt, 1998]. As it turns out, spectrophotometers

yield normalised D50 illumination-adapted radiance measurements. However, both are confusingly called CIEXYZ values even though they are not identical: X =k

X

P(λ)S(λ)x(λ)∆λ ,

λ

Y =k

X

P(λ)S(λ) y(λ)∆λ ,

(2.4)

λ

Z=k

X

P(λ)S(λ)z(λ)∆λ ,

λ

where k = P

100 P(λ) y(λ)∆λ

.

λ

The International Electrotechnical Commission (IEC) standardises a common colour transform from sRGB primaries to CIEXYZ values [IEC, 2003], which returns the radiometric tristimulus values without including the reference illumination adaptation. In contrast, most present colour transform matrices in colour science were derived from the measurements of a spectrophotometer, e.g., CIECAT02, Bradford chromatic transform, or Hunt-Pointer-Estévez (HPE) transform, as most psychophysical experiments were conducted with reflective materials. To this end, Nielsen and Stokes [1998] proposed a D50-adapted transform of sRGB primaries. The transform bakes the D50 illuminant adaptation in the original sRGB transform [IEC, 2003] through the Bradford chromatic adaptation [Lam, 1985]. This transform is used as an International Color Consortium (ICC) profile

2.2. Characterisation

12

Forward transform R

G

B

X

0.4124

0.3576

0.1805

Y

0.2126

0.7152

Z

0.0193

0.1192

Inverse transform X

Y

Z

R

3.2406

-1.5372

-0.4986

0.0722

G

-0.9689

1.8758

0.0415

0.9505

B

0.0557

-0.2040

1.0570

Table 2.1: Transform from sRGB into CIEXYZ [IEC, 2003]. Forward transform R

G

B

X

0.4361

0.3851

0.1431

Y

0.2225

0.7169

Z

0.0139

0.0971

Inverse transform X

Y

Z

R

3.1336

-1.6168

-0.4907

0.0606

G

-0.9787

1.9161

0.0335

0.7141

B

0.0721

-0.2291

1.4054

Table 2.2: Transform from sRGB into D50-adapted CIEXYZ [Nielsen and Stokes, 1998]. colour space (PCS) [ICC, 2004] (see Table 2.1 and 2.2 for both transform details). In our colour reproduction system, the D50-adapted transform is used for transforming sRGB signals to CIEXYZ values. See Chapter 6 for more details of our colour reproduction system. When the photo-detector measures the surface reflectance (colour), the measurements can be changed due to the geometric positions of the light source, the photo-detector and the surface object. The CIE defined four illumination and viewing geometries for reflectance (transmittance) measurements [CIE, 1986]: 45/normal (45/0), normal/45 (0/45), diffuse/normal (d/0), and normal/diffuse (0/d) (see Figure 2.4). In the 45/normal geometry, the sample is illuminated with an incident light at an angle of 45◦ from the normal, and the photo-detector is located along the normal. The normal/45 geometry is the reverse order of the 45/normal geometry. Common hand-held spectrophotometers, e.g., GretagMacbeth Spectrolino and EyeOne, use the 45/normal geometry. In Detector

Light source Light source

Detector

45/0

0/45

Detector

Light source

Detector

Light source D/0

0/D

Figure 2.4: CIE-recommended illuminating and viewing geometries. Adapted from [Battle, 1997].

2.2. Characterisation

13

the diffuse/normal geometry, the colour object is illuminated from all angles using an integrating sphere, of which the inner surface is painted with white material, and measured at the angle near the normal to the surface (generally 8◦ from the normal to avoid specular highlights). This geometry provides an option for measuring specular highlights (included or excluded). The normal/diffuse geometry is the reverse order of the diffuse/normal geometry. Generally, high-end spectrophotometers use normal/diffuse geometry, e.g., the Datacolor Spectraflash.

2.2.3

Camera Optics for Capturing Radiance

Electromagnetic radiation can be captured physically by an optical mechanism. The simplest formation of an optical image is an image on a plane mirror. As further evolution of the image formation device, Greeks such as Aristotle and Euclid discovered the optical principle of the pinhole camera in the 4th century BC. This is a precursor to the camera obscura: an optical device used in drawing that lead to the invention of photography. In this camera system, the bundles of rays from points on the subject pass through a pinhole and diverge to form an image on a photoplane surface. The pinhole image is inverted, reversed, smaller and lacks sharpness. In modern camera systems, the pinhole is replaced with a series of negative and positive spherical lenses in order to improve the image formation in terms of geometric/radiometric distortion, sharpness, vignetting, and brightness. A lens is usually fitted with aperture, which controls the transmittance of light, calibrated in units of relative aperture. This is represented by a number N , which is defined as the equivalent focal length f of the lens divided by the diameter d of the entrance pupil: N = f /d, for example, a lens with an entrance pupil size 25mm in diameter and a focal length of 50mm has a relative aperture of 2 (=50/25). The numerical value of relative aperture is usually prefixed by the italic letter f and an oblique stroke, e.g., f /2, which provides a reminder of its derivation. The denominator of the expression used is usually referred to as the f-number of the lens, and the relative aperture of a lens is commonly referred to simply as its aperture or even as the f-stop. If there are two different aperture and shutter speed settings, they satisfy the ratio of shutter times to the ratio of squared aperture sizes:

t1 t2

=

N12 N22

[Ray, 2000b].

To simplify exposure calculations, f -numbers are usually selected from a standard series of numbers. As the amount of light passed through a lens is inversely proportional to the square of the p f -number, the numbers in the series increase by a factor of 2. The standard series of f -number is f /1.0, 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11, 16, 22, 32, 45, and 64. A change in relative aperture corresponding to a change in exposure by a factor of 2 (larger or smaller) is referred to as a change of one stop. The change of aperture size influences not only exposure, but also sharpness. This is called the depth of field. The depth of field Td is proportional to the squared of focused distance u of an object and relative aperture N . Td is also proportional to the diameter of the circle of confusion of the lens C, but is inversely proportional to the square of the focal length f of the lens: Td =

2u2 N C . f2

The amount of incident radiation can be controlled by a shutter by opening and closing its shield at a user’s command and exposing the sensing material to light for a predetermined time. It

2.2. Characterisation

14

can be decided by the user or by an automatic exposure-metering system. On older shutters before 1950s, the series of shutter speeds was 1, 1/2, 1/5, 1/10, 1/25, 1/50, 1/100, 1/250, and 1/500 second. Modern shutters provide 1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, and 1/500 second in order to provide a progression of exposure increases similar to the standard series of lens aperture numbers (by a factor of 2) for easy calculation of the exposure. The latter shutter system permits a mechanical interlock between the aperture and shutter speed controls to keep the two in a reciprocal relationship with reference to exposure values [Ray, 2000a]. However, modern shutters introduce rounding errors with respect to a factor of 2, e.g., 1/15 and 1/16. Debevec and Malik [1997] tested their Canon EOS Elan camera by audio recording of the camera shutters. Their measurements verified that the actual exposure times varied by powers of two, e.g., 1, 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/256, and 1/512. We used these actual shutter speeds for the exposure time calculation. When the shutter system opens, the light from a subject falls on to the corresponding area of the photo-sensing material inside a camera. The effect produced on the material, exposure H, is proportional to the product of the illuminance E and the exposure time t: H = E t. The unit for exposure is lux seconds [unit: l x · s] [Attridge, 2000]. The decision of how much to exposure is

made not using radiance, but luminance that excludes the ultraviolet and infrared regions of the electromagnetic spectrum. The luminance L of a small off-the-axis area of the subject is imaged in the focal plane of the camera as illuminance E. The amount of illuminance E on the sensor site that comes from the subject’s luminance L increases with a lens of higher transmittance T , but decreases with squared f -number of aperture N : E=

T ⇡cos4 ✓ 4N 2

L,

(2.5)

where illumination E reduces according to the distance from the optical axis of the lens in proportional to cos4 ✓ , called the vignetting effect (✓ is an angle from the optical axis). In addition, the equivalent series of the combinations of shutter times and apertures can be ⇣ 2⌘ defined as a absolute figure, called exposure value (EV) [Ray, 2000a]: log2 Nt . Assuming a film speed of ISO 100, the overall luminance level can be determined as a proportion of 2 EV −3 . For instance, if an EV measurement is 5, the scene luminance is approximately 4 cd/m2 .

2.2.4

Sensing Radiance

Once the optical radiation has travelled through the optical mechanism, the amount of radiation can be detected by certain materials to accomplish image formation. Early image-sensing technology started with Daguerreotype (the first photography, introduced in 1839) in which silver halide is coated on the surface of a mirror as photodetectors [Walls and Attridge, 1977]. Once an image is exposed on the silver halide, the latent image is deposited by iodine vapour. In recent film photography, the mirror is replaced with light-sensitive emulsion, which comprises transparent celluloid or acetate base, coated with an emulsion, containing the silver halide. The developing method of the latent image is also improved with bromine and chlorine to enhance the spectral sensitivity of films.

2.2. Characterisation

15

The film-based image-sensing method has been replaced by solid-state-based devices over many years for efficiency and accuracy. Charge-coupled device (CCD) refers to a semiconductor architecture in which the electronic charge is transferred to its storage areas. The CCD architecture has three basic functions: charge collection, charge transfer, and the conversion of charge into measurable voltage [Janesick, 2001]. Recently, complementary-metal-oxide semiconductor (CMOS) has become more popular than CCD sensors in solid-state cameras as it provides more efficient energy consumption. In general, CCDs are regarded as passive pixel sensors and CMOSs are regarded as active pixel sensors, since each pixel on the CMOS includes its own amplifier to yield amplified charge voltage per pixel [Holst, 1998]. Note that solid-state-based sensors have a wider bandwidth of spectral sensitivity than the human visual system (see Figure 2.5). In particular, the sensitivity of such sensors is spread more toward infrared (IR) wavelengths (beyond red colour). In order to have a similar response to human vision, the sensors need to be calibrated with an IR-blocking filter that cuts out the wavelengths longer than 700-800nm [Gilblom and Yoo, 2004]. Once the incident light is filtered through the IR blocking filter, individual pixels are filtered with either red, green, or blue filters arranged in a mosaic pattern. These colour filters mimic the spectral responsivity of the human visual system [see Figure 2.8(a) for the spectral sensitivity of a digital camera and Figure 2.3 for that of the human visual system]. The amplified charge voltage is transported to an analogue-to-digital converter (ADC), which converts voltage into measurable voltage, i.e., an electronic signal. For consumer cameras, an 8bit ADC is used; for professional or scientific photographic cameras, a 12- or 14-bit ADC is used. Its linearity is specified by differential nonlinearity (DNL) and integral nonlinearity (INL). In theory, the voltage of charge in a detector should increase linearly in proportion to the illuminance on the surface of each pixel, but its linearity often requires additional calibration inside the solid state device [Inglis and Luther, 1996]. In addition, recent digital single-lens reflex (DSLR) cameras provide an alternative output in addition to ordinary 8- or 16-bit red, green, and blue (RGB) outputs.

Quantum Efficiency [%]

60 50 40 30 20 10 0 300

Sensitivity of the eye 400

500

600 700 800 Wavelength [nm]

900

1000

1100

Figure 2.5: Quantum efficiency of a solid-state-based sensor. The raw spectral sensitivity of solidstate-based sensors is much wider (between 300 and 1100nm) than that of the human visual system (380–780nm). Infrared-blocking filters are necessary to make the response similar to the human eye. Adapted from [Gilblom and Yoo, 2004].

2.2. Characterisation

16

It is often called RAW image format, which directly stores the ADC sensor signals in the Bayer-pattern as one mosaic colour channel of red, green, blue, and green (RGBG). It excludes the post-image processing, e.g., white balancing, gamma correction, tone mapping, or post noise reduction process, merely including hardware-level noise reduction (pattern noise), scaling constants for white point in the captured scene, and meta data of the camera settings [Coffin, 2009]. The method in Chapter 3 utilises these RAW files to generate high-dynamic-range images and characterises them to achieve image measurements of radiance on an absolute scale. The dynamic range of solid-state sensors is often limited by two main factors: overflow drain in the highest saturation level of illuminance, called blooming effect; and noise floor in the lowest saturation level of illuminance (see Figure 2.6). First, when an electron-detector (well) overflows, the charge spills over to adjacent pixels in the same column resulting in an undesirable overload, called blooming. In order to overcome the blooming effect, usually anti-bloom drains or overflow drains are installed in the imaging sensor. The drains are attached to every pixel, where any photoelectron is swept into the drain and instantly removed. In an ideal imaging system, the output increases linearly in proportion to the incident light up to the anti-bloom drain limit. However, in real arrays, a knee is created because of imperfect drain operation [Janesick, 2001] (see Figure 2.6). Second, the dark saturation point of the image is limited by sensor noise, which falls into five main categories [Holst, 1998]: Shot noise is due to the discrete nature of electrons. It occurs when the photoelectrons are created while the dark current electrons are present. Cooling the array can reduce the dark current (relatively small electric current that flows through the solid state even without exposed to light) to a negligible value and thereby reduce the shot noise to a negligible level; Reset noise is associated with resetting the sense node capacitor. It occurs due to thermal noise (a signal generated by the thermal agitation of the charge carriers in the conductor) generated by the resistance; Amplifier noise comprises two components: 1/f noise (a signal with a frequency spectrum such that the spectral power density is proportional to the reciprocal of the frequency) and white noise (random signals independent of the spectral power density). It occurs in on-chip amplifiers as well as off-chip amplifiers; quantisation noise is due to the ADC discretisation of the output level; Pattern noise refers to pixel-to-pixel variation that occurs (when the array is even in the dark) due to the dark current differences. It is a signal-independent noise, which occurs in CMOS sensors. The noise level is often evaluated as root-mean-squared (RMS) noise on the capture of a uniform surface: v u −1 N −1 X u 1 MX ⇥ ⇤2 f (x, y) − g(x, y) , RMS = t M N x=0 y=0

(2.6)

where M and N are horizontal and vertical image resolutions; f is each pixel level, and g contains the mean of the entire pixel levels. The dynamic range can be described as the difference between the maximum and minimum intensities (or densities) of imaging signal (or colorant), where the intensity (or density) is often calculated by taking the 10-based logarithm of the ratio between the reference maximum luminance

2.2. Characterisation

17

measure I max and the minimum luminance measure I min : log10



I ma x I min



. In electronic imaging, the

dynamic range often describes the number of electrons of full capacity of the well Nsi g nal , which is limited by the noise floor Nnoise [Holst, 1998]. The dynamic range that considers the noise floor is presented by the signal-to-noise ratio (SNR) (dynamic range multiplied by 20, unit: dB): ⇣N ⌘ si g nal 20log10 N , where Nnoise is usually calculated as RMS noise. noise

2.2.5

Device Characterisation

Once we measure the optical radiation of a reference target and simultaneously capture it as an image with a sensing device, it is possible to derive a mathematical model to describe the colour specification of the imaging device in physically-meaningful device-independent coordinates. The device signals or output colours in imaging devices vary due to their manufacturer settings or hardware design. They can also vary even with the same specification of identical models due to their manufacturing process. Device characterisation overcomes the variation of imaging devices to build a mathematical bridge between device signals and physical coordinates so that we can describe the device-dependent signals as device-independent signals. A colour space, e.g., CIEXYZ or CIELAB, can be used device-independent signals. To this end, we are able to utilise imaging devices to measure some physical property or to produce specific target colours on the output devices. Device characterisation often requires two procedures [Johnson, 2002]: • Calibration: the setting up of a device or process so that the device gives repeatable data.

• Characterisation: the relationship between device colour space and the device-independent colour space, e.g., CIE tristimulus values.

Once a device is calibrated in repeatable conditions, a mathematical model can be derived to yield physically-meaningful coordinates. The characterisation of a target device then comprises two elements: estimating a tone-reproduction curve for each colour channel, the so-called opto-electronic transfer function (OETF), and deriving a colour transform between the device-dependent signals and Slope responsivity

Signal-toNoise Ratio

Output signal

Saturation

Dark current

No dark current SEE

Input Exposure

Figure 2.6: Average responsivity of solid-state imaging. Its average responsivity is the slope of the output-input transformation. The maximum input or the saturation equivalent exposure (SEE) is the input that fills the charge wells. SEE is used to define the dynamic range. Dark current limits the available signal strength. Cooling can reduce the dark current to a negligible level. Adapted from [Holst, 1998].

2.2. Characterisation Digital camera

0.8

0.6

0.4

0.2

Red Green Blue

0.0

(a)

LCD display

1.0

Normalized Output Luminance Measures

Normalizd Camera Output Responses

1.0

18

0.8

0.6

0.4

0.2

Red Green Blue

0.0 0.0

0.2 0.4 0.6 0.8 1.0 Normalized Input Luminance Measurements

(b)

0.0

0.2 0.4 0.6 0.8 Normalized Input Display Signals

1.0

Figure 2.7: Measured OETFs of a digital camera and an LCD display. Plot (a) shows the measured OETF of RGB output of a Nikon D100 camera, where the horizontal axis presents normalised incident luminance and the vertical axis shows normalised camera outputs. Plot (b) shows the measured OETF of an Apple Cinema HD Display (LCD panel), where the horizontal axis indicates normalised input display signals and the vertical axis presents the normalised corresponding measurements of luminance levels. the device-independent coordinates. Opto-Electronic Transfer Function

OETF describes a non-linear tone-reproduction function for

each colour channel of an imaging device. For instance, the 709 phosphor in a cathode-ray tube (CRT) display yields non-linear luminance responses according to its voltage input [Inglis and Luther, 1996]. Its responses are raised to the power of approximately 2.2, which is similar to the inverted function of the human cone responses (raises to the power of approximately 0.45, see Chapter 4 for the measured human response). To this end, the non-linear response of CRT monitors cancels out the non-linear response of human perception. The OETF for the 709 phosphor became an international standard for tone reproduction of the sRGB colour space [IEC, 2003]. Figure 2.7 shows the measured OETFs of a DSLR camera and a liquid-crystal display (LCD) monitor. One is associated with the other as an inverse function with minor differences. Even though an LCD panel has a linear response to input voltage [Kwak and MacDonald, 2001], the complete product of the LCD display replicates the OETFs of the CRT monitor to maintain backward compatibility with the sRGB colour system. Display device radiance level measurements (corresponding to its input signals) allow us to derive OETFs for the colour primaries of the device. For instance, if the rough estimate of the tone-reproduction curve appears similar to the power function, we can model the OETF as a power function [Berns et al., 1993], called a gain-offset-gamma (GOG) model. It models the tone reproduction of each channel as a power function with a conditional clamp: 8 C=

< :

î

k g ain · d + ko f f set 0,

óγ

,

ó k g ain · d + ko f f set > 0 , î ó k g ain · d + ko f f set  0 î

(2.7)

2.2. Characterisation

19

where the summation of k g ain and ko f f set is one; d is a normalised display signal for each channel; k g ain is a scaling constant; ko f f set is an offset value; the radiance raises to the exponent of γ. C is the radiance level of red (R), green (G), and blue (B) primaries respectively. To provide a linear relationship in the complete camera-display system, an inverse gamma power function is used in digital cameras as an image processing procedure, so-called gamma correction. This is an essential step to transform the trichromatic radiance values to sRGB display signals (camera output). Note that gamma correction does not exist in HDR imaging camera output as this is normally conducted in the tone-reproduction stage of HDR imaging. When the radiance level C of each primary is normalised to 1.0, the normalised camera output will be: 8 d=

< 1.055C γ − 0.055, : 12.92C,

C > 0.00304 C  0.00304

,

(2.8)

where γ value is 1/2.4 (0.42) which compensates for the 2.2 gamma reproduction in the sRGB system (with linear ramp for dark colours) [IEC, 2003]. OETFs of output devices should be invertible for actual applications. See Chapter 6 for more details on the practical application of display characterisation. In contrast, it is not necessary for digital camera OETFs to be invertible as only a forward transform (from the device signals to the device-independent signals) is required (see Chapter 6 for more details). Hence, high-ordered polynomials are often used for better performance instead of the simple power function [Pointer et al., 2001; MacDonald and Ji, 2002; ISO, 2006]. Colour Transform

Modelling characteristics of non-linear tone reproduction for each colour chan-

nel yields linearised device signals, which correspond to physical measurements of device-dependent colours. It enables us to derive a linear transform between device signals and physical measurements. The use of colour transforms is based on a theory, called Grassmann’s Additivity Law [Hunt, 1998], which describes that any colour can be matched by certain amounts of multiple primaries. For instance, if we have three device primaries and three-dimensional colour coordinates, a 3⇥3 linear transform is sufficient to map device colours to colour coordinates such that they are linearly associated. Suppose we have a digital camera which captures a measured colour target. The trichromatic response value [red (R), green (G), and blue(B)] of a specific pixel on the sensor is given as the sum of the product of the spectral power distribution (irradiance) of the light source P(λ), the surface reflectance (or transmittance) of the imaged object S(λ), and the spectral responsivities of the colour filters D r/g/b (λ). Assuming that incident light is reflected from object surfaces: R= G= B=

P λ

P λ

P λ

P(λ)S(λ)D r (λ)∆λ , P(λ)S(λ)D g (λ)∆λ , P(λ)S(λ)D b (λ)∆λ .

(2.9)

2.2. Characterisation

20

The summation is taken over a suitable wavelength range in the visible part of the spectrum from 380nm to 780nm for instance [ISO, 2006]. The calculation of these response values is similar to the computation of device-independent tristimulus values, such CIEXYZ: X= Y= Z=

P

P(λ)S(λ)x(λ)∆λ ,

λ

P

P(λ)S(λ) y(λ)∆λ ,

(2.10)

λ

P

P(λ)S(λ)z(λ)∆λ ,

λ

where x(λ), y(λ) and z(λ) are the CIE color matching functions (CMF) [CIE, 1986]. The only difference between Equations (2.9) and (2.10) is the use of different weighting functions D r/g/b and x, y,z. Various camera characterisation techniques have been proposed to find a mapping between these colour spaces. They can be categorised into two main classes: models based on targets with known reflectances [Pointer et al., 2001; MacDonald and Ji, 2002; Johnson, 2002; ISO, 2006] and models based on the measurement of spectral responsivity using a monochromatic light source [Martínez-Verdú et al., 2000; MacDonald and Ji, 2002; Martínez-Verdú et al., 2003; ISO, 2006; Normand et al., 2007]. The reflectance-based techniques use a colour target, such as the GretagMacbeth ColorChecker, where the tristimulus values of each colour patch are measured first or already known (e.g., in CIEXYZ). A picture of the colour target is then taken and a direct mapping between the image’s RGB-values and the measured XYZ values is derived via linear regression (or polynomial regression in case of non-linearised images). While these techniques are very simple, they are only valid for the current illumination condition [ISO, 2006], as P(λ)s in Equation (2.9) and (2.10) are not the same

Normalized Spectral Sensitivities

(a)

LCD display Red Green Blue

0.80 0.60 0.40 0.20 0.00

360 400 440 480 520 560 600 640 680 720 760 800 Wavelength [nm]

Normalised Measured Radiance

Digital camera 1.00

(b)

1.00

Red Green Blue

0.80 0.60 0.40 0.20

0.00

380 410 440 470 500 530 560 590 620 650 680 710 740 770 Wavelength [nm]

Figure 2.8: Spectral responsivity of a digital camera and an LCD display. Plot (a) shows the measured spectral sensitivities of the RGB filters respectively on the single wavelength lights (Nikon D70). The responsivity appears to be similar to the human colour matching functions in wavelengths between 380 and 730nm. Plot (b) presents the measured spectral characteristics of the RGB primaries respectively. The bandwidth of the trichromatic primaries is relatively narrower in the LCD display (Apple Cinema HD Display), compared to the width of the camera filters, as the florescent lamp or LED diode is used as a back-light source instead of broadband width light sources (e.g., a Xenon lamp).

2.2. Characterisation

21

with these methods. P(λ) in Equation (2.9) is the spectrum of the light source at a scene; P(λ) in Equation (2.10) is usually CIE D50 illuminant in colorimetry and ICC profiles (see Section 2.2.2). As soon as the lighting changes, a new mapping is required. Therefore, this characterisation method is very limited in practical applications. Nonetheless, it is universally used for ICC input profiles [ICC, 2004] and is part of the ISO standard [ISO, 2006]. Reflectance-based techniques have also been extended to HDR imaging by assembling characterised LDR images into an HDR image by using the ICC method [Göesele et al., 2001]. However, this extension shares the same assumption of fixed geometric and spectral illumination characteristics, and also does not allow us to characterise absolute luminance. The monochromator-based techniques use a white integrating sphere of known reflectance and a monochromatic light source of which wavelength can be adjusted. By illuminating the integrating sphere with every single wavelength within the visible spectrum, the spectral responsivity D r/g/b can be measured directly, which allows derivation of a simple linear mapping to CIEXYZ. In this case, P(λ) is the same for Equation (2.9) and (2.10). While this method is much more universal than reflectance-based techniques, monochromator-based techniques are very time-consuming, each wavelength must be measured individually and a picture needs to be taken for every wavelength. These techniques can, in theory, be used for camera characterisation in HDR imaging. However, only colour could be characterised and not luminance, as the employed illumination and target only offers a low dynamic range. Figure 2.8(a) presents spectral characteristics of a digital camera, obtained through the monochromator-based technique, compared to spectral characteristics of a trichromatic LCD display [Figure 2.8(b)]. Inanici and Galvin [2004] and Krawczyk et al. [2005] proposed to rescale the measured luminance values in HDR radiance maps by comparing them with measurements from a luminance meter. However, they only take into account luminance scales without considering radiometric measurements of colours.

2.2.6

White Balancing

The characterisation model of a digital camera transforms input device-dependent camera signals into device-independent colour coordinates. However, in case an image is not intended for measurement purposes but for display on an output monitor, we need to take the human visual system (which adapts to a given illumination condition) into account. This is a classical issue and is traditionally called white balancing (for cameras) or colour constancy (for human vision) [d’Zmura and Lennie, 1986] for digital cameras. These computational methods are distinct from human chromatic adaptation. Colour constancy methods pursue accurate estimation of scene illumination and assume 100% adaptation to the given illumination, but chromatic adaptation in the human visual system shows inconsistent adaptation to a given illumination; hence, a chromatic adaptation model focuses on formulating these inconsistent trends in perceiving hue (see Section 2.3.4 for more details). Many colour constancy methods have been proposed and we can only mention the most related methods; for a more complete overview, see [Hordley, 2006].

2.3. Colour Appearance

22

In order to estimate the unknown scene illumination from camera signals only, assumptions are usually made about aspects of real-world images. The grey-world method [Buchsbaum, 1980; van de Weijer and Gevers, 2005] assumes that the average reflectance or colour derivative in a scene is grey, whereas the maxRGB method [Land, 1977] assumes the respectively brightest channel levels in an image correspond to the white point. Instead, prior information about the gamut distribution can be acquired in a learning phase, which is used in the colour-by-correlation method, for instance in [Finlayson et al., 2001]. Statistical prior probability of the training data set can be used to improve the performance of the grey-world method [Barnard et al., 2002; Gijsenij and Gevers, 2007; Gehler et al., 2008]. This requires a large set of training data and long precomputation times. Despite the large variety of available methods, no algorithm can be regarded as universal. In practice, the grey-world and maxRGB approaches perform well on natural, real-world images [Hordley, 2006; Gijsenij and Gevers, 2007]. We therefore propose an enhanced version of the greyworld algorithm to estimate the scene’s correlated colour temperature, which is inspired by Barnard et al. [2002]’s method. However, we derive a linear transform from real-world training images with radiometric measurements instead of synthetic images, and we further apply a weighting scheme that combines the maxRGB and grey-world methods. See Chapter 3 for more details.

2.3

Colour Appearance

Device characterisation describes colour reproduction devices by relating their device-dependent colour specification to device-independent coordinates, e.g., physically-meaningful CIEXYZ. However, this is not sufficient for colour reproduction as given physical stimuli can be perceived differently due to their viewing conditions. Therefore, perceptual attributes, e.g., lightness, chroma, and hue, of a physical colour stimulus need to be communicated rather than physical stimuli. Colour spaces commonly try to ensure that equal scale intervals between stimuli represent approximately equally perceived differences in the attributes considered. Colour appearance models additionally try to model how the human visual system perceive colours under different viewing conditions, e.g., against different backgrounds. The following section presents the background and related work of the human visual system, psychophysical methodology, and colour appearance models.

2.3.1

Human Colour Vision

Colour is caused by the spectral characteristics of reflected or emitted radiance, which is seemingly easy to understand as a physical quantity. However, colour is really a perceptual quantity that occurs in one’s mind, and not in the world. Therefore, the physical spectrum is commonly decomposed into perceptual quantities using physiological and psychophysical measurements that try to quantify the human visual system; e.g., the CIE 1931 standard colorimetric observation [CIE, 1986]. Müller’s zone theory of trichromatic vision [Müller, 1930] is commonly used as a basis for deriving computational models of human vision. It describes how the combined effect of retina, ganglion neurons, nerve fibers, and the visual cortex constitutes colour perception (see Figure 2.9). The retina features cones and rods with different spectral sensitivity. Long (L), middle (M),

2.3. Colour Appearance

23

and short (S) cones are stimulated by approximately red, green, and blue wavelengths respectively, while the rods have achromatic sensitivity. The ratio of the numbers of the three cone types varies significantly among humans [Carroll et al., 2002], but on average it can be estimated as 40:20:1 (L:M:S) [Vos and Walraven, 1971]. In the first stage of the visual system, the eye adapts to the observed brightness level. Two adaptation mechanisms control the effective cone response. The pupil changes size and controls the amount of light reaching the retina to a limited extent. In addition to physical adaptation, the retina itself adapts neurologically. Based on measurements of cone responses of primates under varying (flashed) incident retinal light levels I of up to 106 td (Troland units: luminance in cd/m2 ⇥ pupil area in mm2 ), Valeton and van Norren [1983] found that the response satisfies the hyper-

bolic ratio equation of Naka and Rushton [1966], called Naka-Rushton equation [Equation (2.11)], which originated from the Michaelis-Menten equation [V /Vm = I/(I + σ)] [Michaelis and Menten, 1913], effectively compressing the response. Normalising the cone response V by the maximum physiological cone response Vm , they derive a general response function: V Vm

=

In I n + σn

,

(2.11)

where n was found to be 0.74 and σ was found to depend directly on the adaptation luminance (varying from 3.5 to 6.3 log td), which shifts the response curve along the log-intensity axis, see Figure 2.10. In contrast, Boynton and Whitten [1970] assume σ to be constant and that all sensitivity loss is caused by response compression and pigment bleaching, which is the basis of many colour appearance models, such as Hunt94, CIECAM97s, and CIECAM02 [Hunt, 1994; CIE, 1998; Moroney et al., 2002]; however, we will demonstrate that for accurate prediction of lightness, σ should be

Zone 1

Zone 2

Fovea

Visible Spectrum

M cone S cone Rod Retina

Optic nerve

Colour Perception

L cone

(a)

Eye

Zone 3

(b)

A

Brightness

C1 (R-G)

(+) R, (-) G

C2-C3 (Y-B)

(+) Y, (-) B

Neurons

Nerve Fiber

• Absolute Quantity: 1. Brightness (Strength of A) 2. Hue (Ratio of C1 to C2-C3) 3. Colourfulness (Strength of C1 and C2-C3) • Relative Quantity: Lightness/Chroma/Saturation (c)

Visual Cortex

Optic chiasm Optic tract

(d)

Lateral geniculate nucleus Optic radiations Primary visual cortex

Figure 2.9: Schematic illustration of human colour vision based on the zone model [Müller, 1930]. Light enters through the pupil and stimulates cones and rods. The given stimulus is sensed by long (L)and middle (M)-wave cones in the fovea, and short (S)-wave cones and rods outside the fovea (a). The strengths of the four responses are combined to yield achromatic brightness, and the ratio and strength of the C1 (L − M ) channel and the combined C2 (M − S) and C3 (S − L) channels yield the hue and

colourfulness sensations. The signals travel along the nerve fiber (crossed at the optical chiasm), are merged into one image in the left and right lateral geniculate nucleus (LGNs), and cause the final visual sensation at the visual cortex (c). Image (d) presents a corresponding anatomical chart of the head.

2.3. Colour Appearance

24

allowed to vary. See Chapter 5 for more details on modelling the cone response. Humans perceive object colours as constant under different illumination; this effect is called colour constancy. It is believed that the underlying mechanism is caused by a slightly different adaptation of each cone type but the details are still debated [Lam, 1985]. It may even be a combination of cone adaptation and processing in the cortex. According to the zone theory, the cones’ and rods’ responses are transformed into three neural signals, which are passed along the nerve fibers. A weighted combination of the three cone- and rod-responses yields one achromatic signal A that is perceived as brightness. Colour information is transformed in the form of two difference signals: the red/green opponent colour attribute is the difference of the L and M cone sensations, C1 = L − M ; the yellow/blue opponent colour attribute is the difference of the two difference signals C2 = M − S and C3 = S − L, that is, C2 − C3 . The ratio of C1 and C2 − C3 causes a hue sensation in our visual cortex, and their strength conveys colourfulness.

Brightness, hue, and colourfulness are the fundamental attributes of colour sensation. They can

be used to derive relative quantities that model human colour perception. The ratio of a surface’s brightness A and the brightness An of the reference white defines the lightness sensation [Land and McCann, 1971]. Setting a surface’s colourfulness in proportion to the reference brightness An yields chroma. Similarly, comparing a surface’s colourfulness to its own brightness level provides the saturation sensation. Hunt [1998] defines common colour appearance terminologies clearly: • Brightness:

attribute of a visual sensation according to which an area appears to exhibit

more or less light.

• Lightness: the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or very highly transmitting.

• Colourfulness: attribute of a visual sensation according to which an area appears to exhibit more or less of its hue.

• Chroma: the colourfulness of an area judged in proportion to the brightness of a similarly

Total response, V (µV)

illuminated area that appears to be white or highly transmitting.

800 600 400

DA 2 3

4

5

6

200 0

-200 1

2

3 4 5 Log intensity, I (log td)

6

7

Figure 2.10: Cone response (V ) vs. intensity (log I) curves in the presence of adapting background illumination from dark adapted luminance (DA) to brighter adaptation luminances (2, 3, 4, 5, and 6 log td). Adapted from [Valeton and van Norren, 1983].

2.3. Colour Appearance

25

• Saturation: the colourfulness of an area judged in proportion to its brightness.

• Hue: attribute of a visual sensation according to which an area appears to be similar to one, or to proportions of two, of the perceived colours, red, yellow, green, and blue.

In this thesis, colour appearance attributes will be discussed by using these terminologies.

2.3.2

Quantifying Perception

Colorimetry in Section 2.2.2 describes colour as it directly relates to physical properties. Considering that colour is a perceptual sensation triggered by physical stimuli and that electrophysiological measurements of the human eye and brain are quite limited, experimental psychology is an alternative option to measure human colour perception. Many of the psychophysical measurements necessary for modelling human colour vision have been conducted in recent decades. Psychophysics is the scientific study to derive the relationships between the physical stimuli and the perceptual sensations that those stimuli evoke [Fairchild, 2005]. We conducted psychophysical experiments to quantify human colour perception under high luminance levels to achieve a full range of measurements of the human visual response (see Chapter 4) and to assess and evaluate the accuracy of our colour reproduction system, compared with previous work (see Chapter 6). Psychophysical analysis originates from Weber’s Law, which states that the ratio of the change in stimulus intensity that achieve a just noticeable difference to the stimulus intensity is constant, and Fechner’s Law that defines the relationship between the magnitudes of physical stimuli X and their resulting perceptions S as logarithmic (S = ln X ). In modern psychophysics, the relationship between the stimuli and their perceptions is described as a power function (S = ↵X β ), instead of logarithmic, by Stevens’ Law [Laming, 1997]. Psychophysical experiments fall into two main categories: threshold and matching to measure visual sensitivity to small changes in stimuli (or perceptual equality), e.g., measuring just-noticeable difference (JND) as visual tolerances, and scaling to define a supra-threshold relationship between the physical stimuli and the perceptual magnitudes from those stimuli, e.g., LUTCHI colour appearance experiments [Luo et al., 1991a]. Threshold and Matching

Two different stimuli are presented to observers who are asked whether

they can sense the difference of those stimuli (threshold) or to adjust one of the presented stimuli to match with the other (matching). In general, these methods yield more accurate measurements than the sensory scaling methods. For instance, CIE 1931 standard colorimetric observations were derived from metameric matching experiments [Hunt, 1998]. In these experiments, one colour is presented to one eye and another colour presented to the other eye with a haploscopic device. Colour-normal participants are then asked to adjust one colour to match the other by controlling the proportion of red, green, and blue primary colours. This experiment is based on the assumption that the adaptation of one eye does not influence the other. Unfortunately it imposes unnatural viewing conditions with constrained eye movement. Sensory Scaling

For a given stimulus, observers are asked to produce a numerical scale with

2.3. Colour Appearance

26

respect to the intensity of a “-ness” property [Engeldrum, 2000], e.g., lightness, colourfulness, or similarity. The scales belong to one of four different categories. A nominal scale is an indexing number for classification or identification purposes. An ordinal scale presents the rank of a specific property of given candidate stimuli. An interval (between scales) describes the difference or distance between the measured property or characteristic. A ratio scale is a combined scale of the ordinal and interval scales. This scale includes the zero amount [Fairchild, 2005]. The sensory scaling experiments fall into three main categories: pair comparison, category judgment, and magnitude estimation types. Pair comparison is an experiment where each pair combination of a set of stimuli is presented to observers. Observers are then asked to choose which stimulus exhibits more of a property or characteristic being evaluated. So that the experiment is not forced-choice, the observers are allowed to choose that both stimuli are equal. Thurstone’s Law of Comparative Judgement [Thurstone, 1959] is often used to analyse the collected data to quantify properties of stimuli by transforming them into an interval scale. This method is generally believed to provide better accuracy in quantifying a property compared with other scaling methods. Category judgement is a method where a possible magnitude of a property (given to observers) is scaled in equal intervals. Observers are asked to judge which category a given stimulus falls into. Torgerson’s Law of Categorical Judgement [Torgerson, 1958] (extended Thurstone’s Law of Comparative Judgement) allows us to transform the equal-interval scales into relatively-positioned interval scales with respect to category boundaries. Magnitude estimation is an experiment where observers are asked to judge a property of a given stimulus as a ratio scale to represent the extent. Each observer produces different scales, which are different from other observers. Stevens’ Power Law [Stevens, 1957] is used to manage a large variation of subjective ratio scales of each observer. We used this magnitude estimation method for obtaining human colour perception under high luminance levels. See Chapter 4 for more details on experimental setting and data analysis.

2.3.3

Colour Appearance Phenomena

Colour appearance phenomena occur when identical optical radiation levels are perceived differently in varying viewing environments. The human visual system presents certain characteristics in how it perceives the appearance of colour in specific viewing conditions. These are defined as stimulus, proximal field, background, surround, and adapting field [Hunt, 1998]. Stimulus describes the physical radiation that invokes colour appearance, generally in a 2◦ angle subtended from the visual axis of the human eye. The proximal field is the extended area from the edge of a 2◦ stimulus in all directions. The background presents the environment of the main colour stimulus in a 10◦ area, outside the 2◦ stimulus. The surround is the field outside the background. See Figure 2.11. Perceived appearance depends on the environmental viewing conditions. Among the various effects, this section presents phenomena with respect to our experiments in Chapter 4 (see [Fairchild, 2005] for more details on other phenomena): • Luminance Effect on Brightness: Stevens and Stevens [1963] describe brightness perception

2.3. Colour Appearance

27

Surround Background

Stimulus

Proximal field

Figure 2.11: Specification of components of the viewing field. Stimulus describes the physical radiation, generally in a 2◦ angle subtended from the visual axis of the human eye. The proximal field is the extended area from the edge of a 2◦ stimulus in all directions. The background presents the environment of the main colour stimulus in a 10◦ area, outside the 2◦ stimulus. The surround is the field outside the background. Adapted from [Fairchild, 2005].

trends with respect to luminance. They state that the perceived brightness changes according to luminance and model brightness perception as a power function where the exponent depends on a luminance. This is the Stevens effect. Suppose two identical grey-scales are placed in a dark room and a bright room. The contrasts of the perceived brightnesses appear differently. At a low luminance level, the contrast of the perceived grey-scale appears decreased, while at a high luminance level, the perceived contrast increases, i.e., the dark colours appear darker, and middle or brighter colours appear brighter under higher luminance levels. This effect is observed in our experimental data. However, the trend appears more complicated than a simple power raise. See Chapter 5 for more details on our proposed numerical model for the Stevens effect. • Luminance Effect on Colourfulness: The level of luminance influences not only contrast, but also colourfulness. Suppose we look at colourful objects under bright sun light but we also observe the identical objects in a dim room. Comparison of the perceived colourfulness find that the colourfulness of a given stimulus increases with the luminance level; the so-called Hunt effect [Hunt, 1952]. The Hunt effect is also confirmed by our colour experiments (see Chapter 4). While Hunt [1952] uses a haploscopic device where two different levels of luminance are presented to the left and the right eyes respectively, we conducted a psychophysical memory experiment (see Chapter 4 for more details on the experiments). • Background Effect: Suppose two identical grey patches are placed on two different backgrounds, white and black. The grey patch on the black background appears lighter, while

2.3. Colour Appearance

28

the identical patch on the white background appears darker. This is called the simultaneous contrast effect. The change in the background causes the change of colour appearance [Albers, 1963]. The simultaneous contrast for these stimuli depends on the spatial structure of the environment, rather than colours or edges. These changes were observed for not only lightness but also for colourfulness in our colour experiments (see Chapter 4 for more details on the initial findings). • Surround Effect: Breneman [1977] describes the effects of surround with respect to luminance. Suppose two identical grey-scales are placed under average and dark surround respectively. The perceived contrast of lightness under the dark surrounds increases while the contrast under the average surround decreases. Our experimental data indicates that lightness contrast decreases while colourfulness increases as the luminance level of the surround increases. However, as observed in Breneman [1977], our data confirms that the difference was small and statistically insignificant(see Chapter 4). • Helson-Judd effect: Helson [1938] states that the chromatic adaptation mechanism works

imperfectly depending on the lightness of the objects. For instance, if a grey scale is illuminated by a yellowish light source like tungsten light, the lighter patches will appear yellowish exhibiting a certain amount of the hue of the light source. In contrast, the darker patches in the grey scale will appear bluish.

• Purkinje Break & Shift: Purkinje [1825] describes the activity transition of cones and rods with respect to luminance. In the luminance range between 0 and ⇠100 cd/m2 (called

mesopic vision), as luminance decreases, cones are gradually deactivated, and rods start to contribute in sensing luminance. At a certain luminance level (called Purkinje break), the threshold of luminance increases such that the cones and rods both contribute to luminance perception. However, further decreases of luminance deactivates cones, and then only rods contribute to vision. Under dark luminance conditions, the scotopic vision (only rods) also presents different spectral sensitivity, called Purkinje shift, from photopic vision (only cones). In dark viewing conditions, the eye’s luminance sensitivity shifts toward short-wavelengths (bluish) a little, defined as the CIE V 0 (λ) function [CIE, 1986]. Peak sensitivity of luminance shifts from 560nm to 510nm. Targeting extended luminance levels, our model covers photopic vision only. This phenomenon is not modelled in our appearance model. LUTCHI Colour Appearance Experiments

In order to quantify colour appearance phenomena,

many extensive experiments have been conducted. In particular, the magnitude estimation experiments conducted at the Loughborough University of Technology Computer Human Interface (LUTCHI) Research Centre provide a significant amount of measurements of colour appearance on a large variety of media from reflective materials to CRT monitors [Luo et al., 1991a,b, 1993a,b, 1995]. The LUTCHI data set includes relative tristimulus values, viewing conditions (e.g., reference white, background luminance level, and medium type), and corresponding colour appearance measurements. The data set has been used to revise the Hunt colour appearance model [Hunt, 1991]

2.3. Colour Appearance

29

and to derive the LLAB model [Luo et al., 1996]. International standard colour appearance models, CIECAM97s [CIE, 1998] and CIECAM02 [Moroney et al., 2002], are also derived from this data set. In [Luo et al., 1991a], six to seven trained colour-normal participants were asked to rate scales with respect to lightness, colourfulness, and hue of the given stimuli. The viewing environments varied the level of illumination (low and high, up to ⇠250 cd/m2 ), medium type (reflective and

CRT), background (white, grey, and black), and white point (CIE A, D50, D65 illuminant). The results show that the background and reference white influence colour appearance significantly. In our experiments, we used the almost identical experimental settings to these LUTCHI experiments. See Sections 4.2 and 4.3.1 for more details. Luo et al. [1991b] compares the performance of a several colour appearance models, namely CMC, CIELAB, Nayatani’s, Hunt’s 87, and Hunt-ACAM (being the Alvey Colour Appearance Model), in terms of lightness, colourfulness, and hue. Overall, the Hunt-ACAM model performs better than the others. In particular, Luo et al. [1993a] measured brightness along with lightness perception, which is the only available data set for the relationship between lightness and brightness. Those properties were measured under six different luminance levels of CIE D50 illuminant. Luo et al. [1993b] describe the measurements of colour appearance on cut-sheet transparency and 35mm projection, which are under high levels of luminance up to 1 272 cd/m2 . However, they used only four colour samples between 1 000 and 1 272 cd/m2 . Luo et al. [1995] specifically examined the simultaneous contrast effect. Five observers scaled lightness, colourfulness, and hue on a CRT display with varying proximal fields around the main colour samples. This is used for testing the performance of predicting simultaneous contrast in the Hunt model. The LUTCHI colour appearance experiments provide an excellent methodology to measure the perceived colour appearance in a scientific way, and it covers a very wide range of media from reflective materials to CRT displays. However, most of the luminance levels in the experimental data are under approximately 690 cd/m2 , which was limited by the available display technology in the 90s. This range of luminance falls short of covering the full range of the human visual system (which is five-orders of magnitude). Consequently, perceptual colour appearance under extended luminance levels has not been studied, mainly due to the unavailability of psychophysical data. Therefore, we conducted psychophysical colour experiments in order to acquire appearance data for many different luminance levels (up to 16 860 cd/m2 ) covering most of the dynamic range of the human visual system (see Chapter 4 for more details). These experimental data allow us to quantify human colour perception under extended luminance levels, yielding a new colour appearance model.

Coefficient of Variation

In order to evaluate the performance of the colour appearance models, Luo

et al. [1991b] compared models’ predicted attributes to their perceptual measurements of colour appearance. They evaluated the qualitative difference by employing coefficient of variation (CV); RMS error with respect to mean in percentage scale. Suppose there are two different data sets x

2.3. Colour Appearance

30

and y. The calculation of CV is: CV =

100 y

»

1X N

i

(x i − yi )2 ,

(2.12)

where y is the mean of the data set y; N number of y elements. The deviation in this CV is calculated from the difference between two elements (x i − yi ) like RMS error, which is then normalised

by the mean. As opposed to this paired comparison, when evaluating the sample variation of a group

x, like inter-observer variation, the difference between each element and the mean (x i − x) is used

instead of the difference of each element (x i − yi ) in Equation (2.12): » 100 1 X CV = (x i − x)2 , x (N − 1) i

(2.13)

where (N − 1) is the degree of freedom. We only have (N − 1) independent deviations such that P the sum of the N deviations from the mean is always zero: (x i − x) = 0. We employed these CV i

error methods for evaluating our experiments and model performance in qualitative comparison with others.

2.3.4

Colour Appearance Models

A colour appearance model (CAM) is a numerical model of the human colour vision mechanism. Common colour appearance models largely follow the zone theory by modelling human colour vision as a four-stage procedure, shown in Figure 2.12, comprising chromatic adaptation, dynamic cone adaptation, achromatic/opponent colour decomposition, and computation of perceptual attribute correlates. Generally, colour appearance models take tristimulus X Y Z values (of the colour to be perceived) and parameters of the viewing condition to yield perceptual attributes predicting the perceived colour (commonly lightness, chroma, and hue). Colour appearance models mostly differ in the specific functions that transform colour quantities across these four stages, the quality of their prediction, and the different viewing conditions that can be modelled. Popular models are

X Chromatic Adaptation Transform

Y Z

Xc

L

L’

Yc

M

M’

Zc

S

S’

XYZW

Achromatic (A) L:M:S

Opponent (a, b)

Colourfulness (M)

Brightness (Q)

Chroma (C)

Saturation (s)

Hue Quadrature (H)

Hue (h)

La

Outputs

Inputs

Chromatic Adaptation

Lightness (J)

Cone Responses

Colour Decomposition

Colour Appearance

Figure 2.12: Modern colour appearance models roughly follow these four stages. First, the incoming spectrum, sampled as an X Y Z triple, is transformed for chromatic adaptation. This is usually done in a specialised colour space (though not always). Then, the white-adapted X Y Zc is transformed into the cone colour space, where a cone-response function is applied (commonly a power or hyperbolic function). After that, the signal is decomposed in the achromatic channel A and the colour opponent channels a and b. The perceptual correlates are based on these three channels. This is where colour appearance models differ most, as a large range of functions are applied to yield perceptual values.

2.3. Colour Appearance

31

the simple CIELAB model, RLAB [Fairchild, 1991], Hunt94 [Hunt, 1994], LLAB [Luo et al., 1996], CIECAM97s [CIE, 1998], up to the recent and currently widely accepted CIECAM02 [Moroney et al., 2002]. Many different colour appearance models have been proposed over the years. We will briefly review the common models with more details on their mathematical modelling (see [Fairchild, 2005] for a complete overview of other colour appearance models). For the purpose of developing our colour appearance model, we conducted a review of the mathematical details of other colour appearance models. This section contains detailed description of the mathematics of the models. This is included as a reference to the reader for completeness. Section 2.3.6 summarises the following models in sufficient detail for those readers not requiring the reference. CIELAB

CIELAB (or CIELCH) [CIE, 1986] is a very simple colour appearance model that is purely

based on X Y Z tristimulus values. Chromatic adaptation is performed by dividing X Y Z values by normalised white point values X Y Zw . This is a modified form of the von Kries chromatic adaptation transform [von Kries, 1970], and the cone response is modelled as a cube root. Only lightness, chroma, hue, and colour opponents (a and b) are predicted. It does not model any adaptation to different backgrounds or surround changes. Despite these simplifications, it still performs rather well (see Chapter 5 for more details on its performance). Input parameters to the CIELAB model are: • Normalised (Y equal to 100) CIE tristimulus values (observed main colours): X Y Z, • Normalised tristimulus values of the reference white point: X n Yn Zn .

CIELAB takes only normalised input values without taking any environmental viewing conditions into account. The colour appearance attributes are modelled as follows: Lightness Redness − Greenness

Yellowness − Blueness

� � L ⇤ = 116 f Y /Yn − 16 , � � �⇤ ⇥ � a⇤ = 500 f X /X n − f Y /Yn , � � �⇤ ⇥ � b⇤ = 200 f Y /Yn − f Z/Zn , 8 where f (x) =

<

x 1/3 ,

: 7.787x + 16/116 ,

p

x > 0.008856 x  0.008856

(a⇤ )2 + (b⇤ )2 ,

Chroma

Ca⇤b =

Hue angle

ha b = t an−1 (b⇤ /a⇤ ) .

(2.14) (2.15) (2.16)

,

(2.17)

(2.18) (2.19)

CIELAB is the oldest model that was derived from the psychophysical approach in 1976. Although CIELAB does not consider background or surround environmental conditions, it performs considerably well for general purposes (see Chapter 5 for quantitative comparison). RLAB

RLAB [Fairchild, 1991] is a revised version of CIELAB that takes different viewing con-

ditions into account. In particular, it supports different media and different surround conditions.

2.3. Colour Appearance

32

RLAB comprises a chromatic adaptation transform and appearance attribute calculation. Chromatic adaptation is performed in LMS cone colour space, but colour attributes are still computed from white-adapted X Y Z values. Input parameters to the RLAB model are: • Normalised (Y equal to 100) CIE tristimulus values (observed main colours): X Y Z, • Normalised tristimulus values of the reference white point: X n Yn Zn , • Level of luminance of the reference white point: YN [unit: cd/m2 ], • Model parameters: D and σ,

where D depends on a medium type: D = 1.0 corresponds to hard-copy print, soft-copy CRT display yields D = 0.0, and an intermediate value is used for projected images in a darkroom. (D = 0.5 is used with no available data.) σ corresponds to the surround condition: 1/2.3 (for dark), 1/2.9 (for dim), and 1/3.5 (for average) respectively. First, the input tristimulus X Y Z values are transformed into LM S cone signals by using the Hunt-Pointer-Estévez (HPE) transform, MHPE , originated from [Estévez, 1979]: 2 3 2 3 6 X 7 6 0.38971 6 L 7 6 7 6 6 7 6 7 6 6 M 7=M HPE · 6 Y 7 , MHPE = 6 −0.22981 6 7 4 4 5 4 5 Z 0.00000 S

3 −0.07868 7 7 0.04641 7 7. 5 1.00000

2

0.68898 1.18340 0.00000

(2.20)

From the transformed cone signals, the model computes von Kries chromatic adaptation scalars a L , a M , and aS to accomplish chromatic adaptation in the LMS cone colour space: aL =

p L + D(1 − p L ) Ln

, aM =

p M + D(1 − p M ) Mn

, aS =

pS + D(1 − pS ) Sn

,

(2.21)

,

(2.22)

.

(2.23)

where the inner parameters p L , p M , and pS are calculated as follows: 1/3

pL =

(1 + YN

+ lE )

1/3 (1 + YN + 1/l E )

where

lE =

1/3

, pM = 3L n

(1 + YN

+ mE )

1/3 (1 + YN + 1/m E )

L n + Mn + Sn

, mE =

1/3

, pS =

3Mn

L n + Mn + Sn

+ sE ) 1/3 (1 + YN + 1/s E ) (1 + YN

, sE =

3Sn

L n + Mn + Sn

The adaptation scalars form a diagonal matrix A to apply to the adaptation transform, then the cone signals are transformed into tristimulus values with respect to the model’s reference viewing condition (CIE D65 illuminant in 318 cd/m2 ). 2 3 2 3 X X 6 7 6 1.9569 6 re f 7 ä−1 6 Ä 6 7 7 6 6 7 7 6 Y =6 6 0.3612 6 r e f 7 = R · A · MHPE · 6 Y 7 , R = A r e f · MHPE 4 5 4 5 4 Zr e f Z 0.0000 2

−1.1882

0.6388 0.0000

3 0.2313 7 7 0.0000 7 7. 5 1.0000 (2.24)

2.3. Colour Appearance

33

Then, the colour appearance attributes are modelled as follows: äσ Ä L R = 100 Yr e f , hÄ äσ Ä äσ i aR = 430 X r e f − Yr e f , hÄ äσ Ä äσ i , − Zr e f bR = 170 Yr e f ∆� � � � 2 2 CR = aR + bR ,

Lightness Redness − Greenness Yellowness − Blueness Chroma

C

(2.25) (2.26) (2.27) (2.28)

R

Saturation

sR =

Hue angle

hR = t an−1 (bR /aR ) .

LR

,

(2.29) (2.30)

Finally, hue composition H R is calculated by linear interpolation of the values in Table 2.3. hR

Red

Blue

Green

Yellow

HR

24

100

0

0

0

R

90

0

0

0

100

Y

162

0

0

100

0

G

180

0

21.4

78.6

0

B79G

246

0

100

0

0

B

270

17.4

82.6

0

0

R83B

0

82.6

17.4

0

0

R17B

24

100

0

0

0

R

Table 2.3: Hue angle conversion to hue composition in the RLAB model. The RLAB model includes a rigorous medium parameter D, accepting that colour appearance depends on medium type. On the other hand, it conducts the chromatic adaptation in the physiological cone colour space; but we experienced that its hue and colourfulness estimation performance is reduced when compared to the original, CIELAB (see Chapter 5 for more details on comparison). Thus, we were skeptical that the physiologically-plausible structure is a better choice than the hybrid structure (psychophysical chromatic adaptation and physiological pipeline), and our model inherits the hybrid structure instead of the physiologically-plausible structure for chromatic adaptation (see Chapter 5). Hunt94

Hunt94 is the latest in a series of colour appearance models by the author [Hunt, 1982;

Hunt and Pointer, 1985; Hunt, 1987, 1991, 1994]. The Hunt94 model is a predecessor to the CIECAM97s model. The Hunt94 model is based on the physiological zone theory [Müller, 1930]. For instance, the Hunt94 model does not have the separate chromatic adaptation procedure at the beginning, whereas the adaptation is generally adopted for high accuracy in other colour appearance models. To place chromatic adaptation before the cone responses using the von Kries transform is not physiologically plausible, at least under the assumption that not the cones but the visual cortex interprets the hue of colours. Differing from CIECAM97s and CIECAM02, the chromatic adapta-

2.3. Colour Appearance

34

tion in the Hunt94 model is implemented as part of the cone adaptation calculation. Nonetheless, the Hunt94 model provides a basic structure in comparison to other colour appearance models. However, its application has been limited by the mathematical complexity of the model (the most complicated model ever). The model comprises three stages: dynamic cone adaptation, colour decomposition (achromatic and colour opponent signals), and colour appearance attribute modelling. It has the largest number of input parameters among colour appearance models: • Normalised (Y equal to 100) CIE tristimulus values (observed main colours): X Y Z, • Normalised tristimulus values of the reference white point: X W YW ZW , • Level of luminance adaptation: LA [unit: cd/m2 ]

(LA is normally taken to be 20% of the luminance of the reference white.),

• Normalised luminance of background: Yb ,

• Scotopic luminance of the adapting field: LAS [unit: scotopic cd/m2 ]

(LAS can alternatively be approximated from the photopic luminance adaptation: LAS = 2.26LA (T /4000 − 0.4)1/3 where T is correlated colour temperature),

• Scotopic normalised luminance of colour sample to the reference white: S/SW , (If it is not available, Y /YW can be substituted instead.)

• Background parameters: Nc b and N b b [Nc b = N b b = 0.725(YW /Yb )0.2 ],

• Surround parameters (specified in Table 2.4): N b and Nc . Surround conditions

Nb

Nc

Small areas in uniform backgrounds and surrounds

300

1.0

Normal scenes

75

1.0

Television and CRT displays in dim surrounds

25

1.0

Cut-sheet transparencies on light boxes

25

0.7

Projected transparencies in dark surrounds

10

0.7

Table 2.4: Surround parameters in the Hunt94 model. First, the input tristimulus values are transformed into a physiological cone colour spaces using the HPE transform (see 2.20 for more details of the transform): 2 3 2 3 6 ⇢ 7 6 X 7 6 7 6 7 6 γ 7=M 6 7 HPE · 6 Y 7 . 6 7 4 5 4 5 β Z

(2.31)

They are then compressed by the revised Naka-Rushton equation [see Equations (2.11) and (2.33)]: ⇢a = B⇢ [ f n (F L F⇢ ⇢/⇢W ) + ⇢ D ] + 1 , γa = Bγ [ f n (F L Fγ γ/γW ) + γ D ] + 1 , βa = Bβ [ f n (F L Fβ β/βW ) + β D ] + 1 ,

(2.32)

2.3. Colour Appearance

35

where the function f N is defined as: f n (I) = 40[I 0.73 /(I 0.73 + 2)] .

(2.33)

Here we can observe the exponent constant is almost identical to [Valeton and van Norren, 1983] (0.73 ⇡ 0.74), which is derived from primate cone measurements. The luminance-level adaptation

is modelled as F L , which is inherited in CIECAM97s and CIECAM02:

F L = 0.2k4 (5LA) + 0.1(1 − k4 )2 (5LA)1/3 , k = 1/(5LA + 1) .

(2.34)

The formulae below also include the chromatic adaptation factors F⇢ , Fγ , and Fβ , which are modelled as follows: 1/3

1/3

1/3

1/3

F⇢ = (1 + LA + h⇢ )/(1 + LA + 1/h⇢ ) , Fγ = (1 + LA + hγ )/(1 + LA + 1/hγ ) ,

(2.35)

1/3 1/3 Fβ = (1 + LA + hβ )/(1 + LA + 1/hβ ) ,

where parameters h⇢ , hγ , and hβ are modelled: h⇢ = 3⇢W /(⇢W + γW + βW ) , hγ = 3γW /(⇢W + γW + βW ) , hβ = 3βW /(⇢W + γW + βW ) .

(2.36)

As opposed to other appearance models, the Hunt94 model predicts the Helson-Judd effect (see Section 2.3.3 for more details on the effect) and the cone pigment bleaching effect. In the above formulae, scalars ⇢ D , γ D , and β D are used for modelling the Helson-Judd effect: ⇢ D = f n [(Yb /YW )F L Fγ ] − f n [(Yb /YW )F L F⇢ ] , γ D = 0.0 ,

(2.37)

β D = f n [(Yb /YW )F L Fγ ] − f n [(Yb /YW )F L Fβ ] . The pigment bleach is modelled as follows: B⇢ = 107 /[107 + 5LA(⇢W /100)] , Bγ = 107 /[107 + 5LA(γW /100)] ,

(2.38)

Bβ = 107 /[107 + 5LA(βW /100)] . For the next stage, the Hunt94 model calculates achromatic signals and colour opponent signals. The achromatic signal transform in the Hunt94 model is rather complicated. The Hunt94 model even considers the photopic and scotopic vision. First, photopic vision is modelled by taking a weighted average of the three cones (L:M:S ⇡ 40:20:1) [Vos and Walraven, 1971]: Aa = 2⇢a + γa + (1/20)βa − 3.05 + 1 .

(2.39)

Second, the scotopic vision is modelled in a more complex way as follows: AS = 3.05BS [ f n (F LS S/SW )] + 0.3 ,

(2.40)

2.3. Colour Appearance

36

where the parameters F LS and BS are defined as: F LS = 3800 j 2 5LAS /2.26 + 0.2(1 − j 2 )4 (5LAS /2.26)1/6 ,

(2.41)

where j = 0.00001/(5LAS /2.26 + 0.00001) ,

(2.42)

BS = 0.5/{1 + 0.3[(5LAS /2.26)(S/SW )]0.3 } + 0.5/[1 + 5(5LAS /2.26)] .

(2.43)

The photopic and scotopic achromatic signals, Aa and AS , are combined to an achromatic signal: A = N b b (Aa − 1 + AS − 0.3 +

p

12 + 0.32 ) .

(2.44)

Then, the intermediate colour opponent signals C1 , C2 , and C3 are derived from zone theory: C1 = ⇢a − γa , C2 = γa − βa , C3 = βa − ⇢a .

(2.45)

These parameters yield redness–greenness and yellowness–blueness coordinates: MRG = 100[C1 − (C2 /11)][eS (10/13)Nc Nc b ] ,

Redness − Greenness

(2.46)

MY B = 100[(1/2)(C2 − C3 )/4.5][eS (10/13)Nc Nc b F t ] ,

Yellowness − Blueness

where eS = e1 + (e2 − e1 )

(hS − h1 ) (h2 − h1 )

,

(2.47) (2.48)

F t = LA/(LA + 0.1) .

(2.49)

Finally, the following colour appearance attributes are modelled, where the Hunt94 model calculates the brightness level first, then computes lightness: Brightness

Q = [7(A+ M /100)]0.6 N1 − N2 , where M =



2 MRG + MY2 B , N1 =

✓ Lightness

J = 100

Q QW

◆z

(7AW )0.5

5.33N b0.13

(2.50) , N2 =

7AW N b0.362 200

,

, where z = 1 + (Yb /YW )0.5 ,

(2.51) (2.52)

where the achromatic signal Q W of the reference white point X Y ZW is calculated in the same way as X Y Z main colours. In this model, lightness and brightness are related with saturation, chroma, and colourfulness through the chromatic response parameter M [see Equation (2.51)]: Saturation

s = 50M /(⇢a + γa + βa ) ,

(2.53)

Chroma

C94 = 2.44s0.69 (Q/Q W )Yb /YW (1.64 − 0.29Yb /YW ) ,

(2.54)

Colourfulness

M94 = C94 F L0.15 .

(2.55)

The hue angle hs is computed from the internal colour opponent signals, and the hue quadrature H (0–400) is computed by interpolating with respect to the eccentricity of each hue: �  (C2 − C3 )/9 , Hue hS = tan−1 C1 − (C2 /11) 100(hS − h1 )/e1 , Hue quadrature H = H1 + (h− h1 )/e1 + (h2 − h)/e2

(2.56) (2.57)

2.3. Colour Appearance

37

Unique Hue

Red

Yellow

Green

Blue

Hue angle hS

20.14

90.00

164.25

237.53

Eccentricity eS

0.8

0.7

1.0

1.2

Hue quadrature H

0

100

200

300

Table 2.5: Hue eccentricity parameters in the Hunt94 model. where H1 , h1 , and e1 are the hue quadrature, the hue angle, and the eccentricity values of the nearest lower unique hue angle of a given hue angle hS ; h2 and e2 are the hue angle and the eccentricity values of the nearest higher unique hue angle of hS in Table 2.5. The Hunt94 model was derived from a long study on photographic media, conducted at the Kodak research lab [Hunt, 1982; Hunt and Pointer, 1985; Hunt, 1987, 1991, 1994]. The formulae and structures were accumulated over many years. However, it involves a high complexity in mathematics, which results in a high computational cost and limits the model’s broad application. However, it forms the basic structure of modern colour appearance models. LLAB

The LLAB model [Luo et al., 1996] was derived from the analysis of psychophysical exper-

imental data, namely the LUTCHI colour appearance data set [Luo et al., 1991a,b, 1993a,b, 1995] (see Section 2.3.3 for more details). The LLAB model comprises chromatic adaptation (adopted from the so-called Bradford chromatic adaptation transform [Lam, 1985]) and a revised CIELAB colour space. Its structure is similar in a sense to the RLAB model. The LLAB model takes background measurements and surround parameters in order to predict the change of colour appearance by the luminance levels of background and surround, as observed in their experimental data. We review the mathematical details of this model that were revised and presented in [Luo and Morovic, 1996]. The input parameters to the LLAB model are: • Normalised (Y equal to 100) CIE tristimulus values (observed main colours): X Y Z, • Normalised tristimulus values of the reference white point (test): X o Yo Zo .

• Normalised tristimulus values of the reference white point (target): X or Yor Zor (The reference illuminant is defined to be CIE illuminant D65, X Y Zo r = [95.05, 100.00, 108.08]). • Level of luminance of the reference white point: L [unit: cd/m2 ],

• Normalised luminance of background: Yb ,

• Surround parameters: D, FS , F L , and FC (see Table 2.6).

First, normalised input tristimulus values are transformed into a psychophysically sharpened (optimised) colour space through the Bradford chromatic adaptation transform MBFD [Lam, 1985]: 2

3 2 3 2 6 R 7 6 X /Y 7 6 0.8951 6 7 6 7 6 6 G 7=M 6 7 6 BFD · 6 Y /Y 7 , MBFD = 6 −0.7502 6 7 4 5 4 5 4 B Z/Y 0.0389

0.2664 1.7135 −0.0685

3 −0.1614 7 7 0.0367 7 7. 5 1.0296

(2.58)

2.3. Colour Appearance

38 D

FS

FL

FC

Reflective samples in average surround (>4 )

1.0

3.0

0.0

1.0

Reflective samples in average surround (4◦ )

1.0

3.0

1.0

1.0

Television in dim surround

0.7

3.5

1.0

1.0

Cut-sheet transparencies in dim surround

1.0

5.0

1.0

1.1

35mm projection transparencies in dark surround

0.7

4.0

1.0

1.0

Surround conditions ◦

Table 2.6: Surround parameters in the LLAB model.

Three cone responses are adapted to the test reference white point as follows:

R r = [D(R o r /R o ) + 1 − D]R ,

(2.59)

G r = [D(Go r /Go ) + 1 − D]G .

(2.60)

In particular, the blue response is changed nonlinearly: 8 Br =

<

β

[D(Bo r /Bo ) + 1 − D]B β ,

B>0

: −[D(B /B β ) + 1 − D]|B|β , o or

, where β = (Bo /Bo r )0.0834 .

B0

(2.61)

The above function is added to achieve a better fit of the model to the psychophysical experimental data, improving the accuracy of the chromatic adaptation [Lam, 1985]. However, it leads to non-equal energy of the three cones and also limits the analytical invertibility of the chromatic adaptation. The scaled RGB responses are transformed back to tristimulus XYZ values: 2

3

6 Xr 6 6 Y 6 r 4 Zr

6 Rr Y 7 7 7 6 7 7 = M−1 · 6 G Y 7 . r BFD 6 7 7 5 4 5 Br Y

3

2

(2.62)

In the second stage, the LLAB model computes colour appearance attributes. Lightness and colour opponent channels are modelled in a similar way to CIELAB:

Lightness Redness − Greenness

Yellowness − Blueness

L L = 116 f (Yr /100)z − 16 , z = 1 + F L (Yb /100)1/2 , ä Ä äó î Ä , A = 500 f X r /X o r − f Yr /Yo r î Ä ä Ä äó B = 200 f Yr /Yo r − f Z r /Zo r , 8 where f (x) =

< :



x 1/Fs , 16 0.0088561/Fs − 116

0.008856

x > 0.008856

◆ x+

16 116

,

x  0.008856

(2.63) (2.64) (2.65)

.

2.3. Colour Appearance

39

The other colour appearance attributes are calculated as follows: Chroma

Ch L = 25ln(1 + 0.05C), where C =

Colourfulness

C L = Ch L S M SC FC ,

p

A2 + B 2 ) ,

(2.66) (2.67)

where S M = 0.7 + 0.02L L − 0.0002L 2L ,

(2.68)

SC = 1.0 + 0.47log L − 0.057(log L)2 ,

(2.69)

Saturation

SL =

(2.70)

Hue angle

h L = t an−1 (B/A) ,

(2.71)

Hue quadrature

H L = H L1 + (H L2 − H L1 )(h L − h L1 )/(h L2 − h L1 ) ,

(2.72)

Ch L LL

,

where H L1 and h L1 are the hue quadrature and the hue angle of the nearest lower unique hue angle of a given hue angle h L ; H L2 and h L2 are the hue quadrature and the hue angle of the nearest higher unique hue angle of h L in Table 2.7. hL

HL

Red

Yellow

Green

Blue

NCS expression

25

0

100

0

0

0

R

62

50

50

50

0

0

R50Y

93

100

0

100

0

0

Y

118

150

0

50

50

0

Y50G

165

200

0

0

100

0

G

202

250

0

0

50

50

G50B

254

300

0

0

0

100

B

322

350

50

0

0

50

B50R

Table 2.7: Hue angle conversion to hue composition in the LLAB model. As observed in the RLAB model, the LLAB model noted that the colour appearance depends on the medium type, so the LLAB model includes medium-dependent parameters, e.g., for cut-sheet transparencies distinctive from other reflective media. We also experienced and include such a change of colour appearance due to the medium type after analysis of our experimental data and LUTCHI data (see Chapter 5 for more details). CIECAM97s

CIECAM97s [CIE, 1998] is a predecessor to CIECAM02 and similar in spirit to, but

much more complex than, CIECAM02. Historically, CIECAM97s is a combination of the Hunt94 model (having physiological plausibleness) and the LLAB model (based on psychophysical data). The chromatic adaptation transform, the Bradford chromatic adaptation transform [Lam, 1985], is adopted from the LLAB model. The structure is adopted from the Hunt94 model. It is a considerable challenge to merge the psychophysical (LLAB) and physiologically-plausible (Hunt94) aspects into a model to achieve improved performance. However, its practical applicability is limited. For instance, the Bradford chromatic adaptation transform is non-invertible as it includes non-linear compression

2.3. Colour Appearance

40

of the short cone (blue) signals, and the performance in the prediction of saturation is unstable as it is influenced by its hue and luminance levels. CIECAM02 is in many respects its simpler but more powerful successor, overcoming the drawbacks of CIECAM97s. We review the mathematical details of the CIECAM97s model. The input parameters for this model are: • Normalised (Y equal to 100) CIE tristimulus values (observed main colours): X Y Z, • Normalised tristimulus values of the reference white point: X W YW ZW , • Level of luminance adaptation: LA [unit: cd/m2 ]

(LA is normally taken to be 20% of the luminance of the reference white.),

• Normalised luminance of background: Yb ,

• Surround parameters (specified in Table 2.8): c, Nc , F , and F L L . In particular, the input parameters to the CIECAM97s model includes a medium-dependent parameter F L L for the surround (see Table 2.8). The parameter specifies cut-sheet transparencies data, which is removed in CIECAM02. c

Nc

F

FL L

0.69

1.0

1.0

0.0

Average surround (4 )

0.69

1.0

1.0

1.0

Dim surround

0.59

1.1

0.9

1.0

Dark surround

0.525

0.8

0.9

1.0

Cut-sheet transparencies

0.41

0.8

0.9

1.0

Surround conditions Average surround (>4◦ ) ◦

Table 2.8: Surround parameters in the CIECAM97s model.

For the first stage, the CIECAM97s model uses the Bradford chromatic adaptation transform, MBFD [see Equation (2.58) in the LLAB model], which is often called CMCCAT97, which is inherited from the LLAB model: 2 3 3 6 X /Y 7 6 R 7 6 7 6 7 6 7 6 G 7=M BFD · 6 Y /Y 7 . 6 7 4 5 4 5 Z/Y B 2

(2.73)

As in the LLAB model, red and green responses are adapted to the test reference white point as follows: R c = [D(1.0/RW ) + 1 − D]R ,

(2.74)

Gc = [D(1.0/GW ) + 1 − D]G , 8 Bc =

< :

(2.75)

p

[D(1.0/BW ) + 1 − D]B p ,

p −[D(1.0/BW ) + 1 − D]|B| p

B>0 ,

B0

, where p = (BW /1.0)0.0834 .

(2.76)

2.3. Colour Appearance

41

CIECAM97s suffers the non-invertibility problem of the chromatic adaptation, which is inherited from the LLAB model. It was later on modified in CIECAM02 to address the invertibility problem. After the chromatic transformation, the scaled RGB responses are transformed back to tristimulus XYZ values, and then transformed into the cone colour space by using the HPE transform, MHPE , as in RLAB [see Equation (2.20) for more details on the transform]: 3 2 2 3 R Y ⇢ 6 c 7 6 7 7 6 6 7 −1 6 7 6 γ 7=M · M · HPE BFD 6 Gc Y 7 . 6 7 5 4 4 5 Bc Y β

(2.77)

Second, a hyperbolic function, originating from the Naka-Rushton equation [Equation (2.11)], compresses the cone signals: ⇢a = γa = βa =

40(F L ⇢/100)0.73 (F L ⇢/100)0.73 + 2 40(F L γ/100)0.73

(F L γ/100)0.73 + 2 40(F L β/100)0.73

(F L β/100)0.73 + 2

+1 , +1 ,

(2.78)

+1 ,

where parameter F L is calculated by a polynomial function. This is the same as Equation (2.34) in the Hunt94 model. Third, the cone responses are transformed into achromatic signals and colour opponent signals. The achromatic signals A are modelled as follows: Achromatic signal

0

0

0

A = [2R a + Ga + 0.05Ba − 2.05]N b b ,

(2.79)

n = Yb /YW , N b b = Nc b =

(2.80)

0.725 n0.2

.

The colour opponent signals, redness-greenness (a) and yellowness-blueness (b), are calculated (inherited from Hunt94) as follows: Redness − Greenness Yellowness − Blueness

0

12

(2.81)

1 0 0 0 b = (R a + Ga − 2Ba ) . 9

(2.82)

11

0

1

0

a = Ra −

Ga +

11

Ba ,

Finally, the following colour appearance attributes are modelled as follows: ◆ ✓ A cz , z = 1 + F L L n0.5 , Lightness J = 100 AW Brightness

Q = (1.24/c)(J/100)0.67 (Aw + 3)0.9 ,

(2.83) (2.84)

where the achromatic signal AW of the reference white point X Y ZW is calculated in the same way as the X Y Z main colours. Chroma Colourfulness Saturation

C = 2.44s0.69 (J/100)0.67n (1.64 − 0.29n ) ,

M = C F L0.15 , p 50 a2 + b2 100e(10/13)Nc Nc b , s= ⇢a + γa + (21/20)βa

(2.85) (2.86) (2.87)

2.3. Colour Appearance

42

where e is calculated by Equation (2.48) in the Hunt94 model. Hue angle h is derived by converting a and b into polar coordinates: Hue angle

h = t an−1 (b/a) .

(2.88)

The calculation of the hue quadrature H values are identical to those of the Hunt94 model [see Equation (2.57) and Table 2.5]. CIECAM97s forms the basic structure of the current standard appearance model, CIECAM02. The detailed differences are: the chromatic adaptation transform, the Bradford transform, is substituted with a new chromatic transform, CIECAT02, in order to rectify the invertibility problem, and the equations of colour appearance attributes are optimised differently in CIECAM02. CIECAM02

CIECAM02 [Moroney et al., 2002] is considered one of the most complete and accurate

colour appearance models. It originates from the CIECAM97s model through a few modifications [Fairchild, 2001; Hunt et al., 2002] (often called the Fairchild model and the FC model respectively). It follows the zone theory closely, but includes psychophysical optimisation in the chromatic adaptation. First, chromatic adaptation is performed using CIECAT02, which supports varying degrees of adaptation. The resulting white-adapted X Y Z values are then normalised. The cone response is modelled using Equation (2.11), but with a fixed σ, which causes the response to be similar to a power function (see Chapter 5 for more details on each equation). The opponent colour decomposition follows Section 2.3.1 closely. The final attributes include lightness, brightness, chroma, colourfulness, hue and saturation. CIECAM02 can model different surroundings and adaptation levels. We review the mathematical details of the CIECAM02 model. Many parts are similar or identical to CIECAM97s, hence we describe only the formulae that are different. Note that the medium dependent parameter F L L in CIECAM97s is removed in CIECAM02 (see Table 2.9). The input parameters for the CIECAM02 model are: • Normalised (Y equal to 100) CIE tristimulus values (observed main colours): X Y Z, • Normalised tristimulus values of the reference white point: X W YW ZW ,

• Level of luminance adaptation: LA [unit: cd/m2 ]

(LA is normally taken to be 20% of the luminance of the reference white.),

• Normalised luminance of background: Yb ,

• Surround parameters (specified in Table 2.9): c, Nc , and F . The main procedures fall into four different stages. First, the physically-meaningful input tristimulus values X Y Z are adapted with respect to the reference white point to yield colour constancy. Chromatic adaptation is calculated in a psychophysically sharpened colour space, called CIECAT02, originating from the revision of the CMCCAT2000

2.3. Colour Appearance

43

Surround conditions

c

Nc

F

Average surround

0.69

1.0

1.0

Dim surround

0.59

0.9

0.9

Dark surround

0.525

0.8

0.8

Table 2.9: Surround parameters in the CIECAM02 model. transform [Li et al., 2002]: 2 2 3 2 3 X R 6 7 6 0.7328 6 7 6 7 6 6 7 6 7 6 6 G 7=M CAT02 · 6 Y 7 , MCAT02 = 6 −0.7036 6 7 4 5 4 4 5 Z 0.0030 B

0.4296 1.6975 0.0136

3 −0.1624 7 7 0.0061 7 7. 5 0.9834

(2.89)

The Matrix MCAT 02 is normalised such that the tristimulus values for the equal-energy illuminant (X = Y = Z = 100) produce equal cone responses (L = M = S = 100) to ensure analytical invertibility. This means that the model handles the responses of three cones equally (which is then re-scaled by the proportion of their respective populations). The degree of chromatic adaptation depends on the absolute luminance level of LA, which is modelled as a parameter D:  ✓ ◆ ⇣ −(L +42) ⌘ � A 1 92 D = F 1− e . 3.6

(2.90)

Then, the chromatic adaptation is modelled in the CIECAM02 as follow: � R C = [(100D RW ) + (1 − D)]R , � GC = [(100D GW ) + (1 − D)]G , � BC = [(100D BW ) + (1 − D)]B .

(2.91)

The chromatically adapted values in CIECAT02 are then inverted back to the original CIEXYZ colour space through the inverse matrix, M−1 CAT 02 . Second, chromatically adapted colours are transformed into the physiological LMS cone colour space by using the HPE transform [see Equation (2.20) in the RLAB model]: 2 3 3 2 0 6 R 7 6 RC 7 6 7 7 6 −1 6 G0 7 = M 7 6 HPE · MCAT02 · 6 GC 7 . 6 7 4 0 5 5 4 B BC

(2.92)

The linear cone responses are compressed with a hyperbolic function. Although the function is derived from different optimisation (exponent changed from 0.73 to 0.42), they are similar to CIECAM97s’ cone response functions: 0

Ra = 0

Ga = 0

Ba =

400(F L R0 /100)0.42

27.13 + (F L R0 /100)0.42 400(F L G 0 /100)0.42

27.13 + (F L G 0 /100)0.42 400(F L B 0 /100)0.42

27.13 + (F L B 0 /100)0.42

+ 0.1 , + 0.1 , + 0.1 ,

(2.93)

2.3. Colour Appearance

44

where F L is calculated by Equation (2.34) in the Hunt94 model. In Chapter 5, we claim that the way to model cone responses in current colour appearance models can be improved upon to increase the dynamic range of our colour appearance model. We will discuss modelling cone responses later (see Chapter 5 for more details). Third, the simulated cone responses are transformed into achromatic signals and colour opponent signals. Achromatic signals are calculated as an average with respect to the population of the three cones (inherited from the Hunt94 model). Compared to CIECAM97s, only the achromatic signal equation is modified: 0

0

0

A = [2R a + Ga + (1/20)Ba − 0.305]N b b , where n = Yb /YW , N b b =

0.725 n0.2

.

(2.94) (2.95)

The colour opponent signal equations [redness-greenness (a) and yellowness-blueness (b)] are identical to the CIECAM97s model [see Equation (2.81) and (2.82)]. Finally, colour appearance attributes for a given stimulus are calculated: lightness (J), brightness (Q), chroma (C), saturation (s) and hue angle (h), colourfulness (M ) and hue composition (H): ◆ ✓ A cz p , z = 1.48 + n , Lightness J = 100 AW « J � 4 � AW + 4 F L0.25 , Brightness Q= c 100

(2.96) (2.97)

where the achromatic signal AW of the reference white point X Y ZW is calculated in the same way as X Y Z main colours. Chroma

Colourfulness Saturation

p

J/100(1.64 − 0.29n )0.73 , p (50000/13)Nc Nc b e t a2 + b2 , where t = 0 0 0 R a + Ga + (21/20)Ba ⇡ 0.725 e t = 1/4[cos(h + 2) + 3.8], Nc b = 0.2 , 180 n C = t 0.9

M = C F L0.25 , r M . s = 100 Q

(2.98) (2.99) (2.100) (2.101) (2.102)

The calculation of hue angle h is directly inherited from the CIECAM97s model [see Equation (2.88)], and the calculation of hue quadrature H from the hue angle h is identical to those of the Hunt94 model and the CIECAM97s model [see Equation (2.57) and Table 2.5]. Generally, the performance of the CIECAM02 model is good, and it is the current international standard for colour appearance modelling. However, as we will see in Chapter 5, it has difficulties with higher luminance levels, both in terms of colourfulness as well as lightness. We partially attribute this to the fact that input X Y Z values are normalised, which seems to lose important information. Kunkel and Reinhard [2009]

Kunkel and Reinhard [2009] introduced a neurophysiology-inspired

colour appearance model, which shows that chromatic adaptation and response compression in

2.3. Colour Appearance

45

CIECAM02 could be combined and that colour opponent channels could be derived from neurophysiological evidence [De Valois et al., 1997]. Compared to CIECAM02, their model removes the chromatic adaptation transform matrix [see Equation (2.89) for the transform] and merges the degree of adaptation in the chromatic adaptation [see Equations (2.90) and (2.91)] into a dynamic cone response function [see Equation (2.93)]. This revision changes the value σ for each cone respectively in the physiological cone response function [see Equation (2.11)]. This models the different responsivity trends (response curve shapes) of the three LMS cones. Consequently, their model employs different LMS ratios (4.19:1.00:1.17) for computing achromatic signals [see Equation (2.94)] and three different stages of colour opponent signals. First, a set of colour opponent signals (ac ,bc ) is used for modelling the chroma attribute: 3 2 2 0.5233 a 6 c 7 6 −4.5132 3.9899 7 6 6 6 b 7 = 6 −4.1562 5.2238 −1.0677 6 c 7 6 5 4 4 d 7.3984 −2.3007 −0.4156

3 2

3 0 L 7 6 7 7 6 7 7·6 M0 7, 7 6 7 5 4 0 5 S

(2.103)

where d is a normalisation constant and L 0 , M 0 , and S 0 are non-linear cone responses. Then, chroma C is calculated as follows: « 3

C = (10 t)

J

0.9

n 0.73

100

(1.64 − 0.29 )

,

where t =

Nc Nc b

p

ac2 + bc2

d

,

(2.104)

and J is lightness, see Equations (2.95) and (2.100) and Table 2.9 for Nc , Nc b , and n in CIECAM02. A second set of colour opponent signals (ah , bh ) is used to compute intermediate hue attributes h [a polar coordinate of (ah , bh )]: 2

3

2

3 0 L 7 6 7 −1.7198 7 6 0 6 5·6 M 7 7. 2.3476 4 0 5 S 3

6 ah 7 6 −15.4141 5=4 4 bh −1.6010

17.1339 −0.7467

2

(2.105)

Finally, a third set of colour opponent signals (a00 , b00 ) is derived from the ganglion-derived colour primaries r p , g p , y p , and b p : a00 = r p − g p , where

b00 = y p − b p ,

(2.106) (2.107)

r p = max(0,0.6581cos0.5390 (9.1 − h)),

(2.108)

y p = max(0,0.9041cos2.5251 (90.9 − h)),

(2.110)

g p = max(0,0.9482cos2.9435 (167.0 − h)),

(2.109)

b p = max(0,0.7832cos0.2886 (268.4 − h)).

(2.111)

The colour opponent signals are converted into a polar coordinate h0 [see Equation (2.88)]. Their model is simpler and theoretically more plausible in modelling cone response and chromatic adaptation than CIECAM02. It also shows a higher accuracy in predicting hue attributes when compared with CIECAM02, although it does not present significant improvements in predicting

2.3. Colour Appearance

46

lightness and colourfulness attributes. In addition, their model is invertible, hence can be used for imaging applications. It is interesting future work to combine their hue prediction with our colour appearance model.

2.3.5

Colour Difference

If a colour space is perceptually uniform, the difference between two colours can be represented as the Euclidean distance between their coordinates. The CIE 1976 uniform colour space, CIELAB, defines colour difference CIE ∆E as the Euclidean distance between two colours: ⇤ = ∆Eab



(L1⇤ − L2⇤ )2 + (a1⇤ − a2⇤ )2 + (b1⇤ − b2⇤ )2 .

(2.112)

However, it was found that perceptual uniformity of colour difference is not consistent, in particular around the blue hue [Luo et al., 2001]. Many other colour difference formulae have been suggested to correct the non-uniformity, e.g., CMC(l : c) [Clarke et al., 1984], BFD(l : c) [Luo and Rigg, 1987], CIE94 [CIE, 1995], and CIEDE2000 [CIE, 2001] colour differences. Below, we briefly review the latest standard colour difference, CIEDE2000, ∆E00 . This revision is based on psychophysical experiment data accumulated over many years, and its basic structure is similar to that of the BFD(l : c) colour difference formula. CIEDE2000 is the Euclidean distance between two CIELCH coordinates, where the difference of each dimension is rescaled by constants and an additional term is introduced for hue and chroma interaction. First CIEDE2000 computes intermediate colour coordinates L 0 , a0 , b0 , Ca0 b , and h0a b for the CIELAB coordinates: L0 = L⇤ ,

(2.113)

a0 = (1 + G)a⇤ ,

(2.114)

b0 = b⇤ , p Ca0 b = a02 + b02 ,

(2.115) (2.116)

h0a b = tan−1 (b0 /a0 ) , 0 0 where

B B G = 0.5 B @1 − @

Ca⇤b

7

7

Ca⇤b + 25

11/2 1 C C A C A, 7

(2.117) (2.118)

7

⇤ ⇤ and Cab is the mean of the Cab values for two different colours. Then, each colour difference in

each dimension is calculated as ∆L 0 , ∆C 0 , and ∆H 0 : ∆L 0 = L10 − L20 ,

∆Ca0 b = Ca0 b,1 − Ca0 b,2 , ∆

∆H a0 b = 2 where

Ca0 b,1 Ca0 b,2 sin

∆h0a b = h0a b,1 − h0a b,2 .

(2.119) Ç

∆h0a b 2

(2.120)

å ,

(2.121) (2.122)

2.3. Colour Appearance

47

After that, three weighting scalars S L , SC , and SH are computed as follow: SL = 1 +

where

0.015(L 0 − 50)2

(20 + (L 0 − 50)2 )1/2

,

(2.123)

SC = 1 + 0.045Ca0 b ,

(2.124)

SH = 1 + 0.015Ca0 b T ,

(2.125)

T = 1 − 0.17cos(h0ab − 30◦ ) + 0.24cos(2h0a b ) + 0.32cos(3h0a b + 6◦ ) − 0.20cos(4h0a b − 63◦ ),

(2.126)

and L 0 is the mean of two different L 0 s, and h0a b is the mean of the two angles: 8

h0ab =

<

(h0a b,1 + h0a b,2 )/2,

: (h0 + h0 )/2 − 180, a b,1 a b,2

h0a b  180◦

.

h0a b > 180◦

(2.127)

The hue–chroma interaction factor R T is modelled as follows: R T = −sin(2∆✓ )R C , n o where ∆✓ = 30exp −[(h0a b − 275◦ )/25]2 , 11/2 0 7 0 C C B ab . RC = 2 @ A 7 0 Ca b + 257

(2.128) (2.129) (2.130)

Finally, the colour difference CIE ∆E00 is calculated as follows: 2Ç ∆E00 = 4

∆L 0

kL SL

å2

Ç +

∆Ca0 b kC SC

å2

Ç +

∆H a0 b kH SH

å2

Ç + RT

∆Ca0 b kC SC

åÇ

∆H a0 b kH SH

å31/2 5 ,

(2.131)

where parameters k L , kC , and kH are chosen to best represent the viewing conditions. For general reference conditions, these parameters are set to be 1 ( k L = kC = kH = 1 ). We use CIEDE2000 in our work of the characterisation method for HDR imaging in order to compute perceptual difference values (see Section 3.5.1).

2.3.6

Summary

Colour appearance models are numerically derived from experimental measurements of colour appearance. Colour appearance occurs in the visual cortex; hence, physiological measurements of colour appearance is still challenging. Instead, psychophysical measurements have been broadly used for modelling human colour vision. This is the reason why we still depend on the classical zone theory [Müller, 1930]. The previous methods to model human colour vision fall into three categories. One is the psychophysical modelling approach used by CIELAB and LLAB models. They are derived from psychophysical experimental data so do not try to follow the zone theory. Those models perform comparatively well (see Chapter 5 for more details on qualitative comparison). However, these models are quite limited in their representation of the structure and process of human colour vision. Another approach is physiologically-inspired modelling such as the Hunt94 model. This

2.4. Gamut Mapping

48

approach is strongly based on the zone theory and physiological measurements of primate cone responses. Even though it is seemingly more rigorous, it is based on an unproven hypothesis and on physiological response measurements from primates, which may have different characteristics from humans. Finally, hybrid approaches are an empirical combination of three different approaches: zone theory, physiological measurements of primates, and psychophysical measurements of the human,, e.g., CIECAM97s, Fairchild, FC, and CIECAM02. The CIECAM02 model is the latest of the hybrid types. The main structure is based on the zone theory. The chromatic adaptation is from psychophysical measurements. The cone responses are modelled from primate measurements. The colour appearance attributes are modelled from psychophysical measurements again. Our colour appearance model also takes this hybrid structure after analysis of our experiments (see Chapter 5 for more details). On the other hand, colour appearance modelling largely depends on the psychophysical experimental data. However, available data are geared towards luminances under 690 cd/m2 , which is a low luminance level when compared to real-world luminances. This is the reason why current colour appearance models fail when predicting colour perception under high luminance levels. This also limits the application of current colour appearance models for reproduction of HDR images. Therefore, we built a new experimental environment by using a custom-built high-luminance display; then, we conducted a series of psychophysical colour experiments under high luminance. This enabled us to produce a novel colour appearance data set for high luminance levels. Such wide range of colour appearance data allowed us to build a novel colour appearance model that can cover the working range of the human visual system (about five-orders magnitude). Finally, the appearance model is used to complete cross-media colour reproduction in HDR imaging (see Chapter 4 and 5 for more details on the development of our colour appearance model).

2.4

Gamut Mapping

Device characterisation describes colour device by relating their device-dependent colour specification to device-independent coordinates, e.g., CIEXYZ. Such colour spaces commonly try to ensure that equal scale intervals between stimuli represent approximately equally perceived differences in the attributes considered. Colour appearance models additionally try to model how the human visual system perceives colours under different viewing conditions so that the physically-meaningful coordinates can be transformed into perceptually-uniform coordinates. In colour reproduction process, a forward device characterisation model of a input device converts device-dependent signals to physically-meaningful coordinates. A forward colour appearance model then interprets these physical values into perceptual correspondence. These applications yield perceptually uniform colour coordinates of the real world. Suppose a reverse order of this colour reproduction process with respect to an output device. An inverse colour appearance model with a target viewing environment of the output device converts the perceived colour attributes (through the input device) to physically-meaningful colour coordinates (for the output device). Successively, inverse device characterisation of an output device changes the physical values into

2.4. Gamut Mapping

49

output device-dependent signals, completing the chain of the colour reproduction process (see Figure 2.1). Here colour gamuts of input/output devices can be compared in a perceptual colour space. We reach a point where we need to consider how to map these two perceptual colour spaces in order to achieve high fidelity in colour reproduction. Gamut-mapping algorithms have been broadly researched, and aim to ensure a plausible correspondence of overall colour appearance between the original and the reproduction by compensating for the mismatch in the size, shape, and location between the original and reproduced gamuts [MacDonald, 1993; Luo and Morovic, 1996; Stone et al., 1988; Braun and Fairchild, 1999]. See [Morovic, 2008] for a complete overview of gamut-mapping algorithms. As long as the output medium is different from the input, it is impossible to physically reproduce the same number of colours. Gamut-mapping algorithms generally aim for a plausible reproduction of the image’s appearance rather than the appearance of individual colours in the input image. The gamut-mapping algorithms generally fall into two high-level categories. The first is gamut clipping algorithms, which aim to preserve all in-gamut colours in their original locations as far as possible, but clip the rest of the out-of-gamut colours to maintain high fidelity. For instance, a common gamut clipping method is to project an out-of-gamut colour towards the light axis along paths of constant lightness and hue

(a) Digital camera

(b) LCD display

Display

Display

sRGB

Camera

(c) Camera and display

(d) Display and sRGB

Figure 2.13: Gamut boundary comparison between a digital camera and an LCD display. Image (a) presents the measured gamut boundary of a digital camera, a Canon 350D in the CIELAB colour space. Image (b) shows the gamut boundary of an LCD display, an Apple Cinema HD display. Image (c) presents a comparison of these two different media. Most of the camera gamut is covered by the display gamut so that most of the captured camera gamut can be represented without any gamut mapping (1:1 direct mapping) except in case of extreme saturation. As shown in Image (d), the gamut size of the Apple display is almost identical to sRGB international standard gamut.

2.4. Gamut Mapping

50

in a lightness, chroma, and hue space. These methods are generally used when gamut mismatch is small, which is true in most cases. The second way is gamut compression algorithms, which make changes to all colours from the original gamut so as to distribute the differences caused by gamut mismatch across the entire image. These approaches are used when a larger difference need to be overcome. Suppose the input and output gamuts are identical to each other. The input media gamut can be mapped directly onto the output media gamut. Even when the input device gamut is smaller than the output one, the input colours can be mapped directly onto the output device colours. In these two cases, simple 1:1 gamut mapping yields a perceptual match between input stimuli and output stimuli. However, if the input gamut is bigger than the output gamut, e.g., the reproduction of a colour transparency film to newspaper, direct mapping leads to gamut clipping of the outside colours. Rendering attempts to deal with gamut difference between the original and its reproduction, and can be divided into four different categories [Hunt, 2004; ICC, 2004]: • Relative colorimetric: Assuming that the human eye always adapts to the white of the viewed medium, relative colorimetric intent uses an output medium white point. This means that the

white point of an image is changed to the medium’s white point. It preserves all in-gamut

0.7 Spectral locus sRGB gamut Real-world gamut

0.6

0.5

v'

0.4

0.3

0.2

0.1

0.0 0.0

0.1

0.2

0.3

u'

0.4

0.5

0.6

0.7

Figure 2.14: Gamut boundary comparison between the real-world gamut and sRGB colour space. Pointer [1982] measured a maximum gamut for surface colours of the real world from 4089 colour samples including Munsell Limit Color Cascade. The green outer boundary represents possible gamut size yields by single monochromatic light within the visible spectrum (380–780nm in wavelength) in CIE u0 v0 diagram, the so-called spectral locus. The red-lined boundary shows the possible gamut boundary in the real-world viewing environment, which is smaller than the spectral locus as the actual bandwidth of the spectrum in the real world more spread out than monochromatic lights. Finally, the blue triangle region represent sRGB colour space. As shown above, most of the gamut boundary of the real world is covered by the sRGB colour space. Adapted from [Pointer, 1982]

2.5. High-Dynamic-Range Imaging

51

colours in their original locations, but clips all out-of-gamut colours. It is regarded as a better choice when the gamuts of the source and the reproduction are similar. This method applies to most common cases and is defined as a default for ICC profiling [ICC, 2004]. • Absolute colorimetric: Absolute colorimetric intent preserves the original white point in reproduction so that the original white point is maintained on the output medium that may differ from the original. For instance, this method is broadly used for newspaper and professional proofing prints. • Perceptual: Perceptual intent is the default rendering intent in gamut mapping. It preserves all of the source gamut by compressing through scaling. This method also uses the output

medium’s white point. No clipping of the source gamut happens. It is a reasonable choice for source images that contains significant out-of-gamut colours. • Saturation: Without concerning itself with accuracy, saturation intent converts saturated

colours in the source to saturated colours in the destination by expanding the source image’s colour gamut to the output device’s gamut. All colours are changed and the white point is decided by the output medium.

We measured and characterised a digital camera (Canon 350D) and an LCD display (Apple Cinema HD Display) with a spectrophotometer (GretagMacbeth EyeOne Pro). These were used as input and output devices during the work that makes up this thesis. The gamut boundaries of these two devices are compared in Figure 2.13 and the sRGB colour space is also compared with the real-world colour gamut (see Figure 2.14). As it turns out, the measured colour gamut of the digital camera is smaller than that of the display in most regions of the gamut boundary. 1:1 gamut mapping is used for faithful reproduction so that all in-gamut colours in the input medium are directly mapped (1:1) in their original locations in the output medium (see Chapter 6 for more details). Our colour appearance model handles the luminance difference of input/output media (see Chapter 5 for more details). Other gamut mapping techniques are not handled in this thesis.

2.5

High-Dynamic-Range Imaging

The previous sections discussed background and related work of the three essential elements in classical cross-media colour reproduction. However, this classic system was established and developed with low-dynamic-range (LDR) imaging fundamentals. It is well known that the LDR imaging system has obvious limitation in capturing and representing real-world optical radiation, as mentioned in Section 1.1. Current LDR imaging and LDR displays are based on a discretised signal structure, e.g., using 8-bit or 16-bit integer levels, which has obvious limitations. For instance, the camera cannot capture higher dynamic ranges than 16bits, or the display cannot produce colours of less than 1-bit signal depth. High-dynamic-range (HDR) imaging [Mann, 1993] and HDR display systems [Seetzen et al., 2004] have been developed to overcome these dynamic range limits. Owing to the new technology, we can capture a much higher dynamic range of luminance, a range similar to human

2.5. High-Dynamic-Range Imaging

52

vision, and we can display the captured higher dynamic range of data. However, the state of the art has mainly focused on the extendability of dynamic range from a tone-reproduction point of view, and has not considered colours rigorously. The work merely extended the dynamic range of each sensing/display channel on the existing platform. Current colour HDR imaging is achieved by merely combining the extended multi-chromatic channels, e.g., of red, green, and blue, together as a colour image. On the other hand, although we can overcome the dynamic range limit in the capturing stage, we are facing a reproduction problem at the display stage since HDR displays are not yet available. As presented in Figure 1.1(b), the range of the captured HDR image exceeds that of common LDR displays significantly. Gamma correction is not enough to compress the dynamic range of the captured images. Consequently, HDR images cannot be reproduced by simply rescaling the values of the HDR images to that of the display. If done nonetheless, most of the interesting information in the HDR images is lost by the discretisation of the display signal. Tumblin and Rushmeier [1993] proposed a non-linear mapping to reproduce HDR images on common LDR displays with a similar appearance to that observed by the human visual system, so-called tone mapping or tone reproduction. Many different HDR image acquisition algorithms and tone-mapping algorithms have been developed over the years. We will briefly review the common algorithms. In HDR imaging, we review how to solve a camera exposure function to derive a radiance map from LDR camera signals. We briefly review the structure of HDR displays. Finally, we review the state-of-the-art tone-mapping algorithms with respect to colour reproduction and appearance modelling.

2.5.1

High-Dynamic-Range Image Acquisition

Imaging sensors digitise incident illumination into digital signals within a certain range, which is often limited by the capacity of the solid-state wells and the ADC. State-of-the-art ADCs produce 12- or 14-bit discrete signals as integers. If the dynamic range of illumination exceeds the ADCs’ capacity, the output signal is saturated. To this end, Mann [1993] proposed a novel method to overcome the dynamic range limit. By taking the exposure time factor into account, the method concatenates a series of different exposures as a continuum, resulting in an HDR image. Exposure on the sensor H is the product of irradiance E and exposure time ∆t. Once we have the response function of a camera f (x) to output camera signal Z for a given exposure H, the inverse application of the function f −1 (x) yields the exposure H. As a result, the summation of the exposure H divided by the time interval ∆t yields irradiance E at each pixel location (x, y). Supposing the irradiance on the sensor is linear to the scene radiance [see Equation (2.5)], we can derive relative radiance measures up to a scalar from the camera signal: E(x, y) =

N −1 X

H j (x, y)

j=0

∆t j (x, y)

, where H(x, y) = f −1 (Z(x, y)),

(2.132)

and j represents the multi-exposure sequence number and N indicates the total number of exposure sequences.

2.5. High-Dynamic-Range Imaging

53

HDR radiance maps can be generated from ordinary sensor responses as a solid-state sensor produces linear responses to incident luminances [Mann and Picard, 1995; Yamada et al., 1995; Xiao et al., 2002]. These methods employ raw sensor signals by taking into account exposure times. In practice, digital cameras output non-linear response characteristics given incident light (see Figure 2.7 for the typical OETF of digital cameras); hence, a camera response function is generally required to derive exposure levels from given camera signals. As it turns out, this response function can be directly derived from the camera signals. Many such HDR image acquisition algorithms have been developed over the years. We will briefly review the common techniques. Debevec and Malik [1997] introduced a method to generate HDR images from multi-exposed ordinary photographs (not sensor signals). The key contribution of this method is to estimate a camera exposure function for a given exposure without requiring extra physical measurements of the cameras properties. The function is estimated from pixel data with exposure time information in a curve fitting sense. They assume that the camera response is a smooth and monotonically increasing function f (x) as a constraint to solve the under-determined function. If ln f −1 (x) at pixel Zi j is defined as g(Zi j ), the camera response function can be estimated by minimising the following error function: O=

P N X X i=1 j=1

{w(Zi j )[g(Zi j ) − ln Ei − ln∆t j ]}2 + λ

Zmax X−1 z=Zmin +1

[w(z)g 00 (z)]2 ,

(2.133)

where N is a number of pixel locations, P is a number of exposure sequences, Z is a pixel response, ∆t j is a relative exposure time, λ is the weighting constant, g 00 (z) is the second derivative of the function g(Zi j ), and w(z) is a pyramid weighting factor: 8 w(z) =

< z − Zmin , z  1 (Zmin + Zmax ) 2 . 1 : Z − z , z > (Z + Z ) max min max 2

(2.134)

In Equation (2.133), the first term is for concatenating the camera responses in different exposures; the second term is a smoothness term at each joint point of the LDR responses; the λ is empirically determined. Once the inverse logarithmic camera response function g(z) is recovered, the radiance values of each pixel in different exposure sequences are accumulated with a pyramid-weighting factor [see Equation (2.134)]; consequently, it yields a relative HDR radiance value (up to a scalar) at each pixel Ei : P P

ln Ei =

j=1

w(Zi j )(g(Zi j ) − ln(∆t j )) P P j=1

.

(2.135)

w(Zi j )

The main impact of their method is to allow greater access to HDR imaging so that any digital camera can be used to build HDR images without requiring any specific hardware such as a

2.5. High-Dynamic-Range Imaging

54

spectroradiometer. However, their estimation approach may produce noise depending on the sampleddata. Even though the parameter λ is helpful for stabilising performance, it may result in the loss of important information when estimating the camera exposure function. Mitsunaga and Nayar [1999] model the camera exposure function as a high-order polynomial function, while Debevec and Malik [1997] and Robertson et al. [1999, 2003] solve the camera function without assuming a polynomial function. The camera exposure function f (x) of pixel value Z is modelled as a polynomial function: f (Z) =

N X

cn Z n .

(2.136)

n=0

The exposure function is solved by minimising the below error function ":

"=

Q−1 P XX q=1 p=1

2 4

N X n=0

n cn Z p,q − Rq,q+1

N X n=0

32 n 5 , cn Z p,q+1

(2.137)

where Q is a total number of images used, N is a polynomial degree, and P represents each pixel location. cn is the coefficient to the polynomial. The optimisation can be solved by determining where the partial derivatives are all zero with respect to the polynomial coefficients @ "/@ cn = 0. The equation is solved iteratively until the minimum error reaches a certain level. They also constrain the maximum order of the polynomial degree up to the tenth order. Once the camera response function is recovered, the radiances in different exposures (scaled by the time intervals) at each pixel are accumulated as in [Debevec and Malik, 1997] [see Equation (2.135)]. While the assembly algorithm of [Debevec and Malik, 1997] requires the complete information of a series of exposure time intervals, Mitsunaga and Nayar [1999]’s algorithm needs only the first exposure time interval and computes the other time intervals. However, considering that the LDR source images are usually taken in identical exposure intervals, it is not a big benefit in practice. Nonetheless, [Mitsunaga and Nayar, 1999] is computationally more efficient and robust than [Debevec and Malik, 1997] such that the camera response function is smoothly increasing and monotonic. Nayar and Mitsunaga [2000] introduced the application of one-shot HDR imaging, so-called spatially varying exposure (SVE) imaging, by placing a set of mosaic neutral density filters in front of the sensor. This avoids the registration problem of the previous multi-exposure HDR imaging, e.g., [Debevec and Malik, 1997]. In their hardware, four neighboring pixels have different exposures respectively, and this pattern is repeated over the detector array. It is an innovative approach to produce HDR images without taking multi-exposure sequences that enables the capture of moving objects as HDR video. Göesele et al. [2001] solves the exposure function by using the ICC profile, which converts device-dependent signals (non-linear RGB) into device-independent signals, so-called profile-connect space (PCS), e.g., CIEXYZ coordinates — colour space adapted in the D50 illuminant [ICC, 2004]. Then the exposure sequence x i, j , yi, j , and zi, j , scaled by the time interval T j , are averaged with

2.5. High-Dynamic-Range Imaging

55

e0

e3

e1

e2

Figure 2.15: Mosaic neutral-density filter for spatial varying exposure imaging. Four different exposures of neutral density filter are installed in front of the detector array. The difference between neutral density is e3 = 4e2 = 16e1 = 64e0 . Adapted from Nayar and Mitsunaga [2000].

weighing factor w: P X i = Tn

j

X i, j T j−1 w(X i, j , Yi, j , Zi, j )

P Yi = Tn

j

P Zi = Tn

j

P j

w(X i, j , Yi, j , Zi, j )

,

Yi, j T j−1 w(X i, j , Yi, j , Zi, j ) P j

w(X i, j , Yi, j , Zi, j )

,

(2.138)

Zi, j T j−1 w(X i, j , Yi, j , Zi, j ) P j

w(X i, j , Yi, j , Zi, j )

.

After that, the HDR XYZ image is transformed into the display signals through an output ICC profile, which converts the device-independent signals (CIEXYZ) into device-dependent signals (non-linear display RGB). This approach is a method to utilise HDR images in the colour management workflow, which can produce better colour reproduction across its pipeline. However, this method inherits drawbacks from the ICC profile mechanism. The proposed method needs to measure the white point of the captured scene to achieve colour consistency; otherwise it needs to capture the reference target in every capture as the ICC input profile is specific only to a certain illumination condition (where it was generated). In practice, this aspect limits their application for capturing HDR images. The method also does not include a tone-mapping algorithm to reproduce images. It merely applies gamma correction, which is built in the ICC profile mechanism. HDR Image Formats

Captured HDR radiance is usually represented as floating-point data. Any im-

age format that supports floating-point data can be used for storage of the HDR images, e.g., RGBE format [Ward, 1992], OpenEXR [Lucas Digital Ltd., 2006], or Portable Float Map (PFM) [IEEE, 1985]. The RGBE file format has been distributed as a part of the freely available application Radiance [Ward, 1992]. It is broadly used in HDR and graphics applications. It has four channels:

2.5. High-Dynamic-Range Imaging Fresnel lens and diffuser

56

LCD panel

Projector

LCD controller PC PC with a dual-VGA graphic card

Figure 2.16: Design of a high-dynamic-range display. In the general structure of the LCD display, a DLP projector or LED panel is substituted for the fluorescent back-light unit. Consequently, the HDR display can produce higher contrast resolution than the ordinary display does and higher luminance levels. Adapted from [Seetzen et al., 2004].

three mantissas for red, green, and blue, and one exponent that is shared by these three colour channels; therefore, each colour value comprises two bytes of a mantissa and a shared exponent (half-precision float). The memory size for a pixel is 32 bits (4 bytes). However, it cannot cover the whole visible colour gamut, and colour saturation may occur as the three mantissa channels share one exponent. For example, if there is a colour which has large variation of colourfulness, the colour information will be clamped when it is encoded. The other drawback is that the number of mantissa bits (8bits) is rather smaller; hence, the RGBE format has limited precision. Lucas Digital Ltd. [2006] introduced open-source file input/output interface, called OpenEXR. This format is a general purpose wrapper for 16bits half-precision float type. It comprises a sign bit for the exponent, five bits for the exponents, and ten bits for the mantissa. It further supports wavelet compression. The memory size for a pixel is 48 bits (6 bytes). However, considering that most of HDR applications use single or double precision float internally, it loses tone precision of when restoring image data. In addition, the maximum value that can be stored is limited to 65504.0. The PFM file format stores single precision data directly without loss (IEEE storage format for the 32bits (4bytes) single precision float type [IEEE, 1985]). It comprises a sign bit for the exponent, 8bits for the exponent, and 23bits for the mantissa per each pixel in the interleaved mode. The total memory size for a pixel is 96bits (12bytes). The precision is high, but the file size is larger when compared to other HDR formats.

2.5.2

High-Dynamic-Range Display

Seetzen et al. [2004] introduced an HDR display system, which was created by substituting a digital light processing (DLP) projector for the fluorescent back-light unit of an ordinary LCD display. As a result, the display can display images with a higher dynamic range and a contrast ratio of 1:50 000 as the backlight is now spatially varying. Depending on the exact configuration, the maximum

2.5. High-Dynamic-Range Imaging

57

luminance goes up to 2 700 cd/m2 . As shown in Figure 2.16, the projector-based HDR display requires 100cm in depth, which is a drawback. Hence, they developed another type with lightemitting diode (LED)-based back-light modulator. The LED-based model has a low-resolution back light behind the diffuser of the LCD panel. It has a higher maximum output luminance of up to 8 500 cd/m2 . The LEDs are powered individually to form a low-frequency luminance map behind the displayed image. Thus the HDR display makes dark regions appear darker and in higher contrast than a uniform back-light modulation. In order to build a controllable viewing environment of our psychophysical experiment under high luminance levels, we built a high-luminance display. Our display substitutes hydrargyrum medium-arc iodide (HMI) bulbs for the florescent back-light unit of an LCD display so that its maximum luminance increases to 16 860 cd/m2 (see Chapter 4 for more details on our high-luminance display).

2.5.3

Tone Reproduction in High-Dynamic-Range Imaging

HDR imaging has been introduced to record real-world radiance values, which can have a much higher range than that of ordinary imaging devices. HDR radiance maps can have a dynamic range of about nine to ten orders of magnitude. Photographic HDR images or artificial radiance maps cannot be displayed properly on low-dynamic-range (LDR) output devices (with about two orders of magnitude) due to the huge difference in dynamic range (see Figure 1.1). Consequently, the dynamic range of the HDR scene needs to be mapped into the range of an output device, which is called tone reproduction or tone mapping. Tone Mapping

Tone mapping is related to colour appearance modelling and cross-media colour

reproduction as it tries to preserve the perception of an image after remapping to a low-luminance display; however, generally only tone (and not colourfulness) is considered. Over the years, many different tone reproduction operators have been developed since [Miller and Hoffman, 1984]. The majority of research has focused on improving local contrast, pursuing fewer artifacts and more efficient computation times [Schlick, 1994; Rahman et al., 1996; Ferwerda et al., 1996; Pattanaik et al., 1998; Tumblin and Turk, 1999; Pattanaik et al., 2000; Funt et al., 2000; Fattal et al., 2002; Reinhard et al., 2002; Durand and Dorsey, 2002; Johnson and Fairchild, 2003; Meylan and Süsstrunk, 2004; Li et al., 2005]. Global operators have received less attention [Tumblin and Rushmeier, 1993; Ward, 1994; Ward et al., 1997; Drago et al., 2003; Reinhard and Devlin, 2005] since high contrast appearance is difficult to achieve, but on the plus side they do not suffer from halo-artifacts like many local operators and are much more efficient. Among the previous tone-mapping algorithms, we will briefly review the relevant techniques (see [Reinhard et al., 2005] for a complete overview of other tone-mapping algorithms). This section also contains detailed mathematics of the methods. They are included here as a reference, and the reader is welcome to continue to Section 2.5.4 for a general summary of HDR imaging. First, we will briefly review the global operators.

2.5. High-Dynamic-Range Imaging Global Operators

58

Tumblin and Rushmeier [1991, 1993] were pioneers in addressing the research

question of how to render computer-generated HDR images. Their approach is to manipulate the tone-reproduction curves of HDR images by utilising the brightness perception model by [Stevens and Stevens, 1963]. It originates from scientific insights of the colour reproduction mechanism in humans with respect to tone mapping. Their tone-reproduction operator comprises three elements: a real-world observer function, an inverse display observer function, and an inverse display device function so that the perceived brightness on the display Bd matches that of the original scene B r w (See Figure 2.17). In particular, their insight into the HDR reproduction pipeline influenced our approach. They are only concerned with luminance mapping and derive their formulae from previous psychophysical assumptions, whereas we conducted psychophysical experiments to measure colour appearance attributes and modelled them for use in the reproduction pipeline (see Chapter 6 for more details on our method). Ward [1994] introduced a simple tone-mapping operator, which controls the contrast of HDR images with respect to the threshold in the human visual system to a given luminance intensity. The simplest way to achieve tone-mapping is to scale the captured real-world luminance L w at pixel (x,y) to the range of a display luminance L d with an appropriate scalar m: L d (x, y) = m · L w (x, y) .

(2.139)

Considering the non-linear responsivity of the human visual system to given luminance, a thresholdversus-intensity function t (a human observation function, corresponding to the forward colour appearance model) is used: t[L d (x, y)] = m · t[L w (x, y)] ,

(2.140)

where m is derived by solving t[L d (x, y)]/t[L w (x, y)], based on [CIE, 1981]. Finally, the tone-

Tone reproduction operator Real-world observer

L rw

Real-world luminance Real-world observer

Brw = B d

Brw

Perceptual Match

Inverse display observer

Bd

Ld = B d

Display observer

Ld

Inverse display device Display inputs

n

Display device

Figure 2.17: Schematic diagram for tone reproduction operators, adapted from [Tumblin and Rushmeier, 1993]. Their proposed tone-reproduction operator comprises real-world observations, inverse display observations, and an inverse display device function that achieves a perceptual match between real-world observation and the observation of the reproduced image on the display.

2.5. High-Dynamic-Range Imaging

59

mapping function m is modelled as: 0 m=

1 L d,ma x

⌘ 12.5 ⇣ L d,ma x 0.4 1.219 + B C 2 B C , 0.4 @ A 1.19 + L wa

(2.141)

where L d,max is the maximum display luminance assumed in the range 30–100 cd/m2 ; the level of real-world luminance adaptation L wa is estimated as the log average of the image’s luminance levels: L wa = exp

1X N

x, y

! log(10−8 + L w (x, y))

.

(2.142)

This method and [Tumblin and Rushmeier, 1993] form fundamentals for later tone-mapping algorithms. While Tumblin and Rushmeier [1993] suggest a fundamental pipeline for tone-mapping algorithms, Ward [1994] suggests a more practical idea to achieve tone mapping with respect to the human visual system. In particular, Equation (2.142) is adopted in many other tone-mapping algorithms for estimating the real-world luminance adaptation in HDR images [Pattanaik et al., 1998; Reinhard et al., 2002]. Equation (2.141) is extended further by Ferwerda et al. [1996] based on real measurements of the luminance response of the human visual system. Ward et al. [1997] suggested a global adaptation approach, which is based on histogram equalisation; furthermore, it models the subjective perception of the scene by borrowing the perceptual measurements of the contrast threshold. Their histogram equalisation decreases the contrast of less populated luminances and increase the contrast of more populated luminances respectively. This method first computes a histogram and cumulative distribution function from the logarithmic values of luminance, which is only used for obtaining a distribution. However, they found that the naive histogram equalisation method exaggerates contrast; hence, they imposes an upper bound onto the slope of the cumulative histogram remapping curve. But this changes the total pixel count in the histogram, which also affects the upper bound. They conduct histogram adjustments iteratively to a certain tolerance level. The level is decided in an empirical manner. The histogram is taken between the minimum and maximum values in equalised bins in the logarithmic scale of luminance (100 bins are used). The histogram equalisation function Bd e is applied in pixel values between log(L d min ) and log(L dmax ). The function Bde follows: Bde = log(L d min ) + [log(L d max ) − log(L d min )]cd f (Bw ) ,

(2.143)

where Bde is the computed display brightness log(L d ), L d min is the minimum of the display luminance (black level) [cd/m2 ], L dmax is the maximum of the display luminance (white level) [cd/m2 ], Bw is the world brightness log(L w ), and cd f () is the cumulative distribution function. Their method also considers the limitations of human vision: glare, colour sensitivity, and visual acuity. It includes functions to simulate glare that is caused by bright sources in the visual periphery and which scatter light into the lens of the eye; furthermore, it includes a term to simulate colour sensitivity which is reduced in dark environments as the light-sensitive rods take over from the

2.5. High-Dynamic-Range Imaging

60

colour-sensitive cone system. The proposed method is able to compress HDR images very effectively and also provides relatively stable colourfulness in the results. The details in the shadow area are very well preserved. However, the physical relationship between the display signal and the HDR radiance map is changed considerably. Drago et al. [2003] introduced a global tone-mapping model which is based on logarithmic compression following the hypothesis by Fechner [1963] (see Section 2.3.2 for more details). They manipulate the base of the logarithm to adjust the contrast of images. The method originates from Fechner’s Law: ✓ B = k1 ln



L L0

,

(2.144)

where L0 denotes the luminance of the background and k1 is a constant factor. The proposed logarithmic compression is structured to compute display luminance L d through dividing real-world luminance LW by the maximum luminance in the scene L ma x : Ld =

log(L w + 1) log(Lmax + 1)

.

(2.145)

However, this simple logarithmic compression is not enough to handle various HDR radiance maps, hence the base of the logarithm is varied from two to ten with appropriate interpolation. This is computed by Perlin and Hoffert’s bias power function [Perlin and Hoffert, 1989]. The bias function is a power function defined over the unit interval where an intuitive parameter b remaps an input value to a higher or lower value (0.85 is used for b): log(b)

bias b (t) = t log(0.5) ,

(2.146)

where t is the relative intensity of luminance. Finally, the bias function of Equation (2.146) is merged with the compression function of Equation (2.145) to vary the base of logarithm to differing contrast: Ld =

L dmax · 0.01

log10 (L wma x + 1)

·

Ç log 2 +

log(L w + 1) Ç å ⇣ ⌘ log(b) Lw

log(0.5)

L wma x

å.

(2.147)

·8

The first factor in Equation (2.147) is the adaptation scale factor, which is derived from the denominator of Equation (2.145). It is the ratio of the maximum luminance of the display (assumed to be 100cd/m2 ) to the logarithm of the maximum world luminance. The denominator of the second factor in Equation (2.147) is the base of the logarithm, which is the interpolated ratio of world luminance to maximum world luminance from two to ten by using the bias function (b = 0.85). After that, the compressed luminance values are gamma-corrected to fit the display gamma (ITU-R BT.709): 8 E0 =

<

slope · L ,

: 1.099L

0.9 γ

− 0.099 ,

L  st ar t

L > st ar t

,

(2.148)

2.5. High-Dynamic-Range Imaging

61

where slope is the elevation ratio (slope=4.5) of the line passing by the origin and tangent to the curve, st ar t is the abscissa (st ar t=0.018) at the point of tangency, and γ is 2.2. The proposed method provides not only computational efficiency but also relatively plausible reproduction. However, the performance of this method is affected by the default parameter settings and image characteristics. Some images are overly bright or dark while others look fine. Reinhard and Devlin [2005] introduced an efficient global method, inspired by the physiological response of photoreceptors (cones), based on [Kleinschmidt and Dowling, 1975; Hood et al., 1979]. The photoreceptor response V according to intensity I is defined similarly to the MichaelisMenten equation [Valeton and van Norren, 1983] [see Equation (2.11)]: V=

I I + (I a )m

Vmax ,

(2.149)

where the exponent m is 0.3 + 0.7k1.4 , k is (Lmax − L av )/(Lmax − Lmin ), L av is the geometric mean of

the luminance, and the adapted pixel intensity I a is computed through interpolation of local (pixel intensity itself) and global (geometric mean of luminance) adaptation as follow: I a = a I al ocal + (1 − a)I ag l obal ,

(2.150)

where a is 0.5 (which means the arithmetic mean of the geometric mean of luminance and pixel global

value), I alocal = L , I a

av av = I r/g/b , where L is the luminance level of each pixel, and I r/g/b is the

exponent of L av [Reinhard et al., 2005]. Finally the pixel value Vr/g/b is gamma-corrected by 2.0 [Reinhard et al., 2005]: 1/2.0

0 Vr/g/b = Vr/g/b .

(2.151)

The proposed algorithm takes a similar strategy to the global adaptation part of [Reinhard et al., 2002]. Both methods describe the modified version of the Michaelis-Menten hyperbolic equation. However, the global operator of [Reinhard et al., 2002] produces more stable and plausible results than this proposed method [Reinhard and Devlin, 2005] (see Chapter 6 for a more detailed comparison). Furthermore, like [Drago et al., 2003], the performance of the proposed method is affected by the default parameter settings and image characteristics. For instance, some images appear overly bright or dark with the default parameter settings (see Chapter 6 for more details on comparison). Kim and Kautz [2008b] introduced a global tone reproduction operator which provides consistent tone reproduction. This method was tested with a large variety of HDR images and produced consistent results without adjusting parameters. Their method is inspired by the characteristic curve in photography, called DlogE plot [Hunt, 2004], which plots density (logarithm of reflective luminance) against logarithm of the luminance incident on the photographic material. For instance, the Stanford Church HDR image [Debevec and Malik, 1997] (see Figure 6.5) has a dynamic range (luminance) of 5.5 orders of magnitude (1:343 512). Imagine that the radiance

2.5. High-Dynamic-Range Imaging

62

map is observed on a display which has a dynamic range of 2.4 orders of magnitude (1:256, 8bits [Berns and Katoh, 2002]). By linearly scaling the HDR radiance map to the range of display luminance in the DlogE domain (scaled by approximately 0.43), the dynamic range of the HDR radiance map is adjusted to that of the display luminance. The dynamic range of these two is then identical. The scaling factor k1 is computed as follows: k1 =

log L dma x − log L dmin log Lsma x − log Lsmin

,

(2.152)

where log L dma x and log L dmin are the maximum and minimum luminances of the display signals and log Lsma x and log Lsmin are the maximum and minimum luminances of the HDR radiance map. The dynamic-range compressed image can be computed as: � � � � �� L1 x, y = exp k1 log L0 x, y ,

(2.153)

where L1 is the compressed luminance at pixel address (x, y) and L0 is the luminance of the HDR image at each pixel. When a linear scaling factor is applied, the slope of the tone reproduction line decreases in the DlogE domain. The rotating point in changing the slope is moved to the averaged log-luminance µ by subtracting the mean µ before scaling, and then adding it back in the DlogE domain. The linear scaling factor is then replaced with a non-linear function. A Gaussian-weighting of the scale factor k1 is performed such that it has a peak at the averaged log-luminance µ and a minimum at k1 (see Figure 2.18). This new Gaussian-weighted scale factor k2 (L) depends on the log-luminance L = log L0 (x, y) and has a range of k1  k2 (L)  1.0. This non-linear scale factor is computed as: � � k2 (L) = 1 − k1 w (L) + k1 , � �2 ! 1 x −µ d0 w(x) = exp − , σ= , 2 2 c1 σ

(2.154) (2.155)

where σ is the ratio of the dynamic range d0 of the log-luminances of the HDR image to the userparameter c1 . This adjusts the shape of Gaussian fall-off within the width of its characteristic curve. The parameter c1 influences the resulting brightness and local details of the tone-mapped image. They found that c1 ⇡ 3.0 is the maximum level that can compress contrast without losing detail in the bright areas of images.

The final non-linear mapping function is as follows (including the rotation around µ): � � ⇥ � � � � ⇤ L1 x, y = exp c2 k2 log L0 x, y − µ + µ .

(2.156)

Parameter c2 is also introduced, referred to as the efficiency factor, which scales the intensity of the non-linear weighting. Even though the display signal depth may have a dynamic range of 2.4 (1:256), the actual dynamic range of the display luminance is often lower than that of the signal (e.g., an Apple Cinema HD Display has a measured dynamic range of only 2.01). Therefore, the dynamic range of an HDR radiance map should be compressed more than that of the display signal depth. Parameter c2 is 0.84 ( = 2.01/2.4) for this specific display. However, based on the testing of

2.5. High-Dynamic-Range Imaging

63

k2 0.0

k1

1.0

log-luminance

µ

Figure 2.18: Range of the dynamic scale factor k2 . other displays with lower dynamic ranges, the c2 parameter should be set to lower than the above for general purpose. Setting c2 ⇡ 0.5 works for a wide variety of images and displays.

The Y coordinate of CIEXYZ is used as the luminance input value L0 for the proposed tone

reproduction operator. After obtaining the mapped luminance layer L1 , the X and Z channels are scaled by the ratio of mapped luminance to original luminance as [Schlick, 1994]. After obtaining the tone-mapped radiance map, they use the international specification for the sRGB colour space [IEC, 2003] to map the LDR radiance map onto the display colour space (CIEXYZ values are transformed into sRGB signals through the inverse transform matrix and gamma correction, corresponding to γ = 2.2 including a linear ramp for dark values [IEC, 2003]). In order to optimise the dynamic range of the display, a histogram is computed of the tone-mapped image and used to stretch the pixel levels between 1% and 99% of the range of display signals (effectively clamping values below 1% and above 99% and re-normalising to the 0%-100% range). Global tone-mapping algorithms often produces inconsistent reproduction results for the same default parameter set — some images are overly bright or dark while others look fine. It is beneficial for tone reproduction operators not to require any per-image parameter tweaking. Their proposed method shows consistent results across the set of images (photographic and computer-generated) without any need for parameter tweaking. However, this model is developed with theoretical assumptions in an empirical manner, without taking into account colour reproduction. Local Operators

Chiu et al. [1993] introduced the pioneering concept of local adaptation for HDR

tone-mapping. As the human visual system has different sensitivities to different spatial frequencies, the contrast of the pixel intensity f at pixel location (i, j) is controlled with a low-pass filter S(i, j) in order to simulate the change of frequency sensitivity: ˆ j) f (i, j). fˆ(i, j) = S(i,

(2.157)

The contrast scaling function S(i, j) is modelled as follows: 8 ˆ j) = S(i,

<

S(i, j),

S(i, j) <

:

1 , f (i, j)

S(i, j) ≥

1 f (i, j) 1 f (i, j)

, where S(i, j) =

1 , k f blur (i, j)

0  S(i, j) 

1 , f (i, j)

(2.158)

ˆ j) is proportional to the reciprocal of a filtered (blurred) function f blur ; S(i, j) has the value and S(i,

2.5. High-Dynamic-Range Imaging between 0 and

1 , f (i, j)

64

which accents dark areas and dims bright areas. f blur is generated by a low-

pass filtering through the Perlin and Hoffert interpolation [Perlin and Hoffert, 1989] between two intensities at two local points c0 and c1 as c = (−2t 3 + 3t 2 )c0 + (2t 3 − 3t 2 + 1)c1 , where t varies

from zero to one between c0 and c1 . Although the spatially-varying adaptation of luminance was a pioneering idea to overcome the difference of dynamic range, their results yield artifacts such as halos (see Figure 2.19 for more details). Tumblin and Turk [1999] introduced the concept of diffusion imaging, which involves gradient

mapping using a partial differential equation solver. The common local adaptation methods convert HDR images into the frequency domain and scale down only the low bandwidth channel. These methods compress the low frequency luminance selectively into the display’s range with the same details as the original. However, this yields typical artifacts, called halos [see Figure 2.19(a)]. On the contrary, they introduced a method to control the gains of pixel intensities in the gradient domain instead of the frequency domain. In order to detect the edge, the method uses the diffusion theory [Perona and Malik, 1990] with an assumption that the image intensity is the temperature of a large flat plate of uniform thin material. This method scales down the higher gain selectively in the gradient domain so that such halo artefact is not included in output images [see Figure 2.19(b)]. Fattal et al. [2002] extended the gradient approach of [Tumblin and Turk, 1999] and improved computational efficiency. This method calculates the gradient of logarithm of luminance, following the approximation of human perception by [Fechner, 1963]. The computed gradients are compressed in a multi-scale pyramid. The compressed gradients are then converted back to intensities

Halo

Halo

HDR Input

(a)

Bandpass decomposition (in frequency)

Weighted sum result

HDR Input

(b)

Gradient decomposition

Weighted sum result

Figure 2.19: Comparison between frequency and gradient decomposition in tone mapping. Image (a) presents three stages of tone mapping in the frequency domain. On the left, an HDR image has very high contrast ratio with details. In the middle, the image is decomposed into different bandwidth channels, and the lower bandwidth is selectively scaled down for tone mapping. The result on the right presents halo artifacts as the higher bandwidth is spatially associated with the lower bandwidth. Image (b) presents three stages of tone mapping in the gradient domain. The HDR input on the left is decomposed into different level of gradient. The high gradient is selectively scaled down so that this method reduces any halo-like artifacts. Adapted from [Tumblin and Turk, 1999].

2.5. High-Dynamic-Range Imaging

65

via the Poisson equation. This method is faster than [Tumblin and Turk, 1999], but it often produces halo artefacts around high frequency regions as this method compresses not the gradient of pixel intensity, but the gradient of logarithmic luminance. Durand and Dorsey [2002] proposed an HDR tone-mapping operator based on bilateral filtering [Tomasi and Manduchi, 1998]. The main idea of bilateral filtering is that not only a spatial Gaussian filter f (p − s) is applied, but is weighted by an intensity Gaussian filters g(I p − Is ) between

two points p and s, which scales signal intensity of the corresponding pixels I p within an image ⌦. As a result, the filter detects edges Js at each pixel s while smoothing high frequency details: Js =

1 X k(s) p2⌦

where k(s) is a normalisation term: k(s) =

f (p − s) g(I p − Is ) I p . P

(2.159)

f (p − s) g(I p − Is ). Hence, a pixel closer to s and

p2⌦

more similar to s in intensity will be weighted more greatly to detect edges. The method then is accelerated by a piecewise-linear approximation in the intensity domain and appropriate subsampling through the fast Fourier transform to improve the computational cost over the original bilateral filter. Finally, this filter is used to decompose an image into a base layer (obtained from the bilateral filter) and a detail layer. Only the base layer is compressed and the detail layer is added back in. Even though the method was developed empirically, according to [Kuang et al., 2004], its tone-mapping results are as plausible as [Reinhard et al., 2002]. It was adopted into an image appearance model by Kuang et al. [2007], called iCAM06, to mimic the spatially-varying adaptation of the human vision system. Reinhard et al. [2002] presented a mixed approach of the global and local operators, which produces consistent and plausible results. It has been used broadly in graphics applications. For the local operation, they employed low-pass filtering through the Fast Fourier Transform. The global operation starts from calculating luminances from pixel values. Then, an adapting level of luminance L w is calculated, which is similar in a sense to the geometric mean of the luminance: 0P 1 B log(δ + L w (x, y)) C B x, y C L w = exp B C, N @ A

(2.160)

where δ is 0.0001 to avoid infinite error. The estimated average of luminance produces a normalisation of the scene luminances with key values (representing 18% neutral grey): L(x, y) =

a Lw

L w (x, y) ,

(2.161)

where L(x, y) is a scaled luminance, and a is a user parameter, 0.18 (as default). Finally, the global adaptation is defined in a form: � � ! L x, y L d (x, y) = · 1+ � �2 , 1 + L(x, y) L whi t e L(x, y)

(2.162)

where L d (x, y) is a global tone mapped image, and L whi t e is the maximum luminance of L(x, y) (limited to 1⇥1020 ).

2.5. High-Dynamic-Range Imaging

66

After that, the photographic local adaptation function is appended as a Gaussian convolution. The convoluted profile R i of each scale s at each pixel (x, y) is defined: ! x2 + y2 1 R i (x, y,s) = − � �2 . ⇡(↵i s)2 ↵i s

(2.163)

The computed profile R i of each scale s is convoluted with the luminance value L: Vi (x, y,s) = L(x, y) ⌦ R i (x, y,s). Then the centre convolution V1 and surround convolution V2 are merged to a

layer of each scale:

V (x, y,s) =

V1 (x, y,s) − V2 (x, y,s) 2φ a/s2 + V1 (x, y,s)

,

(2.164)

where φ is a sharpening parameter, 8.0. Finally, Equation (2.162) and (2.164) are combined as follows: L d (x, y) =

L(x, y) 1 + V1 (x, y,sm (x, y))

,

(2.165)

where V1 (x, y,sm (x, y)) is the blurred luminance level when sm (x, y) satisfies |V (x, y,sm )| < " (a threshold).

The resulting quality is more consistent compared to other approaches. According to [Kuang et al., 2004], its tone-mapping results are psychophysically rated to be as highly plausible as [Durand and Dorsey, 2002]. The performance of this method is presented in Chapter 6 with comparison to our reproduction model. Colour in Tone Mapping

Commonly, tone-mapping algorithms only modify lightness while keep-

ing the colour channels untouched. The Schlick [1994] tone-mapping method was the first to take colour into account in HDR tone mapping. He concentrated on preserving the ratio of colour primaries. Instead of scaling all three colour channels with a non-linear function, the luminance information L (corresponding to the Y channel in CIEXYZ colour space) is derived from the original image. The contrast response function takes the luminance level L to yield the tone-mapped luminance L 0 . Finally, the ratio of L 0 to L is used to compress the luminance without altering the physical colour property of each pixel in the source image: 0 = C r/g/b

L0 L

· C r/g/b ,

(2.166)

0 is the tone-mapped primaries, and C r/g/b is the original colour primary value. This where C r/g/b

colour reproduction method is used by many other tone-mapping algorithms [Reinhard et al., 2002; Reinhard and Devlin, 2005; Kim and Kautz, 2008b]. However, this may lead to perceptually flawed colour reproduction (either washed-out colours or over saturation), as has been shown in [Tumblin and Turk, 1999; Mantiuk et al., 2009]. Tumblin and Turk [1999] experienced washed-out colours after applying their tone-mapping operator and suggested a luminance preserving correction method: ◆ ✓ C r/g/b s 0 0 = ·L , C r/g/b L

(2.167)

2.5. High-Dynamic-Range Imaging

67

where s is a saturation factor, which was suggested to control the saturation of tone-mapped images. Mantiuk et al. [2009] also demonstrate how to improve colour reproduction after contrast compression and enhancement. They conducted a series of subjective appearance matching experiments to measure the change. Even though they did not provide a full colour appearance model, they proposed colour correction formulae for current tone-mapping algorithms. In addition to [Tumblin and Turk, 1999], Mantiuk et al. [2009] suggests a non-linear colour correction formula: 0 = C r/g/b

✓✓

C r/g/b L

◆ ◆ − 1 s + 1 L0.

(2.168)

The saturation parameter s is estimated with respect to a given luminance-specific tone-curve (depending on a contrast compression factor c). The tone curve is defined in a simplified form: L 0 = (L · b)c ,

(2.169)

where b is the brightness adjustment. Finally, Mantiuk et al. [2009] define the relationship between contrast c and saturation s as follows:

� s(c) =

� 1 + k 1 c k2 1 + k 1 c k2

,

(2.170)

where the parameters k1 and k2 are derived from their experimental data by a least-squares fit. The best fit for non-linear colour correction is k1 =1.6774 and k2 =0.9925 in Equation (2.168); the best fit for luminance preserving correction is k1 =2.3892 and k2 =0.8552 in Equation (2.167). Mantiuk et al. [2009]’s method provides a practical solution for compensating colour reproduction with respect to tone-mapping algorithms. However, their non-linear colour correction formulae strongly distorts lightness, and while the hue is less distorted than the luminance when using the preserving formula. In addition, they also observed that an existing colour appearance model (CIECAM02) cannot explain the relationship between perceived brightness and colourfulness. Image Appearance

Advanced models exist that try to combine colour appearance models with

spatial vision. Ferwerda et al. [1996] proposed a computational model of human vision that includes spatial adaptation. It was mainly based on previous psychophysical threshold experiments. It includes a threshold detection experiment that quantifies the perceptual threshold of luminance up to 10 000 cd/m2 . The experiment does not measure the suprathreshold appearance of luminance (e.g., magnitude experiments as in LUTCHI), but instead the threshold level of luminance. In contrast, we conducted suprathreshold measurements of perceived colour attributes (not only luminance) up to 16 860 cd/m2 of luminance (see Chapter 4 for more details on our experiments). Adopting Ward [1994]’s tone-mapping concept, Ferwerda et al. [1996] assume that the display luminance level L d is achievable by scaling real-world luminance L w with an appropriate scalar m: L d (L w ) = mL w . Ward [1994] defines a function to define the scalar m, which depends on real-world adaptation luminance L wa and display adaptation luminance L d a as follows: � � � � � � m L wa , L d a = t L d a /t L wa .

(2.171)

2.5. High-Dynamic-Range Imaging

68

Ferwerda et al. [1996] replace the threshold function t with their threshold function, derived from their psychophysical measurements. Like the Hunt94 colour appearance model (see Section 2.3), they modelled the threshold function t on three different vision categories. First, the threshold function for photopic vision (cone only) t p (L a ) is modelled as follows: 8 log t p (L a ) =

> < > :

log L a  −2.6

−0.72 ,

log L a − 1.255 ,

(0.249log L a + 0.65)

2.7

log L a ≥ 1.9

,

(2.172)

−2.6 < log L a < 1.9

− 0.72 ,

where L a is luminance [cd/m2 ]. The function for scotopic vision (rod only) t s (L a ) is: 8 log L a  −3.94 −2.86 , > < . log t s (L a ) = log L a ≥ −1.44 log L a − 0.395 , > : 2.18 − 2.86 , (0.405log L a + 1.6) −3.94 < log L a < 1.44

(2.173)

For mesopic vision (scotopic plus photopic vision) L d , these two responses, photopic L d p and scotopic L ds , are summed with a scaling constant k : L d = L d p + k(L a )L ds ,

(2.174)

where k is a constant from 0 to 1 replacing the adaptation level. Finally, they employed a Gaussian convolution filter with respect to spatially-varying local adaptation as shown in [Reinhard et al., 2002]. The filter cuts off high frequency (high contrast) of luminance to match the observer’s contrast threshold: f

⇤�



w c L wa

��

=

� � t L wa L wa

,

(2.175)

where f ⇤ is the Fourier transform of the convolution filter and w c (L wa ) is the threshold frequency for real-world adaptation. This method aims to produce the closest rendering results to human perception with highdynamic-range scenes. In particular, it presents a rigorous approach in modelling the Purkinje break effect (see Section 2.3.3 for the phenomenon). However, their model considers only luminance perception. Accurate colour appearance phenomena were not modelled, e.g., Hunt effect, Stevens effect, or simultaneous contrast effect (see Section 2.3.3 for more details on the phenomena). In contrast, we conducted a full range of colour experiments, and derived a suprathreshold colour appearance model in a wider range of luminance levels (up to 16 860 cd/m2 ). Pattanaik et al. [1998] improved on [Ferwerda et al., 1996] using a multiscale model of adaptation and spatial vision, combined with the CIECAM97s model [CIE, 1998] (see Section 7.3.4 in [Reinhard et al., 2005] for more details of the mathematics). Their model is based on a rigorous survey of previous psychological literature, but they only use previous experimental data without any new experiment. Their method is a two-staged mechanism. The first stage is a visual encoding which aims to simulate cone and rod response with respect to spatially-varying adaptation, corresponding for a forward colour appearance model. The second stage is a display mapping that

2.5. High-Dynamic-Range Imaging

69

converts the perceptual information to a display signal. This stage is a combination of the partial inverse appearance model and a partial inverse device characterisation. Their tone-mapping algorithm is a simplified application of the CIECAM97s model. The main structure of the visual encoding stage follows a Hunt-style structure (see the Hunt94 and CIECAM97s model in Section 2.3). The first step in visual encoding is to convert RGB input to LMS cone and rod signals through sRGB (see Section 2.2.2) and a HPE transform [see Equation (2.20)]. These four channel images are spatially decomposed into four seven-level Gaussian pyramids (a stack of seven Gaussian-blurred images). By subtracting adjacent Gaussian-blurred images L/M /Ssblur in the pyramid, they compute four six-level difference-of-Gaussian (DoG) stacks L/M /SsDoG at pixel (x,y), which are then normalised. After that, each of the DoGs in each of four channels are scaled by the gain function (equivalent to the threshold function in [Ward, 1994; Ferwerda et al., 1996]): blur blur LsDoG (x, y) = (Lsblur (x, y) − Ls+1 (x, y)) · G(Ls+1 (x, y)) ,

blur blur MsDoG (x, y) = (Msblur (x, y) − Ms+1 (x, y)) · G(Ms+1 (x, y)) ,

(2.176)

blur blur (x, y)) · G(Ss+1 (x, y)) , SsDoG (x, y) = (Ssblur (x, y) − Ss+1

where s indicates the stack level, and the gain function G is modelled as follows: G(x) =

1 0.555(L + 1)0.85

.

(2.177)

The blurred image at level seven is retained and will form the basis for image reconstruction [Reinhard et al., 2005]. The pixels in the level (s=7) are adapted to the mean value: blur

L7blur (x, y) = L7blur (x, y)G((1 − A)L 7

+ A· L7blur (x, y)) ,

blur

M7blur (x, y) = M7blur (x, y)G((1 − A)M 7

blur

S7blur (x, y) = S7blur (x, y)G((1 − A)S 7

+ A· M7blur (x, y)) ,

(2.178)

+ A· S7blur (x, y)) ,

where A is a user parameter for interpolation. The adapted cone signals are then converted to achromatic and colour opponent signals following the Hunt94 and CIECAM97s models [CIE, 1998]. They then apply another contrast transducer functions on each channel respectively (see [Pattanaik et al., 1998]). The first step in display mapping is to rescale the basis stack (level seven) with the mean luminance of a typical display L d,mean (⇠50 cd/m2 ), which is taken by the gain function G [Equation (2.177)]: L7blur (x, y) =

L7blur (x, y) G(L d,mean )

, M7blur (x, y) =

M7blur (x, y) G(Md,mean )

, S7blur (x, y) =

S7blur (x, y) G(Sd,mean )

.

(2.179)

Finally, the stacks of DoGs (from zero to six levels) are accumulated in order to the adapted blurred image at level seven. The computed LMS cone signals are converted back to RGB through XYZ with gamma correction. Even though this method is technically sound, in practice, multi-scale Gaussian pyramid tone mapping presents more obvious halo artifacts than other tone-mapping algorithms.

2.5. High-Dynamic-Range Imaging

70

Akyüz and Reinhard [2006] propose to combine a modified CIECAM02 model with tone mapping [Reinhard et al., 2002], in order to yield a better colour reproduction. As presented in Figure 2.17, when the employed colour appearance model can predict the real-world observation correctly, a tone-mapping algorithm is not required. However, insofar as the current conventional standard for colour appearance (CIECAM02) fails to predict the perception under high-dynamicrange luminances, the combination of the colour appearance model and tone mapping can be a practical solution. Akyüz and Reinhard [2006] applied a modified colour appearance model with scene viewing conditions (forward) and output device viewing conditions (inverse). After that, tone compression is performed only on luminance (Y in the Y x y domain). When Akyüz and Reinhard [2006] adapts CIECAM02, the chromatic adaptation parameter D in CIECAM02 is modified [see Equation (2.90) for the original CIECAM02 equation] as follows: D0 = D(1 − 3s2 + 2s3 ), where s =

L − L T + 0.1(Lmax − Lmin ) L T1 − L T0

(2.180) ,

(2.181)

L T = Lmin + [0.6 + 0.4(1 − k)](Lmax − Lmin ),

(2.182)

L T1 = min[Lmax , L T + 0.1(Lmax − Lmin )],

(2.184)

L T0 = max[Lmin , L T − 0.1(Lmax − Lmin )],

(2.183)

where if the luminance of a pixel is below L T0 , the original D is used. If it is greater than L T1 , D is set to 0 for the pixel. Otherwise, D0 is used instead of D. According to colour appearance data such as LUTCHI, the degree of adaptation increases in proportion with luminance. However, D0 decreases the degree of adaptation. Therefore, the used modification of the adaptation parameter is observed to conflict with previous findings. In contrast, our colour reproduction mechanisms do not employ any tone-mapping algorithm. They calculate the human observation as perceptual coordinates, and the perceptual values are reproduced on target medium through an analytical inverse model (see Chapter 5 and 6 for more details on our model and reproduction pipeline). Furthermore, our colour appearance model can be used to keep the perceived colourfulness and hue of colour samples as close to the original as possible during tone-mapping. iCAM [Johnson and Fairchild, 2003] is an image appearance model that is intended to predict the appearance of images, including HDR images. It combines components of traditional colour appearance models with spatial models of vision. iCAM has been developed through empirical modification of a colour appearance model, CIECAM02. iCAM aims to associate CIECAM02 with a spatially-varying tone-mapping algorithm. The goal and approach are similar in a sense to [Akyüz and Reinhard, 2006]. Kuang et al. [2007] introduced a revised image appearance model, called iCAM06, which is essentially a combination of CIECAM02 with tone mapping [Durand and Dorsey, 2002]. We briefly review the mathematical details of the latest version of iCAM [Kuang et al., 2007].

2.5. High-Dynamic-Range Imaging

71

An RGB source image is transformed into XYZ through the sRGB transform (see Table 2.1 for the transform). The XYZ image is then decomposed to a base layer and a detail layer through bilateral filtering [Durand and Dorsey, 2002] [see Equation (2.159)] . The base layer is used as input to the chromatic adaptation and tone mapping (modified from CIECAM02), while the detail layer is combined after the tone-mapping process. The chromatic adaptation of iCAM06 is inherited directly from CIECAM02 [see Equations (2.89), (2.90), and (2.91)]. They set the luminance adaptation parameter LA to 20%, and surround factor F to 1 (average surround). The degree of adaptation D is empirically scaled down to 30% (D scaled by 0.3). Instead of using the D50-adapted XYZ transform (see Table 2.2), the white point of the CIECAT02 transform [see Equation (2.89)] is changed into D65. In particular, they assume that the human visual system performs spatially-varying white adaptation. The Gaussian blurred original XYZ image is used as a set of local white points in their implementation of the chromatic adaptation in CIECAM02. However, our experiments (see Chapter 6) find that spatiallyvarying white balancing yields unrealistic results. After that, the base layer is converted into LMS cone and rod signals. In the Naka-Rushton equation in the CIECAM02 model, they empirically replace the exponent constant 0.43 with 0.75 (similar to 0.73 in CIECAM97s) in Equation (2.93). They also include rod response modelling adapted from the Hunt94 model [see Equations (2.40), (2.41), and (2.43)]. Next, the tone-mapped base layer is converted to XYZ values and combined with the detail layer. The combined layer is converted to the IPT colour space [Ebner and Fairchild, 1998]. In the IPT colour space, the image coordinates (lightness, chroma, and hue in the IPT space) can be used as perceptual coordinates. They also empirically adjusted image attributes (contrast of detail layer, chroma, and surround effect). The detail later is changed by using F L in CIECAM02 to mimic the Stevens effect: Det ailsa = Det ails(F L +0.8)

0.25

.

(2.185)

The chroma is also modified to mimic the Hunt effect: P 0 = P · [(F L + 1)0.2 (

1.29C 2 − 0.27C + 0.42 C 2 − 0.31C + 0.42

)] ,

1.29C 2 − 0.27C + 0.42 )] , T 0 = T · [(F L + 1)0.2 ( C 2 − 0.31C + 0.42 p where C = P 2 + T 2 .

(2.186)

(2.187) (2.188)

With respect to the surround effect, I coordinates (lightness) are modified: I a = I λ , where λd ar k = 1.5, λd im = 1.25, λaver age = 1.0.

(2.189)

Finally, the IPT colour space values are transformed to RGB signals through the XYZ colour space, then these signals are clamped to the 1st and 99 th percent of the image data to achieve improved plausibleness in the final output images.

2.5. High-Dynamic-Range Imaging

72

As opposed to previous colour appearance models, in fact, the modification of CIECAM02 in iCAM is not derived from experimental data (empirical modification in previous equations), although psychophysical experiments were conducted for evaluation purposes. In contrast, our colour appearance model is analytically derived from psychophysical experimental data like other colour appearance models; as a result, our model can achieve better performance than other empirical image appearance models (see Chapter 6 for more details on comparison). However, our aim is not to derive a full image appearance model; instead, we want to derive a pure colour appearance model that enables accurate predictions of colour perception. (see Chapter 5 for more details on our colour appearance model).

2.5.4

Summary

We first reviewed HDR image acquisition algorithms, which enables the creation of HDR images from multiple exposures. HDR image acquisition algorithms comprise two main stages: solving for the camera response function (converting pixel values to exposure) and accumulating radiance (exposure divided by time interval) at each pixel. Curve-fitting [Debevec and Malik, 1997], polynomial regression [Mitsunaga and Nayar, 1999], or ICC profiling [Göesele et al., 2001] yield a camera response function from captured camera signals. As it turns out, the first stage is not necessary if we utilise the solid-state response to incident light directly such as [Mann and Picard, 1995; Yamada et al., 1995; Xiao et al., 2002]. Professional DSLR cameras provides direct output from the sensor, called RAW images. With these, we can simplify the HDR imaging algorithm with improved accuracy, skipping the first stage — including non-linear regression (see Chapter 3 for more details on the our HDR imaging algorithm). Theoretically, if there is a display which can produce luminance as it exists in the real world in terms of dynamic range and maximum luminance, the captured HDR images can be reproduced on the display by simply mapping the camera signals to the display ones. Seetzen et al. [2004] propose an HDR display with a higher dynamic range and brighter maximum luminance than existing displays. However, the luminance levels of most displays is not identical to that of the real world. We need a specific solution to deal with this difference of luminance levels, called tone-reproduction operators or tone-mapping algorithms. The main aim of tone-reproduction operators is to achieve the same appearance on a output display, which is identical to the human perception of the real scene. The research falls into three categories: global adaptation models, local (spatially-varying) adaptation models, and image appearance models. Global adaptation models [Tumblin and Rushmeier, 1993; Ward, 1994; Ward et al., 1997; Drago et al., 2003; Reinhard and Devlin, 2005; Kim and Kautz, 2008b] attempt to achieve a similar response function to the human response function on incident luminance. They generally provide high computational efficiency, but are less able to handle the variation in dynamic range than local approaches. Local adaptation models [Tumblin and Turk, 1999; Fattal et al., 2002; Reinhard et al., 2002; Durand and Dorsey, 2002] attempt to achieve great flexibility in the compression of dynamic range with the assumption that the human

2.6. Discussion

73

eye is less sensitive to variations at low spatial frequencies than higher ones. They manipulate frequency, gain, or gradient in multi-bandwidths (detail and base layers), and often struggle with high computational cost and halo artefacts. Tone-mapping operators [Schlick, 1994; Tumblin and Turk, 1999; Mantiuk et al., 2009] address colour reproduction problem while compressing luminance and attempt to solve the colour problem in an empirical manner. Finally, image appearance models [Ferwerda et al., 1996; Pattanaik et al., 1998; Johnson and Fairchild, 2003; Akyüz and Reinhard, 2006; Kuang et al., 2007] attempt to make a computation model identical to the human vision system. They are often based on physiological assumptions, measurements from primates, or psychophysical experiments. Most tone-mapping algorithms are derived from the same assumption that the human visual system has a specific mechanism to observe real-world luminance. They attempt to model the response mechanism from previous experimental evidence or their own hypothesis, where the used data is often limited in dynamic range compared to real-world luminance, or not appropriate, or the hypothesis cannot prove the scientific soundness without experimental observation. These tonemapping algorithms only modify lightness while keeping the colour channels untouched, suggested by Schlick [1994]. However, as shown in [Tumblin and Turk, 1999; Mantiuk et al., 2009], this may lead to perceptually flawed colour reproduction. Mantiuk et al. [2009] attempt to change colourfulness of tone-mapped images according to experimental data, but they would need to include other colour properties such as lightness and hue in order to obtain plausible colourfulness. On the other hand, image appearance models attempt to solve this colour problem with empirical modification to the current colour appearance model by combining CIECAM02 with a tone-mapping algorithm. However, such hybrid solutions have struggled with performance. In contrast, our approach is to develop a novel colour appearance model derived from new experimental data that covers the full working range of the human visual system. This approach attempts to minimise any empirical modification to previous equations or unproved hypothesis (see Chapter 5 and 6 for more details on our model).

2.6

Discussion

Although HDR imaging technology extends the dynamic range in input/output media, the newly extended dynamic range in HDR imaging is not compatible with previous cross-media colour reproduction systems as they had been developed and optimised for integer-based LDR imaging systems. For instance, first, traditional characterisation techniques for digital cameras fail with HDR imaging, and produce considerable errors. The dynamic range of traditional colour targets and modelling techniques can only cope with the dynamic range of ordinary LDR cameras (see Chapter 3). Second, image appearance on high-luminance displays, e.g., HDR displays, are perceived to be different, compared with their appearance on low-luminance displays like CRT or LCD display. Specific colour appearance phenomena, the Stevens and Hunt effects, are strongly observed on high-luminance displays as our psychophysical experiments validated (see Chapter 4). Third, current colour appearance models fail in predicting such colour appearance phenomena under high

2.6. Discussion

74

luminance levels and are not applicable for HDR image reproduction (see Chapter 5 for results). The reason for the incompatibility is that current colour appearance models were derived from lowluminance experimental data (under about 690 cd/m2 ) limited by the available display technology in the 1990s, such as CRT displays. To correct this problems, a newly derived cross-media colour reproduction system for HDR imaging is presented in Chapter 6. It comprises three stages: HDR characterisation, a forward colour appearance model, and an inverse colour appearance model. Results indicate that the proposed system yields high-fidelity colour reproduction in HDR images (see Chapter 6 for more details on the reproduction pipeline). The following chapters will describe our experiments in more detail.

75

Chapter 3

Characterisation for High-Dynamic-Range Imaging In this chapter, a new practical camera characterisation technique is presented to improve colour accuracy in high-dynamic-range (HDR) imaging. Camera characterisation refers to the process of mapping device-dependent signals, such as digital camera images, into a well-defined colour space (see Section 2.2 for background). This is a well-understood process for low-dynamic-range (LDR) imaging and is part of most digital cameras. It is usually a mapping from the raw camera signal to the sRGB or Adobe RGB colour space. This chapter presents an efficient and accurate characterisation method for HDR imaging that extends previous methods originally designed for LDR imaging. We demonstrate that our characterisation method is very accurate even in unknown illumination conditions, effectively turning a digital camera into a measurement device that measures physically accurate radiance values, in terms of both luminance and colour, and rivals more expensive measurement instruments. We then estimate the correlated colour temperature of the scene as a reference white point for white-balancing the HDR radiance map. Finally, the physically meaningful HDR radiance map is used later on as input to our colour reproduction system.

3.1

Motivation

Recent advances in HDR imaging allow us to easily obtain radiance maps with off-the-shelf digital cameras by combining multiple exposures into a single HDR image [Mann and Picard, 1995; Saito, 1995; Debevec and Malik, 1997; Mitsunaga and Nayar, 1999; Robertson et al., 1999]. These acquired radiance maps are commonly used as environment maps for lighting simulations or for computational photography applications. However, the radiometric accuracy of the acquired HDR radiance maps — in terms of both luminance and colour — has rarely been discussed or evaluated because traditional characterisation methods for LDR imaging [Martínez-Verdú et al., 2000; Pointer et al., 2001; MacDonald and Ji, 2002; Martínez-Verdú et al., 2003; Kim et al., 2005; ISO, 2006; Normand et al., 2007] were not designed to characterise HDR radiance maps. We propose a new camera characterisation method that works well for HDR imaging as it is more accurate than many of the LDR methods and is very efficient in terms of acquisition time and cost. Our method is based

3.2. Acquisition of High-Dynamic-Range Radiance Maps

76

on the insight that common reflective targets have two main drawbacks: they only offer a low dynamic range which makes them not a good choice for HDR imaging, and that characterisation based on reflective targets requires both the reflectance of the target and the spectrum of the illuminant to be known. Therefore, we propose to use a novel back-lit transparency target specifically designed for HDR imaging, offering a higher dynamic range and wider colour gamut. Our method only requires the emitted radiance to be known, which can be measured using a spectroradiometer. This enables us to accurately characterise digital cameras used for HDR imaging. We show the effectiveness of the new method by characterising three different digital cameras. The achieved accuracy of the cameras is similar to the accuracy of a spectroradiometer. As we will demonstrate, radiance maps acquired by different cameras are virtually the same when using our characterisation method. Our goal is to develop a novel method to obtain a physically-accurate HDR radiance map with a camera system. Then, the captured radiance maps are white-balanced and tone-mapped for display. The following sections describe a novel HDR characterisation method and a novel white-balancing method for displaying HDR radiance maps.

3.2 3.2.1

Acquisition of HDR Radiance Maps Response of Digital Cameras

The sensing area of digital cameras is a solid-state sensor upon which incident photons cause charge to accumulate at discrete locations called pixels. This charge is transferred as an output digital signal via an ADC [Yamada, 2006] (see Section 2.2 for more details). The amount of digitised electronic charge is linear to irradiance on the sensing area — excluding the noise floor (fixed-pattern noise, sensor dark current, etc. [Holst, 1998]) and blooming (overflowing) [Janesick, 2001] of the sensor response (see Section 2.2.4 for more details). Typically, a non-linear function is applied to improve the dynamic range of the camera and at the same time this also takes care of gamma-correction for display. Most DSLR cameras allow the 12–16 bit linear digital signals to be output before non-linear processing (gamma correction, tone mapping, and histogram equalisation) as a RAW image [Coffin, 2009]. Within the possible range of camera signals, these RAW images correspond to the amount of charge of all the incident photons on the sensor, effectively measuring scene radiance at each pixel. Figure 3.1 presents the measured responses of a digital camera. A Canon 350D DLSR camera captured a transparency reference target, IT8.7/1 [ANSI, 1999] [see Figure 3.1(a)], in RAW as nonlinear TIFF images. Luminances of greyscales in the target were measured by a spectroradiometer (a Jeti Specbos 1200) which has a luminance accuracy of ±0.05 at 1000cd/m2 and chromaticity repeatability of ±0.0005 (x,y) [Morgenstern et al., 2004]. The corresponding signal levels were

read in RAW and non-linear TIFF images. Figure 3.1(b) presents a comparison between the ordinary non-linear response (marked with green triangles) and the RAW response (marked with blue-lined white triangles). As shown in Figure 3.1(b), the RAW camera response is linearly proportional to the incident light while the ordinary camera response presents a non-linear trend (a power function) in response. In this experiment, we use linear RAW signals to generate HDR radiance maps so that we

Normalised output response

3.2. Acquisition of High-Dynamic-Range Radiance Maps

(b)

(a)

77

1.0 0.8 0.6 0.4 0.2 0.0

RAW Camera Response Non−linear  Camera  Response

0

500 1000 1500 2000 Input luminance [cd/sqm]

2500

Figure 3.1: Image (a) presents the linear RAW sensor response of a Canon 350D digital camera (interpolated into RGB channels, but not gamma-corrected) and shows the captured transparency reference target, IT8.7/1 [ANSI, 1999]. Image (b) presents characteristic curves of the ordinary non-linear response, and the RAW sensor response from the camera. The Y axis, which signifies the acquired response, is normalised in the range [0.0,1.0]. The X axis represents the luminance in greyscales of the target [image (a)] measured by a spectroradiometer. As the plot shows, the RAW response is proportional to the amount of incident light.

avoid curve fitting regression and its potential inaccuracies. Next, we demonstrate how to generate HDR radiance maps from RAW responses.

3.2.2

Camera Setup

In this experiment, three different DLSR cameras were tested: a Nikon D100 with a 35mm lens, a Canon 350D with an 18–55mm lens, and a Nikon D40 with an 18–55mm lens. These cameras support manual control over exposure parameters. The exposure parameters were manually calibrated with an identical setting of aperture size (f/11), shutter speed (1/4000–30seconds in one-step intervals for exposure bracketing — HDR source images), and film speed (ISO 200). No automatically-estimated exposure parameters were involved in producing the RAW output images. A white-balancing procedure is required to display the characterised radiance map. Like the exposure parameters, the cameras provide automatic estimation of the white point of captured scenes. The estimated white point information is essential for achieving colour constancy (see Section 3.4). This automated white balancing is generally the default option in digital cameras. However, the cameras’ internal colour temperature estimate may not be directly applicable for white balancing, as it is often skewed to accommodate user preference. For instance, with the Canon 350D, we captured a GretagMacbeth ColorChecker DC chart under different illumination conditions with a colour temperature ranging from 2000K to 7500K in 500K intervals. We measured the correlated colour temperature (CCT) of the scene illumination and recorded the white-balancing multipliers estimated by the camera. Then the white-balancing multipliers are converted to CCT. The brown sigmoidal curve in Figure 3.2 shows the Canon 350D’s colour temperature estimation of the GretagMacbeth images (derived from white balancing multipliers), which indicates a deliberate choice to overestimate the colour temperature (yielding more yellowish images under lower colour tem-

3.2. Acquisition of High-Dynamic-Range Radiance Maps

78

Estimated Temperature in K

7500 6500 5500 4500 3500 2500 1500 1500 2500 3500 4500 5500 6500 7500 Radiometric Measurements in K

Figure 3.2: Correlated colour temperature estimates from a digital camera (Canon 350D).

peratures). Therefore, although the cameras store the estimated white-balancing multipliers in the header of the RAW files, we discard them and use our estimation method of scene illumination in order to display the characterised image more accurately, i.e., we use the raw colour response directly from the sensor instead of the automatically white-balanced image. As a result, the RAW sensor response (without auto white balancing) appears cyan-greenish as the incident light is filtered by an infrared-blocking filter (cutting out the wavelengths beyond red, see Section 2.2.4 for more details) before light reaches the solid-state sensor. Then, instead of using the automatic white balancing from the camera value, we estimate a correlated colour temperature of the scene illumination with our method (see Section 3.4 for more details) and conduct white balancing to display images.

3.2.3

Low-Dynamic-Range Source Images

Previous research [Mann and Picard, 1995; Debevec and Malik, 1997; Mitsunaga and Nayar, 1999; Robertson et al., 1999] presents many HDR imaging methods to derive an exposure function to describe a camera’s response to incident light. The exposure function virtually linearises non-linear camera responses in multi-exposed images. These regression methods contain potential computational errors in estimating the non-linear exposure function. With respect to accuracy, the best solution for generating HDR images is to use the linear response from a RAW image rather than the non-linear response from ordinary images; hence, we choose the RAW response to build HDR RAW response

Red

Green

Blue

Interpolated

Figure 3.3: Channel separation from RAW response to RGB channels.

3.3. High-Dynamic-Range Characterisation

79

images. As such, the first step in the HDR imaging algorithm (estimating a camera exposure function, see Section 2.5.1 for more details on HDR imaging algorithms) is not needed. Instead, an additional procedure is required to use a RAW response. A RAW response is a Bayer-pattern mosaic image of a single channel where generally a red, green, blue, and green channel pattern (or CMYM) covers the solid-state sensor. To yield an ordinary RGB image of three channels, we need to interpolate the missing data [Shortis et al., 2005]. (see Figure 3.3). We employed a so-called adaptive homogeneity-directed method [Hirakawa and Parks, 2003] for the interpolation process by adapting [Coffin, 2009]. Unlike ordinary conversion of RAW images, we do not perform gamma correction, tone reproduction (e.g., histogram equalisation), and white balancing. The RAW images are stored as 16-bit integer images. Note that these cameras have a 12-bit ADC, so output signals are rescaled up to 16 bits and stored.

3.2.4

High-Dynamic-Range Image Acquisition

We obtained linear 16-bit RAW images with exposure variations P and shutter times T , from which an HDR radiance map was generated. Logarithms of radiance values E at each pixel i are computed from the weighted average of the differences between the pixel response Zi and the shutter time log2 T in shutter intervals j: PP log2 Ei =

j=1 [log 2 (Zi j ) − log 2 (T j )]w(Zi j ) , PP j=1 w(Zi j )

(3.1)

where the weighting function w is a normalised pyramid: 8 w(z) =

< z − Zmin , z  1 (Zmin + Zmax ) 2 , 1 : Z max − z , z > 2 (Zmin + Zmax )

(3.2)

where Zmax is 65535, and Zmin is 0. This procedure is similar in sense to the second stage [Equation (2.135)] of Debevec and Malik [1997]’s method. Instead of deriving an exposure function from photographs, we take the direct sensor signals as the first stage. By taking RAW responses from the cameras [Debevec and Malik, 1997; Mitsunaga and Nayar, 1999; Robertson et al., 1999], the acquisition of HDR radiance maps is simplified. Figure 3.4 is a qualitative comparison between the RAW sensor signals and the HDR radiance map. These two sets of values are proportional to measured luminance. We tested the linearity of these two responses to incident luminance by computing the CV against incident luminance [see Equation (2.12) for more details on the CV calculation]. The RAW signal’s CV to the incident luminance was 6.66, and the HDR radiance’s CV was 2.54. Hence, measuring luminance by using the HDR radiance map is more accurate than simply using the RAW signal. The next section describes how to calibrate colours in the HDR radiance map.

3.3

High-Dynamic-Range Characterisation

Camera characterisation is defined as the transform of device-dependent signals into deviceindependent coordinates [Johnson, 2002] such as CIEXYZ tristimulus values. Ideally, the same

3.3. High-Dynamic-Range Characterisation

80

Normalised output response

1.0 0.8 0.6 0.4 RAW Camera Response Ideally Linear Response HDR Radiance Map

0.2 0.0

0

500

1000 1500 Input luminance [cd/sqm]

2000

2500

Figure 3.4: Characteristic curves of: RAW sensor response of the Canon 350D camera and an HDR radiance map is green channel, compared with the ideally-linear response. The Y axis, which signifies the acquired response, is normalised into the range [0.0,1.0]. The X axis represents luminance measured by a spectroradiometer. The square points on the diagonal show the ideally linear response. As the plot shows, the RAW response and the computed HDR radiance map are proportional to the incident light. CVs to the ideally linear signals are 6.66 (RAW signals) and 2.54 (HDR radiance map).

mapping works for any illumination. However, as mentioned in Section 2.2.5, previous characterisation methods were either limited to known illumination conditions [Pointer et al., 2001; MacDonald and Ji, 2002; Johnson, 2002; ISO, 2006] or required expensive equipment and prohibitive measurement times [Martínez-Verdú et al., 2000; MacDonald and Ji, 2002; Martínez-Verdú et al., 2003; ISO, 2006; Normand et al., 2007]. Furthermore, these characterisation methods were geared towards low-dynamic-range imaging. Inanici and Galvin [2004] and Krawczyk et al. [2005] proposed to rescale the measured luminance values in HDR radiance maps by comparing them with measurements from a luminance meter. In contrast, our method calibrates luminance and colour at the same time. We propose a new technique which offers the simplicity of reflectance-based techniques with the accuracy and the universal applicability of monochromator-based techniques. Furthermore, it is well-suited for HDR imaging and can characterise both colour and luminance. Our experiments show that a digital camera, characterised with our method, can capture measurements of the colour and luminance information of a scene that are almost identical to the measurements from a spectroradiometer that we tested. See Chapter 3 for more details of our characterisation method. Through HDR imaging (see Section 3.2.4 for more details), we build a device-dependent HDR radiance map, where the HDR trichromatic response values (red r, green g, and blue b) of pixels on the sensor are given as the sum of the product of the spectral power distribution of the light source P(λ), the reflectance (or transmittance) of the imaged object S(λ), and the spectral responsivities

3.3. High-Dynamic-Range Characterisation

81

of the colour filters D r/g/b (λ) — assuming that incident light is reflected from object surfaces: [r,g,b] =

X

P(λ)S(λ)D r/g/b (λ)∆λ .

(3.3)

λ

The sum in Equation (3.3) is taken over a suitable wavelength range in the visible part of the spectrum, for instance, from 380nm to 780nm [ISO, 2006] (see Figure 2.8 for an example). The calculation of these response values is similar to the computation of device-independent tristimulus values, such as CIEXYZ: [x,y,z] =

X

P(λ)S(λ)F x/ y/z (λ)∆λ ,

(3.4)

λ

where F x, y,z (λ) are the CIE colour matching functions [CIE, 1986]. The only difference between Equation (3.3) and (3.4) is the use of different weighting functions D r/g/b and F x, y,z . Therefore, HDR characterisation finds a mapping between the colour spaces of HDR radiance and tristimulus values by modelling the difference between the D r/g/b and F x, y,z functions. Our technique is based on two insights. First, the product of the spectral power distribution of the light source P(λ) and the reflectance of the calibration target S(λ) can be measured in a single step using a spectroradiometer, allowing camera characterisation that is efficient both in terms of cost and measurement time. Second, a novel back-lit transparency target specifically optimised for HDR imaging has a wider gamut and higher dynamic range than ordinary reflective targets. This makes the characterisation produce accurate measurements of luminance and colour and makes it applicable even in unknown illumination conditions.

3.3.1

Setup

We created our own transparency targets by photographically enlarging the IT8.7/1 [ANSI, 1999] colour chart onto Kodak Ektachrome professional film (8-by-10 inch) such that each patch matches the sensing area of the employed spectroradiometer (approximately 8mm in diameter). Two enlarged identical targets, one placed over three sheets of neutral density (2⇥) filters (in total 8⇥ darker), are placed on an uniform light emitting table in a darkroom to produce a training set with 576 colour patches and a dynamic range of 4.53 orders of magnitude. The light source’s correlated colour temperature (CCT) was 5434K. Using a transparency target not only offers a high dynamic range, but also provides a very wide colour gamut, [Figure 3.5 and 3.6(a)]. Two GretagMacbeth ColorChecker targets and two 800W halogen light sources (CCT: 2856K) are used to produce a test set with 48 colour patches. One target is illuminated by two halogen-type lights, which have different spectral characteristics from the light source used for the training data set. The other target is placed in a shadow area such that the scene has a large dynamic range (4.00 orders of magnitude). The emitted/reflected radiance of each patch in these two experimental sets were measured with the spectroradiometer, see Figure 3.6. Finally, we took HDR images of these two datasets using three different digital cameras for characterisation (Canon 350D, Nikon D100, and Nikon D40), see Figure 3.7.

3.3. High-Dynamic-Range Characterisation

82

0.7 0.6

CIE v'

0.5 0.4 0.3 0.2 0.1 0

(a)

Reflective LDR Target Transparent HDR Target Spectral Locus

0

0.1

0.2

0.3 0.4 CIE u'

0.5

0.6

0.7

(b)

Figure 3.5: Image (a) shows a comparison of measured gamut boundaries. The transparency HDR target provides a comparatively larger colour gamut than an ordinary reflective target (GretagMacbeth ColorChecker). Each side of our target [as seen on Image (b)] is an enlarged IT8.7/1 [ANSI, 1999] colour chart on Kodak Ektachrome professional film (8-by-10 inch).

(a) 1

Normalised measured radiance

Normalised measured radiance

(b) Fluorescent light

0.8 0.6 0.4 0.2 0

1 0.8 0.6 0.4 0.2

Halogen light

0 380

(c)

430

480

530

580

630

Wavelength [nm]

680

730

780

380

(d)

430

480

530

580

630

680

730

780

Wavelength [nm]

Figure 3.6: Image (a) presents the training setup of the HDR transparency reference colour samples. 576 colour patches were measured with a spectroradiometer and captured by a camera in a darkroom. Image (b) shows the setup for testing HDR characterisation models. Two GretagMacbeth ColorChecker targets and two 800W halogen light sources on the left (CCT: 2856K) were used to produce a test set with 48 colour patches. Plot (c) shows the spectral power distribution of the fluorescent light bulb (of the training setup) which presents a peak between 530 and 580 nm. Plot (d) presents the spectral power distribution of the halogen light bulb (of the test setup) which is spread more toward infrared wavelengths.

3.3. High-Dynamic-Range Characterisation

3.3.2

83

Characterisation

In traditional colorimetry (see Section 2.2.2), P(λ) in Equation (3.4) refers to relative spectral power distributions, which are always normalised (100 at 560nm [Hunt, 1998]). This discards the intensity scale of the illumination, which is why previous characterisation models have difficulties calibrating absolute scales. Furthermore, when tristimulus reflectance values are measured by a spectrophotometer (e.g., GretagMacbeth Spectrolino), a calibrated tungsten light is used, which is then converted into a CIE D50 illuminant PD50 (λ) [Equation (3.4)]. However, the scene illuminant P(λ) [in Equation (3.3)] is different from that, effectively building this mismatch into the characterisation, which poses problems when different scene illumination is used after characterisation (see Figure 3.8). Hence, our technique uses identical P(λ) and absolute spectral power distributions to solve both scale and illumination problems (see Figure 3.9 for our geometry setup). Using the above setup, we know the emitted radiance values for each patch of our transparency target (measured using the spectroradiometer), corresponding to Equation (3.4). Furthermore, the linear camera response for each patch is known from the HDR image (corresponding to Equation (3.3), see Appendix A.3 for the measurements of the colour samples). Since the illumination is identical for both, we can now find a (least-squares) linear transform between the RGB camera response and the physical CIEXYZ radiance values that is applicable to unknown lighting [the P(λ)

Training set

Test set

(Transparent 576 patches)

(Reflective 48 spatches)

Light -emitting table (D55)

Halogen-lights illuminated

Colour

Colour

Colour

charts

charts

charts

charts

(bright)

(dark)

(bright)

(dark)

Colour

radiance

LDR Spectroradiometer

Radiance measurements

DSLR

Multiple -

camera

exposures (RAW response)

Characterisation

HDR HDR radiance map

HDR camera response

Figure 3.7: Setup of HDR characterisation. A back-lit transparency colour target is captured by a digital camera and all its colour patches are measured using a spectroradiometer, which forms the training set that is used to compute the characterisation model. A second test set is acquired for validation purposes. It consists of two GretagMacbeth colour charts illuminated by light from a halogen bulb.

3.3. High-Dynamic-Range Characterisation

Spectrophotometer

84

Characterisation

Detector

Camera Light source

Spectral mismatch

Scene illuminance

Objects

Objects

Figure 3.8: Traditional characterisation setup of reflectance-based models. In order to measure reflectance, spectrophotometers use an internal light source (see Section 2.2.2 for more details on geometry). Generally a tungsten or xenon bulb is used as light source, then converted into a CIE D50 illuminant to yield CIEXYZ measurements. However, the scene illumination that is used in characterisation is different from the CIE D50 illuminant. Such a spectral mismatch is built into the characterisation, which poses problems when different scene illumination is used after characterisation.

Tristimulus values

Light source

HDR radiance map

Spectroradiometer

Transparency target

Camera

Light source

Transparency target Darkroom

T/normal

Darkroom T/normal

Figure 3.9: Measuring geometry setup for high-dynamic-range characterisation. Our HDR transparency target is installed on top of the uniform light emitting table in a dark room to produce a set of colours (576 patches). The light source, the colour samples, and measuring device are placed on a straight line (normal to transparency), where the emitted radiances of the patches are measured simultaneously by the spectroradiometer and a digital camera that yields HDR images. Therefore, the identical light source is used in both tristimulus and HDR radiance measurements and will be cancelled out when deriving a characterisation model.

3.4. White Balancing of High-Dynamic-Range Radiance Maps

85

cancels out]: X = (A> · A)−1 · A> · M ,

(3.5)

where X is a 3 ⇥ 3 transform for characterisation, A is a matrix containing the linear RGB camera

response [r,g,b] for each patch, and M is a matrix containing the measured radiometric CIEXYZ values [x,y,z] for each patch. This transform X can now be used to map any (high-dynamic-range) RGB value into a physically meaningful CIEXYZ value, independent of the illumination. In our particular setup we find three transforms, one for each digital camera.

3.3.3

Characterisation Models

Table 3.1 presents the matrices of the linear transform from camera HDR into CIEXYZ coordinates, which were computed as outlined in Section 3.3.2. Note that these matrices not only transform colorimetric information but also luminances, because we take absolute scales into account such that the characterised coordinates are identical to the physical radiance measurement. However, the scale of the matrices may be different for other HDR assembly algorithms. Canon 350D / 18-55mm lens R

G

B

X

6.8364

1.1685

0.3256

Y

3.0657

4.1205

Z

0.3650

-0.6863

Nikon D40 / 18-55mm lens R

G

B

X

12.9566

1.6246

0.8274

-1.2861

Y

6.0406

6.4671

-1.5985

6.3905

Z

0.5537

-0.9170

11.5996

Nikon D100 / 35mm lens R

G

B

X

10.1001

1.4246

0.5921

Y

4.6565

5.2054

Z

0.4985

-0.7648

Averaged R

G

B

X

9.9644

1.4059

0.5817

-1.5151

Y

4.5876

5.2643

-1.4666

10.1364

Z

0.4724

-0.7894

9.3755

Table 3.1: Transformation matrices from high-dynamic-range signals into CIEXYZ. The transforms were computed from HDR radiance maps of our transparency target and the corresponding radiance measurements. Averaged refers to the mean matrix of the three different cameras.

3.4

White Balancing of HDR Radiance Maps

Our mapping transforms HDR input images into physically-meaningful CIEXYZ values. However, in case an image is not intended for measurement purposes but for display (e.g., using a tonemapping method), we need to take the human visual system into account, which adapts to a given illumination condition. This is a classical issue and is traditionally called white balancing. There are a variety of techniques available to simulate this adaption [Hubel et al., 1999; Fairchild, 1991; Finlayson et al., 1997].

3.4. White Balancing of High-Dynamic-Range Radiance Maps

86

Colour temperature is defined as the spectral power distribution of a Planckian blackbody radiator [Wyszecki and Stiles, 1982]. Even though many real-world illuminants are not exactly equal to any of the chromaticities of a blackbody radiator, we can compute the correlated colour temperature (CCT) [Holm and Krochmann, 1975], which refers to the closest matching temperature. In our work, we estimate the CCT of a scene. While this assumes the scene illumination to be on the blackbody locus, it acts as a constraint which allows us to find good estimates. Techniques for estimating the correlated colour temperature are usually a part of computational colour constancy [d’Zmura and Lennie, 1986], which simulate the human visual system’s chromatic adaptation in digital imaging. Conceptually, these algorithms first estimate the correlated colour temperature and then balance the white point of the image accordingly. In the context of this thesis, we use colour constancy in two ways. First, we propose an efficient method to estimate the correlated colour temperature of a scene; and second, we white-balanced the captured HDR radiance maps for final display. Many colour constancy methods have been proposed (see Section 2.2.6 for more details), but despite the large variety of available methods, no algorithm can be regarded as universal. In practice, the grey-world and maxRGB approaches perform well on natural, real-world images [Hordley, 2006; Gijsenij and Gevers, 2007]. We therefore proposed an enhanced version of the grey-world algorithm to estimate the scene’s CCT. We derive a linear transform from real-world training images with radiometric measurements instead of synthetic images [Barnard et al., 2002], and we further apply a weighting scheme that combines the maxRGB and grey-world methods.

3.4.1

Estimating the Scene Illumination

The camera signal C (for each colour channel k = r, g, b) is the sum of the product of surface reflectance S(λ), camera response function Dk (λ) (e.g., influenced by colour filters), and irradiance P(λ) over all wavelengths λ: Ck =

X

P (λ)S (λ) Dk (λ)∆λ .

(3.6)

λ

We characterise Dk (λ) [Barnard and Funt, 2002] (see Section 3.3 for more details), which allows us to obtain (linearised) estimates of the radiant power Φ = P(λ)S(λ). However, both P(λ) and S(λ) are unknown, but we need to estimate the correlated colour temperature T of the scene illuminant P(λ). We start from the grey-world assumption that the average of all surface reflectances in a scene is a neutral reflectance [Buchsbaum, 1980]. However, as mentioned in [Barnard et al., 2002; Gijsenij and Gevers, 2007; Gehler et al., 2008], real-world statistical data shows that the average is different from perfect neutral reflectance. Unlike previous database-based grey-world methods [Barnard et al., 2002; Gijsenij and Gevers, 2007; Rosenberg et al., 2003] that either use synthetic training images or training images without knowing the actual scene illuminant, we use a database of characterised real-world photographs as well as accurately measured scene illuminants P(λ). We first captured 35 training images of real-world scenes (see Figure 3.10) under different

3.4. White Balancing of High-Dynamic-Range Radiance Maps

87

illumination conditions with a colour temperature Tm ranging from 2000K to 7500K, which we measured on a Spectralon tile that was placed in each scene using the spectroradiometer. The Spectralon tile was always oriented such that it was facing the main light source. It was usually removed from the scene when the training images were photographed (see Figure 3.10). The radiant power values Φ of each pixel (in each image) are then projected onto the blackbody locus using Holm and Krochmann [1975]’s method, which is also used by the spectroradiometer that we used to estimate the CCTs of the training data, yielding the (per pixel) correlated colour temperature T : argmin T

h�

ue − u T

�2

� �2 i1/2 + ve − vT ,

(3.7)

where (ue , ve ) are the radiance chromaticity coordinates of the pixel (derived from their radiance value) and T is the temperature of the nearest point (u T , vT ) on the Planckian locus. The colour temperatures Ti of pixels Zi within each image are then averaged together using a weighted average (similar to grey-world): P T = Pi

Ti w(Zi ) i w(Zi )

.

(3.8)

Our weighting function w() is proportional to the luminance of a pixel, i.e., zero weights are applied to the pixels with smallest luminance and a weight of one is applied to the brightest pixels. The colour temperatures of the brighter pixels are weighted more than those of the dark area. This weighting takes into account brighter signals more than the dark in a similar sense to the MaxRGB

1892K

2404K

2569K

2599K

2599K

2686K

2686K

2919K

3480K

4029K

4740K

4841K

5432K

5432K

5432K

5432K

5759K

5901K

5940K

6030K

6030K

6105K

6150K

6620K

6620K

6660K

7041K

7485K

7681K

8152K

Figure 3.10: Examples of the training images for our white balancing. We use raw sensor signals (discarding the camera’s auto white balance) and the spectral power distribution of the scene illumination (measured on a Spectralon tile) as our training data.

3.5. Results

88

method (which considers the brightest signal). From this training data, we then derive a simple affine transformation Tm = a · T + b that maps

from T to the accurately measured Tm . We estimate the two parameters a and b of this model using linear regression: >

M T = (T · T)−1 · T> · Tm ,

(3.9)

where T refers to the vector containing all training CCTs T , Tm refers to the vector containing all measured CCTs Tm , and M T is a matrix containing the two parameters. For any new image, we simply compute T and map to the actual colour temperature Ta with M T .

3.5 3.5.1

Results Colour Accuracy of HDR Characterisation

We have tested our HDR characterisation methods with three different cameras (Nikon D100, Canon 350D, and Nikon D40). For this we have computed three characterisation models, one for each camera, as described in the previous section [using our transparency colour target, see Figure 3.6(a)]. We analyse the radiometric accuracy of each of the three characterisation models (one for each camera) by comparing their results against physical measurements from the spectroradiometer. For each comparison, we compute three different error measures in order to judge the accuracy. First, we compute CIEDE2000 [CIE, 2001] values, which are commonly used to compare colours in a perceptual fashion (see Section 2.3.5 for more details on the formulae). This method is based on the CIELAB colour space [CIE, 1986], and as such is really only valid for low dynamic range values. Nonetheless, we include it for completeness. Second, we compute CIE Yu0 v 0 coordinates [CIE, 1986] for the characterised HDR image as well as the measurements from the spectroradiometer, and compute (relative) median differences between them. Third, we compute the (relative) median differences between the characterised CIEXYZ values and the measured CIEXYZ values. We first perform these comparisons within the training set [transparency target, see Figure 3.5(b)], i.e., we validate that a linear characterisation model is sufficient. To this end, we take the original HDR images (one for each camera), convert them to CIEXYZ with the characterisation matrices from Table 3.1 and compute the CIEDE2000 values, Yu0 v 0 median differences, and CIEXYZ median differences for each colour patch in the transparency target. As can be seen in Table 3.2(a), the errors are comparatively low. Furthermore, we validate how well the characterisation models work with test scenes that were taken under different illumination. Figures 3.6(c) and (d) show significant differences in spectral characteristics between the training scene (fluorescent light) and the test scene (halogen light). Our first test scene consists of two ColorChecker charts illuminated under halogen light, shown in Figure 3.6(b). As can be seen again in Figure 3.11 and Table 3.2(b), the errors are quite low, especially for the Canon 350D. We compare this result of our method [Kim and Kautz, 2008a] with the previous reflectance-based LDR characterisation [ISO, 2006] technique and the HDR assembly method using ICC profiles [Göesele et al., 2001] (generated by GretagMacbeth ProfileMaker), see

3.5. Results

(a)

(b)

(c)

89

Training set

∆E00

Y

u0 v 0

XYZ

Canon 350D

1.121

0.103

0.013

0.116

Nikon D100

1.311

0.096

0.022

0.117

Nikon D40

1.486

0.066

0.026

0.083

Test set

∆E00

Y

u0 v 0

XYZ

Canon 350D

0.480

0.111

0.016

0.114

Nikon D100

3.816

1.214

0.035

1.660

Nikon D100 (IR filter)

1.615

1.193

0.048

1.439

Nikon D40

3.104

0.884

0.038

1.192

Test set – other methods

∆E00

Y

u0 v 0

XYZ

Canon 350D (LDR Char.)

7.028

0.225

0.039

0.228

Canon 350D (HDR ICC)

4.130

1.085

0.073

0.919

Table 3.2: Colour accuracy error of HDR characterisation: (a) the training set presents the accuracy of our characterisation models using the training data (576 patches under 5571K illumination). (b) the test set shows the accuracy of the same characterisation models using a different test data-set (reflective target under 2946K illumination). Accuracy compared with other methods (c): LDR characterisation (only one target is used [ISO, 2006]) and HDR assembly using ICC profiles [Göesele et al., 2001]. ∆E00 denotes the median CIEDE2000 over all patches between measurement and prediction, Y shows the median relative differences of luminance levels, and u0 v 0 indicates the median relative differences between measurement and prediction of all patches in CIE u0 v 0 . XYZ shows the median relative differences of CIEXYZ channels between measurement and prediction. IR filter means using the improved results with Rosco Thermal Shield infrared-blocking filter.

3.5. Results

90

10.0

DE2000 colour difference

9.0 8.0 7.0

7.03

6.0 5.0

4.13

4.0 3.0 2.0

0.48

1.0 0.0 LDR Char.

HDR ICC

Our method

Figure 3.11: Overall results of accuracy. GretagMacbeth ColorChecker is used for testing colours. This figure compares the median CIEDE2000 colour difference error when the camera (Canon 350D) is characterised with three different methods (LDR characterisation with a ICC camera profile [ISO, 2006], HDR ICC profile [Göesele et al., 2001], and our HDR characterisation method. Average colour differences in CIEDE2000 are: (LDR Char.) 6.72, (HDR ICC) 4.70, and (our method) 1.01.

Table 3.2(c) and Figure 3.12. As predicted, the achieved accuracy error is lower than with our new method. In order to confirm repeatability, we acquired the test set (Canon 350D) a second time under different illumination (2983K). The median ∆E00 was 0.546 over 48 patches, which is very close to the ∆E00 of 0.480 for the first test set. Figure 3.13 compares luminance and chromaticity of the test scene which consists of GretagMacbeth charts under halogen light, acquired by three different cameras and then characterised using our method. As shown in the top plot, the Canon 350D shows very similar performance to the spectroradiometer, whereas the Nikons show overestimation of the luminance. The Nikon cameras have a slightly higher error which we traced back to an inferior infrared filter. Halogen light emits a large amount of infrared light, which caused the HDR images acquired with the Nikon cameras to have a considerable amount of infrared glare. Using an additional infrared blocking filter (Rosco Thermal Shield) in front of the lights yielded a median ∆E00 of 1.6 for the Nikon D100, down from 3.8 [see Table 3.2(b)]. Insofar as the averaged error level decreases with additional infrared-blocking filters, we believe that the inferior infrared-blocking filter with the Nikon camera causes infrared glare under tungsten light. The bottom left plot shows chromaticity differences of the test patches. The differences are minor, with only one colour (a red patch on the right-hand side) showing a big difference. This colour is located outside the camera’s RGB filter gamut because these cameras use wide-band width filters. Our second test scene is a desk scene illuminated mainly by a fluorescent desk lamp, shown in Figure 3.14, 3.15, and 3.16. The dynamic range of HDR radiance maps is usually much higher than that of typical monitors and cannot be displayed directly (see Section 2.5 for more details). Since simple linear scaling with gamma correction does not achieve satisfactory results when displaying

3.5. Results

12 10

91

LDR Char. HDR ICC Profile HDR Char.

∆E00

8 6 4 2 0 0

20

40 60 Chroma C*ab

80

100

Figure 3.12: Comparison of colour difference (test set, patches sorted by chromaticity). ∆E00 is computed by using a ColorChecker chart in a brightly illuminated area. LDR characterisation is calculated using the reflectance-based method [ISO, 2006]; the HDR ICC method is according to [Göesele et al., 2001]. Our HDR characterisation shows comparatively low errors.

Luminance [cd/sqm]

10000 8000 6000 4000

Measurement Test under Tungsten Light Source Spectroradiometer Camera (Canon 350D) Camera (Nikon D100) Camera (Nikon D40)

2000 0

F4 E4 A3 D2 F3 B2 A1 D1 D4 C3 E3 C1 E1 B3 C2 F1 F2 C4 E2 B1 A2 D3 B4 A4

Index of ColorChecker Patches (sorted by luminance)

Measurement Test under Tungsten Light Source 0.6 Canon 350D

CIE v'

0.55 0.5

0.45 Colour Difference Spectroradiometer Camera (Canon 350D) Camera (Nikon D100) Camera (Nikon D40)

0.4 0.35 0.15 0.2

0.25 0.3 CIE u'

0.35 0.4

0.45

Nikon D100

Nikon D40

Figure 3.13: Test scene consisting of GretagMacbeth charts under halogen light, acquired by three different digital cameras and then characterised using our method. The top plot presents luminance differences between radiometric measurements and camera measurements. In particular, the Canon 350D shows very similar performance to the spectroradiometer. Tone-mapped versions of the three characterised images are shown on the right; the differences between them are difficult to spot. For a quantitative comparison, see Table 3.2(b). The bottom left plot shows chromaticity differences of the test patches in a CIE uniform chromaticity diagram. The differences are minor, with only one colour showing a big difference, which is located outside the camera’s R/G/B filter gamut.

3.5. Results

92

(a) Direct sensor response (RAW)

(b) Absolute CIEXYZ

* (c) HDR Characterisation (white-balanced)

Figure 3.14: Each step of the HDR characterisation method. Image (a) presents the direct sensor response and is the acquired RAW image without any white balancing. The greenish appearance is due to the infrared filter in front of the sensor, which will be corrected by the derived mapping. Image (b) shows the characterised CIEXYZ image (which we render using an 1:1 mapping to RGB for illustration purposes). Each pixel value represents a measurement of radiance. Image (c) shows the final resulting image by mapping from characterised and device-independent CIEXYZ to the display sRGB colour space. The white point of the scene is converted to the white point of the display with the estimated reference white.

3.5. Results

93

(a) Before HDR characterisation

(b) After HDR characterisation

(c) Difference map (mid-grey = mean) amplified by 10

Figure 3.15: Before and after comparison of HDR characterisation. Image (a) presents an HDR image without characterisation while image (b) shows an HDR radiance map, which is characterised through our proposed method. Image (c) is a difference map which is amplified by 10 (for a visualisation purpose). Mid-grey presents the mean of these two images (before & after). In particular, the blue screen and yellow books (colourful objects) present more of a difference.

3.5. Results

94

(a) Canon 350D

(b) Nikon D100

(c) Nikon D40

Figure 3.16: An HDR desk scene is characterised with our method for three different digital cameras (Canon 350D, Nikon D100, and Nikon D40). Even though they are taken from slightly different perspectives and angles, there are only very minor colour differences between the images. For instance, the measurements of the white tile in the scene are: (spectroradiometer in X/Y/Z) 119.63/112.50/33.07; (Canon 350D)127.00/122.00/30.50; (Nikon D100) 150.00/143.00/39.00; (Nikon D40) 150.00/142.00/38.00.

3.5. Results

95

HDR images, tone-mapping algorithms have been introduced that compress the dynamic range in a more suitable manner in a global, local, or image appearance fashion. We only deal with the input side of HDR imaging in this chapter. Tone-mapping and colour appearance modelling will be dealt with in Chapters 5 and 6. For now, we use a popular tone-mapping method [Reinhard et al., 2002] to display our characterised images. Figure 3.14 presents each step of the HDR characterisation. The top image shows the direct sensor response and is the interpolated RAW image without any white balancing. As the infrared blocking filter (cyan-greenish) is located in front of the sensor, the raw sensed image without white balancing appears greenish. The middle image presents the characterised CIEXYZ image, which is rendered using a 1:1 mapping for CIEXYZ to RGB. Finally, the bottom image shows the result by mapping from characterised and device-independent CIEXYZ to the display sRGB colour space. The estimated white point of the scene is converted to the white point of the display. Figure 3.15 compares before and after a HDR characterisation. The top image shows an ordinary HDR image and the middle image presents a characterised HDR radiance map, which is characterised through our proposed method. The bottom image shows a difference map. In particular, the blue screen, yellow book, and colour chart present more of a difference. Tone-mapped versions of the characterised HDR images are shown and as can be seen in Figure 3.16, the colours in all three images are almost identical, even though they were taken with three different cameras.

3.5.2

Illuminant Estimation

Traditional grey-world methods average trichromatic primaries first and then compute the correlated colour temperature from the average. However, we have found that first computing colour temperatures and building a weighted average of those yields better results (squared correlation coefficient of R2 = 0.86 vs. R2 = 0.79). Initially, we experimented with training images of a GretagMacbeth DC chart instead of natural images. While their average colour temperatures T were highly correlated with the measured colour temperatures Tm (R2 = 0.99), the derived linear transform did not generalise well to natural images.

(a)

Estimation (Training: Natural Images)

7500 6500 5500 4500 3500 2500

Gray-World Color-by-Correlation The Proposed

1500 1500 2500 3500 4500 5500 6500 7500 8500 Radiometric Measurements in K

3500 Error of Temperature in K

Estimated Temperature in K

8500

(b)

3000

Differences (Estimation & Measurement) Gray-World Color-by-Correlation The Proposed

2500 2000 1500 1000 500 0 1500 2500 3500 4500 5500 6500 7500 Scene Temperature Measured in K

Figure 3.17: (a) Result of temperature estimation using the training data of natural images (all 35). (b) Difference between temperature estimation and radiometric measurement of new test images.

3.6. Discussion

96

Figure 3.17(a) and (b) demonstrate that our database-based grey-world algorithm estimates the colour temperature rather accurately for both the training as well as new natural images. These results of our method [Kim and Kautz, 2009] compare favourably to the original grey-world and colour-by-correlation methods [Finlayson et al., 1997]. In many cases, our colour temperature estimation method is more accurate than the original grey-world or gamut-based model despite using only 35 training images. Yet it allows us to compute an estimate in milliseconds (or even less when only a subset of pixels is used). Of course, when an image deviates too much from our training data, the colour temperature estimate is less accurate.

3.6

Discussion

HDR Characterisation

Our characterisation method is applicable to HDR imaging, which is very

useful in graphics but also other scientific fields. Our mathematical method of characterisation is rather simple — a linear transformation between colour spaces — and not different from previous methods. However, our characterisation methodology, the combination of a new transparency colour target, HDR imaging, and characterisation theory, solves drawbacks of previous characterisation methods. As shown in the results (see Figure 3.11), our characterisation performance is comparatively better than previous methods [Pointer et al., 2001; MacDonald and Ji, 2002; Johnson, 2002; ISO, 2006; Göesele et al., 2001], yet efficient in terms of cost and acquisition time. However, there are some limitations of our method. The performance depends on the optical quality of the digital camera, including lens flare, vignetting, veiling glare, and the infrared filter. For instance, the optical quality of the camera system could be improved with a fixed lens, which provides less chromatic aberration than a zoom lens [Shortis et al., 2006]. The inaccurate performance of Nikon cameras under tungsten lights could be improved by installing an additional infrared-blocking filter. HDR veiling glare can be solved [Talvala et al., 2007] but acquisition complexity is greatly increased. The measurement used in our method returns radiometric XYZ values, not radiance in each wavelength. In this way, it still allows potential measurement errors with metameric colours like other target-based models. Illuminant Estimation

In many cases, our colour temperature estimation method is more accurate

than the original grey-world or gamut-based model, even though we only used 35 training images. Yet, it allows us to compute an estimate in milliseconds (or even less when only a subset of pixels is used). Of course, when an image deviates too much from our training data, the colour temperature estimate is less accurate. When the scene illuminant moves far from the locus, the performance of our algorithm will degrade as we assume the illuminant to lie on the locus. However, in our experience, this case does not seem to occur frequently in natural scenes. Of course, extreme cases such as tinted light bulbs will be difficult to handle for our method. In this case, a classical white balancing method, like MaxRGB or the general grey-world method, can be used for white balancing. In addition, our method seems to perform well, even if the new images are not well represented in our training database. For instance, there is no similar training image (Figure 3.10) to the example

3.7. Summary

97

from Figure 3.14 and the colour chart example in Figure 3.15. Note that this estimation method is used only for display purposes (white balancing) and can be used to estimate white point (colour) for the CAM and that the characterisation method actually yields physically-meaningful radiance values (not white balanced) in absolute CIEXYZ.

3.7

Summary

We have presented a new technique that can characterise HDR imaging systems, both in terms of luminance and colour. It is more accurate than previous reflectance-based characterisation methods and less time-consuming than monochromator-based techniques, which were designed for LDR imaging. We have validated the accuracy of the method using three different digital cameras and test data sets with radiometric measurements. Even though we have devised our method with HDR imaging in mind, the same technique can also be applied to characterise LDR devices. The proposed method enables measurement of real-world radiance as an HDR radiance map with significant accuracy. The radiance map contains the full dynamic range of the real-world radiance in a physically-meaningful way. In the next chapter, we will describe how physical stimuli in the real world are perceived by the human visual system. We will describe a series of psychophysical experiments and a colour appearance data set under high-luminance levels.

98

Chapter 4

High-Luminance Colour Experiments The previous chapter describes a method to characterise HDR imaging to digitise real-world radiance as an HDR radiance map to a high accuracy. The method yields physically-meaningful HDR radiance maps, equivalent to radiometric measurements of the real world. This chapter will describe how such physical colour stimuli are perceived by the human visual system. We describe the experimental measurement of colour appearance under high luminance levels. This data set was used to develop a new colour appearance model (see Chapter 5) to complete colour communication in HDR imaging. In order to quantify actual perceptual colour appearance, we have conducted a series of magnitude estimation experiments. Observers are presented with a large number of coloured patches in succession, for which they have to estimate lightness, colourfulness, and hue values. Parameters influencing the estimates are changed across different phases of the experiment: background level, luminance (and colour temperature) of the reference white, and ambient luminance. We designed our psychophysical experiment in a similar way to the LUTCHI experiment, which allows us to leverage their existing data. However, our experiment differs from LUTCHI by including high luminance levels of up to 16 860 cd/m2 as well as a large number of phases, where the background intensity is varied. (The LUTCHI data set for the simultaneous contrast effect [Luo et al., 1995] is not publicly available.)

4.1

High-Luminance Display

As mentioned in Section 2.3.3, current colour appearance data sets, mostly LUTCHI [Luo et al., 1991a,b, 1993a,b, 1995], present the limited dynamic range of luminance. For instance, among the data sets, only Luo et al. [1993b] describe colour appearance of transparency signboards under high levels of luminance up to 1 272 cd/m2 , where only four colour samples were used with more than 1 000 cd/m2 . Most of the colours in the LUTCHI data sets are under 690 cd/m2 , which was limited by the available display technology in the early 1990s. In order to span an extended range of luminance levels (up to five-order magnitude, equivalent to the working range of the eye’s cone), we built a custom high-luminance display device which is capable of delivering up to approximately 30 000 cd/m2 , see Figure 4.1. The setup consists of a light box, powered by two 400W hydrargyrum medium-arc iodide (HMI) bulbs, transmitting light

4.1. High-Luminance Display

Figure 4.1: A custom-built high-luminance display.

99

The display can produce a luminance of

2

2 200 cd/m when used as an LCD display and up to 30 000 cd/m2 when used with transparencies.

through a filter ensemble followed by either a 1900 LCD panel (original backlight removed) or a diffuser onto which transparencies are placed. The light source spectrum conveniently resembles fluorescent backlights, close to a correlated colour temperature of 6500K. Moreover, HMI bulbs stay cool enough to keep the LCD panel from overheating (see Figure 4.2 for overall design).

4.1.1

Design and Manufacturing

The main insight of our display device is to achieve a higher luminance level by replacing the backlight unit in an ordinary LCD display. This simple replacement creates two new issues: over-heating of the LCD panel and calibration of the display. To avoid heat from the high-luminance light bulbs, we first choose an HMI bulb (400W Iwasaki Electric company Ltd — Eye MT400DL) with a 400W electronic ballast as a light source. As shown in Figure 4.4(b), the spectral power distribution of the bulb is quite similar to an ordinary florescent light bulb [see Figure 2.8(b)], as also used in the LUTCHI experiment [Luo et al., 1993b]. The minor differences of these spectrums are calibrated by using an ICC profile (see Section 4.1.2 for more details). The measured CCT of the bulb was 6494K, corresponding to CIE D65 illuminant. In addition, the HMI-type bulb produces more energy toward the visible spectrum, and relatively lower energy in the infrared wavelengths, compared to filament-type halogen bulbs. Consequently, the HMI bulb produces much less heat than other types of bulbs. However, liquid crystals are rather sensitive to heat. Any heat energy can affect crystal liquids to close the pixel by turning the direction of the liquid crystals (becoming black). Generally, LCD panels function under a temperature of 50◦ C. For this reason, although HMI bulbs produce less energy than halogen bulbs, heat ventilation was required to ensure this LCD panel work properly at under 45◦ C (ensured by measuring with a thermometer). Peak luminance (and with it the luminance of the reference white, as well as of all colour

4.1. High-Luminance Display

LCD controller

LCD panel (or transparency) & Diffuser

Neutral density or colour control filter Fire glass Air UV filter

100

Lid (2X) Fans

(2X) Reflectors

Heat ventilation

HMI bulbs (2 X 400W)

(2X) Ballasts for HMI bulbs

Figure 4.2: Design of the high-luminance display. From the left, an LCD panel or transparency is placed to produce colour stimuli. A slot is made for neutral density or colour control filters to control luminance and colour temperature of the light source. Double-glazed fire glass is installed to isolate the heat energy against the LCD panel. Two 400W HMI bulbs are used as a light source. Two 400W ballasts and two fans are located outside the box. Heat is vented from the top and the back sides of the display. A thermometer is installed to check the temperature of the inner chamber (keeping the box temperature at approximately 45◦ C).

(a) Lighting panel

(b) LCD panel and its controller

(c) Fans and electronic ballasts

(e) Fire glass, filters, and LCD

(f) Overall top view

(g) Front view without LCD

Figure 4.3: Compartments of the high-luminance display. Image (a) presents the inner back side that contains two HMI bulbs, two fans, their power supply, and electronic wires. These elements (except the bulbs) are covered with aluminium tin foil to improve energy efficiency. Image (b) shows the LCD panel, its power supply, and its VGA controller. Image (c) is a photograph of the outer back side panel. To protect the ballasts against heat, two electronic ballasts that produce flicker-free light are installed outside the display. From the top of the image (e), the double-glazed fire glass is installed to isolate infrared light and heat energy from the LCD panel, then (3⇥) UV filters are installed to avoid ionisation, then finally the LCD panel unit. Image (f) shows overall top view before installing the top panel. Image (g) presents a front view of the display before installing the LCD panel.

4.1. High-Luminance Display

101

samples) is controlled by placing additional neutral density (ND) filters into the light box (which preserves amplitude resolution). Combinations of different ND filters creates peak luminances of approximately 50, 125, 500, 1 000, 2 200, 8 500, and 16 860 cd/m2 used in our experiment. In addition, we can modify the colour temperature of our light source by placing Rosco colourtemperature-changing filters inside the light box. Our experiments use four different colour temperatures: 2000K, 6500K, and 8000K with the LCD, and 6000K with transparencies. We used a Samsung SM931C 1900 SXGA TFT LCD panel, which has a resolution of 1280x1024 (response time: 2ms) and a contrast ratio of approximately 1:1000 (according to its specification). When used with the LCD, the maximum displayable luminance is 2 250 cd/m2 (similar to the Dolby HDR display [Dolby, 2008]). Owing to the 8-bit LCD, the amplitude resolution is only 256 steps (less than for a real HDR display [Seetzen et al., 2006]). However, this is not critical, as the experiment only requires sparse sampling of the colour space. For transparencies, the maximum luminance reaches 30 000 cd/m2 , with virtually arbitrary contrast and amplitude resolution.

0.7 0.6 0.5

v'

0.4 0.3 0.2

Raw display sRGB

0.1

Spectral locus Characterised

0.0 0.0

(a)

0.1

0.3

u'

0.5

0.6

0.7

1

Light source

0.8 0.6 0.4 0.2

Red Green Blue

0.8 0.6 0.4 0.2 0

0

(b)

0.4

Normalised measured radiance

Normalised measured radiance

1

0.2

380

430

480

530

580

630

680

730

Wavelength [nm]

780

(c)

380

430

480

530

580

630

680

730

780

Wavelength [nm]

Figure 4.4: Colour gamut and spectral power distribution of the high-luminance display. In Plot (a), a red triangle presents the gamut of the raw colour primaries, and a orange triangle shows the gamut of characterised primaries of our high-luminance display. Plot (b) and (c) show measured spectral power distributions of the HMI bulb and the calibrated display. The light source presents undesirable strong peaks in the middle of its spectrum, which causes viewing angle dependency in the final display. Therefore, participants’ viewing angle was fixed perpendicular to the centre of the display to avoid the colour appearance changes by viewing angle.

4.2. Stimuli

4.1.2

102

Calibration

Using a Specbos Jeti 1200 spectroradiometer, we colour-calibrated the LCD version of our display to match an sRGB colour gamut and a gamma of 2.2 by generating a ICC [2004] profile (see Appendix A.4 for radiometric measurements for device characterisation), but our light bulbs produce a smaller colour gamut toward the red primary compared to sRGB colour space [see Figure 4.4(a)]. Hence, the display produces a colour space similar to sRGB, but with much higher luminance levels. In addition, when using the transparent film panel, the display covers higher levels of luminance and a wider colour gamut beyond the LCD panel (see Figure 4.6). We further measured the spectra of all displayed colour patches (LCD and transparencies), as well as background and reference white. In addition, the reference white was re-measured at the beginning (after the HMI bulbs output had stabilised after a few hours) and at the end of each day to ensure repeatability. Even though HMI light bulbs are known to change colour temperature over their lifetime (approximately 0.5K for each hour), over the two-week period of our experiments, we recorded only an insignificant variation of about 3% in luminance and a 1% decrease in colour temperature.

4.2

Stimuli

The setup for recording our perceptual measurements is adapted from the LT phases (cut-sheet transparencies) of the LUTCHI experiments [Luo et al., 1993b]. A participant is asked to look at a colour patch presented next to a reference white patch and a reference colourfulness patch (with a colourfulness of 40 and lightness of 40), as shown in the centre of Figure 4.5. The viewing pattern is observed from 60 cm distance and normal to the line of sight, such that each of the approximately 2⇥2cm2 patches covers approximately 2◦, and thus the whole display approximately 50◦ in the field of view of the participant (with the test colour patch being in the centre). The background is black or gray, with 32 random decorating colours at the boundary, simulating a real viewing environment. We selected 40 colour patches as stimuli, carefully chosen to provide a good sampling of the available colour gamut and to provide a roughly uniform luminance sampling. The 40 colour patches, background luminance level, and reference white patch were measured by a spectroradiometer before taking experiments with participants (see Figure 4.7 and Table 4.1 and Appendix A.6 for physical/perceptual measurements). Figure 4.6 shows the distribution of these 40 patch colours for each device. The patch sets for the LCD and transparency setup are different, as it is neither easy to match their spectra nor necessary for the experiment. Since the perception of lightness, colourfulness, and hue is strongly correlated with parameters such as luminance range, reference white, background level, and surround condition [Stevens and Stevens, 1963; CIE, 1981; Luo et al., 1991a; Luo and Hunt, 1998; Hunt et al., 2003], our study explores relevant slices of this high-dimensional space. We partition the experiment into different phases, with a specific set of parameters in each phase (see Table 4.1). We primarily focus on the influence of luminance range and background level on colour perception as these two dimensions are known to have the strongest perceptual influence [Luo et al., 1991a]. We performed experiments

4.2. Stimuli

Adapting field (10~12)

103

Background

Enter numbers here

Surround (outside the screen)

Test colour (2)

Reference white

Reference colourfulness

Decorating colours

Figure 4.5: The viewing pattern observed by participants (made with the C/C++ in Microsoft Visual Studio). Participants were presented with a series of test colour samples in the centre of the screen. They entered three estimated magnitude numbers (lightness, colourfulness, and hue) by using a keyboard numeric pad. Reference white is located to the left and below the test colour. The reference white patch is used for lightness estimation on a relative scale. Reference colourfulness is located to the right below to provide an anchor point when observers estimate colourfulness magnitude on an absolute scale. The adapting field (10-degree viewing angle) is used for measuring the luminance adaptation level of the eye. All other screen area is background, which includes decorating colours around edges to simulate the real-world viewing environment. Finally, areas outside of the screen are assumed to be surround, including the luminance level of the room.

0.7 0.6 0.5

v'

0.4 0.3 0.2 Trans. Colour Patches LCD Color Patches Spectral Locus

0.1 0.0 0.0

0.1

0.2

0.3

u'

0.4

0.5

0.6

0.7

Figure 4.6: Colour coordinates of the 40 LCD and transparency patches (CIE u0 v0 ).

4.3. Experiments Physical measurements (CIEXYZ)

104 Perceptual measurements (LCH)

Observer Spectroradiometer

Light source

Light source

LCD panel or transparency

LCD panel or transparency Darkroom

Darkroom

T/normal

T/normal

Figure 4.7: Our high-luminance display device is placed in a dark room where colour patches were measured by a spectroradiometer (physical quantities) and estimated by trained observers (corresponding perceptual quantities) in colour experiments. up to a peak luminance of 16 860 cd/m2 (corresponding to white paper in noon sunlight); higher luminance levels were abandoned as they were too uncomfortable for the participants. As previous colour experiments have already covered low luminance, we conducted only a few low-luminance experiments (phases 1–5 in Table 4.1) to verify consistency.

4.3 4.3.1

Experiments Experimental Procedures

A crucial point to psychophysical measurements conducted through magnitude estimation is that each observer clearly understands the perceptual attributes being judged. Each observer completed a 3-hour training session with the actual viewing pattern (using a different set of colour patches) to develop a consistent scale for each of the required perceptual attributes (lightness, colourfulness, and hue). For data compatibility, the same scaling units and instructions (see Appendix A.5) were used as in the LUTCHI data sets [Luo et al., 1993b]. We employed six fully trained expert observers, all of whom were research staff from our institution, who had passed the Ishihara and City University vision tests for normal colour vision. At the beginning of each phase, observers spent 5 minutes for high luminance and 30 minutes for dark luminance adapting to the viewing conditions. Each observer spent around 10 hours on the experiment in a dark room, usually distributed over two

(a)

(b)

Figure 4.8: Viewing pattern observed by participants [(a) with the LCD panel and (b) with transparency].

4.3. Experiments

Observers

Phases

Samples

Sequences

Estimates

6–7

60

105

9,450

28,350

Numbers

Phase 1

105

Light 5935K

Type LCD

Peak Lumin.

Backgrnd.

Ambient

2

24%

dark

2

44 cd/m

2

6265K

LCD

123 cd/m

21%

dark

3

6265K

LCD

494 cd/m2

0%

dark

4

6265K

LCD

521 cd/m2

24%

dark

LCD

2

87%

dark

2

0%

dark

2

5 6

6197K 6197K

LCD

563 cd/m 1 067 cd/m

7

6197K

LCD

1 051 cd/m

22%

dark

8

6390K

LCD

2 176 cd/m2

0%

dark

LCD

2

12%

dark

2

23%

dark

2

9 10

6392K 6391K

LCD

2 189 cd/m 2 196 cd/m

11

6387K

LCD

2 205 cd/m

55%

dark

12

6388K

LCD

2 241 cd/m2

95%

dark

LCD

2

21%

dark

2

19%

dark

2

13 14

7941K 1803K

LCD

1 274 cd/m 1 233 cd/m

15

6391K

LCD

2 201 cd/m

23%

average

16

5823K

Trans.

8 519 cd/m2

6%

dark

Trans.

2

21%

dark

2

5%

dark

2

22%

dark

17 18 19

5823K 5921K 5937K

Trans. Trans.

8 458 cd/m 16 860 cd/m

16 400 cd/m

Table 4.1: Summary of the 19 phases of our experiment. In each phase, 40 colour samples are shown. Each participant totalled 2 280 estimations, which took around 10 hours per participant.

4.3. Experiments

106

days (see Figure 4.8 for snapshots of the experiments). After the adaptation time, each colour sample was shown in a random order and the participants had to estimate three perceptual attributes: lightness, for which observers used a fixed scale from 0 (imaginary black) to 100 (reference white); hue, where observers were asked to produce a number indicating the hue using neighbouring combinations among four primaries — red-yellow (0–100), yellow-green (100–200), green-blue (200–300), blue-red (300–400); and colourfulness, where observers used their own open scale, with 0 being neutral and 40 equaling the anchor colourfulness. The participants entered the data using a keyboard. After each phase, participants were asked to judge the colourfulness of the reference colourfulness patch of the next phase relative to the previous one in order to allow inter-phase data analysis.

4.3.2

Colour Appearance Attributes

Colour appearance attributes can be quantified in either relative or absolute scales. An interesting question in designing a psychophysical experiment is which type of scale is a better choice in describing colour attributes. Brightness and colourfulness are attributes on absolute scales; lightness and chroma are relative attributes with respect to the maximum levels of brightness. Hue is a relative attribute describing the proportion of primary colours (see Section 2.3.1 for colour terminology), and hence a partitioning experiment is the only available method for this attribute. Generally, a partitioning experiment provides more convenience than magnitude estimation. The reason is that magnitude can only be estimated when considering the memory of the previous Yellow (b*+) (H: 100)

Q2

Q1

Green (a*-) (H: 200)

Red (a*+) (H: 0-400)

Q3

Q4

Blue (b*-) (H: 300)

Figure 4.9: Perceptual colour primaries. Imagine that a participant observes a test colour (suppose purple). He chooses one of the quadrants (Q4) that best matches the test colour. He decides the proportion of the nearest primaries (blue and red) on a percentage scale (like 60% of blue and 40% of red) that make up the test colour. Then, he adds the nearest small primary quadrature value (blue = 300) on the decided proportion (300 + 60) to obtain the hue quadrature value (360) of the test colour.

4.3. Experiments

107

trial (often assisted with an anchor point). Obtained data is on a subjective arbitrary scale, which depends on each participant individually. Therefore, if partitioning is possible, relative scaling is a better choice to improve the efficiency of the experiments. The question is whether relative scaling is possible in colour appearance experiments. Measuring lightness is achievable by providing a reference maximum brightness (reference white). For instance, each participant can be asked how bright the patch is with respect to the reference white. The participant can estimate a level of brightness on a percentage scale without difficulty. Thus, we choose lightness scaling over brightness scaling to allow relative scale assessment. However, scaling chroma is questionable as Kwak [2003] and Fairchild [2005] suggested when commenting on the LUTCHI experiments. Following the colour attribute definition by Hunt [1998] (see Section 2.3.1 for definitions), chroma is a relative judgement of colourfulness with respect to reference white, but it is a very difficult task to normalise a judged colourfulness intensity by the brightness level of the reference white. Therefore, simply asking for colourfulness intensity (ignoring maximum brightness level) is more intuitive than asking for normalised colourfulness. In this way, the colourfulness judgements become easier to understand for participants. For saturation, we would need to ask participants to judge their own assigned brightness level for a given test colour and, accordingly, to judge the colourfulness of the patch given this assigned brightness level. This includes the judgements of two different colour appearances. Consequently, it is better to ask for colourfulness directly than to ask for either chroma or saturation. Therefore, we asked the participants directly to judge the absolute quantity of colourfulness with the help of an anchoring reference colourfulness patch, which was also used in previous LUTCHI experiments [Luo et al., 1991a, 1993a].

4.3.3

Inter-phase Colourfulness

In our experiments, the reference colourfulness patches were chosen to have a colourfulness of 40 according to the CIELAB colour space. It should be noted that the reference colourfulness is only meant to anchor the estimates, and as such any colour or any value can be chosen. To allow comparisons between different phases, we asked participants to rate the colourfulness of the reference colourfulness patch based on the reference colourfulness patch from the previous phase (memory experiment). The results are shown in Figure 4.10, where (a) plots the averaged perceived colourfulness of the reference colourfulness for different luminance levels (44–16 400 cd/m2 ) with a fixed background (20%), and (b) plots perceived reference colourfulness for different background levels (0–95%) with a fixed luminance level (2 200 cd/m2 ). Averaged perceived colourfulness increases up to 62.12% in proportion to the logarithm of luminance and decreases up to 31.73% in proportion to the luminance levels of background. The average CV of these colourfulness memory experiments was 20.93%. In particular, the variation of the colourfulness change by background is higher than that change by luminance; the slope of change by background is also smaller than that of luminance. Our results show that the luminance level has more impact than the background level on colourfulness perception. Finally, as the participants estimate colourfulness by using a same an-

4.3. Experiments

108

chor point (colourfulness: 40), we can scale the perceived colourfulness by the change of reference

90 80 70 60 50 40 30 20 10 0

Colourfulness (geomean)

Colourfulness (geomean)

colourfulness between phases.

1

2

4

7 Phase

(a)

10

17

90 80 70 60 50 40 30 20 10 0

19

8

9

(b)

10

11

12

Phase

Figure 4.10: Plot (a) shows the average perceived colourfulness of the reference colourfulness patch for different luminance levels (44–16 400 cd/m2 ) with a fixed background (in 20%). Plot (b) presents the perceived reference colourfulness for different background levels (0–95%) with a fixed luminance level (2 200 cd/m2 ).

4.3.4

Observer Repeatability and Variation

The soundness of obtained colour appearance data was tested by evaluating variation at different time (repeatability) and overall CV errors in all phases (accuracy) [see Equation (2.13) for CV calculation]. Three observers repeated two phases (phases 7a and 7b) of the original experiment (phase 7) in order to judge long- and short-term repeatability. Phase 7 was conducted in the first week of December in 2008; the other two phases (7a and 7b) were conducted a month later. The average CV of short-term repeatability between two different experiments (7a and 7b) was 10.06% for lightness, 17.23% for colourfulness, and 7.22% for hue (see Figure 4.11 for qualitative comparison). Comparing one-month different experiments (phases 7 and 7a), the average CV of long-term repeatability was 11.83% for lightness, 22.82% for colourfulness, and 11.42% for hue. In addition, we tested overall observer variation of all phases by calculating CV error. The average CV of all the observers in all phases was 14.89% for lightness, 31.91% for colourfulness, and 9.37% for hue (see Table 4.2 for comparison). In particular, the colourfulness estimation had higher variation than other appearances, but this was also observed in previous colour experiments [Luo et al., 1991a,b, 1993a,b, 1995]. For instance, in the LUTCHI data sets phases of lightness varied 11–18% (CV), colourfulness phases varied 13–27%, and hue phases varied 4–7%. The LUTCHI data sets present similar variations to ours. Figure 4.11 shows a qualitative comparison of two different experiments (phase 7a and 7b) to measure short-term repeatability. It represents later estimations of the same colour stimuli against former estimations. Although small variation is observed in these two phases, data is scattered along the diagonal of these plots (a straight line on the diagonal indicates an ideal match). The later estimates (the Y axis) of lightness, colourfulness, and hue present the same trend compared to the former estimates (the X axis), and no bias nor skewness is observed.

4.3. Experiments

109

As shown in Table 4.2, hue estimates were more consistent than lightness and colourfulness estimates. Lightness estimates were more consistent than colourfulness estimates. Similar trends are also observed in the LUTCHI data sets. In particular, the largest variation was observed in colourfulness estimation. In post-experiment interviews, all the participants reported that colourfulness is the most difficult to judge and that open-end magnitude estimation (colourfulness) is more difficult than simple partitioning (lightness and colourfulness). For the larger variation of colourfulness, we could trace back to the difficulty of magnitude estimation, but the quality of our appearance data is consistent with previous experiments.

Observer Variance

Lightness

Colourfulness

Hue

Short-term repeat.

10.06%

17.23%

7.22%

Long-term repeat.

11.83%

22.82%

11.42%

All phases

14.89%

31.91%

9.37%

Table 4.2: Observers repeatability and all-phases variation.

120

Phase 7b (aver. colourfulness)

400

100

80 60 40 20 0 0

20 40 60 80 100 Phase 7a (aver. lightness)

Phase 7b (aver. hue)

Phase 7b (aver. lightness)

100

300

80 60

200

40

100

20

0

0 0

20 40 60 80 100 120 Phase 7a (aver. colourfulness)

0

100 200 300 400 Phase 7a (aver. hue)

Figure 4.11: The repeatability of observers was tested by using stimuli in phase 7. The X axis represents the estimations of lightness, colourfulness, and hue in phase 7a. The Y axis shows these estimations in phase 7b that was repeated after phase 7a.

4.3.5

Differences to Previous Experiments

Previous perceptual attribute correlates have been derived mostly from the LUTCHI data sets because it is publicly available (with one addition: an appearance data set for the simultaneous contrast effect [Luo et al., 1995]). The LUTCHI data sets comprise eight different viewing conditions: high-luminance reflective paper (R-HL), low-luminance reflective paper (R-LL), low-luminance reflective paper comparing lightness with brightness (R-VL), reflective textile (R-Textile), CRT display (CRT), transparency (LT), 35mm slide projector film (35mm), and supplemental reflective paper and transparency measurements (BIT) [Luo et al., 1991a,b, 1993a,b, 1995, 1997], but were really geared towards reflective surfaces and low-luminance conditions. Most of their experiments were carried out with a maximum luminances of up to 690 cd/m2 , except the cut-sheet transparency condition [Luo et al., 1993b], which included a total of only four colour patches (used in two different

4.4. Data Analysis

110

phases) with a luminance over 1 000 cd/m2 . It should be mentioned that there are some distinct differences in our experiments to previous experiments. The LUTCHI data set was geared towards reflective surfaces and low-luminance conditions — no data are available for extended luminance levels. As a result, colour appearance models derived only from LUTCHI cannot robustly model colour appearance under higher luminance levels. This can be seen in Chapter 6. In addition, data sets used in other experiments are not publicly available. In order to verify experimental consistency with the LUTCHI data sets, we conducted a few low-luminance experiments (phases 1–5 in Table A.36) as previous colour experiments have already covered low luminance. Figure 4.12 compares one low-luminance phase between LUTCHI (phase 6 in [Luo et al., 1991a] – on a CRT display with a peak luminance of 40.5 cd/m2 ) and ours [phase 1 on our high-luminance display with neutral-density (ND) filters, producing a peak luminance of 44 cd/m2 ]. Although both experiments were conducted on different display devices (a CRT and an ND-filtered high-luminance display) and different viewing conditions (unknown in LUTCHI), the quantified values of colourfulness and hue in both data sets present a very similar trend [see Plots (b) and (c)]. Lightness perception shows some differences in the perceived lightness of middletone colours [see Plot(a)]. We explain the differences with the fact that lightness perception is considerably changed by medium type [Luo et al., 1993b] (see Chapter 5 for more details) and with the (unknown) differences in viewing conditions.

4.4

Data Analysis

For lightness and hue estimates, all observers had to use the same numerical scale with fixed end points. Given minimum and maximum values to judge the lightness and hue attributes, this forced the observers to use a partition technique rather than pure magnitude estimation [Stevens, 1971]. Consequently, we can compute the arithmetic mean between all observers in order to find the central tendency measure for partitioning. Note that for hue, the scale is circular and care needs to be taken when averaging. If an observer’s response were a mixture of R-Y and B-R, one of the responses was moved to the other end of the scale between 0 and 400, e.g., in case of 20 and 390, 390 is converted to -10, and averaged with 20. For colourfulness scaling, the observers applied their own open-end arbitrary scale (pure magnitude estimation). Colourfulness estimates, on absolute scales, were analysed, following [Bartleson, 1979; Pointer, 1980]. According to [Stevens, 1971], the sensation of a signal always presents a power function. Therefore, the appropriate central tendency measure for magnitude estimation is the geometric mean, but only after relating the observers’ responses to each other (since observers use their individual scales). We follow the same method as [Pointer, 1980] and map each observer’s responses to the mean observer. Each observer produces their own scale unit of sensation b in their own attribute a. The ob-

4.5. Colour Appearance Phenomena 120

40 20

LUTCHI [1991a] ph.6 Our phase 1

0 0

20 40 60 80 100 Normalised luminance (Ymax=100)

(a) Lightness

Our phase 1

300 80

Perceived hue

60

400

LUTCHI [1991a] ph.6

100

80

Perceived colourfulness

Perceived lightness

100

111

60

200

40

100

LUTCHI [1991a] ph.6

20

Our phase 1 0

0 0

20

40 60 80 CIELAB C*

100

120

0

(b) Colourfulness

100

200 300 CIELAB H*

400

(c) Hue

Figure 4.12: Qualitative comparison between LUTCHI and our appearance data. This figure compares low-luminance phases in LUTCHI (phase 6 in [Luo et al., 1991a] – on a CRT display) and ours (phase 1 on our high-luminance display with ND filters). Plot(a) presents perceived lightness against normalised incident luminance. Plot(b) and (c) show perceived colourfulness and hue of our data and LUTCHI data against CIELAB chroma C* and hue H* (scaled to 400). For qualitative comparison of colourfulness and hue (which are not measurable in a physical sense as is luminance), we use CIELAB colour space instead as the CIELAB space does not account for any viewing environmental conditions (see Section 2.3.4 for more details). In Plot(a), lightness perception presents differences in the perceived lightness of middletone colours. The differences are explained by the fact that lightness perception is considerably changed by medium type, i.e., LUTCHI data here [Luo et al., 1993b] employs a CRT display and our measurement uses an LCD display. The different spectral characteristics of these media causes different perception of lightness due to unknown differences in viewing conditions.

servers response magnitude R can be modelled [Stevens, 1971] as follows: R = aS b ,

(4.1)

where S is the stimulus magnitude. The observer’s scale and attribute can be mapped into a common scale (geometric mean according to [Stevens, 1971]). When the common geometric mean responses R of all the observers to given stimuli S is computed, each individual observer’s scale and attribute, the constants a and b, can be found by least-squares fitting in log-log domain: log10 R = b log10 S + a.

(4.2)

This enables us to correct each observer’s data to a common scale. Then, the arithmetic mean of the converted data turns out to match to the geometric mean of original data. As a result, each individual colourfulness measurement is able to be compared with others for arithmetic comparison. The CV was mainly used as a statistical measure to investigate the agreement between any two sets of data [see Equations (2.12) and (2.13)].

4.5

Colour Appearance Phenomena

Before describing our colour appearance model in the next chapter, this section will describe the important findings and trends observed in our data. The findings of our experiments agree with

4.5. Colour Appearance Phenomena

112

those of previous experiments (see Section 2.3.3 for more details on colour appearance phenomena). However, our experiments quantify known colour appearance phenomena in the full range of the human visual system (five orders of magnitude). Colour appearance data sets have higher variation than other scientific measurements [Luo et al., 1991a] as they are commonly derived via the magnitude estimation method. Therefore, the central tendency of colour appearance attributes is broadly accepted and used in colour appearance modelling as opposed to the student’s t-test. The following subsections describe qualitative and quantitative findings from our experiments. The observed colour appearance phenomena are presented by plotting them against CIELAB colour appearance. As mentioned earlier, the lightness, chroma, and hue coordinates in CIELAB are assumed here to be physical measures such that they do not take into account the viewing environment (see Section 2.3.4 for more details). Physical measurements from the spectroradiometer in CIEXYZ are simply transformed into L*, C*, and H* (scaled to 400) coordinates in CIELAB for comparison with perceptual measurements.

4.5.1

Luminance Effect on Lightness

Perceived lightness is plotted against physical measurements in Figure 4.13. The Y axis represents perceived lightness, and the X axis shows the lightness value L* (in CIELAB) of the incident light. 40 colour patches were observed by participants with a variation in luminance. Other viewing conditions were fixed: background ratio (23%), colour temperature (6197K), and a dark surround. Luminance (controlled by ND filters in front of the light source) is set at 44, 123, 397, 1 051, and 2 196 cd/m2 . We found that the perceived lightness of the medium colours (not dark and not bright) increases when the luminance level increases and that the shape of the perceived lightness curve changes due to the luminance difference. The average perceived lightness increases with increased peak luminance, see Figure 4.13(b). Lightness in our data shows a similar trend to the LUTCHI experiments. In our data, the average lightness of the 40 colours increases by 5.26% per magnitude of peak luminance [log(peak luminance)]. LUTCHI data sets show that darker colours appears lighter under higher luminance by approximately 4%.

4.5.2

Luminance Effect on Colourfulness

Perceived colourfulness is plotted against physical measurements in Figure 4.14, using the same viewing environment as in Section 4.5.1. The Y axis presents perceived colourfulness, and the X axis shows the chroma value C* (in CIELAB) of the incident light. Colourfulness shows a similar trend. We note that the perceived colourfulness of the bright colours mainly increases. The average perceived colourfulness increases with increased peak luminance (fixed background ratio), as shown in Figure 4.14(b). At higher luminance levels perceived colourfulness increases. It is shown that the slope of the perceived colourfulness trends changes due to the peak luminance. In our data, the average colourfulness of the 40 colours increases by 13.09% per magnitude of peak luminance [log(peak luminance)]. The LUTCHI data sets [Luo et al., 1993b] show that colourfulness increases under higher luminance by approximately 6%.

4.5. Colour Appearance Phenomena

80

80

Perceived lightness

100

Perceived lightness

100

60 40 20

44cd/sqm

0

60 40 20

123cd/sqm

0 0

20

40

60

80

100

0

20

CIELAB L*

(a)

40

60

100

80

80

80

40 20

397cd/sqm

0

Perceived lightness

100

60

60 40 20

1,051cd/sqm

0 0

20

40

60

80

100

100

60 40 20

2,196cd/sqm

0 0

CIELAB L*

80

CIELAB L*

100

Perceived lightness

Perceived lightness

113

20

40

60

80

100

0

CIELAB L*

20

40

60

80

CIELAB L*

100

Aver. perceived lightness

90 80 70 60 50 40 30 20 10 0 10

(b)

100 1,000 Luminance (fixed back.) [cd/sqm]

10,000

Figure 4.13: (a) Lightness perception for different luminance levels (phases 1, 2, 4, 7, and 10). (b) Average lightness perception for different luminance levels.

100

120

120

100

100

Perceived colourfulness

Perceived colourfulness

4.5. Colour Appearance Phenomena

80 60 40 20

44cd/sqm

80 60 40 20

0

123cd/sqm

0 0

20

40

60

80 100 120

0

20

CIELAB C*

(a)

40

60

80 100 120

CIELAB C*

120

120

100

100

100

80 60 40 20

397cd/sqm

0 0

20

40

60

80 60 40 20

1,051cd/sqm

Perceived colourfulness

120

Perceived colourfulness

Perceived colourfulness

114

0

80 100 120

60 40 20

2,196cd/sqm

0 0

CIELAB C*

80

20

40

60

80 100 120

0

CIELAB C*

20

40

60

80 100 120

CIELAB C*

Aver. perceived colourfulness

100 90 80 70 60 50 40 30 20 10 0 10

(b)

100 1,000 Luminance (fixed back.) [cd/sqm]

10,000

Figure 4.14: (a) Colourfulness perception for different luminance levels (phases 1, 2, 4, 7, and 10). (b) Average colourfulness perception for different luminance levels.

400

400

300

300

Perceived hue

Perceived hue

4.5. Colour Appearance Phenomena

200 100

44cd/sqm

0 0

100

300

200 100

123cd/sqm 0

400

0

100

CIELAB H*

(a)

200

300

400

300

300

300

100

Perceived hue

400

200

200 100

397cd/sqm 0 100

200

300

200 100

2,196cd/sqm

1,051cd/sqm

0 0

400

CIELAB H*

400

Perceived hue

Perceived hue

200

115

400

0 0

CIELAB H*

100

200

300

400

0

CIELAB H*

100

200 CIELAB H*

400

Aver. perceived hue

300

200

100

0 10

(b)

100 1,000 Luminance (fixed back.) [cd/sqm]

10,000

Figure 4.15: (a) Hue perception for different luminance levels (phases 1, 2, 4, 7, and 10). (b) Average hue perception for different luminance levels.

300

400

4.5. Colour Appearance Phenomena

4.5.3

116

Luminance Effect on Hue

Perceived hue is plotted against physical measurements in Figure 4.15. The same colour patches are observed by observers with variations in luminance. Other viewing conditions are the same as in Section 4.5.1. As shown in this qualitative comparison, the perceived hue does not present comparative variation with changes in luminance levels. The CV of the average perceived hue is only 1.98. Hue appears constant with regards to variations in luminance, which is consistent with previous data (LUTCHI), see Figure 4.15.

4.5.4

Background Effect on Lightness

Figure 4.16 presents the perceived lightness trend with variations in the background luminance. The peak luminance level is fixed at 2 241 cd/m2 , but the background ratio is changed to 0% (black), 12%, 23%, 55%, and 95% (white). The colour temperature was fixed at 6197K, and the surround was set to dark. Participants judged 40 colour patches against different backgrounds. The perceived lightness comparatively changes due to the background luminance. We note that the perceived lightness of all the colours clearly increases on the dark background. The average perceived lightness increases with decreased background luminance of 8.43% per magnitude of background luminance [log(backg r ound luminance)]. We note that in case of a black background (0% background ratio), the shape of the perceived lightness curve is also changed.

4.5.5

Background Effect on Colourfulness

Perceived colourfulness is presented with variations in background luminance in Figure 4.17. Other viewing conditions were set as in Section 4.5.4. We found that the perceived colourfulness of the medium-dark colours increases. Variation of the perceived colourfulness increases accordingly; however, the slope of the perceived colourfulness trends is not changed by the background ratio. The average perceived colourfulness increases with decreased background luminance by 6.48% per magnitude of background luminance [log(backg r ound luminance)].

4.5.6

Background Effect on Hue

Figure 4.18 presents the perceived hue with variation of the background ratio. Peak luminance, colour temperature, and surround were fixed as in Section 4.5.4. We found that the perceived hue does not show strong variation against different background ratios. The CV of the average perceived hue is only 1.82, see Figure 4.18(b).

4.5.7

Colour Temperature Effect on Colour Appearance

Figure 4.19 presents the perceived colour appearance with variations in colour temperature of the light source. 40 colour patches were presented against a background of ratio 23%, a fixed peak luminance of 1 233 cd/m2 , and a dark surround. The colour temperature of the light source was changed by using Rosco colour-temperature changing filters (1803, 6197, and 7941K). We found that perceived lightness presents small changes of 7-9% with variations in colour temperature and that perceived colourfulness presents also small changes of 14-18% with temperature variation. However, perceived hue under 1803K (yellowish) presents a different CV from others (6197 and

4.5. Colour Appearance Phenomena

80

80

Perceived lightness

100

Perceived lightness

100

60 40 20

Back. 0%

60 40 20

Back. 12%

0

0 0

20

40

60

80

0

100

20

40

60

100

80

80

80

40 20

Back. 23%

0 0

20

40

60

80

Perceived lightness

100

Perceived lightness

100

60

60 40 20

Back. 55%

100

100

60 40 20

Back. 95%

0

0

CIELAB L*

80

CIELAB L*

CIELAB L*

(a)

Perceived lightness

117

0

20

40

60

80

100

0

20

40

60

80

100

CIELAB L*

CIELAB L*

100

Aver. perceived lightness

90 80 70 60 50 40 30 20 10 0

(b)

0%

20% 40% 60% 80% Background level (fixed lumin.)

100%

Figure 4.16: (a) Lightness perception for different background levels (phases 8, 9, 10, 11, and 12). (b) Average lightness perception for different background levels.

4.5. Colour Appearance Phenomena

120

Perceived colourfulness

Perceived colourfulness

120 100 80 60 40 20

Back. 0%

0 0

20

40

60

80

100 80 60 40 20

Back. 12%

0 0

100 120

20

40

(a)

60

60 40 20

Back. 23%

0 0

20

40

60

80

Perceived colourfulness

Perceived colourfulness

80

100 80 60 40 20

Back. 55%

100 80 60 40 20

Back. 95%

0

0

100 120

100 120

120

120

100

80

CIELAB C*

CIELAB C*

120

Perceived colourfulness

118

0

CIELAB C*

20

40

60

80

100 120

0

20

40

60

80

100 120

CIELAB C*

CIELAB C*

Aver. perceived colourfulness

100 90 80 70 60 50 40 30 20 10 0 0%

(b)

20%

40%

60%

80%

100%

Background level (fixed lumin.)

Figure 4.17: (a) Colourfulness perception for different background levels (phases 8, 9, 10, 11, and 12). (b) Average colourfulness perception for different background levels.

400

400

300

300

Perceived hue

Perceived hue

4.5. Colour Appearance Phenomena

200 100

0

100

100

Back. 12%

200

300

0 0

400

100

200

400

300

300

300

100

0

100

200

300

400

200 100

200 100

Back. 55%

Back.: 23%

0

Perceived hue

400

Perceived hue

400

200

300

CIELAB H*

CIELAB H*

(a)

Perceived hue

200

Back. 0%

0

119

400

Back. 95%

0

0 0

CIELAB H*

100

200

300

400

0

100

200

300

400

CIELAB H*

CIELAB H*

400

Aver. perceived hue

300

200

100

0 0%

(b)

20%

40%

60%

80%

100%

Background level (fixed lumin.)

Figure 4.18: (a) Hue perception for different background levels (phases 8, 9, 10, 11, and 12). (b) Average hue perception for different background levels.

4.5. Colour Appearance Phenomena

120

7941K) of 37% (both with two others). The CV value of perceived hue between 6197K and 7941K is 5.86%. In the low colour temperature (1803K), yellowish colours appear more reddish, and bluish colours appear more reddish. As observed in [Li et al., 2002], our experimental data sets also show inconsistent chromatic adaptation in perceiving hue under different colour temperatures and that perceived colourfulness also changes depending on the colour temperature of the light source.

4.5.8

Surround Effect on Colour Appearance

The perceived colour appearance under different surrounds (dark and average – 0% and 20% of the peak luminance) is presented in Figure 4.20. 40 colour patches were observed by the participants under a peak luminance of 2 196 cd/m2 , a background ratio of 23%, and a correlated colour temperature of 6197K. In dark surround settings (0%), we used a dark room with all indoor lights turned off. In average surround settings (20%), florescent-type bulbs illuminated the environment in order to make the surround 20% as bright as the peak luminance [Moroney et al., 2002]. Participants

120

80

100

60 40 20

CCT. 1803K

0

400

Perceived hue

100

Perceived colourfulness

Perceived lightness

judged colour appearance in average bright viewing conditions. We note that perceived lightness,

80 60 40 20

20

40

60

80

100

0

20

40

60

80 100 120

0

100

60 40

CCT. 6197K 40

60

80

80 60 40 20

100

20

40

60 40

CCT. 7941K

0 20

40

60

CIELAB L*

CCT. 6197K

80 100 120

0

100

80

100

200

300

400

CIELAB H* 400

80 60 40 20

CCT. 7941K

0 0

60

Perceived hue

Perceived colourfulness

100

20

100

CIELAB C*

80

400

200

0 0

120

300

300

CCT. 6197K

CIELAB L* 100

200

400

0 20

100

CIELAB H*

Perceived hue

80

Perceived colourfulness

Perceived lightness

120

0

Perceived lightness

CCT. 1803K

CIELAB C*

100

0

100 0

CIELAB L*

20

200

CCT. 1803K

0 0

300

0

20

40

60

80 100 120

CIELAB C*

300 200 100

CCT. 7941K 0 0

100

200

300

CIELAB H*

Figure 4.19: Colour perception for different colour temperatures (phases 14, 7, and 13).

400

4.6. Discussion

121

colourfulness, and hue are almost identical [CV: (L) 9.16%, (C) 14.70%, and (H) 9.07% — less than the short-term repeatability] between the two different surrounds. For the minor changes in perceived hue, we suggest that the cause was the difference in colour temperature of the surround light (3323K) and viewing display (6197K).

100

60 40 20

Dark surr.

0 0

20

40

60

80

400

Perceived hue

80

Perceived colourfulness

120

Perceived lightness

100

80 60 40 20

Dark surr.

0 100

0

20

CIELAB L*

60

80

100 120

40 20

Aver. surr. 20

40

60

Dark surr. 0

100

80

100

CIELAB L*

200

300

400

CIELAB H* 400

100

Perceived hue

60

Perceived colourfulness

80

0

100 0

120

0

200

CIELAB L*

100

Perceived lightness

40

300

80 60 40 20

300 200 100

Aver. surr.

0

Aver. surr.

0 0

20

40

60

80

CIELAB C*

100 120

0

100

200

300

400

CIELAB H*

Figure 4.20: Colour perception for different surrounds (phases 7 and 15).

4.6

Discussion

In order to achieve high levels of luminance, we built a novel display device by utilising two HMI bulbs, which substitute the fluorescent back-lit unit in an LCD display. Its maximum level of luminance is approximately 30,000 cd/m2 . However, we performed experiments up to 16,860 cd/m2 and abandoned higher levels of luminance as the luminances were too uncomfortable for the participants. We mainly measured the impact of luminance and background level changes on colour perception. Hence, our experimental data contains limited variation of media and viewing conditions. For the variation of appearance on different media, LUTCHI data can be integrated as our data is compatible with LUTCHI data.

4.6.1

Perceived Lightness Appearance

The perceived lightness of the medium colours (not dark and not bright) increases when the luminance level increases. The average perceived lightness increases with increased peak luminance. This means that the shape of the perceived lightness curve changes due to the peak luminance. This was also shown by Stevens and Stevens [1963], and is called the Stevens effect. They attempted to

4.7. Summary

122

model the perceived lightness curve as a power function. However, the perceived lightness turns out to have more complex trends than a simple power function. This Stevens-influenced modelling is observed in other colour appearance models, e.g., CIELAB, LLAB, RLAB, and so on. We model this luminance effect on lightness in a more rigorous way (see Chapter 5 for more details) than others. The perceived lightness of all the colours clearly increases with a darker background. When the background luminance level increases, the average perceived lightness decreases (of all the colours), as shown by Bartleson and Breneman [1967]. This effect is called the simultaneous contrast effect. This phenomena is also modelled in our colour appearance model (see Chapter 5).

4.6.2

Perceived Colourfulness Appearance

Colourfulness shows a similar trend to lightness. The perceived colourfulness of brighter colours increases. At higher luminance levels, perceived colourfulness increases, which is known as the Hunt effect [Hunt, 2004]. This shows that the slope of the perceived colourfulness trends changes due to the peak luminance. The perceived colourfulness of the medium-dark colours mainly increases, which was also indicated by the participants in post-experiment interviews. The average perceived colourfulness increases against a darker background, as shown in the simultaneous contrast effect [Albers, 1963]. These two colourfulness phenomena are also modelled in our colour appearance model (see Chapter 5).

4.6.3

Perceived Hue Appearance

Hue is generally constant with regard to variation in luminance, background, and surround, which is consistent with previous data. However, perceived hue presents a variation in colour temperature of the light source. Reddish light (low colour temperature) makes colours appear slightly more reddish, and greenish-and-bluish light (high colour temperature) makes colours appear slightly more bluish. Lesser degrees of adaptation occurred under the low colour temperature (1803K), following Li et al. [2002]’s findings. These inconsistent colour constancy phenomena are modelled through a process called chromatic adaptation modelling (see Chapter 5).

4.7

Summary

Current display devices cannot display five-orders of magnitude of luminance and therefore cannot cover the working dynamic range of the human visual system. Hence, we built a new highluminance display device, which enables us to conduct colour appearance experiments under high luminance levels. Our experiments followed the methodology of previous LUTCHI colour experiments; therefore, our data set is compatible with in the existing colour appearance data. However, our colour appearance data set extends the range of luminance up to 16 860 cd/m2 . We summarise important findings and trends observed in our experimental data. If the luminance level increases, then lightness and colourfulness both increase. This confirms the Stevens and Hunt effects. In contrast, if the background luminance level increases, lightness and colourfulness both decrease, confirming the simultaneous contrast effect. Most of our findings are consistent with

4.7. Summary

123

the LUTCHI data sets, and similar trends can be observed in both data sets. However, the LUTCHI data sets quantify these colour appearance phenomena mostly under approximately 690 cd/m2 , but our data set covers luminance up to 16 860 cd/m2 . Although our colour appearance data includes less various media than the LUTCHI data sets and less variation in colour temperature, it covers the five-orders of magnitude of luminance. The range of the experimental data corresponds to the working range of the human visual system. This experimental contribution enables us to derive a new colour appearance model for an extended range of luminance levels. Accordingly, our numerical model covers the full range of colour perception of the human visual system. The next chapter describes our colour appearance model.

124

Chapter 5

A Colour Appearance Model for Extended Luminance Levels A colour appearance model (CAM) converts from physical measurements to perceptual quantities. This conversion differs amongst existing colour appearance models and involves numerical transfer functions that are matched to psychophysical observation data. These data are, in general, not publicly available and only implicitly embedded into CAMs derived from these data. The only available psychophysical data is from the LUTCHI experiments. Luo et al. [1991a,b, 1993a,b, 1995] measured human perception based mainly on reflective materials and low dynamic range conditions. The luminance level of these measurements is lower than that of many everyday situations in reality. For this reason, we conducted our own high-luminance colour experiments. These experiments, explained in the previous chapter, yielded a novel measurement of perceived colour appearance under extended luminance levels (up to 16 860 cd/m2 ). The dynamic range of the acquired appearance data set is close to that of the human visual system (about five-order magnitude). This enables us to numerically derive a new colour appearance model for high-dynamic-range luminance. In this chapter, a novel colour appearance model is presented to improve accuracy in predicting human colour perception. This model is able to predict not only image appearance as can other colour appearance models, but also real-world observation of the human visual system. The following section describes a forward appearance model and is followed by an analytical inverse model. Both models will be used to complete a cross-media colour reproduction technique for high-dynamicrange imaging in the next chapter.

5.1

Data Sets

For the developments of our colour appearance model, we use the maximum likelihood approach, which derives a model based on training data without taking prior information. However, performance on the whole training set is not a good indicator of predictive performance on the seen data due to the problem of over-fitting [Bishop, 2006]. Insofar as we have 19 phases, our approach is to use some of the available phases as input to a range of models, and to compare the models with independent phases as a validation set. We subgroup certain phases with four different criteria:

5.2. Forward Model

125

luminance-varying phases (1, 2, 4, 7, 10, 17, and 19) — group L, background-varying phases (8, 9, 10, 11, and 12) — group B, colour temperature-varying phases (10, 14, and 15) — group T, and surround-varying phases (10, 15) — group S. For modelling distinctive colour appearance phenomena in our experiments (Stevens, Hunt, and simultaneous contrast effects, see Chapter 4), we mainly use group L and B as training sets for predicting these phenomena. We used group T for chromatic adaptation and group S for surround effect. The other independent phases (3, 5, 6, 16, and 18) were used as a validation set, called group V. In addition, although the LUTCHI data set has colour samples under limited range of luminances, we also used them (R-HL, LT, and CRT) as a third test set for cross-validation and validation on different media.

5.2

Forward Model

We propose a new colour appearance model that closely follows Müller [1930]’s zone theory in order to perform well under high-luminance conditions. The model consists of three main components: chromatic adaptation, cone response, and visual cortex response for each perceptual colour attribute. It aims to accurately predict lightness, colourfulness and hue, including the Hunt effect (colourfulness increases with luminance levels), the Stevens effect (lightness contrast changes with different luminance levels), and simultaneous contrast effect (lightness and colourfulness changes with background luminance levels), as observed in Chapter 4. Additional correlates of brightness, chroma, saturation, hue quadrature, and Cartesian colour opponent coordinates will be derived as well. First, we model input parameters for this forward model as follow: • Absolute CIE tristimulus values (observed main colours): X Y Z,

• Absolute tristimulus values of the reference white point: X w Yw Zw , (where Yw corresponds to the peak luminance level L w ),

• Level of luminance adaptation: L a [unit: cd/m2 ]

(luminance of viewing stimuli at about 10-degree angle),

• A medium type: E (e.g., paper, CRT, transparency, or high-luminance display). The CIE defines colour elements as a light source (spectral energy), an object (normalised reflectance ratio on each wavelength), and a standard observer (presented as colour matching functions). Following this standard, previous colour appearance models take the normalised reflectance property (CIEXYZ, normalised to Y=100) for test colours. However, as shown in Chapter 4, absolute luminance matters in perceived colour appearance. The absolute scale of the measured radiance (CIEXYZ) can be very useful information for predicting colour appearance under high luminance levels. Therefore, we use absolute CIEXYZ measurements instead of normalised CIEXYZ. A spectroradiometer or a characterised HDR camera system [Kim and Kautz, 2008a] (see Chapter 3) can be used to measure absolute radiance. Our model also requires as input reference white point measurements on an absolute scale. Finally, our model requires the level of luminance adaptation by measuring the luminance of the viewing stimuli of a 10-degree viewing area. In our experimental

5.2. Forward Model

126

set, luminance adaptation level comprises 88% of the background luminance, 4% of the test colour luminance, 4% of the reference white luminance, and 4% of the reference colourfulness luminance (see Figure 4.5). This weighted-average luminance of this 10-degree viewing area is used as an input parameter for the level of luminance adaptation, following [Moroney et al., 2002]. In the following, we explain all the components of our model.

5.2.1

Chromatic Adaptation

Humans perceive object colours as constant under different illumination (called colour constancy). This is generally true; however, as shown in Section 4.5.7, lesser degrees of chromatic adaptation may occur under lower colour temperatures such as the CIE illuminant A (white appears slightly yellow, see Figure 4.19). Once our eye has adapted to a certain viewing condition, the perceived colours seem to be scaled by the adapted brightest colour. We assume that this scaling is only performed in cone colour space. Further, certain colours seem more sensitive than others depending their own hue. Such an inconsistency of the chromatic adaptation was discovered in surface colour research [Lam, 1985]. This inconsistent chromatic adaptation, called a chromatic adaptation transform (CAT), has been researched extensively, e.g., Bradford transform (BFD), CMCCAT97s, CMCCAT2000, and CIECAT02. These transforms were derived from data sets [Helson et al., 1952; McCann et al., 1976; Breneman, 1987; Mori et al., 1991; Kuo et al., 1995; Braun and Fairchild, 1996] and enable us to predict corresponding colours in changes of spectral characteristics of illuminant. However, most of these data sets are not publicly available. Chromatic adaptation is as large as a research field as appearance modelling for cross media; hence, generally previous CATs have been researched independently of colour appearance models. As the focus of our experiments was to extend the luminance range of colour appearance models, we exclude modelling chromatic adaptation from our research scope. Instead, we adopt one of the previously developed chromatic adaptation transforms. We tested a selection of transforms: the HPE transform (LMS cone space, used in RLAB) [Estévez, 1979], the BFD transform (used in CIECAT97s) [Lam, 1985], and the CIECAT02 transform (used in CIECAM02) [Li et al., 2002]. Group L of the luminance-varying phases (1, 2, 4, 7, 10, 17, and 19) is used for testing, assuming that the eye has adapted to the light source completely. As shown in Figure 5.1, the three colour transforms perform consistently better in terms of hue than raw calculations of CIELAB (von Kries chromatic adaptation in CIEXYZ). Therefore, these three colour transforms are worth considering in order to predict inconsistent chromatic adaptation with respect to hue. Among these, the HPE transform unfortunately changes the perceived chroma. The BFD and CIECAT02 transforms present similar performance with the CIECAT02 transform slightly outperforming the BFD transform in terms of colourfulness and hue. The BFD transform also has an invertibility problem [Fairchild, 2005]. Therefore, we chose and adopted the CIECAT02 model as our chromatic adaptation transform. Colourfulness errors increase slightly after applying transforms in all cases, but note that chromatic adaptation transform is used to predict hue changes with respect to illumination. Perceived colourfulness will be modelled later.

5.2. Forward Model

50

127

Lightness Colourfulness Hue

45 40

Average CV

35 30 25 20 15 10 5 0 CIEXYZ

M_HPE

M_BFD

M_CAT02

Chromatic adaptation transform

Figure 5.1: We compare three chromatic adaptation transforms (with CIEXYZ): the HPE transform (LMS cone colour space), the BFD transform, and CIECAT02. These three chromatic transforms are plugged into the CIELAB colour space structure as a form of von Kries chromatic adaptation. The calculated L*, C*, H* values are compared with perceptual measurements in phases 1, 2, 4, 7, 10, 17, and 19. Overall, CIECAT02 performs better than the other transforms.

In Equation (5.1), we transform the chromatically adapted cone signal, which is linear to incident radiation into the eye in absolute terms: 3 2 3 2 2 6 X 7 6 0.7328 6 RC 7 7 6 7 6 6 6 7 6 6 G 7=M CAT02 · 6 Y 7 , MCAT02 = 6 −0.7036 6 C 7 5 4 4 5 4 BC Z 0.0030

3 0.4296 1.6975 0.0136

−0.1624 7 7 0.0061 7 7. 5 0.9834

(5.1)

It takes the incident (absolute) X Y Z D50 values and transforms them to new RGBC values, accounting for chromatic adaptation based on the reference white. It is important to note that, in contrast to previous models, we do not normalise the signal but keep its absolute scale; i.e., the white-adapted RGBC has the same scale [use Yw in Equation (5.2)] as the original X Y Z: 2 2 2 3 3 0 0 0 6 RC 7 6 RC 7 6 Yw /R w 6 6 6 7 7 6 G0 7 = M · 6 G 7 , M = 6 0 0 Yw /Gw D 6 D C 7 6 C 7 6 4 0 5 4 4 5 0 0 Yw /Bw BC BC

3 7 7 7. 7 5

(5.2)

In the original CIECAT02 transform, a parameter D is used to estimate the degree of chromatic adaptation by taking into account the level of luminance adaptation L a [see Equation (2.90) and (2.91)]. This parameter linearly interpolates the degree between 100% white adaptation and no adaptation, depending on the luminance adaptation level. As shown in Figure 5.2, the parameter D varies between 0.66–0.80 (F constant for dark surround — 0.8), and it starts to saturate from a luminance level of 310 cd/m2 . This means a higher luminance level than 310 cd/m2 will be adapted in the same was as lower levels. In our luminance-varying phases (group L), no distinguishing difference in the degree of chromatic adaptation was observed up to 16 860 cd/m2 in our experimental data. CVs of predicted

Degree of adaptation D

5.2. Forward Model

128

1.00 0.90 0.80 0.70 0.60 0.50 0.40 0.30 0.20 0.10 0.00 0

200

400

600

800

1000

Luminance adaptation level [cd/sqm]

Figure 5.2: The degree of chromatic adaptation parameter D in CIECAT02. The X axis shows the input luminance level, and the Y axis presents the interpolation parameter D [see Equation (2.90)]. In this experimental dark surround, the parameter D only varies between 0.66–0.80 and starts to saturate after 310 cd/m2 of luminance adaptation level.

lightness, colourfulness, and hue without D were 15.47, 31.96, and 16.98; CVs of predicted lightness, colourfulness, and hue with D were 15.62, 31.74, and 16.74. These two prediction results with/without the D parameter are almost identical to each other. Therefore, although we adopted the chromatic transform matrix MCAT02 from CIECAT02, we exclude the nonlinear interpolation with the degree of adaptation function D.

5.2.2

Cone Responses

Biological and physiological structures and mechanisms of the human eye are still obscured by a lack of knowledge. According to previous research [Müller, 1930; Vos and Walraven, 1971; Estévez, 1979; Hunt, 1995], we have a different population ratio of LMS cones [Vos and Walraven, 1971], which is related to a colour space [Estévez, 1979]. Most models have adopted a ratio based on a compromise of physiological evidence (LMS cone colour space) [Estévez, 1979] and psychophysical experiments resulting in a 40:20:1 ratio of LMS cones [Vos and Walraven, 1971]. Based on previous knowledge, the human eye is believed to exhibit a non-linear response on each cone channel. Following Stevens [1961], this is usually modelled as a power function (exponent: 1/2 [de Vries, 1943; Rose, 1948] or 1/3 [CIE, 1986]) derived from psychophysical experimental data. Older colour appearance models, such as CIELAB, RLAB, and LLAB modelled cone response within X Y Z space and assumed a simple power function as a response curve, which reflects early physiological assumptions [de Vries, 1943; Rose, 1948] (see Section 2.3.4 for more details of other models). Modern Hunt94-based models (Hunt94, CIECAM97s, FC, Fairchild, and CIECAM02) transform the chromatically adapted (and normalised) X Y Z tristimulus values into L M S cone space, commonly using the HPE transform [Estévez, 1979]. Note that RLAB uses the HPE transform only for chromatic adaptation. These CAMs modelled cone response with hyperbolic functions of the form shown in Equation (2.11). However, existing models (and in particular

5.2. Forward Model

129

CIECAM02), use a constant σ in Equation (2.11) (following Boynton and Whitten [1970]) which causes the hyperbolic function [see Equation (2.93)] to resemble a power function (see Figure 5.3), as mentioned by Kwak [2003]. Most applications of dynamic cone response functions take as input normalised cone signals and a fixed adaptation point. Models based on Hunt94 [Hunt, 1995] use the F L function, which takes the adaptation level L a as input, in order to translate the relative input colour information into a quasi-absolute scale. Our cone model is based on two insights. First, the Vm in the original equation [see Equation (2.11)] is not the reference white, but the maximum saturation point of cones. This means that the model works in terms of absolutes. Second, based on findings by Valeton and van Norren [1983], the σ should be decided by the absolute level of luminance adaptation. As mentioned by Hunt [1998] and Fairchild [2005], cones that contribute photopic vision are highly concentrated in the fovea (1.5–2◦ ) and more sparsely populated throughout the peripheral retina. There are no rods in the central fovea and there is a blind spot at a 12–15◦ angle from the fovea. As the luminance of the adapting field, generally background, has been assumed the level of luminance adaptation by most appearance models, e.g., CIECAM97s and CIECAM02, σ can be decided by measuring the actual luminance of viewing stimuli at a 10◦ angle; or, in an imaging application, measuring the averaged luminance value and using it as an input value. In our model, tristimulus values (from chromatic adaptation) are transformed into LM S cone space using the Hunt-Pointer-Estévez (HPE) transform [Estévez, 1979]: 2 3 0 6 RC 6 L 7 6 6 7 −1 6 0 6 M 7=M HPE · MCAT02 · 6 GC 6 7 4 0 4 5 BC S 2

3

2 7 6 0.38971 7 6 7, M 6 HPE = 6 −0.22981 7 5 4 0.00000

35

0.00000

(5.3)

100

Cone responses

Cone responses

1.18340

3 −0.07868 7 7 0.04641 7 7. 5 1.00000

120

30 y = 2.0787x0.3985 R² = 0.9999

25 20 15 10 5 0

80

y = 3.6146x0.3589 R² = 0.9988

60 40 20 0

0

(a)

0.68898

200

400

600

800

Luminance [cd/sqm] (LA=100)

1,000

1,200

0

(b)

2,000

4,000

6,000

8,000

10,000

12,000

Luminance [cd/sqm] (LA=1,000)

Figure 5.3: These plots show a cone response curve modelled by CIECAM02 up to (a) 1 000 cd/m2 and (b) 10 000 cd/m2 . Although it has the form of a hyperbolic function, the actual outputs resemble a power function that has an exponent between 1/2.51–1/2.79. The squared correlation coefficients (R2 ) between a power function and the CIECAM02 cone response function are (a) 0.9999 and (b) 0.9988.

5.2. Forward Model

130

We then model the cones’ absolute responses according to Equation (2.11): L0 = M0 =

L nc n

L nc + L a c M nc n

,

M nc + L a c S nc S0 = n n . S c + La c

,

(5.4)

We have only replaced the σ from the original equation (where it was given in troland units) with the absolute level of adaptation L a measured in cd/m2 (assuming that both units are related almost linearly for the working range of the adaptation level, e.g., 10td ⇡ 1 cd/m2 ). The adaptation level should ideally be the average luminance of the 10◦ viewing field (it serves as an input parameter

to our model). This adapting parameter of the level of luminance adaptation implicitly contains the level of background luminance. It allows our model to predict the simultaneous contrast effect with respect to lightness and colourfulness. Noting that the exponent parameter nc in the original Equation (2.11) is derived from primate cone responses (nc = 0.74 [Valeton and van Norren, 1983]), we have separately derived nc from our experimental data as nc = 0.57 by using an exhaustive search (the iterative numerical optimisation with a certain range of constrains on the entire likelihood data of lightness from the training data sets). See Figure 5.4 for an example of the predicted cone response by using our model.

5.2.3

Achromatic Attributes

Before the cone signals are transported to our visual cortex, it is believed that they are decomposed into two types of signals: achromatic and colour opponent signals by the ganglion cells, based on

80

80

Perceived lightness

100

Perceived lightness

100

60 40 20

40 20 0

0

(a)

60

0

20 40 60 80 100 CIELAB L* (a cube-root function)

(b)

0

20 40 60 80 100 Achro. sig. A (our hyperbolic func.)

Figure 5.4: These two plots compare achromatic signals (lightness) of a cube-root power function model (the same as in CIELAB) and our proposed hyperbolic function model [compared with perceived lightness in phase 19 (16 400 cd/m2 )]. A power-function-based model [plot(a)] forms a curve away from the diagonal for high luminances. The CV between perceptual lightness and L* values is 28.07%. In contrast, our model’s intermediate achromatic signals A/Aw (weighted summation of three cone responses) [plot(b)] are closer to the diagonal, which means our predictions of lightness are much closer to the actual perception. The CV between perceptual lightness and normalised achromatic signals is 10.33%.

5.2. Forward Model

131

zone theory [Müller, 1930]. The actual biological and physiological structures and mechanisms are still unclear due to a lack of evidence. It is believed that LMS cones have a roughly 40:20:1 proportion in the retina [Vos and Walraven, 1971]. The summation of the three cone signals is believed to produce an achromatic signal in retinal ganglion cells in modern colour appearance modelling. Our model takes the weighted summation as an achromatic signal. The relative ratio of the achromatic signal to the reference white produces lightness-to-be signal. The signal A is then defined as: A = (40L 0 + 20M 0 + S 0 )/61.

(5.5)

Lightness is defined as the ratio between the achromatic signal A and the achromatic signal of reference white Aw , since the observer was asked to relate the two. See Figure 5.4 for an example of the predicted achromatic signals. The accuracy of the achromatic signals is decided by that of the cone response functions. As shown in Chapter 4, lightness perception trends are more complicated than a simple power function. Power-function-based models (from CIELAB to CIECAM02) tend to form a curve off from the diagonal for high luminances, which shows the differences between the actual perception and the model’s prediction. Our intermediate achromatic signal (the summation of three cone responses) is closer to the actual perceived values. However, as is shown in Figure 5.4(b), the A/Aw in our model still shows an inverse sigmoidal shape. Hence, we assume that the visual cortex has an additional contrast enhancement process that resembles an inverse sigmoidal function. We solve the undetermined inverse hyperbolic function by using an iterative numerical optimisation on the likelihood data for lightness from the training data sets. g(x) derives the lightness J 0 from a given cone signal A related to Aw : J0 = g



A Aw

◆ ,

(5.6)

with 2

3 n j 1/n j −(x − β )σ j j 7 6 g(x) = 4 . 5 x −βj −↵j

(5.7)

The values of the parameters are derived from our experimental data, yielding ↵ j = 0.89,β j = 0.24,σ j = 0.65, and n j = 3.65. Note that J 0 may yield values below zero and above one hundred, in which case it should be clamped. This corresponds to the case where the observer cannot distinguish dark colours from even darker colours and bright colours from even brighter ones. Our lightness perception function allows us to predict the Stevens effect to a high accuracy, see Figure 5.5. As already mentioned in Chapter 4, the perceived lightness values vary with different media, even though the physical stimuli are otherwise identical. By testing our model with other media data from the LUTCHI data sets, we observed our model showing media dependency and no surround dependency, unlike other models, e.g., Hunt94, LLAB, and CIECAM97s (see Section 2.3.4). We have decided to incorporate these media differences explicitly in our model in order to improve lightness

5.2. Forward Model

132

80

80

80

60 40 20

Perceived lightness

100

Perceived lightness

100

Perceived lightness

100

60 40

40 20

20

0

60

0

0

0

20 40 60 80 100 Predicted lightness (CIELAB L*)

0

0

20 40 60 80 100 Predicted lightness (CIECAM02 J)

(a) CIELAB L*

20 40 60 80 100 Predicted lightness (our model)

(b) CIECAM02 J

(c) Our model J

Figure 5.5: These three plots compare the perceived lightness against the predicted lightness in phase 19 (16 400 cd/m2 ). (a) plots the predicted lightness perception by CIELAB (L*). (b) plots the prediction of lightness by CIECAM02 (J). The lightness predictions of the CIELAB and the CIECAM02 present similar trends (a curve off the diagonal). (c) shows the lightness prediction of our model (J). It is the results of Equation (5.6) and (5.7) on the achromatic signals [see Figure 5.4(b)]. The CVs between perceptions

100

100

80

80

80

60 40 20 0

Perceived lightness

100

Perceived lightness

Perceived lightness

and predictions are (a) 28.07%, (b) 21.17%, and (c) 8.03%.

60 40 20

20

40

60

80

100

0

Predicted lightness J' (transp.)

(a) Transparency (J 0 )

40 20 0

40

60

80

20

40

60

80

100

Predicted lightness J (transp.)

(d) Transparency (J)

20

40

60

80

100

Predicted lightness J' (paper)

(c) Paper (J 0 )

100

80 60 40 20

80 60 40 20 0

0 0

0

100

Perceived lightness

60

20

(b) CRT display (J 0 )

100

80

20

Predicted lightness J' (CRT)

Perceived lightness

Perceived lightness

100

40

0

0 0

60

0

20

40

60

80

Predicted lightness J (CRT)

(e) CRT display (J)

100

0

20

40

60

80

100

Predicted lightness J (paper)

(f) Paper (J)

Figure 5.6: By testing our initial lightness model with other media data from the LUTCHI data sets, our initial model presents media dependency in predicting lightness like other models. We therefore incorporate these lightness differences explicitly in our model in order to improve prediction. Plots (a), (b), and (c) represent the initial lightness prediction J 0 against transparency (LT phase in the LUTCHI data sets), CRT display (CRT phase), and paper (R-HL phase). Plots (d), (e), and (f) show the final lightness predictions J through modelling media dependency [CVs: (d) 8.66%, (e) 8.16% , and (f) 7.85%]

5.2. Forward Model

250

0.5

Brightness

150

log [(aver. Bri.)/(aver. Li.)]

843 cd/sqm 200 cd/sqm 62 cd/sqm 17 cd/sqm 6 cd/sqm 0.4 cd/sqm

200

133

100 50

0.4 0.3 0.2 0.1 -1.0 -0.1 0.0

0

(a) Figure 5.7:

0

20

40 60 Lightness

80

100

y = 0.1308x R² = 0.9609

1.0

2.0

3.0

-0.2 Log (Luminance[cd/sqm])

(b)

(a) plots perceived brightness against perceived lightness in R-VL group phases [Luo

et al., 1993a] in the LUTCHI data sets. The perceived brightness increases linearly according to the perceived lightness. The slope of the brightness is affected by the level of peak luminance. (b) shows the least-squares fitting of the relationship between brightness and lightness with respect to luminance. The logarithm of (brightness/lightness) increases in a slope of 0.1308 according to the logarithm of luminance (squared correlation coefficient R2 = 0.9608).

prediction, yielding a media-dependent lightness value: ⇥ � � ⇤ J = 100 E J 0 − 1 + 1 , where the parameter E is different for each medium.

(5.8)

A value of E = 1.0 corresponds to a

high-luminance LCD display, transparent advertising media yield E = 1.2175, CRT displays are E = 1.4572, and reflective paper is E = 1.7526. The lightness contrast J is optimised from our data; reflective media (R-HL), CRT, and transmittance (LT) phases are from the LUTCHI data sets. Figure 5.6 shows the differences between J 0 and J. Brightness was not measured in our experiments. We used R-VL phases [Luo et al., 1993a] from the LUTCHI data sets, which is the only data set to have a lightness and brightness comparison. These few phases with both lightness and brightness measurements indicate that these two properties have a linear relationship [see Figure 5.7(a)]. We found that luminance has a linear relationship to brightness/lightness in the log-log domain [see Figure 5.7(b)]. We therefore define brightness as: � �n Q = J Lw q .

(5.9)

The parameter is driven from experimental data and yields nq = 0.1308.

5.2.4

Chromatic Attributes

Retinal ganglion cells are believed to convert cone signals into colour opponent signals a and b, which are based on differences between the cone responses. We adopt previous psychophysical results on

1.2

80

1.0

70

Visual perception

Colourfulness/Chroma

5.2. Forward Model

0.8 y = 0.11x + 0.61 R² = 0.935

0.6 0.4 0.2

60 50 40 30

Perceived colourfulness Predicted chroma Predicted colourfulness

20

0.0

(a)

134

10

0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 Log (luminance[cd/sqm])

1

(b)

10 100 1,000 10,000 100,000 Peak luminance [cd/sqm]

Figure 5.8: (a) shows the least-squares fitting of the slope of (colorfulness/chroma) and its offset. The colourfulness increases with a slope of 0.11 with an offset of 0.61 according to the logarithm of luminance (R2 = 0.935). (b) plots perceived average colourfulness against peak luminance in luminancevarying phases (group L) in our data sets. The average predicted colourfulness (red line) matches the average perceived colourfulness (blue line) with a CV of 3.83%. The green line represents predicted chroma.

how the responses are combined together [Vos and Walraven, 1971; Hunt, 1991], yielding: Redness − Greenness

a=

Yellowness − Blueness

b=

1 � 11 1� 9

� 11L 0 − 12M 0 + S 0 ,

� L 0 + M 0 − 2S 0 .

(5.10) (5.11)

Chroma C is the colourfulness judged in proportion to the brightness of the reference white, i.e., it should be independent of luminance L w (like lightness). It is commonly based on the magnitude of a and b [CIE, 1986]: C = ↵k

Åp

a2 + b2

ã nk

.

(5.12)

Note that it is possible to optimise the parameters ↵k and nk after modelling colourfulness, for which we have actual perceptual data. We further know that colourfulness should increase with the luminance level (Hunt effect, see Chapter 4 for findings). Hence, we found the relationship between chroma (the magnitude of a and b) and colourfulness to be linear in the logarithm of the reference white luminance L w : M = C(↵m log10 L w + βm ).

(5.13)

From this we can derive parameters for colourfulness as well as chroma based on our data and the constraint that chroma does not change with luminance: ↵k = 456.5, nk = 0.62, ↵m = 0.11, and βm = 0.61. These parameters were numerically optimised on the likelihood data of colourfulness from training data sets. See Figure 5.8 and 5.9 for comparison. Saturation is independent of brightness and colourfulness. It is modelled by the square-root of

5.2. Forward Model

135

100

100

100

80 60 40 20 0

Perceived colourfulness

120

Perceived colourfulness

120

Perceived colourfulness

120

80 60 40 20

20 40 60 80 100 120 Predicted colourfulness (CIELAB C*)

60 40 20 0

0

0

80

0

0

20 40 60 80 100 120 Predict. colourfulness (CIECAM02 M)

(a) CIELAB C*

(b) CIECAM02 M

20 40 60 80 100 120 Predict. colourfulness (our model M)

(c) Our model M

Figure 5.9: These three plots compare the perceived colourfulness against the predicted colourfulness in phase 19 (16 400 cd/m2 ). (a) plots the predicted colourfulness perception by CIELAB (C*) with a CV of 31.23%. (b) plots the prediction of colourfulness by CIECAM02 (J) with a CV of 19.67%. (c) presents the colourfulness prediction of our model (M) with a CV of 14.15%. CIELAB C* shows a comparative variation in predicting colourfulness. CIECAM02 M presents better predictions than CIELAB C* (scaled by a colourfulness scalar 1.23). Our model’s colourfulness values are closer to the diagonal with smaller variation, which means our predictions are much closer to the actual perception.

brightness over colourfulness (defined by Hunt [1998]), following Moroney et al. [2002]: r s = 100

M Q

.

(5.14)

The hue angle is derived by converting colour opponent signals of a and b into polar coordinates: h=

180 ⇡

tan−1 (b/a).

(5.15)

This hue angle (0◦ –360◦ ) could be used directly as a prediction of perceived hue. However, the hue angle in psychophysical experiments is scaled from 0 to 400 (see Section 4.3.1 for more details on the hue estimation). Therefore, the computed hue angle is interpolated in the perceptually uniform scale to match the perceptual hue quadrature used in the experiments. The perceptual hue quadrature [H = huequad(h)] has been shown in [Hunt, 1991] to improve accuracy, which we adopt in our model as well: H = H1 + where e is e =

1 4

h

100(h− h1 )/e1 (h− h1 )/e1 + (h2 − h)/e2

,

(5.16)

⌘ i ⇣ ⇡ + 2 + 3.8 . e1 and h1 are the values of e and h, respectively, for the cos h 180

unique hues having the nearest lower value of h in Table 5.1; e2 and h2 are the values of e and h, respectively, for the unique hues having the nearest higher value of h in Table 5.1. H1 is 0, 100, 200, or 300 according to whether red, yellow, green or blue, respectively, is the hue having the nearest lower value of h. See Figure 5.10 for comparison.

5.3. Inverse Model

136

400

300

300

300

Perceived hue

200

Perceived hue

400

Perceived hue

400

200

100

100

100

200

0

0

0 0

100 200 300 Predicted hue (CIELAB h*)

400

0

(a) CIELAB h*

Figure 5.10:

100 200 300 Predicted hue (CIECAM02 H)

400

(b) CIECAM02 H

0

100 200 300 Predicted hue (our model H)

400

(c) Our model H

These three plots compare the perceived hue against the predicted hue in phase 19

(16 400 cd/m2 ). (a) plots the predicted hue perception by CIELAB (h*) with a CV of 23.11%. For only plotting purpose, h* is scaled to 400. (b) plots the prediction of hue by CIECAM02 (H) with a CV of 12.60%. (c) presents the hue prediction of our model (H) with a CV of 13.86%. CIELAB h* shows a comparative variation in predicting hues around green primaries. In contrast, CIECAM02 and our model’s hue values are closer to the diagonal, which means our predictions are closer to the actual perception. Both hue estimates are almost identical. Unique Hue

Red

Yellow

Green

Blue

Hue quadrature H

0

100

200

300

Hue angle h

20.14

90.00

164.25

237.53

Eccentricity e

0.7741

0.7227

0.9884

1.1976

Table 5.1: Hue eccentricity parameters for unique hues. Adapted from [Hunt, 1991]. Finally, the colour coordinates introduced above can form a three-dimensional colour space (lightness, chroma, and hue). The hue angle can be represented in Cartesian coordinates with respect to the three-dimensional colour space (comprising lightness J, chroma C, and hue h): Redness − Greenness Yellowness − Blueness

Å ⇡ ã aC = C cos h , 180 Å ⇡ ã . bC = C sin h 180

(5.17) (5.18)

The next section summarises our analytical inverse model of these forward calculations.

5.3

Inverse Model

The development of our colour appearance model is motivated by the complete colour reproduction pipeline (see Section 2.1 for more details). A forward device transform allows us to convert device-dependent signals to physically-meaningful device-independent coordinates. Forward appearance model transforms these physically-meaningful coordinates to perceptually-uniform appearance scales. These two stages yield the estimation of colour perception, but two inverse stages are required to complete colour communication for reproducing estimated colours on a different

5.3. Inverse Model

137

medium [CIE, 2004] (see Chapter 6 for more details on our colour reproduction pipeline). Therefore, analytical invertibility of the device characterisation (especially output devices) and the colour appearance model is essential for application of the appearance model. With colour reproduction as context, we developed our model while considering analytical invertibility. The proposed mathematical pipeline in Section 5.2 is analytically invertible, and does not require any iterative estimation (such as Newton’s method) to invert them. First, we model input parameters for this inverse model as follows: • Perceptual colour appearance values: J (lightness), M (colourfulness), and h (hue),

• Absolute tristimulus values of the reference white point (of a target media): X w Yw Zw , • Level of luminance adaptation (in viewing the target media): L a [unit: cd/m2 ] (luminance of viewing stimuli at about a 10-degree angle),

• A target medium type: E (e.g., paper, CRT, transparency, or high-luminance display). Our forward model takes physical input values of reference white, luminance adaptation level, and medium type; our inverse model takes perceptual input values of reference white, luminance adaptation level, and medium type (specifying the output medium viewing conditions) and outputs physical values. Our inverse model first computes achromatic white point Aw of the target device using Equations (5.4) and (5.5). Then, it computes brightness Q from lightness J [see Equation (5.9) for optimised parameters]: J = Q/(L w )nq .

(5.19)

Then, the lightness J is used to compute the achromatic signal A [see Equations (5.6), (5.7), and (5.8)]: J 0 = (J/100 − 1)/E + 1, ⌘ ⇣ ↵ J0nj j A = Aw J 0 n j +σ n j + β j . j

(5.20) (5.21)

For inverting colourfulness, Chroma C is first calculated from colourfulness M [see Equation (5.13)]: C = M /(↵m log10 L w + βm ).

(5.22)

The chroma value C is then used for deriving colour opponent signals a and b from chroma C and hue angle h [see Equations (5.12) and (5.15)]: �1/nk � a = cos(⇡h/180) C/↵k , � �1/nk b = sin(⇡h/180) C/↵k .

(5.23) (5.24)

Once we have the achromatic signal A and opponents a and b, this allows us to compute non-linear

5.4. Results cone signals L 0 M 0 S 0 :

2

0 6 L 6 6 M0 6 4 0 S

3

2 7 6 1.0000 7 6 7 = 6 1.0000 7 6 5 4 1.0000

0.3215 −0.6351

−0.1568

138

32 3 0.2053 7 6 A 7 76 7 6 7 −0.1860 7 76 a 7. 54 5 −4.4904 b

(5.25)

The non-linear cone signals L 0 M 0 S 0 are then converted to linear cone signals LM S [see Equation (5.4)]: L= M= S=

⇣ −L nc L 0 ⌘1/nc a

, L0 − 1 n ⇣ −L c M 0 ⌘1/nc a

M0 −1 ⇣ −L nc S 0 ⌘1/nc a S0 − 1

.

(5.26) ,

(5.27) (5.28)

After that, our model computes tristimulus X Y Z D50 from cone signals LM S using the HPE transform [see Equation (5.3)]. Finally, our model applies an inverse chromatic adaptation transform to the white point of the target medium [see Equation (5.1) and (5.2)]. The next section presents the performance of our model in predicting human colour perception when compared with other colour appearance models (CIELAB, RLAB, and CIECAM02).

5.4

Results

The following sections provide qualitative and quantitative analysis of our model. We have applied our model, as well as CIELAB, RLAB, and CIECAM02, to our perceptual data sets (for highluminance conditions) and the LUTCHI data sets (for low-luminance conditions). However, using our data set with the LUTCHI data set has a drawback. The colourfulness data sets are not directly compatible without first applying a calibrating scalar, i.e., a colourfulness scalar should be derived before applying the data set to a model.

5.4.1

Estimations under High Luminances

Modelling accuracy results under luminance-varying phases can be found in Figure 5.11. The group L phases (1, 2, 4, 7, 10, 17, and 19) are used. The luminance levels vary from 44 to 16 400 cd/m2 with a fix background ratio (23%). Our prediction in terms of lightness is statistically significantly (one-side t-test with alpha = 0.05) better than the predictions of the other models and also very consistent up to high luminances. The average CV [see Equation (2.12)] value (11.51%) is as large as the long-term repeatability CV value (11.83%) for the averaged human observer (see Table 4.2). This means that our model performance is as accurate as the variation of the experimental data. Other models achieve a less accurate prediction and, importantly, their prediction quality fluctuates considerably between phases. Colourfulness is also predicted significantly better with our model than with the other models. Our colourfulness prediction is very consistent. The average CV value (17.15%) is similar to the CV value between short-term repeated runs of the same experiment (17.23%). In particular, RLAB performs significantly worse than other models in predicting colourfulness, and the prediction quality in CIELAB fluctuates more than other models. Hue is predicted

5.4. Results

139

similarly among CIELAB, CIECAM02, and our model (average CV: 14.74%), and the hue prediction of CIECAM02 is better than others [see Figure 5.13 (a), (c), and (e) for average comparison]. This result indicates that our CAM models the Stevens and Hunt effects (observed in our experimental data) to a high accuracy. Figure 5.12 shows modelling accuracy results against different background ratios (group B). Phases from 8 to 12 are used. The background ratio varies from 0 (black) to 95% (white). Our prediction in terms of lightness is significantly better than the others. The average CV value (12.26%) is roughly as large as the long-term repeatability CV value (11.83%) for the average human observer. Other models achieve a less accurate prediction. CIECAM02 and our model predictions are getting better against a darker background, but CIELAB and RLAB performance are getting better against a brighter background. Colourfulness is also predicted significantly better than the other models and is very consistent. The average CV value (15.86%) is lower than the CV value of short-term repeatability (17.23%). In particular, the performances of CIECAM02 and CIELAB fluctuate between different backgrounds. Hue prediction is very similar to the other models (average CV: 14.38%) except RLAB. This result shows that our CAM models the simultaneous contrast effect to a high accuracy in terms of lightness and colourfulness as observed in our experimental data. See Figure 5.13 (b), (d), and (f) for average comparison. Chromatic adaptation results can be found in Figure 5.14. Group T (phases 7, 13, and 14) of our data sets is used. As before, our prediction of lightness is significantly better than the other models and is very consistent. The average CV value is 12.26% (as large as the long-term repeatability). The colourfulness prediction of our model is also better in all cases (average CV: 18.77%). Hue prediction is very similar to the other models. Our chromatic adaptation transform is adapted from CIECAM02, but the performance of our model (average CV: 16.34) in three different colour temperatures is better than CIECAM02 (average CV: 17.21) because of the different modelling structure and optimisation of the model. Our model can predict inconsistent chromatic adaptation to a high accuracy [see Figure 5.16 (a), (c), and (e) for an average comparison]. Our model does not include a surround parameter, but the surround effect is implicitly modelled by using the level of luminance adaptation (which implicitly contains a surround measurement). Surround effect results can be found in Figure 5.15. Group S (phases 10 and 15) compares two different surround levels: dark and average (20% of the peak luminance). The lightness prediction of our model is statistically significantly better than other models and very consistent. The average CV value is 13.98%. The lightness prediction of CIECAM02 with an average surround is comparatively worse than with a dark surround. The colourfulness prediction of our model is also better with both surrounds (average CV: 17.34). Hue prediction is very similar to the other models as before (average CV: 14.87). Hue estimation with an average surround increases from 12.30% to 17.45% due to the difference of colour temperatures of the light source (main colour stimuli) and the surround light sources. This indicates that our model can predict the surround effect well. See Figure 5.16(b), (d), and (f) for a comparison of their averages.

5.4. Results

35

CVs of lightness

30 25

140

CIELAB RLAB CIECAM02 Our model

20 15 10 5 0 44

123

521

1,051

2,196

8,458

16,400

8,458

16,400

8,458

16,400

Peak luminance [cd/sqm]

(a) 50

CVs of colourfulness

45 40 35 30 25 20 15 10 5 0

CIELAB RLAB CIECAM02 Our model 44

123

521

1,051

2,196

Peak luminance [cd/sqm]

(b) 30

CIELAB RLAB CIECAM02 Our model

CVs of hue

25 20 15 10 5 0 44

(c)

123

521

1,051

2,196

Peak luminance [cd/sqm]

Figure 5.11: Results of estimations in luminance-varying phases group L (44–16 400 cd/m2 ) with a fix background ratio (23%). We compare a few phases (1, 2, 4, 7, 10, 17, and 19) of our experiment in terms of lightness, colourfulness, and hue prediction error (CV) with CIELAB, RLAB, and CIECAM02. Our model performs statistically significantly better than the other models in terms of lightness and colourfulness, which means our model can predict the Stevens effect to a high accuracy. Colourfulness prediction is also better in all cases, which means our model can predict the Hunt effect to a high accuracy. Hue prediction is very similar to the other models even though CIECAM02 is better especially under low luminances.

5.4. Results

141

35

CVs of lightness

30 25 20 15 10

CIELAB RLAB CIECAM02 Our model

5 0 0

12

23

55

95

Background ratio [%]

(a) 50

CVs of colourfulness

45 40 35 30 25 20 15

CIELAB RLAB CIECAM02 Our model

10 5 0 0

12

23

55

95

Background ratio [%]

(b) 25

CVs of hue

20 15 10

CIELAB RLAB

5

CIECAM02 Our model

0 0

(c)

12

23

55

95

Background ratio [%]

Figure 5.12: Results of estimations in background-varying phases group B (0-95%) under a luminance of 2 241 cd/m2 . We compare these phases (8–12) of our experiment in terms of lightness, colourfulness, and hue prediction error. Our model performs statistically significantly better than the other models in terms of lightness and colourfulness. This means that our model can predict the simultaneous contrast effect to a high accuracy. Hue prediction is very similar to the other models except RLAB.

142

40

40

35

35

30 25

23.86

23.34

22.42

20 15

11.51

10 5

Aver. CVs of lightness

Aver. CVs of lightness

5.4. Results

0

30 25

RLAB

CIECAM02

15 5 CIELAB

30.88 21.70

RLAB

CIECAM02

RLAB

CIECAM02

Our model

(b) Lightness (group B)

17.15

Aver. CVs of colourfulness

Aver. CVs of colourfulness

12.46

10

Our model

40.59

CIELAB

50 45 40 35 30 25 20 15 10 5 0

Our model

38.62 30.16

26.20 15.86

CIELAB

(c) Colourfulness (group L)

RLAB

CIECAM02

Our model

(d) Colourfulness (group B)

30

30

25 18.83 15.98

15

12.93

14.74

10 5 0

Aver. CVs of hue

Aver. CVs of hue

25.52

20

(a) Lightness (group L)

20

27.25

0 CIELAB

50 45 40 35 30 25 20 15 10 5 0

27.33

25 19.12

20 15

14.95

14.70

14.38

10 5 0

CIELAB

RLAB

CIECAM02

(e) Hue (group L)

Our model

CIELAB

RLAB

CIECAM02

Our model

(f) Hue (group B)

Figure 5.13: These plots compare the average CV errors in estimating colour appearance in terms of lightness, colourfulness, and hue with luminance-varying phases (group L) and background-varying phases (group B). Our model performs significantly better than others in predicting lightness and colourfulness in both groups. Hue prediction is almost identical to CIECAM02 and CIELAB.

5.4. Results

143

35

CVs of lightness

30 25 20 15 10

CIELAB RLAB CIECAM02 Our model

5 0 1803K

6391K

7941K

Correlated colour temperature of light source

(a) 60

CVs of colourfulness

50 40 30 20 10 0

CIELAB RLAB CIECAM02 Our model 1803K

6391K

7941K

Correlated colour temperature of light source

(b) 25

CVs of hue

20 15 10

CIELAB RLAB

5

CIECAM02 Our model

0 1803K

(c)

6391K

7941K

Correlated colour temperature of light source

Figure 5.14: Results of estimations in colour-temperature-varying phases (group T) under a luminance of 1 233 cd/m2 . We compare the colour temperature-varying phases (7, 13, and 14) of our experiment in terms of lightness, colourfulness, and hue prediction error (CV) with CIELAB, RLAB, and CIECAM02. Our model performs significantly better than the other models in terms of lightness. Colourfulness prediction of our model is also better in all cases. Hue prediction is very similar to the other models.

5.4. Results

144

40

CVs of lightness

35 30 25 20 15 CIELAB RLAB CIECAM02 Our model

10 5 0 Dark

Average Surround

(a) 45

CVs of colourfulness

40 35 30 25 20 15 10 5 0

CIELAB RLAB CIECAM02 Our model Dark

Average Surround

(b) 25

CVs of hue

20 15 10

CIELAB RLAB

5

CIECAM02 Our model

0 Dark

(c) Figure 5.15:

Average Surround

Results of estimations in surround-varying phases (group S) under a luminance of

2

2 201 cd/m . We compare the surround-varying phases (10 and 15) of our experiment in terms of lightness, colourfulness, and hue prediction error with CIELAB, RLAB, and CIECAM02. Our model performs significantly better than the other models in predicting lightness. Colourfulness prediction is also better in all cases. Hue prediction is very similar to the other models, except RLAB.

5.4. Results

35 30

40 28.47 27.89

26.41

25 20 15

12.38

10 5

Aver. CVs of lightness

Aver. CVs of lightness

40

145

0

35 30 25 20

RLAB

CIECAM02

10 5

Our model

CIELAB

24.23 18.77

CIECAM02

Aver. CVs of colourfulness

Aver. CVs of colourfulness

28.64

RLAB

RLAB

CIECAM02

Our model

(b) Lightness (group S)

43.01

CIELAB

50 45 40 35 30 25 20 15 10 5 0

Our model

38.25 26.69 20.58

CIELAB

(c) Colourfulness (group T)

RLAB

CIECAM02

17.34

Our model

(d) Colourfulness (group S) 30

25

20.04 17.21

16.58

15

16.34

10 5

Aver. CVs of hue

30

Aver. CVs of hue

13.98

15

(a) Lightness (group T)

20

33.03 28.79

0 CIELAB

50 45 40 35 30 25 20 15 10 5 0

28.62

25 20 15

20.31 15.90

15.84

14.87

10 5 0

0 CIELAB

RLAB

CIECAM02

(e) Hue (group T)

Our model

CIELAB

RLAB

CIECAM02

Our model

(f) Hue (group S)

Figure 5.16: These plots compare the average CV errors in estimating colour appearance in terms of lightness, colourfulness, and hue with colour-temperature-varying phases (group T) and surroundvarying phases (group S). Our model performs significantly better than others in predicting lightness. Colourfulness prediction is also better in all cases. Hue prediction is almost identical to CIECAM02 and CIELAB.

5.4. Results

146

40

CIELAB RLAB CIECAM02 Our model

CVs of lightness

35 30 25 20 15 10 5 0 3

5

6 Phases

50 45 40 35 30 25 20 15 10 5 0

16

18

35 CIELAB RLAB CIECAM02 Our model

30

CVs of hue

CVs of colourfulness

(a)

CIELAB RLAB CIECAM02 Our model

3

25 20 15 10 5 0

5

(b)

6 Phases

16

3

18

5

(c)

6

16

18

Phases

Figure 5.17: Results of estimations in a validation set (phases 3, 5, 6, 16, and 18). We compare the group V phases of our experiment in terms of lightness, colourfulness, and hue prediction errors (CV) with CIELAB, RLAB, and CIECAM02. Our model performs significantly better than the other models in terms of lightness even on these independent test phases. Colourfulness prediction is also better in all cases. Hue prediction is very similar to the other models, except RLAB. 40

Aver. CVs of lightness

35 26.82

30

25.80

25

20.56

20 15

10.15

10 5 0 CIELAB

RLAB

CIECAM02

Our model

50 45 40 35 30 25 20 15 10 5 0

30

45.04 36.46 30.11 18.86

25 20

18.76 15.89

15

12.80

14.16

10 5 0

CIELAB

RLAB

CIECAM02

(b) Colourfulness (group V)

Figure 5.18:

Aver. CVs of hue

Aver. CVs of colourfulness

(a) Lightness (group V)

Our model

CIELAB

RLAB

CIECAM02

Our model

(c) Hue (group V)

These three plots compare the average CV errors in estimating colour appearance in

terms of lightness, colourfulness, and hue with a validation set (phases 3, 5, 6, 16, and 18). Our model performs significantly better than others in predicting lightness and colourfulness. Hue prediction is similar to CIECAM02 and CIELAB.

5.4. Results

147

The previous data sets are included in the maximum likelihood optimisation to derive our colour appearance model. We have independent data sets (phases 3, 5, 6, 16, and 18 — group V), which are used as test phases for cross-validation of our model. These data sets have a variation of different peak luminances and different backgrounds. Therefore, these results are a good indicator of predictive performance under high luminance levels. As shown in Figure 5.17, our model prediction in terms of lightness is statistically significantly better than in other models and is also very consistent. The average CV value (10.15%) is as large as the CV values in our training data (11.83% — group L). This proves that our model is free from over-fitting issues. Other models achieve a less accurate prediction, and the performance of the CIELAB and RLAB models fluctuates between phases. Colourfulness prediction of our model is also significantly better than the others and is very consistent (average CV: 18.86 — similar to the training group L: 17.15%). Hue is predicted by our model similarly to CIELAB and CIECAM02. The average CV is 14.16%, similar to the CV of 14.74% in our training data set (group L). In other words, our model predicts lightness and colourfulness consistently to a higher accuracy than other models. See Figure 5.18 for a comparison of average CV. Figure 5.19 summarises the main result of all phases (including the training and test data sets). Our prediction in terms of lightness is significantly better than the other models and is very consistent. The CV value is approximately as large as the repeatability CV value for a human observer, which indicates that our model’s performance is as accurate as the variation of the experimental data. Other models achieve a less accurate prediction and, importantly, their prediction quality fluctuates considerably between phases. Colourfulness is also predicted very consistently by our model and is generally much better than the other models. As before, the CV value is similar to the CV value between two repeated runs of the same experiment. This again indicates that our model’s colourfulness prediction performance is as accurate as the variation of the experimental data. The other models’ performance varies significantly, not only between models, but also between phases. Hue is predicted very similarly between all models, where even the simple CIELAB model performs well. See Appendix A.6 for the entire results.

5.4.2

Estimations on Different Media

We further investigate how our model predicts the data from the LUTCHI data set. This allows us to test our model’s performance on different media like paper, transparency, or CRT, and it validates our model’s performance by using a third test set as a cross-validation. We use a number of phases from three different groups (R-HL, CRT, and LT) in the LUTCHI data sets as these are samples of photopic vision in the LUTCHI data set. Figure 5.20 quantitatively compares the predictions of CIECAM02 and our model on different media against perceived colour appearance. (a), (b), and (c) show the detail lightness, colourfulness, and hue from R-HL phase 2; (d), (e), and (f) show the detail from CRT phase 1; (g), (h), and (i) present the detail of LT phase 1. Our lightness, colourfulness, and hue predictions are very much along the diagonal, indicating that our model covers the dominant perceptual phenomena.

5.4. Results

148

40

CIELAB RLAB CIECAM02 Our Model

CVs of lightness

35 30 25 20 15 10 5 0 1

2

3

4

5

6

7

8

9

10

11

12

13

14

15

16

17

18

19

Phases 50

CVs of colourfulness

45 40 35 30 25 20 15 CIELAB RLAB CIECAM02 Our Model

10 5 0 1

2

3

4

5

6

7

8

9

10

11

12

13

14

15

16

17

18

19

Phases 35 CIELAB RLAB CIECAM02 Our Model

CVs of hue

30 25 20 15 10 5 0 1

2

3

4

5

6

7

8

9

10

11

12

13

14

15

16

17

18

19

Phases

Figure 5.19: We compare all 19 phases of our experiment (including the training and test data sets) in terms of lightness, colourfulness, and hue prediction error (CV) with CIELAB, RLAB, and CIECAM02. Our model performs consistently better than the other models in terms of lightness. Colourfulness prediction is better in almost all cases. Hue prediction is very similar to the other models, even though CIECAM02 is minimally better at lower luminances.

5.4. Results

120

80 60 40 20

CIECAM02 Our model 0

20

40

60

80

80 60 40 CIECAM02

20

40 CIECAM02 Our model 20

40

60

80

100

40

Our model

CIECAM02 Our model 60

80

(g) LT phase 1 (lightness)

20

40

60

80

CIECAM02 Our model

200

100

0

Perceived hue CIECAM02

20

40

60

80

100 120

200

100

CIECAM02 Our model 0

Predicted colourfulness

(h) LT phase 1 (colourfulness)

400

300

0 0

300

Trans.

Our model

0 100

200

(f) CRT phase 1 (hue)

100

20

100

Predicted hue

400

40

400

0

Trans.

60

300

CRT

300

100 120

80

200

(c) R-HL phase 2 (hue)

CIECAM02

20

0

Perceived colourfulness

40

40

100

Predicted hue

400

60

120

Predicted lightness

0

(e) CRT phase 1 (colourfulness)

60

20

100 120

Predicted colourfulness

80

0

80

80

100

Trans.

0

60

0

(d) CRT phase 1 (lightness)

20

40

CRT

Predicted lightness

100

20

Perceived hue

60

Perceived colourfulness

Perceived lightness

120

80

0

CIECAM02 Our model

(b) R-HL phase 2 (colourfulness)

CRT

0

100

Predicted colourfulness

(a) R-HL phase 2 (lightness)

20

200

0 0

100

300

Our model

Predicted lightness

100

Paper

100

0

0

Perceived lightness

400

Paper Perceived hue

Paper Perceived colourfulness

Perceived lightness

100

149

100

200

300

400

Predicted hue

(i) LT phase 1 (hue)

Figure 5.20: Quantitative comparison of the prediction of colour appearance on different media against perceived colour appearance (from LUTCHI data sets). R-HL phase 2 has a background ratio of 6.2% under a luminance of 252 cd/m2 . CRT phase 1 has a background ratio of 20% under a luminance of 44 cd/m2 . LT phase 1 has a background ratio of 16% under a luminance of 2 259 cd/m2 . (The colourfulness scalar of our data was 0.65 against the LUTCHI LT data set.) It can be seen that our model achieves very good lightness, colourfulness, and hue prediction. CIECAM02 is not able to predict lightness and hue on transparency, and colourfulness on paper and CRT media. In particular, the hue measurements on paper and CRT media in the LUTCHI data sets present comparable offsets with certain colours. As CIECAM02 and our model show similar patterns of offset, we suspect the offsets are measurement errors of the hue appearance in the original LUTCHI data sets.

5.4. Results

150

However, CIECAM02 incorrectly estimates lightness [see Figure 5.20(g)], yielding values that form a curve off the diagonal. This indicates that CIECAM02 underestimates lightness perception under high luminances. Colourfulness and hue predictions of CIECAM02 also show mismatches to the actual perception [see Figure 5.20(b), (e), and (i)]. These effects can be noticed in other phases as well: the predicted appearance forms a curve instead of a diagonal line as would be expected. Figure 5.21 summarises the results of the LUTCHI data sets. We ran all four models (CIELAB, RLAB, CIECAM02, and our model) on a number of phases from the data sets [transparency, reflective media (paper), and CRT]. The average CV error of lightness is 10.84%, similar to the average CV of 11.41% for our entire data set. The average CV error of hue is 14.59%, which is almost identical to the error of 15.14% in our data. The average CV error of colourfulness (20.25%) is slightly more than the error of 17.76% in our data. In summary, our model outperforms the other colour appearance models in terms of lightness, colourfulness, and hue, even though the LUTCHI data set was not the main basis for the derivation of our model.

CRT

50

Paper Transparent

40 30 20 10

45

Aver. CVs of colourfulness

Aver. CVs of lightness

60

CRT Paper Transparent

40 35 30 25 20 15 10 5 0

0 CIELAB

RLAB

CIECAM02

CIELAB

Our model

(a) Lightness

Aver. CVs of hue

35 30

CIECAM02

Our model

(b) Colourfulness 45

CRT Paper Transparent

Aver. CVs of estimation

40

RLAB

25 20 15 10 5

Lightness Colourfulness Hue

40 35 30 25 20 15 10 5 0

0 CIELAB

RLAB

(c) Hue

CIECAM02

Our model

CIELAB

RLAB

CIECAM02

Our model

(d) Overall

Figure 5.21: This figure quantitatively compares the average CV error (and standard deviation) of estimated lightness, colourfulness, and hue when applied to several phases of the LUTCHI data set. In particular, we use the LT phases (transparency), R-HL phases (reflective media), and CRT phases. Our model achieves the best overall prediction. Further, the variation in error is rather small for our lightness and colourfulness prediction, indicating that our model performs consistently.

5.5. Discussion

5.5

151

Discussion

In the development of our colour appearance model, we have chosen to fit most constants in our model, instead of relying on previous results. We have considered high-dynamic-range colour reproduction, i.e., the invertibility of our model, as well as tried to avoid over-fitting during the optimisation. Although we developed our appearance model with inspiration from zone theory [Müller, 1930], we tried to avoid using physiological constants which were derived from primate measurements, for instance, parameter n = 0.74 in Equation (2.11). This primate-driven parameter has been adopted in previous CAMs. We found that 0.57 fits better to our experimental data. Hence, we believe that the human visual system may have different responsivity from that of the primate. It is worth noting that colour appearance models are only computational models of colour appearance and as such do not try to describe how human vision actually works. As shown in Chapter 4, the response of the human visual system presents complicated nonlinear characteristics for a given physical stimuli. Modelling these non-linear characteristics with a few sets of equations is a challenging task. For example, the simplest approach might be to use a polynomial function. The function could be easily fitted through linear regression to a high accuracy for the given data set. However, polynomial equations could be over-fitted to the given training data and are not invertible when of a higher order than the second order. Therefore, particularly for modelling lightness, we use hyperbolic functions. This enables us to model lightness to a significantly higher accuracy than other models while keeping analytical invertibility. However, these types of equations cannot be solved by linear solving. Therefore, we conducted an exhaustive search to find the maximum likelihood for a given training data set. We validated our model through cross-validation with independent data sets and third test sets (see Figure 5.17 and 5.21). However, this numerical optimisation is still open to development; our freely available experimental data [Kim et al., 2009] may provide further opportunities. Our psychophysical experiments and colour appearance model focused on high-luminance photopic vision rather than dim (mesopic) or dark (scotopic) vision because our research was motivated by the advent of high-dynamic-range imaging, which deals with higher levels of luminance. For instance, our colour appearance model does not model the rods’ contribution under dark luminance conditions. If the peak luminance level is under ⇠10 cd/m2 the performance of our model may

decrease insofar as the rods and the cones have different sensitivities to the luminance. For mesopic vision (phase 1, under 43 cd/m2 luminance), our model still outperforms other models (average CV: 11.15% in predicting lightness), see Figure 5.21. Our model does not take a separate background parameter. Our model is only driven by the adaptation luminance level and the peak luminance level. In contrast, the CIECAM02 model uses the luminance adaptation level and the background luminance level respectively. We share the same

definition of the level of luminance adaptation [Moroney et al., 2002], which means the amount of luminance in an approximately 10-degree viewing angle. However, we found that the measurement of the level of luminance adaptation implicitly contains the background luminance level (as the

5.6. Summary

152

background is a main part of the adapting field). This means that the separation between the luminance adaptation level and the background luminance level is a redundant parameterisation. Therefore for applications of a colour appearance model with respect to colour reproduction, the decision to use a background luminance level is questionable [Fairchild, 2005]. Hence, we chose an approach that derives our model without an explicit background luminance level parameter. Our model also does not take a separate surround parameter as in our experiment its influence was not significant. Even though our model does not have an explicit parameter for surround, its effect could be taken into account by changing the adaptation level accordingly. In our experiments, we were able to build only a limited range of surround (average level — 20% of the peak luminance) because our main colour stimuli is already very bright. We were not able to create a high-luminance viewing surround because of the limitations with light sources that is large and bright enough to cover the room. As a result, our experiment did not fully investigate how the surround influences perception at high luminances, but the measured influence on the perceived attributes was minimal, as was also observed in [Breneman, 1977].

5.6

Summary

We have presented a new colour appearance model that has been designed from the ground up to work for an extended luminance range. As no colour perception data was available for high luminance ranges, we have first conducted a large psychophysical magnitude experiment to fill this gap. Based on our data, as well as previous data, we have developed a model that predicts lightness, colourfulness and hue to a high accuracy for different luminance ranges, levels of adaptation, and media. In contrast to other CAMs, our method works with absolute luminance scales, which we believe is an important difference and key to achieving good results. The next chapter demonstrates an application of our colour appearance model to complete a high-fidelity colour reproduction pipeline for high-dynamic-range imaging.

153

Chapter 6

Colour Reproduction in High-Dynamic-Range Imaging The previous chapter describes a novel colour appearance model (CAM) which is derived from our experimental data sets of perceptual attributes measured under high levels of luminance. This computational model of human colour vision allows us to convert physically-meaningful high-dynamicrange (HDR) radiance values (obtained from HDR characterisation) to perceptually-uniform colour appearance attributes. These forward calculations yield perceptual coordinates for a given physical stimulus. The perceptual coordinates are reproducible on a new output medium such that the colour appearance model is analytically invertible, i.e., perceptual lightness, colourfulness, and hue values can be mathematically inverted into physical quantities (e.g., CIEXYZ) with a new set of target viewing parameters as input. These physical coordinates of an output device are then converted to device signals through an inverse device characterisation model. This chapter introduces a colour reproduction pipeline to achieve high-fidelity reproduction of real-world radiance values on any output medium and then evaluates the perceived similarity of the reproductions to the real scenes through a series of psychophysical experiments.

6.1

Image Reproduction

This section introduces an image reproduction pipeline for reproducing high-dynamic-range scenes on an output display device. The proposed pipeline achieves a high level of fidelity in the reproductions, as shown by psychophysical evaluations. The imaging characterisation method, introduced in Chapter 3, is used to capture high-dynamic-range scenes. The appearance model, described in Chapter 5, is used to complete the visual communication at each stage. This section proposes an HDR imaging system by combining these previously described elements.

6.1.1

Reproduction Pipeline

Suppose we are taking an HDR image of a real-world landscape with an HDR camera system. As presented in Chapter 3, our characterisation method enables us to convert such an HDR RGB image into a physically-meaningful CIEXYZ radiance map (on an absolute scale). Our colour appearance model for high-luminance levels (covering the dynamic range of the human visual system, see Chapter 5)

6.1. Image Reproduction

154

then allows us to convert the physically-meaningful coordinates into perceptually-uniform coordinates of colour appearance, e.g., lightness, chroma, and hue (see Figure 6.1). This completes the forward communication of HDR colour information from the real world to human perception. On the other hand, suppose we already have a reproduction of a real-world landscape, say a digital photograph of the landscape on an sRGB display. Insofar as we have a characterisation model of the display, we can convert RGB signals of the image into actual physical radiance values in CIEXYZ. Once we have the physical coordinates of the displayed image, we can convert these values to perceptual appearance attributes by using our colour appearance model. This enables us to predict the perception of the photograph under a given viewing environment. At this point, we have two sets of perceptual coordinates: the perception of the real-world landscape and that of the reproduction of the landscape. The closer the reproduction perceptual coordinates are to those of the real world, the more faithful the duplication with respect to visual perception. High-fidelity colour reproduction of the real world is achievable in this approach. The perception of colour reproduction is a metameric sensation, i.e., the relationship can be represented as a many-to-one function with viewing environment parameters. Imagine that two sets of perceived colours are identical. This means that two different observations on different media under

Parameters (original viewing conditions) Device Characterisation

Real world

HDRRGB input

XYZW

XYZD50

radiance

La

CAM

Reproduction

sRGB output

XYZD50

radiance

XYZW 

CAM

La

Gamut mapping

JMh

perception



Colour appearance model

Media

Highfidelity

JMh

perception

Media

Parameters (reproduction viewing conditions)

Forward Inverse

Figure 6.1: High-fidelity colour reproduction pipeline for HDR imaging. In observing the real world, an HDR camera system captures real-world radiance as input. The HDR characterisation model converts the captured HDR image into a physically-meaningful radiance map. A forward colour appearance model then converts physical radiances to perceptual coordinates, e.g., lightness, colourfulness, and hue (J M h). Imagine that we observe a reproduction of the real world. A forward output device characterisation model converts device signals to physical radiance values. The forward colour appearance model with output viewing conditions converts physical radiances to the perceptual coordinates (J M h0 ) of the observation of the reproduction. If J M h0 matched J M h, we would believe that the reproduction appears faithfully identical to the real world. Aiming for high fidelity, we directly map J M h to J M h0 . Ensuring that our forward colour appearance and characterisation models are analytically invertible, we apply these inverse models to J M h0 we finally achieve high-fidelity colour reproduction on an output medium.

6.1. Image Reproduction

155

two different viewing conditions yield identical perceptions, i.e., the reproduction of the real world appears the same as the original real world. We have introduced an analytically invertible forward mathematical transform to convert physical quantities to perceptual quantities. Therefore, perceptual coordinates are transformable to physical coordinates by using the inverse colour appearance model. The parameters of the inverse model are set to specify the viewing environment conditions of the target observation. The converted physical coordinates are reproducible on an output device by using an inverse device characterisation (from the physical coordinates to the device signals). This allows us to reproduce metameric colour reproduction with newly given target environmental conditions (see Figure 6.1). For instance, colour reproduction of HDR images is achieved by first taking an absolute HDR radiance map (containing physically meaningful C I EX Y Z values) and applying our CAM, which yields perceptual attributes, e.g., lightness, colourfulness, and hue (J M h). These attributes are then converted to absolute C I EX Y Z for a specific target display and target viewing condition by applying the inverse CAM. Finally, C I EX Y Z coordinates are transformed into device-dependent coordinates (e.g., sRGB) for display.

6.1.2

Colour Connection Space

As mentioned earlier, the perceptual coordinates of input/output medium are connected in a perceptual colour space to complete colour reproduction. If we use absolute perceptual coordinates for colours, we can reproduce such absolute perceptual quantities (e.g, brightness and colourfulness) on output media. If we use relative perceptual coordinates, we will reproduce only relative colour coordinates (e.g., lightness and chroma) on output media. We call these Cartesian 3D coordinate systems colour connection spaces. As our colour appearance model provides both relative and absolute coordinates of perceptual colour attributes, we have four different options for a colour connection space to complete colour communication: (1) brightness, colourfulness, and hue (QM h), (2) brightness, chroma, and hue (QCh), (3) lightness, colourfulness, and hue (J Ch), and (4) lightness, chroma, and hue (J Ch). For a colour connection space, when relative colours are used, the entire colour information in an image is normalised by the specifications of a target medium. For instance, lightness and chroma are relative brightness and colourfulness normalised by reference white on a target medium (see Section 2.3.1 for definitions), and accordingly output brightness and colourfulness depend on the reference white of the target medium. This means that once we use relative colour coordinates, we could never achieve absolutely identical reproduction of the source input on the output as long as we have a white point and colour gamut of the output medium that is different from the real world. On the other hand, when absolute colours are used, theoretically all colour information is kept in this colour-connecting stage. However, if the maximum brightness level or colour gamut of the target medium is lower or smaller than the original, the original colour information could be saturated by the specifications of the output medium. Thus, if the output medium has a higher maximum brightness and a wider colour gamut than the source medium, we can use absolute colour

6.1. Image Reproduction

156

coordinates. However, if the output produces less brightness and has a smaller colour gamut than the source input, using relative coordinates is a better choice to avoid significant saturation of the colour information. In the experimental context of this thesis, the luminance level of our target LCD display device (⇠250 cd/m2 ) is much lower than that of the real world assuming ordinary reproduction conditions of HDR imaging. Real-world brightness is obviously not reproducible on any of the target media. Therefore, we decided to use relative coordinates for connecting achromatic colour information by using lightness. This narrows the possible colour connection spaces to J M h and J Ch colour spaces. Second, as shown in Section 2.4, our target output device presents an almost identical colour gamut to the sRGB colour space, which covers most colours in the real world [Pointer, 1982] (see Figure 2.14). We decided to use absolute coordinates for chromatic information by using colourfulness instead of chroma. Chroma quantifies the relative intensity of each hue, disregarding absolute intensity, whereas colourfulness preserves the absolute intensity of each hue (see Section 2.3.1). Therefore, using colourfulness coordinates is a better choice to reproduce the original colour information without loss of any information (achieving higher fidelity) with the condition that the output medium could produce the same colourfulness as the input medium. However, if the gamut boundary of the output medium is unknown or significantly smaller than that of the input medium, chroma could be a safer choice for the colour space because we could avoid unpredictable saturation of the colour information in reproduction although the overall colourfulness would be shrunk or expanded depending on the specification of the output medium with chroma mapping. In summary, the J M h colour space is chosen as our main colour connection space considering our experimental conditions. The perceptual performances of J M h and J Ch are evaluated in Section 6.2.

6.1.3

Parameters

Our forward colour appearance model requires absolute CIEXYZ radiance values with three parameters as input: absolute CIEXYZ values of the reference white of the scene, the level of luminance for global adaptation, and the medium type that is observed. Measuring the absolute CIEXYZ radiance values of a scene is achievable by using a spectroradiometer or an HDR characterisation method [Kim and Kautz, 2008a], presented in Chapter 3. The absolute CIEXYZ values of the reference white of the scene are able to be chosen in an automatic or a manual manner: automatically selecting CIEXYZ values of the maximum brightest pixel value or our estimating-illumination method (see Chapter 3), or manually selecting the reference white point. We conducted manual measurements of the reference white for our experimental scenes carefully in order to avoid any measurement error caused by camera noises. For the level of luminance adaptation, we used an averaged luminance level by using the geometric mean of luminance as it is believed to be a good approximation of the average luminance [Pattanaik et al., 2000; Reinhard et al., 2002, 2005]. In order to avoid infinite errors in calculating the geometric mean, we calculated the geometric mean of the luminance by

6.1. Image Reproduction using the exponential of the arithmetic mean of log luminances with a minimum value: 0P � � �� 1 log δ + Y x, y B C B x, y C L a = exp B C, |Y | @ A

157

(6.1)

where δ is 1.0E-30, Y is the luminance of each pixel (x, y), and | Y | is the cardinality of Y . The input

medium parameter is decided by which medium is observed. In the case of a real-world scene, we use a high-luminance LCD display with parameter E = 1.0 [see Equation (5.8)] as this corresponds to real-world observations (see Chapter 5). Our inverse colour appearance model requires perceptual colour attributes J M h or J Ch with

three parameters as input: absolute CIEXYZ values of the reference white of an output device, a level of luminance adaptation of the observation, and a target medium type. For our experimental conditions, we set the target medium to an sRGB calibrated LCD display (having a peak luminance level of 250 cd/m2 ) assuming that the display is observed in dim conditions (10% of the peak luminance level), following [IEC, 2003] for standardised sRGB viewing conditions, i.e., the reference white of the output device for our inverse colour appearance model was (237.62, 250.00, 272.21) in CIEXYZ. The level of luminance adaptation level was set to 25 cd/m2 . In addition, for a general purpose target medium, we used the transparent advertising media parameter (E = 1.2175) for a general sRGB display device with an average surround, which was assumed to have characteristics between our high-luminance LCD display and a CRT display. Therefore, the printed thesis might appear differently depending on printer characteristics and its viewing conditions. The following section demonstrates actual applications of the reproduction of HDR images onto a general sRGB target medium with comparison to other methods.

6.1.4

Qualitative Results

As mentioned in Chapter 5, our colour appearance model can be used to predict perceptual phenomena. Figure 6.2 demonstrates an example of the simultaneous contrast effect. Our colour appearance model can be used to match the appearance of images with different backgrounds. The two images appear identical even though the one on the right is actually lighter and more colourful (see Section 4.5). This is achieved by modifying the target luminance adaptation level when applying our colour appearance model. Compared to a black background, a white background increases luminance adaptation. In Figure 6.3, we demonstrate media-dependent reproduction. The left image printed on paper is perceptually equivalent to the right image displayed on an LCD display (assuming a calibrated device in dim viewing conditions). If both are viewed on an LCD display, the left image appears brighter. This is due to the fact that luminance perception for paper decreases, and our colour appearance model compensates for it. Figure 6.4 qualitatively compares colour reproduction with CIECAM02 [Moroney et al., 2002], iCAM06 [Kuang et al., 2007], and our model [Kim et al., 2009]. A high-dynamic-range scene is captured by our HDR camera system and converted into a CIEXYZ radiance map (on an absolute

6.1. Image Reproduction

158

Figure 6.2: Appearance matching with respect to the background effect. The two colour charts will appear similar (assuming that a calibrated display with a gamma of 2.2 in dim viewing conditions). When comparing the two images without the backgrounds, it can be seen that the right colour chart is actually lighter and more colourful.

Figure 6.3: Appearance matching with respect to media dependency. Our model can be used to match colour appearance on different media. Starting from a radiometrically calibrated C I EX Y Z float image [Kim and Kautz, 2008a], the left image printed on paper will appear very similar to the right image when displayed on an LCD display (assuming calibrated devices in dim viewing conditions under a luminance of 119 cd/m2 ).

6.1. Image Reproduction

159

(a) CIECAM02

(b) iCAM06

(c) Our model Figure 6.4: Qualitative comparison of perceptual predictions of CIECAM02 (top), iCAM06 (middle), and our model (bottom). As the HDR image contains high-dynamic-range luminances, CIECAM02 fails to predict visual perception. As can be seen, CIECAM02 is not designed to handle HDR images. iCAM06 is a hybrid model to combine the revised CIECAM02 and an HDR tone-mapping algorithm. Its result shows halo artefacts on the colour chart and hue deterioration. Measured peak luminance of this scene was 1 382 cd/m2 .

6.2. Experimental Evaluation

160

scale). The peak luminance level of the scene was 1 391 cd/m2 . Image (a) shows the result for CIECAM02, which underestimates the perceived lightness, as observed in qualitative comparison [see Figure 5.5(b)]. As the HDR image contains high-dynamic-range luminances, CIECAM02 fails to predict visual perception. As can be seen, CIECAM02 is not designed to handle HDR images. Image (b) shows a results of iCAM06 (a combination of the revised CIECAM02 and a bilateral tonemapping algorithm [Durand and Dorsey, 2002]). Original colourfulness and hue are altered by the model, and halo artifacts are observed around square patches on the colour chart. Image (c) presents the result of our model. Our model reproduction is much closer to actual perception with HDR images as demonstrated in quantitative comparisons (see Section 5.4). Figures 6.5 and 6.6 compare the use of our colour appearance model for tone-mapping with CIECAM02, Reinhard et al. [2002]’s method, and iCAM06. Our model’s results are consistent throughout with good luminance compression. Colours are slightly more saturated than with the other two models, which is due to our model preserving the original colourfulness impression. Figures 6.7, 6.8, and 6.9 present more results with ordinary HDR images. The next section describes quantitative evaluation of the perceptual similarity of our reproduction model to the real scene with comparison of other methods.

6.2

Experimental Evaluation

We conducted a series of psychophysical experiments to evaluate the fidelity (accuracy) of the reproduction of real scenes. Two real scenes were arranged to be compared with their reproductions (on a calibrated LCD display). Participants were asked to compare the real scene and its reproduction in terms of how similar to the real scene the reproduction is. The participants produced a five-point scoring scale by comparing the reproduction to the real scene. The data from this paired comparison plus category experiments was analysed with Torgerson’s Law of Categorical Judgement [Torgerson, 1958] (extended Thurstone’s Law of Comparative Judgement), as shown in [Kim and MacDonald, 2006; Kuang et al., 2007; Ritschel et al., 2008; Yu et al., 2009].

6.2.1

Stimuli

In order to measure the perceptual similarity of HDR reproductions to real-world scenes, we arranged two real scenes with high-dynamic-range luminances in a dark room (see Figure 6.10 for the experimental setup). The scenes were captured by our HDR imaging system (using a Canon 350D camera, see Chapter 3). The HDR images were characterised to produce physically-meaningful HDR radiance maps in absolute terms [Kim and Kautz, 2008a]. The calibrated HDR radiance maps (absolute CIEXYZ) were reproduced on a characterised LCD display with three different HDR tonemapping algorithms ([Reinhard et al., 2002], [Durand and Dorsey, 2002], and [Reinhard and Devlin, 2005]), an image appearance model (iCAM06), and our method (using J M h and J Ch colour connection spaces). We used an Apple Cinema HD Display 23” monitor with a maximum luminance of 275.6 cd/m2 whose gamma was calibrated to 2.2 following the sRGB colour specification [IEC, 2003]. Figure 6.11 shows a screen shot of the stimulus. Participants were seated in front of the

6.2. Experimental Evaluation

161

(a) CIECAM02

(b) Reinhard et al. (2002)

(c) iCAM06

(d) Our model

Figure 6.5: Qualitative comparison of visual predictions of (a) CIECAM02, (b) Reinhard et al.’s tonemapping algorithm, (c) the iCAM06 image appearance model, and (d) our colour appearance model. The target display is assumed to be sRGB with a peak luminance level of 250 cd/m2 and a gamma of 2.2 (dim viewing conditions, adapting luminance is assumed to be 10% of peak luminance). Our model takes into account not only tone but also original colourfulness. Estimated peak luminance: 13 405 cd/m2 . Image courtesy of Paul Debevec.

6.2. Experimental Evaluation

162

(a) CIECAM02

(b) Reinhard et al. (2002)

(c) iCAM06

(d) Our model

Figure 6.6: Qualitative comparison of visual predictions. Absolute HDR radiance maps are tonemapped using (a) CIECAM02, (b) Reinhard et al.’s tone-mapping algorithm, (c) the iCAM06 image appearance model, and (d) our colour appearance model. Different from other methods, our model does not struggle with local adaptation artefacts like halos. Estimated peak luminance: 1 199 cd/m2 . Image courtesy of Yuanzhen Li.

6.2. Experimental Evaluation

163

(a) Reinhard et al. (2002)

(b) iCAM06

(c) Our model

Figure 6.7: Qualitative comparison of visual predictions of (a) Reinhard et al.’s local tone mapping (top), (b) an image appearance model, iCAM06, (middle), and (c) our model (bottom). Estimated peak luminance: 8 774 cd/m2 . Image courtesy of Martin Cadik.

6.2. Experimental Evaluation

164

(a) Reinhard et al. (2002)

(b) iCAM06

(c) Our model

Figure 6.8: Qualitative comparison of visual predictions of (a) Reinhard et al.’s local tone mapping (top), (b) an image appearance model, iCAM06, (middle), and (c) our model (bottom). Estimated peak luminance: 18 238 cd/m2 . Image courtesy of Dani Lischinski.

6.2. Experimental Evaluation

165

(a) Reinhard et al. (2002)

(b) iCAM06

(c) Our model

Figure 6.9: Qualitative comparison of visual predictions of (a) Reinhard et al.’s local tone mapping (top), (b) an image appearance model, iCAM06, (middle), and (c) our model (bottom). Estimated peak luminance: 13 437 cd/m2 . Image courtesy of Greg Ward.

6.2. Experimental Evaluation

166

screen at a distance of approximately 60–100cm. All stimulus reproductions were presented twice against a middle-gray background (⇠20% background ratio) in a random order, i.e., each reproduction is shown twice in each phase with consideration of training and measurement accuracy. The actual data is averaged from these twice-repeated data. Figure 6.12, 6.13, 6.14, and 6.15 present the actual reproductions used as visual stimuli for the experiment.

6.2.2

Experimental Procedure

The goal of our psychophysical experiments was to quantify perceptual similarity of reproductions to their original real scene. The paired comparison plus category method [Scheffé, 1952] was used (see Figure 6.10), using a five-point scoring scale (see Figure 6.11). The technique is a combination of a five-point category rating scale and a pair comparison. Participants estimate the difference between a pair (the real scene and the reproduction) and assign a number to this difference. These categories are labelled with the following descriptions: (1) not similar, (2) slightly similar, (3) moderately similar, (4) very much similar, and (5) extremely similar, adapted from [Bartleson, 1984; Meilgaard et al., 1991]. The paired comparison plus category experiments were conducted in three sessions on two days. Two different scenes were built and used as stimuli in the same way. Ten colour-normal participants took part in each experiment. In the experiments, three of the participants were female computer scientists with an imaging background; the other participants were male with a computer graphics or science background. The participants were given instructions beforehand which contained a brief description of the task. On each day, participants were given a real scene and a series of reproduced scenes in a dark room (see Figure 6.10 for experimental setting). They were asked to compare the real scene and the reproduced scene with three criteria. In the first phase, they were asked to assign a score (1–5) to how similar each reproduction was to the real scene in terms of realism (considering all visual aspects). In the second, they were asked to score lightness reproduction such as tone, contrast, lightness, and shadow. In the third, they were asked to score colour reproduction, e.g., how similar the reproduced colour chart was to the real colour chart.

Black curtain

High-dynamic-range scene

Darkroom

Reproduced scene

HDR camera system Participant

Figure 6.10: reproductions.

Schematic diagram of psychophysical experiments for evaluating visual accuracy in

6.2. Experimental Evaluation

167

Figure 6.11: Screen capture of a reproduction stimulus. Participants observed six different reproductions in a dark room, compared with the captured real scene. The participants were allowed to compare the real scene and the reproduced scene anytime they felt it necessary. The category of the reproduction was judged based on their memory.

The experiment was conducted in a controlled environment under dark viewing conditions, following sRGB standard viewing conditions. Participants were asked to adapt to the illumination conditions for 5-10 minutes before starting the experiment. The participants were allowed to compare the real scene and the reproduced scene anytime they felt it necessary. The category of the reproduction was judged based on their memory. In the experiment, the participants made six estimates (six reproduction methods compared to a reference for each of the two scenes) in terms of three different criteria: realism reproduction, lightness reproduction, and colour reproduction for each phase. The same set of stimuli were repeated seamlessly twice to achieve a higher accuracy (average data was used for analysis). In completing three phases, participants spent approximately 20–30 minutes. See Table 6.1 for a summary of the experimental evaluation (see Appendix A.7). The inter-observer variance of the ten participants of all phases (average variation of each participant to an average result) was 14.81%. Three observers repeated the same experiment twice in order to judge repeatability. The average variation between the two experiments was 12.96%.

6.2.3

Quantitative Results and Analysis

The category experiment yielded similarity scores on a five-point scale relating the reference real scenes to the reproduced images. We analysed this data using perceptual scaling. The five-point scores were scaled using the “Law of Categorical Judgement” [Torgerson, 1954, 1958]. This is an

6.2. Experimental Evaluation

168

(a) Durand and Dorsey [2002]

(b) Reinhard et al. [2002]

(c) Reinhard and Devlin [2005] Figure 6.12: Comparison of perceptual predictions of Durand and Dorsey [2002] (top), Reinhard et al. [2002] (middle), and Reinhard and Devlin [2005] (bottom). These reproductions are compared with a real scene as ground truth (scene one).

6.2. Experimental Evaluation

169

(a) iCAM06

(b) Our model (JMh)

(c) Our model (JCh) Figure 6.13: Comparison of perceptual predictions of iCAM06 (top), our model by using the JMh colour space (middle), and our model by using JCh colour space (bottom). These reproductions are compared with a real scene as ground truth (scene one).

6.2. Experimental Evaluation

170

(a) Durand and Dorsey [2002]

(b) Reinhard et al. [2002]

(c) Reinhard and Devlin [2005] Figure 6.14: Comparison of perceptual predictions of Durand and Dorsey [2002] (top), Reinhard et al. [2002] (middle), and Reinhard and Devlin [2005] (bottom). These reproductions are compared with a real scene as ground truth (scene two).

6.2. Experimental Evaluation

171

(a) iCAM06

(b) Our model (JMh)

(c) Our model (JCh) Figure 6.15: Comparison of perceptual predictions of iCAM06 (top), our model by using the JMh colour space (middle), and our model by using JCh colour space (bottom). These reproductions are compared with a real scene as ground truth (scene two).

6.2. Experimental Evaluation

Numbers

172

Observers

Phases

Methods

Scenes

Estimates

10

3

6

2

72

Table 6.1: Summary of our evaluation experiment. In each phase, six reproductions were shown twice. Two scenes were used. Each participant totalled 72 estimations, which took approximately 20–30 minutes per participant. extension to Thurstone [1959]’s pair-comparison scaling that allows for several categories. First, the frequency matrix FM of N participants for each score was computed for each reproduction. A cumulative frequency matrix was then computed from the lowest score to the highest score. A logistic psychometric model LG (following Condition D in the Law of Categorical Judgement — assuming that all the discriminal dispersions and correlations are constant, independent of category or sample [Engeldrum, 2000]), was derived from the cumulative frequency matrix: ◆ ✓ FM + 1/2 . LG = ln N − FM + 1/2

(6.2)

Also, from the normalised frequency matrix FM, z scores were computed through the normal-inverse

statistic function. Then, a linear least squares-fit was used to find the best fit from the LG to the z scores (see Figure 6.16 for an example). The difference of the response scales for each method between neighbouring categories were averaged to find category boundary estimates. Finally, the differences between category boundary estimates and response scales yields perceptually-uniform scales for the given stimuli [Morovic, 2008]. These scale values can be related to the original categories (from not similar to extremely similar). The estimated scale values are on a perceptually-uniform scale, which allows one to judge relative differences in similarity of the reproduced HDR images to the captured real-world scene. The results are summarised with estimated category boundaries in Figures 6.18 (see Figure 6.12 and 6.13 for actual stimuli) and 6.19 (see Figure 6.14 and 6.15).

1 y = 0.6422x R² = 1

z-score

0.5 0 -2

-1

0

1

2

-0.5 -1 -1.5

z-score Linear (z-score)

LG

Figure 6.16: An example of a linear least-squares fit from LG to z-score (from a phase of lightness reproduction in scene two).

6.2. Experimental Evaluation

173

Scene one (overall)

Durand&Dorsey Reinhard et al. Reinhard&Devlin iCAM06 Our model (JMh) Our model (JCh)

1

2

(a)

3 Category score

4

5

4

5

Scene two (overall)

Durand&Dorsey Reinhard et al. Reinhard&Devlin iCAM06 Our model (JMh) Our model (JCh)

(b)

1

2

3 Category score

Figure 6.17: Overall quantitative comparison of visual predictions of Durand and Dorsey [2002], Reinhard et al. [2002], Reinhard and Devlin [2005], iCAM06, and our model (J M h and J Ch colour spaces) with real scenes (scene one and two). Our J M h model has overall mean of category scores significantly different from other five methods in both scenes (one-way ANOVA, F-test, alpha=0.05): (scene one) F-value=34.48, p-value=0.0, (scene two) F-value=59.77, p-value=0.0. The dotted lines indicate 95% confidence interval. It is compared in terms of the reproductions of realism, lightness, and colourfulness. Descriptions of scores: (1) not similar, (2) slightly similar, (3) moderately similar, (4) very much similar, and (5) extremely similar.

6.2. Experimental Evaluation 5.0 4.5

174

Error bar: 95% confidence interval

Scene 1 (Realism)

score

5

Scales (similarity)

4.0

4

3.5 3.0

3

2.5 2.0

2

1.5 1.0

1

0.5 0.0 Durand&Dorsey

5.0 4.5

Reinhard et al.

Reinhard&Devlin

iCAM06

Our model (JMh) Our model (JCh)

Error bar: 95% confidence interval

Scene 1 (Lightness)

score

5

Scales (similarity)

4.0

4

3.5 3.0

3

2.5 2.0

2

1.5 1.0

1

0.5 0.0 Durand&Dorsey

5.0 4.5

Reinhard et al.

Reinhard&Devlin

iCAM06

Our model (JMh) Our model (JCh)

Error bar: 95% confidence interval

Scene 1 (Colourfulness)

score

5

Scales (similarity)

4.0

4

3.5 3.0

3

2.5 2.0

2

1.5 1.0

1

0.5 0.0 Durand&Dorsey

Reinhard et al.

Reinhard&Devlin

iCAM06

Our model (JMh) Our model (JCh)

Figure 6.18: Quantitative comparison of visual predictions of Durand and Dorsey [2002], Reinhard et al. [2002], Reinhard and Devlin [2005], iCAM06, and our model (J M h and J Ch colour spaces) with a real scene (scene one). It is compared in terms of the reproductions of realism, lightness, and colourfulness. Descriptions of scores: (1) not similar, (2) slightly similar, (3) moderately similar, (4) very much similar, and (5) extremely similar.

6.2. Experimental Evaluation

5.0 4.5

175

Error bar: 95% confidence interval

Scene 2 (Realism)

score

5

Scales (similarity)

4.0

4

3.5 3.0

3

2.5 2.0

2

1.5 1.0

1

0.5 0.0 Durand&Dorsey

5.0 4.5

Reinhard et al.

Reinhard&Devlin

iCAM06

Our model (JMh) Our model (JCh)

Error bar: 95% confidence interval

Scene 2 (Lightness)

score

5

Scales (similarity)

4.0

4

3.5 3.0

3

2.5 2.0

2

1.5 1.0

1

0.5 0.0 Durand&Dorsey

5.0 4.5

Reinhard et al.

Reinhard&Devlin

iCAM06

Our model (JMh) Our model (JCh)

Error bar: 95% confidence interval

Scene 2 (Colourfulness)

score

5

Scales (similarity)

4.0

4

3.5 3.0

3

2.5 2.0

2

1.5 1.0

1

0.5 0.0 Durand&Dorsey

Reinhard et al.

Reinhard&Devlin

iCAM06

Our model (JMh) Our model (JCh)

Figure 6.19: Quantitative comparison of visual predictions of Durand and Dorsey [2002], Reinhard et al. [2002], Reinhard and Devlin [2005], iCAM06, and our model (J M h and J Ch colour spaces) with a real scene (scene two). It is compared in terms of the reproductions of realism, lightness, and colourfulness. Descriptions of scores: (1) not similar, (2) slightly similar, (3) moderately similar, (4) very much similar, and (5) extremely similar.

6.3. Discussion

176

Our reproductions using the J M h colour connection space were all considered very much similar to the reference scene in all scenes. Our J M h method statistically significantly outperforms the other methods in all phases of all scenes (one-way ANOVA, F-test, alpha=0.05). See Figure 6.17. Our reproductions using the J Ch colour space were considered very much similar or moderately similar. Our J Ch method mostly outperforms the other methods. However, its performance was significantly lower than our J M h method in all cases. In one phase of realism reproduction in scene two, our J Ch method presents similar but better performance than iCAM06. Reinhard et al. [2002] shows a better performance than our J Ch method in one phase of colour reproduction in scene two.

6.3

Discussion

As shown in Figure 6.18 and 6.19, the performance of the presented HDR tone-mapping algorithms depends on the scene. Scene one contains fewer objects than scene two but has more obvious colour samples and luminance changes (e.g., light and shadow). In post interviews, the participants said that they felt the task was much easier for scene one as the shadow and colour differences (using the colour chart as a reference) were more obvious than in scene two. Participants felt more confident in judging the similarity of colourfulness (with scene one) by comparing the colour chart in the real scene to that in the reproductions. Participants commented that the overall change in luminances in scene one is clearer than that of scene two (see Figure 6.12, 6.13, 6.14, and 6.15) and this helped lightness judgement. Scene two contains more ordinary objects than scene one but does not contain any standard object like the GretagMacbeth ColorChecker. As shown in Figure 6.19, participants felt the reproductions of a few tone-mapping algorithms (Reinhard et al. [2002] and iCAM06) were more similar to the real scene in scene two than in scene one. These two methods were ranked between slightly similar and moderately similar in scene one, but were ranked between moderately similar and very much similar in scene two. Their performance became closer to that of our J Ch method. This indicates that our J Ch method may not distinctively outperform the other methods with ordinary scenes (e.g., without standard objects like a colour chart). However, the performance of our J M h method was ranked top in all scenes with statistical significance. Our reproduction system using the J M h colour connection space significantly outperforms other methods as our pipeline takes perceptual transformation of colour attributes into account. The 1:1 perceptual mapping in our J M h colour space yields high-fidelity colour reproduction. However, our system has limitations when used with current HDR imaging technology. First, current available HDR images have been generated using uncalibrated HDR imaging systems. This means that our J M h reproduction system is not fully compatible with existing HDR images. Hence, in order to obtain absolute scale in uncalibrated HDR images, we empirically scale existing HDR images to reasonable levels (see Figure 6.5). The complete application requires characterisation procedures for input/output devices [Kim and Kautz, 2008a]. In addition, if the specification of an output device is not available (e.g., non sRGB colour device), our J Ch model may be a safer choice for colour reproduction on unknown devices. In this case, the colourfulness intensity will depend on the colour specifications of the output medium.

6.4. Summary

177

The scope of this thesis limits the mapping from an input to an output colour gamut as a direct 1:1 mapping. Colours outside the target gamut were simply clamped. This enables the faithful reproduction of perceived colours with high fidelity such that they are inside the gamut. However, it does not include any image enhancement for out-of-gamut colours. If we would like to improve the preference of users, rather than faithfulness in reproduction, it would be interesting to study a gamut mapping algorithm that scales and adapts perceptual colours intelligently. As shown in Equation (6.1), we calculate the average luminance adaptation by employing the geometric mean, which is empirically believed to work well for tone-mapping [Reinhard et al., 2005]. However, the actual mathematical relationship between the geometric mean and the spatial coherence of the luminance adaptation is currently unknown. This would be worth studying in the future. In addition, the impact and correlation between luminance adaptation and surround luminance levels is not explored in this thesis.

6.4

Summary

This chapter presented a novel HDR imaging pipeline that is built on our HDR characterisation method and colour appearance model. It also describes the psychophysical evaluation of the reproduction performance, compared with other HDR tone reproduction and image appearance models. A series of psychophysical experiments were conducted to quantify the perceptual similarity of the reproductions to the reference real scene. As a result, the proposed colour reproduction system, called the J M h colour space model, outperformed other HDR tone reproduction methods and an image appearance model with statistical significance. This result cross-validates the quantitative evaluation of our colour appearance model (see Chapter 5). Consequently, our J M h colour reproduction system provides a good basis for high-fidelity colour reproduction for high-dynamic-range imaging.

178

Chapter 7

Discussion and Future Work The aim of this thesis was to develop a colour reproduction system for high-dynamic-range (HDR) imaging. Classical colour reproduction systems fail to reproduce HDR images because the current characterisation methods and colour appearance models (CAM) fail to cover the dynamic range of luminance in HDR images. HDR tone-mapping algorithms have been developed to reproduce HDR images on low-dynamic-range (LDR) media such as LCD displays. However, these models have been based on theoretical assumptions, due to a lack of physical and psychological measurements. Hence, we revisited the key infrastructure of classical colour reproduction elements (the characterisation method and the colour appearance model), reformulating them for high-dynamic-range imaging through a series of physical and psychological experiments. To this end, the most essential elements of colour reproduction, the device characterisation and the colour appearance model, were investigated with respect to high-dynamic-range imaging. First, our HDR characterisation method enables us to measure high-dynamic-range radiances to a high accuracy, competing with very expensive spectroradiometers in accuracy (Chapter 3). Second, modelling colour appearance requires significant effort to prepare before the mathematic development. We firstly built a high-luminance display to obtain a controllable high-luminance viewing environments. We conducted a psychophysical experiment on this display to measure colour appearance attributes of human colour perception (Chapter 4). A novel numerical model was derived from a novel experimental data set, which covers the full range of the human visual system (Chapter 5). Our colour appearance model predicts perceptual colour attributes under high luminance levels to a high accuracy. Finally, our colour reproduction system is built on our novel HDR characterisation and colour appearance models. This system outperforms other HDR reproduction methods with statistical significance (Chapter 6). The following sections summarise findings and discuss each chapter.

7.1

High-Dynamic-Range Characterisation

Current camera characterisation methods for low-dynamic-range camera systems were established with classical colorimetry, which interprets a colour with three essential elements: a light source, an object, and an observer. These characterisation models were numerically derived from a set of physical measurements of reflective colour samples and camera responses to the physical radiation.

7.2. High-Luminance Colour Experiments

179

A light source is required to obtain camera responses from the reflective targets. The numerical transform of the characterisation model bakes the actual spectral characteristics into the characterisation model. The imaging sensor of the digital camera is illuminated by radiant power, which is the sum of the product of the spectral energy of the light source and reflectance property of the target surface. Therefore, separate measurements of the light source and reflectance for building a characterisation model are not only redundant, but also contribute to worse performance and luminance dependency as in previous characterisation methods. Our scientific insights into characterisation are that the reflective targets only offer a low dynamic range which make them a bad choice for HDR imaging, and that characterisation based on reflective targets requires both the reflectance of the target and the spectrum of the illuminant to be known. Therefore, we proposed the use of a novel back-lit transparency colour target, specifically designed for HDR imaging, offering a higher dynamic range and wider colour gamut. Thus, our characterisation method only requires the emitted radiant power to be known, which can be measured using a spectroradiometer. This enables us to accurately characterise a digital camera used for HDR imaging. The achieved accuracy of the characterised HDR camera system is similar to the accuracy of a spectroradiometer. Our characterisation model transforms HDR RGB images into physically-meaningful CIEXYZ radiance maps in absolute scale. The captured CIEXYZ radiance can be white-balanced by our illuminant estimation method for display. In addition, the combination of a new transparency colour target, HDR imaging, and characterisation theory yields significantly higher accuracy in measuring real-world radiation. The big advantage using our HDR characterisation method is that such highly-accurate measurement is provided not at a point as with a spectroradiometer but as an whole image. This provides greater efficiency in measuring radiance when compared to a spectroradiometer. The performance of our characterisation method depends on the optical quality of the digital camera, including lens flare, vignetting, veiling glare, and the infrared filter. Modelling these optical phenomena is worth future study. Also, the measurement used in our method returns radiometric CIEXYZ values, not radiance at each wavelength. This means that our method still allows potential measurement errors with metameric colours as is the case with any other target-based models.

7.2

High-Luminance Colour Experiments

Our HDR characterisation method interprets colour specifications of device-dependent HDR RGB images into highly accurate and physically-meaningful radiance values in the form of absolute CIEXYZ. However, this is not sufficient for HDR colour reproduction as the given physical colours are perceived differently under different viewing environment conditions. Therefore, perceptual attributes need to be measured and modelled to allow for HDR colour reproduction. Colour appearance modelling was developed to predict colour appearance attributes under given viewing conditions. CAMs interpret physical colours perceptually. Colour appearance models have previously been derived numerically from experimental measurements of colour appearance. However, currently-available colour appearance measurements present only a limited dy-

7.3. Colour Appearance Model

180

namic range of luminance. The range of colours in the available data sets are at most three orders of magnitude. This was limited by the available display technology in the early 1990s. Therefore, a novel high-luminance display device was built to yield a controllable high-luminance viewing environment, where a series of psychophysical experiments were conducted to produce colour appearance data under high luminance levels up to five-order magnitude (covering the dynamic range of the human visual system). From the high-luminance colour experiments, we found that if the luminance level increases, then lightness and colourfulness both increase. This confirms the Stevens and Hunt effects. In contrast, if the background luminance level increases, lightness and colourfulness both decrease, confirming the simultaneous contrast effect. Most of our findings are consistent with previous colour appearance data sets under low luminance, and similar trends can be observed in both data sets. However, the previous data sets quantify these colour appearance phenomena to at most three orders of magnitude of luminance (690 cd/m2 ), but our data set covers luminance up to five orders of magnitude of luminance (16 860 cd/m2 ). Although our colour appearance data includes fewer different media than previous appearance data and less variation in colour temperature, it covers five orders of magnitude of luminance. The range of the experimental data corresponds to the working range of the human visual system. This experimental data set enables us to derive a novel colour appearance model for an extended range of luminance levels. Accordingly, our numerical model covers the full range of colour perception of the human visual system.

7.3

Colour Appearance Model

A colour appearance model describes a conversion from physical measurements to perceptual quantities. This conversion differs amongst existing CAMs and involves numerical transfer functions that are matched to psychophysical observation data. Our CAM is mainly derived from our observation data under high-luminance levels. Our model is based on two insights. First, the modelling approaches of current CAMs are based on classical colourimetry using relative colour coordinates. As shown in our colour experiments, colour perception changes according to the absolute levels of luminances. Modelling and optimisation in our model were based on absolute-scaled quantities of physical measurements. Second, a physiologically-derived cone response function [Valeton and van Norren, 1983] has been broadly used in existing CAMs. This function has two parameters. One parameter was previously taken from primate measurements. In contrast, our CAM uses this data and optimises the cone model purely based on our high-luminance experimental observations. The other parameter is modelled as a constant in other CAMs and differs amongst existing CAMs. However, through investigation of physiological literature, we found that this parameter should change dynamically according to the level of luminance adaptation, which we adopt in our model. Our cone response function enables us to cover the full working range of the human visual system and to predict the simultaneous contrast effect with various level of luminance adaptation. As a result, the lightness and colourfulness predictions of our model were statistically signifi-

7.4. High-Dynamic-Range Colour Reproduction

181

cantly better than the predictions of the other models and also very consistent up to high level of luminances. The variation of the predictions on the test data sets reaches that of the psychological measurements. This means that it would be difficult to achieve a better lightness and colourfulness prediction than that from our model. Hue predictions were almost identical with current standard models. Our model was also tested on previous appearance data, which allows us to cross-validate our model’s performance on different media such as paper, transparency, or CRT. Our model outperformed the other CAMs even with this data set, presenting similar accuracy to that of our CAM with our high-luminance data set. Our psychophysical experiments and colour appearance model focused on high-luminance photopic vision rather than mesopic or scotopic vision as our research was motivated by the advent of high-dynamic-range imaging, which deals with higher levels of luminance. Therefore, our model does not include the contribution of rods under dark luminance conditions. Our model also does not take a separate background parameters because we believe that the explicit measurement of luminance in a 10◦ viewing angle already includes the background luminance level implicitly, which is confirmed in our result variation (measurement data is fitted well by our model). We also do not include a separate surround parameter, as our data and previous experiments (see Section 4.5.8) showed that this parameter has no significant influence on perception. As is well known from previous research, colour appearance depends on the medium. Considering our research goal, we chose high-luminance and high-dynamic-range media rather than reflective media. Furthermore, our research scope does not include chromatic adaptation experiments, and we rely on previous CAT models. This is a worthwhile area for future study. Eye ball movement and light scattering within the eye may be an interesting direction of study. Our research scope focuses on 2◦ colour perception (sensed by the fovea, the main concentration of colour receptors, that comprises approximately 2◦ diameter of the visual field [Hunt, 1998]) rather than 10◦ perception (sensed by the fovea and rods together before the blind spot in a 10-12◦ diameter of the visual field). Using 10◦ observations, reflective media, increasing variation in surround conditions, and mesopic/scotopic-level luminances are all worth future study.

7.4

High-Dynamic-Range Colour Reproduction

Our HDR characterisation method enables us to convert device-dependent HDR images to physically-meaningful radiance maps. Our colour appearance model allows us to convert physical coordinates to perceptually-uniform colour appearance attributes. These perceptual coordinates from the forward application of our model are reproducible on a target medium with new viewing parameters as input. With colour reproduction in mind, we developed all our model equations to have analytical invertibility. Our inverse CAM is able to convert perceptual coordinates back to physical quantities. These physical quantities are easily reproducible using inverse output device characterisation. This completes the pipeline for image reproduction of the real world onto a target medium. We conducted a series of psychophysical experiments to evaluate accuracy of real-world repro-

7.4. High-Dynamic-Range Colour Reproduction

182

duction. Real scenes were built to be compared with their reproductions. Our reproduction system using the J M h colour connection space outperforms other reproduction methods with statistical significance in all scenes, as our system takes into account perception under high luminance. These experimental results match the previous quantitative results of our CAM for cross-validation. Our colour reproduction system is fully independent of existing HDR imaging systems. The entire pipeline needs to be used in order to achieve high-fidelity reproduction as our method supersedes existing HDR methods. The scope of this thesis did not include gamut mapping and we use direct 1:1 mapping. This enables the faithful reproduction of the perceived colours with high fidelity. A user preference study, called a gamut mapping algorithm, can be a worthwhile area for future study. In addition, the numerical relationship between the averaged luminance adaptation and its spatial coherence within vision is worth studying in the future.

183

Chapter 8

Conclusion The focus of this thesis has been the development of a colour reproduction system for high-dynamicrange imaging, which enables us to reproduce the visual perception of the human visual system on any target medium. With this context, we revisited the key infrastructure of classical colour reproduction elements, reformulating them for high-dynamic-range imaging. We developed an HDR characterisation method that enables us to measure high-dynamic-range radiances to a high accuracy. This method measures the physical radiance values of the real world as an image with significant accuracy, rivaling spectroradiometers. This allows us to obtain physically-meaningful HDR radiance maps with a standard digital camera. However, this is not sufficient for HDR colour reproduction as the given physical colours under high luminance ranges are perceived differently due to their viewing environment conditions. Hence, we built a novel high-luminance display and conducted a psychophysical experiment to measure the perceptual colour attributes under a wide range of luminance levels. This enables us to quantify human perception, covering the full range of the human visual system. A novel colour appearance model was then derived from the experimental data. Our model predicts perceptual colour attributes of lightness and colourfulness under high luminance levels with significant accuracy. This completes the colour reproduction pipeline with respect to high-dynamic-range imaging. Finally, our reproduction system was built on these fundamental contributions, our novel HDR characterisation and colour appearance models. A psychophysical evaluation showed that our HDR reproduction system outperforms other methods with statistical significance. Our colour reproduction system provides high-fidelity colour reproduction for high-dynamic-range imaging.

184

Appendix A

Supplementals A.1

Notation

L . . . . . . . . . . . . . . . . . . . . . . Radiance (luminance),

I . . . . . . . . . . . . . . . . . . . . . . . . . . . Radiant intensity,

F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Luminous flux,

Φ . . . . . . . . . . . . . . . . . . . . . . . Radiant power (flux),

B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Radiosity,

P . . . . . . . . . Spectral power distribution of light,

E . . . . . . . . . . . . . . . . . . . .Irradiance (Illuminance),

S . . . . . . Surface reflectance (or transmittance),

!i . . . . . . . . . . . . . . . . . . . . . . . . Incoming direction,

!o . . . . . . . . . . . . . . . . . . . . . . . . Outgoing direction,

Km . Maximum photographic luminous efficacy,

M,N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Matrices,

. . . . . . . . . . . . . . . . . . . . . . . . . . . Inverse matrix,

M> . . . . . . . . . . . . . . . . . . . . . . Matrix transposition,

M · N . . . . . . . . . . . . . . . . . . . .Matrix multiplication,

λ . . . . . . . . . . . . . . . . . . . . . . . . . . Wavelength [nm],

J . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lightness,

M . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Colourfulness,

H . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hue quadrature,

Q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Brightness,

C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chroma,

h . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hue angle,

aC . . . . . . . . . Cartesian coordinate of red–green,

bC . . . . . . . Cartesian coordinate of yellow–blue,

s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Saturation,

A . . . . . . . . . . . . . . . . . . . . . . . . . . Achromatic signal,

a . . . . . . . Colour opponent signals of red–green,

b . . . . . Colour opponent signals of yellow–blue,

⇤ ∆Eab

∆E00 . . . . . . . . . Colour difference (CIEDE2000).

M

−1

A.2

. . . . . . . . . . . . Colour difference in CIELAB,

Relative Camera Transforms Forward transform

Inverse transform

R

G

B

X

Y

Z

X

0.5730

0.2459

0.0243

R

2.0085

-0.5795

-0.1481

Y

0.2486

0.9000

-0.1486

G

-0.5915

1.3206

0.2312

Z

0.0459

-0.1865

0.9108

B

-0.2223

0.2996

1.1528

Table A.1: Relative characterisation for Canon 350D. The forward transform for Canon 350D was computed from reference colour samples and corresponding input radiance.

A.3. Physical Measurements in High-Dynamic-Range Characterisation

A.3

185

Physical Measurements in HDR Characterisation Radiometric measurements

Index

Patch#

X

Y[ cd/m2 ]

1

A1

62.07

55.00

2

A2

74.51

52.30

3

A3

85.72

4

A4

95.19

5

A5

6 7 8

Z

Canon 350D measurements u’

v’

X

Y[ cd/m2 ]

41.80

0.2452

0.4889

61.29

54.78

34.86

0.3093

0.4885

72.08

52.05

51.15

30.13

0.3635

0.4880

83.24

43.70

20.02

0.4696

0.4851

97.18

249.08

223.40

163.09

0.2436

0.4917

A6

274.88

207.10

133.65

0.2907

A7

315.96

201.30

115.16

0.3433

A8

351.65

188.70

90.75

Z

u’

v’

45.81

0.2403

0.4832

38.09

0.2981

0.4844

53.10

33.44

0.3398

0.4876

51.87

23.59

0.4109

0.4935

239.85

212.94

165.53

0.2441

0.4876

0.4928

269.27

208.48

138.88

0.2825

0.4921

0.4922

309.22

205.33

124.13

0.3288

0.4913

0.4072

0.4916

350.27

203.21

103.26

0.3778

0.4932

9

A9

661.76

684.00

507.90

0.2127

0.4946

637.25

660.40

502.79

0.2115

0.4932

10

A10

692.45

671.60

481.01

0.2269

0.4951

672.97

651.20

479.57

0.2266

0.4933

11

A11

707.64

643.60

442.52

0.2422

0.4955

690.25

630.87

452.93

0.2398

0.4932

12

A12

775.76

630.60

396.90

0.2716

0.4967

752.35

624.11

396.62

0.2662

0.4969

13

A13

1536.87

1754.00

1429.26

0.1913

0.4912

1448.50

1679.81

1381.64

0.1882

0.4910

14

A14

1705.22

1876.00

1516.60

0.1983

0.4909

1573.24

1745.80

1374.06

0.1974

0.4928

15

A15

1718.63

1913.00

1403.91

0.1985

0.4972

1625.68

1823.93

1316.89

0.1974

0.4984

16

A16

1691.10

1910.00

1489.51

0.1943

0.4938

1571.64

1796.83

1326.75

0.1934

0.4975

17

A17

1735.75

1914.00

1487.13

0.1989

0.4935

1593.96

1749.30

1339.92

0.2002

0.4943

18

A18

1667.42

1893.00

1375.70

0.1951

0.4983

1470.06

1660.50

1206.57

0.1960

0.4982

19

A19

1635.99

1816.00

1520.63

0.1957

0.4888

1484.88

1654.36

1355.82

0.1956

0.4903

20

A20

139.73

69.14

49.70

0.4215

0.4693

135.13

75.67

52.68

0.3785

0.4768

21

A21

563.15

419.80

314.33

0.2887

0.4842

518.17

391.66

295.22

0.2848

0.4843

22

A22

1129.88

1117.00

680.27

0.2268

0.5045

1027.31

1009.54

633.15

0.2274

0.5028

23

B1

89.74

82.78

51.33

0.2417

0.5016

89.97

82.36

57.19

0.2404

0.4952

24

B2

107.30

84.73

35.98

0.2888

0.5131

106.04

84.60

43.89

0.2815

0.5053

25

B3

125.88

87.83

24.25

0.3321

0.5214

121.11

88.15

31.32

0.3151

0.5161

26

B4

143.16

80.74

2.83

0.4202

0.5332

142.27

86.02

9.81

0.3893

0.5295

27

B5

336.62

316.30

183.19

0.2391

0.5056

327.33

304.71

188.12

0.2397

0.5021

28

B6

365.03

297.90

120.40

0.2811

0.5161

357.58

294.29

132.46

0.2767

0.5124

29

B7

402.06

283.60

74.66

0.3296

0.5230

396.35

286.82

92.62

0.3186

0.5187

30

B8

442.68

260.20

35.90

0.3976

0.5258

440.29

275.90

55.78

0.3711

0.5232

31

B9

810.86

859.30

614.60

0.2087

0.4975

792.55

825.11

606.72

0.2115

0.4954

32

B10

832.46

845.10

540.55

0.2201

0.5027

820.97

823.40

550.50

0.2215

0.4999

33

B11

832.20

807.50

471.41

0.2318

0.5061

822.38

794.46

483.83

0.2318

0.5039

34

B12

922.23

814.50

359.25

0.2595

0.5156

905.39

799.19

373.52

0.2584

0.5133

35

B13

1458.45

1705.00

1462.16

0.1857

0.4884

1402.16

1647.50

1369.41

0.1856

0.4906

36

B14

1544.23

1610.00

1413.15

0.2064

0.4841

1473.92

1555.87

1313.97

0.2050

0.4870

37

B15

1682.46

1887.00

1187.28

0.2006

0.5062

1580.56

1784.79

1124.01

0.1993

0.5063

38

B16

1359.44

1526.00

1196.96

0.1953

0.4933

1323.02

1498.13

1170.19

0.1938

0.4938

39

B17

1566.54

1652.00

1209.22

0.2091

0.4960

1462.77

1537.75

1158.15

0.2089

0.4942

40

B18

1505.11

1783.00

1230.39

0.1885

0.5024

1418.09

1660.18

1162.56

0.1903

0.5013

Table A.2: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 1-40).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

41

B19

1438.32

42

B20

123.06

43

B21

382.46

44

B22

743.61

45

C1

46

C2

47 48

Y[ cd/m

186

Canon 350D measurements

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

1479.45

0.1925

0.4820

1326.12

1478.13

1357.06

0.1924

0.4825

79.54

31.62

0.3489

0.5073

120.28

83.02

40.99

0.3232

0.5020

269.80

100.32

0.3234

0.5133

364.02

259.57

113.29

0.3167

0.5081

678.90

374.95

0.2468

0.5070

687.28

626.56

359.23

0.2463

0.5051

179.17

182.90

103.80

0.2216

0.5090

175.59

175.98

110.64

0.2232

0.5032

194.16

186.20

69.29

0.2431

0.5245

189.39

180.15

78.38

0.2423

0.5185

C3

209.88

189.10

41.30

0.2648

0.5368

202.51

180.43

50.63

0.2646

0.5305

C4

221.93

183.00

7.42

0.2970

0.5510

212.04

177.56

12.38

0.2912

0.5487

49

C5

633.05

647.00

323.02

0.2239

0.5150

635.20

635.92

340.53

0.2269

0.5112

50

C6

648.79

621.50

181.47

0.2468

0.5319

654.88

626.96

199.48

0.2458

0.5294

51

C7

682.13

610.00

92.34

0.2699

0.5431

677.16

607.72

109.55

0.2676

0.5404

52

C8

742.59

592.10

13.75

0.3073

0.5513

706.87

564.46

19.64

0.3062

0.5502

53

C9

963.61

1042.00

697.37

0.2063

0.5019

950.18

998.07

709.28

0.2106

0.4977

54

C10

960.63

1018.00

582.37

0.2137

0.5096

942.98

995.44

599.42

0.2134

0.5069

55

C11

973.57

1002.00

475.51

0.2234

0.5174

968.29

983.64

490.16

0.2253

0.5149

56

C12

1054.88

995.20

227.79

0.2532

0.5374

1029.45

983.47

250.09

0.2491

0.5354

57

C13

1371.93

1663.00

1498.88

0.1781

0.4857

1336.75

1649.87

1407.92

0.1764

0.4899

58

C14

1458.92

1449.00

1407.14

0.2129

0.4757

1402.82

1384.45

1351.37

0.2140

0.4751

59

C15

1703.74

1909.00

989.48

0.2046

0.5158

1640.46

1848.88

961.51

0.2034

0.5158

60

C16

1102.99

1233.00

955.53

0.1964

0.4940

1088.48

1194.15

943.90

0.1994

0.4923

61

C17

1387.87

1386.00

959.97

0.2215

0.4978

1340.95

1331.81

940.15

0.2222

0.4966

62

C18

1377.55

1709.00

1051.47

0.1827

0.5099

1327.24

1613.51

1000.99

0.1861

0.5089

63

C19

1226.63

1365.00

1466.04

0.1880

0.4707

1133.42

1253.52

1290.49

0.1904

0.4739

64

C20

309.09

241.30

39.59

0.3055

0.5366

296.64

233.25

54.21

0.2998

0.5304

65

C21

519.28

517.20

265.63

0.2289

0.5130

496.30

486.50

263.16

0.2313

0.5101

66

C22

629.10

567.00

254.77

0.2542

0.5155

586.77

526.34

255.99

0.2537

0.5121

67

D1

121.49

131.20

76.63

0.2095

0.5091

123.57

130.05

84.46

0.2123

0.5028

68

D2

123.00

132.90

49.69

0.2172

0.5279

124.15

131.90

58.35

0.2180

0.5212

69

D3

124.49

136.40

30.18

0.2202

0.5429

127.44

135.97

37.48

0.2236

0.5369

70

D4

124.64

136.90

7.84

0.2264

0.5596

122.64

132.61

10.14

0.2290

0.5571

71

D5

486.96

539.00

271.90

0.2075

0.5167

507.36

553.00

287.39

0.2100

0.5150

72

D6

476.63

532.10

149.27

0.2141

0.5377

479.92

537.68

162.01

0.2126

0.5358

73

D7

453.86

511.60

73.12

0.2175

0.5516

445.41

502.70

79.18

0.2167

0.5502

74

D8

453.52

518.20

17.21

0.2191

0.5634

433.33

489.29

7.30

0.2224

0.5650

75

D9

1111.98

1241.00

613.08

0.2062

0.5179

1081.20

1189.82

627.83

0.2078

0.5145

76

D10

1086.73

1223.00

346.66

0.2123

0.5377

1078.51

1198.42

370.50

0.2139

0.5348

77

D11

1048.63

1185.00

171.48

0.2169

0.5515

1065.41

1188.01

176.34

0.2195

0.5507

78

D12

1060.35

1212.00

67.87

0.2181

0.5610

1019.54

1161.55

47.54

0.2194

0.5625

79

D13

1304.36

1623.00

1522.87

0.1727

0.4834

1288.30

1618.15

1480.31

0.1718

0.4854

80

D14

1387.78

1315.00

1386.15

0.2197

0.4683

1343.14

1272.77

1329.41

0.2200

0.4690

2

]

1601.00

Table A.3: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 41-80).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

81

D15

1689.15

82

D16

83

D17

84

D18

85 86

Y[ cd/m

187

Canon 350D measurements

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

1897.00

798.97

0.2076

0.5247

1691.36

1923.33

795.53

0.2055

0.5257

840.61

936.20

712.42

0.1975

0.4950

861.79

933.98

721.79

0.2023

0.4934

1225.97

1153.00

726.33

0.2369

0.5013

1219.78

1136.97

750.30

0.2377

0.4985

1259.78

1616.00

842.58

0.1798

0.5189

1212.71

1530.33

831.40

0.1819

0.5166

D19

1036.21

1149.00

1395.34

0.1846

0.4605

977.00

1085.54

1290.35

0.1849

0.4623

D20

83.71

87.95

38.04

0.2207

0.5218

87.66

87.92

47.72

0.2263

0.5106

87

D21

754.43

750.40

43.67

0.2485

0.5562

702.43

695.24

46.44

0.2493

0.5552

88

D22

981.35

1142.00

1369.00

0.1767

0.4626

877.69

1012.16

1230.30

0.1777

0.4612

89

E1

166.93

194.60

111.49

0.1952

0.5120

170.41

193.09

123.02

0.1984

0.5058

90

E2

156.30

197.00

73.29

0.1877

0.5322

158.53

191.51

86.67

0.1927

0.5237

91

E3

142.09

198.20

46.45

0.1746

0.5481

145.52

194.63

53.74

0.1804

0.5430

92

E4

131.04

207.00

12.33

0.1601

0.5692

129.32

201.89

7.33

0.1627

0.5714

93

E5

695.18

835.70

429.02

0.1915

0.5181

724.04

847.47

458.36

0.1955

0.5150

94

E6

613.05

804.10

266.13

0.1820

0.5371

636.36

814.45

288.34

0.1856

0.5343

95

E7

559.20

801.90

151.07

0.1715

0.5534

565.34

806.92

145.84

0.1725

0.5541

96

E8

507.96

787.20

81.79

0.1618

0.5640

500.57

778.51

63.27

0.1619

0.5665

97

E9

1102.24

1277.00

815.66

0.1942

0.5062

1095.99

1269.32

807.86

0.1943

0.5064

98

E10

1024.48

1247.00

618.44

0.1899

0.5199

1035.48

1240.58

659.16

0.1916

0.5164

99

E11

956.07

1213.00

485.20

0.1856

0.5298

989.63

1258.73

515.00

0.1848

0.5290

100

E12

915.95

1239.00

343.54

0.1784

0.5431

914.79

1226.43

352.79

0.1796

0.5419

101

E13

1235.53

1577.00

1531.82

0.1676

0.4813

1240.89

1599.82

1525.77

0.1665

0.4829

102

E14

1283.05

1152.00

1319.84

0.2279

0.4603

1281.34

1147.15

1273.59

0.2297

0.4628

103

E15

1683.11

1885.00

655.51

0.2109

0.5314

1698.82

1909.29

690.20

0.2097

0.5302

104

E16

634.64

706.20

542.36

0.1975

0.4944

656.85

718.90

572.47

0.1997

0.4917

105

E17

1075.93

945.60

559.26

0.2541

0.5025

1080.41

954.86

573.35

0.2524

0.5019

106

E18

1141.92

1508.00

684.15

0.1769

0.5258

1136.84

1484.65

696.13

0.1784

0.5241

107

E19

862.38

949.40

1329.85

0.1807

0.4475

836.24

888.61

1298.54

0.1852

0.4428

108

E20

198.26

249.80

47.47

0.1940

0.5500

193.42

236.08

55.18

0.1984

0.5448

109

E21

622.21

790.20

166.41

0.1918

0.5481

598.15

751.87

168.19

0.1933

0.5466

110

E22

246.03

498.90

339.87

0.1125

0.5132

254.58

482.17

344.01

0.1195

0.5094

111

F1

112.89

143.10

103.19

0.1758

0.5013

121.64

145.14

118.25

0.1834

0.4923

112

F2

95.31

143.30

86.56

0.1522

0.5150

107.49

148.67

100.25

0.1630

0.5072

113

F3

80.48

144.30

79.26

0.1297

0.5231

92.86

150.92

94.60

0.1407

0.5144

114

F4

60.92

143.00

62.22

0.1018

0.5379

74.92

152.21

74.99

0.1160

0.5303

115

F5

338.96

446.50

304.10

0.1706

0.5056

355.91

456.71

321.66

0.1742

0.5030

116

F6

279.85

450.80

257.52

0.1432

0.5192

292.35

448.31

274.59

0.1491

0.5146

117

F7

223.50

438.80

209.72

0.1202

0.5312

242.63

445.39

231.45

0.1274

0.5262

118

F8

195.05

415.00

190.33

0.1116

0.5343

223.23

438.11

208.86

0.1203

0.5313

119

F9

763.00

930.40

671.02

0.1824

0.5005

781.09

922.97

712.24

0.1864

0.4956

120

F10

671.47

912.20

602.81

0.1662

0.5079

684.98

907.85

628.39

0.1693

0.5047

2

]

Table A.4: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 81-120).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

121

F11

592.91

122

F12

534.95

123

F13

1178.32

124

F14

1185.38

125

F15

126

F16

127

F17

128

F18

129

F19

694.75

130

F20

131

F21

132

F22

269.93

133

G1

134

G2

135

G3

136

G4

137

G5

138

G6

139

G7

140

Y[ cd/m

188

Canon 350D measurements

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

531.76

0.1521

0.5158

622.86

900.07

566.22

0.1575

0.5120

908.50

494.07

0.1368

0.5226

565.07

923.74

510.28

0.1417

0.5212

1536.00

1548.79

0.1633

0.4789

1200.79

1560.33

1588.86

0.1635

0.4781

1007.00

1264.54

0.2361

0.4513

1194.98

1006.96

1247.57

0.2385

0.4522

1696.57

1896.00

536.35

0.2138

0.5375

1708.62

1903.50

583.49

0.2135

0.5352

461.95

510.20

394.57

0.1987

0.4938

483.94

525.73

414.17

0.2014

0.4922

936.69

763.20

410.73

0.2752

0.5044

950.98

785.90

449.81

0.2700

0.5020

1048.87

1437.00

570.28

0.1725

0.5319

1039.24

1394.73

591.17

0.1752

0.5289

757.00

1243.16

0.1761

0.4318

664.59

724.49

1204.24

0.1755

0.4305

51.06

99.19

25.21

0.1265

0.5529

57.06

99.10

32.55

0.1391

0.5434

154.58

330.70

58.45

0.1169

0.5626

155.28

316.83

60.11

0.1221

0.5604

522.30

77.40

0.1295

0.5639

252.66

478.50

74.16

0.1321

0.5627

80.90

99.24

87.57

0.1766

0.4875

92.68

105.25

104.84

0.1867

0.4770

72.62

102.00

93.73

0.1542

0.4873

83.81

108.39

108.96

0.1646

0.4790

61.55

98.86

97.24

0.1341

0.4846

73.93

106.21

111.76

0.1477

0.4774

45.36

99.00

107.80

0.0979

0.4806

59.85

108.63

123.62

0.1162

0.4746

351.40

449.90

382.65

0.1704

0.4909

383.97

475.87

415.95

0.1751

0.4884

296.98

453.40

406.54

0.1428

0.4906

317.34

465.07

426.65

0.1481

0.4882

238.79

430.10

412.58

0.1205

0.4883

265.97

444.93

441.52

0.1287

0.4845

G8

210.96

416.50

415.35

0.1095

0.4865

243.13

449.79

434.12

0.1173

0.4882

141

G9

780.10

948.10

768.12

0.1803

0.4931

799.67

954.41

793.48

0.1828

0.4909

142

G10

693.74

936.00

800.17

0.1620

0.4916

709.41

929.82

840.02

0.1652

0.4872

143

G11

619.27

916.90

816.67

0.1472

0.4905

650.70

944.52

852.65

0.1498

0.4892

144

G12

551.84

921.90

847.84

0.1304

0.4903

586.80

936.04

881.49

0.1359

0.4878

145

G13

1126.68

1500.00

1567.95

0.1591

0.4765

1156.75

1563.62

1606.01

0.1572

0.4782

146

G14

1099.24

882.90

1204.07

0.2449

0.4426

1100.98

889.21

1197.54

0.2442

0.4438

147

G15

1705.58

1899.00

441.87

0.2165

0.5423

1727.77

1930.67

471.30

0.2153

0.5413

148

G16

317.72

347.80

277.54

0.1996

0.4916

341.25

367.02

301.73

0.2022

0.4892

149

G17

803.82

603.50

283.15

0.3003

0.5073

858.41

661.52

325.43

0.2920

0.5064

150

G18

961.86

1357.00

461.88

0.1695

0.5380

978.34

1350.04

482.19

0.1726

0.5358

151

G19

560.46

599.10

1167.94

0.1718

0.4131

542.10

576.68

1138.06

0.1720

0.4117

152

G20

91.73

194.40

74.01

0.1136

0.5417

107.05

199.72

89.56

0.1270

0.5331

153

G21

160.71

281.90

204.10

0.1285

0.5073

174.85

286.56

212.05

0.1369

0.5048

154

G22

43.63

57.57

112.89

0.1401

0.4159

48.60

59.87

115.41

0.1504

0.4168

155

H1

83.92

98.83

104.05

0.1787

0.4735

94.37

105.13

120.30

0.1857

0.4656

156

H2

76.46

99.52

129.49

0.1562

0.4575

89.08

107.10

146.45

0.1669

0.4515

157

H3

70.79

100.60

160.63

0.1373

0.4392

82.69

107.50

173.53

0.1493

0.4366

158

H4

57.29

97.82

216.68

0.1054

0.4048

71.67

105.60

237.61

0.1210

0.4013

159

H5

291.72

354.90

382.37

0.1726

0.4723

321.18

375.05

415.57

0.1786

0.4692

160

H6

265.90

356.40

473.30

0.1513

0.4562

284.66

372.42

489.74

0.1551

0.4566

2

]

893.40

Table A.5: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 121-160).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

161

H7

230.30

162

H8

163

H9

164

Y[ cd/m

189

Canon 350D measurements

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

346.60

574.60

0.1288

0.4361

253.25

360.26

609.16

0.1353

0.4332

205.73

337.30

662.90

0.1134

0.4185

235.38

356.66

695.37

0.1227

0.4184

824.06

947.40

830.57

0.1881

0.4865

862.10

974.58

877.41

0.1904

0.4842

H10

784.93

944.00

940.00

0.1767

0.4782

808.49

964.29

950.66

0.1784

0.4788

165

H11

734.91

919.10

1019.35

0.1672

0.4705

754.51

924.36

1048.25

0.1699

0.4683

166

H12

669.74

934.10

1340.06

0.1432

0.4495

682.44

943.45

1349.61

0.1446

0.4497

167

H13

1069.91

1451.00

1576.80

0.1553

0.4737

1118.34

1534.94

1654.95

0.1537

0.4746

168

H14

1011.17

768.10

1145.72

0.2533

0.4329

1031.46

782.48

1179.18

0.2530

0.4319

169

H15

1713.50

1891.00

340.02

0.2204

0.5473

1774.26

1970.11

360.56

0.2190

0.5471

170

H16

209.19

229.20

181.07

0.1997

0.4923

234.14

248.20

202.91

0.2051

0.4892

171

H17

697.70

483.00

186.80

0.3282

0.5112

740.46

532.62

222.56

0.3152

0.5101

172

H18

860.00

1240.00

339.02

0.1680

0.5450

886.76

1244.10

369.30

0.1717

0.5421

173

H19

445.52

466.30

1086.04

0.1666

0.3923

436.78

447.37

1070.55

0.1687

0.3887

174

H20

48.51

74.83

101.78

0.1314

0.4562

58.40

80.36

114.86

0.1452

0.4497

175

H21

140.05

240.30

315.37

0.1194

0.4611

154.78

243.15

333.56

0.1289

0.4556

176

H22

41.19

20.08

48.10

0.3385

0.3713

43.33

23.55

48.35

0.3200

0.3913

177

I1

132.74

142.70

196.45

0.1855

0.4486

145.45

150.52

211.75

0.1915

0.4458

178

I2

136.91

146.10

294.68

0.1705

0.4093

144.85

150.52

298.23

0.1757

0.4108

179

I3

137.41

145.70

431.45

0.1519

0.3625

143.13

144.51

443.51

0.1572

0.3572

180

I4

134.93

141.40

647.85

0.1285

0.3030

132.45

128.08

654.95

0.1318

0.2869

181

I5

429.58

469.50

487.10

0.1923

0.4730

445.20

483.29

523.63

0.1922

0.4694

182

I6

426.45

462.70

627.40

0.1844

0.4502

452.61

472.03

672.86

0.1895

0.4448

183

I7

416.76

449.10

751.36

0.1772

0.4297

433.31

467.80

774.18

0.1774

0.4308

184

I8

404.57

429.40

1032.99

0.1627

0.3886

405.47

433.30

1038.51

0.1619

0.3892

185

I9

881.06

971.30

882.15

0.1947

0.4830

926.56

990.60

938.21

0.1993

0.4793

186

I10

860.63

949.80

967.58

0.1911

0.4746

907.80

979.38

1010.70

0.1949

0.4731

187

I11

848.50

930.50

1038.99

0.1894

0.4672

906.02

978.68

1101.71

0.1918

0.4663

188

I12

853.30

924.00

1323.37

0.1827

0.4451

884.04

951.10

1347.22

0.1842

0.4460

189

I13

1022.52

1410.00

1553.91

0.1524

0.4729

1083.06

1519.05

1592.33

0.1512

0.4773

190

I14

934.79

673.80

1073.73

0.2622

0.4252

960.37

699.42

1082.76

0.2613

0.4282

191

I15

1671.97

1838.00

263.38

0.2227

0.5508

1750.36

1924.76

269.29

0.2228

0.5512

192

I16

126.32

138.70

111.37

0.1989

0.4913

146.51

154.33

132.14

0.2051

0.4860

193

I17

600.47

381.90

115.99

0.3597

0.5148

655.83

434.16

151.66

0.3441

0.5126

194

I18

786.28

1169.00

262.09

0.1646

0.5506

805.83

1165.98

276.49

0.1685

0.5487

195

I19

353.49

361.50

988.59

0.1617

0.3722

341.54

352.01

948.85

0.1613

0.3741

196

I20

199.38

249.00

472.13

0.1490

0.4188

203.04

244.08

474.10

0.1536

0.4155

197

I21

308.62

433.00

769.73

0.1355

0.4276

315.90

425.24

784.04

0.1397

0.4230

198

I22

22.83

23.38

3.34

0.2381

0.5486

26.92

27.41

7.55

0.2337

0.5355

199

J1

44.84

40.54

75.05

0.2043

0.4155

52.54

46.85

84.15

0.2085

0.4184

200

J2

52.62

39.51

132.74

0.2017

0.3408

55.25

42.54

129.87

0.2041

0.3535

2

]

Table A.6: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 161-200).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

201

J3

202

J4

203

J5

204

J6

205 206

X

Canon 350D measurements

Z

u’

v’

37.27

225.62

0.1846

0.2589

Z

u’

v’

32.80

217.08

0.1874

0.2461

69.08

35.27

365.91

0.1629

0.1872

265.59

267.20

316.28

0.2034

0.4605

54.28

19.49

341.07

0.1585

0.1280

291.33

282.80

345.34

0.2092

292.03

272.70

441.92

0.2046

0.4300

0.4570

319.86

291.41

472.28

0.2095

0.4294

J7

305.00

263.50

562.40

0.2052

J8

333.18

253.70

812.46

0.2027

0.3989

308.07

267.16

563.68

0.2052

0.4003

0.3472

326.19

234.90

813.02

0.2075

0.3362

207

J9

759.97

820.60

751.34

208

J10

768.36

810.00

828.78

0.1984

0.4820

804.34

854.33

790.69

0.2012

0.4808

0.1995

0.4732

816.04

851.79

902.29

0.2003

209

J11

772.41

787.00

0.4703

913.88

0.2017

0.4624

815.60

818.59

988.11

0.2032

0.4588

210

J12

818.59

211

J13

1002.69

771.50

1229.72

0.2036

0.4318

833.62

782.34

1247.38

0.2044

0.4317

1406.00

1592.45

0.1493

0.4709

1052.99

1482.59

1631.95

0.1494

212

J14

867.42

0.4734

593.20

1012.07

0.2710

0.4170

912.05

630.23

1047.55

0.2701

0.4199

213

J15

214

J16

1671.11

1813.00

191.89

0.2270

0.5542

1731.78

1893.32

188.06

0.2257

0.5551

66.18

72.04

59.91

0.1996

0.4888

82.77

86.03

74.73

0.2073

215

0.4847

J17

497.63

286.00

61.53

0.4003

0.5177

550.09

338.13

95.39

0.3724

0.5151

216

J18

725.19

1092.00

187.21

0.1642

0.5563

743.39

1087.54

192.53

0.1686

0.5551

217

J19

276.44

272.10

898.80

0.1567

0.3471

265.05

251.48

903.87

0.1571

0.3354

218

J20

37.88

15.55

210.57

0.1678

0.1550

28.97

7.98

191.68

0.1601

0.0992

219

J21

255.08

239.60

544.35

0.1861

0.3934

263.22

237.69

557.16

0.1914

0.3889

220

J22

16.17

26.03

4.46

0.1540

0.5578

20.63

30.06

8.16

0.1664

0.5454

221

K1

73.64

65.53

78.97

0.2277

0.4559

82.25

73.76

90.12

0.2255

0.4550

222

K2

90.37

64.78

109.79

0.2598

0.4190

95.93

71.11

114.70

0.2547

0.4248

223

K3

102.26

59.56

141.44

0.2881

0.3775

105.97

62.86

141.43

0.2877

0.3840

224

K4

122.02

53.94

190.48

0.3248

0.3231

126.41

58.13

179.59

0.3289

0.3404

225

K5

218.69

196.30

230.31

0.2270

0.4584

241.41

214.10

255.12

0.2289

0.4568

226

K6

264.42

196.00

310.93

0.2557

0.4264

280.97

212.31

325.09

0.2531

0.4303

227

K7

296.17

183.40

385.93

0.2817

0.3925

309.65

195.02

392.13

0.2808

0.3979

228

K8

340.63

167.10

512.42

0.3108

0.3430

347.59

174.36

485.53

0.3146

0.3551

229

K9

667.09

674.00

634.29

0.2104

0.4784

728.41

719.45

686.13

0.2146

0.4769

230

K10

700.97

653.10

709.43

0.2221

0.4655

739.83

690.84

760.59

0.2211

0.4645

231

K11

748.80

641.10

816.99

0.2337

0.4502

787.05

671.11

868.82

0.2339

0.4487

232

K12

858.20

620.60

1054.26

0.2575

0.4190

881.03

642.22

1052.65

0.2578

0.4228

233

K13

968.57

1369.00

1580.58

0.1476

0.4695

1022.28

1448.70

1658.80

0.1475

0.4702

234

K14

806.35

529.70

956.03

0.2776

0.4103

831.43

564.50

976.50

0.2720

0.4155

235

K15

1652.55

1761.00

130.42

0.2323

0.5569

1702.16

1818.10

126.47

0.2320

0.5575

236

K16

30.57

33.73

27.86

0.1972

0.4896

42.57

44.27

38.04

0.2075

0.4854

237

K17

411.75

216.40

27.61

0.4403

0.5207

458.80

265.31

53.11

0.3992

0.5193

238

K18

666.77

1006.00

123.98

0.1654

0.5614

683.95

1014.76

119.98

0.1682

0.5615

239

K19

226.41

215.20

832.51

0.1522

0.3254

219.78

205.20

833.96

0.1516

0.3184

240

K20

102.89

58.06

207.69

0.2577

0.3272

101.27

57.67

198.77

0.2592

0.3322

59.79

Y[ cd/m

190

2

]

X 56.18

Y[ cd/m2 ]

Table A.7: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 201-240).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

241

K21

411.55

242

K22

243

L1

244

Y[ cd/m

191

Canon 350D measurements

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

324.90

654.39

0.2271

0.4034

406.68

319.03

649.37

0.2278

0.4021

15.72

31.86

48.76

0.0983

0.4481

24.24

38.31

57.47

0.1257

0.4470

75.29

68.07

65.25

0.2331

0.4741

86.87

79.15

77.53

0.2306

0.4728

L2

93.20

67.66

70.06

0.2828

0.4619

102.06

76.47

79.55

0.2744

0.4626

245

L3

102.08

61.10

71.55

0.3311

0.4459

114.43

71.81

84.61

0.3167

0.4471

246

L4

116.20

53.21

72.67

0.4105

0.4229

131.08

68.03

76.46

0.3797

0.4434

247

L5

223.18

196.80

186.49

0.2390

0.4743

254.42

222.59

217.29

0.2397

0.4719

248

L6

274.74

193.10

202.42

0.2908

0.4599

299.60

214.79

222.51

0.2861

0.4615

249

L7

308.87

176.20

208.09

0.3455

0.4434

333.28

198.60

226.27

0.3340

0.4478

250

L8

344.20

163.30

219.58

0.3988

0.4257

376.84

197.45

225.30

0.3755

0.4427

251

L9

666.78

675.90

556.44

0.2138

0.4876

712.67

716.63

605.43

0.2147

0.4857

252

L10

717.37

662.50

579.61

0.2315

0.4811

772.99

709.17

640.13

0.2319

0.4788

253

L11

754.65

634.20

595.51

0.2504

0.4735

823.16

694.27

656.67

0.2493

0.4731

254

L12

833.94

613.60

623.33

0.2801

0.4638

920.09

685.24

681.89

0.2779

0.4656

255

L13

879.10

1281.00

1530.48

0.1424

0.4670

948.62

1376.78

1614.36

0.1435

0.4686

256

L14

634.14

360.00

768.83

0.3041

0.3885

683.15

396.73

786.15

0.3039

0.3971

257

L15

1580.28

1579.00

41.61

0.2490

0.5597

1647.26

1640.83

9.52

0.2506

0.5618

258

L16

0.91

0.87

0.84

0.2209

0.4751

6.84

6.66

3.79

0.2315

0.5075

259

L17

209.71

92.58

1.15

0.5237

0.5202

240.90

126.38

15.12

0.4416

0.5213

260

L18

540.56

783.50

33.12

0.1745

0.5690

544.67

792.46

0.02

0.1753

0.5737

261

L19

116.62

90.08

580.72

0.1453

0.2526

106.20

74.49

565.06

0.1456

0.2297

262

L20

158.68

85.98

148.30

0.3352

0.4087

169.72

97.57

151.91

0.3250

0.4204

263

L21

416.18

326.20

363.01

0.2602

0.4588

439.78

342.93

381.95

0.2614

0.4586

2

]

264

L22

48.63

28.06

2.01

0.4090

0.5310

54.38

34.67

7.22

0.3650

0.5235

265

Dmin

2119.82

2346.00

1816.99

0.1983

0.4938

2179.67

2556.41

1877.51

0.1889

0.4985

266

N1

1660.37

1848.00

1455.37

0.1968

0.4929

1743.47

2010.82

1550.42

0.1908

0.4950

267

N2

1511.22

1690.00

1347.91

0.1956

0.4922

1616.26

1845.68

1383.88

0.1933

0.4965

268

N3

1361.14

1524.00

1217.15

0.1953

0.4921

1428.66

1626.15

1244.10

0.1934

0.4952

269

N4

1168.89

1299.00

1029.69

0.1969

0.4924

1232.35

1371.09

1122.58

0.1959

0.4903

270

N5

988.05

1098.00

885.53

0.1965

0.4913

1090.82

1202.39

942.53

0.1987

0.4929

271

N6

855.00

950.00

769.53

0.1964

0.4910

940.16

1023.24

866.59

0.1991

0.4876

272

N7

705.68

785.60

640.03

0.1959

0.4907

791.58

857.17

721.40

0.2002

0.4878

273

N8

597.38

663.00

532.48

0.1968

0.4915

668.50

729.19

596.34

0.1996

0.4899

274

N9

498.70

556.10

459.63

0.1952

0.4898

552.69

607.39

519.64

0.1970

0.4871

275

N10

413.79

459.60

385.10

0.1956

0.4888

462.07

504.43

433.62

0.1981

0.4866

276

N11

336.89

372.00

317.60

0.1962

0.4874

370.12

404.84

352.77

0.1974

0.4857

277

N12

263.65

291.40

247.04

0.1962

0.4879

301.92

323.45

277.69

0.2017

0.4863

278

N13

216.97

240.40

206.35

0.1954

0.4871

239.73

254.51

235.83

0.2012

0.4807

279

N14

174.66

194.30

161.92

0.1954

0.4892

199.72

215.84

192.45

0.1990

0.4839

280

N15

135.74

150.10

129.26

0.1957

0.4868

159.53

170.08

153.55

0.2012

0.4827

Table A.8: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 241-280).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

281

N16

101.13

282

N17

283

N18

284

Y[ cd/m

192

Canon 350D measurements

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

111.10

95.53

0.1969

0.4868

117.16

123.88

115.94

0.2017

0.4799

74.20

81.02

69.83

0.1980

0.4864

87.47

92.46

86.23

0.2019

0.4802

52.18

56.92

48.01

0.1988

0.4879

62.74

65.75

61.97

0.2032

0.4792

N19

35.01

38.32

32.24

0.1982

0.4881

41.46

44.40

40.89

0.1998

0.4814

285

N20

20.50

22.47

19.10

0.1977

0.4875

26.74

28.32

26.51

0.2014

0.4799

286

N21

13.02

14.29

12.67

0.1962

0.4846

17.58

18.72

18.05

0.1995

0.4779

287

N22

5.23

5.71

5.06

0.1971

0.4846

9.03

9.50

9.12

0.2018

0.4779

288

Dmax

0.83

0.80

0.79

0.2176

0.4732

3.67

3.78

3.31

0.2086

0.4840

289

A1

0.92

0.88

0.57

0.2325

0.5003

1.68

1.64

1.21

0.2236

0.4939

290

A2

0.97

0.72

0.39

0.2998

0.5008

1.86

1.58

1.11

0.2579

0.4923

291

A3

1.20

0.77

0.38

0.3456

0.4989

2.09

1.62

1.03

0.2845

0.4945

292

A4

1.37

0.66

0.32

0.4481

0.4857

2.36

1.56

0.85

0.3323

0.4961

293

A5

3.62

3.48

2.23

0.2316

0.5010

5.14

4.91

3.41

0.2311

0.4964

294

A6

4.13

3.31

1.96

0.2769

0.4993

5.78

4.81

3.03

0.2659

0.4975

295

A7

4.48

2.98

1.63

0.3314

0.4960

6.37

4.68

2.74

0.3009

0.4967

296

A8

5.19

2.83

1.34

0.4019

0.4930

7.42

4.75

2.45

0.3448

0.4971

297

A9

10.58

11.65

7.83

0.2027

0.5021

13.71

14.68

9.97

0.2079

0.5008

298

A10

11.27

11.67

7.68

0.2153

0.5017

14.03

14.42

9.40

0.2171

0.5020

299

A11

11.80

11.50

7.38

0.2286

0.5014

14.41

14.07

8.87

0.2287

0.5024

300

A12

12.48

10.81

6.34

0.2578

0.5024

15.88

13.95

8.35

0.2538

0.5019

301

A13

27.62

33.27

24.14

0.1844

0.4998

32.10

38.21

26.37

0.1876

0.5025

302

A14

30.43

35.29

24.74

0.1920

0.5010

34.79

39.95

26.02

0.1954

0.5049

303

A15

31.86

37.45

24.29

0.1912

0.5057

35.08

40.98

25.85

0.1929

0.5071

304

A16

32.12

38.27

25.74

0.1880

0.5040

33.54

39.80

25.11

0.1901

0.5074

305

A17

31.28

36.36

24.81

0.1922

0.5026

35.74

40.51

26.43

0.1978

0.5045

306

A18

30.43

36.41

23.30

0.1883

0.5069

34.64

40.87

25.43

0.1913

0.5081

307

A19

30.60

35.67

25.99

0.1902

0.4988

32.90

38.25

26.86

0.1915

0.5009

308

A20

1.95

0.99

0.66

0.4153

0.4744

3.28

2.17

1.49

0.3250

0.4848

309

A21

8.96

7.07

5.08

0.2752

0.4885

11.58

9.30

6.64

0.2708

0.4895

310

A22

19.60

20.59

11.51

0.2160

0.5105

22.64

23.34

13.08

0.2198

0.5099

311

B1

1.20

1.18

0.62

0.2312

0.5116

1.99

2.00

1.24

0.2233

0.5039

312

B2

1.41

1.19

0.41

0.2753

0.5227

2.31

2.08

1.00

0.2533

0.5128

313

B3

1.61

1.21

0.30

0.3117

0.5271

2.60

2.10

0.87

0.2833

0.5148

314

B4

1.92

1.16

0.12

0.3902

0.5305

2.91

2.06

0.52

0.3302

0.5239

315

B5

4.86

4.88

2.50

0.2272

0.5133

6.50

6.54

3.53

0.2257

0.5109

316

B6

5.11

4.48

1.54

0.2657

0.5241

7.08

6.24

2.59

0.2612

0.5179

317

B7

5.49

4.13

0.94

0.3126

0.5290

7.70

6.04

1.89

0.2962

0.5229

318

B8

6.52

3.96

0.46

0.3875

0.5296

8.82

5.94

1.29

0.3465

0.5252

319

B9

12.46

14.01

9.13

0.1994

0.5044

15.64

17.43

10.72

0.2022

0.5073

320

B10

13.35

14.44

8.40

0.2093

0.5093

16.26

17.37

9.88

0.2122

0.5102

2

]

Table A.9: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 281-288 and dark-side – patch index: 289-320).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

Y[ cd/m

321

B11

13.22

13.72

322

B12

14.17

323

B13

27.26

324

B14

325 326

193

Canon 350D measurements

Z

u’

v’

X

7.33

0.2194

0.5123

16.68

17.28

Z

u’

v’

9.01

0.2203

0.5134

13.37

5.35

0.2456

0.5214

18.21

33.52

24.99

0.1802

0.4986

28.82

17.30

7.04

0.2437

0.5210

34.98

25.42

0.1830

27.51

30.31

23.70

0.1989

0.4931

0.4999

29.94

32.45

24.22

0.2032

0.4955

B15

30.06

35.56

19.33

0.1935

B16

24.34

28.76

19.78

0.1890

0.5150

32.55

38.27

20.35

0.1950

0.5159

0.5025

27.17

31.56

21.53

0.1923

0.5026

327

B17

27.69

30.96

20.05

328

B18

27.18

33.70

20.50

0.2006

0.5046

30.12

33.06

20.89

0.2046

0.5054

0.1830

0.5105

29.48

35.45

21.81

0.1881

329

B19

25.33

29.55

0.5091

24.09

0.1873

0.4917

27.33

31.35

25.20

0.1907

0.4922

330

B20

1.71

331

B21

5.51

1.12

0.40

0.3470

0.5114

2.81

2.22

1.11

0.2849

0.5066

4.13

1.40

0.3076

0.5187

7.33

5.73

2.33

0.2925

332

B22

0.5144

11.46

11.15

5.76

0.2339

0.5120

13.91

13.40

7.04

0.2359

0.5109

333 334

C1

2.33

2.58

1.22

0.2085

0.5196

3.36

3.54

2.03

0.2148

0.5093

C2

2.45

2.53

0.78

0.2293

0.5328

3.54

3.58

1.46

0.2298

0.5229

335

C3

2.68

2.59

0.45

0.2500

0.5436

3.76

3.67

0.97

0.2438

0.5350

336

C4

2.85

2.56

0.12

0.2740

0.5537

4.00

3.67

0.46

0.2651

0.5464

337

C5

9.02

9.86

4.20

0.2128

0.5235

11.36

12.17

5.59

0.2157

0.5199

338

C6

9.28

9.54

2.39

0.2327

0.5381

11.94

12.20

3.29

0.2332

0.5361

339

C7

9.56

9.18

1.13

0.2538

0.5484

12.19

11.68

1.97

0.2522

0.5438

340

C8

10.70

9.20

0.27

0.2863

0.5538

13.01

11.25

0.66

0.2832

0.5511

341

C9

14.64

16.77

10.00

0.1977

0.5096

17.89

20.04

12.31

0.2013

0.5075

342

C10

14.74

16.57

8.50

0.2042

0.5164

17.76

19.84

10.06

0.2055

0.5168

343

C11

14.63

16.17

6.92

0.2105

0.5236

18.11

19.55

8.27

0.2155

0.5234

344

C12

15.94

16.18

3.08

0.2380

0.5436

19.48

19.46

4.31

0.2402

0.5400

345

C13

22.28

28.26

22.41

0.1736

0.4954

25.91

32.58

23.98

0.1767

0.4999

346

C14

22.87

23.96

21.22

0.2051

0.4836

26.81

27.87

23.38

0.2082

0.4871

347

C15

28.68

34.55

15.29

0.1935

0.5245

32.45

38.06

16.77

0.1986

0.5240

348

C16

18.04

21.24

14.68

0.1896

0.5022

20.60

23.98

16.07

0.1923

0.5037

349

C17

22.36

23.66

14.76

0.2122

0.5051

25.95

27.01

16.41

0.2161

0.5061

350

C18

23.42

30.35

16.03

0.1778

0.5185

26.57

34.12

17.49

0.1799

0.5197

351

C19

19.95

23.22

22.04

0.1837

0.4811

22.00

25.38

23.19

0.1863

0.4837

352

C20

4.21

3.53

0.39

0.2887

0.5447

5.74

4.99

1.11

0.2734

0.5352

353

C21

7.55

7.99

3.71

0.2180

0.5191

9.68

10.07

4.83

0.2210

0.5172

354

C22

9.20

8.87

3.66

0.2402

0.5210

11.11

10.67

4.49

0.2407

0.5201

355

D1

1.58

1.83

0.84

0.2003

0.5220

2.36

2.63

1.48

0.2041

0.5116

356

D2

1.49

1.68

0.44

0.2128

0.5398

2.38

2.62

1.06

0.2118

0.5258

357

D3

1.55

1.80

0.33

0.2099

0.5484

2.41

2.76

0.78

0.2092

0.5383

358

D4

1.58

1.88

0.16

0.2089

0.5592

2.43

2.81

0.39

0.2123

0.5529

359

D5

6.67

7.84

3.43

0.1983

0.5244

8.42

9.87

4.31

0.1987

0.5244

360

D6

6.42

7.70

1.72

0.2021

0.5453

8.28

9.63

2.49

0.2067

0.5411

2

]

Y[ cd/m2 ]

Table A.10: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 321-360).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

361

D7

6.34

362

D8

363

D9

364

D10

365 366

Y[ cd/m

194

Canon 350D measurements

Z

u’

v’

X

7.63

0.78

0.2060

0.5577

8.07

Z

u’

v’

9.57

1.23

0.2079

0.5545

6.31

7.78

0.26

0.2039

0.5656

16.37

19.53

8.54

0.1955

0.5248

7.97

9.70

0.34

0.2063

0.5651

19.78

22.95

9.89

0.2009

16.83

20.24

4.76

0.2011

0.5442

0.5246

19.59

23.29

5.62

0.2031

0.5433

D11

15.96

19.53

2.26

0.2022

D12

16.82

21.18

0.87

0.1996

0.5568

19.04

23.15

2.72

0.2034

0.5564

0.5654

19.83

24.07

0.75

0.2070

0.5654

367

D13

20.82

26.96

22.25

368

D14

21.44

21.26

20.56

0.1693

0.4932

23.76

30.98

23.81

0.1698

0.4980

0.2133

0.4759

23.75

23.17

21.82

0.2175

369

D15

28.07

33.71

0.4775

12.00

0.1971

0.5325

31.21

36.45

13.32

0.2021

0.5309

370

D16

12.86

371

D17

19.12

15.04

10.19

0.1912

0.5031

15.32

17.91

11.52

0.1924

0.5060

19.01

11.06

0.2266

0.5070

21.45

20.98

12.07

0.2304

372

D18

0.5071

20.69

27.87

12.38

0.1739

0.5271

23.35

30.49

13.22

0.1795

0.5274

373 374

D19

15.93

18.31

20.26

0.1814

0.4690

18.00

20.51

21.91

0.1840

0.4716

D20

1.12

1.25

0.43

0.2117

0.5317

1.89

2.04

0.97

0.2140

0.5187

375

D21

11.04

11.91

0.49

0.2310

0.5607

13.24

14.16

0.61

0.2329

0.5602

376

D22

15.41

18.53

19.83

0.1747

0.4726

16.80

20.01

20.70

0.1773

0.4751

377

E1

2.08

2.57

1.15

0.1887

0.5247

2.97

3.52

1.86

0.1935

0.5163

378

E2

1.89

2.51

0.72

0.1813

0.5417

2.79

3.51

1.32

0.1880

0.5318

379

E3

1.91

2.76

0.51

0.1704

0.5540

2.74

3.70

0.94

0.1792

0.5455

380

E4

1.70

2.78

0.18

0.1548

0.5694

2.49

3.86

0.32

0.1621

0.5663

381

E5

9.54

12.13

5.37

0.1838

0.5259

11.71

14.59

6.28

0.1879

0.5265

382

E6

8.73

12.36

3.25

0.1713

0.5456

10.92

14.64

3.97

0.1802

0.5435

383

E7

8.26

12.42

1.85

0.1651

0.5586

9.88

14.31

2.08

0.1712

0.5581

384

E8

8.02

13.02

0.98

0.1555

0.5681

9.64

15.16

0.92

0.1607

0.5690

385

E9

16.54

20.34

11.32

0.1861

0.5148

19.18

22.82

13.06

0.1914

0.5126

386

E10

15.48

19.93

8.70

0.1818

0.5267

18.12

22.95

9.84

0.1850

0.5271

387

E11

14.47

19.49

6.73

0.1770

0.5364

17.66

23.46

7.58

0.1800

0.5382

388

E12

14.19

20.28

4.57

0.1709

0.5496

17.23

24.10

5.30

0.1746

0.5496

389

E13

21.33

28.22

24.28

0.1649

0.4908

22.24

29.29

22.99

0.1677

0.4969

390

E14

18.73

17.68

19.01

0.2197

0.4667

21.63

20.42

20.19

0.2227

0.4731

391

E15

27.24

33.07

9.64

0.1973

0.5390

31.20

36.17

10.72

0.2060

0.5373

392

E16

9.31

10.69

7.49

0.1938

0.5008

11.09

12.63

8.49

0.1963

0.5029

393

E17

15.76

14.67

7.84

0.2431

0.5091

18.24

16.86

9.32

0.2440

0.5073

394

E18

19.11

26.15

10.09

0.1731

0.5329

20.74

28.22

10.84

0.1741

0.5329

395

E19

12.94

14.59

18.96

0.1793

0.4549

14.20

15.77

20.44

0.1821

0.4547

396

E20

2.69

3.55

0.51

0.1872

0.5559

3.80

4.78

0.99

0.1937

0.5483

397

E21

9.06

12.12

2.00

0.1841

0.5541

11.28

14.52

2.62

0.1905

0.5515

398

E22

3.91

7.63

4.46

0.1187

0.5213

5.03

9.22

5.27

0.1263

0.5214

399

F1

1.31

1.74

1.02

0.1720

0.5139

2.06

2.57

1.67

0.1809

0.5071

400

F2

1.15

1.76

0.91

0.1519

0.5231

1.93

2.71

1.50

0.1638

0.5180

2

]

Y[ cd/m2 ]

Table A.11: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 361-400).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

401

F3

1.02

402

F4

403

F5

404

Y[ cd/m

195

Canon 350D measurements

Z

u’

v’

X

Z

u’

v’

1.83

0.79

0.1323

0.5340

1.78

2.78

1.39

0.1490

0.5252

0.78

1.75

0.57

0.1086

0.5480

1.58

2.84

1.14

0.1328

0.5372

4.35

5.95

3.51

0.1671

0.5143

5.84

7.58

4.69

0.1748

0.5106

F6

3.44

5.70

2.74

0.1416

0.5280

5.10

7.84

3.91

0.1517

0.5249

405

F7

2.93

5.77

2.26

0.1218

0.5395

4.41

7.91

3.27

0.1327

0.5358

406

F8

2.72

5.71

2.05

0.1151

0.5437

4.08

7.68

3.03

0.1272

0.5384

407

F9

10.34

13.19

8.37

0.1773

0.5088

12.34

15.37

9.81

0.1813

0.5079

408

F10

9.76

13.84

7.80

0.1622

0.5174

11.49

15.60

8.98

0.1688

0.5154

409

F11

8.81

13.67

6.98

0.1501

0.5240

10.71

15.97

8.31

0.1556

0.5223

410

F12

8.16

14.06

6.30

0.1372

0.5318

10.13

16.80

7.48

0.1424

0.5313

411

F13

18.81

25.35

22.29

0.1615

0.4897

20.28

26.90

23.38

0.1642

0.4902

412

F14

16.76

14.74

17.73

0.2303

0.4558

18.92

16.66

19.24

0.2318

0.4591

413

F15

26.10

31.53

7.61

0.2000

0.5437

29.31

34.87

8.81

0.2026

0.5422

414

F16

6.33

7.29

5.21

0.1928

0.4997

8.07

8.96

6.58

0.1991

0.4971

415

F17

12.86

11.13

5.52

0.2620

0.5101

15.21

13.29

6.58

0.2596

0.5105

416

F18

16.44

23.32

7.64

0.1690

0.5393

18.32

25.49

8.01

0.1725

0.5402

417

F19

9.85

10.84

16.83

0.1767

0.4376

11.41

12.53

18.14

0.1798

0.4444

418

F20

0.73

1.34

0.29

0.1346

0.5558

1.38

2.14

0.73

0.1550

0.5399

419

F21

2.18

4.54

0.59

0.1210

0.5671

3.20

6.13

0.95

0.1304

0.5630

420

F22

4.34

8.29

0.88

0.1322

0.5681

5.04

9.33

0.98

0.1362

0.5676

421

G1

0.86

1.12

0.83

0.1707

0.5002

1.53

1.81

1.41

0.1861

0.4946

422

G2

0.83

1.20

0.90

0.1542

0.5016

1.46

1.88

1.52

0.1708

0.4943

423

G3

0.72

1.17

0.98

0.1358

0.4965

1.38

1.93

1.62

0.1573

0.4936

424

G4

0.51

1.10

1.03

0.1015

0.4925

1.24

2.01

1.76

0.1352

0.4935

425

G5

4.33

5.71

4.39

0.1679

0.4982

5.72

7.34

5.53

0.1729

0.4989

426

G6

3.62

5.62

4.52

0.1427

0.4984

5.04

7.38

5.85

0.1512

0.4983

427

G7

3.10

5.53

4.82

0.1234

0.4952

4.53

7.52

6.22

0.1332

0.4977

428

G8

2.78

5.37

4.79

0.1138

0.4947

4.13

7.28

6.21

0.1252

0.4966

429

G9

10.59

13.37

9.68

0.1764

0.5010

12.25

15.21

11.07

0.1791

0.5003

430

G10

9.98

13.79

10.39

0.1610

0.5004

11.33

15.46

11.16

0.1638

0.5028

431

G11

8.92

13.44

10.59

0.1473

0.4992

10.63

15.62

11.88

0.1516

0.5010

432

G12

8.27

13.93

10.90

0.1324

0.5016

10.01

16.37

12.49

0.1366

0.5028

433

G13

16.49

22.82

20.81

0.1566

0.4876

18.84

25.72

21.90

0.1602

0.4921

434

G14

14.73

12.17

16.40

0.2390

0.4444

16.53

13.87

17.67

0.2382

0.4497

435

G15

26.57

31.78

6.05

0.2038

0.5485

28.15

33.22

6.14

0.2066

0.5487

436

G16

3.95

4.47

3.20

0.1960

0.4991

5.34

5.89

4.40

0.1999

0.4959

437

G17

10.49

8.27

3.55

0.2890

0.5126

12.75

10.13

4.65

0.2855

0.5103

438

G18

14.88

21.92

5.84

0.1648

0.5462

16.67

23.74

6.24

0.1704

0.5458

439

G19

7.53

7.94

15.09

0.1752

0.4157

8.52

9.19

16.14

0.1750

0.4246

440

G20

1.23

2.52

0.74

0.1193

0.5498

2.07

3.61

1.33

0.1372

0.5396

2

]

Y[ cd/m2 ]

Table A.12: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 401-440).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

441

G21

2.15

442

G22

443

H1

444

Y[ cd/m

196

Canon 350D measurements

Z

u’

v’

X

Z

u’

v’

3.72

2.36

0.1322

0.5148

3.06

4.83

3.24

0.1436

0.5101

0.49

0.63

1.25

0.1432

0.4142

1.09

1.29

1.88

0.1675

0.4450

0.95

1.16

1.07

0.1763

0.4842

1.53

1.77

1.60

0.1858

0.4845

H2

0.80

1.06

1.26

0.1562

0.4658

1.45

1.76

1.95

0.1719

0.4704

445

H3

0.80

1.14

1.69

0.1393

0.4467

1.44

1.83

2.34

0.1610

0.4586

446

H4

0.60

0.98

2.07

0.1116

0.4100

1.35

1.82

3.26

0.1406

0.4260

447

H5

3.49

4.36

4.31

0.1706

0.4796

4.61

5.51

5.41

0.1781

0.4792

448

H6

3.09

4.18

5.12

0.1523

0.4636

4.26

5.59

6.63

0.1579

0.4658

449

H7

2.73

4.01

6.32

0.1334

0.4410

3.89

5.38

8.33

0.1418

0.4420

450

H8

2.40

3.75

7.05

0.1203

0.4229

3.69

5.47

9.18

0.1302

0.4347

451

H9

10.60

12.70

10.02

0.1834

0.4945

12.54

14.74

11.59

0.1868

0.4943

452

H10

10.39

12.86

11.71

0.1743

0.4854

12.12

14.66

12.96

0.1789

0.4871

453

H11

9.65

12.41

12.53

0.1654

0.4786

11.66

14.82

14.19

0.1686

0.4823

454

H12

9.16

12.79

16.77

0.1458

0.4580

10.84

15.11

18.38

0.1482

0.4647

455

H13

15.78

22.09

20.94

0.1540

0.4850

17.39

24.03

21.66

0.1570

0.4884

456

H14

13.50

10.49

15.46

0.2486

0.4346

15.38

12.09

16.35

0.2504

0.4427

457

H15

24.98

29.71

4.34

0.2066

0.5529

26.87

31.37

4.63

0.2102

0.5522

458

H16

2.44

2.76

1.96

0.1963

0.4996

3.64

4.03

2.83

0.2008

0.4997

459

H17

8.81

6.34

2.16

0.3192

0.5169

10.33

7.83

3.11

0.3012

0.5140

460

H18

12.83

19.42

4.09

0.1622

0.5524

14.41

21.19

4.35

0.1669

0.5523

461

H19

5.58

5.82

13.02

0.1692

0.3970

6.63

6.88

14.44

0.1731

0.4042

462

H20

0.54

0.82

0.99

0.1366

0.4668

1.19

1.54

1.76

0.1610

0.4683

463

H21

1.76

2.88

3.58

0.1264

0.4654

2.58

3.90

4.74

0.1370

0.4661

464

H22

0.48

0.24

0.58

0.3299

0.3711

0.92

0.72

0.93

0.2541

0.4470

465

I1

1.43

1.56

2.08

0.1841

0.4519

2.02

2.17

2.57

0.1910

0.4620

466

I2

1.46

1.57

3.06

0.1708

0.4133

2.09

2.22

3.88

0.1778

0.4249

467

I3

1.53

1.60

4.57

0.1560

0.3670

2.10

2.19

5.48

0.1635

0.3834

468

I4

1.47

1.41

6.77

0.1370

0.2956

2.04

1.97

8.38

0.1435

0.3128

469

I5

4.84

5.40

5.28

0.1904

0.4780

6.05

6.68

6.52

0.1924

0.4778

470

I6

4.80

5.30

6.81

0.1833

0.4555

5.90

6.52

8.10

0.1844

0.4585

471

I7

4.65

5.08

8.09

0.1769

0.4349

5.85

6.35

9.69

0.1796

0.4391

472

I8

4.83

5.04

11.97

0.1661

0.3899

5.60

5.89

13.64

0.1660

0.3929

473

I9

10.60

12.21

10.14

0.1891

0.4902

12.55

14.11

11.82

0.1934

0.4891

474

I10

10.59

11.92

11.64

0.1888

0.4783

12.35

13.91

12.96

0.1900

0.4817

475

I11

10.69

12.08

12.72

0.1859

0.4726

12.49

13.89

14.32

0.1894

0.4739

476

I12

10.45

11.57

16.11

0.1799

0.4482

12.51

13.84

17.94

0.1828

0.4547

477

I13

14.50

20.37

19.80

0.1529

0.4831

16.21

22.74

21.19

0.1541

0.4863

478

I14

11.45

8.42

13.49

0.2570

0.4252

13.30

10.06

14.67

0.2555

0.4349

479

I15

24.20

28.51

3.07

0.2100

0.5565

25.57

30.02

3.20

0.2107

0.5565

480

I16

1.38

1.58

1.09

0.1947

0.5016

2.29

2.52

1.76

0.2017

0.5000

2

]

Y[ cd/m2 ]

Table A.13: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 441-480).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

Y[ cd/m

481

I17

7.20

4.69

482

I18

11.59

483

I19

4.26

484

I20

485 486

197

Canon 350D measurements

Z

u’

v’

X

1.28

0.3539

0.5186

8.81

6.13

Z

u’

v’

1.96

0.3303

0.5174

17.77

3.01

0.1614

0.5569

12.71

4.20

11.51

0.1674

0.3714

5.11

19.10

3.01

0.1649

0.5577

4.97

13.38

0.1705

2.41

2.94

5.43

0.1535

0.4213

0.3733

3.14

3.68

6.62

0.1604

0.4234

I21

3.66

4.90

8.49

0.1426

I22

0.27

0.26

0.07

0.2466

0.4297

4.62

6.19

9.78

0.1455

0.4394

0.5342

0.68

0.76

0.37

0.2077

0.5182

487

J1

0.47

0.44

0.70

488

J2

0.47

0.34

1.24

0.2050

0.4318

0.87

0.85

1.13

0.2064

0.4493

0.2024

0.3294

0.92

0.82

1.71

0.2011

489

J3

0.60

0.34

0.4016

2.30

0.1905

0.2429

0.94

0.66

2.80

0.1948

0.3085

490

J4

0.72

491

J5

2.88

0.30

3.83

0.1724

0.1616

1.01

0.57

4.66

0.1721

0.2185

2.96

3.22

0.2023

0.4679

3.76

3.86

4.11

0.2032

492

J6

0.4694

3.22

3.03

4.72

0.2050

0.4340

3.98

3.85

5.69

0.2020

0.4396

493 494

J7

3.27

2.87

6.05

0.2029

0.4007

4.08

3.62

7.01

0.2053

0.4104

J8

3.59

2.68

9.14

0.2017

0.3387

4.26

3.28

10.34

0.2015

0.3495

495

J9

8.76

9.76

8.28

0.1947

0.4880

10.15

11.42

9.08

0.1946

0.4925

496

J10

8.89

9.68

9.43

0.1950

0.4777

10.42

11.15

10.60

0.1990

0.4791

497

J11

8.89

9.37

10.66

0.1960

0.4648

10.76

11.18

11.88

0.2010

0.4700

498

J12

9.51

9.10

14.43

0.2010

0.4326

11.11

10.65

15.64

0.2040

0.4401

499

J13

13.78

19.75

19.70

0.1493

0.4815

14.93

21.31

20.16

0.1512

0.4855

500

J14

10.62

7.33

12.79

0.2673

0.4151

11.88

8.56

13.44

0.2632

0.4265

501

J15

23.87

27.76

2.16

0.2137

0.5592

24.77

28.40

2.09

0.2168

0.5592

502

J16

0.71

0.79

0.49

0.2024

0.5068

1.37

1.50

0.99

0.2038

0.5028

503

J17

5.99

3.47

0.67

0.3990

0.5201

6.92

4.49

1.20

0.3560

0.5189

504

J18

10.32

16.03

2.02

0.1607

0.5617

11.36

17.18

1.89

0.1654

0.5628

505

J19

3.28

3.07

10.27

0.1637

0.3448

3.79

3.70

11.23

0.1633

0.3581

506

J20

0.48

0.19

2.33

0.1860

0.1657

0.72

0.46

2.79

0.1818

0.2581

507

J21

2.91

2.69

6.17

0.1884

0.3919

3.52

3.35

6.84

0.1895

0.4058

508

J22

0.16

0.26

0.08

0.1488

0.5442

0.60

0.76

0.34

0.1823

0.5253

509

K1

0.70

0.63

0.67

0.2303

0.4663

1.18

1.13

1.16

0.2183

0.4710

510

K2

0.84

0.58

1.02

0.2667

0.4143

1.32

1.07

1.44

0.2423

0.4442

511

K3

0.99

0.57

1.34

0.2920

0.3783

1.44

1.02

1.79

0.2606

0.4146

512

K4

1.28

0.56

2.08

0.3216

0.3166

1.66

0.93

2.38

0.2924

0.3675

513

K5

2.24

2.05

2.32

0.2243

0.4618

3.00

2.80

3.01

0.2215

0.4666

514

K6

2.51

1.89

3.08

0.2504

0.4242

3.44

2.70

4.02

0.2457

0.4340

515

K7

2.97

1.84

4.16

0.2760

0.3847

3.72

2.52

4.77

0.2670

0.4061

516

K8

3.61

1.71

6.00

0.3055

0.3256

4.24

2.30

6.31

0.2941

0.3591

517

K9

7.26

7.59

6.87

0.2049

0.4820

8.66

8.86

8.02

0.2093

0.4814

518

K10

7.64

7.35

7.96

0.2156

0.4666

9.00

8.62

8.83

0.2186

0.4707

519

K11

8.31

7.34

9.15

0.2279

0.4529

9.76

8.67

10.17

0.2294

0.4581

520

K12

9.61

7.02

12.34

0.2530

0.4158

10.99

8.21

12.91

0.2542

0.4275

2

]

Y[ cd/m2 ]

Table A.14: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 481-520).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

521

K13

12.91

522

K14

523

K15

524

K16

0.32

525

K17

526

K18

527 528

Y[ cd/m

198

Canon 350D measurements

Z

u’

v’

X

18.78

19.09

0.1468

0.4803

13.92

Z

u’

v’

20.02

19.52

0.1493

0.4834

9.91

6.47

11.91

0.2778

0.4081

22.82

26.08

1.38

0.2183

0.5613

10.63

7.36

12.04

0.2706

0.4215

23.42

26.71

1.10

0.2192

0.35

0.25

0.2025

0.4984

0.5625

0.83

0.93

0.61

0.2006

0.5040

4.58

2.41

0.28

0.4407

0.5218

9.22

14.40

1.24

0.1611

0.5661

5.40

3.32

0.74

0.3757

0.5204

10.01

15.14

1.07

0.1667

0.5670

K19

2.59

2.32

9.23

0.1592

0.3208

K20

1.02

0.52

2.20

0.2646

0.3035

2.94

2.69

10.03

0.1604

0.3300

1.45

0.96

2.63

0.2428

529

K21

4.49

3.58

7.17

0.2253

0.3645

0.4043

5.05

4.11

7.76

0.2241

0.4112

530

K22

0.18

0.30

0.34

531

L1

0.68

0.64

0.52

0.1263

0.4737

0.60

0.79

0.80

0.1611

0.4792

0.2297

0.4865

1.15

1.11

0.96

0.2221

532

L2

0.78

0.57

0.4827

0.56

0.2834

0.4659

1.25

1.04

0.88

0.2569

0.4799

533

L3

0.92

534

L4

1.17

0.56

0.65

0.3265

0.4472

1.42

1.06

1.01

0.2791

0.4683

0.54

0.74

0.4073

0.4230

1.69

1.07

1.06

0.3241

535

L5

0.4600

2.17

1.98

1.77

0.2335

0.4793

2.93

2.75

2.35

0.2287

0.4831

536 537

L6

2.52

1.79

1.83

0.2892

0.4621

3.42

2.65

2.51

0.2697

0.4704

L7

2.91

1.64

2.08

0.3449

0.4373

3.76

2.45

2.60

0.3111

0.4564

538

L8

3.60

1.65

2.28

0.4092

0.4220

4.48

2.50

2.75

0.3564

0.4482

539

L9

6.95

7.32

5.67

0.2078

0.4925

8.31

8.64

6.73

0.2104

0.4918

540

L10

7.61

7.34

6.18

0.2234

0.4848

8.69

8.34

6.80

0.2255

0.4868

541

L11

8.28

7.20

6.52

0.2438

0.4770

9.45

8.37

7.38

0.2406

0.4794

542

L12

9.04

6.80

6.72

0.2756

0.4665

10.63

8.24

7.63

0.2706

0.4721

543

L13

11.65

17.04

18.02

0.1450

0.4773

12.53

18.45

17.93

0.1461

0.4840

544

L14

7.19

4.01

9.39

0.3011

0.3779

8.07

4.87

9.43

0.2953

0.4005

545

L15

20.90

22.90

0.48

0.2285

0.5634

19.97

22.02

0.15

0.2278

0.5651

546

L16

0.03

0.03

0.06

0.1818

0.4091

0.46

0.50

0.28

0.2070

0.5116

547

L17

2.43

1.08

0.10

0.5135

0.5135

3.20

1.88

0.39

0.3940

0.5194

548

L18

7.48

11.41

0.39

0.1664

0.5711

7.74

11.64

0.04

0.1697

0.5741

549

L19

1.28

0.88

5.97

0.1581

0.2445

1.51

1.12

6.82

0.1562

0.2597

550

L20

1.61

0.84

1.58

0.3398

0.3989

2.05

1.30

1.92

0.3000

0.4290

551

L21

4.25

3.41

3.85

0.2539

0.4584

4.99

4.16

4.30

0.2488

0.4661

2

]

Y[ cd/m2 ]

552

L22

0.47

0.30

0.07

0.3629

0.5212

0.85

0.70

0.31

0.2770

0.5126

553

Dmin

24.81

29.18

19.83

0.1901

0.5031

24.08

27.95

18.41

0.1932

0.5046

554

N1

18.28

21.54

14.98

0.1893

0.5018

17.89

21.03

14.57

0.1898

0.5020

555

N2

16.03

18.71

13.30

0.1905

0.5003

16.28

18.73

12.92

0.1937

0.5017

556

N3

13.73

15.97

11.30

0.1912

0.5005

14.68

16.93

11.73

0.1933

0.5015

557

N4

11.64

13.44

9.70

0.1921

0.4992

12.90

14.60

10.34

0.1963

0.4998

558

N5

9.80

11.37

8.17

0.1913

0.4995

11.28

12.98

9.20

0.1931

0.5001

559

N6

8.31

9.62

6.97

0.1916

0.4989

9.77

11.16

8.03

0.1942

0.4990

560

N7

6.91

7.95

5.89

0.1923

0.4974

8.10

9.11

6.79

0.1961

0.4966

Table A.15: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 521-560).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Radiometric measurements Index

Patch#

X

561

N8

5.97

562

N9

4.77

563

N10

564

N11

565 566

Y[ cd/m2 ]

199

Canon 350D measurements

Z

u’

v’

X

6.87

4.92

0.1929

0.4995

6.74

5.47

4.10

0.1924

0.4967

5.84

3.90

4.50

3.30

0.1917

0.4983

3.25

3.69

2.79

0.1945

0.4958

N12

2.54

2.87

2.15

0.1955

N13

1.97

2.26

1.69

0.1923

567

N14

1.57

1.78

1.29

568

N15

1.18

1.34

1.03

569

N16

0.86

0.98

570

N17

0.70

0.77

571

N18

0.47

572

N19

0.35

573

N20

574

N21

575 576

Y[ cd/m2 ]

Z

u’

v’

7.70

5.41

0.1947

0.5004

6.61

4.84

0.1955

0.4977

4.80

5.48

3.93

0.1944

0.4992

4.02

4.58

3.48

0.1934

0.4956

0.4962

3.45

3.84

2.84

0.1982

0.4969

0.4968

2.82

3.11

2.42

0.1985

0.4933

0.1959

0.4985

2.34

2.58

1.89

0.2006

0.4970

0.1938

0.4949

1.83

2.01

1.52

0.2006

0.4948

0.72

0.1944

0.4978

1.41

1.57

1.14

0.1997

0.4977

0.57

0.1987

0.4974

1.17

1.27

0.95

0.2025

0.4953

0.54

0.37

0.1968

0.5019

0.91

1.00

0.73

0.2019

0.4971

0.38

0.30

0.2020

0.4921

0.70

0.76

0.55

0.2036

0.4972

0.16

0.17

0.16

0.2042

0.4804

0.53

0.58

0.41

0.2032

0.4981

0.14

0.14

0.14

0.2122

0.4759

0.44

0.48

0.33

0.2018

0.5005

N22

0.08

0.08

0.08

0.2045

0.4669

0.36

0.40

0.27

0.2004

0.5018

Dmax

0.07

0.07

0.06

0.2330

0.4823

0.34

0.39

0.25

0.1985

0.5048

Table A.16: Radiometric measurements of training colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 561-576).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements Index

Patch#

X

Y[ cd/m2 ]

1

A1

64.45

55.63

2

A2

73.73

52.57

3

A3

85.64

53.81

4

A4

99.44

5

A5

250.70

6

A6

277.98

7

A7

8

A8

9

Z

200

Nikon D40 measurements u’

v’

X

Y[ cd/m2 ]

47.93

0.2472

0.4802

64.56

55.33

40.87

0.2994

0.4804

74.46

53.47

36.74

0.3415

0.4828

87.71

55.39

52.51

26.30

0.4117

0.4892

100.15

220.99

169.04

0.2462

0.4884

247.17

208.55

140.75

0.2904

0.4903

279.18

310.72

203.00

125.38

0.3330

0.4896

351.33

200.71

103.13

0.3828

0.4920

A9

641.06

656.62

503.18

0.2137

10

A10

673.75

643.83

475.69

11

A11

703.12

633.17

452.24

12

A12

768.91

617.86

13

A13

1491.29

1722.17

14

A14

1597.99

1789.56

15

A15

1650.75

16

A16

1612.23

17

A17

18 19

Z

u’

v’

46.44

0.2498

0.4817

38.16

0.3005

0.4856

35.26

0.3425

0.4867

53.07

25.62

0.4117

0.4909

214.87

164.48

0.2494

0.4879

207.17

140.30

0.2933

0.4897

312.73

202.51

122.79

0.3364

0.4901

357.82

205.16

102.65

0.3824

0.4933

0.4925

641.74

644.70

483.92

0.2182

0.4932

0.2292

0.4928

663.54

627.43

467.15

0.2313

0.4920

0.2433

0.4931

700.72

630.73

437.50

0.2443

0.4947

414.98

0.2726

0.4929

764.57

622.33

396.32

0.2709

0.4962

1389.58

0.1894

0.4922

1450.57

1708.46

1376.12

0.1859

0.4927

1391.51

0.1960

0.4938

1595.55

1779.78

1392.89

0.1966

0.4933

1882.77

1303.23

0.1953

0.5013

1635.09

1844.62

1285.57

0.1972

0.5006

1841.61

1383.86

0.1932

0.4964

1596.38

1833.54

1372.49

0.1922

0.4968

1614.16

1791.49

1373.76

0.1980

0.4945

1611.72

1808.42

1367.50

0.1963

0.4956

A18

1593.73

1818.15

1268.34

0.1951

0.5009

1571.93

1820.89

1281.89

0.1921

0.5007

A19

1513.41

1681.36

1380.48

0.1961

0.4901

1521.87

1699.51

1431.34

0.1944

0.4885

20

A20

142.02

80.14

56.27

0.3755

0.4767

150.14

88.56

67.73

0.3571

0.4739

21

A21

532.42

395.41

304.34

0.2887

0.4824

537.15

398.55

305.95

0.2890

0.4826

22

A22

1057.45

1026.46

637.76

0.2303

0.5030

1052.75

1023.53

637.46

0.2299

0.5029

23

B1

92.63

83.44

58.82

0.2436

0.4938

93.77

82.30

57.66

0.2499

0.4934

24

B2

109.14

85.04

45.46

0.2870

0.5031

108.67

84.30

43.72

0.2890

0.5043

25

B3

125.23

88.88

32.24

0.3221

0.5144

125.55

87.57

31.91

0.3272

0.5135

26

B4

143.55

85.94

12.01

0.3910

0.5266

147.46

88.44

10.61

0.3917

0.5286

27

B5

336.95

310.18

186.22

0.2429

0.5031

336.84

302.85

189.79

0.2473

0.5002

28

B6

369.92

297.69

139.15

0.2817

0.5101

372.95

297.06

133.52

0.2853

0.5112

29

B7

405.91

287.28

94.55

0.3248

0.5172

396.94

281.21

94.33

0.3242

0.5167

30

B8

443.81

273.00

58.24

0.3766

0.5213

442.69

271.69

58.13

0.3774

0.5211

31

B9

801.36

827.33

610.67

0.2131

0.4950

789.42

814.48

596.23

0.2134

0.4954

32

B10

821.34

807.71

563.16

0.2246

0.4970

809.14

806.04

538.40

0.2230

0.4998

33

B11

832.30

798.40

483.22

0.2335

0.5040

840.19

797.17

482.04

0.2359

0.5037

34

B12

922.64

814.23

384.85

0.2582

0.5128

908.40

796.44

383.98

0.2594

0.5117

35

B13

1438.76

1708.99

1424.08

0.1836

0.4907

1393.93

1645.57

1424.89

0.1837

0.4879

36

B14

1488.13

1564.12

1379.06

0.2046

0.4840

1484.32

1553.39

1339.74

0.2061

0.4854

37

B15

1622.99

1847.58

1129.96

0.1984

0.5081

1651.08

1875.89

1147.48

0.1987

0.5080

38

B16

1375.77

1564.33

1129.28

0.1949

0.4987

1342.37

1507.30

1184.33

0.1952

0.4932

39

B17

1521.93

1603.60

1170.99

0.2093

0.4961

1543.22

1623.78

1188.96

0.2095

0.4959

40

B18

1446.67

1704.55

1171.70

0.1895

0.5025

1445.42

1720.68

1193.83

0.1875

0.5022

Table A.17: Nikon D100 and D40 measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 1-40).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements

201

Nikon D40 measurements

Index

Patch#

X

Y[ cd/m2 ]

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

41

B19

1355.17

1490.14

1394.39

0.1944

0.4809

1372.71

1535.13

1396.46

0.1921

0.4833

42

B20

127.70

87.73

43.79

0.3243

0.5013

138.90

98.73

56.17

0.3107

0.4969

43

B21

378.76

268.46

116.37

0.3186

0.5082

375.63

270.13

120.27

0.3138

0.5077

44

B22

711.46

628.58

375.97

0.2526

0.5021

695.11

618.67

363.14

0.2513

0.5032

45

C1

181.52

177.90

113.90

0.2275

0.5016

178.73

171.23

108.88

0.2326

0.5014

46

C2

193.52

179.55

77.31

0.2482

0.5182

192.75

178.47

77.09

0.2486

0.5180

47

C3

202.50

178.60

50.57

0.2670

0.5299

205.65

179.45

50.46

0.2698

0.5297

48

C4

217.14

178.29

13.48

0.2962

0.5473

218.23

178.35

12.92

0.2977

0.5474

49

C5

633.62

641.75

343.56

0.2245

0.5116

628.07

615.72

324.53

0.2318

0.5113

50

C6

650.36

616.67

193.95

0.2482

0.5295

646.06

606.95

199.22

0.2497

0.5279

51

C7

665.84

595.31

107.05

0.2686

0.5403

657.40

581.07

106.90

0.2713

0.5395

52

C8

712.84

570.05

20.20

0.3058

0.5502

718.34

569.25

21.07

0.3083

0.5497

53

C9

955.23

1010.51

705.22

0.2096

0.4989

943.14

997.75

678.90

0.2102

0.5004

54

C10

956.10

989.64

601.60

0.2172

0.5059

972.10

997.71

581.49

0.2199

0.5078

55

C11

982.86

998.91

495.92

0.2252

0.5151

962.20

979.55

482.41

0.2250

0.5155

56

C12

1054.71

984.38

258.85

0.2542

0.5338

1053.27

989.45

256.28

0.2528

0.5344

57

C13

1331.58

1625.14

1464.23

0.1769

0.4859

1338.10

1646.10

1429.13

0.1765

0.4887

58

C14

1458.22

1454.81

1395.13

0.2124

0.4767

1422.73

1435.07

1375.03

0.2102

0.4771

59

C15

1667.24

1881.97

953.55

0.2036

0.5171

1658.88

1894.14

965.75

0.2013

0.5171

60

C16

1097.48

1205.21

952.56

0.1992

0.4923

1089.92

1206.86

947.02

0.1979

0.4930

61

C17

1370.89

1369.44

949.27

0.2215

0.4978

1367.68

1361.18

943.38

0.2222

0.4977

62

C18

1334.08

1624.72

975.70

0.1864

0.5107

1344.69

1671.21

1010.42

0.1827

0.5108

63

C19

1175.30

1284.41

1408.06

0.1906

0.4687

1164.46

1302.99

1399.67

0.1870

0.4708

64

C20

305.47

239.05

56.94

0.3008

0.5296

316.74

249.43

70.30

0.2968

0.5258

65

C21

520.66

498.47

281.39

0.2355

0.5074

519.81

493.80

276.14

0.2375

0.5076

66

C22

606.52

531.15

262.45

0.2592

0.5107

609.86

530.69

261.23

0.2608

0.5106

67

D1

125.54

128.57

88.56

0.2165

0.4988

129.99

129.88

86.16

0.2225

0.5002

68

D2

126.91

132.07

59.99

0.2219

0.5195

127.92

130.41

59.17

0.2262

0.5190

69

D3

128.83

135.28

37.65

0.2269

0.5361

129.15

132.80

38.57

0.2310

0.5343

70

D4

125.05

135.61

10.19

0.2284

0.5574

124.35

130.99

11.64

0.2342

0.5550

71

D5

502.46

535.64

296.58

0.2132

0.5114

493.97

529.17

289.29

0.2125

0.5121

72

D6

482.71

531.10

157.86

0.2164

0.5357

483.43

519.46

163.42

0.2206

0.5334

73

D7

452.85

499.80

81.38

0.2211

0.5490

452.70

491.96

77.42

0.2245

0.5490

74

D8

434.45

494.25

1.70

0.2213

0.5664

431.12

480.96

9.33

0.2247

0.5641

75

D9

1103.50

1223.29

617.57

0.2072

0.5167

1097.43

1201.37

612.25

0.2095

0.5160

76

D10

1083.96

1215.29

362.17

0.2125

0.5362

1089.69

1192.01

354.82

0.2176

0.5355

77

D11

1049.79

1192.57

175.96

0.2157

0.5514

1057.79

1193.75

159.82

0.2176

0.5526

78

D12

1022.02

1171.17

49.05

0.2182

0.5626

1044.25

1186.30

52.84

0.2199

0.5620

79

D13

1302.09

1631.68

1498.19

0.1721

0.4851

1305.03

1672.15

1476.66

0.1694

0.4883

80

D14

1378.06

1316.65

1341.30

0.2192

0.4711

1331.03

1276.62

1334.99

0.2174

0.4692

Table A.18: Nikon D100 and D40 measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 41-80).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements

202

Nikon D40 measurements

Index

Patch#

X

Y[ cd/m2 ]

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

81

D15

1698.16

1924.84

793.36

0.2061

0.5257

1696.67

1935.82

794.89

0.2049

0.5261

82

D16

868.25

935.97

746.37

0.2025

0.4913

854.49

928.62

707.97

0.2022

0.4943

83

D17

1230.54

1138.41

738.19

0.2399

0.4993

1206.77

1124.27

717.60

0.2387

0.5003

84

D18

1232.41

1570.07

815.64

0.1810

0.5189

1246.35

1590.58

821.46

0.1808

0.5192

85

D19

984.65

1079.72

1312.52

0.1865

0.4602

985.24

1079.17

1337.70

0.1860

0.4584

86

D20

94.67

94.55

52.78

0.2266

0.5092

103.72

104.00

63.55

0.2237

0.5048

87

D21

715.97

700.14

45.62

0.2522

0.5549

723.20

701.98

52.04

0.2536

0.5538

88

D22

901.26

1026.50

1242.67

0.1800

0.4613

894.38

1029.84

1245.25

0.1782

0.4616

89

E1

176.94

196.86

122.03

0.2025

0.5068

170.68

189.97

118.67

0.2022

0.5064

90

E2

160.56

190.67

85.52

0.1960

0.5236

162.63

189.07

85.57

0.1998

0.5227

91

E3

148.92

195.00

55.05

0.1839

0.5418

151.26

195.86

54.94

0.1859

0.5417

92

E4

127.91

198.82

8.75

0.1631

0.5705

128.32

196.89

9.86

0.1650

0.5696

93

E5

713.15

838.01

440.42

0.1953

0.5164

701.34

825.90

431.91

0.1950

0.5167

94

E6

628.71

814.04

277.05

0.1840

0.5359

626.59

796.86

281.83

0.1867

0.5342

95

E7

559.05

784.70

145.97

0.1751

0.5531

553.90

768.66

148.70

0.1768

0.5521

96

E8

500.18

776.42

61.37

0.1623

0.5667

497.91

762.16

64.37

0.1643

0.5658

97

E9

1131.11

1304.06

831.35

0.1951

0.5062

1088.94

1256.78

814.84

0.1946

0.5053

98

E10

1037.17

1260.13

631.25

0.1900

0.5195

1030.34

1238.36

631.25

0.1917

0.5184

99

E11

980.99

1260.23

506.36

0.1833

0.5299

979.11

1224.89

503.83

0.1877

0.5284

100

E12

934.10

1244.48

346.51

0.1810

0.5426

932.74

1240.48

349.54

0.1812

0.5423

101

E13

1242.19

1584.24

1557.55

0.1674

0.4804

1247.56

1610.86

1513.83

0.1666

0.4840

102

E14

1293.97

1153.13

1320.94

0.2295

0.4602

1270.94

1148.06

1304.91

0.2269

0.4611

103

E15

1720.06

1920.00

679.84

0.2113

0.5307

1669.74

1927.96

669.49

0.2049

0.5323

104

E16

660.58

704.03

560.66

0.2048

0.4911

665.45

717.83

575.20

0.2023

0.4910

105

E17

1074.25

934.38

593.35

0.2547

0.4985

1095.86

958.58

569.84

0.2551

0.5020

106

E18

1142.74

1483.74

692.35

0.1794

0.5242

1151.31

1503.55

678.90

0.1789

0.5257

107

E19

814.18

898.14

1264.35

0.1801

0.4471

830.31

916.31

1245.74

0.1814

0.4503

108

E20

197.87

235.88

56.55

0.2026

0.5435

205.56

245.32

69.93

0.2008

0.5391

109

E21

606.06

743.18

169.67

0.1977

0.5454

606.51

745.53

170.40

0.1972

0.5455

110

E22

261.10

466.43

345.39

0.1259

0.5062

266.62

475.38

354.71

0.1260

0.5056

111

F1

123.76

146.39

116.83

0.1854

0.4934

123.78

142.94

114.00

0.1897

0.4929

112

F2

108.56

147.87

102.30

0.1649

0.5053

108.89

144.87

102.01

0.1683

0.5038

113

F3

94.82

148.34

94.26

0.1457

0.5130

95.83

145.95

96.01

0.1490

0.5105

114

F4

78.72

150.55

79.47

0.1223

0.5261

74.65

143.58

75.57

0.1216

0.5263

115

F5

360.28

454.74

326.61

0.1766

0.5015

363.04

447.40

323.74

0.1805

0.5005

116

F6

296.24

446.90

275.46

0.1514

0.5139

300.64

443.81

277.39

0.1544

0.5127

117

F7

244.92

439.00

221.13

0.1307

0.5273

246.92

432.91

224.96

0.1332

0.5254

118

F8

219.55

416.35

207.23

0.1239

0.5288

219.96

411.41

211.33

0.1252

0.5271

119

F9

769.99

917.80

687.64

0.1855

0.4976

771.87

905.49

679.34

0.1883

0.4972

120

F10

691.34

910.46

619.78

0.1706

0.5056

679.08

904.54

606.34

0.1691

0.5067

Table A.19: Nikon D100 and D40 measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 81-120).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements

203

Nikon D40 measurements

Index

Patch#

X

Y[ cd/m2 ]

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

121

F11

630.03

902.02

554.44

0.1593

0.5130

621.12

895.89

561.42

0.1578

0.5121

122

F12

562.29

919.85

522.53

0.1412

0.5198

554.68

901.40

507.39

0.1422

0.5201

123

F13

1191.33

1584.50

1570.24

0.1606

0.4806

1198.22

1581.90

1546.42

0.1621

0.4815

124

F14

1181.74

1005.55

1235.21

0.2367

0.4532

1169.50

1006.17

1238.37

0.2342

0.4533

125

F15

1724.61

1916.26

563.79

0.2145

0.5363

1733.50

1962.38

547.97

0.2113

0.5382

126

F16

493.22

520.40

420.37

0.2064

0.4899

486.48

513.85

414.45

0.2062

0.4900

127

F17

958.23

780.76

444.66

0.2737

0.5018

962.32

779.26

455.31

0.2746

0.5003

128

F18

1055.33

1409.35

557.39

0.1769

0.5314

1045.77

1415.35

546.47

0.1749

0.5326

129

F19

674.62

715.22

1181.44

0.1805

0.4306

688.41

740.17

1202.57

0.1788

0.4326

130

F20

61.28

101.63

36.00

0.1447

0.5400

68.09

109.36

47.01

0.1473

0.5322

131

F21

159.85

314.39

59.04

0.1265

0.5600

160.72

311.15

67.05

0.1278

0.5568

132

F22

257.66

476.25

69.32

0.1354

0.5633

259.26

474.89

78.97

0.1361

0.5609

133

G1

91.29

102.71

103.39

0.1880

0.4760

92.55

102.81

100.25

0.1913

0.4781

134

G2

84.52

106.11

112.86

0.1678

0.4740

84.07

105.21

107.21

0.1695

0.4773

135

G3

74.13

103.93

113.09

0.1503

0.4742

73.25

99.53

112.46

0.1539

0.4706

136

G4

62.56

107.19

129.27

0.1216

0.4687

61.03

104.56

124.12

0.1220

0.4701

137

G5

367.45

449.66

411.62

0.1761

0.4848

372.28

451.17

399.97

0.1786

0.4869

138

G6

314.26

453.89

433.05

0.1493

0.4851

309.58

439.81

422.26

0.1515

0.4843

139

G7

266.37

432.79

450.07

0.1314

0.4804

262.25

426.19

435.61

0.1318

0.4818

140

G8

241.12

421.14

448.82

0.1220

0.4795

238.79

418.98

445.19

0.1215

0.4798

141

G9

800.02

951.12

802.89

0.1831

0.4898

791.91

938.80

782.65

0.1839

0.4906

142

G10

710.77

936.45

822.96

0.1650

0.4893

706.55

928.31

802.00

0.1659

0.4904

143

G11

648.84

957.12

840.21

0.1481

0.4915

641.10

906.74

876.21

0.1520

0.4837

144

G12

592.25

950.35

902.32

0.1350

0.4872

584.51

944.91

885.08

0.1343

0.4884

145

G13

1139.53

1524.24

1574.36

0.1587

0.4775

1155.44

1559.48

1623.56

0.1571

0.4771

146

G14

1101.21

890.94

1196.57

0.2440

0.4441

1113.85

902.65

1187.67

0.2446

0.4460

147

G15

1727.24

1962.65

457.84

0.2123

0.5428

1756.00

1954.49

462.33

0.2164

0.5419

148

G16

353.84

361.61

313.50

0.2107

0.4844

357.66

369.92

308.41

0.2094

0.4873

149

G17

858.65

642.11

329.65

0.2992

0.5034

848.21

641.51

313.55

0.2973

0.5059

150

G18

961.61

1329.44

461.72

0.1726

0.5368

972.71

1345.77

459.71

0.1726

0.5374

151

G19

545.91

573.36

1131.57

0.1741

0.4115

553.36

575.92

1141.36

0.1754

0.4108

152

G20

106.49

195.45

85.95

0.1292

0.5337

113.00

198.60

98.85

0.1334

0.5275

153

G21

175.74

274.23

222.56

0.1418

0.4979

179.77

276.61

224.05

0.1438

0.4978

154

G22

52.10

59.91

123.11

0.1579

0.4084

54.86

65.25

118.22

0.1581

0.4230

155

H1

95.49

104.01

119.60

0.1896

0.4647

94.96

101.90

118.24

0.1920

0.4636

156

H2

88.47

103.15

149.38

0.1698

0.4455

87.92

103.32

140.97

0.1707

0.4513

157

H3

83.67

105.93

180.01

0.1513

0.4309

83.89

105.82

175.81

0.1526

0.4332

158

H4

72.49

103.92

236.33

0.1239

0.3996

72.35

101.91

235.38

0.1254

0.3975

159

H5

309.84

357.56

406.14

0.1798

0.4669

314.52

356.66

411.99

0.1823

0.4652

160

H6

284.33

358.37

495.47

0.1591

0.4513

284.71

351.70

487.72

0.1621

0.4507

Table A.20: Nikon D100 and D40 measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 121-160).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements

204

Nikon D40 measurements

Index

Patch#

X

Y[ cd/m2 ]

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

161

H7

248.18

347.06

600.27

0.1368

0.4305

249.27

346.31

590.88

0.1382

0.4319

162

H8

228.78

336.15

697.46

0.1243

0.4109

228.98

339.03

682.22

0.1244

0.4145

163

H9

847.41

969.55

874.31

0.1882

0.4844

858.03

965.30

859.17

0.1916

0.4849

164

H10

798.08

957.03

938.45

0.1777

0.4793

806.38

943.34

957.59

0.1809

0.4762

165

H11

759.55

929.67

1065.44

0.1697

0.4674

756.34

932.69

1043.03

0.1692

0.4696

166

H12

697.54

961.31

1393.58

0.1446

0.4483

696.28

941.98

1404.33

0.1463

0.4453

167

H13

1111.71

1544.22

1586.52

0.1532

0.4787

1115.74

1542.96

1610.30

0.1534

0.4774

168

H14

1032.79

784.68

1167.79

0.2533

0.4331

1011.65

770.71

1141.76

0.2530

0.4336

169

H15

1697.34

1897.54

348.44

0.2176

0.5473

1719.40

1943.86

358.67

0.2152

0.5475

170

H16

232.30

241.99

206.72

0.2073

0.4859

246.88

254.34

208.12

0.2107

0.4885

171

H17

744.27

522.62

228.69

0.3212

0.5074

741.69

522.14

222.85

0.3210

0.5084

172

H18

883.23

1244.74

359.20

0.1712

0.5430

880.15

1253.95

351.95

0.1697

0.5440

173

H19

427.97

440.86

1032.52

0.1688

0.3914

432.84

443.50

1031.55

0.1701

0.3921

174

H20

59.59

79.04

115.71

0.1497

0.4467

67.29

87.39

126.45

0.1532

0.4475

175

H21

157.04

237.98

334.28

0.1328

0.4529

157.24

233.97

329.63

0.1351

0.4523

176

H22

46.03

27.41

51.25

0.3014

0.4038

48.76

30.97

55.02

0.2876

0.4109

177

I1

146.74

148.32

219.79

0.1937

0.4404

142.35

143.57

205.95

0.1954

0.4435

178

I2

142.99

145.85

307.75

0.1758

0.4034

148.91

147.79

308.43

0.1810

0.4042

179

I3

142.96

144.44

440.02

0.1575

0.3582

141.94

142.89

438.86

0.1576

0.3570

180

I4

128.92

133.36

633.31

0.1280

0.2979

132.15

132.15

635.89

0.1314

0.2957

181

I5

443.78

467.87

511.40

0.1973

0.4681

440.22

458.46

488.44

0.2005

0.4698

182

I6

435.66

465.49

652.55

0.1859

0.4468

432.15

457.52

625.08

0.1885

0.4490

183

I7

419.89

451.71

750.75

0.1778

0.4303

425.90

442.21

761.61

0.1823

0.4259

184

I8

399.45

415.45

1038.26

0.1639

0.3837

400.80

419.25

1003.74

0.1653

0.3890

185

I9

912.64

1003.36

906.18

0.1954

0.4834

913.58

982.27

913.58

0.1987

0.4808

186

I10

873.89

953.96

1007.85

0.1920

0.4716

882.16

949.76

1016.32

0.1941

0.4702

187

I11

884.32

962.66

1085.88

0.1904

0.4663

881.09

954.36

1099.55

0.1906

0.4644

188

I12

856.93

924.50

1353.25

0.1825

0.4430

883.13

945.92

1348.13

0.1848

0.4453

189

I13

1051.36

1452.72

1595.24

0.1522

0.4732

1080.08

1492.08

1614.70

0.1526

0.4744

190

I14

949.36

697.65

1065.02

0.2599

0.4298

958.84

708.59

1067.43

0.2593

0.4312

191

I15

1720.42

1889.67

255.19

0.2232

0.5516

1723.27

1902.46

287.75

0.2215

0.5501

192

I16

149.04

153.53

131.13

0.2095

0.4856

160.06

163.73

137.32

0.2114

0.4867

193

I17

648.77

421.70

153.75

0.3490

0.5104

642.73

421.81

155.05

0.3458

0.5106

194

I18

803.05

1178.38

272.63

0.1665

0.5496

803.84

1171.44

267.03

0.1677

0.5498

195

I19

340.72

339.48

975.88

0.1630

0.3654

349.26

345.07

970.93

0.1656

0.3680

196

I20

204.75

243.55

466.77

0.1558

0.4169

213.55

247.78

477.26

0.1593

0.4159

197

I21

308.61

412.26

752.86

0.1411

0.4240

307.27

405.02

747.80

0.1425

0.4226

198

I22

29.25

29.47

11.34

0.2315

0.5249

33.07

33.30

18.31

0.2252

0.5101

199

J1

53.42

47.20

86.50

0.2093

0.4161

51.53

44.65

82.80

0.2126

0.4144

200

J2

56.78

43.33

135.95

0.2038

0.3499

58.20

43.39

137.84

0.2074

0.3479

Table A.21: Nikon D100 and D40 measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 161-200).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements Index

Patch#

X

201

J3

57.20

202

J4

54.01

203

J5

204 205 206

Y[ cd/m2 ]

205

Nikon D40 measurements

Z

u’

v’

X

35.36

216.31

0.1850

0.2574

56.96

24.82

332.72

0.1517

0.1568

56.14

278.90

273.00

341.73

0.2066

0.4551

J6

305.99

280.40

458.91

0.2078

J7

305.23

259.49

575.10

0.2061

J8

319.41

242.12

785.55

207

J9

772.13

831.70

208

J10

803.73

824.66

209

J11

801.57

210

J12

211

J13

212

J14

213

J15

214

J16

85.26

215

J17

547.17

216

J18

732.51

217

J19

258.55

218

J20

219

J21

220

J22

221 222

Y[ cd/m2 ]

Z

u’

v’

36.43

215.34

0.1824

0.2624

29.76

328.60

0.1509

0.1800

285.38

273.67

346.99

0.2102

0.4535

0.4285

315.55

283.04

472.52

0.2111

0.4261

0.3943

312.26

261.29

574.06

0.2098

0.3950

0.2025

0.3455

324.96

242.85

789.21

0.2052

0.3450

781.97

0.1981

0.4800

777.60

825.75

772.37

0.2009

0.4801

855.95

0.2042

0.4715

806.50

830.93

870.42

0.2031

0.4709

808.54

966.18

0.2026

0.4597

813.14

813.31

975.27

0.2041

0.4592

825.88

776.83

1209.96

0.2051

0.4340

828.95

781.62

1252.01

0.2033

0.4313

1050.15

1483.34

1667.68

0.1484

0.4717

1057.47

1473.60

1659.99

0.1503

0.4713

890.93

620.93

1007.72

0.2694

0.4225

889.05

614.61

1018.96

0.2701

0.4202

1689.91

1839.88

182.01

0.2266

0.5550

1710.91

1851.75

193.99

0.2276

0.5542

85.68

76.37

0.2132

0.4821

93.93

94.68

80.79

0.2139

0.4851

331.45

95.59

0.3770

0.5138

537.51

327.07

93.92

0.3755

0.5141

1086.41

182.27

0.1667

0.5563

737.23

1082.03

184.49

0.1683

0.5558

255.64

833.58

0.1568

0.3489

270.89

261.72

869.06

0.1593

0.3462

29.48

12.72

189.86

0.1493

0.1449

34.04

18.78

190.31

0.1535

0.1906

253.13

232.42

546.20

0.1883

0.3890

255.52

232.22

538.42

0.1909

0.3904

23.16

31.73

11.95

0.1731

0.5338

26.40

34.58

18.67

0.1757

0.5177

K1

84.81

74.04

94.34

0.2295

0.4507

84.00

71.43

91.99

0.2347

0.4491

K2

98.20

70.11

119.47

0.2604

0.4183

98.05

70.10

114.29

0.2628

0.4227

223

K3

107.36

65.30

142.97

0.2833

0.3877

111.05

67.53

142.99

0.2860

0.3913

224

K4

122.23

59.74

176.54

0.3158

0.3473

122.58

59.22

178.81

0.3169

0.3445

225

K5

235.36

207.13

250.20

0.2300

0.4555

240.84

207.46

254.75

0.2340

0.4535

226

K6

271.80

200.73

327.16

0.2550

0.4237

278.27

206.18

317.09

0.2575

0.4293

227

K7

298.55

189.58

386.57

0.2776

0.3966

305.18

194.72

385.51

0.2785

0.3999

228

K8

339.24

174.30

487.73

0.3072

0.3552

336.91

174.52

493.90

0.3038

0.3540

229

K9

693.12

685.71

675.95

0.2132

0.4745

695.92

688.61

668.17

0.2136

0.4756

230

K10

720.74

662.29

757.29

0.2230

0.4611

730.82

661.57

744.47

0.2268

0.4620

231

K11

783.10

663.58

853.42

0.2356

0.4491

774.27

661.24

834.04

0.2347

0.4510

232

K12

873.56

635.82

1051.55

0.2576

0.4218

865.29

629.73

1075.13

0.2557

0.4187

233

K13

1013.33

1428.03

1633.79

0.1483

0.4702

1024.23

1437.25

1650.95

0.1488

0.4698

234

K14

837.48

562.42

947.73

0.2765

0.4177

850.48

574.15

970.70

0.2749

0.4176

235

K15

1691.13

1809.26

109.56

0.2320

0.5584

1705.08

1817.25

120.68

0.2326

0.5577

236

K16

46.08

46.34

40.75

0.2135

0.4830

52.39

52.87

44.31

0.2142

0.4863

237

K17

444.17

252.90

54.65

0.4036

0.5171

443.07

252.87

55.82

0.4025

0.5168

238

K18

677.53

1011.69

110.39

0.1675

0.5626

673.07

997.50

112.63

0.1685

0.5620

239

K19

208.60

197.04

808.31

0.1493

0.3173

220.13

205.27

799.61

0.1545

0.3242

240

K20

101.57

60.19

196.92

0.2547

0.3396

105.43

64.11

203.91

0.2512

0.3437

Table A.22: Nikon D100 and D40 measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 201-240).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements

206

Nikon D40 measurements

Index

Patch#

X

Y[ cd/m2 ]

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

241

K21

397.92

313.75

626.45

0.2279

0.4043

407.33

315.61

640.92

0.2306

0.4021

242

K22

26.39

38.46

59.42

0.1350

0.4429

28.51

40.69

65.57

0.1365

0.4383

243

L1

92.12

83.81

84.97

0.2297

0.4702

83.92

73.07

74.17

0.2394

0.4689

244

L2

105.80

79.81

86.31

0.2709

0.4599

100.80

72.82

80.85

0.2809

0.4565

245

L3

118.05

75.97

89.27

0.3096

0.4482

112.05

69.99

80.97

0.3191

0.4484

246

L4

132.96

71.45

82.93

0.3659

0.4424

125.95

65.45

77.16

0.3762

0.4399

247

L5

248.78

215.71

218.65

0.2403

0.4689

246.33

209.64

208.76

0.2453

0.4697

248

L6

298.48

211.91

226.01

0.2873

0.4590

293.48

207.08

220.62

0.2890

0.4589

249

L7

329.08

196.99

226.73

0.3320

0.4472

313.28

187.11

218.10

0.3320

0.4462

250

L8

367.14

191.98

219.31

0.3761

0.4425

364.38

191.91

222.17

0.3728

0.4418

251

L9

702.22

710.63

609.03

0.2130

0.4849

700.24

696.94

583.10

0.2171

0.4861

252

L10

747.56

684.67

614.98

0.2325

0.4791

743.97

679.10

614.01

0.2330

0.4785

253

L11

794.03

671.57

638.28

0.2485

0.4728

800.25

668.10

639.68

0.2512

0.4719

254

L12

872.47

649.98

657.65

0.2771

0.4645

887.07

654.72

666.73

0.2792

0.4637

255

L13

913.07

1334.38

1598.43

0.1420

0.4669

931.02

1354.07

1620.16

0.1427

0.4669

256

L14

653.15

385.35

750.19

0.3009

0.3994

660.93

393.15

759.78

0.2991

0.4004

257

L15

1584.03

1598.42

-2.98

0.2480

0.5630

1614.00

1604.47

11.56

0.2511

0.5615

258

L16

12.89

13.06

8.80

0.2193

0.4997

16.32

16.47

10.83

0.2206

0.5011

259

L17

236.37

122.76

17.69

0.4437

0.5185

245.20

128.75

19.21

0.4390

0.5187

260

L18

525.75

766.61

-1.85

0.1750

0.5740

533.83

772.69

5.26

0.1759

0.5728

261

L19

103.32

82.82

546.26

0.1385

0.2498

111.11

88.75

548.21

0.1440

0.2587

262

L20

168.23

96.80

154.51

0.3229

0.4181

167.70

98.39

153.76

0.3187

0.4207

263

L21

428.17

329.87

385.52

0.2622

0.4545

421.04

328.24

371.12

0.2608

0.4574

264

L22

56.15

36.87

11.45

0.3490

0.5156

58.54

38.78

15.24

0.3413

0.5088

265

Dmin

2126.75

2411.78

1915.30

0.1931

0.4928

2120.00

2438.94

1904.74

0.1909

0.4942

266

N1

1655.90

1942.30

1527.69

0.1872

0.4942

1718.26

1957.42

1568.09

0.1921

0.4923

267

N2

1567.32

1799.14

1405.20

0.1913

0.4941

1548.59

1775.02

1387.93

0.1916

0.4940

268

N3

1408.41

1599.83

1264.04

0.1929

0.4931

1405.73

1583.77

1268.70

0.1941

0.4921

269

N4

1226.09

1390.55

1092.23

0.1934

0.4935

1223.92

1338.37

1131.58

0.1983

0.4878

270

N5

1043.54

1173.03

941.47

0.1945

0.4919

1043.75

1154.80

961.54

0.1965

0.4891

271

N6

904.22

998.35

837.52

0.1967

0.4885

906.56

991.02

827.04

0.1987

0.4886

272

N7

755.87

833.96

701.14

0.1967

0.4884

754.80

809.90

704.97

0.2010

0.4854

273

N8

633.78

689.96

593.87

0.1986

0.4865

649.66

698.06

582.28

0.2020

0.4883

274

N9

532.52

580.66

507.47

0.1979

0.4855

543.79

581.67

516.40

0.2011

0.4839

275

N10

446.83

481.71

416.37

0.2003

0.4859

450.43

472.77

417.10

0.2049

0.4839

276

N11

358.84

382.23

347.06

0.2012

0.4822

369.07

386.86

346.17

0.2047

0.4829

277

N12

291.86

310.17

277.97

0.2020

0.4831

294.02

308.54

288.73

0.2032

0.4797

278

N13

238.56

254.74

232.11

0.2006

0.4821

245.92

256.56

232.86

0.2052

0.4818

279

N14

193.90

206.70

186.05

0.2013

0.4829

198.04

205.06

190.97

0.2059

0.4798

280

N15

155.89

161.32

155.07

0.2051

0.4775

159.38

165.68

152.33

0.2055

0.4808

Table A.23: Nikon D100 and D40 measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 241-280).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements

207

Nikon D40 measurements

Index

Patch#

X

Y[ cd/m2 ]

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

281

N16

114.60

120.65

110.55

0.2032

0.4813

118.95

122.39

113.45

0.2073

0.4799

282

N17

87.98

89.47

87.04

0.2081

0.4761

94.09

94.21

90.00

0.2118

0.4771

283

N18

63.81

65.00

63.24

0.2078

0.4762

67.31

67.53

64.66

0.2113

0.4770

284

N19

44.79

45.16

44.77

0.2091

0.4745

46.51

46.26

45.43

0.2122

0.4749

285

N20

27.23

27.89

28.17

0.2055

0.4735

29.88

30.02

31.11

0.2084

0.4711

286

N21

19.11

19.54

18.84

0.2074

0.4769

21.90

22.24

22.18

0.2076

0.4743

287

N22

11.40

11.76

11.21

0.2059

0.4780

13.69

13.81

14.29

0.2076

0.4713

288

Dmax

6.58

6.95

6.18

0.2034

0.4835

9.35

9.85

10.06

0.1999

0.4733

289

A1

2.72

2.66

2.13

0.2217

0.4886

2.33

2.18

1.72

0.2322

0.4881

290

A2

3.00

2.65

2.05

0.2450

0.4877

2.50

2.11

1.63

0.2562

0.4864

291

A3

3.33

2.78

2.13

0.2592

0.4864

2.82

2.18

1.60

0.2790

0.4867

292

A4

3.92

2.98

2.04

0.2868

0.4898

3.32

2.34

1.53

0.3092

0.4896

293

A5

6.38

6.08

4.32

0.2307

0.4950

5.72

5.23

3.75

0.2400

0.4933

294

A6

7.04

5.98

4.16

0.2581

0.4927

6.43

5.24

3.35

0.2705

0.4960

295

A7

7.80

5.99

3.88

0.2856

0.4932

7.30

5.26

3.21

0.3050

0.4938

296

A8

9.13

6.21

3.60

0.3228

0.4944

8.31

5.36

2.90

0.3411

0.4952

297

A9

14.62

15.40

10.44

0.2111

0.5005

13.67

14.37

9.46

0.2122

0.5021

298

A10

15.05

15.09

10.25

0.2212

0.4990

14.22

14.11

9.48

0.2237

0.4993

299

A11

15.74

15.31

9.69

0.2294

0.5020

15.03

14.00

9.08

0.2383

0.4995

300

A12

19.74

17.54

11.95

0.2478

0.4953

16.48

14.12

8.36

0.2602

0.5016

301

A13

31.89

37.42

25.52

0.1905

0.5029

30.46

35.78

23.87

0.1907

0.5041

302

A14

33.96

38.95

25.51

0.1956

0.5046

32.66

36.98

24.99

0.1973

0.5025

303

A15

34.87

40.47

24.75

0.1948

0.5086

33.52

38.46

22.74

0.1976

0.5101

304

A16

34.72

40.72

25.29

0.1925

0.5080

33.37

38.38

24.84

0.1953

0.5053

305

A17

34.84

40.26

25.09

0.1952

0.5075

33.60

38.01

24.58

0.1984

0.5049

306

A18

34.13

39.90

24.44

0.1934

0.5087

33.87

39.57

23.29

0.1943

0.5107

307

A19

32.88

38.02

25.54

0.1935

0.5034

32.15

36.38

25.03

0.1970

0.5015

308

A20

4.95

3.68

2.60

0.2914

0.4873

4.19

2.87

2.10

0.3131

0.4823

309

A21

12.29

9.97

7.00

0.2690

0.4908

11.58

9.04

6.48

0.2778

0.4883

310

A22

23.06

23.43

13.15

0.2228

0.5094

22.68

22.58

12.22

0.2279

0.5106

311

B1

2.96

2.86

2.07

0.2270

0.4945

2.58

2.44

1.69

0.2325

0.4964

312

B2

3.31

2.97

1.92

0.2472

0.4984

2.89

2.47

1.49

0.2603

0.5008

313

B3

3.63

3.10

1.81

0.2611

0.5024

3.20

2.57

1.35

0.2794

0.5050

314

B4

4.12

3.18

1.57

0.2916

0.5062

3.62

2.61

1.06

0.3152

0.5112

315

B5

7.25

7.12

4.25

0.2286

0.5054

6.97

6.53

3.90

0.2391

0.5040

316

B6

8.19

7.23

3.55

0.2576

0.5111

7.60

6.50

2.88

0.2672

0.5143

317

B7

8.77

7.07

2.97

0.2836

0.5142

8.17

6.26

2.20

0.3006

0.5184

318

B8

10.13

7.11

2.57

0.3257

0.5139

9.30

6.22

1.78

0.3447

0.5186

319

B9

16.77

18.20

11.59

0.2066

0.5048

15.47

16.38

10.56

0.2113

0.5034

320

B10

17.27

17.92

10.64

0.2172

0.5072

16.06

16.29

9.69

0.2219

0.5065

Table A.24: Nikon D100 and D40 measurements by HDR characterisation (bright-side IT8.7/1 – patch index: 281-288 and dark-side – patch index: 289-320).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements Index

Patch#

X

321

B11

17.68

322

B12

19.13

323

B13

324 325

Y[ cd/m2 ]

208

Nikon D40 measurements

Z

u’

v’

X

17.68

9.97

0.2261

0.5087

16.63

17.75

8.01

0.2473

0.5163

17.89

28.45

34.61

23.74

0.1839

0.5034

B14

30.41

32.92

23.70

0.2043

B15

33.20

38.12

20.37

0.1994

326

B16

27.37

31.83

21.05

327

B17

30.45

33.21

328

B18

29.64

36.20

329

B19

27.25

330

B20

331

B21

332 333

Y[ cd/m2 ]

Z

u’

v’

16.35

9.13

0.2300

0.5087

16.32

7.01

0.2523

0.5176

26.88

31.98

23.04

0.1868

0.4999

0.4977

29.17

31.35

22.90

0.2054

0.4966

0.5151

32.22

36.40

19.07

0.2028

0.5155

0.1928

0.5044

26.54

29.91

20.02

0.1983

0.5029

20.60

0.2063

0.5063

29.71

31.95

19.88

0.2090

0.5057

20.25

0.1872

0.5144

28.22

34.41

19.63

0.1871

0.5134

31.47

23.54

0.1913

0.4969

26.57

30.64

23.41

0.1910

0.4956

4.20

3.54

2.28

0.2614

0.4969

3.53

2.77

1.63

0.2823

0.4988

8.64

6.86

3.34

0.2845

0.5079

7.92

6.01

2.61

0.2993

0.5107

B22

14.85

14.05

7.81

0.2385

0.5078

14.12

13.19

6.74

0.2432

0.5113

C1

4.20

4.31

2.64

0.2188

0.5053

3.72

3.77

2.28

0.2216

0.5056

334

C2

4.44

4.39

2.16

0.2317

0.5146

3.99

3.82

1.78

0.2393

0.5160

335

C3

4.71

4.46

1.84

0.2439

0.5206

4.23

3.91

1.38

0.2526

0.5250

336

C4

4.95

4.54

1.40

0.2566

0.5290

4.55

4.00

1.01

0.2690

0.5328

337

C5

11.94

12.61

6.08

0.2177

0.5174

11.13

11.44

5.45

0.2237

0.5172

338

C6

12.71

12.61

4.28

0.2369

0.5286

11.81

11.50

3.41

0.2428

0.5320

339

C7

13.04

12.35

2.92

0.2520

0.5368

12.23

11.27

2.10

0.2607

0.5407

340

C8

14.12

12.23

1.79

0.2783

0.5424

13.07

11.05

1.06

0.2872

0.5464

341

C9

18.50

20.34

11.88

0.2060

0.5096

16.95

18.22

10.69

0.2103

0.5088

342

C10

18.76

20.29

10.55

0.2116

0.5147

17.46

18.68

9.45

0.2143

0.5157

343

C11

19.13

20.14

9.26

0.2193

0.5194

17.93

18.94

8.23

0.2195

0.5217

344

C12

20.79

20.63

5.41

0.2399

0.5359

19.41

19.08

4.44

0.2435

0.5384

345

C13

25.31

31.42

23.29

0.1787

0.4992

24.17

30.19

21.96

0.1781

0.5005

346

C14

27.06

28.45

22.24

0.2080

0.4919

26.00

26.77

21.20

0.2118

0.4905

347

C15

32.05

37.52

16.05

0.1994

0.5252

29.71

34.07

14.61

0.2033

0.5245

348

C16

21.69

24.61

16.42

0.1971

0.5033

20.26

22.80

15.39

0.1985

0.5024

349

C17

26.39

27.26

16.25

0.2180

0.5068

25.04

25.47

15.52

0.2208

0.5053

350

C18

26.82

33.98

17.52

0.1821

0.5191

24.14

29.95

15.69

0.1855

0.5179

351

C19

23.22

26.28

22.73

0.1913

0.4870

21.60

24.44

22.11

0.1901

0.4839

352

C20

7.08

6.08

2.24

0.2699

0.5211

6.29

5.26

1.58

0.2797

0.5265

353

C21

10.66

10.74

5.49

0.2266

0.5135

9.72

9.70

4.87

0.2291

0.5140

354

C22

12.39

11.57

5.42

0.2452

0.5150

11.53

10.66

4.49

0.2493

0.5189

355

D1

3.06

3.21

2.09

0.2130

0.5024

2.69

2.77

1.74

0.2175

0.5042

356

D2

3.17

3.34

1.78

0.2162

0.5129

2.76

2.86

1.38

0.2221

0.5167

357

D3

3.24

3.50

1.55

0.2147

0.5216

2.79

2.90

1.14

0.2246

0.5252

358

D4

3.23

3.59

1.23

0.2130

0.5317

2.85

3.06

0.85

0.2218

0.5369

359

D5

9.27

10.30

5.08

0.2072

0.5178

8.50

9.22

4.54

0.2120

0.5173

360

D6

9.28

10.53

3.21

0.2099

0.5359

8.52

9.38

2.67

0.2168

0.5369

Table A.25: Nikon D100 and D40 measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 321-360).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements Index

Patch#

X

361

D7

8.86

362

D8

9.02

363

D9

364 365

Y[ cd/m2 ]

209

Nikon D40 measurements

Z

u’

v’

X

10.08

2.32

0.2122

0.5432

8.18

10.48

1.42

0.2115

0.5532

8.05

20.06

23.02

10.23

0.2026

0.5231

D10

20.56

23.71

6.50

0.2079

D11

19.93

23.39

3.88

0.2085

366

D12

19.73

23.85

1.93

367

D13

23.16

29.83

368

D14

24.53

24.13

369

D15

31.39

370

D16

371

D17

372 373

Y[ cd/m2 ]

Z

u’

v’

9.24

1.52

0.2162

0.5495

9.41

0.68

0.2129

0.5599

18.69

21.37

9.31

0.2037

0.5238

0.5392

18.88

21.57

5.61

0.2102

0.5403

0.5505

18.62

21.61

3.12

0.2115

0.5523

0.2059

0.5600

18.86

22.69

1.03

0.2082

0.5636

22.68

0.1719

0.4984

22.62

28.90

22.17

0.1731

0.4977

21.62

0.2174

0.4812

23.66

23.10

19.97

0.2200

0.4834

36.16

13.46

0.2045

0.5299

29.52

34.42

12.07

0.2029

0.5322

16.19

18.27

12.41

0.1978

0.5021

15.02

16.82

11.04

0.2000

0.5038

22.09

21.72

12.42

0.2293

0.5076

20.84

20.07

11.20

0.2346

0.5081

D18

23.57

30.65

13.73

0.1798

0.5259

21.68

27.93

12.58

0.1813

0.5255

D19

18.12

20.71

20.48

0.1858

0.4777

17.14

19.14

20.02

0.1882

0.4728

374

D20

2.98

3.11

1.95

0.2150

0.5045

2.33

2.33

1.35

0.2255

0.5074

375

D21

13.64

14.22

1.51

0.2357

0.5529

12.79

13.22

0.87

0.2393

0.5568

376

D22

16.68

19.88

19.37

0.1789

0.4797

16.03

18.72

18.91

0.1813

0.4765

377

E1

3.63

4.03

2.50

0.2027

0.5068

3.22

3.55

2.09

0.2055

0.5093

378

E2

3.44

4.05

1.97

0.1963

0.5200

3.07

3.55

1.54

0.2018

0.5242

379

E3

3.37

4.27

1.62

0.1864

0.5318

2.91

3.67

1.17

0.1896

0.5372

380

E4

3.22

4.50

1.21

0.1735

0.5447

2.74

3.87

0.70

0.1741

0.5539

381

E5

12.14

14.66

6.88

0.1921

0.5222

11.18

13.34

6.09

0.1948

0.5230

382

E6

11.24

14.71

4.61

0.1830

0.5388

10.60

13.69

3.94

0.1861

0.5410

383

E7

10.41

14.75

2.97

0.1731

0.5518

9.59

13.54

2.26

0.1747

0.5553

384

E8

10.08

15.40

1.88

0.1634

0.5618

9.16

14.05

1.16

0.1640

0.5661

385

E9

19.04

22.51

12.32

0.1934

0.5146

17.96

21.10

11.37

0.1950

0.5152

386

E10

18.65

22.96

9.90

0.1899

0.5262

17.40

21.21

9.18

0.1917

0.5257

387

E11

17.81

23.00

8.25

0.1839

0.5341

16.68

21.30

7.14

0.1865

0.5361

388

E12

17.79

24.23

5.88

0.1784

0.5467

16.27

21.87

5.24

0.1808

0.5467

389

E13

21.63

28.27

22.77

0.1683

0.4950

20.79

27.22

20.83

0.1692

0.4984

390

E14

21.98

20.65

19.78

0.2248

0.4752

21.40

19.71

18.87

0.2291

0.4747

391

E15

29.24

34.39

10.74

0.2026

0.5361

29.24

33.50

10.09

0.2081

0.5365

392

E16

11.87

13.36

9.22

0.1978

0.5012

11.38

12.46

8.45

0.2034

0.5015

393

E17

19.09

17.52

9.58

0.2458

0.5076

18.10

16.15

8.56

0.2532

0.5082

394

E18

20.69

27.32

10.90

0.1787

0.5309

20.15

26.50

10.18

0.1799

0.5321

395

E19

14.75

16.65

18.94

0.1837

0.4663

14.39

15.99

18.65

0.1855

0.4640

396

E20

4.55

5.48

1.82

0.1976

0.5348

3.93

4.63

1.26

0.2036

0.5400

397

E21

11.26

14.39

2.99

0.1908

0.5486

10.78

13.64

2.48

0.1935

0.5509

398

E22

5.67

9.58

5.80

0.1359

0.5170

4.97

8.53

5.36

0.1334

0.5152

399

F1

2.70

3.10

2.35

0.1917

0.4961

2.31

2.62

1.87

0.1956

0.4995

400

F2

2.53

3.16

2.17

0.1791

0.5040

2.09

2.65

1.62

0.1793

0.5107

Table A.26: Nikon D100 and D40 measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 361-400).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements Index

Patch#

X

401

F3

2.42

402

F4

2.27

403

F5

404 405 406

Y[ cd/m2 ]

210

Nikon D40 measurements

Z

u’

v’

X

3.32

2.05

0.1661

0.5120

1.90

3.43

1.93

0.1529

0.5186

1.70

6.45

8.08

5.11

0.1806

0.5085

F6

5.74

8.14

4.62

0.1621

F7

4.97

8.08

3.84

0.1443

F8

4.82

8.12

3.66

407

F9

13.14

16.10

408

F10

12.13

16.17

409

F11

11.51

16.48

410

F12

10.68

411

F13

20.46

412

F14

413

F15

414

Y[ cd/m2 ]

Z

u’

v’

2.69

1.54

0.1619

0.5164

2.76

1.33

0.1440

0.5275

5.80

7.19

4.51

0.1824

0.5087

0.5169

5.00

7.19

4.06

0.1600

0.5175

0.5282

4.31

7.24

3.22

0.1405

0.5316

0.1402

0.5310

4.01

7.16

3.11

0.1327

0.5337

9.96

0.1847

0.5093

11.98

14.46

9.06

0.1871

0.5083

9.20

0.1719

0.5156

11.06

14.56

8.49

0.1735

0.5140

8.91

0.1613

0.5197

10.30

14.97

7.80

0.1596

0.5217

16.71

8.14

0.1495

0.5263

9.68

15.57

7.09

0.1463

0.5298

27.28

22.18

0.1650

0.4948

18.89

24.94

20.29

0.1665

0.4946

19.84

17.74

17.74

0.2341

0.4707

18.75

16.19

17.66

0.2383

0.4632

29.16

33.89

8.69

0.2070

0.5412

28.20

32.40

8.05

0.2095

0.5417

F16

8.95

9.72

6.90

0.2040

0.4986

8.11

8.77

6.06

0.2054

0.5001

415

F17

16.39

14.20

7.15

0.2613

0.5095

15.26

12.69

6.26

0.2720

0.5089

416

F18

18.52

25.34

8.52

0.1746

0.5376

17.61

23.90

7.68

0.1764

0.5389

417

F19

11.78

12.95

17.54

0.1821

0.4506

10.99

11.78

17.20

0.1838

0.4430

418

F20

2.14

2.88

1.50

0.1717

0.5202

1.50

2.11

0.92

0.1670

0.5290

419

F21

3.79

6.51

1.45

0.1434

0.5538

3.09

5.55

1.04

0.1382

0.5582

420

F22

5.42

9.24

1.63

0.1455

0.5585

4.73

8.38

1.06

0.1416

0.5644

421

G1

2.15

2.35

2.09

0.1968

0.4843

1.72

1.87

1.59

0.1992

0.4872

422

G2

2.22

2.51

2.19

0.1912

0.4867

1.59

1.90

1.67

0.1815

0.4871

423

G3

2.01

2.51

2.27

0.1729

0.4861

1.51

1.90

1.72

0.1714

0.4863

424

G4

1.89

2.60

2.51

0.1561

0.4834

1.34

1.96

1.87

0.1476

0.4855

425

G5

6.36

7.73

5.85

0.1819

0.4974

5.61

6.83

5.30

0.1810

0.4960

426

G6

5.60

7.79

6.20

0.1588

0.4971

5.01

6.96

5.55

0.1590

0.4969

427

G7

4.98

7.58

6.68

0.1434

0.4918

4.31

6.82

5.93

0.1386

0.4935

428

G8

4.84

7.67

6.42

0.1391

0.4961

4.09

6.69

5.84

0.1341

0.4938

429

G9

12.97

16.14

10.85

0.1804

0.5051

11.85

14.22

10.14

0.1855

0.5008

430

G10

11.90

15.53

11.42

0.1705

0.5008

10.78

14.06

10.38

0.1705

0.5005

431

G11

11.00

15.60

11.75

0.1570

0.5010

10.10

14.38

10.68

0.1567

0.5019

432

G12

10.60

16.69

12.72

0.1417

0.5022

9.38

14.84

11.04

0.1415

0.5038

433

G13

18.43

25.23

21.37

0.1599

0.4926

17.31

23.14

19.88

0.1633

0.4911

434

G14

17.56

14.79

17.06

0.2417

0.4581

16.51

13.70

16.15

0.2441

0.4559

435

G15

28.03

32.68

6.31

0.2087

0.5475

26.11

30.05

5.92

0.2112

0.5468

436

G16

6.40

6.96

5.08

0.2032

0.4969

5.66

5.92

4.32

0.2107

0.4960

437

G17

13.79

11.04

5.32

0.2824

0.5086

12.38

9.66

4.24

0.2913

0.5114

438

G18

16.55

22.98

6.55

0.1738

0.5430

15.32

21.42

5.73

0.1732

0.5449

439

G19

9.15

9.76

16.17

0.1794

0.4305

8.55

9.00

15.28

0.1805

0.4277

440

G20

2.62

4.03

1.98

0.1520

0.5255

2.02

3.31

1.39

0.1451

0.5335

Table A.27: Nikon D100 and D40 measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 401-440).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements Index

Patch#

X

441

G21

3.63

442

G22

1.60

443

H1

444 445

Y[ cd/m2 ]

211

Nikon D40 measurements

Z

u’

v’

X

5.22

3.84

0.1554

0.5027

3.09

1.78

2.37

0.1813

0.4522

1.21

2.07

2.22

2.22

0.1972

0.4755

H2

1.99

2.25

2.53

0.1840

H3

2.00

2.37

2.95

0.1727

446

H4

1.89

2.37

3.64

447

H5

5.19

5.88

448

H6

4.83

5.90

449

H7

4.33

5.73

450

H8

4.19

451

H9

12.88

452

H10

12.31

453

H11

11.87

454

H12

455 456

Y[ cd/m2 ]

Z

u’

v’

4.58

3.05

0.1529

0.5091

1.34

1.89

0.1795

0.4472

1.70

1.79

1.68

0.2034

0.4794

0.4675

1.59

1.78

1.97

0.1862

0.4683

0.4598

1.50

1.77

2.36

0.1708

0.4534

0.1562

0.4409

1.38

1.78

3.11

0.1480

0.4281

5.65

0.1884

0.4795

4.65

5.33

5.00

0.1868

0.4815

6.67

0.1704

0.4686

4.34

5.34

6.19

0.1685

0.4667

8.24

0.1505

0.4485

3.81

5.16

7.47

0.1473

0.4481

5.82

9.35

0.1403

0.4381

3.55

5.06

8.55

0.1351

0.4334

14.82

11.08

0.1920

0.4969

12.02

13.72

10.51

0.1929

0.4952

14.73

12.39

0.1821

0.4902

11.42

13.62

11.68

0.1821

0.4888

14.66

13.57

0.1742

0.4842

10.94

13.37

12.80

0.1751

0.4815

10.99

14.89

17.68

0.1529

0.4663

10.02

13.67

16.58

0.1513

0.4646

H13

17.12

23.80

20.54

0.1571

0.4916

15.90

21.89

19.75

0.1577

0.4883

H14

15.78

12.52

16.01

0.2508

0.4479

14.72

11.54

14.72

0.2538

0.4477

457

H15

26.90

31.16

4.91

0.2114

0.5509

25.64

29.57

4.36

0.2126

0.5518

458

H16

4.53

4.75

3.67

0.2090

0.4925

3.74

3.95

2.83

0.2093

0.4974

459

H17

11.59

8.66

3.78

0.3033

0.5100

10.58

7.72

3.00

0.3126

0.5133

460

H18

14.80

21.28

4.95

0.1698

0.5490

13.79

19.56

4.28

0.1723

0.5501

461

H19

7.11

7.35

14.27

0.1775

0.4129

6.37

6.65

13.14

0.1750

0.4112

462

H20

1.77

2.08

2.32

0.1774

0.4689

1.25

1.49

1.74

0.1728

0.4658

463

H21

3.07

4.17

4.99

0.1524

0.4656

2.52

3.57

4.34

0.1461

0.4649

464

H22

1.59

1.30

1.48

0.2488

0.4582

1.20

0.88

1.04

0.2725

0.4530

465

I1

2.62

2.64

3.21

0.2020

0.4583

2.23

2.27

2.58

0.2026

0.4641

466

I2

2.61

2.64

4.28

0.1894

0.4319

2.19

2.19

3.64

0.1907

0.4287

467

I3

2.60

2.71

5.89

0.1710

0.4003

2.19

2.14

5.36

0.1736

0.3824

468

I4

2.49

2.55

8.23

0.1522

0.3507

2.02

2.04

7.57

0.1463

0.3318

469

I5

6.59

7.08

6.61

0.1987

0.4805

5.89

6.29

6.00

0.1995

0.4787

470

I6

6.58

6.98

8.16

0.1940

0.4627

5.93

6.13

7.47

0.1969

0.4587

471

I7

6.26

6.66

9.85

0.1843

0.4417

5.72

5.95

8.88

0.1882

0.4402

472

I8

6.20

6.64

12.75

0.1722

0.4148

5.59

5.66

12.42

0.1748

0.3989

473

I9

13.07

14.48

11.16

0.1982

0.4941

12.01

13.01

10.54

0.2012

0.4904

474

I10

12.67

14.07

12.41

0.1942

0.4853

11.91

12.95

11.42

0.1983

0.4847

475

I11

12.66

14.03

13.79

0.1915

0.4774

11.94

12.89

12.57

0.1965

0.4774

476

I12

12.92

13.99

17.80

0.1871

0.4559

11.51

12.39

15.59

0.1887

0.4568

477

I13

16.22

22.53

20.65

0.1559

0.4873

15.02

20.80

19.12

0.1563

0.4870

478

I14

14.40

10.99

14.81

0.2576

0.4422

13.41

10.09

13.50

0.2612

0.4425

479

I15

26.02

29.41

3.89

0.2174

0.5528

24.44

27.57

3.16

0.2185

0.5545

480

I16

3.25

3.35

2.68

0.2108

0.4900

2.51

2.56

1.84

0.2163

0.4963

Table A.28: Nikon D100 and D40 measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 441-480).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements Index

Patch#

X

Y[ cd/m2 ]

481

I17

10.13

482

I18

13.10

483

I19

5.49

5.52

484

I20

3.59

485

I21

4.94

486

I22

487 488

212

Nikon D40 measurements

Z

u’

v’

X

7.11

2.84

0.3234

0.5107

8.44

19.11

3.75

0.1686

0.5530

11.85

12.94

0.1729

0.3907

4.87

4.05

6.73

0.1697

0.4314

6.29

10.05

0.1528

0.4372

1.12

1.15

0.87

0.2132

J1

1.42

1.38

1.69

J2

1.47

1.33

2.23

489

J3

1.51

1.30

490

J4

1.48

491

J5

4.37

492

J6

493

J7

494

Y[ cd/m2 ]

Z

u’

v’

5.74

1.98

0.3363

0.5141

17.51

2.82

0.1675

0.5569

4.86

11.86

0.1719

0.3859

3.05

3.42

5.88

0.1692

0.4276

4.45

5.74

9.11

0.1511

0.4382

0.4929

0.77

0.77

0.46

0.2249

0.5054

0.2091

0.4563

1.03

0.95

1.20

0.2182

0.4528

0.2093

0.4258

1.05

0.87

1.75

0.2165

0.4048

3.17

0.1982

0.3834

1.09

0.83

2.66

0.2026

0.3477

1.17

4.91

0.1757

0.3116

1.11

0.78

4.25

0.1734

0.2745

4.29

4.34

0.2136

0.4724

3.85

3.72

3.82

0.2166

0.4709

4.65

4.35

5.94

0.2120

0.4463

4.09

3.75

5.12

0.2159

0.4460

4.65

4.23

7.03

0.2089

0.4267

4.23

3.60

6.56

0.2169

0.4159

J8

5.00

4.11

10.08

0.2064

0.3817

4.40

3.48

9.28

0.2085

0.3708

495

J9

10.76

11.62

9.32

0.2020

0.4910

9.91

10.53

8.61

0.2046

0.4893

496

J10

10.71

11.19

10.08

0.2052

0.4824

9.80

10.19

9.27

0.2058

0.4816

497

J11

11.08

11.11

11.90

0.2076

0.4685

10.41

10.59

10.68

0.2070

0.4735

498

J12

11.62

11.25

15.46

0.2050

0.4465

10.57

10.04

14.30

0.2072

0.4428

499

J13

14.98

20.97

19.55

0.1543

0.4862

13.79

19.13

18.38

0.1550

0.4838

500

J14

12.79

9.27

13.23

0.2671

0.4357

11.95

8.38

12.39

0.2734

0.4315

501

J15

24.68

28.05

2.59

0.2178

0.5570

23.74

26.75

1.99

0.2204

0.5586

502

J16

2.19

2.28

1.86

0.2087

0.4889

1.56

1.58

1.14

0.2172

0.4962

503

J17

8.16

5.42

2.01

0.3418

0.5109

7.32

4.61

1.29

0.3645

0.5164

504

J18

11.36

16.91

2.24

0.1673

0.5601

10.56

15.73

1.83

0.1677

0.5618

505

J19

4.12

3.98

11.21

0.1693

0.3675

3.70

3.57

10.31

0.1676

0.3645

506

J20

1.15

0.93

3.31

0.1839

0.3338

0.74

0.54

2.66

0.1756

0.2899

507

J21

3.99

3.70

6.96

0.1983

0.4144

3.53

3.21

6.41

0.1991

0.4075

508

J22

1.04

1.13

1.18

0.1933

0.4722

0.62

0.74

0.43

0.1920

0.5121

509

K1

1.78

1.67

1.77

0.2221

0.4677

1.36

1.22

1.19

0.2334

0.4727

510

K2

1.96

1.68

1.94

0.2377

0.4582

1.53

1.21

1.48

0.2542

0.4510

511

K3

2.16

1.67

2.32

0.2521

0.4400

1.73

1.22

1.80

0.2724

0.4318

512

K4

2.56

1.71

2.87

0.2770

0.4183

2.11

1.28

2.35

0.2969

0.4063

513

K5

3.68

3.34

3.36

0.2309

0.4707

3.12

2.77

2.89

0.2343

0.4674

514

K6

4.23

3.35

4.31

0.2510

0.4472

3.69

2.80

3.79

0.2589

0.4416

515

K7

4.55

3.22

5.14

0.2663

0.4246

3.95

2.63

4.33

0.2800

0.4198

516

K8

5.24

3.24

6.56

0.2849

0.3967

4.60

2.61

5.90

0.2989

0.3825

517

K9

9.22

9.42

7.76

0.2122

0.4879

8.36

8.27

7.14

0.2172

0.4839

518

K10

9.67

9.07

9.18

0.2232

0.4712

8.88

8.24

8.04

0.2270

0.4735

519

K11

10.44

9.20

10.33

0.2328

0.4614

9.70

8.37

9.13

0.2385

0.4632

520

K12

11.93

8.98

13.55

0.2547

0.4316

10.95

8.11

11.96

0.2599

0.4332

Table A.29: Nikon D100 and D40 measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 481-520).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements Index

Patch#

X

521

K13

14.04

522

K14

11.81

523

K15

524 525

Y[ cd/m2 ]

213

Nikon D40 measurements

Z

u’

v’

X

19.80

19.03

0.1526

0.4841

12.89

8.25

12.52

0.2729

0.4289

10.79

23.74

26.54

1.67

0.2225

0.5596

K16

1.59

1.66

1.40

0.2073

K17

6.98

4.41

1.58

0.3585

526

K18

10.03

14.90

1.51

527

K19

3.31

3.14

528

K20

2.15

1.62

529

K21

5.68

530

K22

531

L1

532 533

Y[ cd/m2 ]

Z

u’

v’

18.21

17.20

0.1526

0.4854

7.37

11.06

0.2793

0.4292

21.92

24.21

1.15

0.2257

0.5608

0.4869

0.94

0.96

0.67

0.2166

0.4983

0.5097

6.09

3.64

0.85

0.3855

0.5181

0.1686

0.5633

9.26

13.82

1.03

0.1687

0.5662

9.61

0.1671

0.3566

2.88

2.67

9.06

0.1644

0.3426

3.14

0.2393

0.4066

1.64

1.09

2.50

0.2576

0.3846

4.63

7.97

0.2295

0.4207

5.29

4.12

7.35

0.2372

0.4161

0.99

1.16

1.36

0.1767

0.4642

0.59

0.74

0.79

0.1686

0.4733

1.74

1.67

1.49

0.2224

0.4807

1.34

1.22

0.98

0.2364

0.4862

L2

1.94

1.68

1.52

0.2446

0.4773

1.54

1.22

1.07

0.2676

0.4764

L3

2.16

1.74

1.48

0.2643

0.4787

1.75

1.26

1.13

0.2923

0.4715

534

L4

2.61

1.82

1.55

0.3027

0.4735

2.14

1.34

1.17

0.3317

0.4684

535

L5

3.67

3.32

2.77

0.2375

0.4839

3.15

2.76

2.32

0.2445

0.4821

536

L6

4.21

3.27

3.04

0.2700

0.4719

3.74

2.76

2.46

0.2842

0.4732

537

L7

4.77

3.23

3.12

0.3050

0.4644

4.12

2.62

2.47

0.3241

0.4640

538

L8

5.66

3.49

3.25

0.3338

0.4637

5.02

2.86

2.67

0.3590

0.4603

539

L9

8.93

9.02

6.72

0.2173

0.4938

8.18

8.19

6.12

0.2190

0.4934

540

L10

9.43

9.00

7.02

0.2280

0.4894

8.72

7.97

6.36

0.2367

0.4868

541

L11

10.21

8.86

7.76

0.2456

0.4792

9.32

7.91

6.75

0.2515

0.4802

542

L12

11.30

8.71

8.17

0.2715

0.4710

10.43

7.85

7.03

0.2797

0.4732

543

L13

12.27

17.76

18.09

0.1474

0.4801

11.31

15.97

16.34

0.1509

0.4793

544

L14

9.00

5.71

9.78

0.2903

0.4145

8.34

5.10

8.69

0.3008

0.4137

545

L15

21.24

22.93

0.62

0.2314

0.5622

20.22

21.61

-0.01

0.2349

0.5648

546

L16

1.02

1.09

1.03

0.1996

0.4793

0.50

0.51

0.38

0.2152

0.4937

547

L17

4.57

2.82

1.13

0.3633

0.5052

3.78

2.16

0.49

0.4016

0.5165

548

L18

7.80

11.51

0.54

0.1713

0.5690

7.27

10.70

0.10

0.1731

0.5730

549

L19

1.93

1.56

7.15

0.1644

0.3006

1.55

1.24

6.38

0.1577

0.2842

550

L20

2.89

2.01

2.43

0.2867

0.4485

2.40

1.50

1.90

0.3142

0.4410

551

L21

5.63

4.55

4.65

0.2562

0.4663

5.29

4.17

3.99

0.2652

0.4703

552

L22

1.49

1.25

0.87

0.2597

0.4929

1.03

0.78

0.39

0.2981

0.5049

553

Dmin

23.09

26.74

17.44

0.1938

0.5051

22.01

25.18

15.90

0.1968

0.5065

554

N1

18.13

20.45

14.57

0.1967

0.4993

17.14

19.43

12.78

0.1976

0.5040

555

N2

16.73

18.95

13.13

0.1967

0.5011

15.62

17.54

11.52

0.1994

0.5039

556

N3

14.69

16.70

11.43

0.1961

0.5019

13.69

15.31

10.62

0.1990

0.5007

557

N4

12.98

14.56

10.07

0.1984

0.5010

12.30

13.49

9.53

0.2022

0.4992

558

N5

11.50

12.71

8.83

0.2012

0.5003

10.87

11.96

8.22

0.2022

0.5009

559

N6

9.96

10.97

7.95

0.2008

0.4977

9.19

10.03

7.29

0.2026

0.4973

560

N7

8.60

9.51

6.82

0.2005

0.4984

7.75

8.44

5.98

0.2034

0.4988

Table A.30: Nikon D100 and D40 measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 521-560).

A.3. Physical Measurements in High-Dynamic-Range Characterisation

Nikon D100 measurements Index

Patch#

X

561

N8

7.41

562

N9

563

N10

564 565

Y[ cd/m

214

Nikon D40 measurements

Z

u’

v’

X

Z

u’

v’

8.10

5.77

0.2028

0.4985

6.68

7.12

5.07

0.2076

0.4979

6.34

7.00

5.07

0.2005

0.4979

5.72

6.11

4.38

0.2069

0.4976

5.54

5.96

4.46

0.2046

0.4951

4.77

5.15

3.69

0.2049

0.4980

N11

4.61

4.93

3.91

0.2042

0.4913

3.98

4.26

3.09

0.2066

0.4969

N12

3.98

4.23

3.39

0.2052

0.4906

3.36

3.52

2.63

0.2098

0.4945

566

N13

3.43

3.64

3.04

0.2046

0.4878

2.81

2.93

2.27

0.2098

0.4922

567

N14

2.90

3.06

2.60

0.2049

0.4867

2.34

2.41

1.82

0.2130

0.4933

568

N15

2.48

2.58

2.17

0.2080

0.4868

1.89

1.92

1.51

0.2146

0.4904

569

N16

2.06

2.12

1.89

0.2086

0.4826

1.50

1.51

1.14

0.2172

0.4929

570

N17

1.62

1.67

1.44

0.2082

0.4854

1.23

1.23

0.89

0.2199

0.4951

571

N18

1.37

1.39

1.23

0.2110

0.4832

1.04

1.01

0.77

0.2236

0.4913

572

N19

1.15

1.19

1.11

0.2068

0.4793

0.77

0.77

0.58

0.2193

0.4925

573

N20

1.01

1.06

0.93

0.2052

0.4842

0.63

0.64

0.48

0.2170

0.4933

574

N21

0.87

0.92

0.87

0.2017

0.4789

0.52

0.52

0.41

0.2170

0.4900

575

N22

0.77

0.83

0.80

0.1980

0.4776

0.43

0.44

0.34

0.2148

0.4906

576

Dmax

0.69

0.76

0.76

0.1943

0.4756

0.39

0.40

0.33

0.2124

0.4871

2

]

Y[ cd/m2 ]

Table A.31: Nikon D100 and D40 measurements by HDR characterisation (dark-side IT8.7/1 – patch index: 561-576).

A.3. Physical Measurements in High-Dynamic-Range Characterisation Radiometric measurements Index

Patch#

X

1

A1

995.63

2

B1

3

C1

4

D1

5

E1

6

F1

7 8

Y[ cd/m

215

Canon 350D measurements

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

769.20

168.81

0.3054

0.5309

1012.00

794.99

183.49

0.3001

0.5305

3334.80

2572.00

657.77

0.3039

0.5274

3313.93

2616.25

731.45

0.2962

0.5261

1141.34

1150.00

802.56

0.2195

0.4976

1152.74

1145.77

756.85

0.2237

0.5003

745.52

778.40

153.67

0.2315

0.5438

756.09

778.77

143.24

0.2350

0.5447

1306.82

1153.00

799.08

0.2489

0.4942

1255.95

1117.23

759.71

0.2476

0.4955

1230.12

1523.00

672.44

0.1886

0.5253

1210.60

1474.50

648.31

0.1916

0.5251

A2

3782.08

2603.00

161.71

0.3493

0.5409

3665.68

2597.90

252.28

0.3379

0.5388

B2

814.61

747.00

989.62

0.2174

0.4485

813.86

748.64

899.14

0.2208

0.4571

2

]

9

C2

2441.79

1472.00

310.03

0.3838

0.5205

2437.53

1562.42

360.51

0.3617

0.5217

10

D2

581.89

407.50

276.14

0.3094

0.4875

576.17

413.30

256.70

0.3054

0.4930

11

E2

2007.70

2164.00

231.15

0.2284

0.5539

1876.52

2020.10

195.06

0.2291

0.5549

12

F2

2266.96

1780.00

97.39

0.3099

0.5475

2201.09

1732.40

96.03

0.3092

0.5476

13

A3

389.11

343.60

672.03

0.2059

0.4091

389.15

337.37

600.45

0.2147

0.4187

14

B3

1093.39

1443.00

265.01

0.1858

0.5519

1116.02

1423.04

256.35

0.1922

0.5513

15

C3

1652.94

892.00

102.73

0.4310

0.5233

1749.88

1064.67

174.89

0.3836

0.5252

16

D3

3919.03

3417.00

202.02

0.2810

0.5513

3756.82

3296.34

229.62

0.2788

0.5505

17

E3

1646.87

991.60

467.07

0.3676

0.4980

1672.82

1068.57

472.59

0.3500

0.5030

18

F3

419.54

573.30

519.02

0.1587

0.4879

417.65

556.89

498.19

0.1627

0.4882

19

A4

7008.98

6508.00

2488.92

0.2501

0.5225

6296.21

5806.93

2193.75

0.2519

0.5227

20

B4

4046.31

3787.00

1494.28

0.2477

0.5217

3710.41

3455.20

1380.44

0.2487

0.5211

21

C4

2192.16

2053.00

815.23

0.2475

0.5215

2006.57

1863.14

732.82

0.2496

0.5215

22

D4

954.32

890.70

356.41

0.2481

0.5211

868.45

817.09

309.70

0.2472

0.5233

23

E4

349.61

327.80

133.37

0.2468

0.5206

334.58

314.37

121.86

0.2471

0.5224

24

F4

109.86

102.30

41.44

0.2485

0.5206

97.08

90.51

36.41

0.2483

0.5209

25

A1

2.41

1.80

0.44

0.3137

0.5272

3.32

2.74

0.80

0.2842

0.5267

26

B1

7.22

5.30

1.47

0.3169

0.5234

8.24

6.52

1.94

0.2947

0.5246

27

C1

2.69

2.61

1.84

0.2272

0.4960

3.70

3.53

1.99

0.2361

0.5074

28

D1

1.81

1.82

0.47

0.2372

0.5367

2.85

2.71

0.65

0.2512

0.5366

29

E1

3.08

2.60

1.72

0.2608

0.4953

3.76

3.27

1.84

0.2577

0.5046

30

F1

2.84

3.27

1.53

0.2011

0.5211

2.99

3.30

1.46

0.2106

0.5222

31

A2

6.84

4.62

0.64

0.3505

0.5327

7.51

5.57

0.87

0.3209

0.5352

32

B2

1.63

1.36

1.64

0.2419

0.4542

2.84

2.49

1.99

0.2464

0.4855

33

C2

4.90

2.88

0.72

0.3900

0.5157

6.00

4.13

1.06

0.3373

0.5225

34

D2

1.59

1.07

0.61

0.3267

0.4946

2.61

2.05

0.87

0.2901

0.5130

35

E2

3.76

3.89

0.56

0.2358

0.5488

4.49

4.55

0.80

0.2390

0.5449

36

F2

4.61

3.56

0.57

0.3088

0.5365

4.63

3.79

0.63

0.2926

0.5383

37

A3

0.93

0.80

1.21

0.2246

0.4348

2.16

1.92

1.41

0.2452

0.4911

38

B3

1.98

2.41

0.59

0.1985

0.5436

3.28

3.53

0.91

0.2225

0.5388

39

C3

3.51

1.93

0.33

0.4197

0.5193

4.99

3.45

0.82

0.3369

0.5245

40

D3

6.79

5.70

0.60

0.2887

0.5452

7.94

6.75

0.90

0.2839

0.5429

41

E3

3.41

2.02

0.91

0.3743

0.4989

4.45

3.17

1.25

0.3188

0.5118

42

F3

1.28

1.45

1.08

0.1949

0.4968

2.00

2.01

1.14

0.2246

0.5083

43

A4

8.69

7.78

3.24

0.2573

0.5182

9.74

8.84

3.39

0.2556

0.5217

44

B4

5.39

4.87

2.02

0.2551

0.5187

6.76

6.18

2.32

0.2540

0.5227

45

C4

3.47

3.17

1.29

0.2529

0.5198

4.81

4.32

1.61

0.2586

0.5222

46

D4

1.90

1.68

0.71

0.2600

0.5173

3.46

3.01

1.06

0.2672

0.5232

47

E4

1.05

0.93

0.35

0.2617

0.5215

2.39

2.09

0.73

0.2661

0.5232

48

F4

0.72

0.65

0.22

0.2588

0.5256

1.79

1.54

0.48

0.2716

0.5263

Table A.32: Radiometric measurements of test colour samples (transparency) by a spectroradiometer (Jeti Specbos 1200) and Canon 350D measurements by HDR characterisation (bright & dark-sides GretagMacbeth ColorCheckers).

A.3. Physical Measurements in High-Dynamic-Range Characterisation Nikon D100 measurements Index

Patch#

X

1

A1

1563.51

2

B1

3

C1

4 5

Y[ cd/m

216

Nikon D40 measurements

Z

u’

v’

X

Y[ cd/m2 ]

Z

u’

v’

1209.45

300.38

0.3054

0.5309

1529.01

1168.31

300.86

0.3065

0.5269

5388.68

4041.83

1114.36

0.3039

0.5274

4730.72

3579.01

1109.47

0.3065

0.5217

1754.96

1663.61

1152.08

0.2195

0.4976

1724.44

1645.95

1140.15

0.2312

0.4965

D1

1123.48

1132.00

216.52

0.2315

0.5438

1097.44

1109.21

212.98

0.2389

0.5433

E1

1938.59

1643.50

1131.49

0.2489

0.4942

1929.11

1619.97

1113.76

0.2610

0.4931

6

F1

1818.31

2146.55

1003.45

0.1886

0.5253

1782.92

2054.50

987.00

0.2005

0.5200

7

A2

5816.25

3963.53

403.72

0.3493

0.5409

5127.59

3598.59

347.07

0.3410

0.5385

8

B2

1320.94

1140.22

1366.04

0.2174

0.4485

1295.98

1116.05

1302.31

0.2362

0.4577

2

]

9

C2

3758.13

2325.53

565.57

0.3838

0.5205

3523.91

2200.79

547.96

0.3692

0.5188

10

D2

1042.99

721.89

417.94

0.3094

0.4875

995.71

693.53

401.29

0.3160

0.4953

11

E2

2780.15

2951.69

299.99

0.2284

0.5539

2690.60

2773.74

301.36

0.2381

0.5523

12

F2

3267.35

2520.37

151.11

0.3099

0.5475

3177.95

2468.17

133.69

0.3131

0.5471

13

A3

625.19

541.31

888.94

0.2059

0.4091

611.49

527.14

859.59

0.2204

0.4275

14

B3

1632.16

2016.94

396.01

0.1858

0.5519

1599.64

1902.93

395.86

0.2042

0.5466

15

C3

2659.14

1573.26

282.62

0.4310

0.5233

2594.40

1537.27

267.46

0.3923

0.5230

16

D3

5609.20

4854.46

290.62

0.2810

0.5513

5287.17

4467.88

306.34

0.2888

0.5491

17

E3

2544.93

1595.77

722.79

0.3676

0.4980

2544.25

1592.41

711.83

0.3563

0.5017

18

F3

661.74

818.98

759.73

0.1587

0.4879

638.28

786.74

728.27

0.1746

0.4842

19

A4

9909.69

9027.25

3347.35

0.2501

0.5225

8842.42

8049.25

3065.92

0.2549

0.5220

20

B4

5719.89

5271.14

1910.92

0.2477

0.5217

5165.40

4543.92

1916.52

0.2613

0.5172

21

C4

2880.11

2672.47

1049.17

0.2475

0.5215

2911.87

2612.87

1074.60

0.2570

0.5188

22

D4

1326.38

1197.24

485.98

0.2481

0.5211

1289.35

1176.01

466.36

0.2537

0.5207

23

E4

505.84

463.13

193.00

0.2468

0.5206

504.86

460.81

193.03

0.2526

0.5187

24

F4

155.71

141.03

60.98

0.2485

0.5206

152.71

139.86

56.38

0.2524

0.5202

25

A1

10.97

7.94

3.41

0.3137

0.5272

8.81

6.19

2.65

0.3218

0.5082

26

B1

26.26

18.00

6.48

0.3169

0.5234

22.04

15.09

5.20

0.3341

0.5145

27

C1

13.21

10.24

6.54

0.2272

0.4960

10.40

8.02

5.07

0.2852

0.4946

28

D1

8.68

7.13

2.95

0.2372

0.5367

5.80

4.81

1.66

0.2799

0.5220

29

E1

14.53

10.67

6.25

0.2608

0.4953

11.16

8.15

4.88

0.3016

0.4955

30

F1

9.90

8.71

4.46

0.2011

0.5211

7.50

6.69

3.54

0.2533

0.5082

31

A2

22.38

15.15

4.50

0.3505

0.5327

19.25

12.59

3.44

0.3524

0.5188

32

B2

11.99

8.88

5.70

0.2419

0.4542

8.62

6.17

4.60

0.2999

0.4830

33

C2

19.40

12.74

4.77

0.3900

0.5157

15.01

9.38

3.12

0.3635

0.5114

34

D2

14.44

10.09

5.05

0.3267

0.4946

9.92

6.49

3.22

0.3397

0.4995

35

E2

15.01

12.36

4.17

0.2358

0.5488

10.77

8.92

2.59

0.2828

0.5269

36

F2

16.10

11.57

3.78

0.3088

0.5365

12.01

8.55

2.37

0.3261

0.5221

37

A3

7.72

5.99

4.40

0.2246

0.4348

5.28

3.97

3.34

0.2820

0.4774

38

B3

10.18

8.95

3.85

0.1985

0.5436

7.41

6.59

2.97

0.2573

0.5149

39

C3

19.47

12.97

4.81

0.4197

0.5193

14.38

8.99

3.20

0.3622

0.5094

40

D3

24.46

18.33

5.26

0.2887

0.5452

18.78

13.86

3.34

0.3174

0.5270

41

E3

18.43

12.44

5.46

0.3743

0.4989

13.43

8.61

3.89

0.3481

0.5024

42

F3

7.60

6.27

4.07

0.1949

0.4968

5.01

4.15

2.90

0.2640

0.4916

43

A4

27.13

20.67

8.86

0.2573

0.5182

22.80

17.43

7.25

0.2980

0.5126

44

B4

19.46

15.35

6.66

0.2551

0.5187

14.96

11.63

5.27

0.2915

0.5101

45

C4

14.23

11.46

5.07

0.2529

0.5198

10.08

7.94

3.51

0.2887

0.5115

46

D4

10.92

8.79

4.20

0.2600

0.5173

6.48

5.14

2.45

0.2853

0.5086

47

E4

7.83

6.34

3.13

0.2617

0.5215

4.02

3.19

1.61

0.2840

0.5062

48

F4

5.48

4.48

2.36

0.2588

0.5256

2.87

2.29

1.24

0.2802

0.5036

Table A.33: Nikon D100 and D40 measurements of test colour samples by HDR characterisation (bright & dark-sides GretagMacbeth ColorCheckers).

A.4. Physical Measurements of the High-Luminance Display

A.4

217

Physical Measurements of the High-Luminance Display High-luminance display signals R

G

Normalised radiometric measurements

Index

Patch#

B

X

Z

L

A

1

A1

0

0

0

0.37

Y 0.6

0.37

5.42

-8.4

B 2.38

2

A2

0

0

85

1.64

1.29

6.45

11.23

11.11

-38.56

3

A3

0

0

170

8.66

5.48

39.55

28.05

34.05

-80.58

4

A4

0

0

255

27.38

18.12

125.73

49.65

45.68

-116.98

5

A5

0

85

0

2.59

5.23

0.8

27.38

-37.32

32.07

6

A6

0

85

85

4.33

6.74

7.81

31.21

-25.8

-9.74

7

A7

0

85

170

11.53

10.85

42.35

39.33

7.83

-64.74

8

A8

0

85

255

30.32

23.62

128.67

55.7

30.94

-108.32

9

A9

0

170

0

16.56

34.94

3.63

65.7

-74.24

70.28

10

B1

0

170

85

18.76

36.96

11.56

67.25

-69.12

39.64

11

B2

0

170

170

26.71

41.85

48.81

70.77

-48.05

-18.3

12

B3

0

170

255

46.06

55.17

137.52

79.14

-19.22

-73.12

13

B4

0

255

0

47.7

100.57

9.66

100.22

-105.5

102.52

14

B5

0

255

85

51.21

104.93

19.33

101.88

-103.16

79.93

15

B6

0

255

170

59.79

110.56

58.15

103.95

-90.65

28.81

16

B7

0

255

255

79.6

124.53

148.25

108.8

-68.87

-27.99

17

B8

85

0

0

4.51

3.08

0.71

20.38

23.39

21.78

18

B9

85

0

85

6.03

3.96

6.56

23.55

27.96

-17.83

19

C1

85

0

170

13.61

8.45

40.72

34.91

40.9

-70.3

20

C2

85

0

255

32.67

21.44

127.7

53.43

49.3

-111.65

21

C3

85

85

0

7.42

8.79

1.22

35.57

-9.58

39.76

22

C4

85

85

85

9.25

10.15

8.07

38.11

-4.37

1.14

23

C5

85

85

170

17.14

14.73

43.32

45.26

17.09

-55.73

24

C6

85

85

255

36.54

27.83

131

59.73

35.41

-102.76

25

C7

85

170

0

22.38

40.16

4.16

69.58

-61.6

73.65

26

C8

85

170

85

24.85

42.3

12.37

71.08

-57.15

43.87

27

C9

85

170

170

32.46

46.76

43.5

74.04

-40.26

-6.35

28

D1

85

170

255

53.09

61.36

137.91

82.57

-15.08

-67.41

29

D2

85

255

0

54.23

107.07

10.29

102.67

-98.78

104.68

30

D3

85

255

85

58.24

111.95

19.74

104.45

-96.52

83.5

31

D4

85

255

170

66.95

117.15

59.81

106.28

-84.33

31.16

32

D5

85

255

255

87.13

131.54

148.97

111.1

-64.45

-24.42

33

D6

170

0

0

25.64

15.93

2.1

46.89

50.46

49.63

34

D7

170

0

85

27.44

17

7.27

48.27

51.89

21.79

35

D8

170

0

170

35.14

21.55

40.8

53.55

57.37

-38.25

36

D9

170

0

255

55.23

35.01

125.92

65.75

62.86

-89.33

37

E1

170

85

0

29.61

22.94

2.74

55.01

31.28

58.17

38

E2

170

85

85

32.2

24.86

9.02

56.94

32.5

30.12

39

E3

170

85

170

40.29

29.49

43.41

61.21

40.99

-28.34

40

E4

170

85

255

61

43.39

131.24

71.82

50.69

-82.07

41

E5

170

170

0

45.87

55.46

5.65

79.3

-20.48

82.51

42

E6

170

170

85

48.52

57.97

13.18

80.72

-19.21

58.22

43

E7

170

170

170

58.17

64.39

50.2

84.17

-9.28

3.22

44

E8

170

170

255

78.51

78.01

138.23

90.78

6.63

-53.45

45

E9

170

255

0

78.76

124.28

11.82

108.72

-70.17

110.38

46

F1

170

255

85

83.7

130.29

20.81

110.69

-69.12

92.06

47

F2

170

255

170

93.08

136.55

59.65

112.69

-60.55

42.37

48

F3

170

255

255

114.23

151.23

148.85

117.15

-44.84

-13.92

49

F4

255

0

0

85.32

52.04

3.33

77.3

77.87

92.25

50

F5

255

0

85

87.45

53.25

8.21

78.02

78.73

69.44

Table A.34: Device signals and corresponding radiometric measurements of the high-luminance display (patch index: 1-50).

A.4. Physical Measurements of the High-Luminance Display High-luminance display signals Index

Patch#

R

51

F6

255

52

F7

53 54

G

218

Normalised radiometric measurements

B

X

Y

Z

L

A

0

170

94.87

57.64

43.21

80.54

81.19

B 5.23

255

0

255

113.69

70.4

129.77

87.19

83.43

-54.69

F8

255

85

0

89

59.32

4

81.47

66.72

95.1

F9

255

85

85

91.47

61.49

9.76

82.64

66.11

71.9

55

G1

255

85

170

98.97

65.8

45.62

84.89

69.5

9.78

56

G2

255

85

255

118.12

78.75

133.96

91.12

73.28

-50.39

57

G3

255

170

0

105.84

93.8

7.05

97.55

26.33

107.69

58

G4

255

170

85

108.2

96.51

14.29

98.64

25.46

86.15

59

G5

255

170

170

116.49

101.74

52.68

100.67

29.64

28.92

60

G6

255

170

255

136.03

115.28

142.32

105.63

36.51

-30.17

61

G7

255

255

0

139.3

164.4

13.3

120.91

-24.88

127.19

62

G8

255

255

85

143.59

170.47

22.62

122.57

-26.31

108.98

63

G9

255

255

170

151.99

175.54

62.58

123.93

-21.25

58.85

64

H1

255

255

255

171.92

189.87

152.32

127.64

-12.83

2.29

65

H2

0

0

15

0.33

0.53

0.37

4.76

-7.06

1.16

66

H3

0

0

30

0.4

0.64

0.39

5.81

-8.92

2.57

67

H4

0

0

51

0.41

0.58

0.78

5.27

-6.25

-5.55

68

H5

0

0

115

3.24

2.26

13.59

16.8

19.99

-53.1

69

H6

0

0

145

5.53

3.61

24.63

22.35

27.52

-67.56

70

H7

0

0

204

14.18

8.93

65.23

35.85

40.44

-95.55

71

H8

0

0

225

18.45

11.74

85.17

40.81

43.29

-104.2

72

H9

0

0

240

22.6

14.59

103.94

45.07

45.06

-110.73

73

I1

0

15

0

0.37

0.56

0.39

5.09

-6.9

1.46

74

I2

0

30

0

0.38

0.61

0.44

5.51

-8.26

1.17

75

I3

0

51

0

0.64

1.2

0.54

10.55

-19.8

7.94

76

I4

0

115

0

5.92

12.43

1.51

41.89

-52.27

47.13

77

I5

0

145

0

10.66

22.42

2.44

54.47

-63.8

59.66

78

I6

0

204

0

26.1

55.04

5.5

79.06

-86.3

82.8

79

I7

0

225

0

34.21

72.2

7.04

88.07

-94.61

91.37

80

I8

0

240

0

40.04

84.5

8.18

93.67

-99.68

96.5

81

I9

15

0

0

0.4

0.62

0.37

5.58

-7.79

2.69

82

J1

30

0

0

0.5

0.68

0.34

6.16

-6.23

4.19

83

J2

51

0

0

1.52

1.31

0.49

11.37

7.35

10.36

84

J3

115

0

0

9.08

5.9

1.06

29.15

32.84

31.02

85

J4

145

0

0

16.97

10.68

1.6

39.04

42.98

41.19

86

J5

204

0

0

41.46

25.56

2.79

57.62

60.07

62.22

87

J6

225

0

0

54.08

33.21

3.22

64.33

66.08

70.65

88

J7

240

0

0

62.96

38.57

3.35

68.44

69.8

76.85

89

J8

0

20

20

0.44

0.63

0.38

5.65

-6.67

2.48

90

J9

20

0

20

0.42

0.61

0.42

5.53

-6.99

1.63

91

K1

20

20

0

0.43

0.68

0.38

6.11

-8.84

3.31

92

K2

0

225

225

55.84

87.14

103.88

94.8

-60.81

-24.94

93

K3

225

0

225

75.59

46.38

87.41

73.79

74.02

-49.09

94

K4

225

225

0

95.22

115.5

10.56

105.71

-26.68

109.03

95

K5

20

20

20

0.5

0.68

0.48

6.16

-6.33

1.64

96

K6

51

51

51

2.44

3.06

2.39

20.27

-9.6

1.09

97

K7

128

128

128

26.33

29

23.22

60.78

-6.57

1.3

98

K8

204

204

204

94.15

103.76

82.41

101.44

-10.14

2.54

99

K9

225

225

225

122.29

134.74

106.92

112.12

-11.02

2.83

Table A.35: Device signals and corresponding radiometric measurements of the high-luminance display (patch index: 51-99).

A.5. Instruction for Colour Experiments

A.5

219

Instruction for Colour Experiments

In an experiment trial, during the time allowed for the participant’s eye to adapt to the environment lighting, observers were read this following instruction by a experimenter, adapted from [Kwak, 2003].

Instructions

Please sit comfortably and look at the test pattern. You will be shown a series of test colours in a random order. Your task will be to tell me what lightness, colourfulness and hue you see. You will enter corresponding numbers by using a keyboard. There is no time limit for each test colour and you can take as long as required until you report your estimations.

Lightness scaling Use the reference white as a standard, which has a lightness of 100, and your imaginary black, which has a lightness of zero. Describe the test colour by assigning a number, which is in the right relationship to the reference white and the imaginary black. (The reference white is displayed in the test pattern.)

Colourfulness scaling Colourfulness is an attribute of a visual sensation according to which an area appears to exhibit more or less of its hue. A neutral colour has no colourfulness, represented by zero on your scale. You are asked to assign a reasonable number to describe the colourfulness of the test colour. This is an open-ended scale since no top limit is set. The reference colourfulness patch in the test pattern should be remembered as 40 so that all subsequent test colours can be related to it.

Hue scaling There are four psychological primaries: red, yellow, green and blue. These four colours can be arranged as points around a circle and lie at opposite ends of x and y axes. You are asked to describe a hue as a proportion of two neighbouring primaries. First, decide whether or not you perceive any hue at all. If not, please reply ‘Neutral’. On the other hand if the test colour does not appear neutral then decide which of the four primaries is predominant. Next decide whether or not you see a trace of any other primary hue. If so, identify it. Finally, estimate the proportions in which the two primaries stand, e.g. an orange colour may be 60% yellow and 40% red.

A.6. Colour Appearance Data

A.6

220

Colour Appearance Data

This appendix provides the psychophysical experimental data which was used to develop our model. The data comprises three main parts: (1) physical measurements of colour stimuli (absolute X Y Z coordinates measured with a spectroradiometer; Y in [ cd/m2 ]), (2) perceptual attribute estimates (averaged over six participants) of the stimuli in terms of lightness J 0 , colourfulness M 0 , and hue quadrature H 0 , and (3) colour appearance attributes predicted by our colour appearance model in terms of lightness J, colourfulness M , hue quadrature H, brightness Q, chroma C, hue angle h, and saturation s. In each phase, 40 main colour patches were observed on a high-luminance display under different viewing conditions (different peak luminance level, background, ambient surround, and colour temperature, see Table A.36). Six observers who passed the Ishihara and City University vision tests for normal colour vision participated in the experiments. Each participant completed a total of 2,280 estimations (19 phases with 40 patches and 3 estimates each), which took about 10 hours per participant. The averaged repeatability in terms of CV was 11.83% for lightness, 22.82% for colourfulness, and 11.42% for hue. Note that our experimental results are compatible with the LUTCHI data [Luo et al., 1991a]. The lightness attribute has a scale of 0–100 relative to the brightness of reference white. The colourfulness attribute has an absolute scale of 0 to unlimited. The hue attribute varies from 0 to 400: redness (0) - yellowness (100) - greenness (200) - blueness (300) - redness (400); observers were allowed to judge a hue as undefined (denoted as ‘N/A’ below) if the shown patch was too dark, too bright, or neutral. Phase

Medium

CCT

Number

Type

[K]

1

LCD

5935

2

LCD

3

LCD

4

La(10 ◦ )

Reference White (Abs.) X

Y[ cd/m2 ]

32.51

43.88

6265

93.68

6265

376.08

LCD

6265

5

LCD

6

Z

[ cd/m2 ]

Background (Abs.) %

X

Y[ cd/m2 ]

Ambient Z

Luminance

25.72

12.06

24.52%

8.64

10.76

7.66

dark

122.90

84.06

31.26

21.81%

21.94

26.81

20.98

dark

493.60

348.19

30.07

0.34%

1.22

1.68

1.23

dark

396.71

521.00

371.57

144.29

23.82%

103.97

126.90

104.31

dark

6197

419.38

562.60

373.03

466.17

87.11%

366.89

490.10

326.54

dark

LCD

6197

800.51

1067.00

714.78

70.00

0.32%

2.43

3.37

2.46

dark

7

LCD

6197

800.22

1051.00

736.81

269.90

22.06%

189.58

231.80

183.53

dark

8

LCD

6390

1712.46

2176.00

1689.59

136.44

0.22%

3.59

4.76

4.12

dark

9

LCD

6392

1721.81

2189.00

1697.69

367.61

12.16%

229.77

266.20

234.15

dark

10

LCD

6391

1726.31

2196.00

1702.69

576.71

22.90%

422.04

502.90

427.31

dark

11

LCD

6387

1732.94

2205.00

1708.66

1204.03

55.06%

1012.64

1214.00

1010.70

dark

12

LCD

6388

1758.09

2241.00

1729.44

2009.94

94.87%

1667.67

2126.00

1636.72

dark

13

LCD

7941

995.40

1274.00

1293.24

312.49

21.16%

228.16

269.55

314.68

dark

14

LCD

1803

1063.72

1233.00

356.61

284.36

19.17%

217.84

236.35

73.20

dark

15

LCD

6391

1730.32

2201.40

1705.62

604.76

22.90%

432.93

533.92

429.98

average

16

Trans.

5823

6890.34

8519.00

5936.19

941.70

5.61%

408.02

477.60

256.97

dark

17

Trans.

5823

6849.58

8458.00

5911.69

2120.66

21.41%

1499.62

1811.00

1062.72

dark

18

Trans.

5921

13676.05

16860.00

12201.60

1860.80

5.49%

791.12

926.00

523.38

dark

19

Trans.

5937

13295.61

16400.00

11918.19

4183.52

21.81%

2963.80

3577.00

2194.24

dark

Table A.36: Summary of viewing conditions for all 19 phases.

A.6. Colour Appearance Data

Physical measurements Y[ cd/m2 ]

Perceptual estimates Z

J0

M0

Colour

X

1

0.61

0.56

0.30

8.67

12.44

2

0.44

0.50

0.30

2.00

2.33

3

1.31

0.85

4.93

32.50

4

0.69

0.79

0.35

5

1.98

1.42

0.37

6

2.60

1.56

8.57

7

0.62

0.94

0.50

8

3.36

2.04

H0

221

Predicted appearance J

M

H

Q

C

h

s

0.25

1.00

27.35

-11.8

1.64

34.59

8.46

408.40

N/A

1.00

15.05

N/A

1.64

19.04

N/A

302.97

64.28

305.00

30.33

60.32

312.7

49.73

76.30

262.98

110.13

6.67

6.75

77.50

23.80

18.95

21.3

39.03

23.97

36.66

69.68

43.00

50.84

1.33

39.00

50.56

5.8

63.95

63.95

24.76

88.92

41.33

64.26

325.83

41.64

62.39

316.9

68.29

78.92

270.91

95.58

15.83

30.28

179.17

27.50

13.02

196.9

45.09

16.46

161.03

53.73

14.17

52.00

61.81

305.83

45.22

70.18

306.9

74.16

88.76

251.80

97.28

9

1.00

1.79

0.41

39.50

55.77

207.50

39.39

34.51

158.4

64.60

43.64

126.78

73.08

10

4.64

3.07

0.59

55.00

60.54

10.50

50.61

63.05

10.7

83.00

79.74

28.58

87.16

11

1.80

2.00

0.51

31.17

27.80

127.00

42.55

32.86

55.3

69.77

41.57

61.07

68.63

12

1.51

2.90

0.48

42.67

55.22

190.00

46.39

43.96

157.6

76.08

55.60

126.16

76.01

13

3.03

2.37

2.02

46.17

48.29

372.50

46.54

46.47

369.6

76.32

58.78

347.94

78.03

14

2.90

4.18

5.21

47.33

34.51

275.33

52.58

36.42

304.3

86.22

46.07

246.63

65.00

15

8.58

5.52

0.70

68.33

52.87

6.67

59.22

70.35

20.3

97.12

88.98

35.88

85.11

16

6.56

4.27

0.85

52.83

57.80

3.83

55.43

66.23

9.9

90.90

83.76

27.95

85.36

17

4.06

3.44

11.91

53.33

52.41

320.33

51.54

57.14

313.8

84.51

72.27

265.06

82.22

18

6.53

5.48

10.96

55.17

39.94

349.17

58.53

49.34

333.9

95.98

62.40

299.49

71.69

19

10.06

6.78

0.84

61.83

62.88

36.17

62.22

68.93

25.0

102.02

87.18

39.44

82.19

20

3.43

4.62

6.41

49.00

38.09

283.67

54.23

38.54

307.7

88.93

48.75

253.47

65.84

21

3.34

6.58

1.15

59.50

50.56

187.50

58.22

49.07

159.5

95.47

62.07

127.63

71.69

22

9.63

7.34

0.89

58.67

57.49

54.50

62.96

63.28

36.1

103.24

80.03

47.58

78.29

23

12.91

9.54

15.52

62.33

42.42

363.67

68.17

54.80

350.7

111.79

69.31

324.08

70.02

24

12.88

10.57

2.40

57.83

38.24

52.00

68.98

55.04

24.7

113.12

69.61

39.19

69.75

25

4.98

7.59

5.42

54.83

24.06

232.00

61.31

21.52

276.9

100.55

27.22

218.14

46.26

26

6.70

8.31

16.89

63.83

37.24

296.67

63.41

49.03

306.0

103.99

62.01

250.06

68.67

27

16.90

15.00

13.88

67.83

44.30

369.50

75.81

43.06

361.7

124.32

54.46

338.37

58.85

28

5.20

10.39

1.00

65.83

51.39

181.33

65.34

61.27

152.0

107.15

77.50

122.01

75.62

29

16.57

17.14

5.95

66.17

35.76

72.50

77.59

37.05

29.6

127.24

46.86

42.79

53.96

30

15.79

17.06

1.45

69.17

61.28

91.17

76.84

61.20

91.1

126.01

77.41

84.51

69.69

31

19.50

20.00

18.41

73.67

27.74

374.50

81.49

33.10

351.4

133.63

41.86

325.03

49.77

32

18.31

19.75

9.07

76.00

29.21

60.83

80.70

29.95

9.7

132.35

37.88

27.75

47.57

33

16.15

17.81

1.56

69.00

63.58

91.67

77.69

60.71

93.4

127.39

76.78

85.95

69.03

34

9.05

17.46

2.71

77.83

57.62

180.83

75.18

54.11

154.5

123.29

68.43

123.88

66.25

35

20.65

25.84

15.39

82.00

3.09

N/A

86.88

13.41

N/A

142.47

16.96

N/A

30.68

36

18.55

27.94

9.15

85.50

27.74

167.83

87.84

30.66

140.6

144.05

38.78

114.00

46.14

37

21.31

28.42

2.29

90.00

70.67

103.17

88.42

63.10

119.6

145.00

79.80

100.83

65.97

38

24.41

30.88

16.30

88.17

4.65

82.83

92.15

14.13

22.9

151.11

17.87

37.84

30.58

39

21.22

30.34

2.84

92.83

64.35

111.67

90.04

60.75

125.9

147.65

76.84

104.62

64.15

40

27.32

33.76

24.28

100.00

1.44

N/A

95.46

17.05

N/A

156.55

21.56

N/A

33.00

Table A.37: Physical measurements, perceptual estimates, and our model’s predictions (Phase 1).

A.6. Colour Appearance Data

Physical measurements

Perceptual estimates Z

J0

M0

1.79

0.99

10.83

17.88

1.61

0.87

2.83

4.47

4.48

2.79

17.89

33.67

68.77

4

2.17

2.53

0.94

8.00

5

6.63

4.61

1.30

40.33

6

9.05

5.25

30.85

7

1.94

3.13

1.59

8

11.45

6.74

Colour

X

1

2.03

2

1.35

3

Y[ cd/m2 ]

H0

222

Predicted appearance J

M

9.00

18.87

32.42

50.00

1.00

15.22

304.17

33.24

65.86

11.74

74.17

28.91

54.01

2.50

42.06

46.17

68.51

334.17

23.67

35.88

200.33

49.71

60.33

71.36

H

Q

C

h

s

-6.2

35.41

38.60

14.10

95.69

33.3

1.88

18.12

45.55

284.83

311.7

62.37

78.42

261.17

102.76

25.03

58.7

54.25

29.80

63.40

67.92

57.61

7.2

78.92

68.60

25.84

85.44

44.17

66.65

317.6

82.88

79.36

272.13

89.67

32.23

22.07

182.9

60.48

26.28

147.37

60.41

267.83

47.08

74.93

306.4

88.34

89.22

250.86

92.10

9

3.16

6.01

1.17

43.67

58.28

210.00

42.68

45.23

158.8

80.08

53.86

127.12

75.16

10

15.26

9.99

1.87

57.17

65.50

4.17

53.32

69.88

15.6

100.04

83.21

32.31

83.58

11

5.95

6.69

1.62

43.17

34.53

126.17

45.71

39.26

69.8

85.76

46.74

70.84

67.66

12

4.86

9.63

1.50

51.83

55.61

191.67

49.35

53.16

157.7

92.60

63.29

126.28

75.77

13

10.14

7.86

7.24

47.33

46.51

375.00

49.43

50.53

372.1

92.76

60.16

350.84

73.80

14

9.75

14.17

18.62

51.83

39.92

275.67

55.64

37.99

300.7

104.40

45.23

238.92

60.32

15

28.02

17.81

2.24

72.33

62.82

12.00

61.96

77.22

24.8

116.25

91.95

39.26

81.50

16

21.27

13.69

2.90

63.67

57.17

9.00

57.98

72.43

12.8

108.79

86.24

30.18

81.59

17

14.13

11.83

42.42

62.00

51.86

327.67

54.45

59.27

313.9

102.18

70.57

265.39

76.16

18

21.82

18.18

38.37

60.00

48.68

349.17

61.31

50.97

335.2

115.04

60.69

301.60

66.56

19

32.85

22.11

2.52

67.83

70.82

44.17

65.16

75.59

32.6

122.26

90.00

44.99

78.63

20

11.81

15.83

23.14

53.83

35.68

278.67

57.49

39.43

305.3

107.87

46.95

248.62

60.46

21

10.88

21.90

3.94

62.33

61.04

190.33

61.36

56.96

159.7

115.13

67.82

127.83

70.34

22

31.73

24.31

2.79

59.50

60.09

52.50

66.18

69.72

44.8

124.18

83.01

53.76

74.93

23

42.90

31.55

54.69

68.67

46.60

365.00

71.36

57.09

352.5

133.91

67.98

326.51

65.30

24

42.19

34.80

8.60

68.50

63.93

50.83

72.43

58.55

30.4

135.90

69.71

43.41

65.64

25

16.77

25.49

19.44

58.17

31.24

239.50

64.65

23.20

255.7

121.30

27.63

201.72

43.74

26

23.01

28.32

60.21

74.17

50.83

300.83

66.78

50.15

304.6

125.30

59.72

247.10

63.27

27

55.72

49.57

49.14

73.83

35.20

359.50

79.64

43.47

364.2

149.43

51.76

341.49

53.94

28

16.71

33.71

3.38

67.33

64.72

188.33

68.39

69.10

152.2

128.32

82.28

122.16

73.38

29

53.72

55.68

20.54

68.83

34.82

57.83

81.35

39.86

40.6

152.63

47.47

50.79

51.11

30

51.81

56.58

5.15

71.17

72.14

92.50

81.05

67.56

97.2

152.07

80.44

88.29

66.65

31

64.64

66.23

65.07

72.33

24.89

361.17

85.89

32.21

354.3

161.17

38.35

328.83

44.70

32

60.30

65.35

32.24

76.33

25.63

60.33

85.15

31.05

23.4

159.77

36.97

38.21

44.09

33

52.93

58.99

5.31

73.17

72.06

81.17

81.93

67.78

99.9

153.73

80.70

89.96

66.40

34

29.44

57.45

9.58

83.83

60.51

178.33

79.09

60.41

155.0

148.41

71.93

124.24

63.80

35

68.72

86.00

54.73

83.17

3.87

100.40

92.18

12.39

19.5

172.97

14.75

35.28

26.77

36

61.10

92.38

32.47

83.83

26.46

187.00

93.17

35.47

143.4

174.83

42.24

115.95

45.04

37

69.41

93.51

7.78

87.67

74.78

119.17

93.68

70.30

123.4

175.79

83.70

103.10

63.24

38

80.86

102.40

58.25

88.67

6.54

92.00

98.11

15.93

61.7

184.09

18.97

65.43

29.42

39

68.30

98.26

9.56

87.50

71.09

138.33

95.01

67.61

128.7

178.27

80.50

106.29

61.58

40

90.04

111.20

85.51

99.67

1.63

N/A

101.60

13.38

N/A

190.63

15.93

N/A

26.49

Table A.38: Physical measurements, perceptual estimates, and our model’s predictions (Phase 2).

A.6. Colour Appearance Data

Physical measurements Colour

X

Y[ cd/m2 ]

223

Perceptual estimates Z

J0

M0

H0

Predicted appearance J

M

H

Q

C

h

s

1

3.98

3.11

1.23

28.71

59.45

12.00

26.80

49.78

-0.6

60.30

54.93

19.52

90.86

2

2.00

2.25

1.20

22.71

23.45

94.86

1.00

22.37

46.4

2.25

24.68

54.88

315.27

3

12.12

6.55

54.72

44.71

78.01

308.00

40.27

83.45

310.7

90.64

92.08

259.34

95.95

4

4.29

4.70

1.37

31.14

38.64

126.43

33.80

38.35

68.5

76.08

42.32

69.96

71.00

5

19.38

12.55

2.66

46.71

59.89

7.71

49.62

77.74

6.2

111.66

85.78

25.06

83.44

6

26.96

14.53

100.40

55.71

78.48

336.14

51.32

75.54

317.9

115.49

83.36

272.76

80.88

7

3.25

5.83

2.84

40.86

51.77

193.14

35.85

33.05

187.9

80.68

36.47

152.03

64.00

8

37.60

20.32

174.31

61.57

87.33

306.14

54.79

85.99

306.1

123.31

94.88

250.16

83.50

9

6.73

13.78

2.10

44.71

63.39

189.43

47.60

62.32

160.7

107.13

68.77

128.61

76.27

10

50.34

31.42

4.80

58.71

68.92

11.43

61.29

85.11

13.4

137.93

93.91

30.65

78.55

11

14.94

15.70

2.91

46.14

46.81

100.71

50.87

55.55

74.1

114.49

61.29

73.63

69.65

12

11.84

24.79

3.19

57.43

72.34

185.00

54.97

69.86

158.6

123.70

77.08

126.97

75.15

13

29.48

20.84

19.49

53.29

49.26

373.29

55.86

66.75

370.9

125.72

73.65

349.50

72.86

14

26.67

38.05

57.55

55.71

52.99

277.43

61.21

45.09

299.4

137.77

49.76

236.99

57.21

15

101.12

62.46

7.03

78.29

72.72

16.43

70.52

86.60

22.2

158.70

95.56

37.31

73.87

16

73.91

45.77

7.46

72.00

62.62

7.43

66.29

84.51

10.0

149.19

93.25

28.04

75.26

17

42.31

32.35

142.31

64.00

63.65

327.57

60.42

65.57

313.9

135.98

72.35

265.40

69.44

18

69.73

54.11

130.56

70.86

44.53

347.14

67.85

54.78

335.4

152.71

60.44

301.92

59.89

19

118.82

75.96

7.02

83.57

75.80

30.00

73.16

86.10

31.5

164.64

95.01

44.18

72.32

20

32.51

42.52

72.61

57.14

35.65

278.14

62.86

46.15

304.8

141.48

50.93

247.64

57.12

21

29.90

62.32

10.42

64.86

59.75

185.86

67.01

67.06

161.6

150.81

74.00

129.30

66.68

22

111.26

79.33

8.23

77.14

69.29

45.86

73.42

80.10

44.3

165.24

88.38

53.43

69.62

23

150.53

104.20

192.87

80.57

53.88

358.57

78.12

58.94

352.0

175.82

65.03

325.76

57.90

24

152.22

116.60

25.10

78.71

44.39

48.14

79.23

64.55

27.0

178.31

71.23

40.87

60.17

25

47.64

72.60

59.87

64.43

29.76

230.71

69.96

25.63

260.5

157.45

28.28

205.32

40.35

26

72.33

83.96

212.76

74.71

56.58

284.29

72.34

52.91

303.8

162.82

58.38

245.57

57.01

27

196.50

163.40

167.20

82.43

31.64

369.86

84.80

45.65

362.6

190.85

50.38

339.49

48.91

28

50.02

103.90

10.61

78.29

78.02

198.57

74.22

77.10

153.0

167.04

85.08

122.72

67.94

29

189.77

185.10

65.43

79.57

28.18

49.71

86.19

42.07

37.5

193.97

46.42

48.56

46.57

30

181.71

185.00

15.76

71.71

83.53

88.00

85.57

73.51

97.9

192.59

81.11

88.73

61.78

31

224.22

216.50

231.01

86.57

12.34

372.17

89.19

33.38

353.1

200.73

36.83

327.23

40.78

32

209.32

213.70

104.63

85.71

24.45

52.71

88.68

33.11

19.5

199.57

36.53

35.28

40.73

33

185.40

192.70

16.70

82.14

89.69

94.29

86.22

72.86

100.5

194.04

80.39

90.27

61.27

34

90.14

181.00

28.83

87.00

49.99

181.71

83.20

63.51

156.5

187.25

70.08

125.37

58.24

35

230.13

279.00

188.02

95.57

2.44

90.00

93.10

13.78

7.6

209.54

15.20

26.15

25.64

36

199.61

300.90

107.13

91.29

19.57

173.86

93.68

33.19

143.7

210.83

36.62

116.14

39.68

37

240.67

310.60

26.70

95.14

75.00

103.29

94.43

70.36

123.0

212.51

77.64

102.81

57.54

38

277.13

338.40

201.47

98.29

4.15

77.00

97.07

15.65

55.2

218.46

17.27

61.01

26.76

39

235.29

330.50

32.61

94.57

75.78

105.43

95.43

67.03

128.2

214.77

73.97

106.01

55.87

40

316.13

374.50

320.99

100.00

1.97

N/A

99.42

15.85

N/A

223.75

17.49

N/A

26.62

Table A.39: Physical measurements, perceptual estimates, and our model’s predictions (Phase 3).

A.6. Colour Appearance Data

Physical measurements Colour

X

Y[ cd/m2 ]

224

Perceptual estimates Z

J0

M0

H0

Predicted appearance J

M

H

Q

C

h

s

1

9.16

7.98

4.02

15.57

38.10

6.71

19.04

36.37

3.6

43.15

40.02

22.97

91.81

2

6.06

7.32

3.85

6.71

6.87

66.00

1.00

17.39

22.0

2.27

19.13

37.19

276.97

3

20.51

12.82

85.04

36.43

80.24

301.86

33.32

71.67

305.6

75.53

78.86

249.17

97.41

4

10.82

12.83

4.38

19.00

31.57

129.14

31.77

29.66

57.6

72.00

32.64

62.62

64.19

5

30.53

20.90

6.19

39.57

55.90

7.43

42.47

63.33

13.8

96.25

69.68

30.91

81.11

6

41.56

24.10

142.99

48.71

78.74

330.14

44.41

71.71

312.5

100.67

78.90

262.70

84.40

7

9.07

15.38

8.09

25.29

49.74

195.00

33.98

26.48

187.7

77.01

29.14

151.81

58.64

8

50.87

30.70

224.51

58.86

74.69

299.57

47.01

80.76

300.4

106.56

88.86

238.43

87.06

9

14.56

28.62

5.81

43.86

64.39

195.00

43.65

49.87

158.2

98.93

54.87

126.62

71.00

10

66.86

42.69

8.51

57.57

79.69

5.29

52.99

76.76

23.6

120.09

84.46

38.37

79.95

11

27.09

30.92

7.31

36.71

41.42

119.00

46.27

43.46

70.1

104.87

47.82

71.00

64.38

12

22.76

46.24

7.43

48.71

60.89

192.29

50.47

58.49

156.4

114.38

64.36

125.28

71.51

13

47.16

36.61

36.23

45.14

51.02

369.43

50.14

54.64

374.5

113.65

60.12

353.64

69.34

14

45.42

66.68

89.77

54.71

43.16

262.00

56.50

41.17

300.2

128.06

45.30

237.85

56.70

15

117.67

73.41

9.99

79.71

78.48

9.71

61.10

84.14

35.4

138.49

92.58

47.02

77.95

16

92.49

58.18

13.58

66.14

68.11

7.00

57.60

79.41

23.6

130.54

87.38

38.39

78.00

17

64.20

55.26

192.04

59.43

59.16

323.14

55.04

62.71

309.7

124.75

69.00

257.34

70.90

18

101.07

85.40

179.74

59.00

57.92

354.57

62.23

54.08

336.9

141.05

59.51

304.20

61.92

19

135.23

90.72

10.80

76.43

87.41

34.29

64.15

81.68

44.8

145.39

89.87

53.77

74.95

20

52.89

71.18

106.59

53.86

38.63

294.29

57.71

42.35

303.9

130.80

46.59

245.78

56.90

21

48.97

99.74

20.43

64.00

70.25

188.71

61.93

60.07

157.9

140.36

66.09

126.45

65.42

22

134.40

104.00

12.86

67.29

78.35

47.14

65.80

74.57

51.9

149.15

82.05

58.72

70.71

23

186.12

138.80

247.22

65.43

73.81

358.86

71.40

60.80

355.6

161.82

66.90

330.64

61.30

24

177.17

148.90

43.04

68.29

54.41

44.86

72.07

61.04

36.5

163.34

67.16

47.81

61.13

25

76.86

116.30

94.50

50.14

33.23

221.43

65.30

25.30

263.0

148.02

27.84

207.27

41.35

26

100.83

126.80

267.11

75.29

46.65

296.00

66.99

53.57

302.3

151.84

58.94

242.32

59.40

27

234.65

212.30

222.34

74.00

44.30

380.71

79.27

45.86

367.4

179.68

50.46

345.39

50.52

28

75.81

154.20

17.66

71.14

83.62

186.00

69.24

73.25

151.2

156.92

80.60

121.44

68.32

29

229.05

242.10

101.05

70.00

31.80

57.86

81.44

40.56

39.4

184.60

44.62

49.92

46.87

30

215.15

239.60

23.68

78.43

91.76

89.29

80.54

72.22

99.1

182.55

79.46

89.42

62.90

31

271.71

281.80

287.69

76.71

36.49

374.29

85.47

34.11

356.9

193.72

37.53

332.31

41.96

32

252.87

277.50

150.11

75.14

24.60

54.71

84.73

31.98

19.0

192.05

35.19

34.92

40.81

33

219.63

249.40

24.63

74.71

84.26

91.43

81.40

72.28

101.1

184.50

79.52

90.58

62.59

34

128.48

250.80

49.00

85.00

72.98

184.57

79.34

62.62

153.5

179.84

68.90

123.12

59.01

35

284.65

356.70

242.66

86.57

5.59

78.33

91.16

12.67

-3.1

206.62

13.94

17.20

24.76

36

259.59

389.50

153.34

88.57

32.13

176.71

92.80

36.01

141.5

210.33

39.62

114.62

41.38

37

285.73

388.40

37.88

92.29

92.38

102.57

92.81

73.76

122.9

210.36

81.15

102.75

59.21

38

334.78

424.90

260.98

92.43

12.43

98.57

97.10

15.74

35.4

220.09

17.32

47.04

26.74

39

282.70

408.60

46.24

93.43

68.66

111.71

94.20

70.91

128.4

213.50

78.02

106.12

57.63

40

369.89

460.90

365.11

99.14

1.77

N/A

100.45

13.53

N/A

227.67

14.89

N/A

24.38

Table A.40: Physical measurements, perceptual estimates, and our model’s predictions (Phase 4).

A.6. Colour Appearance Data

Physical measurements Y[ cd/m2 ]

225

Perceptual estimates Z

J0

M0

Colour

X

1

14.89

16.22

9.01

9.83

15.28

2

12.06

15.74

8.90

3.67

3.16

3

25.48

20.64

85.85

31.50

58.73

4

16.32

20.73

9.20

15.00

5

35.58

28.81

10.75

36.00

6

46.06

31.81

141.97

7

14.52

23.07

12.53

8

55.27

38.40

H0

Predicted appearance J

M

6.67

18.85

20.23

31.25

10.11

9.88

300.83

29.50

55.28

17.32

109.17

27.44

38.44

7.50

37.02

35.83

62.00

339.50

18.83

24.46

202.50

221.08

53.00

60.07

H

Q

C

h

s

2.0

43.16

22.17

21.73

68.47

62.0

23.15

10.83

65.63

65.34

314.9

67.54

60.58

267.12

90.47

17.86

68.8

62.83

19.58

70.15

53.32

43.51

4.4

84.77

47.68

23.61

71.64

39.15

60.83

319.8

89.63

66.66

276.14

82.39

29.15

17.42

188.4

66.73

19.09

152.56

51.10

300.83

41.77

69.44

310.7

95.64

76.10

259.32

85.21

9

20.22

36.53

10.59

40.67

45.20

189.17

38.08

33.30

162.9

87.17

36.49

130.27

61.80

10

71.85

50.93

13.17

49.17

60.73

1.67

46.96

59.00

8.7

107.50

64.66

27.00

74.09

11

32.85

39.25

11.94

29.67

26.00

100.50

40.67

30.27

67.9

93.10

33.18

69.56

57.02

12

28.17

53.75

12.23

48.33

54.98

194.17

44.29

41.14

161.1

101.41

45.09

128.91

63.70

13

51.71

44.18

39.47

39.17

42.09

374.50

44.01

41.17

369.7

100.77

45.12

348.04

63.92

14

49.94

73.59

90.45

49.50

27.44

275.67

49.98

35.08

302.3

114.41

38.44

242.42

55.37

15

122.11

81.54

14.72

59.33

58.98

6.67

54.68

69.46

14.5

125.18

76.12

31.45

74.49

16

97.09

66.34

18.08

53.83

55.17

1.67

51.30

64.02

6.0

117.44

70.16

24.87

73.83

17

68.03

62.18

189.32

50.50

55.52

336.33

48.77

56.98

316.1

111.65

62.44

269.49

71.44

18

104.13

92.00

177.24

53.33

49.39

346.67

55.51

50.08

334.0

127.08

54.88

299.69

62.78

19

140.51

99.33

15.44

66.33

56.42

23.33

57.69

68.80

21.9

132.08

75.40

37.12

72.17

20

58.77

80.35

110.11

52.33

33.29

279.00

51.60

37.47

307.1

118.13

41.06

252.23

56.32

21

53.88

106.80

24.09

59.00

46.59

188.33

55.15

48.19

163.4

126.26

52.81

130.67

61.78

22

138.44

111.50

17.32

60.83

49.50

52.00

59.08

62.70

33.3

135.26

68.71

45.51

68.09

23

188.69

145.60

243.26

67.00

51.97

N/A

64.52

57.85

N/A

147.72

63.40

N/A

62.58

24

180.66

156.20

45.98

61.50

44.48

56.50

65.14

55.53

21.0

149.14

60.85

36.44

61.02

25

80.60

122.60

94.44

57.33

23.51

240.00

58.34

22.25

268.2

133.57

24.38

211.22

40.81

26

103.80

132.70

260.87

63.00

38.39

294.83

60.13

51.38

306.7

137.66

56.31

251.36

61.10

27

236.88

219.00

218.80

69.50

37.34

369.67

72.33

45.19

360.7

165.60

49.52

337.12

52.24

28

79.58

159.60

21.22

67.00

55.48

180.00

62.10

60.22

154.9

142.18

65.99

124.18

65.08

29

231.19

248.20

101.06

64.00

27.21

58.17

74.42

39.09

31.1

170.39

42.84

43.93

47.90

30

218.04

245.90

27.47

62.50

60.27

86.67

73.61

62.26

87.9

168.53

68.23

82.52

60.78

31

273.64

288.50

282.17

72.00

38.28

363.67

78.74

35.55

351.0

180.27

38.95

324.41

44.41

32

254.64

283.40

148.32

68.83

30.49

60.67

77.82

31.78

13.1

178.16

34.83

30.44

42.24

33

222.32

255.50

28.02

64.83

59.71

103.67

74.47

62.67

91.1

170.50

68.68

84.50

60.63

34

131.98

256.40

50.96

74.50

50.20

176.67

72.23

56.40

158.0

165.37

61.80

126.52

58.40

35

290.17

368.20

240.55

77.50

9.37

55.60

85.22

13.45

-10.0

195.11

14.74

10.26

26.25

36

262.18

396.30

151.46

84.50

21.22

173.67

86.52

34.78

142.9

198.09

38.12

115.59

41.90

37

288.47

395.50

41.19

88.17

65.07

99.67

86.72

66.68

118.3

198.53

73.07

100.09

57.96

38

336.41

431.80

255.25

84.67

13.81

68.40

91.41

15.60

38.2

209.27

17.09

49.04

27.30

39

290.00

422.40

49.36

85.67

46.69

115.83

88.85

65.38

125.3

203.42

71.64

104.22

56.69

40

371.86

468.30

357.84

97.00

5.89

361.60

95.41

16.82

342.5

218.44

18.43

312.49

27.75

Table A.41: Physical measurements, perceptual estimates, and our model’s predictions (Phase 5).

A.6. Colour Appearance Data

Physical measurements

Perceptual estimates J0

M0

3.08

26.86

64.01

3.32

23.29

27.38

21.34

169.15

46.14

76.47

19.32

22.96

4.60

32.00

60.24

38.53

7.29

50.86

6

80.85

44.77

282.50

7

15.05

28.34

12.34

8

97.79

58.54

Colour

X

Y[ cd/m2 ]

1

15.20

11.17

2

8.93

10.66

3

37.50

4 5

226

Z

H0

Predicted appearance J

M

7.57

36.21

62.39

95.00

33.34

35.08

304.29

45.30

87.99

45.66

91.00

44.83

69.41

14.43

53.44

53.00

78.11

332.86

37.57

52.78

207.57

442.85

61.86

82.53

H

Q

C

h

s

2.5

90.14

66.15

22.13

83.19

55.8

82.99

37.20

61.37

65.02

310.4

112.78

93.30

258.58

88.33

49.63

71.7

111.59

52.63

72.05

66.69

83.98

8.2

133.03

89.05

26.64

79.45

55.47

75.82

318.0

138.08

80.40

272.94

74.10

46.51

40.25

186.7

115.78

42.68

150.93

58.96

302.14

57.90

86.79

305.8

144.14

92.03

249.49

77.60

9

27.57

58.63

7.38

49.14

84.07

190.00

55.43

72.07

160.8

138.00

76.42

128.66

72.27

10

136.63

84.80

12.76

67.71

66.28

10.86

63.64

88.80

14.6

158.43

94.15

31.52

74.87

11

56.31

64.76

10.90

49.86

42.75

111.43

58.17

59.93

76.9

144.79

63.55

75.47

64.34

12

44.64

95.72

10.66

53.86

76.70

186.29

61.72

76.94

158.8

153.64

81.58

127.12

70.77

13

96.36

74.20

70.82

57.71

53.88

376.00

61.41

62.77

372.1

152.87

66.55

350.93

64.08

14

91.58

139.00

180.29

58.57

56.73

279.86

67.62

43.36

296.3

168.34

45.98

234.30

50.75

15

240.78

148.10

15.66

80.14

70.87

19.14

71.31

90.51

24.2

177.51

95.97

38.81

71.40

16

188.73

116.40

22.76

78.86

74.39

15.86

68.02

86.36

11.5

169.33

91.58

29.15

71.42

17

128.63

113.40

379.68

60.29

57.98

326.00

66.14

60.88

313.7

164.65

64.55

264.93

60.81

18

206.42

177.80

355.38

68.00

62.94

350.00

73.05

50.49

335.7

181.85

53.53

302.27

52.69

19

279.91

187.80

17.06

83.00

78.74

33.43

74.45

86.89

33.2

185.33

92.13

45.43

68.47

20

110.76

153.60

218.66

59.71

43.45

277.00

69.27

42.69

303.7

172.44

45.26

245.23

49.76

21

100.13

209.10

39.16

68.00

67.01

189.71

72.70

64.54

162.0

180.99

68.44

129.60

59.72

22

276.80

216.10

21.50

73.71

61.72

57.14

76.04

79.50

46.5

189.29

84.30

54.93

64.81

23

380.63

288.30

486.59

83.00

55.65

356.86

81.34

54.67

352.9

202.47

57.97

326.96

51.96

24

362.81

309.20

84.16

82.14

54.70

49.00

81.76

56.89

29.2

203.53

60.33

42.52

52.87

25

158.12

243.60

187.83

64.43

36.95

241.71

75.93

24.64

253.1

189.02

26.13

199.84

36.10

26

202.69

263.80

527.10

74.57

50.48

302.29

77.39

49.16

303.2

192.64

52.13

244.20

50.52

27

479.49

441.70

438.30

82.86

36.43

369.86

88.01

39.71

364.0

219.10

42.10

341.22

42.57

28

155.84

323.00

31.27

81.29

83.74

186.71

79.32

78.74

153.1

197.46

83.49

122.85

63.15

29

468.16

502.30

198.77

83.71

34.57

61.71

89.66

35.26

41.2

223.18

37.39

51.19

39.75

30

440.76

497.70

43.19

80.14

99.45

92.00

88.76

73.67

99.1

220.96

78.11

89.42

57.74

31

555.70

586.10

565.95

89.43

23.80

362.43

92.90

28.61

354.4

231.26

30.33

329.00

35.17

32

517.18

575.70

297.02

89.43

24.11

52.57

92.24

26.90

23.8

229.63

28.52

38.50

34.22

33

450.83

518.60

45.04

82.57

88.78

95.29

89.47

73.41

101.6

222.73

77.84

90.81

57.41

34

261.57

517.90

94.47

87.43

44.58

186.14

87.83

59.01

156.7

218.63

62.57

125.53

51.95

35

587.85

747.40

481.91

97.43

3.38

70.00

97.17

10.19

26.5

241.90

10.81

40.50

20.53

36

531.05

804.10

301.86

93.57

27.05

178.29

97.95

30.15

144.3

243.82

31.97

116.55

35.17

37

582.80

801.40

70.25

91.86

77.91

99.57

97.71

70.64

123.6

243.22

74.90

103.20

53.89

38

682.66

877.20

511.94

97.86

3.08

82.40

100.78

12.47

64.6

250.87

13.22

67.36

22.30

39

587.93

856.40

91.54

96.00

70.57

110.00

99.05

65.36

128.6

246.58

69.30

106.28

51.48

40

747.91

946.50

712.76

99.86

2.12

N/A

102.70

11.16

N/A

255.65

11.83

N/A

20.89

Table A.42: Physical measurements, perceptual estimates, and our model’s predictions (Phase 6).

A.6. Colour Appearance Data

Physical measurements

Perceptual estimates J0

M0

7.26

19.29

21.64

7.36

8.14

8.97

165.02

36.86

87.75

26.05

8.53

18.29

42.48

11.43

42.71

81.19

47.98

274.36

18.14

31.33

15.63

99.06

61.58

Colour

X

Y[ cd/m2 ]

1

18.47

15.90

2

12.62

15.32

3

40.05

25.76

4

21.82

5

62.15

6 7 8

227

Z

H0

Predicted appearance J

M

H

Q

C

h

s

8.43

21.03

39.43

10.1

52.25

41.84

28.11

86.87

81.67

6.46

19.97

69.9

16.04

21.19

70.87

111.57

300.86

34.11

74.54

303.8

84.75

79.10

245.57

93.78

18.92

120.71

32.77

31.58

84.3

81.42

33.51

80.23

62.28

63.50

8.57

43.18

66.98

15.2

107.28

71.07

32.04

79.01

52.14

75.84

329.43

44.87

74.09

314.2

111.47

78.62

265.90

81.53

27.29

37.83

197.29

34.92

29.25

185.7

86.77

31.04

149.91

58.07

434.33

63.43

87.47

301.43

47.54

83.63

300.0

118.10

88.74

237.54

84.15

9

29.71

59.26

11.17

44.57

72.48

195.14

44.59

53.89

156.7

110.79

57.19

125.54

69.75

10

136.99

87.83

16.74

67.14

82.87

0.86

53.76

80.19

22.8

133.57

85.10

37.79

77.48

11

57.26

65.64

14.54

41.29

40.55

118.57

47.55

46.78

87.2

118.13

49.64

82.06

62.93

12

46.04

94.62

14.35

54.14

72.20

194.43

51.18

62.40

155.0

127.16

66.21

124.20

70.05

13

93.91

73.67

68.92

57.29

55.51

370.29

50.62

56.35

370.7

125.77

59.79

349.22

66.93

14

90.40

135.60

174.35

53.14

46.84

259.71

57.12

42.53

294.6

141.90

45.13

232.86

54.75

15

241.14

150.90

18.90

89.14

91.10

11.14

61.85

87.89

35.0

153.67

93.26

46.77

75.63

16

187.31

118.20

25.95

72.00

83.08

5.43

58.18

82.79

16.9

144.54

87.86

33.35

75.68

17

127.49

112.60

372.54

61.43

63.47

324.29

55.71

64.28

310.7

138.40

68.21

259.20

68.15

18

201.15

173.00

345.78

61.57

61.43

343.57

62.78

55.04

335.6

155.97

58.41

302.14

59.41

19

281.05

189.70

21.72

79.71

88.22

35.00

65.18

84.94

48.1

161.93

90.13

56.06

72.42

20

108.00

148.50

211.22

56.57

49.82

271.43

58.71

43.30

302.6

145.86

45.95

243.01

54.49

21

98.40

202.50

40.18

72.43

72.75

189.00

62.50

63.26

159.4

155.28

67.13

127.60

63.83

22

275.22

213.80

24.93

76.86

96.82

47.14

66.57

77.92

61.1

165.39

82.68

65.03

68.64

23

374.15

281.80

475.78

75.71

62.69

365.00

71.98

62.37

352.8

178.82

66.19

326.91

59.06

24

360.18

303.50

83.12

75.29

52.10

43.57

72.70

63.52

33.2

180.61

67.40

45.46

59.30

25

153.28

234.80

181.96

55.29

38.79

236.71

65.79

26.51

261.7

163.45

28.13

206.24

40.27

26

197.16

253.40

514.03

74.71

53.47

298.29

67.31

55.16

301.2

167.22

58.54

240.00

57.44

27

472.12

429.70

428.10

75.57

47.73

367.57

79.76

46.85

362.3

198.16

49.71

339.12

48.62

28

150.99

309.70

33.72

84.00

84.56

188.71

69.60

77.09

150.0

172.91

81.80

120.57

66.77

29

463.43

491.60

194.92

76.43

40.24

55.71

81.98

42.41

41.1

203.66

45.01

51.12

45.64

30

436.86

486.90

45.97

76.14

92.39

91.00

81.09

75.60

108.4

201.46

80.23

94.46

61.26

31

549.70

574.80

560.14

83.29

32.51

360.29

86.11

34.24

352.8

213.93

36.33

326.84

40.00

32

506.39

556.30

286.24

76.14

34.68

52.14

84.93

33.60

19.4

211.01

35.66

35.24

39.91

33

445.04

505.00

47.59

80.29

96.37

94.57

81.87

75.68

110.4

203.39

80.31

95.57

61.00

34

255.79

503.90

94.37

89.00

66.75

187.29

79.63

65.74

154.5

197.83

69.76

123.88

57.65

35

579.74

732.10

476.38

88.00

7.35

70.86

91.96

13.11

3.2

228.46

13.91

22.66

23.96

36

519.82

785.10

296.01

90.57

32.00

173.71

92.99

38.22

143.2

231.02

40.55

115.78

40.67

37

572.08

778.80

71.85

92.00

88.34

115.86

92.82

77.30

126.4

230.61

82.02

104.89

57.90

38

666.43

849.30

498.26

92.86

6.37

88.33

96.93

17.41

52.3

240.81

18.47

58.99

26.89

39

578.88

837.90

92.68

94.00

78.35

103.57

94.95

73.53

129.8

235.90

78.03

107.03

55.83

40

741.05

930.00

711.74

99.43

2.60

388.33

100.53

12.62

346.5

249.77

13.39

318.26

22.48

Table A.43: Physical measurements, perceptual estimates, and our model’s predictions (Phase 7).

A.6. Colour Appearance Data

Physical measurements Colour

X

Y[ cd/m2 ]

228

Perceptual estimates Z

J0

M0

H0

Predicted appearance J

M

H

Q

C

h

s

1

26.96

19.66

6.83

32.17

75.63

16.67

34.22

62.28

1.3

93.51

63.73

21.18

81.61

2

15.28

17.79

7.17

28.17

41.67

119.17

30.06

32.67

48.2

82.13

33.44

56.17

63.07

3

69.17

38.91

315.77

46.17

83.51

304.67

43.14

89.73

310.8

117.89

91.82

259.51

87.24

4

32.56

37.24

9.03

38.00

59.22

122.00

42.26

49.44

67.9

115.47

50.60

69.57

65.44

5

110.87

69.29

14.86

54.17

79.89

16.83

52.23

87.45

7.6

142.73

89.49

26.12

78.27

6

150.90

82.11

546.02

57.33

75.05

337.50

53.66

78.67

318.4

146.63

80.51

273.57

73.25

7

25.41

46.37

23.81

37.17

58.34

204.17

44.04

42.13

190.2

120.35

43.12

154.25

59.17

8

194.29

111.30

904.19

59.83

85.83

318.33

56.38

90.85

305.9

154.06

92.98

249.82

76.79

9

45.04

94.48

14.53

49.33

79.65

191.67

52.89

73.70

163.1

144.52

75.43

130.43

71.41

10

266.28

160.80

26.01

67.83

79.34

18.00

63.03

93.71

13.9

172.24

95.91

31.03

73.76

11

97.35

106.80

20.31

49.83

56.45

123.00

55.93

62.47

72.9

152.83

63.93

72.82

63.93

12

73.59

156.40

20.82

53.67

77.93

190.83

59.25

79.75

161.3

161.90

81.61

129.00

70.18

13

173.95

127.80

127.11

53.83

53.38

376.50

59.56

67.31

374.3

162.74

68.89

353.37

64.31

14

157.95

231.40

337.18

55.50

52.65

273.33

65.04

46.31

294.3

177.72

47.40

232.54

51.05

15

506.80

303.00

34.45

80.50

94.61

27.50

71.68

95.39

22.7

195.88

97.63

37.66

69.79

16

381.79

229.10

44.99

72.67

73.38

15.00

67.85

91.63

11.9

185.39

93.77

29.47

70.30

17

238.64

196.90

757.07

67.17

57.00

319.33

63.90

64.77

313.6

174.61

66.29

264.74

60.91

18

382.49

310.00

698.89

73.17

55.02

347.50

70.93

54.07

336.8

193.81

55.34

304.09

52.82

19

597.45

381.00

37.28

78.33

86.80

34.67

74.78

92.67

31.6

204.34

94.84

44.30

67.34

20

192.48

256.90

412.99

57.17

51.13

288.17

66.68

45.45

303.0

182.20

46.51

243.82

49.95

21

169.12

351.40

72.06

61.00

69.89

185.83

70.24

70.29

164.0

191.93

71.93

131.15

60.52

22

573.77

417.30

45.18

70.67

83.39

60.83

75.70

85.55

42.6

206.86

87.55

52.19

64.31

23

776.34

550.20

1004.81

79.67

59.94

362.50

80.46

59.96

355.3

219.87

61.36

330.25

52.22

24

761.52

596.70

156.64

83.00

62.71

47.17

81.32

65.24

28.1

222.22

66.77

41.71

54.18

25

274.64

414.30

351.75

66.33

41.15

246.67

73.42

27.88

249.3

200.61

28.53

197.05

37.28

26

381.07

461.60

1102.55

77.00

66.00

298.83

75.06

53.85

302.8

205.09

55.11

243.53

51.24

27

985.09

836.70

885.98

85.17

58.61

375.33

87.02

45.13

367.7

237.77

46.19

345.77

43.57

28

269.18

556.10

62.21

73.17

86.53

180.00

76.97

83.54

155.2

210.34

85.49

124.37

63.02

29

950.59

939.70

377.88

78.83

48.02

64.17

88.46

42.28

37.6

241.73

43.27

48.62

41.82

30

901.72

933.90

91.14

83.00

88.81

97.00

87.75

77.98

92.2

239.77

79.80

85.21

57.03

31

1126.00

1099.00

1192.29

88.83

32.11

382.83

91.51

32.42

357.8

250.05

33.18

333.46

36.01

32

1043.65

1076.00

576.42

84.83

38.30

49.83

90.91

33.04

22.4

248.40

33.81

37.47

36.47

33

920.98

972.70

95.48

82.83

89.43

90.33

88.42

77.55

94.9

241.62

79.37

86.85

56.65

34

467.24

921.90

183.53

81.83

64.72

181.33

85.60

64.77

158.6

233.91

66.28

126.94

52.62

35

1157.77

1394.00

989.90

96.50

5.00

63.67

95.45

13.32

19.6

260.83

13.63

35.35

22.59

36

1003.61

1486.00

592.74

92.67

37.07

170.83

95.92

35.08

144.6

262.12

35.91

116.79

36.59

37

1175.48

1518.00

151.34

92.50

87.71

118.33

96.47

74.44

120.3

263.60

76.18

101.26

53.14

38

1365.29

1658.00

1053.42

97.83

4.88

80.00

99.20

16.33

53.0

271.08

16.71

59.48

24.54

39

1164.67

1619.00

189.70

91.67

80.02

103.67

97.63

70.04

127.2

266.79

71.68

105.39

51.24

40

1541.92

1824.00

1635.45

100.00

3.71

2.50

101.47

13.81

345.9

277.28

14.14

317.42

22.32

Table A.44: Physical measurements, perceptual estimates, and our model’s predictions (Phase 8).

A.6. Colour Appearance Data

Physical measurements Colour

X

Y[ cd/m2 ]

229

Perceptual estimates Z

J0

M0

H0

Predicted appearance J

M

H

Q

C

h

s

1

32.10

25.53

11.73

29.17

52.39

27.67

22.34

47.01

8.3

61.10

48.09

26.66

87.71

2

20.32

23.68

12.25

18.83

22.08

144.50

1.00

22.63

68.2

2.73

23.15

69.79

287.68

3

73.89

44.56

320.57

47.33

96.64

303.33

34.72

80.89

304.6

94.94

82.75

247.15

92.30

4

37.54

43.11

14.00

28.00

48.33

128.67

33.58

36.82

81.9

91.82

37.67

78.65

63.33

5

116.75

75.50

19.71

51.67

76.07

9.67

44.63

76.24

15.0

122.05

78.00

31.87

79.04

6

156.41

88.25

552.54

54.17

80.61

337.83

46.07

78.40

314.9

125.99

80.21

267.11

78.89

7

30.34

52.27

28.84

39.83

63.45

195.33

35.62

33.40

188.4

97.40

34.17

152.57

58.56

8

200.54

118.90

910.13

63.33

87.69

317.00

49.09

89.09

300.0

134.24

91.15

237.60

81.47

9

50.03

100.70

19.43

54.67

80.99

190.00

45.21

60.63

159.6

123.63

62.04

127.69

70.03

10

270.63

166.10

30.76

64.50

86.03

9.67

55.61

88.94

23.7

152.06

91.00

38.48

76.48

11

102.81

113.10

25.55

50.00

57.01

128.33

48.36

52.57

84.6

132.25

53.79

80.42

63.05

12

78.53

162.30

26.11

56.50

78.77

189.50

51.71

68.82

157.5

141.40

70.41

126.12

69.76

13

180.15

134.60

133.17

55.50

53.13

372.50

52.09

63.05

374.5

142.44

64.51

353.61

66.53

14

162.87

238.30

340.50

57.33

43.58

277.00

57.63

46.11

290.3

157.60

47.17

229.13

54.09

15

512.09

309.10

40.05

80.83

94.49

24.50

64.60

95.18

36.4

176.65

97.38

47.78

73.40

16

385.79

234.10

49.54

75.50

78.92

9.83

60.52

90.85

20.3

165.50

92.94

35.89

74.09

17

244.90

204.00

764.67

62.17

54.20

322.00

56.54

67.89

310.3

154.61

69.46

258.53

66.26

18

389.97

318.40

706.11

65.50

56.78

357.50

63.75

58.00

337.6

174.33

59.34

305.17

57.68

19

601.81

386.60

42.80

84.17

94.76

40.33

67.82

92.93

47.7

185.47

95.07

55.76

70.78

20

199.16

265.40

420.99

52.50

38.42

284.17

59.39

46.38

301.0

162.42

47.46

239.60

53.44

21

174.30

357.80

77.13

66.67

69.88

188.83

62.96

68.28

161.1

172.16

69.86

128.86

62.98

22

577.72

422.50

49.99

72.50

85.50

46.17

68.74

85.47

58.1

187.99

87.45

62.97

67.43

23

781.73

557.10

1010.82

80.17

75.93

365.83

73.79

66.37

356.6

201.78

67.90

331.84

57.35

24

770.86

606.60

164.88

77.50

58.56

47.50

74.88

70.19

36.2

204.76

71.82

47.65

58.55

25

279.60

420.30

356.39

60.17

40.90

232.50

66.21

29.17

248.2

181.07

29.85

196.25

40.14

26

386.74

469.70

1104.73

75.17

57.75

295.00

68.04

58.76

300.3

186.08

60.11

238.17

56.19

27

991.40

844.00

894.23

81.00

54.22

380.83

81.13

51.25

368.2

221.88

52.44

346.29

48.06

28

272.45

559.00

66.25

76.67

95.12

N/A

69.96

82.51

N/A

191.31

84.42

N/A

65.67

29

957.74

948.80

386.90

76.17

48.42

55.83

82.87

47.47

42.9

226.63

48.56

52.39

45.76

30

909.49

942.80

95.64

79.00

84.88

87.00

82.16

80.19

103.1

224.68

82.04

91.63

59.74

31

1134.49

1109.00

1201.68

82.83

42.70

382.50

86.50

37.73

358.3

236.56

38.60

334.09

39.93

32

1054.31

1089.00

588.07

82.33

47.62

70.83

85.84

37.87

25.0

234.75

38.74

39.42

40.16

33

926.32

979.50

100.94

78.00

91.94

106.67

82.90

79.75

105.3

226.72

81.59

92.81

59.31

34

471.99

928.90

186.70

82.83

70.23

180.00

79.53

69.62

156.1

217.49

71.23

125.10

56.58

35

1164.59

1403.00

996.03

89.17

7.83

51.17

91.39

15.68

20.9

249.93

16.04

36.35

25.05

36

1014.08

1499.00

603.34

84.83

42.73

156.17

92.09

40.12

144.5

251.83

41.04

116.67

39.91

37

1176.79

1519.00

153.58

91.33

83.09

102.00

92.73

80.09

124.6

253.58

81.94

103.79

56.20

38

1372.22

1668.00

1062.09

92.67

9.02

101.00

96.36

19.27

55.7

263.52

19.72

61.32

27.04

39

1170.06

1626.00

195.66

93.83

82.02

105.67

94.35

76.04

129.3

258.01

77.79

106.72

54.29

40

1553.40

1840.00

1643.56

99.50

3.35

355.00

99.62

16.55

345.7

272.44

16.93

317.12

24.65

Table A.45: Physical measurements, perceptual estimates, and our model’s predictions (Phase 9).

A.6. Colour Appearance Data

Physical measurements Colour

X

Y[ cd/m2 ]

230

Perceptual estimates Z

J0

M0

H0

Predicted appearance J

M

H

Q

C

h

s

1

36.36

30.91

16.09

19.00

22.33

19.43

14.64

39.51

3.0

40.06

40.42

22.51

99.32

2

24.46

28.85

16.45

7.86

8.24

81.00

1.00

18.58

52.7

2.74

19.00

59.28

260.57

3

78.02

49.80

324.49

37.00

81.31

301.86

31.38

74.99

308.9

85.86

76.71

255.80

93.46

4

42.02

48.66

18.49

17.00

19.68

120.00

30.18

31.11

72.1

82.56

31.83

72.35

61.39

5

120.75

80.69

23.88

47.86

64.99

6.43

41.59

68.94

9.6

113.79

70.52

27.68

77.84

6

160.77

93.70

557.49

54.29

77.22

325.57

43.09

75.81

317.4

117.90

77.55

271.76

80.19

7

34.72

57.82

33.30

24.29

40.71

198.29

32.33

29.07

189.6

88.45

29.73

153.65

57.33

8

204.71

123.70

916.05

72.29

93.24

302.29

46.09

86.34

304.1

126.10

88.32

246.08

82.75

9

54.39

106.30

23.67

46.86

66.75

194.00

42.21

53.68

162.0

115.48

54.91

129.58

68.18

10

276.38

172.30

35.18

65.14

80.75

4.86

52.74

83.96

16.6

144.29

85.89

33.12

76.28

11

107.70

119.10

29.99

45.86

38.79

109.29

45.44

47.15

76.4

124.30

48.23

75.12

61.59

12

82.91

168.10

30.33

55.14

72.14

192.71

48.75

62.46

160.1

133.38

63.89

128.13

68.43

13

184.94

140.30

137.80

52.14

50.83

371.57

49.13

59.02

374.3

134.41

60.37

353.41

66.27

14

167.62

244.20

345.62

55.00

52.80

270.00

54.67

44.46

292.7

149.58

45.48

231.22

54.52

15

516.69

314.30

44.01

83.00

91.86

13.00

61.70

92.38

26.4

168.81

94.50

40.48

73.98

16

392.10

240.90

54.89

77.14

91.05

5.43

57.68

87.27

14.5

157.80

89.27

31.45

74.37

17

249.67

209.80

771.03

66.29

64.57

310.71

53.60

67.24

312.6

146.63

68.79

262.88

67.72

18

394.28

323.40

711.39

63.00

62.43

342.57

60.77

58.05

337.1

166.25

59.38

304.49

59.09

19

607.33

392.60

47.01

85.00

101.32

38.29

64.99

90.55

36.2

177.80

92.63

47.62

71.36

20

203.55

270.80

425.92

60.14

46.52

279.57

56.41

45.30

302.3

154.33

46.34

242.39

54.18

21

179.38

364.90

81.62

62.43

69.54

190.43

60.06

65.17

163.1

164.31

66.67

130.45

62.98

22

583.19

428.50

53.70

79.29

93.64

50.57

65.90

83.17

47.1

180.29

85.08

55.41

67.92

23

790.41

565.60

1020.46

79.29

63.02

358.86

71.05

67.36

355.7

194.39

68.91

330.75

58.87

24

773.39

611.10

168.50

79.00

73.20

51.43

72.06

70.43

30.4

197.14

72.05

43.36

59.77

25

285.66

428.00

363.07

60.71

41.95

237.86

63.34

28.70

248.7

173.29

29.36

196.63

40.69

26

390.95

475.50

1109.04

77.86

69.25

298.29

65.13

59.35

302.0

178.19

60.71

241.75

57.71

27

995.20

849.50

899.16

78.29

55.88

370.43

78.53

52.62

367.9

214.84

53.82

345.93

49.49

28

278.72

568.00

72.23

74.00

72.94

187.14

67.19

78.95

N/A

183.81

80.77

N/A

65.54

29

963.83

956.20

392.42

76.71

44.94

59.29

80.40

48.61

39.2

219.97

49.73

49.80

47.01

30

912.69

946.80

100.76

77.14

101.34

89.00

79.64

78.45

96.0

217.87

80.25

87.56

60.01

31

1140.21

1117.00

1207.43

81.43

46.06

365.29

84.24

39.19

357.9

230.46

40.09

333.64

41.24

32

1059.86

1097.00

593.89

79.00

40.18

48.00

83.54

39.14

23.0

228.54

40.04

37.94

41.38

33

931.38

986.20

105.40

79.57

96.27

91.43

80.47

78.33

98.2

220.16

80.13

88.91

59.65

34

479.58

939.80

194.29

84.86

75.31

199.29

77.00

69.18

157.7

210.67

70.77

126.24

57.31

35

1172.12

1413.00

1004.82

88.57

8.15

76.43

89.54

16.37

20.4

244.96

16.74

35.94

25.85

36

1021.02

1508.00

608.72

91.57

32.91

167.43

90.29

41.50

144.7

247.00

42.46

116.82

40.99

37

1186.78

1531.00

162.75

91.43

83.53

101.86

91.14

79.41

122.0

249.34

81.23

102.27

56.43

38

1377.69

1674.00

1069.43

91.86

10.09

88.86

94.95

20.23

54.7

259.76

20.69

60.63

27.90

39

1173.43

1632.00

195.67

93.86

92.63

122.86

92.76

77.12

127.9

253.78

78.89

105.82

55.13

40

1559.29

1849.00

1647.77

100.00

2.48

362.50

98.69

17.48

345.8

269.99

17.89

317.23

25.45

Table A.46: Physical measurements, perceptual estimates, and our model’s predictions (Phase 10).

A.6. Colour Appearance Data

Physical measurements Colour

X

Y[ cd/m2 ]

231

Perceptual estimates Z

J0

M0

H0

Predicted appearance J

M

H

Q

C

h

s

1

49.58

47.11

29.37

14.83

23.00

4.17

8.98

28.80

-2.2

24.59

29.45

18.07

108.23

2

37.73

45.13

29.66

6.17

3.68

133.33

1.00

13.14

34.4

2.74

13.44

46.29

219.13

3

91.19

65.99

337.87

39.00

72.28

320.83

28.31

63.67

313.9

77.50

65.12

265.41

90.64

4

54.88

64.50

31.39

12.83

21.77

132.83

26.76

22.68

59.6

73.26

23.19

64.01

55.63

5

134.07

96.98

37.30

42.00

52.69

12.33

38.01

55.75

4.0

104.05

57.02

23.29

73.20

6

174.42

110.20

571.64

44.17

66.81

325.33

39.52

68.58

320.1

108.17

70.14

276.61

79.62

7

47.75

73.75

46.27

20.83

30.61

206.67

28.90

21.53

190.9

79.12

22.02

154.95

52.16

8

216.29

138.90

926.17

60.50

86.00

300.83

42.30

78.78

309.0

115.80

80.57

255.86

82.48

9

67.25

121.90

36.76

45.17

58.39

198.33

38.46

41.46

164.8

105.27

42.41

131.85

62.76

10

288.80

188.00

48.19

53.67

68.32

7.50

48.82

72.76

9.7

133.65

74.41

27.75

73.78

11

119.97

134.20

43.00

39.33

37.62

135.67

41.54

37.39

66.0

113.71

38.24

68.27

57.34

12

96.10

184.40

43.07

53.83

66.32

192.83

44.91

50.62

162.9

122.93

51.78

130.30

64.17

13

197.85

156.30

150.70

51.00

53.14

367.17

45.24

50.41

374.1

123.83

51.55

353.22

63.80

14

180.39

260.10

357.36

47.67

50.74

271.67

50.66

39.84

296.7

138.67

40.74

234.65

53.60

15

529.04

330.10

56.79

80.17

86.06

4.50

57.72

83.90

17.0

157.98

85.81

33.39

72.88

16

404.24

256.10

67.63

66.00

71.27

4.67

53.67

78.14

8.3

146.90

79.91

26.68

72.93

17

261.43

224.70

780.50

63.17

64.33

317.00

49.58

63.24

315.3

135.70

64.68

267.85

68.26

18

407.67

339.80

724.80

61.83

59.50

345.00

56.71

55.32

336.4

155.23

56.58

303.47

59.70

19

620.04

408.50

60.21

70.50

75.24

36.50

61.01

83.06

24.6

166.99

84.95

39.13

70.53

20

216.13

286.60

438.18

54.50

45.12

293.17

52.36

41.48

304.1

143.34

42.42

246.09

53.80

21

192.17

380.60

94.48

63.83

57.59

191.67

55.98

57.39

165.6

153.24

58.70

132.41

61.20

22

593.58

442.40

66.71

63.83

68.51

50.00

61.82

75.94

34.9

169.21

77.66

46.67

66.99

23

801.97

580.70

1032.89

77.33

57.85

355.00

66.99

65.85

354.6

183.38

67.35

329.24

59.93

24

785.28

625.90

181.03

70.33

59.68

71.00

68.05

67.64

24.2

186.27

69.18

38.78

60.26

25

297.68

443.00

374.63

61.33

41.39

225.83

59.19

26.71

249.7

162.03

27.32

197.31

40.60

26

403.71

491.10

1118.72

73.67

54.82

300.00

61.05

57.57

304.4

167.11

58.87

246.82

58.69

27

1009.22

866.40

913.82

75.17

45.63

371.33

74.75

52.50

367.5

204.62

53.69

345.51

50.65

28

290.08

581.30

84.08

71.33

76.95

186.33

63.05

70.73

156.9

172.59

72.34

125.63

64.02

29

973.73

969.50

404.23

70.50

46.23

61.33

76.64

48.30

34.9

209.78

49.39

46.69

47.98

30

923.96

961.50

111.43

68.33

82.90

87.50

75.95

73.08

86.2

207.89

74.74

81.44

59.29

31

1151.32

1131.00

1220.32

78.50

35.66

363.50

80.74

40.07

357.4

221.00

40.98

332.96

42.58

32

1071.39

1111.00

606.96

75.33

35.55

59.67

79.99

39.60

20.7

218.95

40.50

36.19

42.53

33

943.27

1001.00

117.52

75.83

80.91

88.33

76.84

72.89

88.9

210.34

74.54

83.13

58.87

34

492.78

954.60

207.90

79.17

54.22

203.33

73.14

65.69

160.0

200.20

67.18

128.01

57.28

35

1183.60

1427.00

1016.75

77.83

11.72

94.17

86.55

16.93

18.4

236.91

17.31

34.42

26.73

36

1029.10

1518.00

619.36

82.50

36.14

178.50

87.30

41.92

145.0

238.97

42.87

117.04

41.88

37

1196.33

1545.00

171.57

89.33

70.59

117.83

88.47

76.36

117.9

242.18

78.10

99.80

56.15

38

1387.22

1687.00

1077.89

88.00

11.94

88.33

92.68

21.13

52.6

253.70

21.61

59.20

28.86

39

1186.33

1646.00

211.88

92.33

67.91

104.33

90.34

74.04

125.5

247.28

75.72

104.34

54.72

40

1568.55

1861.00

1656.54

98.17

2.34

N/A

97.09

18.54

N/A

265.76

18.96

N/A

26.41

Table A.47: Physical measurements, perceptual estimates, and our model’s predictions (Phase 11).

A.6. Colour Appearance Data

Physical measurements Colour

X

Y[ cd/m2 ]

232

Perceptual estimates Z

J0

M0

H0

Predicted appearance J

M

H

Q

C

h

s

1

64.47

68.09

43.88

9.17

9.78

17.50

19.75

21.07

3.3

54.17

21.53

22.74

62.37

2

52.69

66.23

44.25

2.50

2.18

N/A

13.83

9.68

N/A

37.93

9.89

N/A

50.52

3

106.44

87.19

354.74

32.17

70.68

301.67

28.83

55.30

315.1

79.09

56.51

267.59

83.61

4

70.16

85.91

46.17

6.33

7.19

106.00

27.59

17.90

69.9

75.67

18.29

70.88

48.64

5

149.33

118.10

51.59

35.17

50.71

8.00

36.89

45.99

4.2

101.20

47.00

23.46

67.42

6

190.11

131.60

589.65

39.17

75.21

326.33

38.32

61.92

320.6

105.13

63.28

277.56

76.75

7

62.76

95.13

60.95

21.67

35.23

202.50

29.16

18.91

191.3

80.00

19.33

155.39

48.62

8

232.62

160.80

946.58

53.00

75.73

302.50

40.97

71.97

310.8

112.38

73.55

259.46

80.03

9

82.17

143.30

51.11

33.67

48.20

192.50

37.27

34.85

166.8

102.23

35.62

133.46

58.39

10

304.82

209.60

62.67

49.50

71.04

10.17

47.00

63.46

8.2

128.92

64.85

26.61

70.16

11

135.93

156.40

57.56

28.00

30.11

113.83

40.18

31.30

66.6

110.21

31.99

68.69

53.29

12

111.05

206.00

57.49

50.17

67.57

192.33

43.22

43.17

164.7

118.57

44.12

131.70

60.34

13

214.36

178.50

166.53

40.33

52.44

364.50

43.61

43.08

374.2

119.63

44.03

353.35

60.01

14

196.07

282.30

373.73

48.67

48.16

269.33

48.72

35.95

297.3

133.65

36.74

235.21

51.86

15

545.97

352.40

71.90

68.50

77.26

3.67

55.62

75.84

14.1

152.58

77.50

31.18

70.50

16

420.25

277.80

81.63

55.50

73.06

2.50

51.65

69.69

7.0

141.67

71.22

25.69

70.13

17

278.14

247.50

800.73

56.17

59.00

330.17

47.77

58.68

316.0

131.02

59.96

269.22

66.92

18

425.59

363.50

744.02

56.67

63.01

350.00

54.64

51.54

335.9

149.88

52.67

302.64

58.64

19

634.39

429.30

73.83

66.50

72.85

24.67

58.77

75.79

21.3

161.21

77.45

36.63

68.57

20

232.35

309.30

455.13

49.00

47.12

276.00

50.38

37.82

304.5

138.21

38.65

246.90

52.31

21

207.44

403.30

108.79

59.83

63.42

192.50

53.86

51.35

166.8

147.74

52.47

133.40

58.95

22

611.09

465.50

81.66

59.17

62.76

40.67

59.64

69.18

30.9

163.60

70.70

43.76

65.03

23

816.35

601.60

1047.62

63.67

63.67

358.33

64.63

62.46

353.9

177.28

63.83

328.33

59.36

24

800.25

647.70

195.95

63.17

55.26

44.83

65.72

63.39

22.1

180.29

64.78

37.23

59.30

25

315.06

467.60

392.51

54.33

47.49

247.50

57.06

24.82

249.6

156.51

25.36

197.25

39.82

26

419.86

515.10

1135.37

68.00

56.71

295.00

58.87

54.64

305.3

161.49

55.84

248.59

58.17

27

1025.71

890.70

932.00

69.00

41.73

367.17

72.47

50.35

367.2

198.78

51.45

345.10

50.33

28

307.56

606.00

100.16

71.17

70.58

185.83

60.86

63.44

158.1

166.93

64.83

126.58

61.65

29

991.90

996.00

421.38

69.00

42.27

58.00

74.43

46.28

33.6

204.17

47.29

45.73

47.61

30

939.85

984.60

128.08

60.83

63.02

86.00

73.70

67.11

81.8

202.16

68.58

78.63

57.62

31

1170.49

1158.00

1241.17

68.33

44.30

363.17

78.61

39.14

356.9

215.63

40.00

332.30

42.61

32

1086.37

1133.00

623.79

69.17

41.58

56.00

77.72

38.39

19.6

213.20

39.23

35.40

42.43

33

958.30

1024.00

130.35

61.67

76.98

85.83

74.59

67.49

85.3

204.61

68.97

80.83

57.43

34

511.26

983.20

222.32

71.17

65.95

185.83

70.95

61.69

160.9

194.61

63.04

128.70

56.30

35

1203.23

1454.00

1035.93

71.67

17.30

92.17

84.61

16.63

17.1

232.09

16.99

33.45

26.77

36

1052.80

1551.00

637.58

76.67

34.76

173.33

85.54

40.80

145.0

234.65

41.70

117.03

41.70

37

1216.31

1574.00

187.93

70.83

71.30

115.50

86.75

71.60

115.8

237.97

73.17

98.64

54.85

38

1411.07

1722.00

1103.10

78.00

22.07

91.50

91.28

20.72

52.2

250.38

21.17

58.95

28.77

39

1205.44

1675.00

227.17

74.00

51.51

123.00

88.70

70.05

124.5

243.32

71.59

103.72

53.66

40

1586.78

1889.00

1672.76

87.00

14.33

361.33

95.86

18.57

345.4

262.95

18.98

316.62

26.58

Table A.48: Physical measurements, perceptual estimates, and our model’s predictions (Phase 12).

A.6. Colour Appearance Data

Physical measurements Y[ cd/m2 ]

233

Perceptual estimates Z

J0

M0

Colour

X

1

18.12

15.64

12.15

17.50

43.53

2

12.63

14.82

12.55

9.50

12.15

3

56.36

28.96

254.19

42.83

91.79

4

20.63

24.11

13.04

25.83

5

59.75

40.71

18.30

46.33

6

109.20

53.76

435.71

7

17.93

29.17

22.82

8

148.27

72.66

H0

Predicted appearance J

M

H

Q

C

h

s

17.00

1.00

105.00

1.00

37.27

-3.8

16.25

24.0

2.55

39.17

16.47

382.49

2.55

17.08

38.69

300.83

29.97

75.92

312.4

252.55

76.34

79.78

262.52

99.72

37.65 74.97

131.17

27.67

28.95

10.17

40.42

65.87

60.2

70.50

30.43

64.38

64.09

4.8

102.99

69.22

23.93

79.97

50.67 32.67

85.90

327.83

42.04

50.53

193.33

30.40

77.22

318.9

107.11

81.15

274.51

84.91

25.66

189.6

77.46

26.97

153.65

701.15

64.33

89.17

301.67

57.56

45.00

85.97

307.4

114.66

90.35

252.86

86.59

9

28.21

55.57

16.58

49.33

77.53

191.67

41.47

51.02

160.1

105.65

53.62

128.14

69.49

10

134.69

85.58

27.14

57.17

86.24

3.00

51.44

80.40

11.2

131.05

84.50

28.93

78.33

11

53.16

60.29

21.40

48.00

57.18

141.17

44.26

44.78

70.0

112.75

47.05

70.93

63.02

12

43.33

89.14

20.90

55.33

75.06

189.17

48.23

60.32

158.2

122.88

63.39

126.67

70.06

13

96.94

71.49

106.11

56.00

41.31

365.83

47.93

59.22

369.7

122.11

62.23

348.09

69.64

14

103.47

133.70

267.55

55.50

50.30

257.50

54.22

43.99

302.4

138.15

46.22

242.62

56.43

15

243.20

151.00

32.47

71.50

90.61

9.17

59.74

88.55

20.4

152.21

93.06

35.96

76.27

16

187.93

117.10

40.50

68.83

75.71

5.83

55.99

83.98

9.2

142.66

88.25

27.36

76.72

17

163.13

115.90

592.40

65.17

60.39

320.17

52.62

68.45

315.2

134.05

71.93

267.80

71.45

18

232.08

174.30

551.84

61.50

67.27

350.00

59.75

60.16

336.0

152.23

63.22

302.75

62.86

19

283.22

187.70

34.20

74.33

91.78

39.17

62.79

86.69

29.6

159.98

91.10

42.79

73.61

20

124.58

147.60

328.79

57.67

58.32

269.00

55.81

45.99

307.1

142.19

48.34

252.18

56.88

21

96.41

198.20

57.50

62.33

55.87

205.83

59.76

63.12

160.8

152.25

66.33

128.68

64.39

22

274.80

209.40

39.03

70.33

91.59

70.00

63.93

79.23

41.5

162.87

83.26

51.41

69.75

23

426.50

290.50

774.27

73.83

73.88

358.67

69.01

67.55

352.2

175.81

70.99

326.03

61.99

24

369.77

303.10

125.60

67.17

66.80

46.67

69.91

66.29

24.8

178.12

69.66

39.24

61.00

25

163.89

233.00

278.67

59.67

35.87

213.00

62.83

26.10

270.7

160.08

27.42

213.18

40.38

26

252.86

265.20

837.77

69.67

58.77

298.17

64.51

58.18

305.7

164.36

61.14

249.38

59.50

27

523.31

442.30

690.95

70.00

55.15

370.33

76.53

51.88

362.2

194.99

54.52

339.06

51.58

28

147.56

308.70

48.90

70.83

99.22

185.33

66.80

77.90

152.5

170.19

81.87

122.37

67.66

29

482.56

496.10

298.96

71.50

47.56

57.00

78.48

44.38

32.7

199.96

46.64

45.06

47.11

30

440.94

489.00

69.24

70.33

92.85

95.00

77.75

76.84

96.0

198.10

80.75

87.54

62.28

31

609.87

588.60

907.43

76.67

33.96

365.00

82.09

39.58

352.6

209.13

41.60

326.57

43.51

32

539.69

571.10

451.72

71.67

41.21

56.67

81.42

35.65

12.6

207.44

37.47

30.01

41.46

33

449.89

508.50

71.58

71.67

84.80

90.33

78.51

76.99

98.5

200.01

80.91

89.05

62.04

34

258.43

512.40

139.13

79.50

73.67

184.00

76.44

66.96

155.7

194.75

70.37

124.74

58.64

35

624.66

750.10

755.00

88.17

4.65

106.00

87.40

15.38

-13.3

222.67

16.16

6.94

26.28

36

541.65

804.90

455.34

85.00

26.66

178.83

88.47

37.86

142.3

225.40

39.79

115.21

40.98

37

583.99

802.70

107.27

82.83

96.21

105.00

88.79

79.53

122.4

226.20

83.58

102.49

59.29

38

727.91

889.00

805.43

95.83

6.40

88.67

92.40

16.79

30.8

235.40

17.65

43.70

26.71

39

582.57

856.10

132.44

90.50

80.71

107.17

90.32

76.59

127.9

230.10

80.48

105.85

57.69

40

831.79

977.10

1181.01

100.00

2.64

390.00

95.43

18.99

342.8

243.14

19.96

312.94

27.95

Table A.49: Physical measurements, perceptual estimates, and our model’s predictions (Phase 13).

A.6. Colour Appearance Data

Physical measurements Y[ cd/m2 ]

234

Perceptual estimates Z

J0

M0

Colour

X

1

28.46

21.50

3.35

25.67

57.75

2

17.35

17.63

3.89

10.17

9.15

3

22.33

23.80

62.90

39.17

83.97

4

30.84

29.89

4.38

22.17

5

102.99

65.38

4.78

50.00

6

64.61

52.80

106.70

7

19.32

30.00

7.65

8

50.89

56.43

H0

Predicted appearance J

M

H

Q

C

h

s

42.83

27.10

44.11

100.00

15.96

20.19

5.6

68.76

46.43

24.55

80.09

23.7

40.49

21.25

38.45

300.83

33.56

72.31

306.2

70.61

85.13

76.11

250.42

92.16

28.94

113.33

33.41

32.00

69.31

11.83

46.82

71.33

46.9

84.75

33.68

55.22

61.44

15.0

118.78

75.09

31.86

77.50

55.17

87.03

338.33

46.12

28.83

45.20

198.67

32.32

68.13

317.1

116.99

71.71

271.18

76.31

29.96

211.7

81.99

31.53

171.53

171.21

64.17

101.03

299.67

60.45

47.28

84.00

301.4

119.95

88.43

240.55

83.69

9

32.40

55.43

7.58

52.33

64.68

195.83

42.09

46.18

181.0

106.79

48.61

145.64

65.76

10

233.44

141.60

6.84

61.00

83.11

10.17

57.81

83.21

22.1

146.66

87.59

37.26

75.32

11

82.19

75.65

7.80

45.67

49.11

122.50

47.91

46.08

54.1

121.54

48.51

60.24

61.58

12

49.35

87.51

10.63

57.33

65.23

187.50

48.54

52.75

179.2

123.15

55.53

144.03

65.45

13

139.25

100.30

27.98

59.33

43.53

378.33

53.00

59.24

382.9

134.45

62.36

402.95

66.38

14

75.50

124.40

72.85

57.33

61.96

264.83

54.60

49.89

283.1

138.50

52.52

223.16

60.02

15

418.33

248.60

7.15

81.33

100.21

37.83

66.50

90.05

32.0

168.69

94.78

44.58

73.06

16

322.52

193.70

10.00

68.67

84.98

10.67

62.58

85.34

21.7

158.76

89.83

36.93

73.32

17

102.69

111.90

148.02

55.50

56.06

317.00

55.08

61.92

310.1

139.73

65.18

258.08

66.57

18

232.54

200.00

138.40

60.33

53.35

356.67

63.53

51.43

341.5

161.16

54.14

310.98

56.49

19

483.43

298.80

9.50

79.83

94.36

45.00

69.52

87.36

35.8

176.37

91.95

47.35

70.38

20

95.33

140.90

88.35

56.67

42.94

267.83

56.68

47.86

294.7

143.80

50.38

232.89

57.69

21

103.49

188.20

26.16

61.67

58.29

190.00

59.54

55.56

183.7

151.04

58.48

148.11

60.65

22

460.25

309.70

12.52

78.33

105.61

53.33

69.92

80.46

41.0

177.39

84.70

51.05

67.35

23

503.33

368.20

190.79

75.33

81.58

373.67

74.44

61.12

362.1

188.86

64.34

338.92

56.89

24

579.82

415.40

37.59

73.17

71.43

47.50

75.52

68.71

27.5

191.58

72.32

41.28

59.89

25

158.49

227.70

80.50

64.17

41.15

238.67

63.52

33.04

258.7

161.13

34.77

203.97

45.28

26

147.83

234.80

214.67

71.50

62.48

294.17

64.79

61.29

292.9

164.36

64.52

231.34

61.07

27

657.96

533.90

177.46

80.17

52.79

374.50

81.38

49.43

375.0

206.46

52.03

354.22

48.93

28

167.00

292.50

29.23

68.00

101.57

192.17

66.45

61.71

172.1

168.57

64.96

137.80

60.51

29

679.57

591.10

90.37

74.67

38.93

57.50

82.76

47.31

28.5

209.94

49.80

42.03

47.47

30

670.89

587.80

33.47

75.83

89.74

86.33

82.00

65.41

74.8

208.04

68.86

74.11

56.07

31

710.51

656.50

235.77

76.33

31.41

379.17

85.95

36.33

365.6

218.04

38.24

343.14

40.82

32

715.61

655.60

129.02

77.00

34.59

55.33

85.29

39.10

16.9

216.38

41.15

33.35

42.51

33

678.30

603.80

36.53

72.83

70.57

89.83

82.60

63.89

76.0

209.55

67.25

74.89

55.22

34

281.14

479.90

61.97

76.17

65.58

182.33

75.68

55.41

176.8

191.99

58.33

141.84

53.72

35

729.03

787.80

211.55

86.67

8.50

64.17

89.81

16.82

5.0

227.84

17.70

24.14

27.17

36

639.23

809.50

148.55

80.17

34.96

160.83

89.42

31.37

151.3

226.85

33.02

121.55

37.18

37

821.76

868.80

58.23

81.83

81.27

102.33

91.57

59.48

106.7

232.29

62.61

93.53

50.60

38

863.08

933.40

230.01

91.50

10.23

59.17

94.94

18.59

32.3

240.86

19.57

44.79

27.78

39

796.50

902.10

70.22

81.50

66.96

106.67

92.47

55.99

117.9

234.57

58.94

99.81

48.86

40

919.57

1004.00

322.02

99.17

3.23

N/A

97.80

15.87

N/A

248.10

16.70

N/A

25.29

Table A.50: Physical measurements, perceptual estimates, and our model’s predictions (Phase 14).

A.6. Colour Appearance Data

Physical measurements Colour

X

Y[ cd/m2 ]

235

Perceptual estimates Z

J0

M0

Predicted appearance

H0

J

M

H

Q

C

h

s

1

41.90

38.26

21.58

22.50

35.59

398.00

24.21

35.84

1.6

66.24

36.66

21.43

73.56

2

30.09

36.34

21.96

7.17

6.12

166.67

18.72

16.69

53.5

51.22

17.07

59.82

57.08

3

83.74

57.29

330.69

45.83

90.83

301.83

33.53

72.56

311.6

91.77

74.22

260.97

88.92

4

47.43

55.88

23.82

17.67

20.02

114.50

32.44

28.80

68.4

88.77

29.46

69.87

56.96

5

126.35

88.11

29.47

48.00

64.74

7.00

42.33

65.82

6.8

115.84

67.32

25.48

75.38

6

166.52

101.17

563.47

56.33

93.13

328.00

43.72

74.44

318.7

119.66

76.14

274.09

78.87

7

40.19

65.07

38.63

25.33

57.90

200.50

34.16

27.69

190.3

93.49

28.32

154.33

54.42

8

209.69

130.72

920.62

64.00

91.60

299.83

46.49

84.89

306.4

127.22

86.83

250.79

81.69

9

59.78

113.34

29.10

49.50

78.23

213.17

42.83

50.91

163.9

117.21

52.08

131.06

65.91

10

281.38

179.36

40.56

57.00

80.12

3.33

52.88

81.75

13.0

144.72

83.62

30.31

75.16

11

112.75

125.92

35.28

42.50

47.60

130.33

45.82

44.95

71.8

125.41

45.97

72.16

59.87

12

88.44

175.44

35.56

57.83

81.15

209.00

49.04

60.06

161.8

134.21

61.43

129.41

66.90

13

190.25

147.50

143.06

53.17

58.10

368.83

49.39

57.19

374.3

135.15

58.50

353.40

65.05

14

172.98

251.26

350.88

51.50

51.35

272.83

54.75

43.73

294.5

149.84

44.72

232.77

54.02

15

522.12

321.78

49.44

80.83

100.67

10.83

61.69

90.77

21.5

168.83

92.85

36.82

73.33

16

396.83

247.60

59.74

64.67

66.22

7.50

57.69

85.65

11.3

157.87

87.60

29.04

73.66

17

254.56

216.58

774.80

60.50

62.18

318.00

53.69

66.51

313.8

146.93

68.03

265.13

67.28

18

400.00

331.02

717.04

59.50

63.74

348.33

60.76

57.48

336.7

166.27

58.79

303.90

58.80

19

612.21

399.60

52.22

79.67

94.68

40.50

64.92

89.14

30.4

177.66

91.17

43.38

70.83

20

208.73

277.80

430.64

54.33

42.87

286.33

56.44

44.66

303.1

154.47

45.68

243.99

53.77

21

184.48

371.60

86.82

63.67

69.23

187.50

60.02

63.85

164.3

164.26

65.31

131.44

62.35

22

587.87

435.24

59.45

70.00

92.74

50.50

65.81

81.72

41.1

180.10

83.58

51.10

67.36

23

793.36

571.04

1023.32

72.33

69.33

365.50

70.87

66.98

355.2

193.95

68.51

330.07

58.76

24

778.26

617.60

173.40

76.33

69.86

48.50

71.91

69.92

27.6

196.81

71.51

41.34

59.60

25

290.53

434.64

367.67

55.00

45.46

240.83

63.25

28.47

249.1

173.10

29.12

196.89

40.55

26

396.47

482.60

1114.08

75.00

64.08

295.83

65.05

58.98

303.0

178.03

60.32

243.91

57.56

27

1001.32

857.46

905.04

72.83

63.14

378.83

78.40

52.43

367.7

214.56

53.62

345.73

49.43

28

283.60

574.08

76.99

81.83

88.61

186.67

67.06

77.58

155.4

183.52

79.35

124.56

65.02

29

967.56

962.04

396.56

67.33

55.46

59.00

80.22

48.45

37.5

219.54

49.56

48.54

46.98

30

917.54

953.92

105.41

69.83

88.32

86.67

79.50

77.54

91.6

217.55

79.30

84.81

59.70

31

1144.50

1122.80

1212.58

73.50

35.22

378.33

84.07

39.20

357.7

230.06

40.09

333.30

41.28

32

1063.12

1101.20

597.83

71.33

33.85

54.17

83.33

39.12

22.1

228.04

40.01

37.24

41.42

33

936.05

992.68

109.94

74.50

86.41

93.83

80.32

77.45

94.2

219.81

79.22

86.45

59.36

34

484.57

945.68

198.95

78.17

76.84

186.17

76.83

68.62

158.7

210.27

70.19

127.05

57.13

35

1176.26

1418.20

1008.69

87.00

7.88

97.50

89.38

16.39

19.3

244.60

16.76

35.10

25.89

36

1024.12

1512.40

612.35

81.67

42.10

166.17

90.11

41.48

144.8

246.60

42.43

116.87

41.01

37

1190.34

1537.40

165.43

85.50

96.93

103.33

91.01

79.10

120.3

249.08

80.90

101.21

56.35

38

1382.70

1681.80

1073.19

91.00

8.05

87.00

94.87

20.27

53.7

259.63

20.73

59.93

27.94

39

1179.99

1639.60

204.01

86.33

80.22

107.83

92.69

76.19

126.9

253.67

77.93

105.24

54.81

40

1561.99

1852.60

1651.22

99.50

4.07

380.00

98.57

17.60

345.7

269.74

18.00

317.14

25.54

Table A.51: Physical measurements, perceptual estimates, and our model’s predictions (Phase 15).

A.6. Colour Appearance Data

Physical measurements Colour

X

Y[ cd/m2 ]

236

Perceptual estimates Z

J0

M0

Predicted appearance

H0

J

M

H

Q

C

h

s

1

12.36

12.02

6.71

6.67

26.18

395.00

1.00

20.03

0.9

3.27

19.22

20.87

247.65

2

14.21

15.49

6.33

6.83

25.88

87.50

1.00

19.68

48.3

3.27

18.88

56.25

245.47

3

51.61

28.08

234.05

21.33

101.00

301.17

1.00

75.71

310.7

3.27

72.64

259.23

481.44

4

159.01

71.32

13.59

33.33

117.14

2.50

25.85

93.72

-2.6

84.46

89.91

17.67

105.34

5

511.93

135.80

2169.70

40.00

94.91

327.67

40.63

101.43

313.4

132.71

97.31

264.30

87.42

6

418.63

212.50

12.62

48.50

121.52

399.50

44.34

104.78

16.7

144.82

100.52

33.15

85.06

7

585.45

233.60

2808.28

48.67

89.46

305.33

45.39

105.75

302.8

148.28

101.45

243.40

84.45

8

264.38

256.10

6.70

38.17

54.78

98.33

44.59

80.01

87.2

145.66

76.76

82.03

74.12

9

119.30

325.00

99.72

44.67

94.71

195.67

45.70

68.08

189.4

149.27

65.32

153.46

67.53

10

957.14

609.00

74.48

64.17

85.15

12.83

57.55

94.79

28.0

187.99

90.94

41.61

71.01

11

211.41

708.70

39.54

58.33

96.46

199.17

55.60

103.37

168.3

181.63

99.17

134.61

75.44

12

657.73

808.80

138.87

43.33

68.82

148.67

59.44

65.66

111.3

194.16

62.99

96.06

58.15

13

965.47

890.40

904.20

53.50

66.49

360.00

61.80

47.30

358.3

201.87

45.38

334.11

48.41

14

1431.62

928.80

464.40

60.67

91.11

5.17

63.56

84.73

-8.3

207.62

81.28

12.07

63.88

15

439.37

972.10

1256.21

58.50

78.05

247.50

60.57

70.49

263.6

197.85

67.63

207.69

59.69

16

1128.14

1053.00

2896.00

62.33

60.79

309.83

63.81

64.40

312.2

208.44

61.79

262.14

55.59

17

337.86

1084.00

63.89

60.00

97.23

196.67

61.43

105.91

166.4

200.66

101.61

133.09

72.65

18

1548.81

1519.00

2556.04

60.17

47.17

334.17

69.35

49.32

325.8

226.55

47.31

286.41

46.66

19

2090.34

1573.00

16.19

63.33

105.80

51.17

70.23

115.08

84.2

229.42

110.41

80.13

70.83

20

2207.29

1598.00

3118.45

61.50

79.90

370.33

71.39

67.01

348.0

233.21

64.28

320.38

53.60

21

2223.13

1882.00

34.78

64.17

95.39

52.00

72.70

110.68

93.4

237.49

106.18

85.95

68.27

22

1498.01

2092.00

1956.12

61.50

44.62

253.33

73.36

32.71

282.6

239.65

31.38

222.78

36.95

23

2477.33

2169.00

697.35

66.83

57.64

39.17

75.54

59.96

32.2

246.74

57.52

44.72

49.30

24

1096.26

2361.00

320.34

69.17

82.15

190.83

74.03

84.74

160.9

241.81

81.29

128.71

59.20

25

2971.35

2735.00

2586.56

69.67

51.42

387.83

79.63

44.22

363.0

260.13

42.42

340.06

41.23

26

1996.49

2798.00

3759.46

73.00

66.34

288.50

78.26

46.00

293.1

255.66

44.13

231.57

42.42

27

1999.92

2992.00

1854.77

73.33

44.31

231.00

79.41

30.24

204.1

259.40

29.01

166.76

34.14

28

3451.11

3706.00

1282.88

74.33

30.81

59.33

84.75

44.61

67.0

276.83

42.80

68.97

40.14

29

3511.12

3739.00

3300.11

78.50

20.09

393.33

85.09

28.26

353.1

277.97

27.12

327.28

31.89

30

3543.47

3909.00

53.29

77.17

107.60

92.00

84.78

119.38

121.6

276.93

114.53

102.01

65.66

31

2040.48

3922.00

247.14

76.67

101.35

190.33

83.32

99.50

148.5

272.17

95.46

119.54

60.46

32

3806.31

4037.00

1768.59

77.67

32.40

53.33

86.65

37.30

45.5

283.04

35.78

54.24

36.30

33

3336.75

4281.00

2187.57

85.17

16.74

197.50

87.11

24.81

125.4

284.54

23.80

104.29

29.53

34

3723.35

4348.00

104.65

79.17

110.79

94.50

86.88

112.21

123.8

283.80

107.65

103.33

62.88

35

2496.79

4348.00

466.46

77.67

78.83

172.50

85.89

84.52

147.0

280.56

81.08

118.42

54.89

36

3595.13

5319.00

1464.41

81.17

50.63

184.17

91.27

51.61

144.0

298.16

49.52

116.38

41.61

37

4016.11

5489.00

172.26

84.17

102.45

105.00

91.50

108.89

132.3

298.90

104.47

108.58

60.36

38

4494.54

5512.00

2632.77

87.17

17.47

101.67

93.12

26.94

96.2

304.17

25.85

87.64

29.76

39

4762.91

5719.00

4159.67

96.50

2.99

N/A

94.19

9.54

N/A

307.67

9.15

N/A

17.61

40

4116.73

5933.00

340.81

83.67

108.50

118.00

93.39

96.16

134.6

305.06

92.25

110.07

56.14

Table A.52: Physical measurements, perceptual estimates, and our model’s predictions (Phase 16).

A.6. Colour Appearance Data

Physical measurements Colour

X

Y[ cd/m2 ]

237

Perceptual estimates Z

J0

M0

Predicted appearance

H0

J

M

H

Q

C

h

s

1

31.89

36.11

20.98

2.00

4.42

380.00

1.00

12.89

26.1

3.26

12.37

40.20

198.76

2

34.40

40.08

21.13

2.17

5.55

N/A

1.00

13.88

N/A

3.26

13.32

N/A

206.25

3

73.95

54.12

257.68

15.83

94.28

303.33

1.00

58.67

315.2

3.26

56.31

267.81

424.00

4

184.53

98.45

29.46

22.83

102.07

398.83

1.00

71.14

-3.8

3.26

68.28

16.45

466.90

5

543.14

162.50

2238.20

32.00

91.97

299.33

34.47

95.55

316.3

112.51

91.70

269.84

92.15

6

450.86

242.70

27.47

38.83

101.58

398.00

38.69

87.98

10.6

126.25

84.43

28.49

83.48

7

623.63

267.00

2907.38

41.17

86.49

300.00

39.94

98.67

306.6

130.34

94.69

251.20

87.01

8

296.72

292.80

22.27

37.33

60.28

102.50

39.16

61.51

76.9

127.79

59.03

75.48

69.38

9

141.17

356.20

115.61

41.33

79.43

199.17

39.87

58.44

189.6

130.12

56.08

153.67

67.02

10

1006.43

653.20

92.51

55.67

76.72

10.17

52.35

86.52

22.5

170.83

83.03

37.54

71.17

11

241.28

763.10

57.11

55.50

86.23

213.33

50.48

89.85

171.2

164.73

86.23

137.10

73.85

12

695.66

855.60

157.55

46.67

48.92

127.50

54.14

58.66

108.5

176.69

56.29

94.53

57.62

13

1026.10

953.90

955.07

50.33

54.12

356.67

56.74

45.16

358.3

185.19

43.34

334.15

49.38

14

1510.38

993.00

501.16

61.17

78.35

2.83

58.57

81.47

-8.4

191.15

78.19

11.94

65.28

15

472.89

1029.00

1309.59

53.00

74.05

264.33

55.37

67.31

264.9

180.71

64.60

208.70

61.03

16

1185.02

1115.00

3004.45

62.67

63.30

308.00

58.71

65.61

313.5

191.60

62.97

264.62

58.52

17

362.11

1129.00

77.16

58.17

83.18

211.67

56.09

95.82

169.2

183.05

91.96

135.41

72.35

18

1621.87

1596.00

2662.74

61.83

51.85

339.50

64.40

51.03

325.9

210.17

48.97

286.63

49.27

19

2189.50

1655.00

28.66

68.67

109.71

50.00

65.56

103.18

72.2

213.95

99.02

72.38

69.44

20

2296.66

1676.00

3235.94

64.00

74.84

370.83

66.51

68.99

347.2

217.04

66.21

319.20

56.38

21

2328.70

1978.00

44.38

66.00

92.75

50.00

68.14

101.10

84.4

222.38

97.03

80.27

67.43

22

1563.74

2183.00

2029.92

62.50

50.43

265.67

68.54

33.56

282.5

223.70

32.20

222.70

38.73

23

2574.96

2262.00

731.73

68.83

56.87

33.67

70.87

61.37

29.9

231.30

58.89

43.04

51.51

24

1154.15

2475.00

339.74

68.17

81.38

190.00

69.39

83.65

162.6

226.46

80.28

130.02

60.78

25

3091.92

2858.00

2681.91

67.83

59.53

374.17

75.34

46.24

363.2

245.88

44.37

340.20

43.37

26

2094.67

2927.00

3899.40

70.50

54.51

294.17

73.91

48.40

295.1

241.19

46.45

233.23

44.80

27

2083.35

3114.00

1920.36

72.17

40.21

223.50

75.04

32.12

202.4

244.89

30.82

165.72

36.21

28

3600.76

3865.00

1342.36

80.33

27.05

55.00

81.08

47.16

64.6

264.59

45.26

67.36

42.22

29

3635.44

3877.00

3417.92

76.67

26.87

384.50

81.31

30.28

352.9

265.35

29.06

327.04

33.78

30

2109.65

4042.00

272.38

78.33

90.55

175.50

79.32

97.76

150.2

258.85

93.82

120.73

61.46

31

3670.72

4054.00

65.44

75.33

102.33

88.67

81.26

111.77

117.1

265.20

107.27

99.39

64.92

32

3981.29

4230.00

1857.74

74.83

34.38

53.00

83.36

39.97

44.6

272.06

38.35

53.63

38.33

33

3437.79

4418.00

2249.42

80.33

17.46

155.00

83.51

27.11

126.6

272.54

26.02

105.01

31.54

34

3842.10

4495.00

120.09

75.00

96.35

93.00

83.57

107.16

120.5

272.73

102.84

101.34

62.68

35

2596.52

4523.00

485.99

79.17

79.81

175.83

82.39

86.19

148.0

268.88

82.72

119.16

56.62

36

3767.22

5573.00

1544.55

84.50

42.95

176.67

88.91

55.05

144.6

290.16

52.83

116.75

43.56

37

4644.97

5701.00

2723.72

88.00

18.96

92.17

90.76

29.58

96.4

296.20

28.39

87.81

31.60

38

4181.37

5707.00

201.80

86.67

97.32

102.50

89.25

105.89

131.0

291.27

101.62

107.77

60.29

39

4942.49

5924.00

4327.15

96.67

2.35

N/A

92.15

10.74

N/A

300.73

10.31

N/A

18.90

40

4274.83

6157.00

360.46

81.67

93.56

109.17

91.41

97.24

134.1