Analysis of the 2006 IPA Proofing Roundup Data
William B. Birkett
Charles Spontelli
CGATS TF1
November 2006
Mesa, AZ
Mission Statement
TF1 - Objective Color Matching
Development of a method based on colorimetric measurements which will estimate the probability that hardcopy images reproduced by single or multiple systems, using identical input, will appear similar to the typical human observer.
Assumptions
◊ Colorimetry works (patches with the same color values appear identical).
◊ Our application of colorimetry is correct.
◊ Visual illusions are insignificant.
Assumptions
◊ Our test targets provide a good sampling of the colors used in images.
◊ Our color spaces are homogeneous - no discontinuities.
Assumptions
◊ Test target data correlates with the color of images.
◊ Test target data correlates with the judgment of human observers (using methods yet to be determined).
Expectations
◊ Two prints will match perfectly if the measured colors of all corresponding patches in the test targets are identical.
◊ The quality level of a match can be gauged by some statistical measure of test target errors (see the sketch below).
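As an illustration of what such a statistical measure could look like, here is a minimal Python sketch. It is not from the original study; the per-patch deltaE input values are hypothetical, and any deltaE formula could supply them.

```python
import numpy as np

def match_statistics(delta_e):
    """Candidate summary statistics over per-patch color errors
    between two prints' test targets (hypothetical input)."""
    de = np.asarray(delta_e, dtype=float)
    return {
        "mean": de.mean(),             # the classic "average deltaE"
        "p95": np.percentile(de, 95),  # tail error, less forgiving than the mean
        "max": de.max(),               # the single worst patch
    }

# A perfect colorimetric match: every patch error is zero.
print(match_statistics([0.0, 0.0, 0.0]))
# An imperfect match: the mean can look small while one patch is far off.
print(match_statistics([0.3, 0.5, 4.2]))
```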
Question
◊ Is it possible for two prints to match when the measured colors of corresponding patches are different?
Answer
That depends on how you define match:
◊ Colorimetric matching requires that all colors are literally identical.
◊ Appearance matching depends on the illusion of differently colored prints appearing the same.
Examples
◊ Reproducing a color transparency on a printed sheet (smaller gamut).
◊ Printing on uncoated paper to match a coated paper (smaller gamut).
◊ Printing on a bluish paper to match a neutral paper (white point).
Reinventing the Wheel?
◊ Much work has already been done on appearance matching.
◊ For instance, CIECAM02.
◊ Can we adapt this work to our needs? (See the sketch below.)
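As a hedged sketch of what adapting CIECAM02 might involve, the following uses the colour-science Python package (an assumption; TF1 specified no tooling). It computes appearance correlates for the same measured XYZ patch under two different adapting whites: identical colorimetry yields different predicted appearance, which is exactly the distinction between colorimetric and appearance matching.

```python
import colour  # colour-science package (assumed available)

# Hypothetical patch and viewing conditions - illustrative values only.
XYZ = [19.01, 20.00, 21.78]                 # measured patch, 0-100 scale
whites = {
    "neutral paper": [95.05, 100.00, 108.88],
    "bluish paper":  [94.90, 100.00, 120.00],
}
L_A = 60.0   # adapting luminance in cd/m^2 (viewing-booth guess)
Y_b = 20.0   # relative background luminance

for name, XYZ_w in whites.items():
    spec = colour.XYZ_to_CIECAM02(XYZ, XYZ_w, L_A, Y_b)
    print(f"{name}: J={spec.J:.1f}  C={spec.C:.1f}  h={spec.h:.1f}")
```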
2006 IPA Proofing Roundup
◊ Reference press sheets printed with the help of GRACoL experts.
◊ Test targets cut from selected press sheets and given to the participants.
◊ Proofs made to “match the numbers” of these test targets.
2006 IPA Proofing Roundup
◊ Human judges evaluate the quality of the match to the press sheets, based on the appearance of images and other test elements.
2006 IPA Proofing Roundup
◊ Spectral measurements made of all test targets - press sheets and proofs.
◊ Can we correlate these measurements to the scores given by the judges?
Average deltaE?
◊ How about our old favorite, average deltaE?
◊ This has already been tested, but let’s review the data. (A computation sketch follows.)
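For concreteness, computing an average deltaE 2000 between a press sheet’s and a proof’s target measurements might look like this sketch. The file names and the colour-science package are assumptions, not part of the roundup workflow.

```python
import numpy as np
import colour  # colour-science package (assumed)

# Hypothetical inputs: one CIELAB row (L*, a*, b*) per patch,
# with press-sheet and proof patches in the same order.
press_lab = np.loadtxt("press_target_lab.txt")
proof_lab = np.loadtxt("proof_target_lab.txt")

de2000 = colour.difference.delta_E_CIE2000(press_lab, proof_lab)
print(f"average dE2000 = {de2000.mean():.2f}  (max = {de2000.max():.2f})")
```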
[Scatter plot: Overall Score (0–40) vs. Average deltaE 2000 (0–1.5)]
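The correlation behind a plot like this can be checked directly; here is a sketch with placeholder numbers (not the actual roundup data, which had many more proofs):

```python
from scipy import stats

# Placeholder per-vendor values, for illustration only.
avg_de = [0.54, 0.60, 0.72, 0.85, 1.10]   # average dE2000 vs. press sheet
scores = [29.0, 36.0, 33.0, 31.0, 34.0]   # judges' overall score (0-40)

r, p = stats.pearsonr(avg_de, scores)
print(f"Pearson r = {r:.2f}, p = {p:.2f}")  # weak r: no useful correlation
```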
Average deltaE?
◊ Again, no useful correlation from this measurement.
◊ Note that the average deltaE is only about 0.7, which is a barely detectable difference in adjacent color patches.
Does this Make Sense?
◊ Significant differences were reported by the judges, yet the measured data is virtually identical.
◊ This is the same result that has baffled us in previous TF1 studies.
Our Experiment:
◊ Use the measured data to make simulated test prints, and compare those prints with the same judging criteria.
Our Experiment:
◊ We decided to compare the best and the worst scoring proofs:
◊ Vendor 19 (Avg dE = 0.60) (best)
◊ Vendor 35 (Avg dE = 0.54) (worst)
◊ We made ICC profiles from the four datasets using PM 5.0.7.
Our Experiment:
◊ Then, we made prints of the IPA test file using an Epson 4800 printer, one for each of the four data sets. The prints were made over a period of about 30 minutes (one after another). We did a nozzle test before and after to ensure consistency.
Our Experiment:
◊ The prints were judged by a group of 29 graphic arts students at BGSU. We gave them the very same judging sheet that was used by the IPA. They compared the prints in a D50 standard viewing booth, after an explanation of the judging criteria.
The Results:

Judge    Vendor 19    Vendor 35
1        38           36
2        31           30
3        33           36
4        28           32
5        36           36
6        33           32
7        33           32
8        36           35
9        38           33
10       36           34
11       36           25
12       35           32
13       35           36
14       34           38
15       35           36
16       35           37
17       35           36
18       34           36
19       31           34
20       29           35
21       34           31
22       32           30
23       34           37
24       37           40
25       35           37
26       32           33
27       33           34
28       32           35
29       34           33
Average  33.93        34.17

BGSU’s average scores are virtually identical, with the IPA’s worst match just slightly better than the IPA’s best.
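The averages can be verified, and a paired test run on the two columns, in a few lines of Python. The significance test is our illustration; the slide itself only compares the means.

```python
from scipy import stats

# Scores transcribed from the table above (judges 1-29).
vendor_19 = [38, 31, 33, 28, 36, 33, 33, 36, 38, 36, 36, 35, 35, 34,
             35, 35, 35, 34, 31, 29, 34, 32, 34, 37, 35, 32, 33, 32, 34]
vendor_35 = [36, 30, 36, 32, 36, 32, 32, 35, 33, 34, 25, 32, 36, 38,
             36, 37, 36, 36, 34, 35, 31, 30, 37, 40, 37, 33, 34, 35, 33]

print(f"means: {sum(vendor_19) / 29:.2f} vs {sum(vendor_35) / 29:.2f}")  # 33.93 vs 34.17
t, p = stats.ttest_rel(vendor_19, vendor_35)
print(f"paired t = {t:.2f}, p = {p:.2f}")  # large p: no significant difference
```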
Conclusion
These data sets do not contain information indicating that one pair matches better than the other.
Possible Explanations
◊ Our simulation proofs did not represent the data sets accurately enough.
◊ Color sampling of the data sets is too coarse to pick up subtle differences in the proofs.
Possible Explanations
◊ Viewing light was not D50, causing metamerism.
◊ Color gradients in the press sheets created differences between images and data sets.
Possible Explanations
◊ Non-color attributes such as gloss and bronzing account for differences in the judging.
◊ UV/optical brightener effects caused color differences (some measurements used a UV-cut filter while others didn’t).
Future Work
◊ More tests to establish the actual cause(s) of color matching differences among the IPA test proofs.
◊ Eliminate as many variables as possible when doing color research.
Recommendations
◊ Nearly perfect colorimetric matching is now routine among proofing systems.
◊ There are other causes of matching failure that need to be considered.
◊ Match quality is not a one-to-one function of average deltaE.
Match Quality vs. Average deltaE
[Conceptual chart: Match Quality (vertical axis) vs. Average deltaE (horizontal axis)]
Recommendation
◊ Match quality measurement should be built upon a quantitative understanding of appearance matching.
Actions
◊ Investigate the nature of appearance matching as it applies to print/proof comparisons.
◊ Test potential match quality measures for correlation with visual assessments.
Actions
◊ Testing should be done with methods that avoid “unexplainable results.”
◊ Tests should include comparisons of prints that match poorly.
Actions
◊ When functional measures are found, test them outside of TF1.
◊ If outside testing is successful, publish our results.