Supplementary 3: Comparison Results (Ligang Liu, Renjie Chen, Lior Wolf, Daniel Cohen-Or. Optimizing Photo Composition, Eurographics 2010)
Test-I: General comparisons
In this test we compare our approach with two existing recomposition methods: [Suh et al. 2003] (“Automatic
Thumbnail Cropping and its Effectiveness”) and [Santella et al. 2006] (“Gaze-Based Interaction for
Semi-Automatic Photo Cropping”). The photos in column (a) are the original images. The
results generated by our approach are shown in column (b). The results of [Santella et al. 2006] and [Suh et
al. 2003] are shown in columns (c) and (d), respectively.
[Suh et al. 2003] maximizes the crop’s saliency and aims to create thumbnail images that are easily
recognizable.
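To make the contrast concrete, a saliency-maximizing crop of the kind attributed to [Suh et al. 2003] can be illustrated by a brute-force window search over a saliency map. This is a hypothetical sketch for illustration only, not the authors' actual algorithm; the function name, stride, and fixed crop size are our own assumptions.

```python
import numpy as np

def best_saliency_crop(saliency, crop_h, crop_w, stride=8):
    # Hypothetical sketch: exhaustively slide a fixed-size window over the
    # saliency map and keep the window containing the most total saliency.
    H, W = saliency.shape
    best, best_box = -1.0, None
    for y in range(0, H - crop_h + 1, stride):
        for x in range(0, W - crop_w + 1, stride):
            s = saliency[y:y + crop_h, x:x + crop_w].sum()
            if s > best:
                best, best_box = s, (x, y, crop_w, crop_h)
    return best_box  # (x, y, width, height)

# Toy saliency map with a single bright blob off-centre.
sal = np.zeros((120, 160))
sal[70:100, 100:140] = 1.0
box = best_saliency_crop(sal, 60, 80)
```

Such a crop captures the salient content well (good for thumbnails) but pays no attention to where the content sits inside the frame, which is exactly the aspect the composition-based approaches address.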
[Santella et al. 2006] uses simple composition rules, such as the rule of thirds, to maximize content area and
features.
Our approach considers richer composition rules and uses a retargeting operator to optimize the relative
positions of the objects in the image.
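A rule-of-thirds term of the kind mentioned above can be sketched as the distance from an object's centre to the nearest of the four "power points" where the third lines intersect. This is an illustrative scoring function under our own assumptions, not the scoring actually used by either paper.

```python
import math

def thirds_score(cx, cy, width, height):
    # Hypothetical rule-of-thirds term: distance from an object centre
    # (cx, cy) to the nearest intersection of the third lines, normalised
    # by the image diagonal. Lower is better.
    power_points = [(width * i / 3.0, height * j / 3.0)
                    for i in (1, 2) for j in (1, 2)]
    d = min(math.hypot(cx - px, cy - py) for px, py in power_points)
    return d / math.hypot(width, height)

# An object sitting exactly on a power point scores 0.
print(thirds_score(100, 100, 300, 300))  # prints 0.0
```

An optimizer can then move or crop the frame so that such per-object terms, summed over the detected objects, are minimized.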
The results show that our approach performs better than the other two approaches on most of the images,
especially those with prominent visual lines.
Note: instead of using eye-tracking data, we run the algorithm of [Santella et al. 2006] with the same
saliency map used by the other approaches.
(a) (b) (c) (d)
Test-II: Comparisons on casual images
In addition to artistic works, the composition of casual images can sometimes also be improved. As the
approach of [Suh et al. 2003] aims to create thumbnail images that are easily recognizable, we do not
compare our approach with it in this test. We only compare with the approach of [Santella et al. 2006], on
arbitrarily selected casual images.
Column (a): original images;
Column (b): the results generated by our approach;
Column (c): the results generated by the approach of [Santella et al. 2006].
(a) (b) (c)
Test-III: Comparisons on benchmark images
We use a set of benchmark images from Berkeley in this test:
http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/BSDS300/html/dataset/images.html
As segmentations of these images are already provided on the website, it is relatively easy to detect the
ROI objects in the images. We test our approach and the approach of [Santella et al. 2006] on a continuous
subset (“training images 101-150”) of the benchmark. The benchmark images are shown in column (a).
The results generated by our approach and by the approach of [Santella et al. 2006] are shown in columns (b) and (c),
respectively.
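The step of turning the benchmark's ground-truth segmentations into ROI objects can be sketched as extracting a bounding box per labelled region. The helper below is a hypothetical illustration under our own assumptions (label-map input, axis-aligned boxes), not the paper's actual detection code.

```python
import numpy as np

def roi_bbox(seg, label):
    # Hypothetical helper: bounding box (x0, y0, x1, y1) of one region
    # in an integer segmentation label map; None if the label is absent.
    ys, xs = np.nonzero(seg == label)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy label map with one region labelled 7 (rows 1-3, cols 2-5).
seg = np.zeros((5, 8), dtype=int)
seg[1:4, 2:6] = 7
print(roi_bbox(seg, 7))  # prints (2, 1, 5, 3)
```

With boxes like these as ROI objects, both approaches can be run on the benchmark without any manual object annotation.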
As can be seen, most of the benchmark images leave little room for their composition to be optimized. Both
our approach and that of [Santella et al. 2006] therefore perform similarly on these images, and it is
understandable that neither approach introduces much change for many images in the set.
(a) (b) (c)