Supplementary 3: Comparison Results

(Ligang Liu, Renjie Chen, Lior Wolf, Daniel Cohen-Or. Optimizing Photo Composition, Eurographics 2010)

Test-I: General comparisons

In this test we compare our approach to the existing recomposition methods of [Suh et al. 2003] ("Automatic Thumbnail Cropping and its Effectiveness") and [Santella et al. 2006] ("Gaze-Based Interaction for Semi-Automatic Photo Cropping"). The photos in column (a) are the original images. The results generated by our approach are shown in column (b). The results of [Santella et al. 2006] and [Suh et al. 2003] are shown in columns (c) and (d), respectively.

[Suh et al. 2003] maximizes the saliency contained in the crop and aims to create thumbnail images that are easily recognizable. [Santella et al. 2006] uses simple composition rules, such as the rule of thirds, to maximize content area and features. Our approach considers richer composition rules and uses a retargeting operator to optimize the relative positions of the objects in the image.

The results show that our approach performs better than the other two approaches for most of the images, especially those with prominent visual lines.

Note: instead of using eye-tracking data, we run the algorithm of [Santella et al. 2006] with the same saliency map used by the other approaches.

(a) (b) (c) (d)
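The saliency-maximizing cropping idea of [Suh et al. 2003] can be illustrated by searching for the smallest window that retains most of a precomputed saliency map. The following sketch is an illustrative assumption, not the paper's exact algorithm: the exhaustive window search, the 90% default threshold, and the list-of-lists saliency representation are all simplifications.

```python
# Hedged sketch of saliency-based thumbnail cropping in the spirit of
# [Suh et al. 2003]: find the smallest axis-aligned window that keeps
# at least `frac` of the total saliency. The brute-force search and the
# threshold are illustrative assumptions, not the paper's algorithm.

def crop_by_saliency(saliency, frac=0.9):
    """Return (top, left, height, width) of the smallest window whose
    summed saliency is at least `frac` of the image total."""
    h, w = len(saliency), len(saliency[0])
    # Summed-area table: sat[y][x] = sum of saliency[:y][:x],
    # so any window sum can be read off in O(1).
    sat = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            sat[y + 1][x + 1] = (saliency[y][x] + sat[y][x + 1]
                                 + sat[y + 1][x] - sat[y][x])
    total = sat[h][w]
    best = (0, 0, h, w)  # fall back to the full image
    for hh in range(1, h + 1):
        for ww in range(1, w + 1):
            if hh * ww >= best[2] * best[3]:
                continue  # cannot improve on the current best area
            for top in range(h - hh + 1):
                for left in range(w - ww + 1):
                    s = (sat[top + hh][left + ww] - sat[top][left + ww]
                         - sat[top + hh][left] + sat[top][left])
                    if s >= frac * total:
                        best = (top, left, hh, ww)
    return best
```

For example, on a map whose saliency is concentrated in a central 2x2 block, `crop_by_saliency` with `frac=1.0` returns exactly that block; practical systems would restrict candidate windows to a few aspect ratios for speed.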



Test-II: Comparisons on casual images

In addition to artistic works, the composition of casual images can sometimes also be improved. Since the approach of [Suh et al. 2003] aims to create thumbnail images that are easily recognizable, we do not compare our approach with it in this test. We compare only with the approach of [Santella et al. 2006], on arbitrarily selected casual images.

Column (a): original images;
Column (b): the results generated by our approach;
Column (c): the results generated by the approach of [Santella et al. 2006].

(a) (b) (c)
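As a concrete illustration of the kind of simple composition rule used by [Santella et al. 2006] (the rule of thirds, per Test-I), one can score how close a subject's center lies to one of the four "power points" where the third lines intersect. The linear falloff and the normalization below are illustrative assumptions, not the paper's exact term.

```python
import math

# Hedged sketch of a rule-of-thirds placement score: 1.0 when a subject
# center sits exactly on one of the four intersections of the third
# lines ("power points"), decaying linearly with distance. The linear
# falloff and the normalization constant are illustrative assumptions.

def thirds_score(cx, cy, width, height):
    power_points = [(width * i / 3.0, height * j / 3.0)
                    for i in (1, 2) for j in (1, 2)]
    d = min(math.hypot(cx - px, cy - py) for px, py in power_points)
    # Normalize by the distance from a power point to the image center,
    # doubled, so a subject at the exact center scores 0.5.
    return max(0.0, 1.0 - d / math.hypot(width / 3.0, height / 3.0))
```

A recomposition method of this flavor would search over crops for the one whose dominant subject maximizes such a score while retaining enough content area.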


Test-III: Comparisons on benchmark images

In this test we use the Berkeley segmentation benchmark images:
http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/BSDS300/html/dataset/images.html

As these images come with segmentations provided on the website, it is relatively easy to detect the ROI objects in them. We test our approach and the approach of [Santella et al. 2006] on a contiguous subset of the benchmark ("training images 101-150"). The benchmark images are shown in column (a). The results generated by our approach and by the approach of [Santella et al. 2006] are shown in columns (b) and (c), respectively.

As can be seen, most of the benchmark images leave little room for improving the composition. Both our approach and that of [Santella et al. 2006] therefore perform similarly on these images, and it is understandable that neither approach introduces much change for many images in the set.

(a) (b) (c)
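Given a per-pixel segmentation like the ones the benchmark provides, one simple way to localize an ROI object is to take the bounding box of a segment's pixels. The integer-label-per-pixel representation and the helper below are assumptions for illustration, not the exact ROI-detection step used in our experiments.

```python
# Hedged sketch: localize an ROI from a segmentation map, assuming the
# map is given as an integer label per pixel. Taking a segment's
# bounding box is one simple choice, not the experiments' exact step.

def roi_bbox(labels, target):
    """Bounding box (top, left, bottom, right), inclusive, of every
    pixel whose segment label equals `target`; None if absent."""
    box = None
    for y, row in enumerate(labels):
        for x, lab in enumerate(row):
            if lab != target:
                continue
            if box is None:
                box = [y, x, y, x]
            else:
                box[0] = min(box[0], y)
                box[1] = min(box[1], x)
                box[2] = max(box[2], y)
                box[3] = max(box[3], x)
    return None if box is None else tuple(box)
```

With such boxes in hand, a recomposition method can reason about object positions directly instead of relying on a saliency map alone.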
