arXiv:2002.03165v1 [cs.GR] 8 Feb 2020




DEEP NO-REFERENCE TONE MAPPED IMAGE QUALITY ASSESSMENT

Chandra Sekhar Ravuri1, Rajesh Sureddi2, Sathya Veera Reddy Dendi2, Shanmuganathan Raman1, Sumohana S. Channappayya2

1Department of Electrical Engineering, Indian Institute of Technology Gandhinagar, India. 2Department of Electrical Engineering, Indian Institute of Technology Hyderabad, India.

ABSTRACT

The process of rendering high dynamic range (HDR) images to be viewed on conventional displays is called tone mapping. However, tone mapping introduces distortions in the final image which may lead to visual displeasure. To quantify these distortions, we introduce a novel no-reference quality assessment technique for these tone mapped images. This technique is composed of two stages. In the first stage, we employ a convolutional neural network (CNN) to generate quality aware maps (also known as distortion maps) from tone mapped images by training it with the ground truth distortion maps. In the second stage, we model the normalized image and distortion maps using an Asymmetric Generalized Gaussian Distribution (AGGD). The parameters of the AGGD model are then used to estimate the quality score using support vector regression (SVR). We show that the proposed technique delivers competitive performance relative to the state-of-the-art techniques. The novelty of this work is its ability to visualize various distortions as quality maps (distortion maps), especially in the no-reference setting, and to use these maps as features to estimate the quality score of tone mapped images.

Index Terms— High dynamic range images, image quality assessment, convolutional neural networks, tone mapping.

1. INTRODUCTION

A real world scene contains a very high dynamic range (HDR) in the radiance space, and it is not possible to capture all the intensity levels in the scene. Normal cameras typically capture on the order of 256 levels per channel, but with advances in camera technology we are able to capture a higher intensity range, on the order of 10,000 levels. These images are called HDR images. HDR images convey visual pleasure and detail while capturing the scene, which looks almost like the real world. Visualizing HDR images requires special displays which are very expensive. In order to display HDR images on normal displays, we need to map from the high dynamic range to a lower dynamic range (LDR), on the order of 256 levels. This process is called tone mapping. Even though the image looks pristine in HDR, we may lose some visual information because of this compression of the intensity dynamic range.

The tone mapping operator should be chosen based on how well it preserves the perceptual information present in the HDR image while producing the tone mapped image. This requirement makes it necessary to assess the perceptual quality of a tone mapped image. The best way to assess the quality of these images is via human opinion scores. However, collecting human scores is an expensive and time-consuming process. This drawback of subjective quality assessment necessitates the design of objective quality assessment algorithms. In this context, algorithms like the tone mapped image quality index (TMQI) [1] were designed with the requirement of the original HDR image as a reference to compare with the tone mapped image. However, it is not always possible to store HDR images, which are memory intensive. Hence, there is a need for quality assessment metrics that do not require a reference, also called no-reference image quality assessment (NRIQA). One of the major distortions that can occur in the tone mapping process is contrast distortion, given that the HDR images contain only static objects. We address this problem in the proposed work.

In this work, we propose a CNN based NRIQA algorithm for measuring tone mapped image quality. In brief, the proposed approach has two stages. In the first stage, a test image is mapped to the quality-aware domain (distortion maps), and in the second stage, the quality-aware representations are mapped to a single perceptual quality score.
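To make the dynamic range compression concrete, here is a minimal sketch of a global tone mapping operator in the style of Reinhard et al. [14]; the function name and the key value a = 0.18 (middle grey) are illustrative choices, not the paper's.

```python
import numpy as np

def tone_map_global(hdr, a=0.18, eps=1e-6):
    """Compress HDR luminance to 8 bits with a simple Reinhard-style
    global operator: Ld = L / (1 + L)."""
    # Scale scene luminance so its log-average maps to the key value a.
    log_mean = np.exp(np.mean(np.log(hdr + eps)))
    L = a * hdr / log_mean
    Ld = L / (1.0 + L)                    # compress into [0, 1)
    return np.round(Ld * 255).astype(np.uint8)

# Radiances spanning five orders of magnitude all fit in 256 levels.
hdr = np.array([[0.01, 1.0], [100.0, 1000.0]])
ldr = tone_map_global(hdr)
```

Note how the compression is monotonic but strongly nonlinear: the brightest regions are squeezed into a few output levels, which is exactly where the contrast distortions discussed below originate.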

We briefly review existing NRIQA techniques for both LDR and HDR images. NIQE [2] is an opinion unaware (i.e., does not use subjective scores) and distortion unaware (i.e., does not rely on distortion type) NRIQA technique. IL-NIQE [3] is a variant of NIQE based on multiple cues. The DIIVINE [4] framework identifies the type of distortion first and assesses the image quality next. Another blind image quality assessment technique, by Gu et al. [5], uses deep neural networks (DNN) with NSS features as input and treats image quality assessment as a classification task. Kang et al. [6] proposed an NRIQA technique using CNNs. HDR images have very few NRIQA techniques, unlike LDR images. Gu et al. [7] proposed an NRIQA technique for tone mapped images that uses the naturalness and structural information of the tone mapped image to predict the quality of the tone


mapped image. HIGRADE [8] is an NRIQA algorithm based on gradient scene statistics defined in the LAB color space. DRIIQA is a full reference image quality assessment (FRIQA) technique by Aydin et al. [9] which generates a quality map of the test image with respect to its reference image. These maps are called distortion maps, and they localize the distortion in space. However, this technique does not give a single quality score for a test image. After studying the existing techniques, we propose a new framework for NRIQA of tone mapped images by generating the distortion maps of [9] using a CNN and extracting features from them to map to a single quality score. Our algorithm is explained in the following sections.

2. PROPOSED APPROACH

We follow the work in [9], where distortion maps are generated using both the HDR and the tone mapped image to visualize different distortions, namely amplification of invisible contrast, loss of visible contrast, and reversal of visible contrast, in both high-pass and low-pass bands. The problem we address in this work is to remove the dependency on the original HDR image [9], i.e., to transition from full-reference quality assessment to no-reference quality assessment. Extending this work, the distortion maps are used as features to calculate a single quality score for a tone mapped image. In brief, the proposed metric is composed of two stages. In the first stage, we generate distortion maps using the proposed convolutional neural network. In the second stage, we model the distortion map intensities using an AGGD and use the model parameters to predict the overall quality score.

2.1. Dataset Generation and Description

To train the proposed RcNet shown in Fig. 1, we follow the work in [9] to generate ground truth distortion maps from HDR images and their corresponding tone mapped images (a FRIQA approach). Our effort to find datasets which contain both the original HDR images and corresponding tone mapped images yielded only one database [1]. Alternatively, we identified and collected HDR images from different sources, including [10–13]. We do not have tone-mapped versions of the collected HDR images. Hence, we selected five popular tone-mapping algorithms [14–18] and applied them to the collected HDR images. We generated 5,325 tone mapped images with the default parameters of the tone mapping operators using the HDR toolbox [19]. Of the 5,325 images, 3,460 are used for training, 1,000 for testing, and the remaining 865 for validation.
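The dataset bookkeeping described above can be sketched as follows. The file naming scheme and the `make_split` helper are hypothetical, and the five operator names merely stand in for the TMOs of [14–18]; only the counts (1,065 HDR sources × 5 operators = 5,325 images, split 3,460 / 865 / 1,000) come from the text.

```python
import random

# Illustrative stand-ins for the five tone mapping operators [14]-[18].
TMO_NAMES = ["reinhard", "fattal", "ward", "durand", "raman"]

def make_split(n_hdr=1065, seed=0):
    """Shuffle the 5,325 tone mapped images and carve out the
    train / validation / test partitions described in the text."""
    images = [f"hdr{i:04d}_{tmo}.png"
              for i in range(n_hdr) for tmo in TMO_NAMES]
    random.Random(seed).shuffle(images)
    return images[:3460], images[3460:4325], images[4325:]

train, val, test = make_split()
```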

While these tone mapped images act as the input to our CNN, we do not yet have the corresponding labels. To train the CNN, which is explained in detail in the next section, we need ground truth distortion maps (labels). To achieve this, we generated distortion maps using the algorithm proposed by Aydin et al. [9]. The motivation for using this algorithm is that when tone mapping an HDR image, the most likely distortion resulting from the process is contrast distortion, and these distortions are clearly captured by this full reference algorithm. We provided the collected HDR images and the corresponding tone mapped images to this algorithm and generated six distortion maps per tone mapped image. We used these distortion maps as labels to train the CNN.

2.2. Training Procedure

We trained the neural network architecture shown in Fig. 1; the numbers in the architecture indicate the depth of the convolution maps. It accepts the tone mapped image (I) as input and a distortion map (D) as label. As mentioned in the previous section, each tone mapped image has six corresponding distortion maps. Hence, we trained a different model for each distortion map with the same architecture. There are two options for training the network: one is to accept the computational cost and train the network on full-size inputs, whose sizes and aspect ratios may vary. The other is to split the data into patches and train the network on those. We adopted the second option. A natural question that arises with this alternative is the choice of optimal patch size. To answer it, we trained our network with overlapping patches of three different sizes: 64×64, 128×128, and 256×256. A patch size of 128×128 gave the best performance and is chosen in this work. While designing this network, we imposed the following constraints: a) low complexity, b) avoiding over-fitting, and c) reusing initial layer features in the final layers via skip connections. We use pooling, up-sampling, and dropout at appropriate layers in the proposed network to make it more robust.
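The patch-based training option can be sketched as a simple grid of overlapping crops. The 64-pixel stride is an assumption for illustration; the paper states that the patches overlap but not by how much.

```python
import numpy as np

def extract_patches(img, patch=128, stride=64):
    """Extract overlapping square patches on a regular grid.
    With stride < patch the patches overlap, as in the paper."""
    H, W = img.shape[:2]
    patches = []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
    return np.stack(patches)

img = np.zeros((256, 384))   # toy "tone mapped image"
p = extract_patches(img)     # 3 x 5 grid of 128x128 patches
```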

Fig. 1: Network architecture of the proposed RcNet

Fig. 2: Tone-mapped image from test dataset [20].



Fig. 3: (a) - (f) represent ground truth distortion maps and (g) - (l) represent the distortion maps predicted by the proposed RcNet.

2.3. Stage-I: Estimation of Distortion-maps

In this stage, we describe the generation of the quality maps or distortion maps. The distortion maps represent the amount of distortion introduced at each pixel by the tone mapping operation. As described earlier, we generate distortion maps using a convolutional neural network which, after sufficient training, predicts the pixel-wise distortions. After experimenting with several deep network architectures (including auto-encoders), we found the RcNet architecture in Fig. 1 to be best suited to our problem. The first stage of the quality metric is to generate the distortion maps using the trained models. The results of applying the models to an image from the dataset [20], shown in Fig. 2, are shown in Fig. 3.

Fig. 3(a) represents the ground truth distortion map of amplification of contrast in the high-pass band, and Fig. 3(g) represents the corresponding distortion map predicted by the RcNet of Fig. 1. Similarly, Fig. 3(b), Fig. 3(c), Fig. 3(d), Fig. 3(e), and Fig. 3(f) represent the ground truth distortion maps of amplification of contrast in the low-pass band, loss of contrast in the high-pass band, loss of contrast in the low-pass band, reversal of contrast in the high-pass band, and reversal of contrast in the low-pass band, respectively. Fig. 3(h), Fig. 3(i), Fig. 3(j), Fig. 3(k), and Fig. 3(l) represent the corresponding predicted distortion maps. The similarity between the generated and ground truth distortion maps provides a qualitative illustration of our network's ability to generate distortion maps.

2.4. Stage-II: Estimation of Quality Scores

In Stage-I, we predicted quality aware maps that quantify the amount of distortion present at each pixel. To estimate the overall quality score of a tone mapped image, we require quality discerning features. We observed that the distortion maps obtained from the previous stage can be used to quantify the quality of a tone mapped image. We applied a Mean Subtraction and Contrast Normalization (MSCN) [2] transform to the tone mapped image and its corresponding distortion maps. We observed the histograms of the resulting MSCN coefficients, shown in Fig. 4 and Fig. 5. These histograms are uni-modal and are modeled using an Asymmetric Generalized Gaussian Distribution (AGGD). An AGGD with zero mean is given by:

f(x; \gamma, \beta_l, \beta_r) =
\begin{cases}
\frac{\gamma}{(\beta_l + \beta_r)\,\Gamma(\frac{1}{\gamma})} \exp\left(-\left(\frac{-x}{\beta_l}\right)^{\gamma}\right), & x < 0 \\
\frac{\gamma}{(\beta_l + \beta_r)\,\Gamma(\frac{1}{\gamma})} \exp\left(-\left(\frac{x}{\beta_r}\right)^{\gamma}\right), & x \geq 0
\end{cases} \quad (1)

The parameters of an AGGD are estimated using the moment estimation method [21]. Here \gamma > 0 is the shape parameter, and \beta_l > 0, \beta_r > 0 are the left-scale and right-scale parameters, respectively. \Gamma(\cdot) is defined as

\Gamma(a) = \int_0^{\infty} t^{a-1} e^{-t}\, dt, \quad a > 0 \quad (2)

Fig. 4: Log normalized MSCN coefficient distribution of the image shown in Fig. 2.
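Under the definitions in Eqs. (1) and (2), the MSCN transform and a moment-matching AGGD fit can be sketched as follows. This is an illustrative implementation: the uniform 7×7 window is a simplification of the Gaussian window used in [2], and `fit_aggd` follows the standard moment-matching recipe in the spirit of [21].

```python
import numpy as np
from scipy.special import gamma as G
from scipy.ndimage import uniform_filter

def mscn(img, C=1.0):
    """Mean Subtraction and Contrast Normalization: (I - mu) / (sigma + C),
    with local mean/std over a 7x7 window (uniform here for simplicity)."""
    mu = uniform_filter(img, 7)
    sigma = np.sqrt(np.maximum(uniform_filter(img**2, 7) - mu**2, 0))
    return (img - mu) / (sigma + C)

def fit_aggd(x):
    """Moment-matching AGGD fit: returns (gamma, beta_l, beta_r) of Eq. (1)."""
    grid = np.arange(0.2, 10.0, 0.001)                    # candidate shapes
    rho = G(2 / grid)**2 / (G(1 / grid) * G(3 / grid))    # theoretical ratio
    sl = np.sqrt(np.mean(x[x < 0]**2))                    # left-side std
    sr = np.sqrt(np.mean(x[x >= 0]**2))                   # right-side std
    gh = sl / sr
    r = np.mean(np.abs(x))**2 / np.mean(x**2)             # empirical ratio
    rn = r * (gh**3 + 1) * (gh + 1) / (gh**2 + 1)**2
    shape = grid[np.argmin((rho - rn)**2)]                # best-matching gamma
    bl = sl * np.sqrt(G(1 / shape) / G(3 / shape))
    br = sr * np.sqrt(G(1 / shape) / G(3 / shape))
    return shape, bl, br

rng = np.random.default_rng(0)
coeffs = mscn(rng.random((64, 64)))
shape, bl, br = fit_aggd(rng.normal(size=100_000))  # Gaussian: shape near 2
```

For symmetric Gaussian data the fit recovers \gamma \approx 2 with \beta_l \approx \beta_r; skewed MSCN histograms of distorted images yield asymmetric scales, which is what makes these parameters quality-discerning features.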

Fig. 5: Log normalized MSCN coefficient distribution of the distortion maps of the image shown in Fig. 2.

These AGGD model parameters serve as features for our regression model. We appended other statistical quantities (viz., the mean and standard deviation of the tone mapped image and its distortion maps) to the model parameters to compose the final feature vector used for supervised learning. We use support vector regression (SVR) to map these feature vectors to the quality score (mean opinion score (MOS)). We trained the SVR using 80% of the data chosen randomly from our dataset. We then tested the model on the remaining 20% of the data to measure the performance of the proposed approach. We repeated this procedure 100 times, randomly selecting 80% of the data to fit the model and the remaining 20% to test it. The average over all trials is reported as the final performance measure.
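The 100-trial random 80/20 evaluation protocol can be sketched as follows. To keep the sketch dependency-free, a least-squares linear regressor stands in for the SVR; in practice one would swap in an SVR implementation such as sklearn.svm.SVR.

```python
import numpy as np

def eval_protocol(X, y, trials=100, seed=0):
    """Repeat a random 80/20 split `trials` times, fit a regressor on the
    80% and score RMSE on the 20%, then average over all trials."""
    rng = np.random.default_rng(seed)
    n, scores = len(y), []
    for _ in range(trials):
        idx = rng.permutation(n)
        tr, te = idx[:int(0.8 * n)], idx[int(0.8 * n):]
        A = np.c_[X[tr], np.ones(len(tr))]           # bias column
        w, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        pred = np.c_[X[te], np.ones(len(te))] @ w
        scores.append(np.sqrt(np.mean((pred - y[te])**2)))  # RMSE
    return float(np.mean(scores))                    # average over trials

# Toy features/MOS: linear relation plus noise of std 0.1.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, 2.0, 0.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)
avg_rmse = eval_protocol(X, y)
```

Averaging over 100 random splits, as in the text, reduces the variance that any single 80/20 partition would introduce into the reported numbers.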

3. RESULTS AND DISCUSSION

The performance of the proposed approach is evaluated and compared using the following statistical measures: Pearson linear correlation coefficient (PLCC), Spearman rank order correlation coefficient (SROCC), and root mean squared error (RMSE). The performance of the first stage was measured by comparing the predicted distortion maps with the ground truth distortion maps generated by the full reference algorithm. We can infer from Fig. 3 that the ground truth and predicted distortion maps are visually similar.
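The three measures can be computed as follows. This is a straightforward sketch; some IQA studies fit a nonlinear mapping between predictions and MOS before computing PLCC, which is omitted here.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def iqa_measures(pred, mos):
    """PLCC, SROCC, and RMSE between predicted scores and MOS."""
    plcc = pearsonr(pred, mos)[0]     # linear correlation
    srocc = spearmanr(pred, mos)[0]   # rank-order correlation
    rmse = float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(mos))**2)))
    return plcc, srocc, rmse

# Predictions that are a constant offset from MOS: correlations are
# perfect (1.0) while RMSE still reports the absolute error.
plcc, srocc, rmse = iqa_measures([1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0])
```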

Tables 1 and 2 show the performance of the proposed approach on two popular HDR IQA datasets, ESPL-LIVE [22] and Yeganeh et al. [1]. Since ESPL-LIVE contains only tone mapped images and the corresponding quality (MOS) scores, we compare our method on it only with existing no-reference algorithms. As we focus on contrast distortion, we used only the tone mapped images from the ESPL-LIVE dataset [22] for performance comparison. The other dataset [1] contains original HDR images, their tone mapped images, and the corresponding quality (MOS) scores. Therefore, it allows us to evaluate full reference algorithms and include them in the comparison as well. Comparing the performance of various no-reference quality assessment techniques with the proposed metric, we observe that the proposed metric shows consistent and competitive performance. We would also like to highlight that the proposed NRIQA algorithm not only delivers competitive performance but also helps localize distortions using the distortion maps.

Table 1: Performance comparison of various IQA algorithms on the ESPL-LIVE dataset [22].

Metric            PLCC   SROCC  RMSE
BIQI [23]         0.313  0.333  9.427
BRISQUE [24]      0.370  0.340  9.535
BLIINDS-II [25]   0.442  0.412  9.330
DIIVINE [4]       0.530  0.523  8.805
DESIQUE [26]      0.553  0.542  8.577
HIGRADE-1 [8]     0.764  0.728  6.711
HIGRADE-2 [8]     0.794  0.760  6.643
RcNet (ours)      0.707  0.667  7.479

Table 2: Performance comparison of various IQA algorithms on the Yeganeh dataset [1].

Metric            PLCC   SROCC  RMSE
NIQE [2]          0.368  0.259  1.788
BIQI [23]         0.351  0.370  1.800
ILNIQE [3]        0.326  0.314  1.818
HIGRADE-1 [8]     0.423  0.270  1.742
HIGRADE-2 [8]     0.563  0.440  1.589
BRISQUE [24]      0.569  0.476  1.581
BLIINDS-II [25]   0.544  0.508  1.614
TMQI [1]          0.770  0.739  1.225
RcNet (ours)      0.847  0.768  1.029

4. CONCLUSION

The proposed NRIQA metric for tone mapped images has two major outcomes: first, it localizes distortions in a tone mapped image, and second, it predicts the image's perceptual quality score in a reference-free setting. Different types of distortions are visualized through what we call distortion maps, which are learned using CNNs. We have demonstrated that these distortion maps contain quality information that can be used as features to predict the overall quality score of a tone mapped image. We have shown that the proposed NRIQA algorithm performs competitively on two tone mapped IQA datasets. This work can be improved at every stage by selecting and freezing robust hyper-parameters and by using a more efficient network architecture. Further, it is possible to add more distortion discerning features extracted from the distortion maps, and to select regression algorithms better suited to these features for the quality assessment task.

5. REFERENCES

[1] Hojatollah Yeganeh and Zhou Wang, "Objective quality assessment of tone-mapped images," IEEE Transactions on Image Processing, vol. 22, no. 2, pp. 657–667, 2013.


[2] Anish Mittal, Rajiv Soundararajan, and Alan C Bovik, "Making a completely blind image quality analyzer," IEEE SPL, vol. 20, no. 3, pp. 209–212, 2013.

[3] Lin Zhang, Lei Zhang, and Alan C Bovik, "A feature-enriched completely blind image quality evaluator," IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2579–2591, 2015.

[4] Anush Krishna Moorthy and Alan Conrad Bovik, "Blind image quality assessment: From natural scene statistics to perceptual quality," IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3350–3364, 2011.

[5] Ke Gu, Guangtao Zhai, Xiaokang Yang, and Wenjun Zhang, "Deep learning network for blind image quality assessment," in IEEE ICIP, 2014, pp. 511–515.

[6] Le Kang, Peng Ye, Yi Li, and David Doermann, "Convolutional neural networks for no-reference image quality assessment," in IEEE CVPR, 2014, pp. 1733–1740.

[7] Ke Gu, Shiqi Wang, Guangtao Zhai, Siwei Ma, Xiaokang Yang, Weisi Lin, Wenjun Zhang, and Wen Gao, "Blind quality assessment of tone-mapped images via analysis of information, naturalness, and structure," IEEE TMM, vol. 18, no. 3, pp. 432–443, 2016.

[8] Debarati Kundu, Deepti Ghadiyaram, Alan C Bovik, and Brian L Evans, "No-reference quality assessment of tone-mapped HDR pictures," IEEE Transactions on Image Processing, vol. 26, no. 6, pp. 2957–2971, 2017.

[9] Tunc Ozan Aydin, Rafał Mantiuk, Karol Myszkowski, and Hans-Peter Seidel, "Dynamic range independent image quality assessment," ACM Transactions on Graphics (TOG), vol. 27, no. 3, p. 69, 2008.

[10] Feng Xiao, Jeffrey M DiCarlo, Peter B Catrysse, and Brian A Wandell, "High dynamic range imaging of natural scenes," in CIC. IS&T, 2002, pp. 337–342.

[11] Joel Kronander, Stefan Gustavson, Gerhard Bonnet, Anders Ynnerman, and Jonas Unger, "A unified framework for multi-sensor HDR video reconstruction," SPIC, vol. 29, no. 2, pp. 203–215, 2014.

[12] Mark D Fairchild, "The HDR photographic survey," in CIC. IS&T, 2007, pp. 233–238.

[13] Maryam Azimi, Amin Banitalebi-Dehkordi, Yuanyuan Dong, Mahsa T Pourazad, and Panos Nasiopoulos, "Evaluating the performance of existing full-reference quality metrics on high dynamic range (HDR) video content," in ICMSP, 2014.

[14] Erik Reinhard, Michael Stark, Peter Shirley, and James Ferwerda, "Photographic tone reproduction for digital images," ACM Transactions on Graphics (TOG), vol. 21, no. 3, pp. 267–276, 2002.

[15] Raanan Fattal, Dani Lischinski, and Michael Werman, "Gradient domain high dynamic range compression," in ACM TOG. ACM, 2002, vol. 21, pp. 249–256.

[16] Gregory Ward Larson, Holly Rushmeier, and Christine Piatko, "A visibility matching tone reproduction operator for high dynamic range scenes," IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 4, pp. 291–306, 1997.

[17] Fredo Durand and Julie Dorsey, "Fast bilateral filtering for the display of high-dynamic-range images," in ACM Transactions on Graphics (TOG). ACM, 2002, vol. 21, pp. 257–266.

[18] Shanmuganathan Raman and Subhasis Chaudhuri, "Bilateral filter based compositing for variable exposure photography," in Eurographics (Short Papers), 2009, pp. 1–4.

[19] Francesco Banterle, Alessandro Artusi, Kurt Debattista, and Alan Chalmers, Advanced High Dynamic Range Imaging (2nd Edition), AK Peters (CRC Press), Natick, MA, USA, July 2017.

[20] Kanita Karaduzovic-Hadziabdic, Telalovic J Hasic, and Rafal Konrad Mantiuk, "Multi-exposure image stacks for testing HDR deghosting methods," Computers & Graphics, 2017.

[21] Nour-Eddine Lasmar, Youssef Stitou, and Yannick Berthoumieu, "Multiscale skewed heavy tailed model for texture analysis," in IEEE ICIP, 2009, pp. 2281–2284.

[22] D Kundu, D Ghadiyaram, A C Bovik, and B L Evans, "Large-scale crowdsourced study for high dynamic range images," IEEE Transactions on Image Processing, vol. 26, no. 10, 2017.

[23] Anush Krishna Moorthy and Alan Conrad Bovik, "A two-step framework for constructing blind image quality indices," IEEE SPL, vol. 17, no. 5, pp. 513–516, 2010.

[24] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.

[25] Michele A Saad, Alan C Bovik, and Christophe Charrier, "Blind image quality assessment: A natural scene statistics approach in the DCT domain," IEEE TIP, vol. 21, no. 8, pp. 3339–3352, 2012.

[26] Yi Zhang and Damon M Chandler, "No-reference image quality assessment based on log-derivative statistics of natural scenes," Journal of Electronic Imaging, vol. 22, no. 4, p. 043025, 2013.