## Automatic Image Quality Assessment in Python

Image quality is a notion that depends strongly on the observer and on the conditions under which an image is viewed; it is therefore a highly subjective topic. Image quality assessment aims to quantitatively represent the human perception of quality. These metrics are commonly used to analyze the performance of algorithms in different fields of computer vision such as image compression, image transmission, and image processing [1].

Image quality assessment (IQA) is mainly divided into two areas of research: (1) reference-based evaluation and (2) no-reference evaluation. The main difference is that reference-based methods depend on a high-quality image, used as a reference, against which the distorted image is compared. An example of a reference-based metric is the Structural Similarity Index (SSIM) [2].
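Concretely, here is a minimal sketch of a reference-based evaluation using scikit-image's `structural_similarity` (documented in detail later in this article); the image and the noise are synthetic stand-ins:

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((64, 64))  # stand-in for a pristine reference image
distorted = np.clip(reference + 0.1 * rng.standard_normal((64, 64)), 0, 1)

# SSIM is 1.0 for identical images and decreases as distortion grows.
score = structural_similarity(reference, distorted, data_range=1.0)
print(f"SSIM: {score:.3f}")
```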

## No-reference Image Quality Assessment

No-reference image quality assessment does not require a base image to evaluate image quality; the only input the algorithm receives is the distorted image whose quality is being assessed.

Blind methods mostly comprise two steps. The first step computes features that describe the image's structure, and the second step learns a mapping from those features to human opinion scores. TID2008 is a well-known database created following a methodology for collecting human opinion scores on distorted versions of reference images [3]. It is widely used to compare the performance of IQA algorithms.
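As an illustration of the first step, the sketch below computes BRISQUE-style mean-subtracted, contrast-normalized (MSCN) coefficients and summarizes them with two simple statistics. It is only a sketch: a real method would extract richer features and train a regressor (the second step) against human opinion scores, and the function name and summary statistics here are illustrative, not from any specific library.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_features(image, sigma=7 / 6):
    """Illustrative step 1 of a blind IQA pipeline: mean-subtracted,
    contrast-normalized (MSCN) coefficients, as used by BRISQUE-style methods."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                    # local mean
    var = gaussian_filter(image ** 2, sigma) - mu ** 2    # local variance
    mscn = (image - mu) / (np.sqrt(np.abs(var)) + 1.0)    # normalize local contrast
    # Step 2 would feed statistics like these to a learned regressor.
    return np.array([mscn.mean(), mscn.var()])

features = mscn_features(np.random.default_rng(0).random((64, 64)))
```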

For an implementation of a deep learning method using TensorFlow 2.0, see:

## weizhou-geek/Image-Quality-Assessment-Benchmark


A list of state-of-the-art image quality assessment algorithms and databases collected by Wei Zhou. If you find that important resources are not included, please feel free to contact me.

According to the availability of the whole or partial original undistorted image (often called reference image), image quality assessment (IQA) algorithms are typically divided into three categories as follows: 1) full-reference IQA (FR-IQA), 2) reduced-reference IQA (RR-IQA), and 3) no-reference IQA (NR-IQA).

**Traditional FR-IQA**

Image Quality Assessment: from Error Visibility to Structural Similarity (SSIM), IEEE TIP, 2004, Wang Z et al. [PDF] [Code]

Multi-scale Structural Similarity for Image Quality Assessment (MS-SSIM), IEEE Asilomar Conference on Signals, Systems and Computers, 2003, Wang Z et al. [PDF] [Code]

FSIM: A Feature Similarity Index for Image Quality Assessment (FSIM), IEEE TIP, 2011, Zhang L et al. [PDF] [Code]

Information Content Weighting for Perceptual Image Quality Assessment (IW-SSIM), IEEE TIP, 2010, Wang Z et al. [PDF] [Code]

An Information Fidelity Criterion for Image Quality Assessment Using Natural Scene Statistics (IFC), IEEE TIP, 2005, Sheikh H R et al. [PDF] [Code]

Image Quality Assessment Based on A Degradation Model (NQM), IEEE TIP, 2000, Damera-Venkata N et al. [PDF] [Code]

VSNR: A Wavelet-Based Visual Signal-to-Noise Ratio for Natural Images (VSNR), IEEE TIP, 2007, Chandler D M et al. [PDF] [Code]

**Traditional RR-IQA**

Reduced-reference Image Quality Assessment in Free-energy Principle and Sparse Representation (FSI), IEEE TMM, 2017, Liu Y et al. [PDF] [Code]

**Traditional NR-IQA**

No-reference Image Quality Assessment in the Spatial Domain (BRISQUE), IEEE TIP, 2012, Mittal A et al. [PDF] [Code]

A Feature-Enriched Completely Blind Image Quality Evaluator (IL-NIQE), IEEE TIP, 2015, Zhang L et al. [PDF] [Code]

Perceptual Quality Prediction on Authentically Distorted Images Using A Bag of Features Approach (FRIQUEE), Journal of Vision, 2017, Ghadiyaram D et al. [PDF] [Code]

A Novel Blind Image Quality Assessment Method Based on Refined Natural Scene Statistics (NBIQA), IEEE ICIP, 2019, Ou F Z et al. [PDF] [Code]

Blind Quality Assessment of Compressed Images via Pseudo Structural Similarity (PSS), IEEE ICME, 2016, Min X et al. [PDF] [Code]

Blind Image Quality Estimation via Distortion Aggravation (BMPRI), IEEE TBC, 2018, Min X et al. [PDF] [Code]

**Deep Learning Based Approaches**

**(1) FR-IQA**

Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment, IEEE TIP, 2017, Bosse S et al. [PDF] [Code]

Deep Learning of Human Visual Sensitivity in Image Quality Assessment Framework (DeepQA), IEEE CVPR, 2017, Kim J et al. [PDF] [Code]

PieAPP: Perceptual Image-Error Assessment through Pairwise Preference (PieAPP), IEEE CVPR, 2018, Prashnani E et al. [PDF] [Code] [Project]

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric (LPIPS), IEEE CVPR, 2018, Zhang R et al. [PDF] [Code]

Image Quality Assessment: Unifying Structure and Texture Similarity (DISTS), IEEE TPAMI, 2021, Ding K et al. [PDF] [Code]

**(2) NR-IQA**

Convolutional Neural Networks for No-Reference Image Quality Assessment (CNNIQA), IEEE CVPR, 2014, Kang L et al. [PDF] [Code]

Visual Importance and Distortion Guided Deep Image Quality Assessment Framework (VIDGIQA), IEEE TMM, 2017, Guan J et al. [PDF] [Code]

A Deep Neural Network for Image Quality Assessment (gbIQA), IEEE ICIP, 2016, Bosse S et al. [PDF] [Code]

SGDNet: An End-to-End Saliency-Guided Deep Neural Network for No-Reference Image Quality Assessment (SGDNet), ACM MM, 2019, Yang S et al. [PDF] [Code]

Fully Deep Blind Image Quality Predictor (BIECON), IEEE JSTSP, 2016, Kim J et al. [PDF] [Code]

Saliency-Based Deep Convolutional Neural Network for No-Reference Image Quality Assessment (Saliency-CNN), Multimedia Tools and Applications, 2018, Jia S et al. [PDF] [Code]

Hallucinated-IQA: No-Reference Image Quality Assessment via Adversarial Learning (HIQA), IEEE CVPR, 2018, Lin K Y et al. [PDF] [Code]

Simultaneous Estimation of Image Quality and Distortion via Multi-Task Convolutional Neural Networks, IEEE ICIP, 2015, Kang L et al. [PDF] [Code]

Blind Image Quality Assessment Using A Deep Bilinear Convolutional Neural Network (DBCNN), IEEE TCSVT, 2018, Zhang W et al. [PDF] [Code]

End-to-End Blind Image Quality Assessment Using Deep Neural Networks (MEON), IEEE TIP, 2017, Ma K et al. [PDF] [Code]

dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs (dipIQ), IEEE TIP, 2017, Ma K et al. [PDF] [Code]

## Module: skimage.metrics

- `adapted_rand_error`: Compute Adapted Rand error as defined by the SNEMI3D contest.
- `contingency_table`: Return the contingency table for all regions in matched segmentations.
- `hausdorff_distance`: Calculate the Hausdorff distance between nonzero elements of given images.
- `hausdorff_pair`: Return the pair of points that are Hausdorff distance apart between nonzero elements of given images.
- `mean_squared_error`: Compute the mean-squared error between two images.
- `normalized_mutual_information`: Compute the normalized mutual information (NMI).
- `normalized_root_mse`: Compute the normalized root mean-squared error (NRMSE) between two images.
- `peak_signal_noise_ratio`: Compute the peak signal to noise ratio (PSNR) for an image.
- `structural_similarity`: Compute the mean structural similarity index between two images.
- `variation_of_information`: Return symmetric conditional entropies associated with the VI.

## adapted_rand_error

Compute Adapted Rand error as defined by the SNEMI3D contest. [1]

Parameters:

**image_true** : ndarray of int
  Ground-truth label image, same shape as image_test.

**image_test** : ndarray of int
  Test label image.

**table** : scipy.sparse array in csr format, optional
  A contingency table built with skimage.metrics.contingency_table. If None, it will be computed on the fly.

**ignore_labels** : sequence of int, optional
  Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score.

Returns:

**are** : float
  The adapted Rand error; equal to \(1 - \frac{2pr}{p + r}\), where \(p\) and \(r\) are the precision and recall described below.

**prec** : float
  The adapted Rand precision: the number of pairs of pixels that have the same label in the test label image *and* in the true image, divided by the number in the test image.

**rec** : float
  The adapted Rand recall: the number of pairs of pixels that have the same label in the test label image *and* in the true image, divided by the number in the true image.

Pixels with label 0 in the true segmentation are ignored in the score.

Arganda-Carreras I, Turaga SC, Berger DR, et al. (2015) Crowdsourcing the creation of image segmentation algorithms for connectomics. Front. Neuroanat. 9:142. DOI:10.3389/fnana.2015.00142


## contingency_table

Return the contingency table for all regions in matched segmentations.

Parameters:

**im_true** : ndarray of int
  Ground-truth label image, same shape as im_test.

**im_test** : ndarray of int
  Test label image.

**ignore_labels** : sequence of int, optional
  Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score.

**normalize** : bool
  Determines if the contingency table is normalized by pixel count.

Returns:

**cont** : scipy.sparse.csr_matrix
  A contingency table. `cont[i, j]` will equal the number of voxels labeled `i` in im_true and `j` in im_test.
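A minimal sketch on tiny label images:

```python
import numpy as np
from skimage.metrics import contingency_table

im_true = np.array([[1, 1],
                    [2, 2]])
im_test = np.array([[1, 2],
                    [2, 2]])

# cont[i, j] counts pixels labeled i in im_true and j in im_test.
cont = contingency_table(im_true, im_test)
print(cont.toarray())
```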

## hausdorff_distance

Calculate the Hausdorff distance between nonzero elements of given images.

The Hausdorff distance [1] is the maximum distance between any point on image0 and its nearest point on image1 , and vice-versa.

Parameters:

**image0, image1** : ndarray
  Arrays where True represents a point that is included in a set of points. Both arrays must have the same shape.

Returns:

**distance** : float
  The Hausdorff distance between coordinates of nonzero pixels in image0 and image1, using the Euclidean distance.
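A minimal sketch with one point per image:

```python
import numpy as np
from skimage.metrics import hausdorff_distance

image0 = np.zeros((10, 10), dtype=bool)
image1 = np.zeros((10, 10), dtype=bool)
image0[2, 2] = True   # one point per image,
image1[2, 6] = True   # four pixels apart along a row

distance = hausdorff_distance(image0, image1)
print(distance)  # 4.0
```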


## hausdorff_pair

Returns pair of points that are Hausdorff distance apart between nonzero elements of given images.

The Hausdorff distance [1] is the maximum distance between any point on image0 and its nearest point on image1 , and vice-versa.

Parameters:

**image0, image1** : ndarray
  Arrays where True represents a point that is included in a set of points. Both arrays must have the same shape.

Returns:

**point_a, point_b** : array
  A pair of points that have Hausdorff distance between them.


## mean_squared_error

Compute the mean-squared error between two images.

Parameters:

**image0, image1** : ndarray
  Images. Any dimensionality, must have same shape.

Returns:

**mse** : float
  The mean-squared error (MSE) metric.

Changed in version 0.16: This function was renamed from skimage.measure.compare_mse to skimage.metrics.mean_squared_error .
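A minimal sketch, with a shift chosen so the expected value is exact:

```python
import numpy as np
from skimage.metrics import mean_squared_error

image0 = np.array([[0.0, 1.0],
                   [2.0, 3.0]])
image1 = image0 + 0.5   # every pixel off by exactly 0.5

# MSE is the mean of squared pixel differences: 0.5 ** 2 == 0.25.
mse = mean_squared_error(image0, image1)
print(mse)  # 0.25
```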


## normalized_mutual_information

Compute the normalized mutual information (NMI).

The normalized mutual information \(Y\) of \(A\) and \(B\) is given by:

\[ Y(A, B) = \frac{H(A) + H(B)}{H(A, B)} \]

where \(H(X) := -\sum_{x} p_X(x) \log p_X(x)\) is the entropy.

It was proposed to be useful in registering images by Colin Studholme and colleagues [1]. It ranges from 1 (perfectly uncorrelated image values) to 2 (perfectly correlated image values, whether positively or negatively).

Parameters:

**image0, image1** : ndarray
  Images to be compared. The two input images must have the same number of dimensions.

**bins** : int or sequence of int, optional
  The number of bins along each axis of the joint histogram.

Returns:

**nmi** : float
  The normalized mutual information between the two arrays, computed at the granularity given by bins. Higher NMI implies more similar input images.

Raises:

**ValueError**
  If the images don't have the same number of dimensions.

Note that if the two input images are not the same shape, the smaller image is padded with zeros.

C. Studholme, D.L.G. Hill, & D.J. Hawkes (1999). An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition 32(1):71-86 DOI:10.1016/S0031-3203(98)00091-0

## normalized_root_mse

Compute the normalized root mean-squared error (NRMSE) between two images.

Parameters:

**image_true** : ndarray
  Ground-truth image, same shape as image_test.

**image_test** : ndarray
  Test image.

**normalization** : {'euclidean', 'min-max', 'mean'}, optional
  Controls the normalization method to use in the denominator of the NRMSE. There is no standard method of normalization across the literature [1]. The methods available here are as follows:

  - 'euclidean' : normalize by the averaged Euclidean norm of image_true:

    \[ NRMSE = \frac{RMSE \times \sqrt{N}}{\| image\_true \|} \]

    where \(\| \cdot \|\) denotes the Frobenius norm and \(N\) = image_true.size. This result is equivalent to:

    \[ NRMSE = \frac{\| image\_true - image\_test \|}{\| image\_true \|} \]

  - 'min-max' : normalize by the intensity range of image_true.
  - 'mean' : normalize by the mean of image_true.

Returns:

**nrmse** : float
  The NRMSE metric.

Changed in version 0.16: This function was renamed from skimage.measure.compare_nrmse to skimage.metrics.normalized_root_mse .
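A minimal sketch checking the 'euclidean' normalization against the formula above (the shift is chosen so the RMSE is exactly 1):

```python
import numpy as np
from skimage.metrics import normalized_root_mse

image_true = np.array([[1.0, 2.0],
                       [3.0, 4.0]])
image_test = image_true + 1.0   # RMSE is exactly 1.0

# 'euclidean' divides the RMSE by sqrt(mean(image_true ** 2)).
nrmse = normalized_root_mse(image_true, image_test, normalization='euclidean')
expected = 1.0 / np.sqrt(np.mean(image_true ** 2))
```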

## peak_signal_noise_ratio

Compute the peak signal to noise ratio (PSNR) for an image.

Parameters:

**image_true** : ndarray
  Ground-truth image, same shape as image_test.

**image_test** : ndarray
  Test image.

**data_range** : int, optional
  The data range of the input image (distance between minimum and maximum possible values). By default, this is estimated from the image data-type.

Returns:

**psnr** : float
  The PSNR metric.

Changed in version 0.16: This function was renamed from skimage.measure.compare_psnr to skimage.metrics.peak_signal_noise_ratio .
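A minimal sketch on a synthetic 8-bit image with mild uniform noise:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio

rng = np.random.default_rng(0)
image_true = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-10, 11, size=(64, 64))
image_test = np.clip(image_true.astype(int) + noise, 0, 255).astype(np.uint8)

# Higher PSNR means less distortion; mild noise keeps it comfortably high.
psnr = peak_signal_noise_ratio(image_true, image_test, data_range=255)
```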


## structural_similarity

Compute the mean structural similarity index between two images.

Parameters:

**im1, im2** : ndarray
  Images. Any dimensionality with same shape.

**win_size** : int or None, optional
  The side-length of the sliding window used in comparison. Must be an odd value. If gaussian_weights is True, this is ignored and the window size will depend on sigma.

**gradient** : bool, optional
  If True, also return the gradient with respect to im2.

**data_range** : float, optional
  The data range of the input image (distance between minimum and maximum possible values). By default, this is estimated from the image data-type.

**channel_axis** : int or None, optional
  If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

  New in version 0.19: channel_axis was added in 0.19.

**multichannel** : bool, optional (deprecated)
  If True, treat the last dimension of the array as channels. Similarity calculations are done independently for each channel then averaged. This argument is deprecated: specify channel_axis instead.

**gaussian_weights** : bool, optional
  If True, each patch has its mean and variance spatially weighted by a normalized Gaussian kernel of width sigma=1.5.

**full** : bool, optional
  If True, also return the full structural similarity image.

Returns:

**mssim** : float
  The mean structural similarity index over the image.

**grad** : ndarray
  The gradient of the structural similarity between im1 and im2 [2]. This is only returned if gradient is set to True.

**S** : ndarray
  The full SSIM image. This is only returned if full is set to True.

Other Parameters:

**use_sample_covariance** : bool
  If True, normalize covariances by N-1 rather than N, where N is the number of pixels within the sliding window.

**K1** : float
  Algorithm parameter, K1 (small constant, see [1]).

**K2** : float
  Algorithm parameter, K2 (small constant, see [1]).

**sigma** : float
  Standard deviation for the Gaussian when gaussian_weights is True.

**multichannel** : DEPRECATED
  Deprecated since version 0.19, in favor of channel_axis.

To match the implementation of Wang et al. [1], set gaussian_weights to True, sigma to 1.5, and use_sample_covariance to False.

Changed in version 0.16: This function was renamed from skimage.measure.compare_ssim to skimage.metrics.structural_similarity .

Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13, 600-612. https://ece.uwaterloo.ca/

Avanaki, A. N. (2009). Exact global histogram specification optimized for structural similarity. Optical Review, 16, 613-621. arXiv:0901.0065 DOI:10.1007/s10043-009-0119-z
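Putting the notes above together, this sketch applies the Wang et al. parameter settings and also retrieves the per-pixel SSIM map via `full=True` (the images are synthetic stand-ins):

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
im1 = rng.random((128, 128))
im2 = np.clip(im1 + 0.05 * rng.standard_normal((128, 128)), 0, 1)

# Settings recommended above to match the original Wang et al. implementation.
mssim, S = structural_similarity(
    im1, im2,
    data_range=1.0,
    gaussian_weights=True,
    sigma=1.5,
    use_sample_covariance=False,
    full=True,   # also return the full per-pixel SSIM image
)
```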


## variation_of_information

Return symmetric conditional entropies associated with the VI. [1]

The variation of information is defined as VI(X,Y) = H(X|Y) + H(Y|X). If X is the ground-truth segmentation, then H(X|Y) can be interpreted as the amount of under-segmentation and H(Y|X) as the amount of over-segmentation. In other words, a perfect over-segmentation will have H(X|Y)=0 and a perfect under-segmentation will have H(Y|X)=0.

Parameters:

**image0, image1** : ndarray of int
  Label images / segmentations, must have same shape.

**table** : scipy.sparse array in csr format, optional
  A contingency table built with skimage.metrics.contingency_table. If None, it will be computed on the fly. If given, the entropies will be computed from this table and any images will be ignored.

**ignore_labels** : sequence of int, optional
  Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score.

Returns:

**vi** : ndarray of float, shape (2,)
  The conditional entropies of image1|image0 and image0|image1.
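A minimal sketch of the over-segmentation case described above, with image0 as the ground truth:

```python
import numpy as np
from skimage.metrics import variation_of_information

image_true = np.array([[1, 1, 2, 2],
                       [1, 1, 2, 2]])
# A pure over-segmentation: one true region is split into two test regions.
image_test = np.array([[1, 3, 2, 2],
                       [1, 3, 2, 2]])

# Returned in the order (H(image1|image0), H(image0|image1)),
# i.e. (over-segmentation amount, under-segmentation amount) here.
h_over, h_under = variation_of_information(image_true, image_test)
```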