PATCH-IQ: A patch based learning framework for blind image quality assessment

Redzuan Abdul Manap, Ling Shao, Alejandro F. Frangi

Research output: Contribution to journal › Article › peer-review

13 Citations (Scopus)
9 Downloads (Pure)


Most well-known blind image quality assessment (BIQA) models follow a two-stage framework in which various types of features are first extracted and then used as input to a regressor. The regression algorithm models human perceptual measures based on a training set of distorted images. However, this approach requires an intensive training phase to optimise the regression parameters. In this paper, we overcome this limitation by proposing an alternative BIQA model that predicts image quality using nearest neighbour methods, which have virtually zero training cost. The model, termed PATCH-based blind Image Quality assessment (PATCH-IQ), has a learning framework that operates at the patch level. This enables PATCH-IQ to provide not only a global image quality estimate but also local, patch-level quality estimates. Based on the assumption that the perceived quality of a distorted image is best predicted by features drawn from images with the same distortion class, PATCH-IQ also introduces a distortion identification stage in its framework. This enables PATCH-IQ to identify the distortion affecting the image, a property that can be useful for further local processing stages. PATCH-IQ is evaluated on the standard IQA databases, and the scores it provides are highly correlated with human perception of image quality. It also delivers competitive prediction accuracy and computational performance relative to other state-of-the-art BIQA models.
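The pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the feature extraction here is just raw pixel patches, and the function names (`extract_patches`, `identify_distortion`, `knn_quality`), patch size, and `k` are illustrative assumptions. It shows the general idea of patch-level nearest-neighbour distortion classification and quality regression, where the global score is the average of the local (per-patch) estimates.

```python
import numpy as np


def extract_patches(image, patch_size=8, stride=8):
    """Split a 2-D grayscale image into flattened patches (raw pixels
    stand in for the paper's spatial-domain features)."""
    h, w = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(image[i:i + patch_size, j:j + patch_size].ravel())
    return np.asarray(patches)


def identify_distortion(patch_features, train_features, train_labels, k=3):
    """Distortion-identification stage (sketch): majority vote over the
    k nearest training patches of every test patch."""
    votes = []
    for f in patch_features:
        dists = np.linalg.norm(train_features - f, axis=1)
        votes.extend(train_labels[np.argsort(dists)[:k]])
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]


def knn_quality(patch_features, train_features, train_scores, k=3):
    """Quality-estimation stage (sketch): each patch's local quality is
    the mean score of its k nearest training patches; the global image
    quality is the average of the local estimates."""
    local = np.empty(len(patch_features))
    for n, f in enumerate(patch_features):
        dists = np.linalg.norm(train_features - f, axis=1)
        local[n] = train_scores[np.argsort(dists)[:k]].mean()
    return local.mean(), local
```

In a fuller version of this sketch, the class returned by `identify_distortion` would be used to restrict `train_features` and `train_scores` to patches of that distortion class before regression, reflecting the paper's assumption that same-distortion neighbours predict quality best.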
Original language: English
Pages (from-to): 329–344
Number of pages: 16
Journal: Information Sciences
Early online date: 26 Aug 2017
Publication status: Published - Dec 2017


  • Image quality assessment
  • Blind image quality assessment
  • Interest point detection
  • Spatial domain features
  • Nearest neighbour classification and regression
