Robust Stereoscopic Crosstalk Prediction

Jianbing Shen, Yan Zhang, Zhiyuan Liang, Chang Liu, Hanqiu Sun, Xiaopeng Hao, Jianhong Liu, Jian Yang, Ling Shao

Research output: Contribution to journal › Article › peer-review


Abstract

We propose a new metric that predicts perceived crosstalk from the original images alone, rather than from both the original and ghosted images. The proposed metrics are based on color information. First, we extract a disparity map, a color difference map, and a color contrast map from the original image pairs. We then use these maps to construct two new metrics, Vdispc and Vdlogc. Vdispc captures the effect of the disparity map and the color difference map, while Vdlogc addresses the influence of the color contrast map. Prediction performance is evaluated on various types of stereoscopic crosstalk images. By combining Vdispc and Vdlogc, we propose a new metric, Vpdlc, which achieves a higher correlation with subjective crosstalk scores. Experimental results show that the new metrics outperform previous methods, which indicates that color information is a key factor in visible crosstalk prediction. Furthermore, we construct a new data set to evaluate our metrics.
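The abstract does not give the formulas behind Vdispc, Vdlogc, or Vpdlc, but the overall pipeline (extract per-pixel maps from the stereo pair, then pool them into a scalar score) can be sketched. The snippet below is a minimal illustration, not the paper's method: `color_difference_map`, `color_contrast_map`, `crosstalk_score`, and the pooling weights are all hypothetical stand-ins, and the disparity-map term is omitted.

```python
import numpy as np

def color_difference_map(left, right):
    """Per-pixel Euclidean color difference between the two views
    (a stand-in for the paper's color difference map)."""
    return np.sqrt(((left.astype(float) - right.astype(float)) ** 2).sum(axis=-1))

def color_contrast_map(img):
    """Simple local contrast: gradient magnitude of the luminance channel
    (a stand-in for the paper's color contrast map)."""
    lum = img.astype(float).mean(axis=-1)
    gy, gx = np.gradient(lum)
    return np.hypot(gx, gy)

def crosstalk_score(left, right, w_diff=0.5, w_contrast=0.5):
    """Hypothetical pooled score: a weighted mean of the two maps.
    The paper's Vdispc/Vdlogc also incorporate a disparity map,
    which is omitted from this sketch."""
    diff = color_difference_map(left, right)
    contrast = color_contrast_map(left)
    return w_diff * diff.mean() + w_contrast * contrast.mean()

# Toy stereo pair: the right view is a horizontally shifted copy
# of the left view (a crude 4-pixel disparity).
rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(64, 64, 3))
right = np.roll(left, shift=4, axis=1)
print(crosstalk_score(left, right))
```

A pair of identical views yields a zero color-difference term, so its score is lower than that of the shifted pair, mirroring the intuition that larger inter-view color differences make crosstalk more visible.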
Original language: English
Pages (from-to): 1158-1168
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 28
Issue number: 5
Early online date: 28 Dec 2016
DOIs
Publication status: Published - 1 May 2018
