Published online: 1 Jan 2019 · Type: Research Article · Open Access
Volume 30, Issue 1 (2019), pp. 53–72
Saliency detection has been studied extensively in recent years, and the number of computational models keeps growing. Starting from the assumption that combining the spatial and temporal information of an input video frame yields better saliency results than using either alone, we propose a spatio-temporal saliency model for detecting salient objects in videos. First, spatial saliency is measured at the patch level by fusing local contrasts with spatial priors to label each patch as foreground or background. Then, a newly proposed motion-distinctiveness feature and a gradient-flow-field measure are used to obtain temporal saliency maps. Finally, the spatial and temporal saliency maps are fused into one final saliency map.
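The final step of the pipeline above can be sketched in a few lines. The abstract does not specify the fusion rule, so the convex combination and normalisation below (the function `fuse_saliency` and its weight `alpha`) are hypothetical placeholders, not the paper's actual scheme; they only illustrate how a spatial and a temporal saliency map of the same size can be merged into one map.

```python
import numpy as np

def fuse_saliency(spatial, temporal, alpha=0.5):
    """Fuse spatial and temporal saliency maps into one final map.

    NOTE: the convex combination and min-max normalisation used
    here are illustrative assumptions; the paper's fusion rule is
    not given in the abstract.
    """
    # Weighted combination of the two cues.
    fused = alpha * spatial + (1.0 - alpha) * temporal
    # Min-max normalise to [0, 1] so the result is a valid saliency map.
    rng = fused.max() - fused.min()
    if rng == 0:
        return np.zeros_like(fused)
    return (fused - fused.min()) / rng

# Toy example: two 2x2 saliency maps for one frame.
spatial_map = np.array([[0.2, 0.8], [0.4, 0.6]])
temporal_map = np.array([[0.6, 0.4], [0.8, 0.2]])
final_map = fuse_saliency(spatial_map, temporal_map, alpha=0.5)
```

In practice the weight between the two cues would be tuned per dataset or derived from the reliability of each map rather than fixed.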
Experiments on the challenging SegTrack v2 and Fukuchi benchmark datasets show that our method significantly outperforms state-of-the-art methods.