OR-PCA with Dynamic Feature Selection for Robust Background Subtraction

Sajid Javed
School of Computer Science and Engineering, Kyungpook National University, 80 Daehak-ro, Buk-go, Daegu, 702-701, Republic of Korea
[email protected]

Andrews Sobral
Laboratoire L3I, Université de La Rochelle, 17000, France
[email protected]

Thierry Bouwmans
Laboratoire MIA, Université de La Rochelle, 17000, France
[email protected]

Soon Ki Jung
School of Computer Science and Engineering, Kyungpook National University, 80 Daehak-ro, Buk-go, Daegu, 702-701, Republic of Korea
[email protected]

ABSTRACT

Background modeling and foreground object detection are the first steps in a visual surveillance system. The task becomes more difficult when the background scene contains significant variations, such as water surfaces, waving trees, and sudden illumination changes. Recently, subspace learning models such as Robust Principal Component Analysis (RPCA) have provided an effective framework for separating moving objects from stationary scenes. However, because of its batch optimization process, a large number of frames must be processed together as high-dimensional data, so traditional RPCA-based approaches suffer from huge computational complexity and memory problems. In contrast, Online Robust PCA (OR-PCA) is able to process such high-dimensional data in a stochastic manner: it processes one frame per time instance and updates the subspace basis accordingly when a new frame arrives. However, relying on intensity alone, the sparse component of OR-PCA is not always robust enough to handle the various background modeling challenges. As a consequence, the system shows very weak performance, which is not desirable for real applications. To handle these challenges, this paper presents a multi-feature based OR-PCA scheme. The multi-feature model builds a robust low-rank background model of the scene. In addition, a feature selection process is designed to dynamically select a useful set of features frame by frame, according to the weighted sum of total features. Experimental results on challenging datasets such as Wallflower, I2R and BMC 2012 show that the proposed scheme outperforms state-of-the-art approaches on the background subtraction task.

Categories and Subject Descriptors
I.4.9 [Image Processing and Computer Vision]: Applications

General Terms
System, Algorithm

Keywords
Multiple features, Online Robust-PCA, Feature selection, Foreground detection, Background modeling

1. INTRODUCTION

Separating moving objects from a video sequence is the first step in many computer vision and image processing applications. This pre-processing step isolates the moving objects, called the "foreground", from the static scene, called the "background". However, it becomes a genuinely hard task when the scene undergoes sudden illumination changes or geometrical changes such as waving trees, water surfaces, etc. [6]. Many algorithms have been developed to tackle the challenging problems of background subtraction (also known as foreground detection) [6], [5]. Among them, approaches based on Robust Principal Component Analysis (RPCA) provide an effective framework for separating foreground objects from highly dynamic background scenes. Excellent surveys on background subtraction via RPCA can be found in [1]. Although the RPCA-based approach to background subtraction has attracted a lot of attention, it currently faces some limitations. First, the algorithm relies on batch optimization: in order to decompose an input matrix A into a low-rank matrix L and a sparse component S, a chunk of samples is required

∗Prof. Jung is the corresponding author.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SAC’15 April 13-17, 2015, Salamanca, Spain. Copyright 2015 ACM 978-1-4503-3196-8/15/04...$15.00. http://dx.doi.org/10.1145/2695664.2695863


to be stored in memory. As a result, it suffers from huge memory usage and high computational cost. Second, no RPCA-based approach uses features other than pixel intensities for background modeling, because doing so would increase memory usage even further. RPCA-based approaches are therefore not suitable for practical background subtraction systems. In contrast, Online Robust Principal Component Analysis (OR-PCA) [4], which processes one frame per time instance via stochastic optimization, provides a very attractive alternative to batch RPCA schemes. In [6], OR-PCA is adapted to background/foreground separation using image decomposition with an initialization scheme. However, only intensity features are considered in that work, and because of its parameter settings the system is not applicable to visual surveillance.

In this paper, we present a multi-feature based OR-PCA scheme for robust background subtraction. We briefly summarize our methodology here. First, a multiple feature extraction process is performed on a sliding block of N video frames, and the feature model is updated whenever a new sample arrives. Second, OR-PCA is applied frame by frame to every incoming video block using multiple features. Third, similarity measures are computed between the background feature model and the extracted low-dimensional subspace model for each feature. In addition, a weighted sum of the similarity measures over all features is computed, and a dynamic feature selection scheme is applied according to the background statistics. Finally, foreground detection is performed on the results of OR-PCA. Integrating multiple features into OR-PCA improves the quality of the foreground masks and increases the quantitative performance compared to other RPCA-via-PCP methods [1] and single-feature OR-PCA [6].

The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 describes the proposed framework based on OR-PCA with Dynamic Feature Selection (ORPCA-DFS). Experimental results are discussed in Section 4. Finally, Section 5 concludes our work.
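The per-frame OR-PCA step referenced above can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the alternation between a ridge fit of the low-rank coefficients and soft-thresholding of the residual, plus the block-coordinate basis update, follows the general shape of Feng and Xu's stochastic algorithm [4], while the crude warm start of the first basis column from the initial frame merely stands in for the initialization scheme of [6]. The parameter values (`lam1`, `lam2`, `rank`, iteration counts) are arbitrary choices for the sketch.

```python
import numpy as np

def soft_threshold(x, eps):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)

def orpca_frame(z, L, lam1=0.01, lam2=0.1, n_iter=20):
    """Fit one frame z (p,) against the current basis L (p, k): alternate a
    ridge fit of the low-rank coefficients r with soft-thresholding of the
    residual to obtain the sparse (candidate foreground) component e."""
    k = L.shape[1]
    r = np.zeros(k)
    e = np.zeros_like(z)
    G = L.T @ L + lam1 * np.eye(k)  # fixed within this frame
    for _ in range(n_iter):
        r = np.linalg.solve(G, L.T @ (z - e))
        e = soft_threshold(z - L @ r, lam2)
    return r, e

def orpca_stream(frames, rank=3, lam1=0.01, lam2=0.1, seed=0):
    """Process frames one at a time, updating the basis from accumulated
    sufficient statistics; returns the |sparse| maps as foreground cues."""
    p = frames[0].size
    rng = np.random.default_rng(seed)
    L = 0.01 * rng.standard_normal((p, rank))
    z0 = frames[0].ravel().astype(float)
    L[:, 0] = z0 / (np.linalg.norm(z0) + 1e-12)  # crude warm start (cf. [6])
    A = np.zeros((rank, rank))
    B = np.zeros((p, rank))
    masks = []
    for f in frames:
        z = f.ravel().astype(float)
        r, e = orpca_frame(z, L, lam1, lam2)
        A += np.outer(r, r)            # accumulate sufficient statistics
        B += np.outer(z - e, r)
        Areg = A + lam1 * np.eye(rank)
        for j in range(rank):          # block-coordinate basis update
            L[:, j] += (B[:, j] - L @ Areg[:, j]) / Areg[j, j]
        masks.append(np.abs(e).reshape(f.shape))
    return masks
```

On a synthetic stream with a static background and one transient bright blob, the sparse map of the blob frame stands out against the near-zero maps of the clean frames, which is the behavior the decomposition is meant to provide.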

2. RELATED WORK

Over the past few years, excellent methods have been proposed for background subtraction using subspace learning models [1]. Among them, Oliver et al. [8] were the first to model the background using Principal Component Analysis (PCA). Foreground detection is then achieved by thresholding the difference between the reconstructed background and the input image. PCA provides a robust subspace learning model, but it fails when the data is corrupted and outliers appear in the new subspace basis. In contrast, recent RPCA-based approaches [1] can tackle this weakness of traditional PCA, and remarkable improvements have been achieved with RPCA for background modeling; excellent surveys are available in [1]. Candes et al. [2] proposed a robust convex optimization technique to address the problems of PCA, decomposing the data matrix A as A = L + S by solving min ||L||_* + λ||S||_1 subject to A = L + S. Many RPCA solvers, such as the Augmented Lagrangian Multiplier (ALM) method, Singular Value Thresholding (SVT), and the Linearized Alternating Direction Method with an Adaptive Penalty (LADMAP), discussed in [1], solve a sub-optimization problem to separate the low-rank matrix and the sparse error in each iteration under a defined convergence criterion. These RPCA methods work in a batch optimization manner, and as a result suffer from huge memory usage and high time complexity.

Feng and Xu [4] recently proposed the Online Robust-PCA (OR-PCA) algorithm, which processes one sample per time instance using stochastic approximations, so no batch optimization is needed. The nuclear-norm objective function is reformulated in this approach so that the samples are decoupled in the optimization for sparse error separation; however, their work reports no results for the background subtraction application. Javed et al. [5] therefore modified OR-PCA for background/foreground separation. Only intensity information is used, via image decomposition with a Markov Random Field (MRF), to enhance the sparse component for dynamic background subtraction. A number of encouraging results are shown in [5], but tedious parameter tuning is the main drawback of their approach.

All these RPCA-based schemes work in either online or batch optimization manners, and only intensity or a single color channel is used for sparse error separation. As a result, foreground detection is not always robust, since pixel values alone are insufficient across different background scenes. To deal with this situation, we present a multiple-feature scheme integrated into OR-PCA with dynamic feature selection for handling different background dynamics.

3. METHODOLOGY

In this section, we discuss our multi-feature based ORPCA-DFS scheme for robust background subtraction in detail. The scheme consists of several steps: multiple feature extraction, feature background model construction, feature model update, OR-PCA, dynamic feature selection, and foreground detection, shown as a system diagram in Fig. 1. Initially, features are extracted and the feature background model is created from a video block of N frames. The model is then updated continuously to adapt to changes in the background scene. The modified OR-PCA is applied to each feature model for every incoming video frame. Similarity measures are then computed between the low-rank feature model and the features of the input frame. Moreover, a frame-by-frame dynamic feature selection scheme is designed according to the weighted sum over all features, and finally foreground detection is performed. The following sections describe each module in detail.
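The weighted, frame-by-frame selection step described above can be sketched as follows. This excerpt does not specify the similarity measure or the selection rule, so this sketch makes two labeled assumptions: cosine similarity stands in for the paper's similarity measure, and a channel is kept when its weighted score reaches the mean weighted score, standing in for the weighted-sum criterion. The function name `select_features` and the feature names are hypothetical.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened feature maps."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_features(frame_feats, model_feats, weights):
    """Hypothetical dynamic feature selection: score each feature channel by
    its similarity to the low-rank background model, then keep the channels
    whose weighted score reaches the mean weighted score over all channels.

    frame_feats / model_feats: dicts mapping feature name -> 2-D map.
    weights: dict mapping feature name -> importance weight.
    """
    weighted = {name: weights[name] * cosine(frame_feats[name], model_feats[name])
                for name in frame_feats}
    threshold = sum(weighted.values()) / len(weighted)  # weighted-sum criterion
    return sorted(n for n, s in weighted.items() if s >= threshold)
```

With equal weights, a channel whose current map matches the background model (similarity near 1) survives, while a channel that has drifted away from the model (similarity near 0) is dropped for that frame.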

3.1 Build Feature Model

First, the feature extraction process is described. Our work differs from previous background modeling schemes, where the model is created directly from grayscale images or color information. In this paper, the background model is created using a multiple feature extraction process. A sliding block is created to store the last N frames in a data matrix, e.g. At ∈