
Image Segmentation with Superpixel-based Covariance Descriptors in Low-Rank Representation

arXiv:1605.05466v1 [cs.CV] 18 May 2016

Xianbin Gu, Jeremiah D. Deng, Member, IEEE, and Martin K. Purvis Department of Information Science, University of Otago, Dunedin 9054, New Zealand


Abstract—This paper investigates the problem of image segmentation using superpixels. We propose two approaches to enhance the discriminative ability of the superpixels' covariance descriptors. In the first, we employ the Log-Euclidean distance as the metric on the covariance manifolds and then use an RBF kernel to measure the similarities between covariance descriptors. The second method extracts the subspace structure of the set of covariance descriptors by extending a low-rank representation algorithm to the covariance manifolds. Experiments are carried out on the Berkeley Segmentation Dataset; compared with state-of-the-art segmentation algorithms, both methods are competitive.

Index Terms—Image segmentation, superpixels, low-rank representation, covariance matrices, manifolds

1 INTRODUCTION

The covariance matrix is widely used in image segmentation: pixel-wise features such as color and gradient are assembled into one symmetric positive definite matrix (Sym_d^+) called a covariance descriptor. Research shows that such descriptors are highly discriminative features for many image processing tasks, such as image classification, segmentation, and object recognition [12], [14], [25], [32]. In image segmentation, the covariance descriptor is often built from basic features of a group of pixels, such as color, intensity, position, or gradients. Different combinations of basic features yield different covariance descriptors, which often influence an algorithm's performance. However, research on how to construct covariance descriptors for a specific task is insufficient, and the relation between basic feature combinations and algorithm performance is still unclear. Generally, researchers construct covariance descriptors by trying different combinations of basic features and taking the one that gives the best performance [13], [17], i.e., in an empirical way. This may be handy in practice but lacks theoretical support.

Corresponding author: Jeremiah D. Deng (email: [email protected]).

Fortunately, there are other options that can enhance the discriminative ability of covariance descriptors without repeatedly testing feature combinations. These methods are based on two facts. First, from the viewpoint of differential geometry, covariance descriptors are symmetric positive definite matrices lying on a convex cone in Euclidean space, i.e., they are points on a Sym_d^+ manifold, so a properly defined metric on Sym_d^+ will improve the performance of covariance descriptors. Second, there are always correlations between the basic features that build up the covariance descriptor, and these may introduce noise; thus, removing the noise in covariance descriptors is also beneficial. To this end, we study the RBF kernel and the low-rank representation algorithm on Sym_d^+ manifolds and compare them with state-of-the-art methods through experiments on different covariance descriptors extracted from superpixels.

The rest of the paper is organized as follows. Section 2 presents a brief review of the algorithms and concepts that are necessary for this paper. Section 3 discusses different methods to measure the similarity of covariance descriptors. Section 4 compares the algorithms, and Section 5 gives the conclusion.

2 BACKGROUND

2.1 Manifold, covariance descriptor and multicollinearity

A manifold M of dimension d is a topological space that is locally homeomorphic to open subsets of the Euclidean space R^d [31]. For analysis on M, there are usually two options. One is to embed M into R^d so as to create a linear structure on which methods from Euclidean space can be applied directly. The alternative concentrates on the intrinsic properties of M, i.e., it defines a proper intrinsic metric on M and launches the analysis on the true structure of M. The first method is more intuitive but less accurate, since the extrinsic metric may not align with the intrinsic geometry of the manifold.


The second method represents the manifold structure precisely, but its analysis is less convenient than in Euclidean space.

Let F = (f_1, ..., f_n)^T be a feature array, where f_i is a vector whose entries are the observations of the i-th feature. A covariance descriptor is the covariance matrix of F, defined as

$$\mathrm{cov}(\mathbf{F}) = \left[ E\left( (\mathbf{f}_i - \mu_i)^T (\mathbf{f}_j - \mu_j) \right) \right]_{n \times n}, \tag{1}$$

where μ_i is the mean of the i-th feature f_i, and [·]_{n×n} denotes an n × n matrix whose entry at position (i, j) is given by the "·". Evidently, different sets of f_i generate different cov(F), which makes the performance vary.

Collinearity (or multicollinearity) is a term from statistics that refers to a linear association between two (or more) variables. Specifically, given a feature array F, suppose there exists a set of not-all-zero scalars λ_1, ..., λ_n such that

$$\lambda_1 \mathbf{f}_1 + \lambda_2 \mathbf{f}_2 + \cdots + \lambda_n \mathbf{f}_n + \mathbf{u} = 0. \tag{2}$$

If u = 0, F exhibits perfect multicollinearity; if u ∼ N(0, σ), F is nearly multicollinear. This multicollinearity phenomenon is common in image segmentation, because some variables in the feature array F, such as gradients, are computed directly from other included variables (i.e., inclusion). Considering Eq. (2) and Eq. (1) simultaneously, it is easy to see that such inclusion may produce redundant entries and noise in cov(F).
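As an illustration of Eq. (1) and Eq. (2), the following NumPy sketch (our own toy construction, not code from the paper) builds a covariance descriptor from synthetic channel features and shows how appending a nearly collinear feature drives the descriptor toward singularity:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 500                                   # number of pixel observations
r, g, b = rng.uniform(0, 1, (3, m))       # stand-ins for R, G, B channels

# Eq. (1): the covariance descriptor of the feature array F (rows = features).
F = np.stack([r, g, b])
cov3 = np.cov(F)                          # 3x3 SPD descriptor
print("3 features, smallest eigenvalue:", np.linalg.eigvalsh(cov3)[0])

# Eq. (2) with u ~ N(0, sigma): append a nearly collinear fourth feature.
f4 = 0.5 * r - 0.2 * g + 0.3 * b + rng.normal(0, 1e-3, m)
cov4 = np.cov(np.vstack([F, f4]))
# The smallest eigenvalue collapses: the descriptor is nearly rank-deficient,
# i.e. the "inclusion" injects redundancy and noise into cov(F).
print("4 features, smallest eigenvalue:", np.linalg.eigvalsh(cov4)[0])
```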

2.2 Segmentation with superpixels

A cluster of pixels is called a superpixel if its members are more similar to each other than to non-members. In [6], [18], the authors have shown that image segmentation benefits from using superpixels. There are three advantages to superpixel-based segmentation. First, it dramatically reduces the computation cost by treating the pixels inside one superpixel as a whole. Second, regional features can be extracted from superpixels, and these are more discriminative than pixel-wise features in many vision tasks. Third, multi-scale techniques can be applied by treating different parameter settings (or different algorithms) for superpixel generation as different scales.

In this paper, the segmentation is performed by a spectral clustering algorithm proposed in [18], which takes advantage of superpixels. Briefly, this algorithm contains three parts. The first is superpixel generation. Similar to [12], [18], [29], we produce superpixels with the Mean Shift algorithm [7] and the Felzenszwalb-Huttenlocher algorithm [8] under a few sets of parameters. The second is graph construction. We construct a bipartite graph on the pixels and superpixels, where the superpixels are represented by covariance descriptors. We adopt different ways to measure the similarities of the superpixels, and the results are reported in Section 4. The third part is spectral clustering, for which we use T-cut [18] to reduce the computational cost.

The framework is shown in Alg. 1.

Algorithm 1 Superpixel-based Segmentation
Input: An image M and the number of segments k.
Output: A k-way segmentation of M.
 1: Create over-segmentations of M by superpixel algorithms;
 2: Construct the bipartite graph G(X, Y, B) (refer to Section 3.1);
 3: Compute the weights on the edges by some similarity measure (refer to Alg. 2 and Alg. 3);
 4: Partition G by T-cut.
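For concreteness, here is a minimal, self-contained Python analogue of Alg. 1, under several stand-in assumptions of ours: SLIC replaces the Mean Shift / Felzenszwalb-Huttenlocher over-segmentations, a single scale is used, the superpixels are clustered directly with off-the-shelf spectral clustering rather than via the bipartite graph and T-cut, and the descriptor and jitter choices are illustrative (cf. Sections 3.1-3.2):

```python
import numpy as np
from scipy.linalg import logm
from skimage.segmentation import slic
from sklearn.cluster import SpectralClustering

def cov_descriptor(pixels):
    """CovI-style descriptor: covariance of [R, G, B] over one superpixel."""
    # Small jitter keeps the matrix strictly positive definite (our choice).
    return np.cov(pixels.T) + 1e-6 * np.eye(pixels.shape[1])

def segment(image, k, n_segments=200, sigma=20.0):
    labels = slic(image, n_segments=n_segments)        # step 1: over-segmentation
    ids = np.unique(labels)
    # Step 2: one covariance descriptor per superpixel, mapped to the log-domain
    # (assumes each superpixel contains more than a handful of pixels).
    L = np.array([logm(cov_descriptor(image[labels == i].astype(float))).real.ravel()
                  for i in ids])
    # Step 3: Log-Euclidean RBF affinities between superpixels (cf. Alg. 2).
    d2 = ((L[:, None, :] - L[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sigma * d2)
    # Step 4: spectral clustering of superpixels stands in for the T-cut.
    sp = SpectralClustering(n_clusters=k, affinity="precomputed").fit_predict(W)
    return sp[np.searchsorted(ids, labels)]            # per-pixel k-way labels
```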

3 SIMILARITY MEASUREMENT

3.1 Construction of bipartite graph

Let P = {p_1, ..., p_n} denote the set of pixels of a given image and S be a collection {S_i}, where S_i = {s_t^{(i)}} is a set of superpixels. The bipartite graph G(X, Y, B) is built as follows: set X = P ∪ S and Y = S, and let B = E_1 ∪ E_2 be the edges, where E_1 and E_2 represent the edges in the vertex pairs (p_k, s_t^{(i)}) and (s_k^{(i)}, s_l^{(i)}), respectively.

A number of methods have been proposed to measure the similarities between superpixels (i.e., the weights on the edges in E_2). Some are based on geometric constraints [18], and some are based on encoding techniques [29]. In [12], a covariance descriptor based method was proposed, which employed color together with a covariance matrix of the color values [R, G, B] to assess the similarities between superpixels. However, it is still necessary to explore how best to make use of covariance descriptors. Our first idea is to use a geodesic distance to measure the distance between the covariance descriptors. Second, we note that multicollinearity is almost inevitable when building the covariance descriptors, which encourages us to use the low-rank representation algorithm. A schematic of the graph construction is sketched below.
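The following sketch assembles B as a sparse matrix from a list of over-segmentation label maps and a user-supplied superpixel similarity function. It is our schematic, not the construction of [18] verbatim: we fill E_2 densely within each over-segmentation for clarity (in practice E_2 is typically restricted to neighboring superpixels), and `sim_fn` is a hypothetical callable, e.g. the Log-Euclidean RBF kernel of Section 3.2.

```python
import numpy as np
from scipy.sparse import coo_matrix

def bipartite_graph(seg_maps, sim_fn, alpha=1e-3):
    """Build B for G(X, Y, B): rows = pixels + all superpixels, cols = all superpixels.

    seg_maps : list of integer label maps (one per over-segmentation), labels 0..n_i-1.
    sim_fn   : sim_fn(i, k, l) -> weight between superpixels k, l of map i.
    alpha    : fixed weight on the pixel-to-superpixel edges E_1 (Section 4.1).
    """
    n_pix = seg_maps[0].size
    counts = [int(m.max()) + 1 for m in seg_maps]
    n_sp = sum(counts)
    rows, cols, vals = [], [], []
    offset = 0
    for i, m in enumerate(seg_maps):
        flat = m.ravel()
        # E_1: each pixel connects to the superpixel containing it.
        rows.extend(range(n_pix))
        cols.extend(offset + flat)
        vals.extend([alpha] * n_pix)
        # E_2: superpixel-pair edges within this over-segmentation.
        for k in range(counts[i]):
            for l in range(counts[i]):
                rows.append(n_pix + offset + k)
                cols.append(offset + l)
                vals.append(sim_fn(i, k, l))
        offset += counts[i]
    return coo_matrix((vals, (rows, cols)), shape=(n_pix + n_sp, n_sp)).tocsr()
```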

3.2 The RBF kernel

A dataset of d × d covariance matrices lies on a manifold embedded in d²-dimensional Euclidean space, because the space of d × d covariance matrices is a convex cone in that space. It has been proved that the geometry of the space Sym_d^+ can be well explained by a Riemannian metric, which induces an infinite distance between a Sym_d^+ matrix and a non-Sym_d^+ matrix [2], [15], [24]. Differently from [12], we use the Log-Euclidean distance, a geodesic distance, which makes it possible to embed the manifold into an RKHS with an RBF kernel [15]. The Log-Euclidean distance between covariance matrices X_i and X_j is defined as $d_{LE}(X_i, X_j) := \|\mathrm{Log}(X_i) - \mathrm{Log}(X_j)\|_F$.


The RBF kernel can then be written as

$$k_{LE} : (\mathcal{X} \times \mathcal{X}) \to \mathbb{R} : \quad k_{LE}(X_i, X_j) := \exp\left(-\sigma\, d_{LE}^2(X_i, X_j)\right), \tag{3}$$

where σ > 0 is a scale parameter. The similarity matrix is defined as [B]_{ij} := k_{LE}(X_i, X_j). Details are given in Alg. 2.

Algorithm 2 Construct the Graph via RBF kernel
Input: The graph G(X, Y, B); parameters α and σ.
Output: A weighted graph G(X, Y, B̃).
 1: for all e in B do
 2:   if e ∈ E_2 then
 3:     e = exp(−σ d²_LE(s_k^{(i)}, s_l^{(i)}))
 4:   else
 5:     e = α
 6:   end if
 7: end for

Theoretically, the kernel method should be more effective, but this may not hold in our case. We give a possible explanation in Section 5.
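A direct NumPy/SciPy sketch of the kernel computation at the core of Alg. 2 (our illustrative code; the function names and toy matrices are ours):

```python
import numpy as np
from scipy.linalg import logm

def d_le(X, Y):
    """Log-Euclidean distance: ||Log(X) - Log(Y)||_F."""
    return np.linalg.norm(logm(X).real - logm(Y).real, "fro")

def k_le(X, Y, sigma=20.0):
    """Log-Euclidean RBF kernel, Eq. (3); positive definite for all sigma > 0 [15]."""
    # In practice, compute Log(X_i) once per descriptor and cache it.
    return np.exp(-sigma * d_le(X, Y) ** 2)

# Toy usage on two random SPD matrices.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); A = A @ A.T + np.eye(5)
B = rng.normal(size=(5, 5)); B = B @ B.T + np.eye(5)
print(d_le(A, B), k_le(A, B))
```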

3.3 Low rank representation for Sym_d^+

The low-rank representation (LRR) algorithm [4], [30] was proposed to remove redundancy and noise in a dataset. For a given dataset X, the LRR algorithm finds a matrix Z, called the low-rank representation of X, such that X = XZ + E holds, where E represents the noise. The performance of LRR is promising when X is a set of points in Euclidean space [5], [11], [21], and recently it has been extended to datasets lying on manifolds [10], [27], [28].

Let 𝒳 be a third-order tensor stacked from the covariance matrices (X_i)_{d×d}, i = 1, 2, ..., n. By embedding 𝒳 into d²-dimensional Euclidean space, our LRR model is set up as follows:

$$\min_{E,Z} \|E\|_F^2 + \lambda \|Z\|_*, \quad \text{s.t.} \quad \mathcal{X} = \mathcal{X} \times_3 Z + E, \tag{4}$$

where ‖·‖_F is the Frobenius norm, ‖·‖_* is the nuclear norm, λ is the balance parameter, and ×_3 denotes the mode-3 multiplication of a tensor and a matrix [16]. Eq. (4) can be solved via the Augmented Lagrangian Multiplier (ALM) method [19]; details are in the Appendix. Let Δ be a symmetric matrix whose entries are Δ_{ij} = Δ_{ji} = tr(X_i X_j). We can obtain a solution of Eq. (4) by iteratively updating the following variables:

$$J = \Theta\left(Z + \frac{Y}{\mu}\right), \tag{5}$$

and

$$Z = (\lambda \mu J - \lambda Y + 2\Delta)(2\Delta + \lambda \mu I)^{-1}, \tag{6}$$

where λ and µ are preset parameters, Y is the Lagrange multiplier, and Θ(·) is the singular value thresholding operator [3].

Since the coefficient matrix Z contains the subspace information of the dataset, it is reasonable to define the similarity matrix for spectral clustering as $[B]_{ij} := ([\tilde{U}\tilde{U}^T]_{ij})^2$, where Ũ is the matrix of row-normalized left singular vectors of Z [20].
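The following NumPy sketch implements the updates (5)-(6) and the similarity construction above for a list of SPD descriptors. It is a minimal sketch of ours: the SVT threshold 1/µ and the singular-vector cutoff are our assumptions, since the text leaves Θ's threshold implicit; the schedule for µ follows Alg. 3.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding operator Theta [3] (threshold 1/mu assumed)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lrr_spd(covs, lam=0.1, rho=1.9, mu=1e-6, mu_max=1e10, eps=1e-8, iters=500):
    """Solve Eq. (4) for a set of SPD descriptors via the updates (5)-(6)."""
    n = len(covs)
    # Gram matrix Delta_ij = tr(X_i X_j); PSD since Delta_ij = vec(X_i)^T vec(X_j).
    V = np.array([X.ravel() for X in covs])
    Delta = V @ V.T
    Z = np.zeros((n, n)); J = np.zeros((n, n)); Y = np.zeros((n, n))
    for _ in range(iters):
        J = svt(Z + Y / mu, 1.0 / mu)                          # Eq. (5)
        Z = (lam * mu * J - lam * Y + 2 * Delta) @ np.linalg.inv(
            2 * Delta + lam * mu * np.eye(n))                  # Eq. (6)
        if np.linalg.norm(Z - J, "fro") < eps:
            break
        Y = Y + mu * (Z - J)
        mu = min(rho * mu, mu_max)
    # Similarity for spectral clustering: B_ij = ([U~ U~^T]_ij)^2, with U~ the
    # row-normalized left singular vectors of Z [20].
    U, s, _ = np.linalg.svd(Z)
    U = U[:, s > 1e-10]
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    return (U @ U.T) ** 2
```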

4 EXPERIMENTS

The experiments are set up to compare the effects of different similarity measures on covariance descriptors. We construct covariance descriptors with three different feature vectors, named CovI, CovII, and CovIII:

• CovI: [R, G, B];
• CovII: [R, G, B, I, ∂I/∂x, ∂I/∂y, ∂²I/∂x², ∂²I/∂y²];
• CovIII: [R, G, B, ∂R/∂x, ∂R/∂y, ∂G/∂x, ∂G/∂y, ∂B/∂x, ∂B/∂y, ∂²R/∂x², ∂²R/∂y², ∂²G/∂x², ∂²G/∂y², ∂²B/∂x², ∂²B/∂y²].

From CovI to CovIII, the dimensionality of the covariance descriptor increases. For example, CovI captures the patterns in the R, G, B channels, while CovIII also includes the patterns of their derivatives. This makes the covariance descriptors more discriminative. But since the partial derivatives are computed directly from other contained features, the tendency toward multicollinearity also grows.

The methods for similarity measurement are the RBF kernel with the Log-Euclidean distance (RBFLE) and low-rank representation (LRR). All experiments are conducted on the Berkeley Segmentation Dataset, a standard benchmark for image segmentation, which includes 300 natural images selected from diverse scene categories [1] and also provides a number of human-annotated ground-truth segmentations for each image. In our experiments, every image is partitioned into K regions with K ∈ [2, 40], and the reported evaluation results are based on the K that gives the best performance for each algorithm.

4.1 The settings

As in [12], [18], [29], the superpixels are generated by the Mean Shift algorithm with three different sets of parameters and the Felzenszwalb-Huttenlocher algorithm with two different sets of parameters; the weights on E_1 are fixed at 1 × 10^{-3}. For RBFLE, the parameter σ is set to 20. In the LRR model, the parameter λ is sensitive to the noise level [20]. Since the BSDS contains images from different categories, whose noise levels differ, we tuned λ for every image by a grid search over {1, 0.1, 0.01, 0.001}; a higher noise level is associated with a greater λ. In addition, we apply k-nearest-neighbor filtering with k = 1 to refine the similarity graph.

4.2 The evaluation

The performance of the algorithms is assessed by four popular segmentation evaluation measures: the Probabilistic Rand Index (PRI) [26], the Variation of Information (VoI) [23], the Global Consistency Error (GCE) [22], and the Boundary Displacement Error (BDE) [9].


Algorithm 3 Construct the Graph via LRR
Input: A collection of covariance matrices S; parameter λ.
Output: A weighted similarity matrix B̃.
 1: for S_i = {s_t^{(i)}}_{t=1}^{n} in S do
 2:   Compute Z_i of S_i:
 3:   Initialize: J = Z_i = 0, Y = 0, µ = 10^{-6}, µ_max = 10^{10}, ρ = 1.9, ε = 10^{-8};
 4:   for k = 1 : n do
 5:     for l = 1 : n do
 6:       Δ_{kl} = tr(X_k X_l), where X_k is the covariance descriptor of s_k^{(i)}
 7:     end for
 8:   end for
 9:   while not converged do
10:     Fix Z_i and update J by J ← Θ(Z_i + Y/µ);
11:     Fix J and update Z_i by Z_i ← (λµJ − λY + 2Δ)(2Δ + λµI)^{-1};
12:     Check convergence:
13:     if ‖Z_i − J‖_F < ε then
14:       break
15:     else
16:       Update Y and µ: Y ← Y + µ(Z_i − J); µ ← min(ρµ, µ_max)
17:     end if
18:   end while
19:   Compute the singular value decomposition of Z_i: Z_i = U_i Σ_i V_i^T
20:   Compute the weights on e ∈ {(s_k^{(i)}, s_l^{(i)})}_{k,l=1}^{n} ⊂ B by (Ũ_i Ũ_i^T)², where Ũ_i is U_i normalized by rows.
21: end for

PRI is a nonparametric test that measures the probability of an arbitrary pair of samples being labeled consistently in the two segmentations; a higher PRI value means a better segmentation. VoI is a metric related to the conditional entropies between the class label distributions: it measures the sum of information loss and information gain between the two partitions, and so roughly measures the extent to which one clustering can explain the other; a lower VoI value indicates a better segmentation. GCE measures the difference between two regions that contain the same pixel in different segmentations; in particular, this metric compensates for differences in granularity, and GCE values close to 0 imply a good segmentation. BDE measures the average displacement error of boundary pixels between two segmentations, where the error of one boundary pixel is defined as the distance between that pixel and the closest pixel in the other boundary image; a lower BDE value means less deviation between the segmentation and the ground truth.
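For reference, here is a compact pair-counting sketch of PRI as it is commonly computed, i.e. the Rand index averaged over the available human annotations (our formulation, not code from [26]; label maps are assumed to hold non-negative integer labels):

```python
import numpy as np

def rand_index(a, b):
    """Rand index between two label maps via the pair-counting contingency table."""
    a = np.asarray(a).ravel(); b = np.asarray(b).ravel()
    n = a.size
    ct = np.zeros((a.max() + 1, b.max() + 1))       # contingency counts n_ij
    np.add.at(ct, (a, b), 1)
    comb = lambda x: x * (x - 1) / 2.0              # "choose 2", elementwise
    total = comb(n)
    same = comb(ct).sum()                           # pairs joined in both
    row = comb(ct.sum(1)).sum(); col = comb(ct.sum(0)).sum()
    return (total + 2 * same - row - col) / total

def pri(seg, ground_truths):
    """Probabilistic Rand Index [26]: mean Rand index over the human annotations."""
    return np.mean([rand_index(seg, g) for g in ground_truths])
```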

In addition, we rank the algorithms on each index and report the average rank (Avg.R, lower is better) of each algorithm, giving a straightforward comparison between the methods.

4.3 Results

The evaluations of the algorithms are listed in the following tables. The results of the state-of-the-art algorithms in [12], [18], [29] (i.e., SAS, ℓ0-sparse, Col+CovI) are also listed for reference. Table 1 shows the best results of the algorithms proposed in this paper together with those reported in other papers. For CovII+LRR, the PRI is the highest, and the remaining index values are very close to those of the other algorithms. Table 2 reports the results of RBFLE and LRR.

TABLE 1
Performance of different algorithms

Algorithms       PRI     VoI     GCE     BDE      Avg.R
SAS [18]         0.8319  1.6849  0.1779  11.2900  2.5
ℓ0-sparse [29]   0.8355  1.9935  0.2297  11.1955  3.75
Col+CovI [12]    0.8495  1.6260  0.1785  12.3034  2.25
CovII+LRR        0.8499  1.7418  0.1915  12.7635  3
CovIII+RBFLE     0.8397  1.9026  0.2103  11.5557  3.5

The performance of LRR is clearly superior, which indicates that noise reduction inside the covariance descriptor benefits the segmentation results. For RBFLE, the performance is inferior even though the similarities are measured by a geodesic metric and a kernel method, because the internal noise (due to multicollinearity) of the covariance descriptors may distort the true data values.

TABLE 2
Performance of RBFLE and LRR with different covariance descriptors

Algorithms     PRI     VoI     GCE     BDE
CovI+RBFLE     0.8349  2.0148  0.2218  12.5276
CovI+LRR       0.8454  1.7564  0.1885  13.0427
CovII+RBFLE    0.8372  1.9466  0.2148  11.6658
CovII+LRR      0.8499  1.7418  0.1915  12.7635
CovIII+RBFLE   0.8397  1.9026  0.2103  11.5557
CovIII+LRR     0.8451  1.7698  0.1932  12.4837

Moreover, from Table 2 we notice that the performance of RBFLE improves as the number of entries in the covariance descriptor grows, while the results of LRR vary. One possible reason is that we use an extrinsic metric in the LRR algorithm, which may influence its performance.



Fig. 1. Influence of different covariance descriptors on RBFLE and LRR: (a) original image; (b) ground truth; (c) result of CovI + RBFLE; (d) result of CovI + LRR; (e) result of CovII + RBFLE; (f) result of CovII + LRR; (g) result of CovIII + RBFLE; (h) result of CovIII + LRR.

Fig. 2. Another example: (a) original image; (b) ground truth; (c) result of CovI + RBFLE; (d) result of CovI + LRR; (e) result of CovII + RBFLE; (f) result of CovII + LRR; (g) result of CovIII + RBFLE; (h) result of CovIII + LRR.


In addition, Figure 1 and Figure 2 display some results of the algorithms with different covariance descriptors. In Figure 1, both algorithms produce cluttered segmentations when the covariance descriptor becomes more complicated, whereas in Figure 2 the performance of both algorithms benefits from the increased dimensionality of the covariance descriptors.

5 CONCLUSION

Multicollinearity commonly arises when constructing covariance descriptors. It brings redundancy and noise into the covariance descriptors, which can distort the true data. We have presented two approaches for reducing the effect of multicollinearity. One is to measure the distance between covariance descriptors by an RBF kernel with a particular geodesic distance; the other is to apply the low-rank representation algorithm on the Riemannian manifold. Our experiments show that the LRR method works well for covariance matrix based segmentation, since it captures the subspace structure of the dataset, which is less affected by noise. However, the LRR algorithm in this paper is based on an extrinsic metric of the manifold. In the future, we will explore the LRR algorithm with the intrinsic properties of the manifold, i.e., with a geodesic distance.


REFERENCES

[1] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. TPAMI, 33(5):898–916, 2011.
[2] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache. Fast and simple computations on tensors with Log-Euclidean metrics. 2005.
[3] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[4] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58(3):11, 2011.
[5] J. Chen and J. Yang. Robust subspace segmentation via low-rank representation. IEEE Transactions on Cybernetics, 44(8):1432–1445, 2014.
[6] B. Cheng, G. Liu, J. Wang, Z. Huang, and S. Yan. Multi-task low-rank affinity pursuit for image segmentation. In ICCV 2011, pages 2439–2446. IEEE, 2011.
[7] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002.
[8] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. International Journal of Computer Vision, 59(2):167–181, 2004.
[9] J. Freixenet, X. Muñoz, D. Raba, J. Martí, and X. Cufí. Yet another survey on image segmentation: Region and boundary information integration. In ECCV 2002, pages 408–422. Springer, 2002.
[10] Y. Fu, J. Gao, X. Hong, and D. Tien. Low rank representation on Riemannian manifold of symmetric positive definite matrices. In Proceedings of SDM. SIAM, 2015.
[11] A. Ganesh, Z. Lin, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast algorithms for recovering a corrupted low-rank matrix. In CAMSAP 2009, pages 213–216. IEEE, 2009.
[12] X. Gu, J. D. Deng, and M. K. Purvis. Improving superpixel-based image segmentation by incorporating color covariance matrix manifolds. In ICIP 2014, pages 4403–4406. IEEE, 2014.
[13] Y. H. Habiboğlu, O. Günay, and A. E. Çetin. Covariance matrix-based fire and flame detection method in video. Machine Vision and Applications, 23(6):1103–1113, 2012.
[14] M. T. Harandi, C. Sanderson, A. Wiliem, and B. C. Lovell. Kernel analysis over Riemannian manifolds for visual recognition of actions, pedestrians and textures. In WACV 2012, pages 433–439. IEEE, 2012.
[15] S. Jayasumana, R. Hartley, M. Salzmann, H. Li, and M. Harandi. Kernel methods on Riemannian manifolds with Gaussian RBF kernels. 2015.
[16] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[17] I. Kviatkovsky, A. Adam, and E. Rivlin. Color invariants for person reidentification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7):1622–1634, 2013.
[18] Z. Li, X.-M. Wu, and S.-F. Chang. Segmentation using superpixels: A bipartite graph partitioning approach. In CVPR 2012, pages 789–796. IEEE, 2012.
[19] Z. Lin, M. Chen, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010.
[20] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma. Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):171–184, 2013.
[21] G. Liu and S. Yan. Latent low-rank representation for subspace segmentation and feature extraction. In ICCV 2011, pages 1615–1622. IEEE, 2011.
[22] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV 2001, volume 2, pages 416–423, 2001.
[23] M. Meilă. Comparing clusterings: an axiomatic view. In ICML 2005, pages 577–584. ACM, 2005.
[24] X. Pennec, P. Fillard, and N. Ayache. A Riemannian framework for tensor computing. International Journal of Computer Vision, 66(1):41–66, 2006.
[25] O. Tuzel, F. Porikli, and P. Meer. Region covariance: A fast descriptor for detection and classification. In ECCV 2006, pages 589–600. Springer, 2006.
[26] R. Unnikrishnan, C. Pantofaru, and M. Hebert. Toward objective evaluation of image segmentation algorithms. TPAMI, 29(6):929–944, 2007.
[27] B. Wang, Y. Hu, J. Gao, Y. Sun, and B. Yin. Kernelized low rank representation on Grassmann manifolds. arXiv preprint arXiv:1504.01806, 2015.
[28] B. Wang, Y. Hu, J. Gao, Y. Sun, and B. Yin. Low rank representation on Grassmann manifolds: An extrinsic perspective. arXiv preprint arXiv:1504.01807, 2015.
[29] X. Wang, H. Li, S. Masnou, and L. Chen. Sparse coding and mid-level superpixel-feature for ℓ0-graph based unsupervised image segmentation. In Computer Analysis of Images and Patterns, pages 160–168. Springer, 2013.
[30] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Advances in Neural Information Processing Systems, pages 2080–2088, 2009.
[31] Y. Xie, J. Ho, and B. Vemuri. On a nonlinear generalization of sparse coding and dictionary learning. In Proceedings of the International Conference on Machine Learning, page 1480, 2013.
[32] J. Yao and J.-M. Odobez. Fast human detection from videos using covariance features. Technical report, Idiap, 2007.

APPENDIX

The solution of Eq. (4) partly follows the work of Wang et al. [28], although the distance induced by the Frobenius norm is not geodesic. The problem is rephrased as follows: find a matrix Z that satisfies

$$\min_{E,Z} \|E\|_F^2 + \lambda \|Z\|_*, \quad \text{s.t.} \quad \mathcal{X} = \mathcal{X} \times_3 Z + E, \tag{7}$$

where 𝒳 is a third-order tensor stacked from the covariance matrices (X_i)_{d×d}, i = 1, 2, ..., n; ‖·‖_F is the Frobenius norm; ‖·‖_* is the nuclear norm; λ is the balance parameter; and ×_3 denotes the mode-3 multiplication of a tensor and a matrix [16].

For the error term E, we have ‖E‖²_F = ‖𝒳 − 𝒳 ×_3 Z‖²_F, which we can rewrite as

$$\|E\|_F^2 = \sum_{i=1}^{N} \|E_i\|_F^2, \tag{8}$$

where E_i = X_i − Σ_{j=1}^{N} z_{ij} X_j is the i-th slice of E. Note that for any matrix A it holds that ‖A‖²_F = tr(A^T A), and X_i is symmetric, so the above can be expanded as

$$\begin{aligned} \|E_i\|_F^2 &= \mathrm{tr}\Big[\big(X_i - \textstyle\sum_j z_{ij} X_j\big)^T \big(X_i - \textstyle\sum_j z_{ij} X_j\big)\Big] \\ &= \mathrm{tr}(X_i X_i) - 2\,\mathrm{tr}\Big(\textstyle\sum_j z_{ij} X_i X_j\Big) + \mathrm{tr}\Big(\textstyle\sum_{j_1, j_2} z_{ij_1} z_{ij_2} X_{j_1} X_{j_2}\Big). \end{aligned} \tag{9}$$

Let Δ be a symmetric matrix of size N × N whose entries are Δ_{ij} = Δ_{ji} = tr(X_i X_j). Because X_i is symmetric, Δ_{ij} can be written as Δ_{ij} = vec(X_i)^T vec(X_j), where vec(·) vectorizes a matrix; as a Gram matrix, Δ is positive semidefinite. So we have

$$\|E_i\|_F^2 = \Delta_{ii} - 2\sum_{j=1}^{N} z_{ij} \Delta_{ij} + \mathbf{z}_i \Delta \mathbf{z}_i^T, \tag{10}$$

where z_i denotes the i-th row of Z.


With Δ = P P^T (e.g., P = Δ^{1/2}), summing over i gives

$$\|E\|_F^2 = \sum_{i=1}^{N} \Delta_{ii} - 2\,\mathrm{tr}[Z\Delta] + \mathrm{tr}[Z \Delta Z^T] = C + \|ZP - P\|_F^2, \tag{11}$$

where C is a constant independent of Z. The optimization is then equivalent to

$$\min_Z \|ZP - P\|_F^2 + \lambda \|Z\|_*. \tag{12}$$

First, we transform the above into the equivalent formulation

$$\min_{Z, J} \frac{1}{\lambda}\|ZP - P\|_F^2 + \|J\|_*, \quad \text{s.t.} \quad J = Z. \tag{13}$$

Then, by ALM, we have

$$\min_{Z, J} \frac{1}{\lambda}\|ZP - P\|_F^2 + \|J\|_* + \langle Y, Z - J \rangle + \frac{\mu}{2}\|Z - J\|_F^2, \tag{14}$$

where Y is the Lagrange multiplier and µ is a penalty parameter. This problem can be solved via the following two subproblems [19]:

$$J_{k+1} = \arg\min_J \Big( \|J\|_* + \langle Y, Z_k - J \rangle + \frac{\mu}{2}\|Z_k - J\|_F^2 \Big), \tag{15}$$

and

$$Z_{k+1} = \arg\min_Z \Big( \frac{1}{\lambda}\|ZP - P\|_F^2 + \langle Y, Z - J_k \rangle + \frac{\mu}{2}\|Z - J_k\|_F^2 \Big). \tag{16}$$

Fortunately, according to [3], the solutions of these subproblems have the following closed forms:

$$J = \Theta\left(Z + \frac{Y}{\mu}\right), \tag{17}$$

$$Z = (\lambda \mu J - \lambda Y + 2\Delta)(2\Delta + \lambda \mu I)^{-1}, \tag{18}$$

where Θ(·) is the singular value thresholding operator [3]. Thus, by iteratively updating J and Z until the convergence conditions are satisfied, a solution for Eq. (4) can be found. A numerical check of the key identity is sketched below.
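As a sanity check, the following NumPy snippet (our own; it takes P to be the symmetric square root Δ^{1/2}, for which P P^T = Δ and the constant C in Eq. (11) vanishes) verifies ‖E‖²_F = ‖ZP − P‖²_F numerically on random SPD descriptors:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
d, n = 4, 6
Xs = [(lambda A: A @ A.T + np.eye(d))(rng.normal(size=(d, d))) for _ in range(n)]

V = np.array([X.ravel() for X in Xs])    # rows are vec(X_i)
Delta = V @ V.T                          # Gram matrix, Delta_ij = tr(X_i X_j)
P = sqrtm(Delta).real                    # symmetric square root, Delta = P P^T

Z = rng.normal(size=(n, n))
lhs = sum(np.linalg.norm(Xs[i] - sum(Z[i, j] * Xs[j] for j in range(n)), "fro") ** 2
          for i in range(n))             # ||E||_F^2 directly from its definition
rhs = np.linalg.norm(Z @ P - P, "fro") ** 2
print(abs(lhs - rhs))                    # ~1e-9: the identity of Eq. (11) holds
```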