
Journal of Computing Science and Engineering, vol. 7, no. 2, pp. 99-111, 2013.

Direct Divergence Approximation between Probability Distributions and Its Applications in Machine Learning

Masashi Sugiyama, Tokyo Institute of Technology, Japan.
sugi@cs.titech.ac.jp http://sugiyama-www.cs.titech.ac.jp/sugi

Song Liu, Tokyo Institute of Technology, Japan.
song@sg.cs.titech.ac.jp

Marthinus Christoffel du Plessis, Tokyo Institute of Technology, Japan.
christo@sg.cs.titech.ac.jp

Masao Yamanaka, Tokyo Institute of Technology, Japan.
yamanaka@sp.dis.titech.ac.jp

Makoto Yamada, NTT Corporation, Japan.
yamada.makoto@lab.ntt.co.jp

Taiji Suzuki, The University of Tokyo, Japan.
s-taiji@stat.t.u-tokyo.ac.jp

Takafumi Kanamori, Nagoya University, Japan.
kanamori@is.nagoya-u.ac.jp


Abstract

Approximating a divergence between two probability distributions from their samples is a fundamental challenge in statistics, information theory, and machine learning. A divergence approximator can be used for various purposes such as two-sample homogeneity testing, change-point detection, and class-balance estimation. Furthermore, an approximator of a divergence between the joint distribution and the product of marginals can be used for independence testing, which has a wide range of applications including feature selection and extraction, clustering, object matching, independent component analysis, and causal direction estimation. In this paper, we review recent advances in divergence approximation. Our emphasis is that directly approximating the divergence without estimating probability distributions is more sensible than a naive two-step approach of first estimating probability distributions and then approximating the divergence. Furthermore, despite the overwhelming popularity of the Kullback-Leibler divergence as a divergence measure, we argue that alternatives such as the Pearson divergence, the relative Pearson divergence, and the L2-distance are more useful in practice because of their computationally efficient approximability, high numerical stability, and superior robustness against outliers.

Keywords

Machine learning, probability distributions, Kullback-Leibler divergence, Pearson divergence, L2-distance.

1 Introduction

Let us consider the problem of approximating a divergence $D$ between two probability distributions $P$ and $P'$ on $\mathbb{R}^d$ from two sets of independent and identically distributed samples $\mathcal{X} := \{x_i\}_{i=1}^{n}$ and $\mathcal{X}' := \{x'_{i'}\}_{i'=1}^{n'}$ following $P$ and $P'$.

A divergence approximator can be used for various purposes such as two-sample testing [1, 2], change detection in time-series [3], class-prior estimation under class-balance change [4], salient object detection in images [5], and event detection from movies [6] and Twitter [7]. Furthermore, an approximator of the divergence between the joint distribution and the product of marginal distributions can be used for solving a wide range of machine learning problems [8], including independence testing [9], feature selection [10, 11], feature extraction [12, 13], canonical dependency analysis [14], object matching [15], independent component analysis [16], clustering [17, 18], and causal direction learning [19]. For this reason, accurately approximating a divergence between two probability distributions from their samples has been one of the challenging research topics in the statistics, information theory, and machine learning communities.

A naive way to approximate the divergence from $P$ to $P'$, denoted by $D(P\|P')$, is to first obtain estimators $\widehat{P}_{\mathcal{X}}$ and $\widehat{P}'_{\mathcal{X}'}$ of the distributions $P$ and $P'$ separately from their samples $\mathcal{X}$ and $\mathcal{X}'$, and then compute a plug-in approximator $D(\widehat{P}_{\mathcal{X}}\|\widehat{P}'_{\mathcal{X}'})$. However, this naive two-step approach violates Vapnik's principle [20]:


If you possess a restricted amount of information for solving some problem, try to solve the problem directly and never solve a more general problem as an intermediate step. It is possible that the available information is sufficient for a direct solution but is insufficient for solving a more general intermediate problem.

More specifically, if we know the distributions $P$ and $P'$, we can immediately know their divergence $D(P\|P')$. However, knowing the divergence $D(P\|P')$ does not necessarily imply knowing the distributions $P$ and $P'$, because different pairs of distributions can yield the same divergence value. Thus, estimating the distributions $P$ and $P'$ is more general than estimating the divergence $D(P\|P')$. Following Vapnik's principle, direct divergence approximators $\widehat{D}(\mathcal{X}, \mathcal{X}')$ that do not involve the estimation of the distributions $P$ and $P'$ have been developed recently [21, 22, 23, 24, 25].

The purpose of this article is to give an overview of the development of such direct divergence approximators. In Section 2, we review the definitions of the Kullback-Leibler divergence, the Pearson divergence, the relative Pearson divergence, and the L2-distance, and discuss their pros and cons. Then, in Section 3, we review direct approximators of these divergences that do not involve the estimation of probability distributions. In Section 4, we show practical usage of divergence approximators in unsupervised change-detection in time-series, semi-supervised class-prior estimation under class-balance change, salient object detection in an image, and evaluation of statistical independence between random variables. Finally, we conclude in Section 5.

2 Divergence Measures

A function $d(\cdot, \cdot)$ is called a distance if and only if the following four conditions are satisfied:

Non-negativity: $\forall x, y,\ d(x, y) \ge 0$

Non-degeneracy: $d(x, y) = 0 \Longleftrightarrow x = y$

Symmetry: $\forall x, y,\ d(x, y) = d(y, x)$

Triangle inequality: $\forall x, y, z,\ d(x, z) \le d(x, y) + d(y, z)$

A divergence is a pseudo-distance: it still acts like a distance, but it may violate some of the above conditions. In this section, we introduce useful divergence and distance measures between probability distributions.


2.1 Kullback-Leibler (KL) Divergence

The most popular divergence measure in statistics and machine learning is the KL divergence [26], defined as

$$\mathrm{KL}(p\|p') := \int p(x) \log \frac{p(x)}{p'(x)}\,\mathrm{d}x,$$

where $p(x)$ and $p'(x)$ are the probability density functions of $P$ and $P'$, respectively.

Advantages of the KL divergence are that it is compatible with maximum likelihood estimation, it is invariant under input metric change, its Riemannian geometric structure is well studied [27], and it can be approximated accurately via direct density-ratio estimation [21, 22, 28]. However, it is not symmetric, it does not satisfy the triangle inequality, its approximation is computationally expensive due to the log function, and it is sensitive to outliers and numerically unstable because of the strong non-linearity of the log function and the possible unboundedness of the density-ratio function $p/p'$ [29, 24].
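To make the definition and the asymmetry concrete, here is a minimal Python sketch (our own illustration; the function names and parameter values are not from the paper) that compares the closed-form KL divergence between two univariate Gaussians with a naive Monte Carlo average of $\log(p(x)/p'(x))$ over samples from $p$, and shows that swapping the arguments changes the value:

```python
import math
import random

def kl_gauss(mu0, s0, mu1, s1):
    """Closed-form KL(N(mu0, s0^2) || N(mu1, s1^2))."""
    return math.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def kl_monte_carlo(mu0, s0, mu1, s1, n=100_000, seed=0):
    """Plug-in Monte Carlo estimate: average of log(p(x)/p'(x)) over x ~ p."""
    rng = random.Random(seed)

    def logpdf(x, mu, s):
        return -0.5 * math.log(2 * math.pi * s * s) - (x - mu) ** 2 / (2 * s * s)

    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu0, s0)
        total += logpdf(x, mu0, s0) - logpdf(x, mu1, s1)
    return total / n

# p = N(0, 1), p' = N(1, 2^2): exact value vs. Monte Carlo estimate
print(kl_gauss(0.0, 1.0, 1.0, 2.0), kl_monte_carlo(0.0, 1.0, 1.0, 2.0))
# asymmetry: swapping the arguments gives a different divergence
print(kl_gauss(1.0, 2.0, 0.0, 1.0))
```

Note that each Monte Carlo term involves the log of a density ratio, which is exactly where the numerical instability discussed above can creep in when $p/p'$ is unbounded.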

2.2 Pearson (PE) Divergence

The PE divergence [30] is a squared-loss variant of the KL divergence, defined as

$$\mathrm{PE}(p\|p') := \int p'(x) \left(\frac{p(x)}{p'(x)} - 1\right)^2 \mathrm{d}x. \qquad (1)$$

Because both the PE and KL divergences belong to the class of Ali-Silvey-Csiszár divergences (also known as $f$-divergences) [31, 32], they share similar theoretical properties such as invariance under input metric change.

The PE divergence can also be accurately approximated via direct density-ratio estimation in the same way as the KL divergence [23, 28]. However, its approximator can be obtained analytically in a computationally much more efficient manner than the KL divergence, because the quadratic function the PE divergence adopts is compatible with least-squares estimation. Furthermore, the PE divergence tends to be more robust against outliers than the KL divergence [33]. However, other weaknesses of the KL divergence, such as asymmetry, violation of the triangle inequality, and the possible unboundedness of the density-ratio function $p/p'$, remain unsolved in the PE divergence.
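To make the least-squares connection concrete, here is a minimal sketch of a uLSIF-style estimator of the PE divergence: the density ratio $p/p'$ is modeled as a linear combination of Gaussian kernels and fitted by regularized least squares, after which $\mathrm{PE}(p\|p')$ is estimated via the identity $\mathrm{PE}(p\|p') = \mathbb{E}_p[p(x)/p'(x)] - 1$. The function name, kernel width, and regularization constant are illustrative choices of ours, not the exact procedure of [23]:

```python
import numpy as np

def ulsif_pe(x_p, x_q, sigma=0.5, lam=1e-3, n_basis=50, seed=0):
    """Least-squares density-ratio sketch of PE(p||p').

    x_p: samples from p, x_q: samples from p' (both of shape (n, d)).
    Gaussian kernel centers are drawn from the p-samples."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(x_p), size=min(n_basis, len(x_p)), replace=False)
    centers = x_p[idx]

    def phi(x):  # (n, b) Gaussian design matrix
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * sigma**2))

    Phi_p, Phi_q = phi(x_p), phi(x_q)
    H = Phi_q.T @ Phi_q / len(x_q)   # second moment of basis under p'
    h = Phi_p.mean(axis=0)           # first moment of basis under p
    theta = np.linalg.solve(H + lam * np.eye(len(h)), h)
    r_p = Phi_p @ theta              # fitted ratio p/p' at the p-samples
    return r_p.mean() - 1.0          # PE(p||p') = E_p[p(x)/p'(x)] - 1

# identical distributions -> estimate near 0; shifted -> clearly positive
rng = np.random.default_rng(1)
x_p = rng.normal(0.0, 1.0, size=(2000, 1))
x_q = rng.normal(0.0, 1.0, size=(2000, 1))
x_q_shift = rng.normal(1.0, 1.0, size=(2000, 1))
print(ulsif_pe(x_p, x_q), ulsif_pe(x_p, x_q_shift))
```

Because the objective is quadratic in the parameters, the fit reduces to one linear solve; this is the computational advantage over KL-based estimators mentioned above.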

2.3 Relative Pearson (rPE) Divergence

To overcome the possible unboundedness of the density-ratio function $p/p'$, the rPE divergence was recently introduced [24]. The rPE divergence is defined as

$$\mathrm{rPE}(p\|p') := \mathrm{PE}(p\|q_\alpha) = \int q_\alpha(x) \left(\frac{p(x)}{q_\alpha(x)} - 1\right)^2 \mathrm{d}x, \qquad (2)$$


where, for $0 \le \alpha < 1$, $q_\alpha$ is defined as the $\alpha$-mixture of $p$ and $p'$:

$$q_\alpha = \alpha p + (1 - \alpha) p'.$$

When $\alpha = 0$, the rPE divergence is reduced to the plain PE divergence. The quantity $p/q_\alpha$ is called the relative density ratio, which is always upper-bounded by $1/\alpha$ for $\alpha > 0$ because

$$\frac{p(x)}{q_\alpha(x)} = \frac{1}{\alpha + (1 - \alpha)\,p'(x)/p(x)} \le \frac{1}{\alpha}.$$
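This bound is easy to check numerically. In the sketch below (our own illustration, not from the paper), $p = N(0, 1)$ and $p' = N(0, 0.2^2)$, so the plain ratio $p/p'$ explodes in the tails, while the $\alpha$-relative ratio $p/q_\alpha$ stays below $1/\alpha$:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated elementwise."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# p has heavier tails than p', so p/p' grows without bound as |x| increases
x = np.linspace(-5, 5, 2001)
p = gauss_pdf(x, 0.0, 1.0)
p_prime = gauss_pdf(x, 0.0, 0.2)

alpha = 0.1
q = alpha * p + (1 - alpha) * p_prime  # alpha-mixture q_alpha

print((p / p_prime).max())        # huge: the plain ratio is effectively unbounded
print((p / q).max(), 1 / alpha)   # the relative ratio never exceeds 1/alpha = 10
```

This boundedness is what makes rPE-based estimators numerically stable even when the plain density ratio $p/p'$ is unbounded.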