Monday 18 October 2010

Gaussian Mixture Models and Expectation-Maximization

Like K-Means, Gaussian Mixture Models (GMM) can be regarded as a type of unsupervised learning or clustering method. They are among the most statistically mature methods for clustering. But unlike K-Means, GMMs are able to build soft clustering boundaries, i.e., points in space can belong to any class with a given probability.





The code presented here is also part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications. It is based on the already excellent AForge.NET Framework. Please see the starting guide for more details. The latest version of the framework includes the latest version of this code plus many other statistics and machine learning tools.

Contents





  1. Introduction

  2. Source code

    1. Normal Distributions

    2. Mixture Distributions

    3. Expectation-Maximization

    4. Gaussian Mixture Models


  3. Using the code

  4. Sample application

  5. Conclusion

  6. References

  7. See also



Introduction


In statistics, a mixture model is a probabilistic model which assumes that the underlying data belong to a mixture distribution. The density function of a mixture distribution is a convex combination (a linear combination in which all coefficients or weights are non-negative and sum to one) of other probability density functions:

p(x) = w1 p1(x) + w2 p2(x) + … + wn pn(x),    with    w1 + w2 + … + wn = 1

The individual pi(x) density functions that are combined to make the mixture density p(x) are called the mixture components, and the weights w1, w2, …, wn associated with each component are called the mixture weights or mixture coefficients.



The most common mixture distribution is the Gaussian (Normal) mixture, in which each of the mixture components is a Gaussian distribution with its own mean and variance parameters.






To give a more solid and concrete understanding of Gaussian mixture models, we will jump directly into how to represent those abstractions in source code, using class diagrams. Hopefully the class diagrams give a better and more direct overview of the subject than a lengthy theoretical discussion.



Source code


First we will discuss the characteristics common to any probability distribution. The IDistribution interface (depicted below) says that any probability distribution has a probability function and a distribution function. It also says that probability distributions can be fitted to sets of observations through a parameter estimation method.



IDistribution



Since distribution functions can be either univariate or multivariate, the methods above accept any number of scalar values as input through the use of the params keyword. The Fit method for parameter estimation also accepts a general System.Array because we could be given either an array of univariate observations in the form of a double[] or an array of multivariate observations in the form of a jagged double[][] array. Since each observation may have a different weight in the estimation process, the weights parameter can be used to pass those weights to the fitting method.



Probability distributions may also have some associated measures, mainly the Expectation and the Variance of the modeled random variable. Depending on whether the distribution is univariate or multivariate, the expected values and variances can be either a single scalar or a vector of scalars. For multivariate distributions the variance can also be represented by a symmetric, positive-definite matrix known as the variance-covariance matrix.
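Putting the members described so far together, a rough sketch of what such an interface could look like in C# is shown below. This is illustrative only: the member names follow the description above, but the exact signatures in the framework may differ.

// Illustrative sketch of a distribution contract; not the framework's actual interface.
public interface IDistribution
{
    // Expectation and variance of the modeled random variable. For multivariate
    // distributions these become a mean vector and a variance-covariance matrix.
    double Mean { get; }
    double Variance { get; }

    // Probability function, accepting one or more scalar values so that
    // both univariate and multivariate cases are covered.
    double ProbabilityFunction(params double[] x);

    // Cumulative distribution function, with the same calling convention.
    double DistributionFunction(params double[] x);

    // Parameter estimation. The observations may be a double[] (univariate)
    // or a double[][] (multivariate); weights gives the importance of each sample.
    void Fit(System.Array observations, double[] weights);
}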



IDistributions



Moreover, probability distributions can also be continuous or discrete. In continuous distributions the probability function is known as the Probability Density Function (pdf), and in discrete distributions the probability function is known as the Probability Mass Function (pmf). However, since our distribution of interest, the Gaussian, is a continuous distribution, we will be focusing only on continuous multivariate distributions:






The Normal (Gaussian) distribution and the mixture distributions fall under the multivariate continuous distributions category and are implemented as such.




Normal distributions



The Normal (or Gaussian) multivariate distribution is a multivariate distribution whose parameters are the mean vector μ and a variance-covariance matrix Σ. Its probability density function is given by:

p(x) = (2π)^(−k/2) |Σ|^(−1/2) exp( −½ (x − μ)ᵀ Σ⁻¹ (x − μ) )

where k is the dimension of the observation vector x, |Σ| is the determinant of the covariance matrix and Σ⁻¹ is its inverse.

The more detailed class diagram for the normal distribution shows the multivariate characteristics of most of its methods, in particular the Fit method for estimating Gaussian parameters from a multivariate set of data. Each sample, given in the form of a double[], may also have a different associated weight. In this case, the weight associated with each of the samples can be passed through the weights parameter.



NormalDistribution
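As a quick numerical illustration of the density formula above, the sketch below evaluates the bivariate (two-dimensional) Gaussian density directly. The NormalPdf2D helper is hypothetical and is only meant to make the formula concrete; in practice one would simply call the distribution's probability function.

// Evaluates the bivariate Normal density at x, given the mean vector mu
// and the 2x2 covariance matrix sigma. Hypothetical helper for illustration.
static double NormalPdf2D(double[] x, double[] mu, double[,] sigma)
{
    double a = sigma[0, 0], b = sigma[0, 1];
    double c = sigma[1, 0], d = sigma[1, 1];

    double det = a * d - b * c;                   // determinant |Σ|
    double dx = x[0] - mu[0], dy = x[1] - mu[1];  // centered sample (x − μ)

    // Quadratic form (x − μ)ᵀ Σ⁻¹ (x − μ), using the closed-form 2x2 inverse
    double q = (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det;

    // (2π)^(−k/2) |Σ|^(−1/2) exp(−½ q), with k = 2
    return Math.Exp(-0.5 * q) / (2.0 * Math.PI * Math.Sqrt(det));
}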

To estimate the parameters of a Gaussian distribution, all we have to do is compute the mean and the variance-covariance matrix from the given data. This is a very straightforward procedure and can be done by using the standard methods of the Tools class in the Accord.Statistics namespace. The Fit method returns the fitted distribution as a return value and leaves the current distribution untouched.
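For reference, the computation itself boils down to the following standalone sketch of the (biased) sample mean and covariance estimates. The framework's Tools class may use a different normalization; this is only meant to show the idea.

// Computes the sample mean vector and sample covariance matrix of a
// multivariate dataset given as a jagged array. Standalone sketch only.
static void EstimateGaussian(double[][] data, out double[] mean, out double[,] covariance)
{
    int n = data.Length;        // number of observations
    int d = data[0].Length;     // number of dimensions

    mean = new double[d];
    foreach (double[] x in data)
        for (int j = 0; j < d; j++)
            mean[j] += x[j] / n;

    covariance = new double[d, d];
    foreach (double[] x in data)
        for (int i = 0; i < d; i++)
            for (int j = 0; j < d; j++)
                covariance[i, j] += (x[i] - mean[i]) * (x[j] - mean[j]) / n;
}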



Mixture distributions


Mixture distributions are just convex combinations of other probability distributions. As such, mixture distributions have an array of component distributions and a coefficient array which contains the weights of each of the component probability distributions.



Mixture



In the generic implementation above, T is the type of the distributions used in the mixture. The most common mixture distribution, the Gaussian Mixture Distribution, could then be created by instantiating a Mixture&lt;NormalDistribution&gt; object, passing the initial Normal distributions as constructor parameters.
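For example, a mixture over two-dimensional data could be assembled by hand as in the sketch below. The component parameters are made up for illustration, and the constructor shape (taking only the initial components) is assumed from the description above; the actual constructor may also accept initial mixing coefficients.

// Two bivariate Gaussian components with hand-picked (illustrative) parameters
var a = new NormalDistribution(new double[] { 0, 0 },
    new double[,] { { 1.0, 0.0 }, { 0.0, 1.0 } });

var b = new NormalDistribution(new double[] { 5, 5 },
    new double[,] { { 2.0, 0.3 }, { 0.3, 1.5 } });

// Build the Gaussian mixture from its initial components
var mixture = new Mixture<NormalDistribution>(a, b);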



Alternatively, the parameters of the mixture distributions could be estimated from a set of observations by calling the Fit method. To estimate the parameters of a mixture distribution we will be using a common technique known as the Expectation-Maximization algorithm.




Expectation-Maximization


Expectation-Maximization (EM) is a well-established maximum likelihood algorithm for fitting a mixture model to a set of training data. It should be noted that EM requires an a priori selection of the model order, namely, the number M of components to be incorporated into the model.



The general E-M algorithm comprises the following simple steps:




  1. Initialization

    Initialize the distribution parameters, such as the means, covariances and mixing coefficients, and evaluate the initial value of the log-likelihood (the goodness of fit of the current distribution against the observation dataset);

  2. Expectation

    Evaluate the responsibilities (i.e. weight factors of each sample) using the current parameter values;

  3. Maximization

    Re-estimate the parameters using the responsibilities found in the previous step;

  4. Repeat

    Re-evaluate the log-likelihood and check if it has changed; if it has changed by less than a given threshold, the algorithm has converged.
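In symbols, the expectation step computes the responsibility of the k-th component for the i-th sample as γk(xi) = wk pk(xi) / (w1 p1(xi) + … + wn pn(xi)), i.e. the component's share of the total mixture density at that point. The maximization step then re-estimates each coefficient wk as the average responsibility of component k over all samples (weighted by the sample weights, when these are given), and re-fits each component's own parameters using the γk values as sample weights. These are exactly the gamma, pi and pdf[k].Fit computations in the code below.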


 



Below is the actual realization of the algorithm as implemented in the Fit method of the Mixture<T> class.





/// <summary>
/// Fits the underlying distribution to a given set of observations.
/// </summary>
///
/// <param name="observations">The array of observations to fit the model against. The array
/// elements can be either of type double (for univariate data) or
/// type double[] (for multivariate data).</param>
/// <param name="weights">The weight vector containing the weight for each of the samples.</param>
/// <param name="options">Optional arguments which may be used during fitting, such
/// as regularization constants and additional parameters.</param>
///
/// <remarks>
/// Although both double[] and double[][] arrays are supported,
/// providing a double[] for a multivariate distribution or a
/// double[][] for a univariate distribution may have a negative
/// impact on performance.
/// </remarks>
///
public override void Fit(double[] observations, double[] weights, IFittingOptions options)
{
    // Estimation parameters
    double threshold = 1e-3;
    IFittingOptions innerOptions = null;

    if (options != null)
    {
        // Process optional arguments
        MixtureOptions o = (MixtureOptions)options;
        threshold = o.Threshold;
        innerOptions = o.InnerOptions;
    }


    // 1. Initialize means, covariances and mixing coefficients
    //    and evaluate the initial value of the log-likelihood

    int N = observations.Length;
    int K = components.Length;

    double weightSum = weights.Sum();

    // Initialize responsibilities
    double[] norms = new double[N];
    double[][] gamma = new double[K][];
    for (int k = 0; k < gamma.Length; k++)
        gamma[k] = new double[N];


    // Clone the current distribution values
    double[] pi = (double[])coefficients.Clone();
    T[] pdf = new T[components.Length];
    for (int i = 0; i < components.Length; i++)
        pdf[i] = (T)components[i].Clone();

    // Prepare the iteration
    double likelihood = logLikelihood(pi, pdf, observations, weights);
    bool converged = false;

    // Start
    while (!converged)
    {
        // 2. Expectation: Evaluate the component distributions'
        //    responsibilities using the current parameter values.
        Array.Clear(norms, 0, norms.Length);

        for (int k = 0; k < gamma.Length; k++)
            for (int i = 0; i < observations.Length; i++)
                norms[i] += gamma[k][i] = pi[k] * pdf[k].ProbabilityFunction(observations[i]);

        for (int k = 0; k < gamma.Length; k++)
            for (int i = 0; i < weights.Length; i++)
                if (norms[i] != 0) gamma[k][i] *= weights[i] / norms[i];

        // 3. Maximization: Re-estimate the distribution parameters
        //    using the previously computed responsibilities
        for (int k = 0; k < gamma.Length; k++)
        {
            double sum = gamma[k].Sum();

            for (int i = 0; i < gamma[k].Length; i++)
                gamma[k][i] /= sum;

            pi[k] = sum / weightSum;
            pdf[k].Fit(observations, gamma[k], innerOptions);
        }

        // 4. Evaluate the log-likelihood and check for convergence
        double newLikelihood = logLikelihood(pi, pdf, observations, weights);

        if (Double.IsNaN(newLikelihood) || Double.IsInfinity(newLikelihood))
            throw new ConvergenceException("Fitting did not converge.");

        if (Math.Abs(likelihood - newLikelihood) < threshold * Math.Abs(likelihood))
            converged = true;

        likelihood = newLikelihood;
    }

    // Become the newly fitted distribution.
    this.initialize(pi, pdf);
}


 



Gaussian Mixture Models



Gaussian mixture models are among the most commonly used examples of mixture distributions. The GaussianMixtureModel class encompasses a Mixture<NormalDistribution> object and provides methods to learn from data and to perform actual classification through a simplified interface.



Class diagram for Gaussian Mixture Model



Moreover, a common problem which arises in mixture model fitting through E-M is the proper initialization of the mixture parameters. One popular approach is to initialize the mixture parameters using the centroids detected by the K-Means algorithm. The code below shows how the mixture parameters are computed by first creating a KMeans object and then passing the clusters to the mixture model for further processing.







/// <summary>
/// Divides the input data into K clusters, modeling each
/// cluster as a multivariate Gaussian distribution.
/// </summary>
public double Compute(double[][] data, double threshold)
{
    int components = this.gaussians.Count;

    // Create a new K-Means algorithm
    KMeans kmeans = new KMeans(components);

    // Compute the K-Means
    kmeans.Compute(data, threshold);

    // Initialize the Mixture Model with data from K-Means
    NormalDistribution[] distributions = new NormalDistribution[components];
    double[] proportions = kmeans.Clusters.Proportions;
    for (int i = 0; i < components; i++)
    {
        double[] mean = kmeans.Clusters.Centroids[i];
        double[,] covariance = kmeans.Clusters.Covariances[i];
        distributions[i] = new NormalDistribution(mean, covariance);
    }

    // Fit the mixture of multivariate Gaussian distributions
    model = Mixture<NormalDistribution>.Estimate(data, threshold, proportions, distributions);

    // Return the log-likelihood as a measure of goodness-of-fit
    return model.LogLikelihood(data);
}


 



Using the code


Code usage is rather simple. First, instantiate a new GaussianMixtureModel object, passing the desired number of components in the Gaussian mixture. Then, call the Compute(double[][], double) method, passing the learning samples and a desired convergence threshold.








// Create a new Gaussian Mixture Model with 2 components
GaussianMixtureModel gmm = new GaussianMixtureModel(2);

// Compute the model (estimate)
gmm.Compute(samples, 0.0001);

// Classify a single sample
int c = gmm.Classify(sample);





After training has been completed, a new sample can be classified by calling the Classify(double[]) method.




 



Sample application


The accompanying sample application demonstrates how Gaussian Mixture Models can be used to cluster normally-distributed samples of data. By clicking the “Create random data” button, a given number of Normal distributions will be created in two-dimensional space using random mean vectors and covariance matrices.



After the data has been created, you can use the “Fit a Gaussian Mixture Model” button to fit a mixture of Gaussians to the data. Each of the Gaussians will receive a random color, and each sample will be colored according to the Gaussian it most probably belongs to.



 






Left: a random dataset containing three Gaussian clusters. Right: the clusters as identified by the Gaussian Mixture Model.



 






Left: a random dataset containing 10 Gaussian clusters. Right: the clusters as identified by the Gaussian Mixture Model.




Conclusion


In this article we saw how Gaussian Mixture Models can be successfully used to create soft clustering boundaries around data. Those soft boundaries are possible because in a mixture model each sample is said to belong to a cluster only with a certain probability.



A main drawback of GMMs is that the number of Gaussian mixture components, as in K-Means, is assumed to be known a priori, so they cannot be considered a totally unsupervised clustering method. To aid in this situation, one could use additional criteria such as the Minimum Message Length (MML) criterion.



Another problem arises with the correct initialization of the mixture components. A poor choice of initial parameters will invariably lead to poor results, as the E-M algorithm converges only to a local optimum. A commonly used solution is to initialize the parameters by randomly sampling from the data. Another common solution, used by the implementation presented here, is to perform the initialization with K-Means. However, K-Means itself may also converge to a poor solution and negatively impact the mixture model fitting.




References



  • Raja, Y.; Gong, S. Gaussian Mixture Models. Department of Computer Science, Queen Mary and Westfield College, England.



  • Bishop, C. M. (2007), Pattern Recognition and Machine Learning (Information Science and Statistics), Springer.



  • Wikipedia contributors, "Normal distribution," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Normal_distribution (accessed October 18, 2010).



  • Wikipedia contributors, "Probability density function," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Probability_density_function (accessed October 18, 2010).



  • Wikipedia contributors, "Mixture density," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Mixture_density (accessed October 18, 2010).






See also

