MLE estimator for mean always equal to the mean?

  • Thread starter: Bipolarity
  • Tags: mean, MLE
In summary, the MLE of the mean coincides with the sample mean for the Gaussian, Poisson, and Bernoulli distributions, but not for all distributions.
  • #1
Bipolarity
Suppose you have a distribution ##p(x; \mu)## whose mean is ##\mu##.
You take a sample of ##n## points ##(x_{1}, \ldots, x_{n})## drawn independently and identically from ##p(x; \mu)##.

The maximum likelihood estimator (MLE) of the mean ##\mu## is the value of ##\mu## that maximizes the likelihood ##\prod^{n}_{i = 1} p(x_{i}; \mu)##. It is easy to find using calculus.

The sample mean is simply ## \frac{(x_{1}+x_{2}+...+x_{n})}{n} ##.
It turns out that for Gaussian, Poisson, and Bernoulli distributions, the MLE estimator for the mean equals the sample mean. I was curious if this is the case for ALL distributions? If so, how would I prove this? If not, what is one distribution for which this isn't the case?
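
As a quick numerical check of the Gaussian case, here is a minimal Python sketch (assuming NumPy and SciPy are available; the seed, sample size, and the fixed ##\sigma = 2## are arbitrary choices for illustration). Maximizing the likelihood in ##\mu## numerically lands on the sample mean:

```python
# Minimal check for the Gaussian case: numerically maximize the
# likelihood in mu and compare the maximizer with the sample mean.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=50)   # i.i.d. sample, sigma = 2 treated as known

def neg_log_likelihood(mu):
    # -sum_i log p(x_i; mu) for the N(mu, 2^2) density
    return -np.sum(norm.logpdf(x, loc=mu, scale=2.0))

mle = minimize_scalar(neg_log_likelihood).x
print(mle, x.mean())   # the two numbers agree to numerical precision
```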

Thanks!

BiP
 
  • #2
I would start by trying some stuff with the uniform distribution.
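
To spell that suggestion out a bit (my own sketch, not necessarily what the previous poster had in mind, assuming NumPy): for the uniform family on ##(0, \theta)## the mean is ##\theta/2## and the likelihood is ##\theta^{-n}## for ##\theta \ge \max_i x_i##, so the MLE of ##\theta## is ##\max_i x_i## and, by invariance, the MLE of the mean is ##\max_i x_i / 2##, which is generally not the sample mean.

```python
# Uniform(0, theta): the likelihood is theta^(-n) on theta >= max(x),
# so the MLE of theta is max(x), and by invariance the MLE of the
# mean theta/2 is max(x)/2, which is generally not the sample mean.
import numpy as np

rng = np.random.default_rng(1)
theta = 10.0
x = rng.uniform(0.0, theta, size=20)

mle_of_mean = x.max() / 2.0
sample_mean = x.mean()
print(mle_of_mean, sample_mean)   # typically different numbers
```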
 
  • #3
Consider a family of discrete distributions indexed by an integer parameter ##N##, with ##P(X = N) = 0.5## and ##P(X = N + 1) = 0.5##.

Suppose we take 3 independent samples from such a distribution and get {2,2,2}.
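
Working that example out numerically (a small sketch, assuming NumPy): the mean of this distribution is ##N + 0.5##, and for the sample ##\{2, 2, 2\}## both ##N = 1## and ##N = 2## maximize the likelihood, so the MLE of the mean is either ##1.5## or ##2.5##, never the sample mean ##2##.

```python
# Likelihood of the sample {2, 2, 2} under the two-point family
# P(X = N) = P(X = N + 1) = 0.5, evaluated at candidate values of N.
import numpy as np

sample = [2, 2, 2]

def pmf(x, N):
    return 0.5 if x in (N, N + 1) else 0.0

for N in range(0, 4):
    likelihood = np.prod([pmf(x, N) for x in sample])
    print(N, N + 0.5, likelihood)   # candidate N, its mean N + 0.5, likelihood

# N = 1 and N = 2 both attain the maximum likelihood 0.125, so the MLE
# of the mean is 1.5 or 2.5, never the sample mean 2.
```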
 
  • #4
Bipolarity said:
It turns out that for Gaussian, Poisson, and Bernoulli distributions, the MLE estimator for the mean equals the sample mean. I was curious if this is the case for ALL distributions?
You have noticed something special about so-called "exponential families" (https://en.wikipedia.org/wiki/Exponential_family). Many famous families of distributions are exponential families, but there are also plenty of famous families which aren't.
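
A sketch of why this happens, for the simplest case (a canonical one-parameter exponential family with sufficient statistic ##T(x) = x##, which covers the Gaussian with known variance, the Poisson, and the Bernoulli in suitable parametrizations): write ##p(x; \eta) = h(x)\, e^{\eta x - A(\eta)}##. The log-likelihood of the sample is ##\sum_i \log h(x_i) + \eta \sum_i x_i - n A(\eta)##, and setting its derivative in ##\eta## to zero gives ##A'(\hat{\eta}) = \bar{x}##. Since ##A'(\eta) = E_\eta[X]##, the MLE is exactly the parameter value at which the model mean equals the sample mean.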
 
  • #5
Bipolarity said:
I was curious if this is the case for ALL distributions?
Consider the distribution with pdf given by ##\frac 1 {\pi(1+x^2)}## for ##x \in \mathbb R##. This is the Cauchy distribution. Given a finite sample drawn from this distribution, you certainly can calculate ##\frac{\sum x_i} n##, but this has no meaning because this distribution does not have a mean. This is a pathological distribution. The mean and variance are undefined (do the integrals).
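
A quick numerical way to see the problem (a sketch, assuming NumPy; seed and sample size are arbitrary): the running average of standard Cauchy draws never settles down, because the law of large numbers does not apply when the mean does not exist.

```python
# Running averages of standard Cauchy draws: with no finite mean,
# the law of large numbers fails and the average never settles down.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_cauchy(100_000)
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)

for n in (100, 1_000, 10_000, 100_000):
    print(n, running_mean[n - 1])   # keeps jumping around instead of converging
```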
 
  • #6
My example would be the Laplace distribution, aka the double exponential (warning: there are other distributions with the same name), with pdf ##\frac{1}{2} e^{-|x - \mu|}##. The mean is well-defined and equals ##\mu##. The MLE based on a sample of size ##n## is the sample median (the middle observation if ##n## is odd, and anything between the two middle observations if ##n## is even).
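
A minimal numerical check (a sketch, assuming NumPy and SciPy; the seed, location, and sample size are arbitrary): minimizing the Laplace negative log-likelihood, which up to a constant is ##\sum_i |x_i - \mu|##, recovers the sample median rather than the sample mean.

```python
# Laplace(mu, 1): minimize the negative log-likelihood, which up to
# the constant n*log(2) equals sum_i |x_i - mu|, and compare the
# minimizer with the sample median and the sample mean.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
x = rng.laplace(loc=1.5, scale=1.0, size=101)   # odd n, so the median is unique

def neg_log_likelihood(mu):
    return np.sum(np.abs(x - mu))   # additive constant dropped

mle = minimize_scalar(neg_log_likelihood,
                      bounds=(x.min(), x.max()), method="bounded").x
print(mle, np.median(x), x.mean())   # MLE matches the median, not the mean
```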

To bring the Cauchy distribution into the story, we should make it a one-parameter family with pdf proportional to ##\frac{1}{1 + (x - \mu)^2}##. Now we have a family of distributions indexed by ##\mu##. The parameter ##\mu## is the centre of symmetry of these distributions, but indeed they have no expectation value (nor a variance). Still, the MLE of ##\mu##, based on a sample of size ##n##, is for large ##n## about the best you can possibly do. One does have to watch out for local maxima: there is a theorem that for large ##n## there will be one "good" global maximum of the likelihood, plus a Poisson(1)-distributed number of "bad" local maxima.
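
Here is a small sketch (assuming NumPy; seed, sample size, and grid resolution are arbitrary choices) of what maximizing the Cauchy likelihood looks like in practice: scan the log-likelihood over a grid of candidate ##\mu## values, which also makes any secondary local maxima visible, and compare the maximizer with the sample median and the sample mean.

```python
# Cauchy(mu, 1) location family: scan the log-likelihood over a grid
# of candidate mu values (which also exposes any local maxima) and
# compare the maximizer with the sample median and the sample mean.
import numpy as np

rng = np.random.default_rng(4)
mu_true = 0.0
x = mu_true + rng.standard_cauchy(25)

def log_likelihood(mu):
    return -np.sum(np.log1p((x - mu) ** 2))   # up to an additive constant

grid = np.linspace(x.min(), x.max(), 2001)
values = np.array([log_likelihood(m) for m in grid])
mu_hat = grid[values.argmax()]

print(mu_hat, np.median(x), x.mean())
# mu_hat and the median are usually near mu_true; the sample mean can be
# far off because Cauchy samples often contain huge outliers.
```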
 

Related to MLE estimator for mean always equal to the mean?

1. What is the MLE estimator for the mean and how does it work?

The maximum likelihood estimator (MLE) of the mean is a way of estimating the population mean from a sample: it is the value of the mean parameter that maximizes the likelihood, i.e. the joint density of the observed data viewed as a function of that parameter.

2. Is the MLE estimator for the mean always equal to the true mean?

No. The MLE is computed from a random sample, so it varies from sample to sample and will generally differ from the true mean because of sampling variability. Under standard regularity conditions it is consistent, meaning it converges to the true mean as the sample size grows.

3. Can the MLE estimator for the mean be used for any type of data?

It can be used whenever the data can reasonably be modelled by a known parametric family of distributions. If no such family is plausible, nonparametric or moment-based methods may be more appropriate.

4. How does the MLE estimator for the mean compare to other methods of estimating the mean?

Under regularity conditions the MLE is consistent and asymptotically efficient, and it is always a function of a sufficient statistic when one exists. It is not unbiased in general, though its bias typically shrinks as the sample size grows, and as this thread shows it need not coincide with the sample mean.

5. Can the MLE estimator for the mean be used for small sample sizes?

The MLE can be computed for any sample size, but its strongest guarantees (consistency, asymptotic efficiency, approximate normality) are large-sample results. With small samples it can be noticeably biased or unstable, so bias corrections or alternative estimators are sometimes preferred.
