Proof involving means in continuous distributions

In summary, the mean value of a continuous variable is situated at a point that acts as a fulcrum about which the rest of the distribution balances like a set of "weights".
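In equation form, the balance follows directly from the definition of the mean (a one-line derivation, assuming only that the mean exists): since [tex]\int^{\infty}_{-\infty} (x-\mu)\, \rho(x)\, dx = \mu - \mu = 0,[/tex] splitting the integral at [itex]\mu[/itex] gives [tex]\int^{\mu}_{-\infty} (\mu - x)\, \rho(x)\, dx = \int^{\infty}_{\mu} (x-\mu)\, \rho(x)\, dx,[/tex] i.e. the total weight times lever arm is the same on both sides of the mean.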
  • #1
Bipolarity
I recall reading somewhere that the mean value of a continuous variable is situated at a point that acts as a fulcrum about which all other values are considered "weights".

In other words, if we define the mean as

[tex] \mu = \int^{\infty}_{-\infty} x \rho(x)\, dx [/tex] (where [itex]\rho[/itex] is the probability density)

then can we prove that

[tex] \int^{\infty}_{\mu} |x-\mu| \rho(x)\, dx = \int^{\mu}_{-\infty} |x-\mu| \rho(x)\, dx [/tex]

I am not sure my question is very clear, since I don't understand this too well myself, but perhaps someone understands what I mean?
I'm also not sure the equation is even correct, but my memory tells me I did this a while ago, and my gut tells me my memory is not wrong. :D

EDIT: Made a big error with the LaTeX. Just fixed it.

BiP
 
  • #2
Bipolarity said:
I recall reading somewhere that the mean value of a continuous variable is situated at a point that acts as a fulcrum about which all other values are considered "weights".

In other words, if we define the mean as

[tex] \mu = \int^{\infty}_{-\infty} x \rho(x)\, dx [/tex] (where [itex]\rho[/itex] is the probability density)

then can we prove that

[tex] \int^{\infty}_{\mu} |x-\mu| \rho(x)\, dx = \int^{\mu}_{-\infty} |x-\mu| \rho(x)\, dx [/tex]

I am not sure my question is very clear, since I don't understand this too well myself, but perhaps someone understands what I mean?
I'm also not sure the equation is even correct, but my memory tells me I did this a while ago, and my gut tells me my memory is not wrong. :D

EDIT: Made a big error with the LaTeX. Just fixed it.

BiP

Hey Bipolarity.

Basically, the fulcrum argument says that if you look at the mean equation, the contributions from the two sides balance: [tex]\int^{\infty}_{\mu} x p(x)\, dx = \int^{\mu}_{-\infty} x p(x)\, dx[/tex]

Now in terms of proving this, we can use the property that:

[tex]\int^{\infty}_{-\infty} x p(x)\, dx = \mu[/tex]. Now we know that [tex]\int^{\infty}_{\mu} x p(x)\, dx + \int^{\mu}_{-\infty} x p(x)\, dx = \int^{\infty}_{-\infty} x p(x)\, dx = \mu[/tex], so if both are equal, then both must equal [tex]\frac{\mu}{2}[/tex], which is the intuitive explanation of the fulcrum interpretation.

Now in terms of showing this as a general property, I'm afraid that you will need to do it as far as I know for each PDF individually, since p(x) is a general probability density function.

What you can do for certain types of PDFs, like symmetric even functions, is show that this property holds by using the properties of those general classes; but other than that, to prove it for a general PDF you will need to plug in the function and prove it.

One approach that I think would be optimal for the general case is to standardize your random variable to have mean 0 and a symmetric distribution about that mean, which would prove it for the general symmetric distribution; but the thing is, many distributions are not symmetric under a normal standardization.
 
  • #3
After reading your post, it would make sense for you to standardize your random variable to mean 0 using the simple transformation [tex]Y = X - \mu[/tex].

You can then move on to prove the identity [tex]\int^{0}_{-\infty} y p(y)\, dy = \int^{\infty}_{0} y p(y)\, dy[/tex], which removes [itex]\mu[/itex] completely from the proof.

So with regard to your original expression, you need to lose the absolute value signs, and you need to use a random variable transformation (as opposed to an ordinary integral substitution) to get the above result.
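Concretely, the substitution acts like this on the piece to the right of the mean (writing p for the density of Y, so [itex]p(y) = \rho(y + \mu)[/itex]): [tex]\int^{\infty}_{\mu} (x-\mu)\, \rho(x)\, dx = \int^{\infty}_{0} y\, p(y)\, dy[/tex] and similarly for the piece to the left.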
 
  • #4
chiro said:
After reading your post, it would make sense for you to standardize your random variable to mean 0 using the simple transformation [tex]Y = X - \mu[/tex].

You can then move on to prove the identity [tex]\int^{0}_{-\infty} y p(y)\, dy = \int^{\infty}_{0} y p(y)\, dy[/tex], which removes [itex]\mu[/itex] completely from the proof.

So with regard to your original expression, you need to lose the absolute value signs, and you need to use a random variable transformation (as opposed to an ordinary integral substitution) to get the above result.

I see. Thanks chiro!

By the way, I think you mean [tex]\int^{0}_{-\infty} y p(y)\, dy = -\int^{\infty}_{0} y p(y)\, dy[/tex].
There needs to be a negative sign because, if the PDF is standardized, one side of the fulcrum must have negative values. Or you could put absolute values around the y?

Oh, and another thing: suppose the probability distribution is not symmetric. Does the property still hold?

BiP
 
  • #5
Bipolarity said:
I see. Thanks chiro!

By the way, I think you mean [tex]\int^{0}_{-\infty} y p(y)\, dy = -\int^{\infty}_{0} y p(y)\, dy[/tex].
There needs to be a negative sign because, if the PDF is standardized, one side of the fulcrum must have negative values. Or you could put absolute values around the y?

Oh, and another thing: suppose the probability distribution is not symmetric. Does the property still hold?

BiP

No, that's not correct.

Remember, both are the same value: you get the same quantity on the left as on the right, which means they are equal, not negatives of each other.

As an example, if you have a fulcrum and two equal weights, they are both, say, 2, not 2 and -2. Remember, they are balanced just like the fulcrum, and this is expressed with an equality.
 
  • #6
chiro said:
No, that's not correct.

Remember, both are the same value: you get the same quantity on the left as on the right, which means they are equal, not negatives of each other.

As an example, if you have a fulcrum and two equal weights, they are both, say, 2, not 2 and -2. Remember, they are balanced just like the fulcrum, and this is expressed with an equality.

Actually chiro, I would have to disagree. If the PDF is [itex]\rho(x) \propto e^{-x^{2}}[/itex], then the left wing of the fulcrum will be the negative of the right wing, so you need to negate one of them to get the other.

Also, I think the equality in my original post should hold even if the probability distribution is not symmetric, as long as the mean is defined. I would like to prove this for non-symmetric probability distributions, but I don't know how to.
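As a quick check, writing [itex]\varphi(y) = \frac{1}{\sqrt{2\pi}}e^{-y^{2}/2}[/itex] for the standard normal density: [tex]\int^{0}_{-\infty} y \varphi(y)\, dy = -\frac{1}{\sqrt{2\pi}}, \qquad \int^{\infty}_{0} y \varphi(y)\, dy = +\frac{1}{\sqrt{2\pi}}[/tex] so the two halves really are negatives of each other.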

BiP
 
  • #7
Bipolarity said:
Actually chiro, I would have to disagree. If the PDF is [itex]\rho(x) \propto e^{-x^{2}}[/itex], then the left wing of the fulcrum will be the negative of the right wing, so you need to negate one of them to get the other.

Also, I think the equality in my original post should hold even if the probability distribution is not symmetric, as long as the mean is defined. I would like to prove this for non-symmetric probability distributions, but I don't know how to.

BiP

Yeah, you are right: even if the PDF is symmetric, you end up with a non-symmetric function after multiplying by x, when x can be both positive and negative. This also means I might be wrong above about the relationship with [tex]\frac{\mu}{2}[/tex], which will only hold if x is always positive. If x can be negative, then you need to deal with a more general case.

In terms of non-symmetric distributions, I'm not sure how to prove the general case either, because unless the function has a property like being symmetric, it's really unconstrained and too general to work with.

The only way I can think of is to transform the PDF into something symmetric by introducing a transformation of the random variable, and then make some kind of argument relating the transformed variable and transformed mean back to the original variable and original mean.

I don't know if the above would work, but it's the only thing I can think of currently.
 
  • #8
chiro said:
Yeah, you are right: even if the PDF is symmetric, you end up with a non-symmetric function after multiplying by x, when x can be both positive and negative. This also means I might be wrong above about the relationship with [tex]\frac{\mu}{2}[/tex], which will only hold if x is always positive. If x can be negative, then you need to deal with a more general case.

In terms of non-symmetric distributions, I'm not sure how to prove the general case either, because unless the function has a property like being symmetric, it's really unconstrained and too general to work with.

The only way I can think of is to transform the PDF into something symmetric by introducing a transformation of the random variable, and then make some kind of argument relating the transformed variable and transformed mean back to the original variable and original mean.

I don't know if the above would work, but it's the only thing I can think of currently.

Actually, I think it is true even if x is negative. I have been doing some calculations in Maple with various probability distributions, and the equation I posted in the OP appears to hold no matter what the probability distribution is, as long as the mean for that distribution is defined, i.e. as long as
[tex] \int^{\infty}_{-\infty} x \rho(x)\, dx[/tex] is finite. So the "idea" of the mean being the fulcrum must be true; I can only say this out of intuition, however.
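To give an idea of the kind of check I mean, here is a quick numerical version (a sketch in Python rather than Maple, assuming SciPy's quad integrator is available; the skewed density [itex]\rho(x) = 2x[/itex] on (0,1) is just an arbitrary asymmetric example):

[code]
# Numerical check of the fulcrum identity for an asymmetric PDF.
from scipy.integrate import quad

rho = lambda x: 2 * x                     # skewed PDF on (0, 1); integrates to 1
mu, _ = quad(lambda x: x * rho(x), 0, 1)  # mean: comes out to 2/3

# |x - mu| splits into (mu - x) left of the mean and (x - mu) right of it.
left, _ = quad(lambda x: (mu - x) * rho(x), 0, mu)
right, _ = quad(lambda x: (x - mu) * rho(x), mu, 1)

print(mu, left, right)  # left == right == 8/81, approximately 0.0988
[/code]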

However, I myself do not know that much about probability distributions. Could you recommend some PDFs that I can perform calculations on, just to see whether or not it's correct?

EDIT: If you look at the equation in my OP, the absolute value is designed to account for negative x values left of the mean.

BiP
 
  • #9
Bipolarity said:

Actually, I think it is true even if x is negative. I have been doing some calculations in Maple with various probability distributions, and the equation I posted in the OP appears to hold no matter what the probability distribution is, as long as the mean for that distribution is defined, i.e. as long as
[tex] \int^{\infty}_{-\infty} x \rho(x)\, dx[/tex] is finite. So the "idea" of the mean being the fulcrum must be true; I can only say this out of intuition, however.

However, I myself do not know that much about probability distributions. Could you recommend some PDFs that I can perform calculations on, just to see whether or not it's correct?

EDIT: If you look at the equation in my OP, the absolute value is designed to account for negative x values left of the mean.

BiP

No, the negative property doesn't hold when x is always positive, because p(x) is always positive, so xp(x) is positive for all x, which makes both the LHS and RHS integrals positive. Again, think about having 2 and dividing it into 1 on the LHS and 1 on the RHS, because the integral for both parts is positive. It's when you have negative x that you need to consider the negative case you mentioned (like the normal distribution).

In terms of distributions, I'd try it on the exponential, normal, and uniform for starters: basically anything you can integrate out in the mean expression to get an analytic answer, so that you can check it symbolically. Set up the integral and see what you get.
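For instance, a symbolic version of that check for the exponential might look like this (a sketch, assuming SymPy is available; a is the rate parameter):

[code]
# Symbolic check of the fulcrum identity for the exponential distribution.
import sympy as sp

x, a = sp.symbols('x a', positive=True)
rho = a * sp.exp(-a * x)          # exponential PDF on x > 0

mu = sp.integrate(x * rho, (x, 0, sp.oo))             # mean: simplifies to 1/a
left = sp.integrate((mu - x) * rho, (x, 0, mu))       # weight left of the mean
right = sp.integrate((x - mu) * rho, (x, mu, sp.oo))  # weight right of the mean

print(sp.simplify(left - right))  # prints 0: both sides equal exp(-1)/a
[/code]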
 
  • #10
chiro said:
No, the negative property doesn't hold when x is always positive, because p(x) is always positive, so xp(x) is positive for all x, which makes both the LHS and RHS integrals positive. Again, think about having 2 and dividing it into 1 on the LHS and 1 on the RHS, because the integral for both parts is positive. It's when you have negative x that you need to consider the negative case you mentioned (like the normal distribution).

In terms of distributions, I'd try it on the exponential, normal, and uniform for starters: basically anything you can integrate out in the mean expression to get an analytic answer, so that you can check it symbolically. Set up the integral and see what you get.

I actually do not know the PDFs for those distributions myself; could you list the actual functions?

BiP
 
  • #11
Bipolarity said:
I actually do not know the PDFs for those distributions myself; could you list the actual functions?

BiP

Google is your friend ;), but for these distributions you have:

[itex]U(x) = \frac{1}{b-a}[/itex] for [itex]a < x < b[/itex] (with [itex]b > a[/itex]), which is the uniform distribution.
[itex]E(x) = a e^{-ax}[/itex] for [itex]x > 0[/itex] and [itex]a > 0[/itex], which is the exponential distribution.
[itex]N(x) = \frac{1}{\sqrt{2\pi}} e^{-x^{2}/2}[/itex], which is the standard normal [itex]N(0,1)[/itex].
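
As a quick worked example with the uniform case: its mean is [itex]\mu = \frac{a+b}{2}[/itex], and each side of your original identity comes out to the same value, [tex]\int^{\mu}_{a} (\mu - x)\frac{1}{b-a}\, dx = \int^{b}_{\mu} (x - \mu)\frac{1}{b-a}\, dx = \frac{b-a}{8}[/tex]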
 

Related to Proof involving means in continuous distributions

1. What is a continuous distribution?

A continuous distribution is a type of probability distribution that describes the probability of a continuous random variable falling within a given range of values. It is characterized by a smooth, unbroken curve, and its probability density function (PDF) is defined as the derivative of its cumulative distribution function (CDF).
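In symbols: if [itex]F(x) = P(X \le x)[/itex] is the CDF, the PDF is [itex]f(x) = F'(x)[/itex], and probabilities are recovered as [itex]P(a \le X \le b) = \int^{b}_{a} f(x)\, dx[/itex].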

2. What is the mean in a continuous distribution?

The mean, also known as the expected value, is a measure of central tendency in a continuous distribution. It represents the balance point of the distribution and is calculated by integrating each value weighted by its probability density: [itex]\mu = \int^{\infty}_{-\infty} x \rho(x)\, dx[/itex].

3. How is the mean used in continuous distributions?

The mean is used in continuous distributions to summarize the data by providing a single value that represents the central tendency of the distribution. It is also used in various statistical tests and analyses to make inferences about the population from which the data is drawn.

4. What is the proof involving means in continuous distributions?

The proof involving means in continuous distributions is a mathematical demonstration that the mean is the balance point of the distribution: the total probability-weighted deviation below the mean equals the total probability-weighted deviation above it, [itex]\int^{\mu}_{-\infty} (\mu - x)\rho(x)\, dx = \int^{\infty}_{\mu} (x - \mu)\rho(x)\, dx[/itex].

5. Why is understanding proof involving means in continuous distributions important?

Understanding proof involving means in continuous distributions is important because it allows us to make accurate and meaningful interpretations of data and to draw valid conclusions about the population. It also helps us to choose the appropriate statistical techniques and models for analyzing continuous data.
