How cosmologists determine universe flatness

  • Thread starter Buzz Bloom
  • #1
Buzz Bloom
Gold Member
What criteria are, or reasonably might be, used by cosmologists to decide whether or not the assumption that curvature equals zero produces a better cosmological model than one with a non-zero curvature?

I read (as best as I could) Section 6.2.4. Curvature, pp 37-39 of
http://planck.caltech.edu/pub/2015results/Planck_2015_Results_XIII_Cosmological_Parameters.pdf .
The following is a summary of what I interpreted Section 6.2.4 to be saying.

Several cosmological models were discussed, all of which seem to have the following parameters: H0, Ωm, ΩΛ, and Ωk. For some of the datasets there may also have been additional model parameters that I did not understand. A variety of combined data sets were used to create the various models. I did not understand the labels used to describe the combinations of datasets:
1. Planck TT+lowP posterior (Figure 25)
2. Planck TT,TE,EE+lowP (Figure 26)
3. Planck TT,TE,EE+lowP+lensing (Figure 26)
4. Planck TT,TE,EE+lowP+lensing+BAO (Figure 26)​
The text then gives the following respective values for Ωk, each with a 2 sigma, 95% confidence level error range.
1. (Equation 47): Ωk = -0.053 (+0.049, -0.055)
2. (Equation 48): Ωk = -0.040 (+0.038, -0.041)
3. (Equation 49): Ωk = +0.005 (+0.016, -0.017)
4. (Equation 50): Ωk = 0.000 +/- 0.005
The text adds, “We adopt Eq. (50) as our most reliable constraint on spatial curvature. Our universe appears to be spatially flat to an accuracy of 0.5%.”

As best as I can tell, the text seems to be saying that the choice of Equation 50 is based on the fact that its error range is the smallest. I would appreciate it if someone could say authoritatively whether or not this is the case. If it is, I have some concerns about the protocol.
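Purely for illustration, here is a small Python check using the four constraints quoted above (the central values and asymmetric 95% bounds are copied from the post; nothing else is from the paper) of whether Ωk = 0 lies inside each quoted interval:

```python
# Illustrative check: is Omega_k = 0 inside each quoted 95% interval?
# Values are copied from the constraints listed above (Planck 2015, Eqs. 47-50);
# the asymmetric upper/lower bounds are kept separate.
constraints = {
    "TT+lowP":                   (-0.053, +0.049, -0.055),
    "TT,TE,EE+lowP":             (-0.040, +0.038, -0.041),
    "TT,TE,EE+lowP+lensing":     (+0.005, +0.016, -0.017),
    "TT,TE,EE+lowP+lensing+BAO": (+0.000, +0.005, -0.005),
}

for label, (central, up, down) in constraints.items():
    lo, hi = central + down, central + up           # quoted 95% range
    flat_ok = lo <= 0.0 <= hi
    print(f"{label:28s} Omega_k in [{lo:+.3f}, {hi:+.3f}]  flat allowed: {flat_ok}")
```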

In my work for several years before I retired, I was involved in the development of data mining software, and I became aware of the phenomenon of over-fitting a model to a dataset. This means continuing to “improve” a model's figure of merit past the optimum point of the model's ability to make good predictions on data that was not used to build the model. The idea is that although the figure of merit improved, what was actually being modeled were the specifics of the training dataset, including any statistical anomalies or outliers, rather than the general characteristics of the population from which the dataset was a sample.

A protocol commonly used to avoid over-fitting is to divide the dataset into two subsets. One subset is used to build the model, and the second is used to determine how good a predictor that model is. I am unable to tell whether such a protocol was used in developing the models described in the article.
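As a toy sketch of that hold-out protocol (synthetic data and polynomial "models" chosen only for illustration; this has nothing to do with the Planck pipeline), one might write:

```python
# Toy illustration of hold-out validation: fit on one half of the data,
# test on the other half, and watch the held-out error rise once the
# model starts fitting the noise rather than the underlying trend.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, x.size)   # noisy "observations"

train, valid = np.arange(0, 20, 2), np.arange(1, 20, 2)      # half to fit, half to test

for degree in (1, 3, 5, 9):
    coeffs = np.polyfit(x[train], y[train], degree)           # model built on training half only
    rms_train = np.sqrt(np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2))
    rms_valid = np.sqrt(np.mean((np.polyval(coeffs, x[valid]) - y[valid]) ** 2))
    print(f"degree {degree}: train RMS {rms_train:.3f}, held-out RMS {rms_valid:.3f}")
```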
 
  • #2
That was selected because that choice includes the most data. To get an accurate measurement of spatial flatness, it's best to combine data over a very large range of distances, so data which combines the extremely far-away CMB with relatively nearby data is best. The "TT,TE,EE" spectra are the far-away CMB data set, lowP is Planck's own low-multipole polarization data, and lensing and BAO are different ways of measuring the distribution of structure in the (relatively) nearby universe.

Overfitting is an issue that arises when you have too many parameters, which becomes very easy to do if you keep adding them. What they're doing here is the opposite: they're adding additional data to constrain the same number of parameters. When your model is a good one, you expect the addition of new data to reduce the errors.
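A toy illustration of that last point (the numbers are made up, not Planck values): combining independent Gaussian constraints on a single parameter by inverse-variance weighting always shrinks the error, with no new parameters added.

```python
# Made-up numbers: each tuple is (measured value, 1-sigma error) for the
# same parameter from an independent dataset. Combining them tightens
# the constraint without changing the number of model parameters.
import math

def combine(measurements):
    """Inverse-variance weighted mean and its 1-sigma error."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * value for (value, _), w in zip(measurements, weights)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

data = [(-0.02, 0.03), (0.01, 0.02), (0.00, 0.01)]   # illustrative only
for n in range(1, len(data) + 1):
    mean, sigma = combine(data[:n])
    print(f"using {n} dataset(s): {mean:+.4f} +/- {sigma:.4f}")
```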
 
  • #3
Hi @Chalnoth:

Thank you for your explanation. I understand that most issues regarding overfitting involve a large number of parameters and an iterative process for adjusting parameter values, but overfitting can also occur with a relatively small number of parameters if the process of building a series of models involves adding parameters.

The earlier models were built to fit observations of brightness vs. red-shift. The later models involve fitting other observations that I confess I don't understand. How certain are you that, when fitting a model to this different data from CMB observations and lensing observations, additional parameters were not needed so that apples could be fitted along with oranges?

Regards,
Buzz
 
  • #4
Buzz Bloom said:
How certain are you that, when fitting a model to this different data from CMB observations and lensing observations, additional parameters were not needed so that apples could be fitted along with oranges?
Pretty confident. One of the nice things about working with large-scale astrophysics observations is that there are generally independent ways of checking the data. For example, with the CMB, if you get the calibration of the detectors off, you'll tend to have huge obvious stripes in the data that run along the path that the telescope traces as it scans the sky. If you get the average value of the calibration off, then some methods to extract the foregrounds from the CMB will utterly fail. Then you can check the result against the results of other experiments.

Data mining usually deals with far, far more complicated systems (human systems), where simple models of the system can never be accurate. It becomes much harder to separate assumption from measurement, since changes in some assumptions can change the data pretty dramatically.

With large-scale astrophysics data, for the most part assumptions only change the data by about one standard deviation or so, which isn't enough to change the overall meaning (since we expect one-standard-deviation differences anyway).

This breaks down when we start looking at smaller-scale data, such as galaxies and galaxy clusters. The physics there gets complicated enough that there are pretty big uncertainties due to the assumptions made (typically approximations performed to make the calculations tractable). But at large scales, all those niggling details average out and we get a very clean result that is highly independent of these kinds of assumptions.
 
  • #5
Chalnoth said:
Pretty confident
Hi @Chalnoth:

Thank you very much for your thoughtful and informative answers to my questions.

Regards,
Buzz
 

Related to How cosmologists determine universe flatness

1. How do cosmologists determine if the universe is flat?

Cosmologists determine the flatness of the universe by measuring its total density of matter and energy, through observations of the cosmic microwave background radiation and the large-scale distribution of galaxies. If the universe has exactly the critical density of matter and energy, it is spatially flat.
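As a rough sketch of the bookkeeping behind "critical density means flat" (round, illustrative numbers, not a quoted measurement), the curvature parameter is simply whatever is left over after the other density parameters are summed:

```python
# Omega_k = 1 - (Omega_matter + Omega_lambda + Omega_radiation)
# Positive Omega_k corresponds to an open universe, negative to a closed one.
# The values below are round, illustrative numbers only.
omega_m, omega_lambda, omega_r = 0.31, 0.69, 9e-5

omega_k = 1.0 - (omega_m + omega_lambda + omega_r)
print(f"Omega_k = {omega_k:+.5f}")
print("flat" if abs(omega_k) < 0.005 else ("open" if omega_k > 0 else "closed"))
```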

2. What evidence supports the idea of a flat universe?

One of the main pieces of evidence for a flat universe is the observed pattern of the cosmic microwave background radiation: the angular size of its acoustic features matches the prediction for a spatially flat geometry. Additionally, measurements of the large-scale structure of the universe also support the idea of a flat universe.

3. What does it mean for the universe to be flat?

In terms of cosmology, a flat universe refers to a universe that has a critical density of matter and energy, meaning that it is neither positively curved (closed) nor negatively curved (open). It is considered to be a "middle ground" between these two types of universes.

4. How does the concept of dark energy affect the determination of universe flatness?

Dark energy, which is a mysterious force that is thought to be responsible for the accelerating expansion of the universe, plays a crucial role in the determination of universe flatness. This is because the amount of dark energy present in the universe affects its overall density and curvature, and therefore has an impact on whether the universe is flat or not.

5. Are there any competing theories to the idea of a flat universe?

There are alternative theories to a flat universe, such as the idea of a closed or open universe. However, the current observational evidence and measurements strongly support the concept of a flat universe. This is also supported by the theory of cosmic inflation, which predicts a flat universe as a result of the rapid expansion in the early universe.
