Marginal evidence for cosmic acceleration from Type Ia SNe

In summary, a recent paper posted on arXiv questions the empirical basis for the acceleration of the expansion of the universe. The authors, Jeppe Trøst Nielsen, Alberto Guffanti, and Subir Sarkar, argue that their analysis shows only marginal evidence for the widely accepted claim of cosmic acceleration. Others counter that their method does not take into account the results of other major cosmological tests, such as the CMB and BAO data, and that their proposed alternative, a linearly expanding model, has not been extensively studied or tested with comparable statistical rigor.
  • #36
Garth said:
Absolutely wabbit, there is acceleration relative to the vanilla pre-1998, totally non-DE, decelerating model, but the question is whether it is sufficient to produce the standard [itex]\Lambda[/itex]CDM model, or something less - producing a linear or near-linearly expanding one?

The way I read the text and Fig. 3, they seem to be saying it is more consistent with the Milne model (which has less acceleration, but hyperbolic space gives nearly the same luminosity distance at any z), but I need to understand the statistical analysis of the probability densities better to make a statistical comparison of the two models.

Garth

Just to clarify, I wasn't comparing to non-DE but to their non-accelerating model - I have been referring exclusively to the content of the article.

To me, Fig. 3 is by far the least informative - given the large noise, I cannot discern a best fit there by visual inspection. So I was basing my reading mainly on Fig. 2, which shows the no-acceleration line lying at the edge of the likelihood contour, and on Table I, which gives the log-likelihoods of various models, including the unaccelerated one, compared with the best fit and, close behind it, the best flat fit, which is LCDM-ish. They do not list LCDM with reference parameters in that table, though; I'm not sure why.

I can't say I find their exposition particularly clear, and I don't know all these models well, so maybe I misunderstood the nature of that table or what they claim.
 
  • #37
Garth said:
Right Chalnoth, but here we are dealing with the SNe 1a data.
I.e., cherry picking. It makes no sense to say, "But this other model fits the data too!" while leaving out that it's only a small subset of the full variety of cosmological data that exists - especially when the broader data doesn't come anywhere close to fitting the model.

Just for a rough estimate of how bad this is, the Union compilation of SN1a data contains data from a little over 800 supernovae. That's a little over 800 data points relating distance to redshift, each with pretty big error bars individually.

The Planck observations, by contrast, measure the CMB power spectrum out to ##\ell \approx 1800## (depending upon your error cutoff). Each ##C_\ell## is drawn from ##2\ell + 1## components, so that the total number of components up to a given ##\ell## is roughly ##\ell^2##. Planck isn't quite able to measure the full sky: they use a mask that retains about 73% of the sky area, which reduces the number of independent components. So the total number of observables measured by Planck is somewhere in the general range of ##1800^2 \times 0.73 \approx 2.4 \times 10^6##. This is a rich, complex data set, and the physics at work in the emission of the CMB is much simpler and cleaner than for supernovae, leading to lower systematic errors.
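As a rough sanity check of that estimate, here is the back-of-the-envelope mode count in code (treating the 73% sky fraction as a simple multiplier is itself an approximation):

[code]
# Rough count of the independent CMB modes Planck measures, following the
# estimate above: each multipole l contributes 2l + 1 modes, and the mask
# keeps roughly 73% of the sky.
l_max = 1800
f_sky = 0.73

n_full_sky = sum(2 * l + 1 for l in range(2, l_max + 1))  # ~ l_max^2 ~ 3.2e6
n_masked = f_sky * n_full_sky                             # ~ 2.4e6

print(f"full sky: {n_full_sky:.2e}, after masking: {n_masked:.2e}")
[/code]

Compare that with the roughly 800 supernova data points mentioned above.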

Because of this, any time I see somebody proposing a new cosmological model, if they don't even try to explain the CMB data, then there is just no reason to lend that model any credence whatsoever. In this case there's the additional problem that it flies in the face of our entire understanding of gravity.
 
  • #38
I agree Chalnoth about the robustness and precision of the CMB data.

There is the question of the priors adopted to interpret the CMB data - particularly the immensely flexible theory of inflation, whose many free parameters have been finely tuned to fit the power spectrum, and which may be adjusted further either way to fit the evidence concerning the presence, or absence, of the primordial gravitational waves that were erroneously thought to be present in the BICEP2 data.

However, the main question possibly raised by this paper is: "has DE evolved since the z = 1100, or earlier, era?"

Garth.
 
Last edited:
  • #39
Garth said:
However, the main question possibly raised by this paper is: "has DE evolved since the z = 1100, or earlier, era?"
There have been multiple investigations into whether or not dark energy has changed over time, and so far no evidence of any deviation from a cosmological constant.

This paper really doesn't raise that question, though. It's just putting up an unphysical model that, due to the fact that the cosmological constant and matter are close in magnitude, sort of kinda looks like it also fits the data (except it doesn't).
 
  • #40
Chalnoth said:
There have been multiple investigations into whether or not dark energy has changed over time, and so far no evidence of any deviation from a cosmological constant.
You seem to be very sure of that: Model independent evidence for dark energy evolution from Baryon Acoustic Oscillations.
Our results indicate that the SDSS DR11 measurement of H(z)=222±7 km/sec/Mpc at z=2.34, when taken in tandem with measurements of H(z) at lower redshifts, imply considerable tension with the standard ΛCDM model.

Furthermore: IS THERE EVIDENCE FOR DARK ENERGY EVOLUTION? (The Astrophysical Journal Letters Volume 803 Number 2) (http://arxiv.org/abs/1503.04923)
Recently, Sahni et al. combined two independent measurements of H(z) from BAO data with the value of the Hubble constant ##H_0## in order to test the cosmological constant hypothesis by means of an improved version of the Om diagnostic. Their result indicated considerable disagreement between observations and predictions of the Λ cold dark matter (ΛCDM) model. However, such a strong conclusion was based only on three measurements of H(z). This motivated us to repeat similar work on a larger sample. By using a comprehensive data set of 29 H(z), we find that discrepancy indeed exists. Even though the value of ##\Omega_m h^2## inferred from the ##Omh^2## diagnostic depends on the way one chooses to make summary statistics (using either the weighted mean or the median), the persisting discrepancy supports the claims of Sahni et al. that the ΛCDM model may not be the best description of our universe.
Garth
 
Last edited:
  • #41
Garth said:
You seem to be very sure of that: Model independent evidence for dark energy evolution from Baryon Acoustic Oscillations.
It's a 2-sigma detection. Those happen all the time, and are usually spurious. No reason to believe there is anything here (yet).

Garth said:
Furthermore: IS THERE EVIDENCE FOR DARK ENERGY EVOLUTION? (The Astrophysical Journal Letters Volume 803 Number 2)
This paper claims to support the previous paper, but I'm not sure I buy it. If you look at table 2, it looks like there are some significant discrepancies between the different data sets they use. The different subsets of the data don't even agree with one another on the correct ##Omh^2## value to within their errors. In particular, if they take the full data set but subtract only a single data point, the ##Omh^2## differs from the Planck measurement by less than 1-sigma. So the smart money here is on there being something wrong with the ##z=2.34## measurement from the Lyman-alpha forest. This suggests the need for more independent high-redshift data to resolve the issue.
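For concreteness, a minimal sketch of the two-point ##Omh^2## diagnostic being discussed, as I understand its definition from Sahni et al. (treat the exact form as an assumption); for flat ΛCDM it should reduce to ##\Omega_m h^2## whichever pair of redshifts is used:

[code]
# Two-point Omh^2 diagnostic (as I read Sahni et al.):
#   Omh2(z1, z2) = [h(z1)^2 - h(z2)^2] / [(1+z1)^3 - (1+z2)^3],
# with h(z) = H(z) / (100 km/s/Mpc).
# For flat LCDM this equals Omega_m * h^2 for any pair of redshifts.

def omh2(z1, H1, z2, H2):
    h1, h2 = H1 / 100.0, H2 / 100.0
    return (h1**2 - h2**2) / ((1 + z1)**3 - (1 + z2)**3)

# Sanity check against a fiducial flat LCDM (Planck-like values assumed):
Om, h = 0.315, 0.673
H = lambda z: 100 * h * (Om * (1 + z)**3 + 1 - Om)**0.5
print(omh2(2.34, H(2.34), 0.0, H(0.0)))   # ~0.1427 = Om * h^2
print(omh2(0.57, H(0.57), 0.0, H(0.0)))   # same value for any pair
[/code]

If ΛCDM holds, every pair should give (roughly) the same constant, which is how the comparison with the Planck value of ##\Omega_m h^2## enters.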
 
  • #42
So we'll wait and see...

But meanwhile we have the OP paper to discuss.

Garth
 
  • #43
I had a look at the second paper at http://arxiv.org/abs/1503.04923.

Their statistical methodology is strange, and while I have not redone the analysis, I am skeptical here. They basically formulate the LCDM hypothesis neatly as the statement that a certain function of H(z) is constant - but instead of testing this hypothesis directly on their sample using a standard test, they build a non-standard one out of highly correlated two-point comparisons. Are their statistics on this test correct?
 
  • #44
wabbit said:
I had a look at the second paper at http://arxiv.org/abs/1503.04923.

Their statistical methodology is strange, and while I have not redone the analysis, I am skeptical here. They basically formulate the LCDM hypothesis neatly as the statement that a certain function of H(z) is constant - but instead of testing this hypothesis directly on their sample using a standard test, they build a non-standard one out of highly correlated two-point comparisons. Are their statistics on this test correct?
I didn't look at that in detail, but their error bars on different subsets of the data don't come close to intersecting. So either their error bars are wrong or the data has a significant unaccounted-for systematic error.
 
  • #45
Chalnoth said:
I didn't look at that in detail, but their error bars on different subsets of the data don't come close to intersecting. So either their error bars are wrong or the data has a significant unaccounted-for systematic error.
Yes, my concern is with their error analysis. Apart from the choice of two-point comparisons - which for a curve fit is strange, as it mixes noise at a given z with a non-constant trend as a function of z - they do not explain (or maybe I missed it) how they include the error bars of the individual measurements, which should be a key input to the test. Part of the problem with their method is that some points are simply not aligned: this shows up as an outlier relative to any smooth curve, but appears as a series of "bad" two-point comparisons. I think there are much more robust ways to analyse a series of measurements to test a relationship.

Maybe I'll copy their data and redo a different test to see what it gives... Is ##\sigma_H## in the table of ##H(z)## measurements the reported standard error of each data point?
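A minimal sketch of the kind of direct test described above, assuming the table gives independent Gaussian measurements with the quoted ##\sigma_H## as the 1-sigma error (the data arrays below are placeholders, not the paper's 29 points):

[code]
import numpy as np
from scipy import stats

# Direct goodness-of-fit test of a model H(z) against tabulated (z, H, sigma_H),
# assuming independent Gaussian errors. Placeholder data, NOT the paper's table.
z     = np.array([0.07, 0.57, 1.30, 2.34])
H_obs = np.array([69.0, 96.8, 168.0, 222.0])   # km/s/Mpc (illustrative)
sig_H = np.array([19.6, 3.4, 17.0, 7.0])       # 1-sigma errors (illustrative)

def H_lcdm(z, H0=67.3, Om=0.315):
    """Flat LCDM expansion rate in km/s/Mpc."""
    return H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)

chi2 = np.sum(((H_obs - H_lcdm(z)) / sig_H) ** 2)
dof = len(z)                       # no parameters fitted to these points here
p_value = stats.chi2.sf(chi2, dof)
print(f"chi2 = {chi2:.2f} for {dof} points, p = {p_value:.3f}")
[/code]

The same can be repeated for any alternative H(z) to compare global fit quality.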
 
Last edited:
  • #46
wabbit said:
Maybe I'll copy their data and redo a different test to see what it gives... Is ##\sigma_H## in the table of ##H(z)## measurements the reported standard error of each data point?
That's what it looks like to me.

I find it very odd that they're quoting these data points as ##z## vs. ##H(z)##, though. That makes sense for the differential age measurements (DA). But it doesn't make sense for the BAO measurements, which measure distance as a function of redshift (an integral of ##1/H(z)##). I don't think it is sensible to reduce the BAO constraints to a single ##H(z)## at a single redshift.

I'm going to have to read a bit more about the DA approach, though. I hadn't heard of that. Here's one paper I found:
http://arxiv.org/abs/1201.3609

This is potentially very interesting, because when you're measuring only an integral of ##1/H(z)##, the errors on ##H(z)## itself are necessarily going to be significantly larger (taking a derivative amplifies the noise).
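A quick toy illustration of that last point (the noise level, redshift binning and fiducial cosmology below are arbitrary assumptions):

[code]
import numpy as np

# Toy illustration: distance-like observables constrain an integral of 1/H(z);
# recovering H(z) itself requires differentiating, and finite differences of
# noisy data are much noisier than the data themselves.
rng = np.random.default_rng(0)
z = np.linspace(0.0, 2.5, 26)
c = 299792.458                                               # km/s
H_true = 67.3 * np.sqrt(0.315 * (1 + z)**3 + 0.685)          # km/s/Mpc

# Comoving distance D(z) = c * integral of dz'/H(z'), so dD/dz = c / H(z).
# Cumulative trapezoidal integration:
steps = 0.5 * (c / H_true[1:] + c / H_true[:-1]) * np.diff(z)
D_true = np.concatenate(([0.0], np.cumsum(steps)))
D_obs = D_true * (1 + 0.01 * rng.standard_normal(z.size))    # 1% distance errors

H_rec = c / np.gradient(D_obs, z)                            # differentiate back
frac_err = np.abs(H_rec[1:] - H_true[1:]) / H_true[1:]
print(f"median fractional error on recovered H(z): {np.median(frac_err):.1%}")
# typically several times larger than the 1% scatter put into the distances
[/code]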
 
  • #47
Yes, the extraction of these 29 points is weird; I hadn't thought about that. Actually the test of the z-H(z) dependence is already contained in the best fits done in supernova and other studies; one can test on the integrals or distance functions directly. I agree taking the derivative is not going to give better results, and testing the differences between derivatives seems bound to add plenty of noise that a direct test would not suffer.

Edit: thanks for the link to http://arxiv.org/abs/1201.3609 - this looks very cool. Probably more than I can readily digest, but maybe a nibble at a time will do : )
 
Last edited:
  • #48
I would like to point out that H(z) measurements compiled in this last paper also point to a linear expanding universe. So do data reported in:

http://arxiv.org/pdf/1407.5405v1.pdf
  • #49
JuanCasado said:
I would like to point out that H(z) measurements compiled in this last paper also point to a linear expanding universe. So do data reported in:

http://arxiv.org/pdf/1407.5405v1.pdf
Thanks for the link, but can you clarify why you see this paper as supporting linear expansion? The authors do not seem to draw that conclusion, if I read this correctly:
we can conclude that the considered observations of type Ia supernovae [3], BAO (Table V) and the Hubble parameter H(z) (Table VI) confirm effectiveness of the ΛCDM model, but they do not deny other models. The important argument in favor of the ΛCDM model is its small number Np of model parameters (degrees of freedom). This number is part of information criteria of model selection statistics; in particular, the Akaike information criterion is [52] ##AIC = \min\chi^2_\Sigma + 2N_p##. This criterion supports the leading position of the ΛCDM model.
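For what it's worth, the criterion quoted there is straightforward to apply once best-fit ##\chi^2## values are in hand; a small sketch with made-up numbers (not the paper's):

[code]
# Akaike information criterion as quoted above: AIC = chi2_min + 2 * N_p.
# Lower AIC is preferred; an extra free parameter must buy a real chi2 improvement.
def aic(chi2_min, n_params):
    return chi2_min + 2 * n_params

# Illustrative comparison only - these chi2 values are invented:
models = {"LCDM (Om, h)": (26.0, 2), "linear (h only)": (29.0, 1)}
for name, (chi2_min, n_p) in models.items():
    print(f"{name}: AIC = {aic(chi2_min, n_p):.1f}")
[/code]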
 
Last edited:
  • #50
Well a picture is worth a thousand words... (Data plotted from Table 1 of 'Is there evidence for dark energy evolution?') (http://arxiv.org/abs/1503.04923)

[Attached plot: the tabulated H(z) data points versus z, with the linear and ΛCDM model curves overlaid.]


The solid black line is the linearly expanding model plot, the hatched red line is the [itex]\Lambda[/itex]CDM plot, with h0 = 0.673 and [itex]\Omega_m[/itex] = 0.315, the Planck 2014 results.
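For reference, a minimal sketch of the two curves being compared, assuming flat ΛCDM and a strictly linear ##a \propto t## expansion (for which ##H(z) = H_0(1+z)##):

[code]
import numpy as np

# The two model curves compared in the plot, under the stated assumptions:
#   flat LCDM:        H(z) = H0 * sqrt(Om * (1+z)^3 + 1 - Om)
#   linear a(t) ~ t:  H = 1/t, hence H(z) = H0 * (1 + z)
def H_lcdm(z, H0=67.3, Om=0.315):
    return H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)

def H_linear(z, H0=67.3):
    return H0 * (1 + z)

z = np.linspace(0.0, 2.5, 6)
print(np.c_[z, H_lcdm(z), H_linear(z)])   # columns: z, LCDM, linear
[/code]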

Make of it what you will...

(I come from the age of pencil, plotting paper and slide rule - I still have it!)

Garth
 
Last edited:
  • #51
Had a look at that data; it doesn't really distinguish between the models. The LCDM mean square error is a little better than the linear models', but nothing dramatic - the chi-squareds are fine for both, and the errors (relative to the reported standard errors) do not show any clear pattern (except that I see a slight bias with linear(67); linear(62) looks a tad better on that count, and also has a lower overall error).
 

Attachments

  • lcdmlin29.jpg
Last edited:
  • #52
Yes, thanks wabbit, your computer graphics picture is certainly smarter than my 'scanned-in' free-hand (but at least I show the error bars).

As Chalnoth said in #41
So the smart money here is on there being something wrong with the z=2.34 measurement from the Lyman-alpha forest. This suggests the need for more independent high-redshift data to resolve the issue.

But it is intriguing that in this analysis, of a data set different from the OP paper's, the linear model is again 'surprisingly consistent with the data'; further high-z data may take it either way - the "money" is not necessarily so "smart"! We'll wait and see...

Just a thought.

Garth
 
Last edited:
  • #53
Nah I must say your chart looks a lot better than my ugly graphic.
The z=2.34 point doesn't look that far off to me; it is about 2 sigma (sigma from their table) from either model curve (above or below, depending on whether you choose linear62 or LCDM) - a bit high, but not dramatically so.

The errors around LCDM get noisy at large z, which suggests that the table's sigmas might be somewhat underestimated. Attached are the normalized errors ##\frac{H(z)-H_{LCDM}(z)}{\sigma_H}##.
 

Attachments

  • lcdm-err.jpg
  • #54
Thanks, that makes it clearer; it just shows we need more data - as always!

Garth
 
  • #55
wabbit said:
The errors around LCDM get noisy at large z, which suggests that the table's sigmas might be somewhat underestimated. Attached are the normalized errors ##\frac{H(z)-H_{LCDM}(z)}{\sigma_H}##.

Well that is if the 'prior' is the [itex]\Lambda[/itex]CDM model. If the 'prior' is the R=ct model then errors for large z would presumably (from a cursory look at the plot) get quieter, which might suggest something about that model. Would it be possible for you to do an equivalent error diagram for the 'linear' model? That would be great.

A similarly cursory look at my plot suggests that between z = 0.4 and 1.0 the data fit the [itex]\Lambda[/itex]CDM model more closely; however, from z = 1.0 to 2.4 the data fit the 'linear' model better.

Now, I know some think I am making a meal of this. However, if in the OP paper alone the 'linear' model were as "surprisingly quite consistent" with the data as the [itex]\Lambda[/itex]CDM model, that could be just coincidence; but here we have two completely independent sets of data in which the same "rather surprising" 'coincidental' concordance is true.

That might be more than just coincidence.

So what is the data telling us?

Just a thought...

Garth
 
Last edited:
  • #56
I don't think they get less noisy - the total squared normalized error is greater with the linear model, especially with h0=0.67.
I'll post those later; I don't have them at hand now. For h0=0.67 the worst part is in the mid-z range, where you get at least one 3-sigma error. But if you use h0=0.62 (a better fit for those 29 points), as I recall the errors are also within 2 sigma, and z=2.34 has as much error as in LCDM, just of opposite sign.
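For the record, under these assumptions the best-fit ##h_0## for the linear model ##H(z) = H_0(1+z)## follows from a simple weighted least-squares estimate; a sketch (placeholder data again, not the actual 29 points):

[code]
import numpy as np

# Weighted least-squares estimate of H0 for the linear model H(z) = H0 * (1+z),
# minimising sum(((H_i - H0*(1+z_i)) / sigma_i)^2) over H0.
z     = np.array([0.07, 0.57, 1.30, 2.34])     # placeholders
H_obs = np.array([69.0, 96.8, 168.0, 222.0])   # placeholders, km/s/Mpc
sig_H = np.array([19.6, 3.4, 17.0, 7.0])       # placeholders

x = 1 + z
w = 1 / sig_H**2
H0_fit = np.sum(w * H_obs * x) / np.sum(w * x**2)
H0_err = 1 / np.sqrt(np.sum(w * x**2))
print(f"best-fit H0 for the linear model: {H0_fit:.1f} +/- {H0_err:.1f} km/s/Mpc")
[/code]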
 
  • #57
Garth said:
Yes, thanks wabbit, your computer graphics picture is certainly smarter than my 'scanned-in' free-hand (but at least I show the error bars).

As Chalnoth said in #41

But it is intriguing that in this analysis, of a data set different from the OP paper's, the linear model is again 'surprisingly consistent with the data'; further high-z data may take it either way - the "money" is not necessarily so "smart"! We'll wait and see...

Just a thought.

Garth
The model still doesn't come anywhere near explaining either nucleosynthesis or the CMB power spectrum. The complete and utter failure to come close to fitting these pieces of data means the model can't possibly be correct.

If you want to argue for evolving dark energy, that's a different issue. There's still no evidence for that, but there's also no reason to believe that evolving dark energy would have anything to do with the linearly-evolving model.
 
  • #58
wabbit said:
I don't think they get less noisy - the total squared normalized error is greater with the linear model, especially with h0=0.67.
I'll post those later; I don't have them at hand now. For h0=0.67 the worst part is in the mid-z range, where you get at least one 3-sigma error. But if you use h0=0.62 (a better fit for those 29 points), as I recall the errors are also within 2 sigma, and z=2.34 has as much error as in LCDM, just of opposite sign.
Thank you wabbit, I have tried to use accurate values of h0 = 0.673 from 2013/14
(I know the latest values, Planck 2015, give h0 = 0.678 and [itex]\Omega_m[/itex] = 0.308 - would that make a difference?)

We need that further data!

And Chalnoth - we have gone beyond looking at BBN or the CMB power spectrum, and are concentrating on what the independent data sets might be telling us about the later cosmic expansion history. As I said in #55 "here we have two completely independent sets of data in which the same "rather surprising" 'coincidental' concordance is true." What is that (the data) telling us?

Garth
 
Last edited:
  • #59
OK, same as before: (obs - model)/##\sigma_H##.

Stock LCDM is the best of the three (by the mean squared normalized error criterion), but these 29 points are not really enough; the conclusions are, I think, much stronger with the whole data set - even just the SNe Ia.

With the criteria I'm using here (the key assumption being reliance on the reported uncertainties), the linear model with h0=0.67 isn't good - it has a p-value of 0.05 and a >2.5-sigma error on 3 of the 29 points. While lin62 is close in global fit quality to LCDM, lin67 really isn't.
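(For reference, the p-value here is just the upper tail of the ##\chi^2## distribution for the summed squared normalised errors; a small sketch with an illustrative ##\chi^2## value, not the actual lin67 number:)

[code]
from scipy import stats

# p-value for a model given the summed squared normalised errors over N points,
# with no parameters fitted (h0 held fixed): upper tail of chi2 with N dof.
n_points = 29
chi2_observed = 42.6          # illustrative value only
p = stats.chi2.sf(chi2_observed, df=n_points)
print(f"p-value: {p:.3f}")    # ~0.05, since ~42.6 is the 95th percentile for 29 dof
[/code]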
 

Attachments

  • lin62err.jpg
  • lcdmerr.jpg
  • lin67err.jpg
Last edited:
  • #60
wabbit said:
OK, same as before: (obs - model)/##\sigma_H##.

Stock LCDM is the best of the three (by the mean squared normalized error criterion), but these 29 points are not really enough; the conclusions are, I think, much stronger with the whole data set - even just the SNe Ia.
Great, thank you wabbit very much!

The lin67 errors look nice and symmetrical...

Garth
 
Last edited:
  • #61
Garth said:
Great, thank you wabbit very much!

The lin67 errors look nice and symmetrical...

Garth
I added a comment about that in the previous post - no, they don't; they show a significant negative bias, and I had to extend the scale for all the graphs just so lin67 would fit. The LCDM errors are all within 1.5 sigma except one at 2 sigma; not so for lin67.

This was the reason I added lin62, to give a fair chance to the linear model.
 
  • #62
wabbit said:
I added a comment about that in the previous post - no, they don't; they show a significant negative bias, and I had to extend the scale for all the graphs just so lin67 would fit. The LCDM errors are all within 1.5 sigma except one at 2 sigma; not so for lin67.

This was the reason I added lin62, to give a fair chance to the linear model.
Okay - I see your point. The important thing is to use the most accurate values available, and if the model doesn't fit the data, then it doesn't fit.
Added comment: I wasn't disputing the better overall fit of LCDM, just noting that, of the two linear figures, the lin67 plot looked more symmetrical than the lin62 one.
Garth
 
Last edited:
  • #63
Garth said:
And Chalnoth - we have gone beyond looking at BBN or the CMB power spectrum, and are concentrating on what the independent data sets might be telling us about the later cosmic expansion history.
Like I said. Cherry picking.

Disregarding the highest-accuracy data that we have is highly disingenuous.

Now, if you had some different model that behaves like ##\Lambda##CDM at early times, but like ##R=ct## at late times, maybe it would be more reasonable. But the fact that this model requires gravity at large scales to be completely independent of the matter density makes this sort of model highly unlikely to be sensible.

Garth said:
As I said in #55 "here we have two completely independent sets of data in which the same "rather surprising" 'coincidental' concordance is true." What is that (the data) telling us?
Except the fit isn't even all that good. As wabbit pointed out, you need an unreasonably-low value of ##h## to make the fit close to good.
 
  • #64
Chalnoth said:
Like I said. Cherry picking.

Disregarding the highest-accuracy data that we have is highly disingenuous.
As the models diverge at high z - where, for any particular z, the universe is considerably older (and H(z) correspondingly smaller) in the linear model - the significant data points will be those in this earlier regime.

The age tension with highly developed quasars and their associated SMBHs at the highest observed redshifts may also be pertinent to the H(z) vs. z plot.
Now, if you had some different model that behaves like ##\Lambda##CDM at early times, but like ##R=ct## at late times, maybe it would be more reasonable. But the fact that this model requires gravity at large scales to be completely independent of the matter density makes this sort of model highly unlikely to be sensible.
Such as the Steady Flow cosmological model (Juan Casado's #27), where, subsequent to the radiation-dominated era, '[itex]\Lambda[/itex]' evolves, varying as [itex]a(t)^{-2}[/itex]?

This is delivered by an EoS of [itex]p = -\frac{1}{3}\rho[/itex], so decoupling gravity at large scales from matter density. Such an evolving DE might also ease the coincidence problem of the standard model.
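For completeness, a short worked check of that scaling (standard FRW fluid and Friedmann equations, ignoring other components): the continuity equation ##\dot\rho + 3H(\rho + p) = 0## with ##p = -\tfrac{1}{3}\rho## gives ##\dot\rho = -2H\rho##, i.e. ##\rho \propto a^{-2}##; the Friedmann equation ##H^2 \propto \rho \propto a^{-2}## then gives ##\dot a = ## const, i.e. ##a \propto t## - a coasting, linearly expanding universe, consistent with the linear model discussed above.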

Developing a mechanism, an action, to deliver such a model is the challenge...

Garth
 
Last edited:
  • #65
Garth said:
Such as the Steady Flow cosmological model (Juan Casado's #27), where, subsequent to the radiation-dominated era, '[itex]\Lambda[/itex]' evolves, varying as [itex]a(t)^{-2}[/itex]?

This is delivered by an EoS of [itex]p = -\frac{1}{3}\rho[/itex], so decoupling gravity at large scales from matter density. Such an evolving DE might also ease the coincidence problem of the standard model.

Developing a mechanism, an action, to deliver such a model is the challenge...

Garth

I very much liked the idea in that paper of promoting coincidences to laws and seeking a natural explanation for the value of the cosmological constant, but as you say, what appears to be missing there (yet?) is a well-developed physical model with experimental support. Very intriguing still.
 
  • #66
Thank you, Garth and wabbit, for the great job. Well, I agree that both models seem to fit the data similarly and that more measurements are needed to decide in favor of one of them (as usual). Notice, however, that two "free" parameters are needed to obtain the fit for the LCDM model, whereas only one parameter is enough in the case of linear expansion in the late universe...
Chalnoth, the issues of primordial nucleosynthesis and CMB are addressed in the paper on the Steady Flow model I quoted previously.
 
  • #67
wabbit said:
I very much liked the idea in that paper of promoting coincidences to laws and seeking a natural explanation for the value of the cosmological constant, but as you say, what appears to be missing there (yet?) is a well-developed physical model with experimental support. Very intriguing still.
I guess that you refer to observational support when you say experimental support. I hope it will arrive in the near future. I am also open to collaboration in order to develop a better physical model...
 
  • #68
JuanCasado said:
I am also open to collaboration in order to develop a better physical model...
You are on!

Garth
 
Last edited:
  • #69
JuanCasado said:
I guess that you refer to observational support when you say experimental support.
I was referring in general to any experiments which might be done to investigate proposed fundamental mechanisms, but presumably most would be done using astronomical instruments pointed at the sky : )
 
  • #70
Garth said:
As the models diverge at high z - where, for any particular z, the universe is considerably older (and H(z) correspondingly smaller) in the linear model - the significant data points will be those in this earlier regime.
The CMB is the earliest data point from radiation (and has the lowest systematic uncertainties). Nucleosynthesis, though it does have more systematic uncertainties, is from an even earlier time. I think you've just made my point for me.

Garth said:
The age tension with highly developed quasars and their associated SMBHs at the highest observed redshifts may also be pertinent to the H(z) vs. z plot. Such as the Steady Flow cosmological model (Juan Casado's #27), where, subsequent to the radiation-dominated era, '[itex]\Lambda[/itex]' evolves, varying as [itex]a(t)^{-2}[/itex]?

This is delivered by an EoS of [itex]p = -\frac{1}{3}\rho[/itex], so decoupling gravity at large scales from matter density. Such an evolving DE might also ease the coincidence problem of the standard model.
Oh, wow. That paper is terrible. He claims that Newtonian arguments suggest that matter should not impact the expansion rate. That's just wildly incorrect.

It might be possible to come up with some elaborate model where you have a dark energy-like fluid which interacts with matter in such a way as to produce a combined ##w=-1/3##. But that would be extremely elaborate. And it's not likely to work for the early universe anyway.
 
Last edited:
