Is Mirror Matter Still Considered a Viable Hypothesis Among Physicists?

In summary: you could argue that predictions that go beyond the current data are justified if there is evidence that the hypothesis is more likely true than false. But even then, it's important to be sceptical and to test the hypothesis empirically.
  • #1
johne1618
What is the current state of the hypothesis of mirror matter today?
Are there any experimental data or theoretical arguments that exclude it by now, or is it still considered viable among physicists?

The relationship between mirror matter and ordinary matter is different from that between matter and antimatter, which are related by space-time reflection.

As far as I can make out, matter and mirror matter are hypothesised to be related by space reflection alone.

Mirror matter theorists like Dr. Robert Foot believe that dark matter might be made up of mirror hydrogen and mirror helium.
 
  • #2
This was one of Foot's predictions:

Higgs production and decay rates should be 50% lower than in the standard model of particle physics due to Higgs-mirror-Higgs mixing [4,1]. This holds assuming that the two mass-eigenstate Higgs fields have a mass separation much larger than their decay widths.
LHC results seem to rule that out.
 
  • #3
In "ATLAS and CMS hints for a mirror Higgs boson", Robert Foot states in the abstract:

ATLAS and CMS have provided hints for the existence of a Higgs-like particle with mass of about 144 GeV with production cross section into standard decay channels which is about 50% that of the standard model Higgs boson. We show that this 50% suppression is exactly what the mirror matter model predicts when the two scalar mass eigenstates, each required to be maximal admixtures of a standard and mirror Higgs boson, are separated in mass by more than their decay widths but less than the experimental resolution. We discuss prospects for the future confirmation of this interesting hint for non-standard Higgs physics.
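
If I follow the argument in the abstract (this is my paraphrase, not a quote from the paper), the 50% comes from the maximal mixing: the mass eigenstates are $$H_\pm = \frac{1}{\sqrt{2}}\,(h \pm h'),$$ where $h$ is the standard Higgs and $h'$ the mirror Higgs. Each eigenstate couples to ordinary matter with $1/\sqrt{2}$ of the SM strength, so it is produced with half the SM cross section, and it also decays into standard channels only half the time (the other half goes invisibly into the mirror sector), giving $\tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4}$ of the SM signal per state. Since the two states sit closer together than the experimental resolution, their signals add, and the observed rate is $2 \times \tfrac{1}{4} = 50\%$ of the SM value.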

The latest Higgs result at 125 GeV doesn't seem to have this 50% anomaly.

These are Robert Foot's latest papers mentioning the Higgs:

Quark-lepton symmetric model at the LHC

Electroweak scale invariant models with small cosmological constant


Both are too technical for me to see what he's now saying about mirror-matter theory and the Higgs boson!

Could anyone enlighten me?
 
  • #4
johne1618 said:
The latest Higgs result at 125 GeV doesn't seem to have this 50% anomaly.
In addition, any signal at 50% of the standard model value near 144 GeV is excluded now, if I remember correctly.
That paper looks like a typical example of how useless some theory papers can be. There was a statistical fluctuation, a theoretician adapted his pet theory to the fluctuation, the fluctuation went away, and we learned nothing new. That particular idea is excluded, but it was so fine-tuned to the fluctuation that the exclusion is irrelevant anyway.
 
  • #5
mfb said:
That paper looks like a typical example of how useless some theory papers can be. There was a statistical fluctuation, a theoretician adapted his pet theory to the fluctuation, the fluctuation went away, and we learned nothing new. That particular idea is excluded, but it was so fine-tuned to the fluctuation that the exclusion is irrelevant anyway.

That's a little unfair, I think. In science we need as rich a variety of hypotheses to test as possible, and the place of theorists is to generate them. Most of them are crazy, tuned, and implausible, but then again the Standard Model is incredibly tuned as well, so who knows? Maybe you are right and there is a good reason why we should know for sure in advance that these things are not going to correlate with reality, but as far as I know our understanding of epistemology has not made it that far yet.

Also it is the job of theorists to tell us of all the possible interpretations of any fluctuations, even though we all know that said fluctuation is probably just that. Sure, it's not very exciting when the fluctuation goes away, but the field still moves forward inch by inch. Maybe these papers are only interesting to other theorists who work on similar things, but so what? This is the case with minor advances in every field.
 
  • #6
Well, if the models are so flexible that we can postdict everything, how can we trust any predictions? How can we test a model if there is a version for every possible measurement?
The Standard Model is able to predict hundreds of measurement results with just ~25 free parameters. If a new theory requires more free parameters than it can predict measurements (different from SM) within the reach of current experiments, it is hard to test it.
 
  • #7
mfb said:
Well, if the models are so flexible that we can postdict everything, how can we trust any predictions?

Well, this is a philosophical question now, and the answer depends on what you think about the problem of induction. If you adhere strictly to falsificationism then you can't rely on any predictions that go outside the domain of your current data. For example, even if all the swans you have ever seen (say, in England) are white, the hypothesis that a swan you see when you go to Australia will be black is still not falsified, so you are not justified in the generalisation that all swans are white.

Broadly speaking, this is the approach to hypothesis testing adopted in physics today. A model is not falsified until it is falsified; that is, a model with one set of parameters is not damned by the failure of the same model with a different set of parameters. From a frequentist perspective, a separate p-value is computed for every set of parameters; each parameter-space point is judged independently.
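
Here is a minimal sketch of that per-point logic (the event counts and the 20-events-per-unit-mu signal are invented for the illustration, not taken from any real analysis):

```python
# Toy per-point exclusion: each parameter value gets its own p-value,
# and only that point is excluded, never the framework as a whole.
import numpy as np
from scipy import stats

observed = 105.0      # hypothetical observed event count
background = 100.0    # hypothetical expected background

# Scan a hypothetical signal-strength parameter mu of the new model.
for mu in np.linspace(0.0, 3.0, 7):
    expected = background + 20.0 * mu   # toy signal: 20 events per unit mu
    # One-sided p-value: probability of a count this low if this point is true.
    p = stats.poisson.cdf(observed, expected)
    verdict = "excluded at 95% CL" if p < 0.05 else "still allowed"
    print(f"mu = {mu:.1f}: p = {p:.3f} -> {verdict}")
```

Points with large mu get excluded one by one, but small-mu points survive, and the framework as a whole is never touched.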

mfb said:
How can we test a model if there is a version for every possible measurement?

A model is the theoretical framework plus the parameters, and you can only test them one set of parameters at a time. So each measurement rules out some parameters and not others, and does not tell you anything about the framework itself until *no* parameter set lets you fit the data.


mfb said:
The Standard Model is able to predict hundreds of measurement results with just ~25 free parameters. If a new theory requires more free parameters than it can predict measurements (different from SM) within the reach of current experiments, it is hard to test it.

Yes, it is indeed hard to test it.

-

So that is the orthodox story. But I share your general feelings on the matter, and consider it more of an indictment of falsificationism and frequentist hypothesis testing than I do a solid defense of current practice. Unfortunately, however, it is not really clear how to do better. It seems to me like Bayesian methods have the right philosophical angle (and can incorporate considerations of fine-tuning, etc.), but so far there is no agreement about how to correctly apply them to model comparison in this kind of grand setting.
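
To make the Bayesian angle concrete, here is a toy sketch (all numbers invented for illustration) comparing a rigid zero-parameter model against a flexible one-parameter model. The flexible model automatically pays an Occam penalty in its evidence, which is exactly the fine-tuning consideration that per-point frequentist testing ignores:

```python
# Toy Bayesian model comparison (all numbers invented for illustration).
# Model A: rigid, predicts signal strength mu = 1.0 exactly (no free parameters).
# Model B: flexible, mu is free with a wide uniform prior on [0, 5].
import numpy as np
from scipy import stats

measured, sigma = 1.1, 0.2   # hypothetical measurement of the signal strength

def likelihood(mu):
    return stats.norm.pdf(measured, loc=mu, scale=sigma)

Z_A = likelihood(1.0)                    # evidence of A: one sharp prediction
mu_grid = np.linspace(0.0, 5.0, 2001)
Z_B = likelihood(mu_grid).mean()         # evidence of B: prior-averaged likelihood

print(f"Bayes factor A/B = {Z_A / Z_B:.1f}")  # > 1 favours the rigid model
```

Both models can "fit" the measurement, but the rigid one is rewarded for predicting it, while the flexible one is penalised for spreading its prior over outcomes that did not occur.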
 
  • #8
I see a big difference between the work of Robert Foot and some other models. As an example, compare those two:
Paul Dirac: "for every fermion, there is an antiparticle with the same mass and spin, but opposite charge". Free parameters: zero. The whole theory can be falsified if no partner of the electron can be found.
Robert Foot: "with mirror matter, with some new parameters with specific values, we get 50% of the SM Higgs signal at 144 GeV". Free parameters: at least as many as the number of measurements the model describes. I do not see any clear predictions in the paper apart from the precise amplitude the signal should have (relative to the SM).

A model is the theoretical framework plus the parameters, and you can only test them one set of parameters at a time. So each measurement rules out some parameters and not others, and does not tell you anything about the framework itself until *no* parameter set lets you fit the data.
Sure, but if a model needs 10 parameters to describe 1 measurement, I am "a bit" skeptical.
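
To see why that parameter counting matters, here is a throwaway demonstration (the "data" are random numbers, invented for the sketch): give a model one free parameter per measurement and it fits anything perfectly, so a perfect fit tells you nothing.

```python
# Toy overfitting demo: N free parameters reproduce N measurements exactly,
# no matter what the measurements are. The "data" here are pure noise.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10.0)
y = rng.normal(size=10)          # 10 "measurements" of pure noise

# A degree-9 polynomial has 10 coefficients: one free parameter per point.
coeffs = np.polyfit(x, y, deg=9)
residuals = y - np.polyval(coeffs, x)

print(f"max residual: {np.abs(residuals).max():.1e}")  # ~0: a "perfect" fit
```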
 
  • #9
mfb said:
I see a big difference between the work of Robert Foot and some other models. As an example, compare those two:
Paul Dirac: "for every fermion, there is an antiparticle with the same mass and spin, but opposite charge". Free parameters: zero. The whole theory can be falsified if no partner of the electron can be found.
Robert Foot: "with mirror matter, with some new parameters with specific values, we get 50% of the SM Higgs signal at 144 GeV". Free parameters: at least as many as the number of measurements the model describes. I do not see any clear predictions in the paper apart from the precise amplitude the signal should have (relative to the SM).

Sure, but if a model needs 10 parameters to describe 1 measurement, I am "a bit" skeptical.

Well, this is where, yes, there are some principles that help guide our thinking, but what I meant was that there is no rigorous formalism that can be applied. The somewhat vague advice of falsificationist philosophy is that you should adopt as your working model whichever one is the *most* falsifiable, since this is the model that has been the most "bold", if you will, with its predictions. So yes, no one suggests that there is any good reason to believe the predictions of these theories that you can tune the bejesus out of (and they are of course highly vulnerable to being overfitted, as you point out), but nor are they excluded, so we should still study them just in case.

Also, once we go beyond the Standard Model, it is pretty unclear which model best meets this criterion of "most falsifiable", so casting the net broadly seems a good strategy to me.
 
  • #10
so casting the net broadly seems a good strategy to me.
That is done on the experimental side anyway. 6 out of the last 8 ATLAS papers look for new particles, for example. CMS has published some papers about the SM Higgs boson recently; excluding those, most papers are searches for new particles as well.
 
  • #11
mfb said:
That is done on the experimental side anyway. 6 out of the last 8 ATLAS papers look for new particles, for example. CMS has published some papers about the SM Higgs boson recently; excluding those, most papers are searches for new particles as well.

They certainly try, but (without checking...) if I remember correctly the vast majority of the new-particle searches are for SUSY particles, and they are mostly in weird simplified scenarios that make no theoretical sense and make strange assumptions, like that the new particle decays 100% into one channel or another. Of course this is because they are just trying to cook up something simple enough to optimise a search around, with the hope that if they capture the phenomenology cleanly enough then they might catch a glimpse of whatever the "real thing" is. This is their job, after all: trying to find something. It is not really up to them to properly constrain realistic theories; it is better if they can produce broadly applicable results which theorists can use to constrain their models.

Anyway, I guess I mean that it is good if they can include in their search strategies some searches that would be useful for constraining even these weird, less popular models, and they are going to need the people working on those models to tell them what to look for. But I guess SUSY is still the go-to framework and will be the main focus of effort for a while yet (unless the experimentalists see something spectacularly non-SUSY...).
 
  • #12
I don't think it's black and white, but a theory that when faced with a statistically insignificant excess claims "I predicted it all along" doesn't look so good when it goes away. Real chutzpah is when the theory is said to predict the excess going away as well.
 
  • #13
Vanadium 50 said:
I don't think it's black and white, but a theory that when faced with a statistically insignificant excess claims "I predicted it all along" doesn't look so good when it goes away. Real chutzpah is when the theory is said to predict the excess going away as well.

Yes, but it is still just an overfitting problem. People usually hedge their predictions, saying something more like "this excess *can* be explained by such-and-such theory"; they practically never claim that an excess is an unambiguous prediction of the theory. Usually you just change the parameters a bit and boom, no more prediction of an excess. I think people should still publish these things, though; if nothing else they tell you something about which parameters are going to be disfavoured when the excess does go away. It's not mind-blowingly interesting physics, but something is still learned from doing it, I think.

...within reason, that is. Excesses below 2 sigma are surely a waste of time, since the parameter points predicting them will not even be ruled out at 95% confidence when the excess vanishes (roughly speaking).
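
For the record, here is the back-of-the-envelope arithmetic behind that "roughly speaking" (my own numbers, using standard Gaussian tail probabilities):

```python
# Sigma <-> p-value arithmetic behind the "2 sigma" remark above.
from scipy import stats

for n_sigma in (1.0, 2.0, 3.0):
    p = stats.norm.sf(n_sigma)   # one-sided Gaussian tail probability
    print(f"{n_sigma:.0f} sigma: one-sided p = {p:.4f}")

# A 95% CL exclusion needs p < 0.05, i.e. about a 1.64 sigma discrepancy,
# so a point tuned to a sub-2-sigma excess only barely clears that bar
# (if at all) once the excess vanishes.
print(f"95% CL threshold: {stats.norm.isf(0.05):.2f} sigma")
```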
 

Related to Is Mirror Matter Still Considered a Viable Hypothesis Among Physicists?

1. What is the Mirror Matter Hypothesis?

The Mirror Matter Hypothesis is a proposal in physics that there exists a hidden sector of "mirror" particles, a parity-reflected copy of the particles that make up ordinary matter, which interacts with our sector only very weakly (for example, through gravity).

2. What evidence supports the Mirror Matter Hypothesis?

There is no direct experimental evidence for mirror matter. The main motivation is theoretical: adding a mirror sector would restore exact parity symmetry to the laws of physics, which the ordinary weak interactions violate. Mirror matter has also been proposed as a dark matter candidate, since it would interact with ordinary matter essentially only through gravity.

3. How does the existence of mirror matter impact our understanding of the universe?

If the Mirror Matter Hypothesis is proven to be true, it would greatly expand our understanding of the universe and the laws of physics. It could help explain phenomena such as dark matter and help us better understand the origins of the universe.

4. How is mirror matter different from regular matter?

Mirror matter is believed to have the same properties as regular matter, such as mass and charge, but with opposite handedness: where the ordinary weak interaction acts on left-handed particles, the mirror weak interaction acts on right-handed mirror particles. In this sense it behaves like a mirror image of regular matter.

5. What are the potential implications of the Mirror Matter Hypothesis?

If mirror matter is discovered, it would have a major impact on our current understanding of particle physics, cosmology, and the history of the universe, and it could account for some or all of the dark matter.
