Do TOE candidates predict SM parameters?

In summary, the theories that rely on option #3 have CP symmetry broken spontaneously in some way; separately, some predictions about particle masses can be made within the spectral (noncommutative-geometry) Standard Model discussed below.
  • #1
Dmitry67
I wonder whether TOE candidate theories (loop quantum gravity, superstrings, twistors, or <you-name-it>) predict SM parameters from first principles (all of them, some of them, or at least correlations like the Koide formula)?

If they don't predict SM parameters, then the only option is "10**500 different universes with different SM parameters, born from the false vacuum", correct?

What is the relationship between the different theories and the "10**500 universes" approach:
1. absolutely required,
2. optional,
3. or not compatible at all?

Of course, the answer may be different for different theories.
Thanks.
 
  • #2
Kaluza-Klein theory predicts some SM parameters, and it gets them wrong: it gives a wrong value for the electron mass and predicts a massless scalar particle. That was one of the reasons it was abandoned by the mainstream. It is sad, IMO, since some Higgs-like mechanism could actually correct the values.
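For readers wondering what goes wrong quantitatively, here is the schematic five-dimensional version (normalization factors depend on conventions, so take this as a sketch):

```
q_n \;=\; n\,\frac{\sqrt{16\pi G_N}}{R}\,, \qquad m_n \;\sim\; \frac{|n|}{R}\,,
\qquad\Longrightarrow\qquad
m_1 \;\sim\; \frac{e}{\sqrt{16\pi G_N}} \;\sim\; 10^{17}\ \text{GeV},
```

so identifying the lightest charged Kaluza-Klein mode with the electron forces its mass up near the Planck scale instead of 0.511 MeV.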
 
  • #3
Explaining the SM parameters is a long journey. Thousands of QFT papers have been written trying to explain just parts of the general structure - the order-of-magnitude relations between fermion masses, or between the elements of the mixing matrices. String theory is then a unified framework which, on the right background, might produce these GUTs or other BSM models, but it is very hard to extract predictions even for a specific background because of hard-to-calculate quantum corrections. So explaining the SM is a jigsaw: you need to work separately on its different parts, and you need multiple ideas for how each part might fit together. Eventually there will be another leap in understanding and a deeper logic will become apparent, as happened when the SM was created in the 1970s; for now there is no path other than to keep pushing in all directions.
 
  • #4
I understand the issue with quantum corrections. But let's discuss a narrower question: CP violation. The exact value is not important; what is interesting is whether it is 0 or not. So we can forget about the exact size of the quantum corrections.

As mathematics itself is deterministic, it is hard to get asymmetry out of a symmetric theory. There are 3 options:

1. The asymmetry is injected into an otherwise symmetric framework via initial/boundary conditions. However, a TOE should not depend on initial conditions; otherwise it is the same as just postulating some specific values of the SM parameters.

2. The asymmetry is a core property of the mathematical framework of the TOE. For example, exp(x) is not symmetric at x=0. Say, negative charges are fundamentally different from positive ones, but at low energies that asymmetry is almost hidden by quantum corrections. However, we know that at high energies we see more symmetries, not fewer!

3. The symmetry is symmetrically and deterministically broken (MWI-cat-style). Essentially this is the same as the 10**500 "baby universes" approach: in every baby universe the symmetry is somehow broken, but the total ensemble of universes is perfectly symmetric.

Anything missing?

So I was asking which TOE theories rely on #3 and which on #2.
 
  • #5
In string theory, CP symmetry is part of ten-dimensional gauge symmetry, so any CP violation has to be spontaneous. That can happen in a variety of ways, e.g. from fluxes in the extra dimensions or from supersymmetry-breaking, so exactly why you get CP violation from the weak interaction but not from the strong interaction, etc., will depend on the model.
 
  • #6
Is Connes' non-commutative geometry approach a ToE candidate? If so, then there is a ToE candidate, NCG, which predicts SM parameters.
 
  • #7
tom.stoer said:
Is Connes' non-commutative geometry approach a ToE candidate? If so, then there is a ToE candidate, NCG, which predicts SM parameters.

Yes!
http://arxiv.org/pdf/1208.1030v2.pdf
Resilience of the Spectral Standard Model
Ali H. Chamseddine and Alain Connes

I'm glad to see them using the terminology "spectral geometry" more often now instead of "noncommutative geometry", and consequently saying "spectral Standard Model" instead of "noncommutative Standard Model". It is more descriptive. Urs Schreiber advocated this terminology over 5 years ago in a tutorial he gave.

They do seem to reproduce the SM and make some predictions about particle masses. Their idea of a "ToE" seems to involve the "big desert" hypothesis: that the Standard Model we have might be good all the way to the Planck scale, or the unification scale. Perhaps a theory of gravity and matter that is good up to the Planck scale is all one can reasonably ask of a "ToE".

It's impressive that they get the Standard Model and even predict some parameters based on what is a fairly simple geometric scheme.

I'm not sure just how they acquire gravity. Could someone give an intuitive explanation of how they do that? They obviously have all the appropriate machinery---a 4D manifold and an ability to describe the metric on it.

This paper of Connes and Chamseddine was on the "Most Important Paper" poll and got three votes:
https://www.physicsforums.com/poll.php?do=showresults&pollid=2289
If you haven't voted in the poll yet, and want to, go here:
https://www.physicsforums.com/showthread.php?t=640181
 
  • #8
marcus said:
It's impressive that they get the Standard Model and even predict some parameters based on what is a fairly simple geometric scheme.
I have to understand the classification of their NCGs better in order to understand what it means that "they get the Standard Model". In their latest paper they don't get the Higgs mass (even though they claim so in the abstract); they show that 125 GeV is reasonable within their theory, so it is at best a post-diction.

marcus said:
I'm not sure just how they acquire gravity.
I am not, either; that's why I asked whether "... Connes' non-commutative geometry approach [is] a ToE candidate"
 
  • #9
marcus said:
Yes!
http://arxiv.org/pdf/1208.1030v2.pdf
Resilience of the Spectral Standard Model

They do seem to reproduce the SM and make some predictions about particle masses. Their idea of a "ToE" seems to involve the "big desert" hypothesis: that the Standard Model we have might be good all the way to the Planck scale, or the unification scale. Perhaps a theory of gravity and matter that is good up to the Planck scale is all one can reasonably ask of a "ToE".

As I understand it, the symmetries of the SM continue to exist in any spacetime, no matter how curled up or how fast it is expanding, since they are internal symmetries. So it would seem that these symmetries existed from the start, even during inflation. I would assume that something must have happened to begin to differentiate one symmetry from another during reheating, when inflation stopped. Could it be that each of the symmetries of U(1)xSU(2)xSU(3) begins to look like the others at small enough distances or high enough energies? And then, as the temperature cooled, the distance scales grew large enough to make the differences between the symmetries apparent? Is that what is happening in the NCG model?
 
  • #10
friend said:
As I understand it the symmetries of the SM continue to exist in any spacetime...?

There is a revolutionary new idea here that you may not be taking account of: a new way to represent 4D spacetime that does NOT use a manifold. The manifold (a set of points, a set of maps, map-overlap consistency, smooth functions defined locally, a tangent space at each point) was invented around 1850 and is the math object everybody usually thinks of in this context as representing a spacetime or any other continuum. GR was defined on manifolds.

But there is a more abstract way to present spacetime! In a sense it is simpler and more generalizable. You can present a spacetime as an abstract NUMBER SYSTEM! It is very unintuitive why and how you can do this until you get used to it.

I don't mean ordinary numbers - integers, reals, complex numbers - NO! I mean a set of abstract things, not numbers, but things you can nevertheless add, subtract, and multiply. For example, imagine the set of all smooth real-valued functions defined on the surface of a donut. That is an algebraic object: you can add or multiply two functions to obtain a third. And hidden deep in the combination rules of that algebraic object can still be a simple geometric object like the surface of a donut. An algebraist can find the hidden geometrical thing, which is sometimes called the "spectrum" of the algebraic object. It is revealed by the way the functions defined on the donut add, subtract, and multiply with each other to give other functions.
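For anyone who wants the precise statement behind this picture, it is essentially (commutative) Gelfand duality; schematically:

```
X \;\cong\; \{\, \chi : C(X) \to \mathbb{C} \;\mid\; \chi \ \text{a nonzero algebra homomorphism} \,\},
\qquad x \;\longleftrightarrow\; \chi_x, \quad \chi_x(f) = f(x),
```

i.e. the algebra C(X) of continuous functions on a compact space X, with pointwise addition and multiplication, determines X completely: the points of X reappear as the evaluation homomorphisms. Noncommutative geometry keeps the algebra but drops commutativity, so there is no longer an underlying set of points; the "geometry" lives entirely in the algebra (plus a Dirac operator).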

The good thing is that the algebraic object can be modified ever so slightly, and then its spectrum will consist not merely of, say, the surface of a donut but, so to speak, a very fussy surface of a donut, which will only allow functions with a certain symmetry to be defined on it (some symmetry like U(1)xSU(2)xSU(3), though perhaps not exactly that).

There is no manifold any more, no set of points papered over with overlapping maps, and the symmetry is in some sense "dyed in the wool", intrinsic to the spectrum of this abstract algebraic system - the spectrum which would have represented an ordinary spacetime had it not been generalized or tweaked.

So Connes and Chamseddine figured out how to mimic a spacetime with the SM living in it (in this abstract way), and in the process they got MORE: they were able to extract predictions that you couldn't get from the ordinary SM.
They goofed on one of their first predictions (as they explain in the "Resilience" paper): they overlooked an important detail. But that happens. They found their mistake, and now they have a "post-diction" instead of a prediction. The main thing is that their way of realizing the SM gives something MORE than just the Standard Model.

Also it is kind of elegant.

One of the regulars at BtSM, arivero, knows a lot about NCG (or spectral geometry, as it is coming to be called) and can, I think, explain how you rig the algebra so that the U(1)xSU(2)xSU(3) SM arises from it. IIRC he studied this some years back, around 2007 and 2008.

Here is the first reference in the "Resilience" paper:
[1] Ali H. Chamseddine and Alain Connes, Why the Standard Model, J. Geom. Phys. 58 (2008) 38-47.

http://arxiv.org/abs/0706.3688
Why the Standard Model
Ali H. Chamseddine, Alain Connes
13 pages
(Submitted on 25 Jun 2007)

"The Standard Model is based on the gauge invariance principle with gauge group U(1)xSU(2)xSU(3) and suitable representations for fermions and bosons, which are begging for a conceptual understanding. We propose a purely gravitational explanation: space-time has a fine structure given as a product of a four dimensional continuum by a finite noncommutative geometry F. The raison d'etre for F is to correct the K-theoretic dimension from four to ten (modulo eight). We classify the irreducible finite noncommutative geometries of K-theoretic dimension six and show that the dimension (per generation) is a square of an integer k. Under an additional hypothesis of quaternion linearity, the geometry which reproduces the Standard Model is singled out (and one gets k=4)with the correct quantum numbers for all fields. The spectral action applied to the product MxF delivers the full Standard Model,with neutrino mixing, coupled to gravity, and makes predictions(the number of generations is still an input)."

And here's their second reference:

[2] Ali H. Chamseddine and Alain Connes, Noncommutative Geometry as a Framework for Unification of all Fundamental Interactions including Gravity. Part I, Fortsch. Phys. 58 (2010) 553-600 http://arxiv.org/abs/1004.0464
 
  • #11
So if this new theory is so good that it predicts SM parameters... then there are no 10**500 "baby universes" with different parameters... why, then, are the only possible parameters so life-friendly?
 
  • #12
marcus said:
I'm not sure just how they acquire gravity. Could someone give an intuitive explanation of how they do that?
First let me point out that according to Connes, they get the other forces from gravity's extension to the noncommutative part of space-time. The recipe is:
  • Start with an "almost commutative manifold" M x F, where M is Minkowski space and F is a "finite geometry".
  • Define a Dirac operator on M x F.
  • Construct the bosonic part of the "spectral action" for D (a standard formula, motivated by the need for local observables). Connes says that conceptually this action is "pure gravity", but fluctuations of the metric on M x F are of two types: the familiar sort, which are gravitons, and "inner fluctuations" associated with F, which give rise to the gauge bosons.
  • Finally, you add another standard term to the spectral action to get the fermions and their interactions.
This synopsis is based on expositions like this one. I've only skimmed it and can't tell you why this procedure gives you gravity.
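For concreteness, the action being described has, schematically, the shape below (conventions and the precise fermionic term vary between papers, so treat this as a sketch rather than the exact expression used by Chamseddine and Connes):

```
S \;=\; \underbrace{\operatorname{Tr} f\!\left(D/\Lambda\right)}_{\text{bosonic: gravity + gauge + Higgs}}
\;+\; \underbrace{\tfrac{1}{2}\,\langle J\tilde\psi,\, D\,\tilde\psi\rangle}_{\text{fermionic}},
```

where D is the Dirac operator on M x F, Λ is a cutoff scale, f a positive cutoff function, and J the real structure (charge conjugation). "Fluctuating" D over the finite geometry F is what produces the gauge and Higgs fields as the inner part of the metric.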
Dmitry67 said:
So if this new theory is so good that it predicts SM parameters...
It implies a few extra relationships among the SM parameters. So the total number of free parameters is slightly reduced, but there are still about 20 of them. In the NCG framework, most of these are moduli of the Dirac operator, the object used to construct the SM action.

I have seen a few perplexed online discussions in which people who know conventional particle physics and QFT struggle to understand where these extra relationships come from. Since the 1970s there has been a standard way to think about the action of a renormalizable gauge field theory: it contains all renormalizable interactions consistent with the specified gauge symmetry. These interaction terms have undetermined coefficients, and these are the free parameters of the theory.
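As a reminder of what that standard recipe produces, here is a schematic single-generation toy Lagrangian (not the full SM):

```
\mathcal{L} \;=\; -\tfrac{1}{4}\,F^{a}_{\mu\nu}F^{a\,\mu\nu}
\;+\; \bar\psi\, i\gamma^\mu D_\mu \psi
\;+\; |D_\mu\phi|^2 \;-\; \mu^2|\phi|^2 \;-\; \lambda|\phi|^4
\;-\; \bigl( y\,\bar\psi_L\,\phi\,\psi_R + \text{h.c.} \bigr),
```

where every renormalizable term allowed by the gauge symmetry is written down, and the coefficients (the gauge couplings hidden in D_\mu, the Yukawa y, the Higgs parameters \mu^2 and \lambda) are all free, to be fixed by experiment.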

The Standard Model is a renormalizable theory and it fits this framework. Connes et al have re-expressed the Standard Model in this new "noncommutative" or "spectral" framework, and they get the extra relations. So the challenge is to understand, from a perspective based on conventional QFT, where these extra constraints on the parameters come from. The fact that gravity is part of the noncommutative construction from the beginning may be relevant. I have seen string theorists speculate that the noncommutative theory may be a truncation of a Kaluza-Klein model (Lubos Motl) or a perspective on a non-geometric phase of string theory (Urs Schreiber), but what's really clear is that these are speculations, and there's still no rigorous understanding of how the spectral framework relates to the 1970s standard one.

I've also noticed that the recent papers (Chamseddine and Connes; Estrada and Marcolli) which adjust the NC standard model to get the 125 GeV Higgs use RG flow equations constructed by people working with the conventional SM, and I don't know whether that's OK. They seem to be hypotheses about how the RG flow in the NC SM might work, rather than RG equations derived from the postulates of the NC SM.
 
  • #13
So we are not saying farewell to the AP, as there are still free parameters...
 
  • #14
There could also just be non-anthropic arbitrariness.

But the ultimate foundations of the NC SM are somewhat obscure. E.g. (a technical detail): consider the function f appearing in the spectral action, which then gets expanded. The function is left quite unspecified; all that matters are the first few coefficients of the expansion, which then contribute to the observed parameters.
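Schematically (up to convention-dependent factors), the expansion in question is:

```
\operatorname{Tr} f\!\left(D/\Lambda\right)
\;\sim\; f_4\,\Lambda^4\,a_0(D^2) \;+\; f_2\,\Lambda^2\,a_2(D^2) \;+\; f_0\,a_4(D^2) \;+\; O(\Lambda^{-1}),
\qquad f_k = \int_0^\infty f(u)\,u^{k-1}\,du, \quad f_0 = f(0),
```

where the a_n(D^2) are heat-kernel (Seeley-DeWitt) coefficients; only the first few moments of the otherwise arbitrary function f survive in the action, and those moments end up feeding into the cosmological, gravitational and gauge/Higgs terms.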

In the standard framework of renormalizable field theory, the procedure for theory construction is understood well enough that we can say that the free parameters really are free. In the case of string theory, we can say that specifying the vacuum ought to determine all the "free parameters", but whether there is some deeper principle (anthropic or dynamical) which selects the vacuum or favors a certain type of vacuum is completely unclear, because string cosmology remains an unsettled or even badly-founded subject. But in the case of Connes et al's approach to physics, the foundation is simply obscure and therefore it's hard to say how predictive it becomes in its final form.

(By the way, there is no consensus or demonstration that eternal inflation works within string theory. Eternal inflation is originally a field-theoretic model, in which different regions of the inflating universe settle into different ground states. Some string critics of the concept, like Tom Banks, seem to favor a sort of holographic cosmic Copenhagenism in which the universe outside our cosmological horizon is simply disregarded. I think that's dumb as an outlook - galaxies don't cease to exist when they cross our horizon - but Banks also makes some technical criticisms of the field-theoretic assumptions behind the model of eternal inflation, arguing that they don't apply to string theory. And in general, quantum gravity in de Sitter space is just not worked out; see, e.g., the existence and meaning of the asymptotic instabilities in Castro and Maloney's latest. For all I know, those instabilities are a sign that eternal inflation is right; they may be the beginning of the validation of the paradigm. I'm just pointing out (1) the possibility that the different vacua of string theory are different "theories" - different superselection sectors - rather than states of the same theory which might be simultaneously realized within the one universe; (2) that the physical mechanisms behind eternal inflation still have an uncertain status within string theory; and (3) that string cosmology is very much a work in progress (unlike e.g. string perturbation theory), the discussions about it are still highly heuristic, and in the end the theory may dictate an entirely different approach.)
 
  • #15
The basic question is what "gravity" means in the spectral model: propagating gravitons, or non-perturbative, arbitrarily curved spacetime?
 
  • #16
mitchell porter said:
In the case of string theory, we can say that specifying the vacuum ought to determine all the "free parameters", but whether there is some deeper principle (anthropic or dynamical) which selects the vacuum or favors a certain type of vacuum is completely unclear, because string cosmology remains an unsettled or even badly-founded subject.

Then it is not a TOE yet (by the definition of TOE), as one more level is expected. Hopefully Max Tegmark's MUH is true, so we will be able to construct the turtles all the way down from the TOE; Max calls this "Physics from scratch".
 
  • #17
Dmitry67 said:
Then it is not a TOE yet (by the definition of TOE), as one more level is expected.
I don't agree. A ToE is a theory which explains all known physical phenomena. String theory has the potential to do that.

Let's take an example: from the SM and its effective or low-energy approximations you can derive the existence of several different phases of matter (gas, liquid, ..., steam, water, ice, ..., iron, ...). The fact that the SM doesn't tell you whether you observe carbon-hydrogen based organisms here instead of ice, iron, ... has nothing to do with limitations of the SM, but only with the initial conditions. Therefore it is absolutely unclear whether string theory should contain any selection principle in order to derive the unique vacuum we are living in.

In addition, I think the above-mentioned conservative definition of a ToE (which requires that it can predict any experimental result related to all known interactions) is widely accepted. Going beyond that definition, i.e. requiring that a ToE can ground its own uniqueness and consistency in itself, is beyond that definition - and is a logical nightmare.
 
  • #18
I agree with you, but the main point was about "free parameters". A TOE with "baby universes" should not have any free parameters. Of course, it doesn't predict the values of the SM parameters in *our* universe - that might be based on the AP.

So maybe it is a problem of terminology: parameters which appear to be "free" at the level of an individual baby universe are not free at the level of the full multiverse. My point was that at the multiverse level a TOE can't contain any free parameters. Do you agree?
 
  • #19
Let's take an example: from the SM and its effective or low-energy approximations you can derive the existence of several different phases of matter (gas, liquid, ..., steam, water, ice, ..., iron, ...). The fact that the SM doesn't tell you whether you observe carbon-hydrogen based organisms here instead of ice, iron, ... has nothing to do with limitations of the SM, but only with the initial conditions. Therefore it is absolutely unclear whether string theory should contain any selection principle in order to derive the unique vacuum we are living in.

To be a potential ToE, it should be proved that ST contains the vacuum we are living in (even as one among 10**500 others), which AFAIK is not (yet) the case...
 
  • #20
Dmitry67 said:
My point was that at the multiverse level a TOE can't contain any free parameters. Do you agree?
No. It may contain some free parameters which you can fix via a small number of experiments.

Again: it seems that we do not agree on "ToE". A theory describing all known phenomena consistently is a ToE. There can be more than one ToE, and there can be free parameters in a ToE (to be fixed in the above-mentioned sense).

There need not be a unique ToE; different ToEs may exist, and experiment will select the 'correct' one. It's called 'Theory of Everything' = ToE, not 'Unique Theory of Everything' = UToE.
 
  • #21
nicoo said:
To be a potential ToE, it should be proved that ST contains the vacuum we are living in (even as one among 10**500 others), which AFAIK is not (yet) the case...
In order to be a potential candidate ToE it need not prove that ;-)

But I think what we are discussing here is irrelevant; it's only about the meaning of "ToE". It is much more interesting whether a certain theory is able to describe nature better than other theories. If ST is better than the SM, that's fine, and nobody cares whether it is a ToE, a candidate ToE, a potential ToE, a candidate for potential ToEs, ...
 
  • #22
tom.stoer said:
But I think what we are discussing here is irrelevant; it's only about the meaning of "ToE". It is much more interesting whether a certain theory is able to describe nature better than other theories. If ST is better than the SM, that's fine, and nobody cares whether it is a ToE, a candidate ToE, a potential ToE, a candidate for potential ToEs, ...

You are entitled to your opinion, of course. But a Theory of Everything should at least explain everything physical. If parameters can only be "explained" by measurement, then there is no theory for those parameters and thus no TOE. If you have a theory that explains those parameters as being probabilistic, then at least there is a theory for them, but there is no way to confirm that probabilistic nature by comparing instances where the parameters are different. A complete TOE would not only predict the values of those parameters but would also explain where QM and GR come from and how they are connected. And as far as explanations go, I think a complete TOE would have to be derived from logic itself, or else you are left wondering why some part of nature is the way it is.
 
  • #23
friend, this is exactly what I wanted to say. Only such a TOE, "derived from logic itself", can be compatible with the MUH. I had become so used to the MUH that it turned into my "hidden assumption": when I was talking about a TOE, I was actually talking about a MUH-compatible TOE.

If tom.stoer is right, and there is a non-MUH TOE, where one or more "free" parameters just have these values - we just remember these values, we can't derive them - well... then it would be soooooo sad... Because in that case Nature is not beautiful.

God doesn't like free parameters (c) me :)
 
  • #24
tom.stoer said:
No. It may contain some free parameters which you can fix via a small number of experiments.

Again: it seems that we do not agree on "ToE". A theory describing all known phenomena consistently is a ToE. There can be more than one ToE, and there can be free parameters in a ToE (to be fixed in the above-mentioned sense).

There need not be a unique ToE; different ToEs may exist, and experiment will select the 'correct' one. It's called 'Theory of Everything' = ToE, not 'Unique Theory of Everything' = UToE.

That's one thing we should have learned from string theory 25 years ago: to rethink our expectations of what a theory of physics should be capable of. Crackpots especially are fond of theories which can explain "why the universe is as it is", probably because of bad popular science. I think for that you quickly run into a discussion of what the precise relation is between mathematics and the world surrounding us. Somehow I like to compare the idea that our mathematical notions are capable of explaining "why the universe is as it is" with the idea that our Earth is the centre of the universe. Why should mathematics be able to do that? Because it is so highly successful in the natural sciences?

I think the most down-to-earth way of shedding light on this question is actually trying to construct extensions of the standard model, or writing down theories of quantum gravity, and see how far one can get ;)
 
  • #25
Dmitry67 is right to distinguish MUH-ToEs from non-MUH-ToEs. And haushofer is right to prefer down-to-earth theories.

Suppose we had a class of theories like SUGRA with some special properties:
1) countably many (or even a finite number)
2) perturbatively renormalizable (or even finite)
3) with some very specific and testable predictions (perhaps even in the low-energy = non-Planck regime)
4) with a few free parameters which can in principle be determined by a finite number of experiments
Suppose that there is no underlying uniqueness principle, no MUH-like reasoning;
nevertheless, because of (1-4), I would be rather happy and would call this a great success.
 
  • #26
Me too! :D I'm not that much into phenomenology, but can people make phenomenological sense of N>1 SUSY or SUGRA, given the non-chiral nature of these theories? As far as I know it's only for N=8 SUGRA that we still don't know whether it's renormalizable or not, and things don't seem to look that good.

On the other hand, if it were renormalizable but phenomenologically "incorrect", would it be such a hugely important success (apart from all the technicalities one has to invoke to come to such a result)? It's nice to write down a theory of QG which is renormalizable, of course, but then what?
 
  • #27
The problem-setting is multiform:
1) we have to have some sort of internal consistency by construction
2) we have to have some predictive power in principle
3) we have to have some phenomenological success

In the case of the Standard Model, criterion 1) is not really satisfied; we know that the theory is renormalizable, but we also know that the perturbation series is not convergent and that we miss non-perturbative effects, and we do not have a rigorous proof (or disproof) that the SM exists (or does not exist) mathematically.
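As a schematic reminder of what "not convergent" means here (this is the standard expectation for such perturbation series, not a theorem for the full SM):

```
\mathcal{O}(g) \;\sim\; \sum_{n\ge 0} c_n\, g^{\,n}, \qquad c_n \;\sim\; n!\,A^{\,n} \quad (n \to \infty),
```

so the series has zero radius of convergence and is at best asymptotic, and effects of order e^{-const/g} (instantons and the like) are invisible to it term by term.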

But we are satisfied with 2-3); and because we do not expect the SM to meet the criteria for a ToE - whatever this means in detail - 1) is of minor importance.

Shifting our interest from phenomenologically successful theories to ToE-like models, we focus more on 1) than on 3). Why? Because we know that there is no phenomenological reason to be interested in anything else but SM + GR! So the reasons to be interested in ToEs are the elegance of unification, internal consistency, uniqueness, etc.

That does not mean that 2-3) become irrelevant, but they are secondary criteria. We are in a situation where looking for new theories is neither required by 3), nor can the new theories we construct be tested against 3). So there must be other guiding principles, e.g. in the spirit of 1).

It's a philosophical question whether this means MUH-ToEs, string-like ToEs, xyz-ToEs or whatever.

Back to your question: having a consistent but phenomenologically wrong theory would mean that it's ruled out as a theory of nature. But after decades of domination by theories which are neither phenomenologically successful (neither right nor wrong - not even wrong ;-) nor provably consistent (nor inconsistent), this would be a step forward: it would be the first tangible phenomenological result in quantum gravity!
 
  • #28
tom.stoer said:
...Back to your question: having a consistent but phenomenologically wrong theory would mean that it's ruled out as a theory of nature. But after decades of domination by theories which are neither phenomenologically successful (neither right nor wrong - not even wrong ;-) nor provably consistent (nor inconsistent), this would be a step forward: it would be the first tangible phenomenological result in quantum gravity!

Yes, that's some pretty scary territory. On the one hand, we'd like to develop a ToE from the requirement of logical consistency alone. Then no one could possibly argue with it in any way, and we'd be assured that everything is absolutely reasonable, as we suspected all along. On the other hand, what if it's wrong, and we make measurements that are not compatible with our theory of reason? That would be a dilemma that probably persuades some not even to look for such a theory. I think we need to have more faith than that.
 
  • #29
Another scary option: what if there is more than one TOE compatible with our experiments? What if the differences show up only at Planck energies, so we won't have a chance to tell which theory describes our Universe?

As an example, there are two theories of gravity compatible with our experiments: GR and Einstein–Cartan theory, and there is no practical way to test the difference experimentally. We hope that a future TOE would rule out one of them, but what if the TOE itself is not unique in the same way?

Almost ANY mathematical theory is extensible... Of course, in many (but not all) cases these extensions come at the price of some sacrifices and artifacts (like the "zero divisors" which come with quaternions), and there is no guarantee that the TOE is not extendable in that way...
 
  • #30
I agree with everything except for

Dmitry67 said:
... like the "zero divisors" which come with quaternions ...

Quaternions are a division algebra, so every nonzero element has a unique inverse, and there are no two quaternions p, q ≠ 0 such that pq = 0 or qp = 0; but quaternions are non-commutative, i.e. pq ≠ qp in general.
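A toy snippet, purely for illustration (not from the thread), checking both claims numerically: quaternion multiplication is non-commutative, yet the norm is multiplicative, |pq| = |p||q|, which is why a product of two nonzero quaternions can never be zero.

```
# Toy quaternion arithmetic: q = (w, x, y, z) stands for w + x*i + y*j + z*k.

def qmul(p, q):
    """Hamilton product of two quaternions given as 4-tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qnorm2(q):
    """Squared norm; it is multiplicative, so pq = 0 forces p = 0 or q = 0."""
    return sum(c*c for c in q)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))   # (0, 0, 0, 1)   i.e.  k
print(qmul(j, i))   # (0, 0, 0, -1)  i.e. -k  -> non-commutative
p, q = (1, 2, 3, 4), (2, -1, 2, 0)
print(qnorm2(qmul(p, q)) == qnorm2(p) * qnorm2(q))  # True: |pq|^2 = |p|^2 * |q|^2
```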
 
  • #31
My bad, octonions...
 
  • #32
no, not even octonions ;-)

Octonions are a division algebra without zero divisors, but they are not associative.
 
  • #33
Another try. Sedenions :)
 
  • #34
yes, they have zero divisors and are not a division algebra!
 
