What is the significance of the Sahlmann algebra in loop quantum gravity?

  • Thread starter marcus
  • Start date
  • Tags
    Lqg
  • #1
marcus
I was just reading a paper co-authored by Jerzy Lewandowski
which is dated June 24, 2003----but also dated February 2003; I don't know which applies.

http://arxiv.org/abs/gr-qc/0302059

It begins "the quantum holonomy operators and the quantum flux operators are the basic elements of quantum geometry..."

"...Recently, Sahlmann [1] proposed a new, more algebraic point of view. It opens the door to a representation theory of quantum geometry. The main idea is to spell out a definition of a *-algebra constructed from the holonomies and fluxes, that underlies all the loop quantum gravity framework, and to study its representations..."

Lewandowski calls a certain kind of holonomy-flux *-algebra a "Sahlmann algebra". In his papers Hanno Sahlmann is more modest and does not call the algebra that. (naming things after oneself is not customary in mathematics). Hanno is pretty young to have something named after him. I am trying to understand this a little better.

A good many of the key papers (1994--present) in loop quantum gravity (quantum-general-relativity in effect) have been by Ashtekar and Lewandowski---if there are central figures in LQG then those two would be my pick of the lot. Hanno Sahlmann just appeared---his thesis date is, I think, 2002.

In November 2002 Ashtekar, Lewandowski, and Sahlmann posted
http://arxiv.org/abs/gr-qc/0211012
"Polymer and Fock representations for a Scalar field"
here is the abstract:
"In loop quantum gravity, matter fields can have support only on the polymer-like excitations of quantum geometry, and their algebras of observables and Hilbert spaces of states can not refer to a classical, background geometry. Therefore, to adequately handle the matter sector, one has to address two issues already at the kinematic level. First, one has to construct the appropriate background independent operator algebras and Hilbert spaces. Second, to make contact with low energy physics, one has to relate this polymer description of matter fields to the standard Fock description in Minkowski space. While this task has been completed for gauge fields, important gaps remained in the treatment of scalar fields. The purpose of this letter is to fill these gaps."

edit: forgot earlier to include link to the first paper mentioned
 
Last edited by a moderator:
  • #2
Is the first paper you mentioned, the one coauthored by Lewandowski, online? If so, could we have a link to it? If this is the Ashtekar school's approach to putting quantized matter into their quantized spacetime, it is very important indeed.
 
  • #3
Originally posted by selfAdjoint
Is the first paper you mentioned, the one coauthored by Lewandowski, online? If so, could we have a link to it? If this is the Ashtekar school's approach to putting quantized matter into their quantized spacetime, it is very important indeed.

selfAdjoint thanks for flagging my omission. I intended to include a link to that paper but forgot as I was typing. It is:
http://arxiv.org/abs/gr-qc/0302059

At almost the same time (Feb 2003) that paper by Okolow and Lewandowski appeared, there also came out one by Sahlmann and Thiemann which is doing essentially the same thing:
http://arxiv.org/abs/gr-qc/0302090
"On the Superselection Theory of the Weyl Algebra for Diffeomorphism Invariant Quantum Gauge Theories"

Here is a quote from the beginning to give an idea:

<<Abstract

Much of the work in loop quantum gravity and quantum geometry rests on a mathematically rigorous integration theory on spaces of distributional connections. Most notably, a diffeomorphism invariant representation of the algebra of basic observables of the theory, the Ashtekar-Lewandowski representation, has been constructed. This representation is singled out by its mathematical elegance, and up to now, no other diffeomorphism invariant representation has been constructed. This raises the question whether it is unique in a precise sense.

In the present article we take steps towards answering this question. Our main result is that upon imposing relatively mild additional assumptions, the AL-representation is indeed unique. As an important tool which is also interesting in its own right, we introduce a C*-algebra which is very similar to the Weyl
algebra used in the canonical quantization of free quantum field theories.


1. Introduction

Canonical, background independent quantum field theories of connections [1] play a fundamental role in the program of canonical quantization of general relativity (including all types of matter), sometimes called loop quantum gravity or quantum general relativity. For a review geared to mathematical physicists see [2], for a general overview [3].

The classical canonical theory can be formulated in terms of smooth connections A on principal G-bundles over a D-dimensional spatial manifold Σ for a compact gauge group G and smooth sections of an associated (under the adjoint representation) vector bundle of Lie(G)-valued vector densities E of weight one. The pair (A, E) coordinatizes an infinite dimensional symplectic manifold (M, s) whose (strong) symplectic structure s is such that A and E are canonically conjugate.

In order to quantize (M, s), it is necessary to smear the fields A, E. This has to be done in such a way, that the smearing interacts well with two fundamental automorphisms of the principal G-bundle, namely the vertical automorphisms formed by G-gauge transformations and the horizontal automorphisms formed by Diff(Σ) diffeomorphisms. These requirements naturally lead to holonomies and electric fluxes, ...>>
 
  • #4
Funny thing is, I had found the Sahlmann-Thiemann paper just browsing the archive back when it came out. All that G-bundle theory you quoted brought it back to me. How are you on that stuff? I can do part of it but the vertical homomorphisms are out of my orbit. I have another paper where a very similar construction is used to define the BRST transformation in terms of topology.
 
  • #5
Originally posted by selfAdjoint
Funny thing is, I had found the Sahlmann-Thiemann paper just browsing the archive back when it came out. All that G-bundle theory you quoted brought it back to me. How are you on that stuff? I can do part of it but the vertical homomorphisms are out of my orbit. I have another paper where a very similar construction is used to define the BRST transformation in terms of topology.

I'm willing to try to connect with Sahlmann's work, by going thru the easiest most pedagogical of his recent papers. I've looked over everything from early 2002 onwards (his PhD thesis is 2002 and called "Coupling Matter to Loop Quantum Gravity") and the most accessible, I think, is

http://arxiv.org/abs/gr-qc/0207112
"When Do Measures on the Space of Connections Support the Triad Operators of Loop Quantum Gravity"

Have a look at it, selfAdjoint, and see if you'd be interested in reading some of it with me
 
  • #6
Yes, I printed it off and I would very much like to go through it with you and work on questions either of us might have along the way. I am currently ready to start on section 2, Measures on the space of generalized Connections.

Do you want to do this here on the boards or by email?
 
  • #7
Originally posted by selfAdjoint
Yes, I printed it off and I would very much like to go through it with you and work on questions either of us might have along the way. I am currently ready to start on section 2, Measures on the space of generalized Connections.

Do you want to do this here on the boards or by email?

I have it printed out too, and section 2 seems like a good place to start (right after the introduction). I see no harm in proceeding with it in this thread (maybe Hurkyl, who has a longterm interest in LQG, or Lethe will join in) and we can always retire into a corner
(either with PF's personal messaging, or email) if there are too many interruptions.

The "projective limit" mechanism excites me. It is a beautiful device for constructing (not just measures but other) utilities on infinite dimensional spaces. The projective limit is how A-L originally defined the measure μ_AL on the space of connections in the landmark 1994 paper "Projective techniques and functional integration for gauge theories"
http://arxiv.org/abs/gr-qc/9411046

What we must understand is two things----invariant Haar measure, which is just the analog of the uniform measure on the circle, and which every compact group has-----and how A-L promoted Haar measure (by projective limit) up to a measure on the space of connections.

It is a neat business. To take the projective limit of a family of measures, you need a DIRECTED SET (basically a partial ordering in which any two elements have a common upper bound), and the GRAPHS in the manifold are a directed set! Given any two you can always merge them to make a larger one containing both.

And associated with each graph there is a bunch of numerical-valued "cylindrical functions" c[A] of the connection. To specify a cyl function you give a graph----a set of N edges embedded in the manifold---and you give a function c(h1, h2, ..., hN) defined on G^N, the Cartesian product of N copies of the group. (The group is probably just SU(2) and has an invariant Haar measure.)
What could be simpler than one of these cylindrical functions?!
To evaluate it on a connection A you just run holonomy on the edges of the graph and get an N-tuple of group elements, to which you apply the function c(h1, h2, ..., hN) and get a number c[A]. (By abuse of notation, I'm using the letter c in two related senses.)
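Since the description above is concrete enough to compute with, here is a minimal numpy sketch of evaluating a cylindrical function (the graph, edge names, and trace function are my own toy choices, not notation from the papers): the "connection" is just an assignment of an SU(2) element to each edge, and c is a trace function on G^3.

```python
import numpy as np

def random_su2(rng):
    """Haar-random SU(2) element, sampled as a uniform unit quaternion."""
    q = rng.normal(size=4)
    q = q / np.linalg.norm(q)
    q0, q1, q2, q3 = q
    return np.array([[q0 + 1j * q1, q2 + 1j * q3],
                     [-q2 + 1j * q3, q0 - 1j * q1]])

# A "connection", for this toy's purposes, just assigns an SU(2) holonomy
# to each edge of a fixed 3-edge graph (a real connection would be
# path-ordered-integrated along each edge).
rng = np.random.default_rng(0)
edges = ["e1", "e2", "e3"]
connection = {e: random_su2(rng) for e in edges}

def c(h1, h2, h3):
    """A sample cylindrical function on G^3: a Wilson-loop-style trace."""
    return np.trace(h1 @ h2 @ h3).real

# c[A]: run holonomy on the edges of the graph, apply c to the N-tuple.
value = c(*(connection[e] for e in edges))
print(value)  # a single real number in [-2, 2]
```

The trace of any SU(2) element (or product of them) lies in [-2, 2], which makes this an easy sanity check.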

To define the measure on the whole of A we just need to know how to integrate cyl functions. A measure can be identified with a linear functional on a function space, corresponding to integrating those functions with that measure. So get ready to do the projective limit----for each graph we need a measure or linear functional defined on the cyl functions with that graph----but for that the old N-fold Haar measure on G^N will work!
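A hedged numerical check of the integration step (my own test function, chosen because its exact answer is known): Haar measure on SU(2) is the uniform measure on the unit 3-sphere of quaternions, and character orthogonality for the spin-1/2 representation gives ∫_G (tr h)² dμ_Haar = 1, so a Monte Carlo average over Haar samples should land near 1.

```python
import numpy as np

# Haar measure on SU(2) is the uniform measure on the unit 3-sphere of
# quaternions, and tr(h) = 2 * (scalar part of the quaternion).
rng = np.random.default_rng(1)
N = 200_000
q = rng.normal(size=(N, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)
traces = 2.0 * q[:, 0]

# Monte Carlo estimate of the one-edge cylindrical-function integral
# of c(h) = (tr h)^2; character orthogonality makes the exact answer 1.
estimate = float(np.mean(traces ** 2))
print(estimate)  # ≈ 1.0
```

For an N-edge graph the same recipe applies with N independent Haar samples per draw, which is exactly the N-fold Haar measure on G^N.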

In their section 2, Sahlmann/Thiemann snuck a density function ƒ into the picture for a bit more generality, but then in the last paragraph they remark that the really classical application of it is with the density identically equal to 1, in which case you get the A-L measure. But you get John Baez measures and other variants by letting ƒ vary.
 
  • #8
I am right with you on most of your exposition, and section 2, but I can't quite get my mind around the statement on page 5 where they have taken their projective limit of spaces of connections (bar_A) and say that the closure of the cylindrical functions on bar_A under the sup norm is a C* algebra. OK so far, then they say the spectrum of the algebra can be identified with bar_A, thus endowing it with a Hausdorff topology.

I have been vaguely aware that this is a pretty standard move in function theory, to get the space back from some algebraic operation on the functions defined on it. But I don't have the foggiest how this is achieved. Could you give me a for dummies quicky? Also the statement of the cited Riesz-Markov theorem?

As you say, this graph projection approach is obviously powerful and very neat. Seems to be worthwhile to work a bit to get my head around it.

I do thank you for pointing to the paper and your excellent presentation.
 
  • #9
Originally posted by selfAdjoint
I am right with you on most of your exposition, and section 2, but I can't quite get my mind around the statement on page 5 where they have taken their projective limit of spaces of connections (bar_A) and say that the closure of the cylindrical functions on bar_A under the sup norm is a C* algebra. OK so far, then they say the spectrum of the algebra can be identified with bar_A, thus endowing it with a Hausdorff topology.

I have been vaguely aware that this is a pretty standard move in function theory, to get the space back from some algebraic operation on the functions defined on it...

Let's look at page 11 of the A-L paper they are drawing from:
http://arxiv.org/abs/gr-qc/9411046

<<2.2 The Gel'fand spectrum of bar_Cyl...

A basic result in the Gel'fand-Naimark representation theory assures us that every Abelian C*-algebra ...with identity is realized as the C*-algebra of continuous functions on a compact Hausdorff space, called the spectrum...Furthermore, the spectrum can be constructed purely algebraically...[as a space of maximal ideals]...>>

We have to understand the space of maximal ideals---I, like you, have seen this over and over as a "pretty standard move". There must be something good about this move that we should understand. I will try to talk about it some---to get and share intuition.

As far as that Riesz-Markov theorem goes, I won't forget and will get around to it, but I want to focus on this maximal ideal space business for a while.

As for thanks, mine to you likewise. This is top-notch math and it's fun to have someone to talk to about it.
 
  • #10
Oh yeah, those maximal ideals. I know I have run into that somewhere recently - not as real exposition, but some kind of historical note in Mathematical Intelligencer. I'll try to find it.
 
  • #11
Via google I found http://www.mth.kcl.ac.uk/~iwilde/notes/calg/s8.ps [Broken], which establishes the theorem. It's in terms of Hilbert spaces and states rather than compact sets and points, but should carry through.

Warning, the file is in postscript.
 
  • #12
the maximal ideals of a ring

this is something which Hurkyl and Lethe may have seen before and could join us on----it is a very cool trick (not being an expert I only halfway recall or understand it, I have to rediscover it, but I dimly know it to be cool)

Some gypsies give you a ring with commutative multiplication and they conceal from you the fact that this ring is simply the continuous complex valued functions on the interval [0,1]. Those crafty bastards! They give it to you in abstract form as a bag of different colored M and M candy which you can add and multiply to get other M and Ms of different color. They don't tell you that the thing originally arose in all its algebraic splendor simply as a space of functions on [0,1]. The next morning the gypsies are nowhere in sight and the chickens are gone too.

But each point x in the interval [0,1] corresponds to an IDEAL in the space of functions on [0,1] consisting of the set of functions which are zero at x

I_x = {f such that f(x) = 0} is an ideal because it has the sticky quality that if you take any g at all and multiply g by some f in the ideal you wind up in the ideal.

for any g in the ring, and any f already in the ideal, gf is in the ideal. Like a black hole or the tarbaby, you touch it, and you're in.

And that I_x is maximal in that it is not contained in any larger ideal except the whole ring.

So there is a one-one correspondence between points x in the interval [0,1] and the set of maximal ideals.

so the gypsies gave you an abstract ring---a mere bag of M&Ms---but you suspect that it arose as the ring of continuous functions on some space X. And you are able to RECOVER the space X as the set of maximal ideals of the ring.
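A toy computation of this recovery, with a five-point set standing in for [0,1] (my own illustration, not from any of the papers): functions on the set form a ring under pointwise operations, the evaluation ideal I_x absorbs multiplication, and the point x can be read back off the ideal.

```python
import numpy as np

X = range(5)  # a toy "space": five points standing in for [0,1]

# Functions on X are just length-5 arrays; pointwise + and * make them a ring.
f = np.array([3.0, 0.0, 1.0, -2.0, 5.0])   # f lies in I_1 since f(1) = 0
g = np.array([1.0, 7.0, 2.0, 2.0, 0.5])    # an arbitrary ring element

# The "sticky" (absorbing) property: g*f still vanishes at x = 1.
assert (g * f)[1] == 0.0

def ideal_point(I_basis):
    """Given functions spanning an evaluation ideal, find the common zero."""
    stacked = np.array(I_basis)
    zeros = np.all(stacked == 0.0, axis=0)
    return int(np.flatnonzero(zeros)[0])

# A spanning set for I_1: the indicator functions of the other four points.
I1 = [np.eye(5)[x] for x in X if x != 1]
print(ideal_point(I1))  # recovers the point 1
```

So the "space of maximal ideals" really does hand back the points, at least in this finite caricature.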

So far it's only heuristic. There is more: a way to put a topology on the space of maximal ideals that makes it compact. Maybe I will post something, or someone else will, about that later.

there's even a hint of a technique for "compactification" here. Start with a space that isn't compact, make a ring of functions that are well behaved on it in some fashion, take the maximal ideals, with some topology and it may turn out to be a compact space "including" the original. I seem to recall the "almost periodic" functions on the real line being handled this way, as functions on a compact ideal space. maybe someone has heard of this. Ashtekar used that method of compactification in a cosmology paper recently calling it "the Bohr compactification of the real line". I couldn't remember having heard of the Bohr compactification before but I harbor a deep suspicion that maximal ideals were being used there too.

And an algebra? What's that? Just a ring with scalar multiplication. Lethe! Is that right? And a *-algebra is just an algebra with something analogous to complex conjugation.
If it is the ring of continuous complex-valued functions on something, then it automatically has complex conjugation----f* is just what you think it should be. And if there is a norm too then it is a C*-algebra. So we are just talking rings----function rings, rings with norms, conjugation, addition & multiplication of functions. It does not get any more natural. You can define these things in your sleep without having ever seen the definition.
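For concreteness, a tiny numerical check (my own example) that pointwise conjugation and the sup norm really do satisfy the C* identity ||f*f|| = ||f||² on a function ring:

```python
import numpy as np

# Complex functions on an 8-point set, with pointwise operations, the sup
# norm, and * given by pointwise complex conjugation: a baby commutative
# C*-algebra.
rng = np.random.default_rng(2)
f = rng.normal(size=8) + 1j * rng.normal(size=8)

def sup_norm(h):
    return float(np.max(np.abs(h)))

# The defining C* identity: ||f* f|| = ||f||^2.
lhs = sup_norm(np.conj(f) * f)
rhs = sup_norm(f) ** 2
print(lhs, rhs)  # equal up to floating-point rounding
```

The identity holds because |f*(m)f(m)| = |f(m)|² pointwise, so taking the sup of the squares is the same as squaring the sup.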

OK and Hanno Sahlmann has taken the holonomy+flux operators of LQG and made a C*-algebra of them (by taming the flux operators to make them bounded) and proved that there is essentially only one irreducible representation of that algebra as operators on a hilbertspace. And it does not even seem to be all that gnarly! Blows me away. Must get down and learn this stuff. "Get my mind around it" to use selfAdjoint's expression.
 
  • #13


selfAdjoint, I just saw your post with the link to discussion of GNS, for which thanks! I personally have not the skill to read PS, and read everything in PDF, but others very likely can profit from that file. I'm inclined to move ahead and catch up on GNS details later. But if anyone is especially interested in this point or has questions about it we could dwell on it for a while.

BTW they are going to take the projective limit of QUOTIENT spaces to get a measure on something they call A/G (notation gets awkward because of lack of symbols), which is the connections with the gauge transformations modded out.

I am referring a lot to the original A-L 1994 paper because it is less condensed than the one we are reading in its coverage of these things.
 
  • #14
Marcus, I have just gone through all the projective development in the 1994 paper, and surprise! I understood it. Ashtekar is a super expositor, and the only difficulty with his relaxed approach in comparison to the lemma, lemma, theorem, corollary, corollary approach of my youth is that the onus of rigor falls on the reader. It's so easy to skim and say "Sure, sure, yes that's right". Well I worked through about half of their basic proofs and sort of fuzzed the other half, but I feel that I am well grounded now and we can discuss the application of this projective technique to Sahlmann's theorem.

BTW I finally twigged to the brand of category theory that would apply here. Isham's toposes (or topoi if you want to be clever) include a partially ordered "index" component. They would be naturals for projective reasoning.
 
  • #15
Originally posted by selfAdjoint
Marcus, I have just gone through all the projective development in the 1994 paper, and surprise! I understood it. Ashtekar is a super expositor,...

You are ahead now and I need to catch up.
I agree with you about Ashtekar's writing style.
In case anyone wants to join us, the paper is

http://arxiv.org/abs/gr-qc/9411046
"Projective techniques and functional integration for gauge theories" by Ashtekar and Lewandowski

maybe this is a good clear place to start reading, and this year's papers by Sahlmann and others attach on to this as an extension

Anyway I should make myself a strong cup of coffee and
do my homework: give a more careful reading to the 1994
paper you mention
 
  • #16
I have now reviewed Sahlmann's paper down through the section on diffeomorphism-invariant measures. I need to re-review a lot of the stuff, starting in section 3 where he suddenly jumps from the principal bundle with group G to spacetime with a group SU(2). I think his graphs also suddenly morph into those simplex edges carrying spinor reps of SU(2), taken I suppose directly from his reference 13, the 1997 paper by A&L where they quantize the area operator. I don't really want to go back and review that paper too - it starts to feel like I'm Achilles and Sahlmann is the tortoise!

What I need is some help unpacking his notation - what are the X's from and to, and in what sense are the f's "co-vectors"? Mappings from something to the complex numbers, but from what? G (aka SU(2))? Any help would be appreciated.
 
  • #17
this is great
having some definite questions from you will help
me get some traction
so I won't keep slipping and sliding all over the place

You are referring, I see, to

http://arxiv.org/abs/gr-qc/0207112
"When do measures..."

I have this in hand and will try to respond. I was proposing this
paper earlier as a good place to start and it may turn out to
be.

Originally posted by selfAdjoint
I have now reviewed Sahlmann's paper down through the section on diffeomorphism-invariant measures. I need to re-review a lot of the stuff, starting in section 3 where he suddenly jumps from the principal bundle with group G to spacetime with a group SU(2). I think his graphs also suddenly morph into those simplex edges carrying spinor reps of SU(2), taken I suppose directly from his reference 13, the 1997 paper by A&L where they quantize the area operator. I don't really want to go back and review that paper too - it starts to feel like I'm Achilles and Sahlmann is the tortoise!

What I need is some help unpacking his notation - what are the X's from and to, and in what sense are the f's "co-vectors"? Mappings from something to the complex numbers, but from what? G (aka SU(2))? Any help would be appreciated.

WOAH! I forgot to respond earlier to your question about the X's.
The clearest explanation is by Lewandowski on page 8 of
http://arxiv.org/abs/gr-qc/0302059
This is the Okolow-Lewandowski paper which we talked a bit about earlier. It presents itself as a parallel presentation of Sahlmann's ideas with maybe a slight correction. Lewandowski is a senior person and it was this paper that put me onto Sahlmann's papers. Because it is more expository and takes more time with the definitions---slow and careful---it is easier in some ways to read than the original. On page 8 it says what those X's are.

Tomorrow I will try to give an intuitive reading of page 8. This is the key step of exponentiating the flux operator thru a given surface S in order to "tame" it, and then including it in the holonomy-flux algebra. This may be the gnarliest place---hope
once thru it we have smooth sailing.
 
  • #18
I'm backing up and getting into low gear with this paper "When do measures..."

To review what he is trying to do, he starts on page 2 with the standard Ashtekar or "new" variables of GR, namely the connection A and the triad field E and he writes two equations (1) and (2) which introduce holonomies and fluxes:

h_e[A] is the holonomy along edge e using connection A

E_{S,f}[E] is the flux of E through surface S, integrated with the help of a covector f that I think of as collapsing the triad so he can get a number. (Each choice of f gives a different value for the integral, so he puts f into the subscript along with the surface S.)
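For what it's worth, the standard shape of the smeared flux (my reconstruction of the usual LQG conventions; the index placement and the factor of 1/2 may differ from Sahlmann's paper) is

```latex
E_{S,f}[E] \;=\; \frac{1}{2}\int_S f_i\,\epsilon_{abc}\,\tilde{E}^{a\,i}\,\mathrm{d}x^b \wedge \mathrm{d}x^c
```

The ε and the density weight of Ẽ turn the triad into a 2-form that can be pulled back to S, while f_i soaks up the Lie-algebra index, so the result is a plain number.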

"All this makes it worthwhile to study the representation theory of the observables (1) and (2) in somewhat general terms."

He notes that representation theory of the algebra of HOLONOMIES is already well studied. In fact the cyclic representations are in 1 to 1 correspondence with measures
on the space A-bar. What is missing is to also include
the FLUXES in the algebra and then to characterize the representations of the larger algebra----if possible as before by
finding they are in 1-1 correspondence with measures.

The general theory of putting reps into correspondence with measures is an extremely efficient (powerful) math tool which I can describe. I should do this. It leads to uniqueness theorems
and a good control of the reps. Stuff you can actually calculate with!

But the section you asked about is where he begins to TINKER with the fluxes so that he can trick them into coming into the algebra and joining the holonomies in one big happy algebra.
I just got a telephone call and must do something else for an hour or so but will be back to this soon
 
  • #19
BTW a cyclic representation is even more general than an irreducible one.

Any irreducible representation is cyclic

So this great Gelfand-Naimark theorem that makes the cyclic reps correspond to measures on a certain maximal ideal space applies very generally

A cyclic rep of some algebraic "thing" (a normed ring, an algebra...) on a hilbertspace is one where there is at least ONE point in the hilbertspace that goes everywhere under the action of the "thing"

say it is a ring R and the rep is denoted π: R --> GL(H), the operators on H
Now π(R) is a whole bunch of operators on H, the collective image of the whole ring
and cyclic just means there is at least one x in H that gets moved around so much by the bunch of operators that it forms a dense set of points in the hilbertspace

that is, the set of points π(R)x is dense in H

x is called a cyclic vector because it "cycles thru" the whole space as you apply successive ring elements to it

I just pulled a book by Naimark himself off a bookshelf and it fell open at a page stating the theorem we need, and it isn't even very hard to prove. Just one of those good ideas that somebody has and that then gets used so much it becomes a classic

Naimark's book is called "Normed Rings", the first American edition 1964 with a brief introduction by Naimark himself written in Russian, and on page 245 he states the theorem

"Every cyclic representation of a complete completely regular commutative ring R with identity is equivalent to its representation, defined by means of the formula

A_x ξ(m) = x(m)ξ(m)

in some space L²(ƒ) where ƒ is an integral in the ring C(M) of all continuous functions on the space M of maximal ideals of the ring R."

What he means by an integral defined on C(M) is what we call a MEASURE defined on M-----it is a means of integrating functions on M. So we normally call such a thing a measure μ and write the integral as ∫ dμ. But it is just a semantic difference.
So his hilbertspace L²(ƒ) we would write as
L²(M, μ), the square-integrable functions defined on the maximal ideal space M using the measure μ on M.

And the representation action is simple MULTIPLICATION.
A point in the hilbertspace is just a function ξ(m) defined on M
and you want to know how a ring element x in R is going to act
on ξ to give another function on M, another point in the hilbertspace.

And he refers to the main theorem on page 230 which says that the ring R is isomorphic to the continuous functions on its maximal ideals. So there is a natural correspondence between x in the ring and x(m), a function on the maximal ideal space.

So this at-first-mysterious equation
A_x ξ(m) = x(m)ξ(m)
finally becomes clear. A point x in the ring corresponds to
a function x(m) defined on M
and the generic representation of the ring goes like this:
x acts on ξ in the hilbertspace simply by multiplying
the two functions x(m) and ξ(m) together to get yet another
function defined on M.
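A finite toy model (mine, not Naimark's) makes the multiplication representation and cyclicity easy to see: take M = {0,...,4} with counting measure, so L²(M) = C⁵ and each A_x is a diagonal matrix.

```python
import numpy as np

# Finite toy model of the theorem: M = {0,...,4} with counting measure,
# H = L^2(M) = C^5, and a ring element x (a function on M) acts by the
# multiplication operator A_x xi(m) = x(m) xi(m), i.e. a diagonal matrix.
M = 5

def A(x):
    """Multiplication operator for the function x on M."""
    return np.diag(x)

# The constant function 1 is a cyclic vector: hitting it with the A_x for
# the indicator functions of the five points already yields a basis of H.
xi = np.ones(M)
orbit = np.array([A(np.eye(M)[m]) @ xi for m in range(M)])
print(np.linalg.matrix_rank(orbit))  # 5: the orbit spans L^2(M)
```

In the finite case "dense" just means "spans", which is why a rank computation suffices here.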

It looks to me as if what Naimark was calling normed rings in 1964 are essentially what are now called C*-algebras or *-algebras. And what he calls the maximal ideal space is also called the Gelfand-Naimark "spectrum" of the ring or algebra.
Terrible how alternative terminology builds up like tree bark. Wish we could slough it off once and for all and have the ideas in clean pristine beauty, but it never happens.
 
  • #20
Thank you for both posts.

I am still puzzled by the f's: they are co-vectors in the case of SU(2) but simple functions in the case of U(1). Is this because of the densities defined on the triads? That's the only place the group might come into the E's that I can see.

Your second post had a lot of stuff I didn't know, and will have to ponder. So this is the "trading reps for measures" you wrote of in your first post?
 
  • #21
Originally posted by selfAdjoint
Thank you for both posts.

I am still puzzled by the f's: they are co-vectors in the case of SU(2) but simple functions in the case of U(1). Is this because of the densities defined on the triads? That's the only place the group might come into the E's that I can see.

Your second post had a lot of stuff I didn't know, and will have to ponder. So this is the "trading reps for measures" you wrote of in your first post?

Hi, I just edited an earlier post to respond to your question
about those X's.
I found page 8 of the Okolow-Lewandowski paper helpful. It goes over Sahlmann's definition of the X's and the exponentiated E's
in a more pedagogical way. I put a link to it in the earlier post.
I will try to give a reading of that page 8 material tomorrow.
 
  • #22
Hi, I followed your advice and looked up the Okolow-Lewandowski paper, and you're right. Where Sahlmann just skims the bases he needs for rigor, they go into details and try to motivate what's going on. In the spirit of trying to show you where my difficulties are I am going to make some quotes from the paper with my understanding of them. Bold type is the paper.

Tilde_E is a vector density of weight 1 which takes values in the space G'* dual to the Lie algebra G'.

A vector density is an integrand, when you change coordinates it changes like a vector but with an extra factor of the Jacobian of the change, raised to the power of the weight (here 1). The dual space of the Lie algebra is the set of homomorphisms from the Lie algebra to the coefficient module. In the case of SU(2) or U(1) this is the complex numbers. More light on this comes from this quote.

...the fields A and tilde_E are expressed by their components with respect to a basis (τ_i), i = 1,...,n, of the Lie algebra G', and the dual basis (τ'_i) of G'*.

The dual basis is the values the homomorphisms take on the Lie algebra basis elements. Now for SU(2) there are three of those, but for U(1) only 1, so tilde_E has vector components for SU(2) but only a scalar component for U(1). Then we see the f's as shown by the following quote.

To define the flux of E fix (i) a finite d-1 submanifold S, (ii) an orientation in the normal bundle to S, and (iii) a function S -> G'.

So the value of f in the expressions will be a vector based on the (τ_i) basis of G'.

So far so good, but I am having trouble in seeing how all of this represents a flux.
 
  • #23
Hi selfAdjoint, I've been reading more in the O-L paper this morning even before turning on the computer (!) and feel growing confidence that this is the right paper to study (plus dipping into the O-L references as necessary). I'm glad your judgement confirms this.

I found the passages in O-L which you quoted, from page 4 and 6, and made some typographical changes for legibility (my own benefit) and do very much appreciate your accurate paraphrasing. Restating something definitely helps and I will try to do some of that with O-L.

Originally posted by selfAdjoint
Hi, I followed your advice and looked up the Okolow-Lewandowski paper, and you're right. Where Sahlmann just skims the bases he needs for rigor, they go into details and try to motivate what's going on. In the spirit of trying to show you where my difficulties are I am going to make some quotes from the paper with my understanding of them. Bold type is the paper.

Tilde_E is a vector density of weight 1 which takes values in the space G'* dual to the Lie algebra G'.

A vector density is an integrand: when you change coordinates it changes like a vector, but with an extra factor of the Jacobian of the change, raised to the power of the weight (here 1). The dual space of the Lie algebra is the set of homomorphisms from the Lie algebra to the coefficient module. In the case of SU(2) or U(1) this is the complex numbers. More light on this comes from this quote.

...the fields A and tilde_E are expressed by their components with respect to a basis (&tau;i), i = 1,...,n, of the Lie algebra G', and the dual basis (&tau;'i) of G'*.

The dual basis is the values the homomorphisms take on the Lie algebra basis elements. Now for SU(2) there are three of those, but for U(1) only 1, so tilde_E has vector components for SU(2) but only a scalar component for U(1). Then we see the f's as shown by the following quote.

To define the flux of E fix (i) a finite d-1 submanifold S, (ii) an orientation in the normal bundle to S, and (iii) a function S -> G'.

So the value of f in the expressions will be a vector based on the (&tau;) basis of G'.

So far so good, but I am having trouble in seeing how all of this represents a flux.
 
  • #24
Hi, I did go to the Okolow-Lewandowski paper and as you say it is much clearer. In order to give you a picture of where I am now, and where you can help me, I will quote some text from O-L (bold) with my own interpretations (normal text).

Tilde_E is a vector density of weight 1 which has values in the space G'* dual to the Lie algebra G'.
...the fields A and tilde_E are expressed by their components with respect to a basis of the Lie algebra G', and the dual basis of G'*.


This makes sense for A, which is a sort of connection. A connection in a bundle takes values in the Lie algebra of the group of the bundle. I motivate this by thinking of the old Ricci way of developing the connection in GR by parallel transport. As you moved the vector it encountered fresh geometry which changed the definition of "parallel", so you got a covariant derivative and a Christoffel symbol or connection component.

In the bundle, G acts on the manifold and to represent that action in practice you go to the Lie algebra, which is the differential actions of the group from the identity. To get the action over a space you would integrate these differentials over the space, and in the range of integration you would encounter at each point different actions, so the differential actions form a field on the manifold and the local values of the field are the images of the Lie algebra under the connection. Thus A.

Now tilde_E. At each point of the manifold you have the image of the Lie algebra, alias the local differentials of the group action. If the group is more complicated than U(1) these differentials will be a vector, with as many components as the Lie algebra has basis elements. For SU(2) the number is 3, for the 3 Pauli matrices which generate its Lie algebra. This is a vector of complex numbers. Now tilde_E is another vector of complex numbers, representing not differential group actions but linear functions of those actions. Which function would depend on the point of the manifold so this vector varies as a field too. I am tentatively interpreting this as a "wave". Each differential group motion in the different basis directions is done with a certain "wave" represented as a complex number with magnitude and phase. Please correct me if you understand this differently.
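The basis/dual-basis pairing for su(2) can be made completely concrete. Here is a toy numpy sketch (my own conventions, not anything from O-L): take &tau;j = -(i/2)&sigma;j as the Lie algebra basis, built from the Pauli matrices, and define the dual basis by a trace pairing; the check is that &tau;'i evaluated on &tau;j gives the identity matrix of pairings.

```python
import numpy as np

# Pauli matrices; tau_j = -(i/2) sigma_j is a standard basis of su(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
tau = [-0.5j * s for s in sigma]

# Dual basis of G'*: since tr(sigma_i sigma_j) = 2 delta_ij, the pairing
# tau'^i(X) = i * tr(sigma_i X) satisfies tau'^i(tau_j) = delta^i_j
def tau_dual(i, X):
    return 1j * np.trace(sigma[i] @ X)

pairing = np.array([[tau_dual(i, tau[j]) for j in range(3)] for i in range(3)])
assert np.allclose(pairing, np.eye(3))  # dual basis pairs to the identity
```

So "tilde_E takes values in G'*" just means: at each point, tilde_E is a linear functional you can expand in the &tau;'i, and its three components are what pair against the three components of anything expanded in the &tau;j.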

To define the flux fix (i) a finite d-1 submanifold S..., (ii) an orientation in the normal bundle to S, and (iii) a function f: S -> G'.

So f is Lie algebra valued and has vector components like A for SU(2) but only a scalar component for U(1). That solves that puzzle.

This is where I am today. I am also very interested in your teaser about trading representations for measures and getting theorems out of that.
 
  • #25
WOAH! I did not see your 8:43 post while I was writing this. So this is not a reply to the immediately preceding, but here it is anyway:
------------
selfAdjoint, have been reading more in the O-L paper especially with your question about the X's in mind and I think it may gradually be dawning on me what the X's are about.

Note that on page 6 they begin section 2.2 "The Ashtekar-Corichi-Zapata Poisson Algebra" and they refer to a 1998 paper by ACZ and on page 8 they define the ACZ algebra of classical variables.

Then they say on page 9, at the beginning of section 2.3 "The Sahlmann holonomy-flux *-algebra", that the quantization simply amounts to assigning every elementary classical variable (in the ACZ Lie algebra) to some operator in the Sahlmann algebra.

I still need to study this to get a satisfactory grasp and am mentioning it here only because it shows their general direction.
They SEEM to be saying, in a not unkind way, "hey this guy Sahlmann didn't invent all this, he just redid the last step, a quantization step, in what seems to be a better way, and got a result. But the basic setup was already there in the 1998 ACZ paper." Verbally they give Sahlmann a lot of credit and well-deserved congratulations. But they also redo everything he did very carefully and link it clearly and explicitly to the earlier work.
I am going to fetch some quotes from that 1998 paper because they motivate the invention of the X's.

I think the secret to understanding them is to picture the configuration space A as a manifold. Only for a minute (because we won't have to actually work with it) think of the set of all connections (on the original 3D manifold) as a manifold itself. And then the Cylindrical functions are numerical-valued functions defined on
A and the question is ...what are VECTOR FIELDS defined on A ?

Well vector fields on any manifold are the "derivations"----they can be identified with the operation of taking directional derivatives of functions defined on the manifold.

And ACZ (that is, Ashtekar and co-workers) found out how to use the triads, together with any surface S and test-function or smearing function f, to actually take a kind of derivative of the cylinder functions defined on the connections.

That is what these XS,f things are----they are in a sense vector fields on A because they are "derivations" defined on Cyl---the cylindrical functions.

This is still a vague notion for me, so I will post this and go hunt some quotes from the ACZ paper that hopefully will clarify it.
 
  • #26
Originally posted by marcus
I think the secret to understanding them is to picture the configuration space A as a manifold. Only for a minute (because we won't have to actually work with it) think of the set of all connections (on the original 3D manifold) as a manifold itself. And then the Cylindrical functions are numerical-valued functions defined on
A and the question is ...what are VECTOR FIELDS defined on A ?

Well vector fields on any manifold are the "derivations"----they can be identified with the operation of taking directional derivatives of functions defined on the manifold.

And ACZ (that is, Ashtekar and co-workers) found out how to use the triads, together with any surface S and test-function or smearing function f, to actually take a kind of derivative of the cylinder functions defined on the connections.

That is what these XS,f things are----they are in a sense vector fields on A because they are "derivations" defined on Cyl---the cylindrical functions.

This is still a vague notion for me, so I will post this and go hunt some quotes from the ACZ paper that hopefully will clarify it.

This sounds right to me. I saw where they defined the X's as derivations, but was unsure as to how to interpret that.

Posting will be spotty during the day today, as I have a bunch of chores to do, but I should be able to concentrate on this tonight.
 
  • #27
The 1998 ACZ paper
http://arxiv.org/gr-qc/9806041 [Broken]
is very much a working paper in the sense that it spends the first half (pages 1-10 out of a total 20 pages) exploring things that don't work and saying why they don't.

it is sending out signals to co-workers "don't try this one, we did and it doesn't work"

so this is not a pedagogical paper and we only need to dip into it at a few points to get what we need for the O-L paper

As usual there is a 3D manifold &Sigma; but now very much center-stage is A the set of all SU(2) connections on that original &Sigma;

And along with A comes Cyl, the ring of cylindrical functions defined on A.

A cylindrical function C&gamma;,c is defined using holonomy on a graph &gamma; and a group-gobbling function c(g1,...,gN) where N is the number of gamma's edges. I'm using ACZ notation and they distinguish between little c : GN --> complex numbers and big C which is a complex valued function of the connection A obtained by running the connection on the graph to get an N-tuple of group elements which you then plug into c(g1,...,gN) to get a complex number C[A].
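This little-c / big-C construction can be sketched numerically. A toy illustration in Python, not anything from the ACZ paper: I simplify by letting the "connection" be one constant su(2) matrix per edge of a 2-edge graph, so the holonomy along an edge reduces to a matrix exponential, and little c is an arbitrary complex-valued function on G x G.

```python
import numpy as np
from scipy.linalg import expm

# Toy "connection": one constant su(2)-valued matrix per edge. With A
# constant along the edge, the holonomy is just the matrix exponential
# (path-ordering is trivial in this special case).
def holonomy(A_edge):
    return expm(A_edge)

# little c : G x G -> C, an arbitrary function of the group elements
def little_c(g1, g2):
    return np.trace(g1 @ g2)  # a Wilson-loop-like trace

# big C : connection data -> C, obtained by running the connection over
# the graph's edges and feeding the resulting group elements to little c
def big_C(A_edges):
    return little_c(*[holonomy(A) for A in A_edges])

# Zero connection -> both holonomies are the identity -> C = tr(I) = 2
zero = np.zeros((2, 2), dtype=complex)
print(big_C([zero, zero]).real)  # prints 2.0
```

The point of the factorization is visible here: big_C knows about the graph and the connection, but little_c is "completely loose" from all of that, it only ever sees group elements.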

now on page 12 ACZ say:
"XS,f is a derivation on the ring of cylindrical functions that is

XS,f : Cyl ---> Cyl

such that the map is linear and satisfies the Leibnitz rule:

XS,f (C + &lambda;C') = XS,f C + &lambda;XS,f C' [[linearity]]

XS,f (CC') = CXS,f C' + (XS,f C)C' [[product rule]]

for all cylindrical functions C and C' and all complex numbers &lambda;"

And they give a definition of how XS,f works on a sample cylindrical function C------the definition is equation (3.3) on page 12 and it uses a more primitive X term defined earlier on page 8. I am going to discuss this definition (3.3) but first I will motivate it in vague terms.

The neat thing about cylindrical functions is that the guts of one is simply this little c(g1,...,gN) function defined on N copies of the group G. It is completely loose from the original manifold and the graph and the connection and diffeomorphisms and all that complicated stuff. It is just a complex-valued function on GN

This incidentally was what made it so easy to define a measure and integrate these things because you say "forget the graph, I will just use Haar measure on GN!" It is amazing that something that simple works.
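That "forget the graph, just use Haar measure on GN" move is easy to play with numerically. A hedged toy sketch (my own, not from any of the papers): Haar measure on SU(2) is the uniform measure on the unit quaternions, and character orthogonality says the Haar integral of |tr g|&sup2; over SU(2) is exactly 1, which gives a known value to check a Monte Carlo integral of a simple cylindrical function against.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_su2():
    # Haar measure on SU(2) = uniform measure on unit quaternions:
    # normalize a 4-dimensional Gaussian sample and pack it into a
    # 2x2 special unitary matrix.
    v = rng.normal(size=4)
    a, b, c, d = v / np.linalg.norm(v)
    return np.array([[a + 1j*b,  c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

# Monte Carlo Haar integral of the cylindrical function |c(g)|^2 = |tr g|^2;
# character orthogonality predicts the exact value 1.
N = 100_000
est = np.mean([abs(np.trace(haar_su2()))**2 for _ in range(N)])
assert abs(est - 1.0) < 0.05
```

The graph never appears: the integration happens entirely on the group side, which is what makes the Ashtekar-Lewandowski measure construction so painless.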

And in this case we can define a "derivation" of big C just by infinitesimally scootching so to speak in the k-th component of the N-tuple of group elements that we are going to feed little c. It is ridiculously simple-minded but apparently it works. You scootch in the k-th component if the k-th edge of the graph hits the surface.

Before we do anything at all we have first specified a surface S and a test function f. And we now look at a Cyl element C, which gives us a graph. And we ONLY MESS with little c at endpoints of edges that are actually in the surface (!)
Now there are only two cases to consider---either the beginning of the edge is in the surface or the endpoint is in the surface---well maybe four cases because the surface is oriented so we can say the edge is "upwards" out of the surface or downwards into it etc etc---it is moderately messy but not hideous.

And if the k-th edge of the graph has an endpoint in the surface we are going to tweak little c (...gk...) in its k-th argument according as the edge of the graph is in or out relative to the surface. And the triads----the flux idea----gets in at this point.
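The "scootching in the k-th argument" can be mimicked with a finite difference. A toy Python check, under simplifying assumptions of my own (G = SU(2), a one-edge graph, derivative taken along one fixed Lie algebra direction &tau;3): the point is just that tweaking a group argument infinitesimally gives a linear operator obeying the Leibniz rule, i.e. a derivation, exactly as in the ACZ definition above.

```python
import numpy as np
from scipy.linalg import expm

tau3 = np.array([[-0.5j, 0], [0, 0.5j]])  # one fixed su(2) direction

def X(c, k, eps=1e-6):
    # Finite-difference version of the left-invariant derivative in the
    # k-th group argument: (X c)(g_1,...,g_N) approximates the derivative
    # of c(..., exp(t*tau3) g_k, ...) at t = 0.
    def Xc(*gs):
        plus, minus = list(gs), list(gs)
        plus[k] = expm(eps * tau3) @ gs[k]
        minus[k] = expm(-eps * tau3) @ gs[k]
        return (c(*plus) - c(*minus)) / (2 * eps)
    return Xc

# Two sample "little c" functions on G^1 and their pointwise product
c1 = lambda g: np.trace(g)
c2 = lambda g: np.trace(g @ g)
prod = lambda g: c1(g) * c2(g)

# Leibniz rule X(c c') = c X(c') + (X c) c', checked at one group element
g = expm(np.array([[0.3j, 0.2], [-0.2, -0.3j]]))
lhs = X(prod, 0)(g)
rhs = c1(g) * X(c2, 0)(g) + X(c1, 0)(g) * c2(g)
assert abs(lhs - rhs) < 1e-6
```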

Now this by any measure of sanity would seem totally loony but here is what Ashtekar et al say about it on page 13:

"Let us summarize. For simple finite dimensional systems, there are two equivalent routes to quantization, one starting from the Poisson algebra of configuration and momentum functions on the phase space and the other from functions and vector fields on the configuration space. It is the second that carries over directly to the present approach to quantum gravity..."

That is what the X's are, they are vector fields (derivations) on the space of connections.

Now back to Okolow-Lewandowski on page 8 they say

"The Ashtekar-Corichi-Zapata algebra can be written as the direct sum

Gothic_A sub ACZ = Cyl + X,

where X are the derivations obtained by taking all the X operators given by all the submanifolds S and [test]functions &fnof; and all the [linear combinations of] commutators."

Inching closer to understanding this. And Sahlmann somehow just put in a factor of i at one point and a minus sign somewhere and got lucky and for some reason hit the jackpot with a Stone-von Neumann theorem and various uniqueness results. Or so it seems, the story is not at all clear to me. I will post this and look at the O-L paper again.
 
  • #28
Here is where I am now. We are converging I think.

A cylindrical function is built out of integrating A (i.e. values of connections A, differential group actions) along an edge e, obtaining, ta-da, group actions, i.e. elements of G, and O&L show that this integration respects the group multiplication. Integrate along the concatenation of two edges and you get the product of the two group elements from the individual edges. And reverse the direction of integration and get the inverse of the group element.

So a cylindrical function is just a function of some n-tuple of these integrated group elements (one per edge e), or as they say, it maps into the n-fold product of G.
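Those two properties, concatenation of edges gives the product of group elements and reversal gives the inverse, are easy to verify numerically in the special case of a constant connection, where the path-ordered exponential collapses to an ordinary matrix exponential (a toy check of mine, not the general path-ordered case):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.4j, 0.1], [-0.1, -0.4j]])  # a constant su(2)-valued sample

def hol(A, length):
    # Holonomy of a constant connection along an edge of the given
    # parameter length; path-ordering is trivial since A does not vary.
    return expm(length * A)

# Concatenating two edges multiplies the group elements...
assert np.allclose(hol(A, 2.0), hol(A, 1.0) @ hol(A, 1.0))
# ...and reversing an edge inverts the group element.
assert np.allclose(hol(A, -1.0), np.linalg.inv(hol(A, 1.0)))
```

This homomorphism property is exactly what O&L mean by the integration "respecting the group multiplication".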

But the domain of a cylindrical function is A, the space of connections, crossed with Gamma the set of relevant edges.

And the X's are literally derivations on the space of cylinder functions CYL; in the O&L Poisson bracket you can see the X's defined by d/ds.

BTW, I didn't understand before that the Lie algebra valued function f was a test or smearing function. I'm going to have to think about that.
 
  • #29
Originally posted by selfAdjoint

BTW, I didn't understand before that the Lie algebra valued function f was a test or smearing function. I'm going to have to think about that.

I am not sure how to think of the function f, although I have called it a test function several times, so I would be glad to
know your thinking about its purpose in the scheme of things
if you get some insight about that

Another thing it seems to be there for is to boil triads down to numbers; letting it vary (and letting the ES,f and the XS,f depend on it) seems more flexible and relaxed than making some choice of basis once and for all (as conceivably you might have to, if you wanted to get rid of the f's). Well, I am not sure about this, so for now I just accept it.

BTW I have gone back to having a look at Sahlmann's
"Some comments on the Representation Theory..."
http://arxiv.org/gr-qc/0207111 [Broken]
which as you say is condensed, just touching the bases
he needs for rigor, and I find that it is more understandable
and I have more confidence in what he says now, after
looking at the O-L and ACZ papers. So things are, as you say,
converging

The definition of the representation "pi" is interesting
we already have it on the Cyl functions (just multiply an L2 function by a cylinder function) but we don't yet have it
on the "derivations" X, what should [pi](X) do to an L2
function? There is a nice construction for this I am beginning
to understand. Great stuff!
 
  • #30
Hi selfAdjoint,

I noticed an inconsequential difference between O-L
and Sahlmann's "Some comments..." that just affects
what you call things.

on page 4 of O-L they introduce two classical canonical conjugate fields
A and E-tilde
A has values in the Lie algebra G'
E-tilde has values in the dual of that G'*
so the "test function" &fnof;, when it makes its appearance later,
is vector valued, not covector
(is that right?, these things are pure formalities I would think)

but Sahlmann has &fnof; be covector
and instead of E-tilde he starts out with a plain E
so I guess for him A and E are a classical canonical conjugate
pair both having values in the Lie algebra G'

Mox Nix, but it's one reason math gets irritating at times
is this bird-like hopping from one notation to another

I'm going to look at the representation of the S. algebra today
and see how the derivation component acts on the hilbert space
 
  • #31
No, the direct and dual are not just notation, they are different things. I know you think worrying about duals and twisted duals in some theories is just for the math's sake, but the whole point of mathematical physics is that the quiddities of the math have physical consequences.

I thought I understood why tilde_E was dual and now I am going to have to go back to Sahlmann and rethink. What does come out of this is a tentative guess; Whatever E is, f is the opposite. Could this be the mathematical way of saying "smearing"?
 
  • #32
Originally posted by selfAdjoint
No, the direct and dual are not just notation, they are different things. I know you think worrying about duals and twisted duals in some theories is just for the math's sake, but the whole point of mathematical physics is that the quiddities of the math have physical consequences.

I thought I understood why tilde_E was dual and now I am going to have to go back to Sahlmann and rethink. What does come out of this is a tentative guess; Whatever E is, f is the opposite. Could this be the mathematical way of saying "smearing"?

You are right of course, part of understanding is keeping track of whether something is in the dual or not. So I should say "minor" difference rather than "inconsequential". I think we both like O-L style of exposition---slower and more pedagogical---but I am inclined to favor Sahlmann's notation where they differ slightly!

I cannot write E-tilde, and Sahlmann uses a plain E and points out that the &fnof; is a covector (what you say is right, they have to be opposite because they have to react together to give a number).

Another thing, merely about vagaries of notation: for the cylinder functions (which you have to write all the time being so basic) I like Sahlmann's choice of C and c
for purely typographical reasons (!)
In the standard PF font I can't see a lot of difference between
the upper and lowercase psi

do you ever envy the Russians who have three alphabets to choose notation from instead of just two? Four if you count Gothic.
Here I am reduced to the penury of C and c, which already mean the complex numbers and the speed of light

must stop complaining and get to the piece de resistance
 
  • #33
I liked your comment on the gothic_A. I suppose there is some involved HTML sequence to generate that, but I think our notation here is fine. I agree with you about C and c. We should settle on that for our posting here. I'll try to go over the relevant part of Sahlmann today and review my thinking. I may have been too reificational, if that's a word.
 
  • #34
Couldn't resist. I just came across this! We should use it for derivations?
 
  • #35
OK, after rereading Sahlmann's "When do Measures..." I see I was wrong about c functions. It is holonomies that eat a connection and an edge and spit out a group element. Then the c functions depend on those group elements, and for the edges in a graph, the c function depends on them separately so it's a function on Gn as you and Sahlmann both said. So my reification still works for the A variables.

But did you notice that in the integral for ES,f (Sahlmann's display (2)) that E enters the integrand as *E? So that's a dual, at least according to the notations I learned. And f isn't multiplied by *E, rather it is evaluated at *E. So let's see. f takes vectors into the reals. E is a nontrivially densitized triad. So *E would take that triad into the reals, producing 3 real numbers. If the triad has beins 1, 2, and 3, associated with densities r1, r2, and r3, and *E takes the beins into reals a1, a2, and a3, then the result of *E I would think would be the triple (a1r1, a2r2, a3r3) and of course f as a covector could map that into some real number. Then you integrate those numbers over S and get ES,f.
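The overall shape of that integral, pair the triad components against f at each point, then integrate over S, can be sketched numerically. All the fields below are invented smooth test data of mine, not anything from Sahlmann's display (2); I take S to be the unit square in the z = 0 plane, so only one normal component of the triad matters and the flux reduces to an ordinary surface integral of f_i E^{z i}.

```python
import numpy as np

# Toy flux E_{S,f}: S = [0,1]^2 in the z = 0 plane, so only the
# z-component E^{z i} of the densitized triad enters. Both fields are
# made-up smooth test data (i = 0,1,2 are the internal directions).
def E(x, y):
    return np.array([x, y, 1.0])      # E^{z i}(x, y)

def f(x, y):
    return np.array([1.0, 1.0, x*y])  # smearing function f_i(x, y)

# Midpoint-rule surface integral of f_i E^{z i} over the unit square
n = 200
xs = (np.arange(n) + 0.5) / n
flux = sum(f(x, y) @ E(x, y) for x in xs for y in xs) / n**2

# Exact value: integral of (x + y + x*y) over [0,1]^2 = 1/2 + 1/2 + 1/4
assert abs(flux - 1.25) < 1e-6
```

So "f boils the triad down to numbers" in a literal sense: at each point of S the covector f eats the triad components and spits out one real number, and the flux is just that number integrated over the surface.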

Also when he writes (p.5) "Now to each graph one can define a certain equivalence class of connections" does he mean two connections are equivalent if their holonomies on the edges of the graph are equal? I think that would produce the behaviors he cites.

Sorry I haven't got any further after your excellent explanations. I plead craziness in real life.
 

1. What is the Sahlmann algebra in loop quantum gravity?

The Sahlmann algebra, also known as the Ashtekar-Lewandowski algebra, is a mathematical structure that is used in the framework of loop quantum gravity. It is a non-commutative algebra that describes the quantum states of the gravitational field.

2. How does the Sahlmann algebra relate to loop quantum gravity?

The Sahlmann algebra is a key component of loop quantum gravity, as it provides a mathematical framework for describing the quantum states of the gravitational field. It is used to define the operators that act on these states and to calculate physical observables.

3. What is the significance of the Sahlmann algebra in understanding the nature of gravity?

The Sahlmann algebra is significant because it allows us to study the quantum nature of gravity, which is essential for understanding the fundamental laws of the universe. It provides a way to reconcile general relativity and quantum mechanics, which are two of the most successful theories in physics but are incompatible with each other.

4. How does the Sahlmann algebra differ from other mathematical structures used in quantum gravity?

The Sahlmann algebra is unique in that it is specifically designed for use in loop quantum gravity. It differs from other mathematical structures, such as the Wheeler-DeWitt equation, in that it is a non-commutative algebra and can handle the non-locality of loop quantum gravity.

5. Are there any current research developments or applications of the Sahlmann algebra in loop quantum gravity?

Yes, there are ongoing research developments and applications of the Sahlmann algebra in loop quantum gravity. Some recent studies have used the algebra to investigate the behavior of black holes and the nature of the early universe. It is also being used to explore the possibility of a quantum theory of gravity in higher dimensions.
