Linear & Vector Algebra: Kronecker delta & Levi-Civita symbol

  • #1
Dr. Gonzo
Hello all. Happy to have finally found this forum, sorry that it took so long!

I'm working through a Vector Algebra tutorial and I am having much difficulty with the concepts of the Kronecker delta and the Levi-Civita symbol. I can't fully grasp either of them intuitively.

From what I've been able to gather, [tex] \delta_{ij}= \left\{\begin{array}{cc}1,&\mbox{ if }i=j,\\0, & \mbox{ if } i\neq j\end{array}\right.[/tex]

I'm pretty sure this means that, in the case of two vectors I and J with components [tex] i_{1},i_{2},i_{3}[/tex] and [tex] j_{1},j_{2},j_{3} [/tex] that [tex] i_{1}= j_{1},i_{2}= j_{2}, i_{3}= j_{3} [/tex]. In other words, vectors I and J are parallel and equal. Is this correct? Or am I missing something here?

And regarding the Levi-Civita symbol, it's been pointed out to me that another name for this is the anti-symmetric tensor. Unfortunately, this hint has not helped my understanding one iota. So far, my understanding of this symbol states that it takes Kronecker's delta one step further into a third dimension or plane.

I understand that [tex] \epsilon_{ijk}=\left\{\begin{array}{cc}1,&\mbox{ if }ijk=123,\ 231,\mbox{ or }312,\\-1, & \mbox{ if } ijk=321,\ 213,\mbox{ or }132,\\0, & \mbox{ otherwise}\end{array}\right.[/tex]

I am completely confused by these 123 values. What do they represent? Perhaps understanding that will help me complete this puzzle.

I appreciate all and any help on this!
 
  • #2
The best way to see what the Kronecker Delta does is to create a 3x3 matrix where the Kronecker Delta determines each number.

So, for the first slot, 11 (row 1, column 1), you put the number 1 (because 1 = 1). For slot 12 (row 1, column 2) you put the number 0 (because 1 ≠ 2), and for slot 13 (row 1, column 3) you put 0 again.

Keep working that out and see what you find.
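For concreteness, that fill-in-the-matrix exercise can be sketched in Python (the name `kronecker_delta` is just an illustrative choice):

```python
# Kronecker delta: 1 when the indices match, 0 otherwise.
def kronecker_delta(i, j):
    return 1 if i == j else 0

# Fill a 3x3 matrix whose (i, j) slot is delta_ij.
matrix = [[kronecker_delta(i, j) for j in range(1, 4)] for i in range(1, 4)]
for row in matrix:
    print(row)
```

Running it prints the 3x3 identity matrix, which is the point of the exercise.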
 
  • #3
Also, the Kronecker Delta has many more uses. The whole point is just to see how the function works.

The Levi-Civita symbol is almost the same, but uses three indices.

http://mathworld.wolfram.com/PermutationSymbol.html

The site also lists some relations between the permutation symbol and the Kronecker Delta. Proving those identities, or simply playing around with them, may lead to a better understanding.
 
  • #4
JasonRox said:
The best way to see what the Kronecker Delta does is to create a 3x3 matrix where the Kronecker Delta determines each number.
Thanks for the post. I kind of figured that this delta went something like this, but what confused me is when to use 1 and when to use 0. Let me explain:

Am I to understand that for every matrix of this delta, the outcome will always be the same? If I simply put in a 1 for 1=1, 2=2 and 3=3 (a diagonal from top left to bottom right) and zeros in all other places, I don't see how this has any significance. If vector I = vector I, then there should & would be 1's in all places in the matrix. If vector I does not = vector J, then there would only be one's in some spots in the matrix. But when to use which? That's my dilemma. I don't understand where the values in the matrix come from, and then what to do with them.
 
  • #5
Dr. Gonzo said:
Thanks for the post. I kind of figured that this delta went something like this, but what confused me is when to use 1 and when to use 0. Let me explain:

Am I to understand that for every matrix of this delta, the outcome will always be the same? If I simply put in a 1 for 1=1, 2=2 and 3=3 (a diagonal from top left to bottom right) and zeros in all other places, I don't see how this has any significance. If vector I = vector I, then there should & would be 1's in all places in the matrix. If vector I does not = vector J, then there would only be one's in some spots in the matrix. But when to use which? That's my dilemma. I don't understand where the values in the matrix come from, and then what to do with them.

I'm not sure exactly what you are speaking of, but I'll give you another example using vectors.

Let [itex]B=\{v_1, v_2, ...\}[/itex] be an orthonormal set.

So, we have...

[tex]v_i \cdot v_j = \delta_{ij}[/tex]

where [itex]\cdot[/itex] is the dot product (inner product).
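As a quick numerical sanity check (a Python sketch, taking the standard basis of R³ as the orthonormal set):

```python
# The standard basis of R^3 is an orthonormal set.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def dot(u, v):
    # Euclidean dot product of two 3-tuples
    return sum(a * b for a, b in zip(u, v))

# v_i . v_j equals delta_ij: 1 on matching indices, 0 otherwise.
for i, vi in enumerate(basis):
    for j, vj in enumerate(basis):
        assert dot(vi, vj) == (1 if i == j else 0)
```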

Ok, I reread your post. Can you tell what section you are working on that requires these functions? Maybe I can help you much more on where the values come from.
 
  • #6
http://planetmath.org/encyclopedia/LeviCivitaPermutationSymbol3.html

There's another website you can look at.

Do you happen to know what an even/odd permutation is?
 
  • #7
The letters i and j in [itex]\delta_{ij}[/itex] denote indices, not vectors. They are numbers ranging from 1 through the dimension of your vector space. [itex]\delta_{ij}[/itex] is the (i, j)-th component in the matrix representation of the Kronecker delta[1], according to whatever basis you've chosen.


[1]: if you're actually using tensor notation, then since they're both subscripts, both indices are selecting columns -- this would actually be a 1xn² matrix that's partitioned into n rows of length n. But if you make the appropriate transpositions, you can treat it as an nxn matrix. A notation for the nxn identity matrix would be [itex]\delta_i^j[/itex].
 
  • #8
OK...things are slowly getting a little bit clearer. I understand that [tex]v_i\bullet v_j=\delta_{ij}=(v_{ix}*v_{jx})+(v_{iy}*v_{jy})+(v_{iz}*v_{jz})=(1*0)+(0*1)+(0*0)=0[/tex]

I also read planetmath's description of permutations (something I am/was totally unfamiliar with): http://planetmath.org/encyclopedia/Permutation.html I'm not entirely sure, but my initial take is that even/odd permutations have to do with the number of transpositions... which I do not understand. Is it as simple as a vector with three elements having 3! permutations?

As for the section I'm working on, it's a tutorial titled Vector Algebra and an Introduction to Matrices. Topics in this tutorial covered prior to this problem include:
1. Euclidean Vectors
1.1 Basic Features and Conventions
2. Vector Manipulations
2.1 Scalar Multiplication
2.2 Vector Addition
2.3 Scalar Product
2.4 The Vector Product
2.5 Vector Components
3. Subscript Algebra
3.1 Summation Convention
3.2 The Kronecker Delta
3.3 The Levi-Civita Symbol

I am a third-semester physics undergrad with previous math that covers through Calculus III. This is my only experience with Linear Algebra, and my tutorial book is very flimsy in the way of instruction. I've gathered most of my understanding by doing my own research. Unfortunately, I just can't seem to get a handle on this one by myself.
 
  • #9
Dr. Gonzo said:
Thanks for the post. I kind of figured that this delta went something like this, but what confused me is when to use 1 and when to use 0. Let me explain:

Am I to understand that for every matrix of this delta, the outcome will always be the same? If I simply put in a 1 for 1=1, 2=2 and 3=3 (a diagonal from top left to bottom right) and zeros in all other places, I don't see how this has any significance. If vector I = vector I, then there should & would be 1's in all places in the matrix. If vector I does not = vector J, then there would only be one's in some spots in the matrix. But when to use which? That's my dilemma. I don't understand where the values in the matrix come from, and then what to do with them.

As someone already mentioned, the indices i,j,k usually label the *components* of some vector, with the definition [itex]v_1 = v_x, v_2= v_y, v_3 =v_z [/itex].

You cannot know what values of i, j, k to use unless you have some context in which to use those quantities! The *context* will tell you what value to use. What I am saying is that what you have are the *definitions* of these symbols, but it's only when you use them in some specific problem that you will know what values to use for the indices.

The most famous example of using the Levi-Civita symbol is through the definition of the cross product. One way to write [itex] \vec A \times \vec B = \vec C [/itex] is to give a rule to calculate each component of the vector C by saying

[itex] C_i = \sum_{j,k=1}^3 \epsilon_{ijk} A_j B_k [/itex]

(often people do not write the summation explicitly; omitting the sums like this is called using the ''Einstein summation convention'').

Let's say you want the x component of C using the above formula. That fixes i to be 1. Then you get

[itex] C_1 = \epsilon_{123} A_2 B_3 + \epsilon_{132} A_3 B_2 [/itex]

(there are many more terms, for example [itex] \epsilon_{112} A_1 B_2[/itex] and so on, but they are all zero because the Levi-Civita symbol is zero whenever two indices are equal. So there are really 9 terms, corresponding to the 9 values that j and k may take, but of those nine terms only the two given above are nonzero).

Now plug in the values for the levi-civita symbol for those indices and you get

[itex] C_1 = A_2 B_3 - A_3 B_2 [/itex] which translates to [itex] C_x = A_y B_z - A_z B_y [/itex] as expected.

As an exercise, check that you get the correct results for C_y and C_z!
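That check is easy to automate. A Python sketch of the recipe (`eps` and `cross` are illustrative names, with indices written 1-based to match the formulas):

```python
def eps(i, j, k):
    # Levi-Civita symbol for indices in {1, 2, 3}
    if (i, j, k) in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
        return 1   # even permutations of 123
    if (i, j, k) in [(3, 2, 1), (2, 1, 3), (1, 3, 2)]:
        return -1  # odd permutations of 123
    return 0       # any repeated index

def cross(A, B):
    # C_i = sum over j, k of eps_ijk A_j B_k (lists are 0-based, hence -1)
    return tuple(
        sum(eps(i, j, k) * A[j - 1] * B[k - 1]
            for j in range(1, 4) for k in range(1, 4))
        for i in range(1, 4)
    )

# Agrees with the component formulas C_x = A_y B_z - A_z B_y, etc.
print(cross((1, 2, 3), (4, 5, 6)))  # (-3, 6, -3)
```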

So you see, when you use the symbols in specific problems you will know what values to take for the indices...

Hope this clarifies things

Patrick
 
  • #10
I gave the most famous example of the use of the Levi-Civita symbol. The most famous use of the Kronecker delta is to define the scalar product.
One can write [itex] \vec A \cdot \vec B = \sum_{i,j=1}^3 A_i B_j \delta_{ij} [/itex]. You should verify that this leads to the usual result [itex] A_x B_x + A_y B_y + A_z B_z [/itex].
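That verification can also be done numerically; a Python sketch (0-based indices, `delta` and `dot_via_delta` are illustrative names):

```python
def delta(i, j):
    # Kronecker delta
    return 1 if i == j else 0

def dot_via_delta(A, B):
    # A . B = sum over i, j of A_i B_j delta_ij
    return sum(A[i] * B[j] * delta(i, j)
               for i in range(3) for j in range(3))

A, B = (1, 2, 3), (4, 5, 6)
# The delta kills every term with i != j, leaving A_x B_x + A_y B_y + A_z B_z.
print(dot_via_delta(A, B))  # 32, i.e. 1*4 + 2*5 + 3*6
```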

Patrick
 
  • #11
I'm going to use superscripts and subscripts... but I suppose you don't need to distinguish the two, and can just write everything as a subscript.


Again, to restate what's being said so far, if we're working in 3-space, and we've chosen a basis on our vector space...


When we have a vector, we can write down its coordinates. I'm going to write coordinates as superscripts (in particular, the following are not exponents):

[tex]
\vec{v} = \left[ \begin{array}{c} v^1 \\ v^2 \\ v^3 \end{array} \right]
[/tex]

So what is the i-th component of our vector [itex]\vec{v}[/itex]? It's [itex]v^i[/itex].


When we have a covector, we do the same, but we use subscripts.

[tex]
\hat{\omega} = [ \omega_1 \, \omega_2 \, \omega_3 ]
[/tex]

The i-th component of the covector [itex]\hat{\omega}[/itex] is then [itex]\omega_i[/itex].


When we have a matrix, we do the same:

[tex]
\mathbf{A} = \left[ \begin{array}{ccc}
A_1^1 & A_2^1 &A_3^1 \\
A_1^2 & A_2^2 &A_3^2 \\
A_1^3 & A_2^3 &A_3^3
\end{array} \right]
[/tex]

What is the (i, j)-th component of our matrix [itex]\mathbf{A}[/itex]? It's [itex]A^i_j[/itex].


But the point is that we treat [itex]\vec{v}[/itex], [itex]\hat{\omega}[/itex], and [itex]\mathbf{A}[/itex] as simply being arrays of numbers. The first two were one-dimensional arrays, and the latter was a two-dimensional array. Subscripts and superscripts are how we indicate the actual elements of those arrays.
 
  • #12
Levi-Civita symbols make long equations of tensors (or vectors if you prefer) short. For example, I assume that you have come across antisymmetric tensors? Well, if you haven't, they are tensors which change sign whenever you interchange any pair of its indices.

[tex]V(x^1,\dots,x^i,\dots,x^j,\dots,x^n) = -V(x^1,\dots,x^j,\dots,x^i,\dots,x^n)[/tex]

Notice that V is an antisymmetric tensor with n components (or a vector in n dimensions, where each [itex]x^k[/itex] denotes one of its components). Notice that if we then interchange another pair of components (that is, doing it twice) the sign changes again! So we actually have

[tex]V(x^{\pi(1)},x^{\pi(2)},\dots,x^{\pi(n)}) = (-1)^{\pi}V(x^1,x^2,\dots,x^n)[/tex]

where [itex]\pi[/itex] is some permutation of n indices. Now, you've read up on permutations, so you know what an even permutation and an odd permutation are?

If you have (instead of n indices) say 3 indices, 1,2,3. Then 2,1,3 is an odd permutation because I have swapped a pair of indices once (an odd number of swaps). What about 2,3,1? Well, I swapped 1,2 and then 1,3, which is two swaps. So this is an even permutation.

When it comes to generalizing this to n indices instead of just three, it is important to keep track of your signs, because for an antisymmetric tensor we get a negative sign every time we swap indices an odd number of times! And the negative sign disappears when we do it an even number of times (thus the [itex](-1)^{\pi}[/itex] bit out the front).

It gets a little more complicated, but the Levi-Civita symbols make this kind of index swapping very concise.
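The even/odd bookkeeping above can be made concrete by counting inversions, which gives the same parity as counting transpositions; a Python sketch (`parity` is an illustrative name):

```python
def parity(perm):
    # +1 for an even permutation, -1 for an odd one.
    # Count inversions: pairs that appear out of their natural order.
    inversions = sum(
        1
        for a in range(len(perm))
        for b in range(a + 1, len(perm))
        if perm[a] > perm[b]
    )
    return 1 if inversions % 2 == 0 else -1

print(parity((1, 2, 3)))  # 1: the identity is even
print(parity((2, 1, 3)))  # -1: one swap, odd
print(parity((2, 3, 1)))  # 1: two swaps, even
```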
 
  • #13
You said what you were looking for was an intuitive picture of the Kronecker delta and Levi-Civita symbol? Well, sounds like you're pretty damn close with the delta:

Am I to understand that for every matrix of this delta, the outcome will always be the same? If I simply put in a 1 for 1=1, 2=2 and 3=3 (a diagonal from top left to bottom right) and zeros in all other places, I don't see how this has any significance. If vector I = vector I, then there should & would be 1's in all places in the matrix. If vector I does not = vector J, then there would only be one's in some spots in the matrix. But when to use which? That's my dilemma. I don't understand where the values in the matrix come from, and then what to do with them.

I don't quite get why you start talking about vectors - you're on the right track when you say you have a matrix which is zero everywhere, except for a diagonal string of 1's from top left to bottom right. This matrix is of pretty massive significance: it's the identity matrix! So, that's one way of understanding the delta (as giving the components of the identity matrix). In linear algebra, doing calculations by writing out full matrices can take ages - by talking about indices instead, things can be done much quicker. Having the Kronecker delta allows us to talk about the identity matrix in terms of indices, which is good news given the identity matrix's ubiquity.

Having a delta pitch up in your calculations is usually good news, since it often allows you to cull some of your summations. A sum over one index (let's say j) of [tex]\delta_{ij}[/tex] is basically an assertion that we can scrap the sum over j, get rid of the delta symbol and replace all incidences of j by i. For example:

[itex]\sum_{j=1}^3 \sum_{k=1}^3 A_{ij} \delta_{ij} B_{jk} = \sum_{k=1}^3 A_{ii} B_{ik}[/itex]

(It's worth checking this out, either by writing out the summation explicitly or rewriting it in matrix notation.)
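One way to check it is to evaluate both sides numerically for arbitrary matrices; a Python sketch with random entries:

```python
import random

def delta(i, j):
    # Kronecker delta
    return 1 if i == j else 0

n = 3
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]

# Summing over j with a delta_ij present just renames j to i.
for i in range(n):
    for k in range(n):
        lhs = sum(A[i][j] * delta(i, j) * B[j][k] for j in range(n))
        rhs = A[i][i] * B[i][k]  # sum over j scrapped, j replaced by i
        assert abs(lhs - rhs) < 1e-12
```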

PS as is probably clear, I'm kind of new and haven't quite got the hang of the latex stuff yet - could someone tell me what I've done wrong? Thanks.
 
  • #14
The L-C symbol is substantially trickier to get an intuitive handle on - I still don't really have a 'gut feel' for it, even after a few years. The closest thing is probably to understand it as a kind of 'shuffler', which swaps about entries inside matrices in a cyclic sort of way. Hence, it crops up in areas where these shufflings are desired: for example, in taking the determinant of a matrix (think of how each element gets multiplied by the elements which it does not share a row or a column with), or in taking the cross product of two vectors (where a scrambling is needed to ensure that the new vector is perpendicular to both the old ones), or in the curl of a vector field (which you may well not have done yet - roughly, the 'spiraliness' of a field). Of course, all three of these applications are linked - I was taught to find the cross product [itex] \vec u \times \vec v [/itex] by taking the determinant of the following matrix:

[tex]\left[ \begin{array}{ccc}\mathbf{i} & \mathbf{j} & \mathbf{k} \\u_1 & u_2 & u_3 \\v_1 & v_2 & v_3\end{array} \right][/tex]

and the curl is found by taking the cross product of the nabla operator with the vector field.

As to what manner of thing the L-C symbol is - it is indeed an example of a rank-3 tensor; these are (as you mention) extensions of the concept of matrices into three dimensions (if you like, you can imagine a three-dimensional array of numbers - a cube, 3 numbers to a side - containing the 27 entries, just as a vector can be displayed as a one-dimensional array of 3 entries, and a linear operator as a two-dimensional array of 9 entries). However, it's not an extension of the Kronecker delta specifically - as you can see, the two objects do quite different things to the matrices they act upon.
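Viewing the symbol as a 3×3×3 array also ties the determinant picture together; a Python sketch (0-based indices; `det3` and `cube` are illustrative names):

```python
def eps(i, j, k):
    # Levi-Civita symbol with 0-based indices
    if (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        return 1
    if (i, j, k) in [(2, 1, 0), (1, 0, 2), (0, 2, 1)]:
        return -1
    return 0

# The 3x3x3 'cube' of 27 entries mentioned above.
cube = [[[eps(i, j, k) for k in range(3)] for j in range(3)] for i in range(3)]

def det3(M):
    # det(M) = sum over i, j, k of eps_ijk * M[0][i] * M[1][j] * M[2][k]
    return sum(eps(i, j, k) * M[0][i] * M[1][j] * M[2][k]
               for i in range(3) for j in range(3) for k in range(3))

print(det3([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # 24
```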
 

Related to Linear & Vector Algebra: Kronecker delta & Levi-Civita symbol

1. What is the Kronecker delta symbol?

The Kronecker delta symbol, denoted by δ, is a mathematical symbol used to represent a function that takes on the value of 1 if the two indices are equal, and 0 if they are not equal. It is commonly used in linear algebra and vector calculus to simplify equations and represent certain relationships between variables.

2. What is the Levi-Civita symbol?

The Levi-Civita symbol, denoted by ε, is a mathematical symbol that represents a function used to describe the relationship between three indices. It is commonly used in vector calculus to describe cross products and in differential geometry to define the curvature of a surface.

3. How are the Kronecker delta and Levi-Civita symbols related?

The Kronecker delta and Levi-Civita symbols are related in that they are both functions used to represent certain relationships between indices. In particular, contracting two Levi-Civita symbols over one index produces Kronecker deltas: εijkεlmk = δilδjm − δimδjl, where i, j, k, l, and m are indices.

4. What are some applications of the Kronecker delta and Levi-Civita symbols?

The Kronecker delta and Levi-Civita symbols have a wide range of applications in mathematics and physics. They are commonly used in linear algebra, vector calculus, and differential geometry to simplify equations and represent relationships between variables. They also have applications in quantum mechanics, electromagnetism, and fluid dynamics.

5. Are there any limitations to using the Kronecker delta and Levi-Civita symbols?

While the Kronecker delta and Levi-Civita symbols are useful tools in mathematics and physics, there are some limitations to their use. For example, they can only be used to represent certain types of relationships between variables, and may not be applicable in all situations. Additionally, they may become more complex to work with when dealing with higher dimensions or more complicated equations.
