Tensor notation for vector product proofs

In summary, the conversation discusses using tensor notation to prove a vector identity without writing out all the terms. The confusion arises when assigning indices for the cross product of two arbitrary vectors. It is clarified that dummy indices are arbitrary and that a cyclic rearrangement does not introduce a negative sign. The conversation also briefly touches on finding the magnitude of a cross product in tensor notation.
  • #1
skate_nerd
I am new to tensor notation, but have known how to work with vector calculus for a while now. I understand for the most part how the Levi-Civita and Kronecker Delta symbol work with Einstein summation convention. However there are a few things I'm iffy about.
For example, I have a problem where I am to prove
$$\vec{A}\cdot\vec{B}\times\vec{C} = \vec{B}\cdot\vec{C}\times\vec{A} = \vec{C}\cdot\vec{A}\times\vec{B}$$
using tensor notation to avoid having to write out all the terms.
So I know the very left side of this equation would look like
$$\vec{A}\cdot(\vec{B}\times\vec{C}) = A_i (\vec{B}\times\vec{C})_i = A_i \varepsilon_{ijk} B_j C_k$$
But then I get confused when trying to assign the indices for the next two parts of the equation.
Would the second part look like this:
$$\vec{B}\cdot(\vec{C}\times\vec{A}) = B_j (\vec{C}\times\vec{A})_j = B_j \varepsilon_{jkl} C_k A_l$$
Or would the indices of the epsilon be the same as for the first part (\(\varepsilon_{ijk}\))?
The same confusion goes for the third part. The reason I have this uncertainty is that I know the vector triple product requires introducing two extra indices. So I guess my incomplete understanding of these symbols is leaving me confused here. Thanks in advance for any guidance.
 
  • #2
Even if you rearrange and your indices end up in some order, say jik, you can always rename these new indices lmn and then set lmn = ijk; since the indices are arbitrary dummy indices, you are right back to ijk.

Is that what you were asking? That is what I gathered from the question. If not, clarify where I lost the point.
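To illustrate the relabeling: since every index here is a summed (dummy) index, we can rename \(j \to l\), \(i \to m\), \(k \to n\), and then rename \(l, m, n\) back to \(i, j, k\) without changing the sum:
$$\varepsilon_{jik}A_jB_iC_k = \varepsilon_{lmn}A_lB_mC_n = \varepsilon_{ijk}A_iB_jC_k$$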
 
  • #3
I'm not sure I made myself clear enough...
What I am unsure of in constructing this equation in tensor notation is this: if I take the cross product of two arbitrary vectors B and C (lying in some arbitrary plane), would the indices of the epsilon symbol be different than if I took the cross product of vectors C and A? Or would it make sense to call them both just \(\varepsilon_{ijk}\)? That somehow doesn't seem right to me, but I'm not sure what else would be correct.
 
  • #4
I will explain the first equality, which may help:
\[
\mathbf{A}\cdot(\mathbf{B}\times\mathbf{C}) = \varepsilon_{ijk}A_iB_jC_k
\]
which is just a sum over the repeated indices, with \(A_i\), \(B_j\), \(C_k\) the components of the vectors.

We can then write \(\varepsilon_{ijk}A_iB_jC_k = \varepsilon_{jki}B_jC_kA_i\).

We can either relabel the dummy indices or recall that jki is an even (cyclic) permutation of ijk, so no negative sign is introduced. Thus, we can rewrite the equation as
\[
\varepsilon_{ijk}B_iC_jA_k = \mathbf{B}\cdot(\mathbf{C}\times\mathbf{A})
\]
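As a quick numerical sanity check, here is a minimal NumPy sketch that verifies the cyclic invariance (the random test vectors and the explicit Levi-Civita array are illustrative choices, not part of the proof):

import numpy as np

# Levi-Civita symbol as a 3x3x3 array: +1 on even permutations of (0, 1, 2),
# -1 on odd permutations, 0 otherwise.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))  # three random test vectors

abc = np.einsum('ijk,i,j,k->', eps, A, B, C)  # eps_ijk A_i B_j C_k
bca = np.einsum('ijk,i,j,k->', eps, B, C, A)  # eps_ijk B_i C_j A_k
cab = np.einsum('ijk,i,j,k->', eps, C, A, B)  # eps_ijk C_i A_j B_k

print(np.allclose([abc, bca], cab))           # True: all three agree
print(np.isclose(abc, A @ np.cross(B, C)))    # True: matches the built-in cross product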
 
  • #5
Ahhh wow thank you so much. Now I see what you meant by the indices being arbitrary. And seeing now that this proof works helps me conceptualize better what these indices are actually doing.
 
  • #6
One other quick question, about a slightly different thing, if you don't mind... How would you set up a problem in tensor notation and find the magnitude of a cross product? Like, for instance, the vector A crossed with the vector B? Would it make sense just to dot the product with itself?
 
  • #7

I don't quite understand your question. Can you give me an example problem or question?
 
  • #8
I want to do this in tensor notation:
$$|\vec{A}\times\vec{B}|$$
Magnitude of a cross product of two arbitrary vectors. So the way I know to start is:
$$|\varepsilon_{ijk}A_j B_k|$$
Taking this magnitude in tensor notation is what I am not sure how to do. Would it make sense to write
$$(\varepsilon_{ijk}A_j B_k)(\varepsilon_{ilm}A_l B_m)$$
and then use the "epsilon killer" identity to simplify it? I'm not really sure of any other way to write it.
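For reference, the "epsilon killer" identity invoked here is the epsilon-delta contraction over one shared index:
$$\varepsilon_{ijk}\varepsilon_{ilm} = \delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl}$$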
 
  • #9
I would just interpret it as
\[
\lvert\mathbf{A}\times\mathbf{B}\rvert = \lvert\mathbf{A}\rvert\,\lvert\mathbf{B}\rvert\sin(\theta)
\]
Is there a specific end goal of this question? Should you come up with a certain expression?
 
  • #10
Well actually the goal of the question is to prove that expression you just wrote, using tensor notation. I was just having a hard time even getting started with that whole idea of taking a magnitude in tensor notation.
 
  • #11
Here is a homework assignment of mine from Continuum Mechanics with tensor problems worked out:
http://ubuntuone.com/4qjtmJJmCXpewCPoKNJhXf
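For completeness, here is a sketch of how the epsilon-delta identity finishes the proof, with \(\theta\) the angle between \(\vec{A}\) and \(\vec{B}\):
$$|\vec{A}\times\vec{B}|^2 = (\varepsilon_{ijk}A_jB_k)(\varepsilon_{ilm}A_lB_m) = (\delta_{jl}\delta_{km}-\delta_{jm}\delta_{kl})A_jB_kA_lB_m = |\vec{A}|^2|\vec{B}|^2 - (\vec{A}\cdot\vec{B})^2$$
Substituting \(\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos\theta\) gives \(|\vec{A}|^2|\vec{B}|^2(1-\cos^2\theta) = |\vec{A}|^2|\vec{B}|^2\sin^2\theta\), and taking the square root recovers \(|\vec{A}\times\vec{B}| = |\vec{A}||\vec{B}|\sin\theta\).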
 
  • #12
That really helps a lot. Definitely going to bookmark that, for future reference :D
 

Related to Tensor notation for vector product proofs

1. What is tensor notation for vector product proofs?

Tensor notation is a mathematical notation used to express vector product proofs concisely and efficiently. It uses indices to represent the components of vectors and tensors, making it easier to manipulate and prove complex vector product identities.
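For example, the cross product can be written componentwise as \((\vec{A}\times\vec{B})_i = \varepsilon_{ijk}A_jB_k\), where \(\varepsilon_{ijk}\) is the Levi-Civita symbol.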

2. How is tensor notation different from other notations?

Tensor notation differs from other notations, such as matrix notation, in that it allows a more general representation of vector products. It also relies on the Einstein summation convention, which streamlines calculations and proofs.
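For example, under the summation convention a repeated index is summed over, so \(A_iB_i\) abbreviates \(A_1B_1 + A_2B_2 + A_3B_3 = \vec{A}\cdot\vec{B}\).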

3. Can tensor notation be used for all types of vector products?

Yes, tensor notation can be used for all types of vector products, including dot (scalar) products and cross products. It is a versatile notation that applies across many mathematical and scientific fields.

4. Are there any drawbacks to using tensor notation for vector product proofs?

One potential drawback of tensor notation is that it can be difficult to understand for those who are not familiar with it. It also requires some knowledge of linear algebra and tensor calculus, which may be challenging for some individuals.

5. How can I learn more about tensor notation for vector product proofs?

There are many resources available online and in textbooks that can help you learn more about tensor notation. Some recommended topics to study include tensor algebra, tensor calculus, and the Einstein summation convention. Additionally, practicing with examples and seeking guidance from experts in the field can also be beneficial in understanding and mastering tensor notation for vector product proofs.
