Understanding the Cross Product of Vectors: Perpendicular Properties Explained

In summary, the cross product of two vectors in three-dimensional space produces a new vector that is perpendicular to both of the original vectors. This can be proven by taking the dot product of the cross product with either of the original vectors, which yields zero. The cross product can be generalized to n-1 vectors in n-dimensional space, but it is primarily defined and used in three dimensions, where it has many mathematical and physical applications. Its derivation is not straightforward and historically involved concepts such as quaternions.
  • #1
welle
Why does the cross product of two vectors produce a vector which is perpendicular to the plane in which the original two lie? (Whenever I go to look it up, it is already assumed that it is perpendicular.)
 
  • #2
Originally posted by welle
Why does the cross product of two vectors produce a vector which is perpendicular to the plane in which the original two lie? (Whenever I go to look it up, it is already assumed that it is perpendicular.)

I don't know what you mean by "why". Do you mean, how do you prove that it has this property? Or what?
 
  • #3
yes, how do you prove it?
 
  • #4
Well, you could do
(a × b) · a
and
(a × b) · b

and you would see that both are zero for generic a and b.
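A quick numerical sanity check of this suggestion (my own illustration in Python/NumPy, not from the thread):

```python
import numpy as np

# Two arbitrary ("generic") 3D vectors
rng = np.random.default_rng(0)
a = rng.standard_normal(3)
b = rng.standard_normal(3)

c = np.cross(a, b)
# Both dot products vanish, up to floating-point round-off
print(np.dot(c, a), np.dot(c, b))
```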
 
  • #5
I don't quite understand how that proves that the resulting vector is perpendicular and not at any other angle
 
  • #6
Originally posted by welle
I don't quite understand how that proves that the resulting vector is perpendicular and not at any other angle

The dot product of two vectors is:

[tex]\vec{u}\cdot\vec{v} = |\vec{u}||\vec{v}| \cos\theta[/tex]

where θ is the angle between u and v. If the dot product is zero for two (nonzero) vectors u and v, then the cosine of the angle between them must be zero; since the angle between two vectors lies between 0 and 180 degrees, it must be 90 degrees, i.e. they're perpendicular.
 
  • #7
That's true, but you are talking about the dot product, while my question was about the cross product (vector product): A × B = C.
If vectors A and B lie in a plane, why should the resulting vector C be perpendicular to that plane?
 
  • #8
Becase [itex](\vec{a} \times \vec{b}) \cdot \vec{a} = 0 = (\vec{a} \times \vec{b}) \cdot \vec{b}[/itex].
 
  • #9
Originally posted by welle
That's true, but you are talking about the dot product, while my question was about the cross product (vector product): A × B = C.
If vectors A and B lie in a plane, why should the resulting vector C be perpendicular to that plane?

You asked for a proof that the cross product u x v is perpendicular to u and v. NateTG gave you one: from the definition of u x v (in terms of determinants, or whatever; pick your favorite way of computing a cross product), take the dot product of u x v with either u or v. You will find that the dot product is zero. Therefore, u x v is perpendicular to u and v.
 
  • #10
What definition of cross-product are you using?

A perfectly good definition is:
The cross product of vectors u and v is the vector with length equal to the length of u times the length of v times the sine of the angle between u and v, perpendicular to both u and v, and directed according to the "right hand rule".
 
  • #11
I tried taking the dot product of v x u with v, as Ambitwistor and NateTG suggested, and got it equal to v^(2) u sin(theta) cos(theta); when I used the component method I still couldn't get it to zero, although I probably did it wrong. Can someone show me how to get it to zero? (I apologize for my misunderstanding.)
 
  • #12
[tex]\vec{u} \times \vec{v} \equiv (u_y v_z-u_z v_y)\hat{x} + (u_z v_x-u_x v_z)\hat{y} + (u_x v_y-u_y v_x)\hat{z}[/tex]

so,

[tex]
\begin{equation*}
\begin{split}
(\vec{u} \times \vec{v})\cdot\vec{u} &= (u_y v_z-u_z v_y) u_x + (u_z v_x-u_x v_z) u_y + (u_x v_y-u_y v_x) u_z \\
&= u_x u_y v_z - u_x u_z v_y + u_y u_z v_x - u_x u_y v_z + u_x u_z v_y - u_y u_z v_x \\
&= 0
\end{split}
\end{equation*}
[/tex]

and similarly for [itex](\vec{u} \times \vec{v})\cdot\vec{v}[/itex].
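For anyone who wants to check the algebra above without pushing symbols by hand, here is a small symbolic check (Python with SymPy; my own illustration, not part of the original posts):

```python
import sympy as sp

# Symbolic components of u and v
ux, uy, uz, vx, vy, vz = sp.symbols('u_x u_y u_z v_x v_y v_z')
u = sp.Matrix([ux, uy, uz])
v = sp.Matrix([vx, vy, vz])

w = u.cross(v)              # the component definition of u x v
print(sp.expand(w.dot(u)))  # 0 -- all six terms cancel in pairs
print(sp.expand(w.dot(v)))  # 0
```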
 
  • #13
welle,
What I know is that vector multiplication was developed so as to be useful for different physical and mathematical applications. It could even have been a·b = a^(1/2) × b^(1/3), but that wouldn't be useful!


As far as vector multiplication is concerned, we know that the resulting 'thing' should have a direction too. But what direction should it be given? So we search for a unique direction, and the choice of the perpendicular direction looks natural; moreover, it can be used in different applications. So there.

Tell me if I'm wrong, because I'm still searching for its history.
 
  • #14
What definition of cross-product are you using?
Just to muddy the waters even more, I'll add Spivak's general definition:

If [tex]v_1, \ldots, v_{n-1} \in \mathbb{R}^n[/tex] and [tex]\varphi[/tex] is defined by

[tex]\varphi(w) = \det \begin{pmatrix} v_1 \\ \vdots \\ v_{n-1} \\ w \end{pmatrix}[/tex]

then [tex]\varphi[/tex] is a 1-form over [tex]\mathbb{R}^n[/tex] and [I assume by the Riesz Rep. Theorem] there is a unique [tex]z \in \mathbb{R}^n[/tex] such that the inner product

[tex]\langle w,z\rangle = \varphi(w) = \det \begin{pmatrix} v_1 \\ \vdots \\ v_{n-1} \\ w \end{pmatrix}[/tex]

This [tex]z[/tex] is denoted [tex]v_1\times \cdots \times v_{n-1}[/tex].


Spivak finishes: "It is uncommon in mathematics to have a 'product' that depends on more than two factors. In the case of two vectors [tex]v,w\in \mathbb{R}^3[/tex], we obtain a more conventional-looking product, [tex]v\times w \in \mathbb{R}^3[/tex]. For this reason it is sometimes maintained that the cross product can be defined only in [tex]\mathbb{R}^3[/tex]."

I only think this definition is interesting because I was not aware that the cross product could be generalized to more (or less!) than two vectors (always n-1 vectors in n dimensions) until I saw this.
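Spivak's definition can be turned directly into a computation. Here is a sketch (Python/NumPy; the function name and the cofactor bookkeeping are my own, not Spivak's): since ⟨w, z⟩ = det(v_1; …; v_{n-1}; w) for every w, each z_i is the cofactor of w_i when the determinant is expanded along its last row.

```python
import numpy as np

def generalized_cross(*vs):
    """Spivak-style cross product of n-1 vectors in R^n: the unique z with
    <w, z> = det(v_1; ...; v_{n-1}; w) for every w."""
    V = np.array(vs, dtype=float)        # (n-1) x n matrix of row vectors
    n = V.shape[1]
    assert V.shape[0] == n - 1
    z = np.empty(n)
    for i in range(n):
        # Cofactor of w_i: delete column i, sign (-1)^(n+i) in 1-based indexing
        minor = np.delete(V, i, axis=1)
        z[i] = (-1) ** (n + 1 + i) * np.linalg.det(minor)
    return z

# In R^3 this reduces to the ordinary cross product:
a3 = np.array([1.0, 2.0, 3.0])
b3 = np.array([-1.0, 0.5, 2.0])
print(np.allclose(generalized_cross(a3, b3), np.cross(a3, b3)))  # True

# In R^4 it takes three vectors and is perpendicular to all of them:
v1, v2, v3 = np.eye(4)[:3] + 0.1 * np.arange(12).reshape(3, 4)
z = generalized_cross(v1, v2, v3)
print(np.allclose([z @ v1, z @ v2, z @ v3], 0))  # True
```

Plugging any of the v_k in for w makes the determinant have a repeated row, which is why the perpendicularity comes out automatically.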
 
  • #15
I think that what welle was trying to get at is how the mathematics of it works, not just a demonstration that it works. What might be helpful in this case is a derivation of the cross product, provided that it isn't too complicated and doesn't require knowledge that we don't have (I believe that it originally came from quaternions somehow, but I don't know how quaternions work, either).

For example, the non-trig version of the dot product can be derived by applying the law of cosines. You can also think about it in the opposite direction: given constant magnitudes, the more different two vectors are (the greater the angle between them), the smaller the dot product will be (which is related to the fact that (a+c)(a-c) < a^2), and this coincides with the fact that the cosine decreases as the vectors become more different (have a greater angle between them).

But I'm at a loss for similar explanations for the cross product.
 
  • #16
Basically, it would be extremely good to know how, when, and by whom the cross product was invented, and for what reasons.

Anyone who knows it?
 
  • #17

Here's a good explanation of the historical development of the cross product.


HTH

Sol.
 
  • #18
Originally posted by Welle
I tried taking the dot product of v x u with v, as Ambitwistor and NateTG suggested, and got it equal to v^(2) u sin(theta) cos(theta); when I used the component method I still couldn't get it to zero, although I probably did it wrong. Can someone show me how to get it to zero? (I apologize for my misunderstanding.)

Once again, what is your definition of "cross product"?
One definition of the cross product is:
(ai + bj + ck) × (ui + vj + wk) = (bw - cv)i - (aw - cu)j + (av - bu)k, which is what Ambitwistor was using. With that definition the proof is a tedious but direct calculation.

Another definition is the one I gave before:
The cross product of vectors u and v is the vector with length equal to the length of u times the length of v times the sine of the angle between u and v, perpendicular to both u and v, and directed according to the "right hand rule".

Because you write that, for the dot product of u with uxv, you got
"v^(2) u sin(theta) cos(theta)" you appear to be using the second definition but that includes "perpendicular to both u and v" by definition.
 
  • #19
Originally posted by HallsofIvy
Once again, what is your definition of "cross product"?
One definition of the cross product is:
(ai + bj + ck) × (ui + vj + wk) = (bw - cv)i - (aw - cu)j + (av - bu)k, which is what Ambitwistor was using. With that definition the proof is a tedious but direct calculation.

Another definition is the one I gave before:
The cross product of vectors u and v is the vector with length equal to the length of u times the length of v times the sine of the angle between u and v, perpendicular to both u and v, and directed according to the "right hand rule".

Because you write that, for the dot product of u with uxv, you got
"v^(2) u sin(theta) cos(theta)" you appear to be using the second definition but that includes "perpendicular to both u and v" by definition.

Part of his problem might be that he's using theta twice for two different angles: one of them need not be 90°, but the other must be 90°, and hence the expression is zero.
 
  • #20
Originally posted by welle
Why does the cross product of two vectors produce a vector which is perpendicularto the plane in which the original two lie?(whenever i go to look it up it is already assumed that it is perpendicular)

welle,

I think the answer to your question about the cross product is this: the cross product is a definition. You can't prove a definition, because it's a definition. Similarly, you cannot prove the result of a dot product. Cross products and dot products simply describe what is happening physically.

Most textbooks are unsettling because they do not explain "why" the cross product is defined the way it is. Nearly all of mathematics was developed to solve physical or financial problems. To be honest, I don't think anyone really knows "why", except that it became convention to "define" the cross product the way it is. You have to study the history of the topic.

You can derive the expressions that lead to the result that is "defined" as the cross product by considering a force acting at a distance on an object (torque). When you consider the trigonometry and the definition of torque, what you get is the magnitude of the resulting "vector", and you define its direction (because it makes the most physical sense) to be normal to the plane containing the force and position vectors.

The resulting vector defined by the cross product is sometimes called a "pseudovector" because it is not a result of an agent such as force. For basically the same reason, the centripetal acceleration of a mass times the mass is a "pseudoforce" because it is a force that is not the result of a physical entity.

I can send you the derivations. Let me know.

Hope this helps,

JDH
 
  • #21
Force is a vector, but a vector isn't a force. It's an ordered n-tuple of numbers. The cross product is the operator on the exterior algebra. It so happens that the degree-two part of the exterior algebra is isomorphic (as a vector space) to the degree-one part, which is R^3 for R^3. It is not correct to say it is a pseudo-vector for physical 'force' reasons. That might be one model for you to think about this with, but it isn't true in general. Perhaps you should realize that by no means 'nearly all' of mathematics was devised to solve physical or financial problems...
 
  • #22
I think that I am still a bit confused. How come a product of two vectors can be in one case a scalar and in another a vector? Does the product depend on the nature of the vectors? By which I mean that the product of force and displacement vectors produces a scalar, while that of force and moment arm produces a vector. Then does it make sense to take a dot product or a cross product of any two vectors, and if it doesn't, can one really use both of those techniques to prove something, as Ambitwistor did? Mathematically, Ambitwistor's proof is perfect and leaves no doubt, yet I still don't understand whether the proof required a definition, an assumption, or nothing but pure algebra and geometry. If it was an assumption, and had purely physical grounds, is there no way of proving this assumption?

Originally posted by J. D. Heine,
The resulting vector defined by the cross product is sometimes called a "pseudovector" because it is not a result of an agent such as force. For basically the same reason, the centripetal acceleration of a mass times the mass is a "pseudoforce" because it is a force that is not the result of a physical entity.

Is the direction of a "pseudoforce" dependent on other forces, while the force itself is not?
 
  • #23
Originally posted by welle
I think that I am still a bit confused. How come a product of two vectors can be in one case a scalar and in another a vector? Does the product depend on the nature of the vectors? By which I mean that the product of force and displacement vectors produces a scalar, while that of force and moment arm produces a vector. Then does it make sense to take a dot product or a cross product of any two vectors, and if it doesn't, can one really use both of those techniques to prove something, as Ambitwistor did? Mathematically, Ambitwistor's proof is perfect and leaves no doubt, yet I still don't understand whether the proof required a definition, an assumption, or nothing but pure algebra and geometry. If it was an assumption, and had purely physical grounds, is there no way of proving this assumption?

Originally posted by J. D. Heine,


Is the direction of a "pseudoforce" dependent on other forces, while the force itself is not?

No, whether the result of a product is a number or a vector does not depend upon the vectors- it depends upon the type of product. There are three different "products" or types of multiplication generally defined for vectors:
scalar product- multiply a number by a vector and the result is a vector.
dot product- multiply two vectors and the result is a number.
cross product- multiply two vectors and the result is a vector.

Each has different applications. It might happen that the correct formula for calculating something like angular momentum involves the cross product, while the formula for "work" involves a dot product.
That depends on the formula, not just on what kind of vector is used.
 
  • #24
Perhaps annoyingly, 'scalar product' is also widely used in the sense of dot product (presumably because the answer is a scalar).

It is also called the inner product, and the cross or vector product is called the outer product too.
 
  • #25
Just to make things worse: the term "outer product" is also occasionally used to mean the "tensor" product of two vectors:
The outer product of vectors written <a, b, c>, <d, e, f> in some coordinate system is the tensor with components
[ad ae af]
[bd be bf]
[cd ce cf]
 
  • #26
Let me throw some more geometric intution into this.

I'm going to argue that (1) the cross product defines an area in space, (2) you get the perpendicular vector only after you've defined what "perpendicular" means, (3) you get perpendiculars when you define an inner or dot product.

(1)

We have an intuitive notion that a rectangle is the product of two adjacent sides, a length side and a breadth side. After all, A = l x b. Expressing things with vectors means that we keep track of the directions of things like sides, as well as their magnitudes. So it would be nice if we could speak of multiplying the unit vectors i and j together, to get the unit square in the xy-plane (where we're writing vectors as xi + yj + ...).

It would be even better if we could do that for any two vectors a and b. That is, we would like to have a calculation of the area of the parallelogram with a and b as sides.

The cross product does just that:

If a = a1 i + a2 j and b = b1 i + b2 j

then

a x b = a1 b1 (i x i) + a1 b2 (i x j) + a2 b1 (j x i) + a2 b2 (j x j) (1)

= a1 b2 (i x j) + a2 b1 (j x i) (2)

= (a1 b2 - a2 b1) (i x j) (3)

We get (2) because i x i = 0, etc., which makes sense because a parallelogram with both adjacent sides the same is squashed flat, i.e. has zero area. We get (3) because this is a vector idea of a parallelogram, which lets you keep track of which is the top face and which is the bottom.

You can verify that (a1 b2 - a2 b1) actually is the area of the parallelogram.
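To verify that claim numerically (a throwaway Python check of my own, not from the post): take a = (3, 1) and b = (1, 2); the component a1 b2 - a2 b1 should match the classical area |a||b| sin θ.

```python
import numpy as np

a = np.array([3.0, 1.0])
b = np.array([1.0, 2.0])

det = a[0] * b[1] - a[1] * b[0]        # a1 b2 - a2 b1, the signed area
cos_t = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
area = np.linalg.norm(a) * np.linalg.norm(b) * np.sqrt(1 - cos_t**2)  # |a||b| sin(theta)

print(det)                     # 5.0
print(np.isclose(det, area))   # True
```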

The excellent thing about this, is we now have calculations for a concept of "vector area" which keeps track of its orientation in space; just as "vector" keeps track of the direction of an interval in space.

Even better: the trick still works if, instead of a rectangular Cartesian coordinate system, we use an oblique system with "unit vectors", i.e. basis vectors, which need not be at right angles. In fact, angles haven't even had a mention so far. We can do without angles completely, but if we do, then we also have to get along without a calculation for scalar area: all we get out of the cross-multiplication is a set of "area components" such as (a1 b2 - a2 b1).

And better still: the trick works in any number of dimensions. In 4, 5, 6, ..., n dimensions, vector areas are well defined. The standard name for the result of the cross-multiplication giving a vector area is "bivector"; that's what people always call it when working in n dimensions.

But wait, there's more: we can go on from vector areas in 3,...n-space, to vector volumes, 4-volumes and so on: trivectors, quadvectors,...,n-vectors. The calculation of the components is done by writing out the vector components:

a1 a2 a3 ... an

b1 b2 b3 ... bn

c1 c2 c3 ... cn

... etc

and then calculating all the determinants of the square matrices you get by selecting as many columns as there are rows. The scalar determinant values are the n-vector components.

In 3-space, we get some extra structure for bivectors. There are just as many unit bivectors as there are unit vectors, so there's a natural identification:

i x j = k

j x k = i

k x i = j

That is (and here is the crux of the matter): for every vector area A (i x j) in the xy-plane, there's a vector A k along the z-axis; etc.

(2)

But, is there a special reason why we make the identification above? Why not

i x j = k - i - j

for example, apart from simplicity? That is, why insist that a vector area orientation corresponds to a particular vector direction?

Well the reason is, if we do the trivector calculation for volume, we are just calculating the determinant of

a1 a2 a3

b1 b2 b3

c1 c2 c3

(There's only one component, because there's only one way a volume can be oriented in 3-space.)

The determinant works out as

(a1 b2 - a2 b1) c3 + (a2 b3 - a3 b2) c1 + (a3 b1 - a1 b3) c2

in other words, it's a dot product, (a x b) . c.
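This determinant identity is easy to confirm numerically (my own Python/NumPy check, not from the post):

```python
import numpy as np

# Three generic vectors in R^3
rng = np.random.default_rng(42)
a, b, c = rng.standard_normal((3, 3))

lhs = np.dot(np.cross(a, b), c)            # (a x b) . c
rhs = np.linalg.det(np.array([a, b, c]))   # det with a, b, c as rows
print(np.isclose(lhs, rhs))                # True
```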

Now if we suspend our identification for a moment, and give the vectors which correspond to bivectors new names:

i x j = k*

j x k = i*

k x i = j*

Then we have a sensible orthonormal system if we can say that

i . i* = 1

j . j* = 1

k . k* = 1

and every other combination such as i . j* = 0.

In an orthonormal system we can state that the volume spanned by the unit vectors is 1. Equivalently we can say: every vector in the xy-plane is perpendicular to the unit bivector in the xy-plane: (i x j) . (a i + b j) = k* . (a i + b j) = 0.

If we can say that, then we have an orthonormal or Euclidean vector basis. Otherwise we have an oblique or stretched basis.

(3)

Now here I've been defining "perpendicular" as a relation between bivectors and vectors. But if we make the specific identification:

i* = i

j* = j

k* = k

then our statement above becomes: every vector in the xy-plane is perpendicular to the vector cross product (k = k* = i x j) of the unit vectors in the xy-plane. Which is exactly what we wanted to establish in the first place.

The identification we make between bivectors and vectors defines the metric that we're imposing on the vector space. When we choose

i* = i

j* = j

k* = k

we are saying that

i . i = j . j = k . k = 1

i . j = 0 etc

and that means we're choosing the Euclidean or Pythagorean metric, in which

a . b = a1 b1 + a2 b2 + a3 b3

In summary then:

(1) Cross multiplication defines a vector area or bivector.

(2) In 3-space we can identify bivectors with vectors, specifying a metric.

(3) The metric in which unit bivectors are identified with unit vectors is the metric which makes the unit vectors perpendicular to each other.

Now I realize that one can just say: do the calculation in components and you'll see that (a x b) . a = 0, i.e. they're perpendicular. But taking the "high road" ...

(a) makes the idea of cross product as "vector area" explicit.

(b) opens up the concept of vector areas, vector volumes, etc, which in advanced work are multivector or wedge products, the natural extension of the cross product.

(c) allows for advanced work where we transform into oblique and stretched coordinate systems, where we have to keep track of the metric.
 
  • #27
Originally posted by matt grime
It is also called the inner product, and the cross or vector product is called the outer product too.

i think i saw selfAdjoint saying this in some thread somewhere too. that's two people who have called the cross product an outer product.

so perhaps i am wrong.

in my head, it works like this:

an inner product takes two vectors and makes a scalar

a vector product takes two vectors and makes a new vector

an outer product takes two vectors and makes a tensor

so in my world, you cannot call a cross product an outer product (there is a relationship between the cross product and the exterior product, and the exterior product is constructed from the outer product, but as i said to selfAdjoint, the cross product is not isomorphic to the exterior product)

and it seems to make sense: the inner product somehow takes you to an "inner space" (the scalar field) and the outer product takes you to a larger "outer space", the space of tensors

but perhaps i should stop harping on about what i think the definitions should be, and find out what they actually are
 
  • #28
lethe writes:

i think i saw selfAdjoint saying this in some thread somewhere too. that's two people who have called the cross product an outer product.

so perhaps i am wrong.

in my head, it works like this:

an inner product takes two vectors and makes a scalar

a vector product takes two vectors and makes a new vector

and outer product takes two vectors and makes a tensor

I'm afraid the terminology depends on context.

an inner product takes two vectors and makes a scalar

The only ambiguity here is that sometimes only certain kinds of vector should have inner products formed with other kinds. Really, vectors should only have inner products formed with vectors from the dual space. When you have a metric defined, you can happily convert vectors to duals, so anything goes.

The inner product is also called a dot product or contraction.

a vector product takes two vectors and makes a new vector

Yes, in 3-space with orthonormal basis vectors, i.e. rectangular coordinates.

In other cases, we take two vectors and form a bivector. This is generally called a wedge product or exterior product. However, the Clifford algebra community, following Grassmann, call this an outer product.

and outer product takes two vectors and makes a tensor

Mathematicians usually call that a tensor product. However, in computing usage, e.g. Wolfram Mathematica, it's called an outer product - or at least the operation on components is.
 
  • #29
Originally posted by saski
lethe writes:



I'm afraid the terminology depends on context.
well, i would like to establish whether this terminology as i have it is correct (in context).

The only ambiguity here is that sometimes only certain kinds of vector should have inner products formed with other kinds. Really, vectors should only have inner products formed with vectors from the dual space. When you have a metric defined, you can happily convert vectors to duals, so anything goes.
nonsense. the inner product on a vector space exists between vectors and vectors (not dual vectors). there is no ambiguity here.

it is possible to induce an inner product on the dual space, given an inner product on a vector space, but still, this inner product is between dual vectors and dual vectors.

sometimes the contraction of a (m,n) rank tensor (with n>0) with a vector is called an inner product in differential geometry, but this terminology is misleading; this is certainly not an inner product space.

The inner product is also called a dot product or contraction.
hmm... the dot product is a special case of an inner product, but they are not the same thing. contraction is also not the same thing. both of those are never applied to, for example, Hilbert spaces. Hilbert spaces have an inner product, not contraction (since there are no indices), and not a dot product (unless it is a finite dimensional Hilbert space)


Yes, in 3-space with orthonormal basis vectors, i.e. rectangular coordinates.
nonsense. vector product has nothing to do with orthogonality or basis vectors. consider, for example, the Lie bracket in a Lie algebra. or the matrix multiplication in the algebra of matrices. what have these to do with orthogonal bases? nothing at all. remember, a vector product is a product which is a vector. just like it sounds.


In other cases, we take two vectors and form a bivector. This is generally called a wedge product or exterior product. However, the Clifford algebra community, following Grassmann, call this an outer product.
ahh... now that answers my question. the vector product in a Clifford algebra is sometimes called an outer product.

but you seem a little confused... why would the Clifford algebra people follow Grassmann? Grassmann invented the Grassmann algebra (also known as the exterior algebra; you know, the one with the wedge product). i think the Clifford people must be following Clifford, not Grassmann... eh?

do you mean to imply that Grassmann also called his wedge product an outer product?

Mathematicians usually call that a tensor product. However, in computing usage, e.g. Wolfram Mathematica, it's called an outer product - or at least the operation on components is.
you bet. i usually call it a tensor product too. that is a good name for a product that produces a tensor (just like vector product is a good name for a product that produces a vector)

but i am aware that this is also called an outer product, and that name makes sense to me.

so, you agree with me, and would like to add that Clifford product is sometimes also called outer product. i don't like it, but i will take your word for it (or at least check in a book.)
 
  • #30
lethe writes:

nonsense. the inner product on a vector space exists between vectors and vectors (not dual vectors). there is no ambiguity here.

...

sometimes the contraction of a (m,n) rank tensor (with n>0) with a vector is called an inner product in differential geometry, but this terminology is misleading; this is certainly not an inner product space.

You're right, I'm wrong: see

http://courses.cs.vt.edu/~cs5485/notes/ch1-2/linalg.pdf

hmm... the dot product is a special case of an inner product, but they are not the same thing.

Sure.

contraction is also not the same thing. both of those are never applied to, for example, Hilbert spaces. Hilbert spaces have an inner product, not contraction (since there are no indices)[...]

But I can quote MTW: "Contraction seals off two of the tensor's slots, reducing the rank by two." That has to include the contraction of the tensor product of a contravariant and a covariant vector. If we're not to call that a contraction, what should we call it?

As for Hilbert spaces, see e.g.

http://farside.ph.utexas.edu/teaching/qm/fundamental/node9.html

"Mathematicians term <B|A> the inner product of a bra and a ket." Bras are defined as linear functionals operating on kets, i.e. dual vectors. So <B|A> is a contraction by MTW's definition. However, there's a norm on the Hilbert space allowing one to convert between bras and kets, so <B|A> is also the inner product of |A> and |B>.

nonsense. vector product has nothing to do with orthogonality or basis vectors.

Consider:

[itex] w_i = \epsilon_{ijk} u^j v^k [/itex]

It becomes a vector product only by raising the index on w, which requires a metric, i.e. definition of orthogonality. Or you can write the exterior product:

[itex] u^j v^k - v^j u^k [/itex]

but you need a Hodge star to make a vector out of it, and you need the metric for the Hodge star.

the Lie bracket in a Lie algebra. or the matrix multiplication in the algebra of matrices. what have these to do with orthogonal bases?

The Lie bracket expresses non-commutation of Lie derivative operators; it's not a simple matter of alternating tensor products, and it's certainly not the same thing as a vector product.

And matrix multiplication is entirely the multiplication of row with column vectors. Again, not a vector product.
I stand by what I said.

i think the Clifford people must be following Clifford, not Grassman... eh?

do you mean to imply that Grassman also called his wedge product an outer product?

en.wikipedia.org/wiki/Hermann_Grassmann

"Following an idea of his father, as Grassmann himself quotes in the A1, he invented a new type of product, the exterior product which he calls also combinatorial product (In German: äußeres Produkt or kombinatorisches Produkt)."

I think the usage "outer" caught on as a synonym for "exterior". That's how "outer" is used in e.g.

www.mrao.cam.ac.uk/~clifford/introduction/intro/node5.html

However, I've permitted confusion between that exterior product and the Clifford product.

What the Clifford people do is make a grand basis containing scalar unity, the unit vectors, unit bivectors, unit trivectors, etc; the whole graded sequence of exterior products. Then they define a super-product on the span of that, which they call the "associative" OR "geometric" OR "Clifford" product.

Thanks - I'll watch my expression more carefully.
 
  • #31
Where did you get the impression that orthogonality and metrics are linked, as in the phrase

'requires a metric, ie the definition of orthogonality'

It is perfectly possible to define exterior products without reference to a metric. Indeed, I cannot think of one off hand that uses a metric, but then I'm an algebraist.

Also, I would rather that quote about bras and kets said that physicists and some applied mathematicians use the terminology. The identification of the Hilbert space and its dual implies reflexivity, which is not true for a general Banach space (or, I believe, Hilbert spaces) where we also have linear functionals, though I don't believe you want to see the generalization of Riesz's Representation Theorem (actually, it isn't really a generalization, so much as the Hilbert space version you're used to is a specialization).
 
  • #32
matt grime writes:

Where did you get the impression that orthogonality and metrics are linked, as in the phrase

'requires a metric, ie the definition of orthogonality'

It is perfectly possible to define exterior products without reference to a metric. Indeed, I cannot think of one off hand that uses a metric, but then I'm an algebraist.

You're right, exterior products don't require a metric. However, inner products do, even if the metric is just the boring old sum-of-squares norm.

Here's my reasoning.

(1) How do you measure a vector (in a vector space V), i.e. resolve it to its components v^i? Answer: with a set of linear functions, e_i: V -> R, in other words, covectors. No metric is required.

(2) But here's a harder one. Can you resolve a vector using only the basis vectors?

Obviously, you take one basis vector at a time, project it and drop a perpendicular to it from the vector you want to resolve.

Obvious, that is, in rectangular coordinates - but incorrect in oblique coordinates. In oblique coordinates, the line you have to drop is not a perpendicular to the basis vector, but rather a line in the (n-1)-plane spanned by all the other basis vectors.

In fact what you're doing there is constructing the wedge product of the other basis vectors, so as to use it as a covector. You're calculating

(v ^ e2 ^ ... ^ en) / (e1 ^ e2 ^ ... ^ en)

or in components in some arbitrary basis,

det(v, e2, ..., en) / det(e1, e2, ..., en) [Cramer's Rule]

to get your first component, and so on.

With an orthogonal basis, you don't need to go to that trouble, because each basis vector corresponds directly to its covector, courtesy of the metric.

The calculation for the first component then becomes

(v ^ j ^ k ^ ...) / (i ^ j ^ k ^ ...)

= (v ^ j ^ k ^ ...) / 1

= (v . i)

as we all learned in school.

Let's try it out.

v = (3, 4)

Basis vectors are

a = (1, 0)
b = (1, 1)

(v . a, v . b) = (3, 7)

But

(v ^ b)/(a ^ b) = -1
(a ^ v)/(a ^ b) = 4

(-1) a + 4 b = (3, 4)

Constructing covectors as (n-1)-vectors works, regardless of metric; making inner products with the basis vectors only works in orthogonal systems.
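The oblique-basis example above can be checked numerically. Here's a minimal sketch (NumPy; in two dimensions the wedge product of two vectors is just the 2x2 determinant, so the "wedge quotient" is an ordinary ratio of scalars):

```python
import numpy as np

def wedge2(u, v):
    """2-D wedge product u ^ v, i.e. det([u, v]) -- a scalar in 2-D."""
    return u[0] * v[1] - u[1] * v[0]

# Same data as in the post: oblique basis a = (1, 0), b = (1, 1).
v = np.array([3.0, 4.0])
a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])

# Components via wedge quotients / Cramer's rule -- no metric needed:
alpha = wedge2(v, b) / wedge2(a, b)
beta  = wedge2(a, v) / wedge2(a, b)
print(alpha, beta)                  # -1.0 4.0
print(alpha * a + beta * b)         # [3. 4.] -- reconstructs v

# Naive dot products with the basis give the WRONG components here:
print(v @ a, v @ b)                 # 3.0 7.0, not (-1, 4)
```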
Also, I would rather that quote about bra and ket said that physicists and some applied mathematicians use the terminology. The identification of the Hilbert space and its dual implies reflexivity, which is not true for a general Banach space (or Hilbert spaces, I believe) where we also have linear functionals, though I don't believe you want to see the generalization of Riesz's Representation Theorem (actually, it isn't really a generalization, as much as the Hilbert space version you're used to is a specialization).

It was news to me, actually, that there is an identification of bras with kets. I don't know QM well; the reference I quoted was the first I've heard of Riesz.
 
  • #33
You absolutely do not need a metric. An inner product is just a non-degenerate, positive definite bilinear form. If you are not in characteristic two then the parallelogram law lets you link a NORM and an inner product. I think you are using metric in a non-metric-space sense, which is why I was being pernickety about the term 'metric'.

Actually any basis element 'corresponds' to its dual basis element by definition, and independently of 'angles' - these are all defined for arbitrary vector spaces, not just those over fields of characteristic zero. Note we are assuming finite dimensions.

And in answer to the question of resolving without using a metric:

let e_i be a basis and f_i the corresponding dual basis for any finite-dimensional vector space; then any vector is:

v = sum over i f_i(v) e_i

no mention of norms.
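The metric-free resolution above is easy to exhibit concretely: in finite dimensions, the dual basis f_i is just the rows of the inverse of the matrix whose columns are the e_i. A small sketch (NumPy; the particular basis is my own choice for illustration):

```python
import numpy as np

# An arbitrary basis of R^3: e_1, e_2, e_3 are the COLUMNS of E.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# The dual basis functionals f_i are the ROWS of E^{-1}: f_i(e_j) = delta_ij.
F = np.linalg.inv(E)

v = np.array([2.0, -1.0, 3.0])
coords = F @ v                        # the numbers f_i(v), purely linear algebra
reconstructed = E @ coords            # sum over i of f_i(v) e_i
print(np.allclose(reconstructed, v))  # True -- no norm or inner product used
```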

I think there is some confusion here between "the dual basis" and the dual basis wrt some inner product.

There are more inner products lying around than simply the obvious one defined componentwise, which presumes a choice of basis a priori.

You should always try and work co-ordinate free - a gentleman only takes bases when he has to.

As for QM, usually one thinks about l_2, which has the nice property that

f_y(x) = <x,y> is a linear functional, and that every linear functional arises in this way (note: linear in the first factor, conjugate in the second). This is the Riesz representation theorem in its nice form, and it allows an unnatural isomorphism between the Hilbert space and its dual
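In finite dimensions the Riesz correspondence is easy to exhibit directly: every linear functional is "inner product against some fixed vector". A minimal sketch (NumPy; the coefficient vector c is an arbitrary illustration, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# An arbitrary linear functional on C^n, given by a row of coefficients c:
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f = lambda x: c @ x

# <x, y> with the convention: linear in the first slot, conjugate in the second.
inner = lambda x, y: x @ np.conj(y)

# Riesz: f(x) = <x, y> for the single vector y = conj(c).
y = np.conj(c)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.allclose(f(x), inner(x, y)))  # True
```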
 
  • #34
Originally posted by saski

But I can quote MTW: "Contraction seals off two of the tensor's slots, reducing the rank by two." That has to include the contraction of the tensor product of a contravariant and a covariant vector. If we're not to call that a contraction, what should we call it?
contraction refers to what you do when you get rid of a pair of indices in tensor index notation by summing over them.

i have never ever heard this term applied to Hilbert spaces. i believe the reason is that no one uses indices to label the states of their Hilbert space (which would be a very awkward notation indeed if the space were not finite-dimensional)


"Mathematicians term <B|A> the inner product of a bra and a ket." Bras are defined as linear functionals operating on kets, i.e. dual vectors. So <B|A> is a contraction by MTW's definition. However, there's a norm on the Hilbert space allowing one to convert between bras and kets, so <B|A> is also the inner product of |A> and |B>.
well, i am not a mathematician, so perhaps i shouldn't speak for them, but as far as i can tell, mathematicians do not use bra ket notation at all, because it is extremely sloppy.

and every math book i know defines an inner product as a positive definite bilinear (or perhaps antilinear in one argument) form. some more physically minded texts allow for more general nondegenerate forms (instead of positive definite)

but the point is, it is an operation on two VECTORS



Consider:

[itex] w_i = \epsilon_{ijk} u^j v^k [/itex]

It becomes a vector product only by raising the index on w, which requires a metric, i.e. definition of orthogonality. Or you can write the exterior product:

[itex] u^j v^k - v^j u^k [/itex]

but you need a Hodge star to make a vector out of it, and you need the metric for the Hodge star.
when you say "metric", do you mean Riemannian metric?

anyway, which point are you trying to prove here? that you need orthogonality to define the cross product? i am a little lost with what you are trying to show here, but i have a strong suspicion that it is wrong.
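For what it's worth, the quoted index formula w_i = ε_ijk u^j v^k does reproduce the familiar cross product when the metric is Euclidean (so raised and lowered components coincide), and the result is perpendicular to both inputs. A quick numerical check (NumPy; the vectors u, v are arbitrary test data):

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k] in three dimensions:
# +1 on even permutations of (0, 1, 2), -1 on odd ones, 0 otherwise.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# w_i = eps_ijk u^j v^k, contracted with einsum:
w = np.einsum('ijk,j,k->i', eps, u, v)
print(np.allclose(w, np.cross(u, v)))  # True
print(w @ u, w @ v)                    # 0.0 0.0 -- perpendicular to both
```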



The Lie bracket expresses non-commutation of Lie derivative operators; it's not a simple matter of alternating tensor products, and it's certainly not the same thing as a vector product.
perhaps you should review the definition of Lie algebra.

want me to tell you? ok:
firstly, a Lie algebra is an algebra, which means it is a vector space with a vector product.

this vector product is bilinear (as all products must be), but neither commutative nor associative.

perhaps you can tell me your definition of vector product, so we can make sure we both know what the other is talking about. i told you mine: a product which is vector valued. the Lie bracket certainly satisfies this requirement. if you think otherwise, you are just wrong.
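The claim that the Lie bracket is a bilinear, vector-valued, non-commutative product is easy to check in the simplest example: R^3 with the cross product as the bracket. A quick sketch (NumPy; random test vectors, checked up to floating-point tolerance):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = (rng.standard_normal(3) for _ in range(3))

# R^3 with the cross product is a Lie algebra; use it as the bracket.
bracket = np.cross

# Anticommutativity: [x, y] = -[y, x]
print(np.allclose(bracket(x, y), -bracket(y, x)))  # True

# Jacobi identity: [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
jac = (bracket(x, bracket(y, z))
       + bracket(y, bracket(z, x))
       + bracket(z, bracket(x, y)))
print(np.allclose(jac, 0))  # True
```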
And matrix multiplication is entirely the multiplication of row with column vectors. Again, not a vector product.
I stand by what I said.
see above. please tell me your definition of vector product. if you define the vector product to be the cross product in R3, then of course anything else will not be.



However, I've permitted confusion between that exterior product and the Clifford product.

What the Clifford people do is make a grand basis containing scalar unity, the unit vectors, unit bivectors, unit trivectors, etc; the whole graded sequence of exterior products. Then they define a super-product on the span of that, which they call the "associative" OR "geometric" OR "Clifford" product.
yeah, i know what a Clifford algebra is, but thanks.

according to that link, geometric algebra people call the exterior product the outer product. OK, although this is different from your previous answer, i find it more plausible, so i will accept this answer.

and so according to your quote above, Grassmann invented the Grassmann algebra, aka the exterior algebra. this is exactly what i claimed.
 
  • #35
Originally posted by matt grime
- a gentleman only takes bases when he has to.

that is an awesome quote. Physicsforums should have a quote of the day feature. this should be it at least once a month.

now, i was hoping you would respond to my original question above, which boils down to "what exactly is the definition of an outer product, and why is the vector cross product an outer product?"
 

1. What is the cross product of vectors?

The cross product of two vectors is a vector that is perpendicular to both of the original vectors. It is also known as the vector product.

2. How is the cross product calculated?

The magnitude of the cross product equals the product of the two vectors' magnitudes and the sine of the angle between them; its direction is perpendicular to both vectors, given by the right-hand rule. In components, it can be computed as a 3x3 determinant whose rows are the unit vectors and the two input vectors.

3. What is the significance of the cross product being perpendicular?

The perpendicularity of the cross product is significant because it allows us to determine the direction of the resulting vector. The direction is always perpendicular to the plane formed by the two original vectors.

4. What are some real-world applications of the cross product?

The cross product has many applications in physics, engineering, and computer graphics. It is used to calculate torque, magnetic fields, and 3D rotations, among other things.

5. Can the cross product be used with more than two vectors?

Yes, there is a generalization: in n-dimensional space, the "cross product" of n-1 vectors (the Hodge dual of their wedge product) is a vector perpendicular to all of the original vectors, and its sign depends on the order in which the vectors are multiplied.
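The n-dimensional generalization can be sketched concretely: given n-1 vectors in R^n, the i-th component of their product is the signed minor obtained by deleting column i from the matrix of input rows (cofactor expansion then makes perpendicularity a repeated-row determinant, which vanishes). This is an illustrative implementation, not a standard library function:

```python
import numpy as np

def cross_n(*vecs):
    """Product of n-1 vectors in R^n, perpendicular to all of them."""
    vecs = np.array(vecs, dtype=float)      # shape (n-1, n)
    n = vecs.shape[1]
    assert vecs.shape == (n - 1, n)
    comps = []
    for i in range(n):
        minor = np.delete(vecs, i, axis=1)  # drop column i
        comps.append((-1) ** i * np.linalg.det(minor))
    return np.array(comps)

# Sanity check: for n = 3 this is the ordinary cross product.
print(cross_n(np.array([1.0, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0])))  # [0. 0. 1.]

# In R^4, the product of three vectors is perpendicular to all three:
u = np.array([1.0, 0.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 1.0, 0.0])
w = np.array([3.0, 0.0, 1.0, 0.0])
p = cross_n(u, v, w)
print(p @ u, p @ v, p @ w)                  # all zero, up to rounding
```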
