Orthogonal Transformation in Euclidean Space

In summary, to prove the result we need to construct a linear map that takes the vector \(u\) to \(v\). This can be done by choosing an orthonormal basis for the Euclidean space \(V\) whose first vector \(e_1\) is a multiple of \(u\), defining the map on this basis, and extending it linearly to the whole space. The map is orthogonal because it carries one orthonormal basis to another, the second basis being chosen so that the image of \(e_1\) is a multiple of \(v\).
  • #1
Sudharaka
Hi everyone, :)

Here's one of the questions that I encountered recently along with my answer. Let me know if you see any mistakes. I would really appreciate any comments, shorter methods etc. :)

Problem:

Let \(u,\,v\) be two vectors in a Euclidean space \(V\) such that \(|u|=|v|\). Prove that there is an orthogonal transformation \(f:\, V\rightarrow V\) such that \(v=f(u)\).

Solution:

We assume that \(u\) and \(v\) are nonzero; otherwise the result holds trivially.

Let \(B\) denote the symmetric bilinear form associated with the Euclidean space. Let us define the linear transformation \(f\) as,

\[f(x)=\begin{cases}x&\mbox{if}&x\neq u\\v&\mbox{if}&x=u\end{cases}\]

It's clear that \(B(f(x),\,f(y))=B(x,\,y)\) whenever \(x,\,y\neq u\). Also \(B(f(u),\,f(u))=B(v,\,v)\), and since \(|v|=|u|\) implies \(B(v,\,v)=B(u,\,u)\), we have \(B(f(u),\,f(u))=B(u,\,u)\).

It remains to show that, \(B(f(x),\,f(u))=B(x,\,u)\) for \(x\neq u\).

\[B(f(v+u),\,f(v+u))=B(f(v),\,f(v))+2B(f(v),\,f(u))+B(f(u),\,f(u))\]

Also since \(v+u\neq u\),

\[B(f(v+u),\,f(v+u))=B(v+u,\,v+u)=B(v,\,v)+2B(v,\,u)+B(u,\,u)\]

Using the above two results and the fact that \(B(u,\,u)=B(v,\,v)\) we get,

\[B(f(v),\,f(u))=B(v,\,u)\]

Now consider \(B(f(x+v),\,f(x+u))\).

Case I: \(x+v\neq u\)

\[B(f(x+v),\,f(x+u))=B(f(x),\,f(x))+B(f(x),\,f(u))+B(f(v),\,f(x))+B(f(v),\,f(u))\]

Also,

\[B(f(x+v),\,f(x+u))=B(x+v,\,x+u)=B(x,\,x)+B(x,\,u)+B(x,\,v)+B(v,\,u)\]

Using the above two results and the fact that \(B(f(v),\,f(u))=B(v,\,u)\) we get,

\[B(f(x),\,f(u))=B(x,\,u)\]

Case II: \(x+v=u\)

\[B(x,\,v)=B(u-v,\,v)=B(u,\,v)-B(v,\,v)\]
\[B(x,\,u)=B(u-v,\,u)=B(u,\,u)-B(v,\,u)\]
Therefore, \[B(x,\,u)=-B(x,\,v)~~~~~~(1)\]

\[B(f(x),\,f(u))=B(f(u-v),\,f(u))=B(f(u),\,f(u))-B(f(v),\,f(u))=B(v,\,v)-B(v,\,v)=0\]

Then, since \(B(f(x),\,f(u))=B(x,\,v)=0\), by (1) we get \(B(x,\,u)=0\).

\[\therefore B(f(x),\,f(u))=B(x,\,u)\]
 
  • #2
Sudharaka said:
Let us define the linear transformation \(f\) as,

\[f(x)=\begin{cases}x&\mbox{if}&x\neq u\\v&\mbox{if}&x=u\end{cases}\]
The problem with this is that the map $f$ can never be linear (unless $u=v$).

It may help to think in terms of a simple example. In the space $V=\mathbb{R}^2$, let $u=(1,0)$ and $v=(0,1)$. The only orthogonal transformations taking $u$ to $v$ are a rotation of the whole space through a right angle, or a reflection of the whole space in the line $y=x$. Either way, the transformation has to shift just about every vector in the space. The map that just takes $u$ to $v$ and leaves everything else fixed is not linear, and certainly not orthogonal.
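
To see the failure of linearity concretely (assuming $u \neq 0$ and $v \neq u$): linearity would force $f(2u) = 2f(u) = 2v$, but since $2u \neq u$, the definition gives $f(2u) = 2u \neq 2v$. So $f$ cannot be linear.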

To prove this result, you need to construct a linear map $f$ taking $u$ to $v$. The way to do that is to define $f$ on an orthonormal basis for $V$ and then extend it by linearity to a map on the whole of $V$. Start by constructing an orthonormal basis $\{e_1,e_2,\ldots,e_n\}$ such that $e_1$ is a multiple of $u$. Then do the same for $v$, showing that there is an orthonormal basis $\{g_1,g_2,\ldots,g_n\}$ such that $g_1$ is a multiple of $v$. You can then define $f$ by $f(e_k) = g_k$ for $1\leqslant k\leqslant n$.

It should then be straightforward to check that the map $f$ is orthogonal.
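
For readers who want to experiment, here is a minimal numerical sketch of this construction in Python with NumPy, assuming the standard dot product on $\mathbb{R}^n$; the helper name orthonormal_basis_starting_with and the concrete pair $u,v$ (with $|u|=|v|$) are my own choices for illustration:

Code:
import numpy as np

def orthonormal_basis_starting_with(w):
    # Extend w/|w| to an orthonormal basis of R^n by running
    # Gram-Schmidt over the standard basis vectors.
    basis = [w / np.linalg.norm(w)]
    for e in np.eye(len(w)):
        r = e - sum(np.dot(e, b) * b for b in basis)
        if np.linalg.norm(r) > 1e-10:       # skip vectors already in the span
            basis.append(r / np.linalg.norm(r))
    return np.array(basis)                  # rows are the basis vectors

u = np.array([3.0, 4.0, 0.0])
v = np.array([0.0, 0.0, 5.0])               # |u| = |v| = 5

E = orthonormal_basis_starting_with(u)      # rows e_1, ..., e_n
G = orthonormal_basis_starting_with(v)      # rows g_1, ..., g_n

# f sends e_k to g_k; as a matrix, [f] = G^T E, since E maps e_k to the
# k-th standard basis vector and G^T maps that to g_k.
F = G.T @ E

print(np.allclose(F @ u, v))                # True: f(u) = v
print(np.allclose(F.T @ F, np.eye(len(u)))) # True: f is orthogonal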
 
  • #3
Opalg said:
Start by constructing an orthonormal basis $\{e_1,e_2,\ldots,e_n\}$ such that $e_1$ is a multiple of $u$. Then do the same for $v$, showing that there is an orthonormal basis $\{g_1,g_2,\ldots,g_n\}$ such that $g_1$ is a multiple of $v$. You can then define $f$ by $f(e_k) = g_k$ for $1\leqslant k\leqslant n$.

Thanks so much for the informative reply. I think I am getting the idea. First we can choose an orthonormal basis \(\{e_1,e_2,\ldots,e_n\}\) such that \(e_1\) is a multiple of \(u\). Then if we rotate this basis by a certain angle so as to align \(e_1\) with \(v\) we could get the basis \(\{g_1,g_2,\ldots,g_n\}\). Since \(|v|=|u|\) our new basis would have \(g_1\) a multiple of \(v\). Am I correct? Or is there a more formal way of doing this? :)
 
  • #4
I think what Opalg is getting at is this:

If you have a linear map that takes a basis to a basis, it is certainly invertible.

For $e_1$, we can always choose $e_1 = u/|u|$, and use something like Gram-Schmidt to turn any basis extension we create into an orthogonal basis:

$\{e_1,\dots,e_n\}$.

The same process is then used to create the 2nd basis:

$\{g_1,\dots,g_n\}$, where $g_1 = v/|v|$.

We then DEFINE, for any $x \in V$:

$T(x) = T(c_1e_1 + \cdots + c_ne_n) = c_1g_1 + \cdots + c_ng_n$.

Note that $T(u) = T(|u|e_1) = |u|T(e_1) = |v|T(e_1)$ (since $|u| = |v|$)

$= |v|g_1 = |v|(v/|v|) = v$.

Now proving orthogonality is a bit of a mess to write explicitly, but the idea is this:

Since both bases are ORTHOGONAL (we can actually insist on orthonormal by scaling the basis vectors to unit vectors), we have:

$B(e_i,e_i) = B(g_i,g_i) = 1$
$B(e_i,e_j) = B(g_i,g_j) = 0,\ i \neq j$.

So if:

$x = c_1e_1 + \cdots + c_ne_n$
$y = d_1e_1 + \cdots + d_ne_n$, then:

$B(x,y) = B(c_1e_1 + \cdots + c_ne_n,\,d_1e_1 + \cdots + d_ne_n)$

$\displaystyle = \sum_{i,j} c_id_jB(e_i,e_j) = \sum_i c_id_i$

by the bilinearity of $B$ and the orthogonality of our basis.

Similarly, evaluating $B(T(x),T(y))$ gives the same answer, and there you go.
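
As a quick numerical sanity check of this identity (a sketch assuming NumPy; the random orthogonal matrix $Q$ below stands in for $T$):

Code:
import numpy as np

rng = np.random.default_rng(0)

# The QR factorization of a random matrix is a standard way to
# produce a random orthogonal matrix.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

x = rng.standard_normal(4)
y = rng.standard_normal(4)

# B(T(x), T(y)) = B(x, y) for the standard dot product
print(np.isclose((Q @ x) @ (Q @ y), x @ y))  # True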
 
  • #5
Forgive the double-post, but I thought I would give an explicit example for $\Bbb R^3$.

For our symmetric Euclidean bilinear form, I will use the standard dot-product. It is possible to use so-called "weighted" inner products, but they just add needlessly complicated calculations to the scenario.

For our first vector, we will take $u = (2,0,0)$. For the second, we will take $v = (1,\sqrt{2},1)$, which I think are perfectly reasonable choices.

For our first basis, the usual $\{(1,0,0),(0,1,0),(0,0,1)\}$ will do quite nicely. The second basis is a bit of a pain to come up with; we start with the unit vector:

$g_1 = (\frac{1}{2},\frac{\sqrt{2}}{2},\frac{1}{2})$.

To get a basis, we'll just add in $(0,1,0)$ and $(0,0,1)$ and apply Gram-Schmidt:

First, we calculate:

$(0,1,0) - \frac{(\frac{1}{2},\frac{\sqrt{2}}{2},\frac{1}{2})\cdot(0,1,0)}{(\frac{1}{2},\frac{\sqrt{2}}{2}, \frac{1}{2})\cdot(\frac{1}{2},\frac{\sqrt{2}}{2}, \frac{1}{2})}(\frac{1}{2},\frac{\sqrt{2}}{2},\frac{1}{2})$

$= (\frac{-\sqrt{2}}{4},\frac{1}{2},\frac{-\sqrt{2}}{4})$

and normalizing this gives us:

$g_2 = (\frac{-1}{2},\frac{\sqrt{2}}{2},\frac{-1}{2})$

Finally, we calculate:

$(0,0,1) - \frac{(\frac{1}{2},\frac{\sqrt{2}}{2},\frac{1}{2})\cdot(0,0,1)}{(\frac{1}{2},\frac{\sqrt{2}}{2}, \frac{1}{2})\cdot(\frac{1}{2},\frac{\sqrt{2}}{2}, \frac{1}{2})}(\frac{1}{2},\frac{\sqrt{2}}{2},\frac{1}{2}) - \frac{(\frac{-1}{2},\frac{\sqrt{2}}{2},\frac{-1}{2})\cdot(0,0,1)}{(\frac{-1}{2},\frac{\sqrt{2}}{2},\frac{-1}{2})\cdot(\frac{-1}{2},\frac{\sqrt{2}}{2},\frac{-1}{2})}(\frac{-1}{2},\frac{\sqrt{2}}{2},\frac{-1}{2})$

$= (\frac{-1}{2},0,\frac{1}{2})$ which upon normalization gives us:

$g_3 = (\frac{-\sqrt{2}}{2},0,\frac{\sqrt{2}}{2})$.

It is clear, then, that the orthogonal linear mapping we are looking for is given by the matrix (relative to the standard basis for $\Bbb R^3$):

$[T] = \begin{bmatrix}\frac{1}{2}&\frac{-1}{2}&\frac{-\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2}&\frac{\sqrt{2}}{2}&0\\ \frac{1}{2}&\frac{-1}{2}&\frac{\sqrt{2}}{2} \end{bmatrix}$

which obviously (heh!) has determinant 1, and is orthogonal, and moreover:

$T(u) = T(2,0,0) = (1,\sqrt{2},1) = v$.
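
If you want to double-check that matrix numerically, here is a short NumPy snippet (my own verification, not part of the computation above):

Code:
import numpy as np

s = np.sqrt(2.0) / 2.0
T = np.array([[0.5, -0.5,  -s],
              [  s,    s, 0.0],
              [0.5, -0.5,   s]])

print(np.allclose(T.T @ T, np.eye(3)))    # True: T is orthogonal
print(np.isclose(np.linalg.det(T), 1.0))  # True: determinant 1
print(T @ np.array([2.0, 0.0, 0.0]))      # [1.  1.41421356  1.], i.e. v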
 
  • #6
Deveno said:
We then DEFINE, for any $x \in V$:

$T(x) = T(c_1e_1 + \cdots + c_ne_n) = c_1g_1 + \cdots + c_ng_n$.

Deveno said:
Forgive the double-post, but I thought I would give an explicit example for $\Bbb R^3$.

Hi Deveno,

Thanks very much for both of your posts. After reading them I understood almost everything that is required to solve the problem. Now I think I should read more about the Gram-Schmidt process. :)
 

Related to Orthogonal Transformation in Euclidean Space

1. What is an orthogonal transformation in Euclidean space?

An orthogonal transformation in Euclidean space is a linear transformation that preserves distances and angles between vectors. It is a linear isometry: it preserves the shape and size of objects in the space.

2. How is an orthogonal transformation different from a regular transformation?

An orthogonal transformation differs from a general linear transformation in that it preserves distances and angles, while an arbitrary linear transformation can change the shape and size of objects. In other words, an orthogonal transformation is an isometry, while a general linear transformation need not be.

3. Can an orthogonal transformation be represented by a matrix?

Yes, an orthogonal transformation can be represented by a matrix. In fact, every orthogonal transformation can be represented by an orthogonal matrix, which is a square matrix with orthonormal columns (i.e. mutually perpendicular unit vectors).

4. What are some applications of orthogonal transformations in real life?

Orthogonal transformations have many applications in real life, such as in computer graphics, image processing, and 3D modeling. They are also used in physics and engineering to describe the motion of objects and in statistics to analyze data.

5. How do I know if a transformation is orthogonal?

A transformation is orthogonal if its corresponding matrix is an orthogonal matrix. To check whether a matrix \(Q\) is orthogonal, multiply its transpose by the matrix and see if the result is the identity matrix: the transformation is orthogonal exactly when \(Q^TQ=I\). Equivalently, a linear transformation that preserves distances and angles is orthogonal.
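
For example, a minimal NumPy check of this criterion might look like the following (illustrative code, not from the discussion above):

Code:
import numpy as np

# Rotation by 90 degrees in the plane
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Q is orthogonal exactly when Q^T Q is the identity
print(np.allclose(Q.T @ Q, np.eye(2)))  # True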
