Basic Fixed Matrices: Proving Equations with Formal Calculations

  • Thread starter converting1
  • Start date
  • Tags
    Matrices
  • #1
converting1
For fixed ##m \geq 2##, let ##\epsilon(i,j)## denote the ##m \times m## matrix defined by ##\epsilon(i,j)_{rs} = \delta_{ir} \delta_{js}##.

When m = 2000, show by formal calculations that

i) ## \epsilon (500,1999) \epsilon (1999,10) = \epsilon (500,10) ##

ii) ## \epsilon (1999,10) \epsilon (500,1999) = 0##

hence generalise to ##\epsilon (i,j) \epsilon (k,l)## and express the generalisation as a single equation using the Kronecker delta symbol

my attempt thus far:

i) rewriting this as ##\delta _{500r} \delta _{1999s} \delta _{1999r} \delta _{10s} ##, we note that ## \delta _{1999s} \delta _{1999r} = \epsilon (1999,1999) ##, which is 0 everywhere except in the (1999, 1999) position, but I'm not sure how I can conclude that this gives ##\epsilon (500,10)##

that's all I have so far, unfortunately
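For anyone who wants a quick numerical sanity check of (i) and (ii), here is a minimal NumPy sketch (the eps helper name and the 1-based to 0-based index shift are just conventions for this sketch, not part of the exercise):

```python
import numpy as np

def eps(i, j, m=2000):
    """Matrix unit eps(i, j): a 1 in row i, column j (1-based), zeros elsewhere."""
    E = np.zeros((m, m))
    E[i - 1, j - 1] = 1.0
    return E

# (i)  eps(500, 1999) eps(1999, 10) should equal eps(500, 10)
assert np.array_equal(eps(500, 1999) @ eps(1999, 10), eps(500, 10))

# (ii) eps(1999, 10) eps(500, 1999) should be the zero matrix
assert not (eps(1999, 10) @ eps(500, 1999)).any()

print("both identities check out for m = 2000")
```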
 
  • #2
hi converting1! :smile:
converting1 said:
i) rewriting this as ##\delta _{500r} \delta _{1999s} \delta _{1999r} \delta _{10s} ## …

nooo …

##(\epsilon (i,j) \epsilon (k,l))_{r,s} = (\epsilon (i,j))_{r,t} (\epsilon (k,l))_{t,s}## :wink:
 
  • #3
tiny-tim said:
hi converting1! :smile:nooo …

##(\epsilon (i,j) \epsilon (k,l))_{r,s} = (\epsilon (i,j))_{r,t} (\epsilon (k,l))_{t,s}## :wink:

oh damn, I am so stupid. Here is what I have so far; let me know if my logic is flawed:

i) ## (\epsilon (500,1999))_{r,t} (\epsilon (1999,10))_{t,s} = \delta _{500r} \delta _{1999t} \delta _{1999t} \delta _{10s} ##

we know that ##\delta _{1999t} \delta _{1999t} = 1 ## if t = 1999, else it will be 0, so we get the required result

for ii) ## (\epsilon (1999,10))_{r,t} (\epsilon (500,1999))_{t,s} = \delta _{1999r} \delta _{10t} \delta _{500t} \delta _{1999s} ##; we now see that ## \delta _{10t} \delta _{500t} = 0 ##, since if t = 10 then ## \delta _{500t} = 0 ##, and if t = 500 then ## \delta _{10t} = 0 ##; hence we always get 0, the desired result

now generalising,

consider ## (\epsilon (i,j) \epsilon (k,l))_{rs} = (\epsilon (i,j))_{r,t} (\epsilon (k,l))_{t,s} = \delta _{ir} \delta _{jt} \delta _{kt} \delta _{ls} ##

## \delta _{jt} \delta _{kt} = 1 ## if j = k, else it's 0, hence we get ## \delta _{ir} \delta _{ls}## if j = k, i.e. we get ## \epsilon (i,l) ## if j = k, else we get 0.

now for the final part I am not sure how to express it as a single equation, could you give me a little hint?
 
  • #4
hi converting1! :smile:
converting1 said:
we know that ##\delta _{1999t} \delta _{1999t} = 1 ## if t = 1999, else it will be 0, so we get the required result

ah, you're still not thinking in terms of the summation convention:

##\delta _{1999t} \delta _{1999t} = 1 ## :wink:

similarly ##\delta _{10t} \delta _{500t} = 0 ##

once you've convinced yourself of this, you should be able to do the general case :smile:
 
  • #5
tiny-tim said:
hi converting1! :smile:ah, you're still not thinking in terms of the summation convention:

##\delta _{1999t} \delta _{1999t} = 1 ## :wink:

similarly ##\delta _{10t} \delta _{500t} = 0 ##

once you've convinced yourself of this, you should be able to do the general case :smile:

hmm, I'm not so sure I understand, what if t = 20? i.e. ##\delta _{1999,20} \delta _{1999,20} = 0 \times 0 = 0##?
 
  • #6
also take a 2x2 matrix as an example, ## \epsilon (1,2) _{22} = \delta _{12} \delta_{22} = 0 \times 1 = 0 ##
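Written out numerically, that 2x2 example looks like this (a minimal sketch assuming the same definition of the matrix unit):

```python
import numpy as np

m = 2
eps12 = np.zeros((m, m))
eps12[0, 1] = 1.0        # 1-based position (1, 2) -> 0-based index (0, 1)
print(eps12)
# [[0. 1.]
#  [0. 0.]]
# in particular eps(1,2)_{22} = delta_{12} * delta_{22} = 0 * 1 = 0
```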
 
  • #7
converting1 said:
hmm, I'm not so sure I understand, what if t = 20? i.e. ##\delta _{1999,20} \delta _{1999,20} = 0 \times 0 = 0##?

ah, that's right, you don't understand …

you can't substitute one value for t into ##\delta_{1999,t}\,\delta_{1999,t}##

the einstein summation convention means that that's a ##\sum## over all values of t …

##\sum_t \delta_{1999,t}\,\delta_{1999,t}## :wink:
 
  • #8
tiny-tim said:
ah, that's right, you don't understand …

you can't substitute one value for t into ##\delta_{1999,t}\,\delta_{1999,t}##

the einstein summation convention means that that's a ##\sum## over all values of t …

##\sum_t \delta_{1999,t}\,\delta_{1999,t}## :wink:
really? our lecturer defined ## \epsilon (i,j) _{r,s} = 1 ## if r = i and s = j, else it is 0, so I am just using that definition

so how do you know ## \delta_{1999,t} \delta_{1999,t} = 1 ##?
 
  • #9
has your lecturer not taught you the summation convention (the einstein summation convention)?
 
  • #10
tiny-tim said:
has your lecturer not taught you the summation convention (the einstein summation convention)?

No, not yet; however, in my applied mathematics lecture we just started it today, so I know very little about it (almost nothing).
 
  • #11
converting1 said:
No, not yet; however, in my applied mathematics lecture we just started it today, so I know very little about it (almost nothing).

oooh :redface:

ok: this sort of question is very difficult to do without either the summation convention or a ##\sum##

whenever you see a ##\delta_{r,s}##, you know it's going to be multiplying something else with either an r or an s (or both)

and since all these things are parts of matrices, you know it's actually going to be a matrix multiplication:

A = BC means ##A_{r,s} = \sum_t B_{r,t}\,C_{t,s}## (summing over all values of t)

you can either write it with a ##\sum## (like that),

or you can use the summation convention and just write ##A_{r,s} = B_{r,t}\,C_{t,s}## (with the "##\sum##" being understood but not written)

that enables you to say "##\delta_{r,s}## replaces any s in the thing next to it by r", eg ##\delta_{r,s}\,A_{s,u} = A_{r,u}##

alternatively, use a ##\sum## : ##\sum_s \delta_{r,s}\,A_{s,u} = A_{r,u}##

in particular, ##\delta_{r,s}\,\delta_{s,u} = \delta_{r,u}##

(or ##\sum_s \delta_{r,s}\,\delta_{s,u} = \delta_{r,u}##)


ok, try all that in the original question :smile:
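Those contraction rules are easy to check numerically; here is a minimal sketch using NumPy's einsum, where the repeated index s is summed just as in the convention (the size m = 5 and the test matrix A are arbitrary choices for the illustration):

```python
import numpy as np

m = 5                                          # any small size works for the illustration
delta = np.eye(m)                              # the Kronecker delta as an identity matrix
A = np.random.default_rng(0).random((m, m))    # an arbitrary matrix A_{s,u}

# delta_{r,s} A_{s,u} = A_{r,u}   (the repeated index s is summed)
assert np.allclose(np.einsum('rs,su->ru', delta, A), A)

# delta_{r,s} delta_{s,u} = delta_{r,u}
assert np.allclose(np.einsum('rs,su->ru', delta, delta), delta)
```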
 
  • #12
tiny-tim said:
ok, try all that in the original question :smile:
ok, using all of that for part c):

## (\epsilon (i,j) \epsilon (k,l) )_{r,s} = \displaystyle \sum_{t=1} ^{n} \epsilon (i,j) _{r,t} \epsilon (k,l) _{t,s} = \displaystyle \sum_{t=1}^{n} \delta _{ir} \delta _{jt} \delta _{kt} \delta _{ls} = \displaystyle \sum_{t=1}^n \delta _{ir} \delta _{jt} \delta _{jt} \delta _{ls}## if j = k, which then is ## = \delta _{ir} \delta _{jj} \delta _{jj} \delta _{ls} + \displaystyle \sum_{t \not= j } \delta _{ir} \delta _{jt} \delta _{kt} \delta _{ls} = \delta _{ir} \delta _{ls} + 0 = \epsilon (i,l) ##.

If ## j \not= k ##, then we get ## \displaystyle \sum_{t=1}^n \delta _{ir} \delta _{jt} \delta _{kt} \delta _{ls} = 0 ##, as t cannot equal k and j simultaneously.

is that ok?
 
  • #13
converting1 said:
ok, using all of that for part c):

what is part c) ? :confused:
 
  • #14
tiny-tim said:
what is part c) ? :confused:

sorry, I didn't label it in the original post; part c) is this:

" hence generalise for ##\epsilon (i,j) \epsilon (k,l)##"
 
  • #15
converting1 said:
ok, using all of that for part c):

## (\epsilon (i,j) \epsilon (k,l) )_{r,s} = \displaystyle \sum_{t=1} ^{n} \epsilon (i,j) _{r,t} \epsilon (k,l) _{t,s} = \displaystyle \sum_{t=1}^{n} \delta _{ir} \delta _{jt} \delta _{kt} \delta _{ls} = \displaystyle \sum_{t=1}^n \delta _{ir} \delta _{jt} \delta _{jt} \delta _{ls}## if j = k,

no, that last item should be ## = \displaystyle \delta _{ir} \delta _{jk} \delta _{ls}##

and then rewrite that as ##= \delta _{jk} (\delta _{ir} \delta _{ls})##

so now you have ##(\epsilon (i,j) \epsilon (k,l) )_{r,s}= \delta _{jk} (\delta _{ir} \delta _{ls})##

sooo … ? :smile:

(btw, you don't need to write "\displaystyle" on this forum :wink:)
 
  • #16
tiny-tim said:
no, that last item should be ## = \displaystyle \delta _{ir} \delta _{jk} \delta _{ls}##

and then rewrite that as ##= \delta _{jk} (\delta _{ir} \delta _{ls})##

so now you have ##(\epsilon (i,j) \epsilon (k,l) )_{r,s}= \delta _{jk} (\delta _{ir} \delta _{ls})##

sooo … ? :smile:

(btw, you don't need to write "\displaystyle" on this forum :wink:)

## = 1\times \epsilon (i,l) ## ?

is everything else correct that I've written?
 
  • #17
converting1 said:
## = 1\times \epsilon (i,l) ## ?

nearly :smile:

(what happened to the ##\delta_{jk}## ? :wink:)
 
  • #18
tiny-tim said:
nearly :smile:

(what happened to the ##\delta_{jk}## ? :wink:)

is it not 1 if j = k??
 
  • #19
yes, but you want a general formula for ## \epsilon (i,j) \epsilon (k,l) ## :wink:
 
  • #20
tiny-tim said:
yes, but you want a general formula for ## \epsilon (i,j) \epsilon (k,l) ## :wink:

I'm really not sure what to put
 
  • #21
say it (partly) in words, starting "##\epsilon(i,j)\,\epsilon(k,l)## is … ", and then we'll put it completely into symbols :smile:
 
  • #22
tiny-tim said:
say it (partly) in words, starting "##\epsilon(i,j)\,\epsilon(k,l)## is … ", and then we'll put it completely into symbols :smile:

it's ##\epsilon(i,l)## if j = k, else it's 0,

thanks for your patience
 
  • #23
converting1 said:
it's ##\epsilon(i,l)## if j = k, else it's 0,

ok, so it's ##\epsilon(i,l)## times … ? :smile:
 
  • #24
tiny-tim said:
ok, so it's ##\epsilon(i,l)## times … ? :smile:

## \delta _{jk} ## ?
 
  • #25
(just got up :zzz:)

yes!

##\epsilon(i,j)\,\epsilon(k,l) = \delta_{jk}\,\epsilon(i,l)## :smile:

work your way through it, and compare it with parts (i) and (ii), until you're convinced how it works :wink:
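As a cross-check of that identity, here is a minimal brute-force sketch over random indices (the eps helper, the size m = 50, and the number of trials are arbitrary choices for the illustration, not a substitute for the formal calculation):

```python
import numpy as np

def eps(i, j, m):
    """Matrix unit: a 1 in row i, column j (1-based), zeros elsewhere."""
    E = np.zeros((m, m))
    E[i - 1, j - 1] = 1.0
    return E

m = 50
rng = np.random.default_rng(1)
for _ in range(200):
    i, j, k, l = rng.integers(1, m + 1, size=4)    # random 1-based indices
    lhs = eps(i, j, m) @ eps(k, l, m)
    rhs = (1.0 if j == k else 0.0) * eps(i, l, m)  # delta_{jk} * eps(i, l)
    assert np.array_equal(lhs, rhs)
print("eps(i,j) eps(k,l) = delta_{jk} eps(i,l) in every sampled case")
```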
 
  • #26
thank you tim, really appreciate it.
 

Related to Basic Fixed Matrices: Proving Equations with Formal Calculations

What are basic fixed matrices?

In this thread they are the ##m \times m## matrix units ##\epsilon(i,j)##: matrices whose only nonzero entry is a 1 in row i, column j. Such matrices are the building blocks of general matrices and are used to represent linear transformations and perform calculations in linear algebra.

How do you prove equations with formal calculations using fixed matrices?

To prove an identity such as ##\epsilon(i,j)\,\epsilon(k,l) = \delta_{jk}\,\epsilon(i,l)##, compute a general entry of each side from the definition ##\epsilon(i,j)_{rs} = \delta_{ir}\delta_{js}##, carry out the sum over the repeated index that appears in the matrix product, and simplify the Kronecker deltas until both sides agree entry by entry.

What are the properties of matrix operations?

Matrix addition is commutative and associative: the order and grouping of the summands do not affect the result. Matrix multiplication is associative and distributes over addition and subtraction, but it is not commutative in general, so the order of the factors matters.

Can you use fixed matrices to solve systems of equations?

Yes, fixed matrices can be used to solve systems of equations. This involves setting up a matrix equation with the coefficients from the system of equations, and then using matrix operations to manipulate the matrix until it is in reduced row-echelon form. The solution to the system of equations can then be read directly from the matrix.
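For instance, a minimal NumPy sketch using a direct solver rather than hand row-reduction (the particular 2x2 system is made up for illustration):

```python
import numpy as np

# the system  2x + y = 5,  x - 3y = -1  written as A @ [x, y] = b
A = np.array([[2.0,  1.0],
              [1.0, -3.0]])
b = np.array([5.0, -1.0])

x = np.linalg.solve(A, b)   # direct solve of the matrix equation A x = b
print(x)                    # [2. 1.], i.e. x = 2, y = 1
```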

What are some real-world applications of fixed matrices?

Fixed matrices have many real-world applications, including computer graphics, data compression, and optimization problems in engineering and economics. They are also used in machine learning and artificial intelligence algorithms for tasks such as image and speech recognition.
