Pencilvester
Hello PF, I’m going through a book called “A First Course in the Calculus of Variations.” I can’t remember who the author is at the moment; I’ll post it later. Anyway, I’m having trouble with one part: suppose we have a function ##y(x)## that gives a continuous polygonal curve from ##x = a## to ##x = b## with ##n + 1## pieces, with values ##y_i## at points separated by ##Δx = \frac {b - a} {n + 1}## and endpoints fixed at ##y_0## and ##y_{n + 1}##. We also have$$J = \sum_{i = 0} ^n f(x_i , y_i , \frac {y_{i + 1} - y_i} {Δx}) Δx$$Now we want to take the derivative of ##J## with respect to a specific ##y_k##, which appears in two terms of the sum
(##i = k## and ##i = k - 1##). As far as I know, ##f## could be any sort of function, not necessarily linear in ##x##, ##y##, and ##y’##
(##y’ ≡ \frac {dy} {dx}##, which of course is the same as ##\frac {y_{i + 1} - y_i} {Δx}## in the limit we will eventually take, ##Δx → 0## and ##n → ∞##). Then, without any explanation, the book says$$\frac {∂J} {∂y_k} = f_y (x_k , y_k , \frac {y_{k + 1} - y_k} {Δx}) Δx + f_{y’} (x_{k - 1} , y_{k - 1} , \frac {y_k - y_{k - 1}} {Δx}) - f_{y’} (x_k , y_k , \frac {y_{k + 1} - y_k} {Δx})$$(call this eq. 1) And this is what I am having trouble with. First of all, the subscripts on ##f## presumably indicate partial derivatives, but I am not certain, since this is the first time in the book they use this notation and there is no section listing their notation conventions. Anyway, the only way I can make sense of this is if, in general,$$\frac {∂} {∂y} f (x , y , g (y) ) = \frac {∂f} {∂y} + \frac {∂f} {∂g} ⋅ \frac {dg} {dy}$$(call this eq. 2) where ##f## is any function of the variables ##x## and ##y## and of the function ##g##, which is itself a function of ##y##, but on the RHS of the equation we treat ##g## as just another variable (holding ##g## constant while we vary ##y## a little, and vice versa). So this is my main question: is equation 2 true in general? If so, where could I find a proof for it? If not, how do we get eq. 1? Any help would be much appreciated.
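For what it’s worth, eq. 1 can at least be checked numerically. The sketch below is my own construction (the choice of ##f##, the grid, and the test values are all made up for illustration, not taken from the book): it builds ##J## as the finite sum above, perturbs a single interior ##y_k##, and compares a central finite-difference derivative of ##J## against the book’s formula.

```python
# Numerical sanity check of eq. 1 for a sample integrand of my own choosing:
# f(x, y, p) = x*y + y**2 * p + p**3, nonlinear in all three slots.
# Here p stands for the third slot, the difference quotient (y_{i+1} - y_i)/dx.

def f(x, y, p):
    return x * y + y**2 * p + p**3

def f_y(x, y, p):       # partial of f with respect to its second slot (y)
    return x + 2 * y * p

def f_yp(x, y, p):      # partial of f with respect to its third slot (y')
    return y**2 + 3 * p**2

a, b, n = 0.0, 1.0, 9
dx = (b - a) / (n + 1)
xs = [a + i * dx for i in range(n + 2)]   # x_0 .. x_{n+1}
ys = [xi**2 for xi in xs]                 # arbitrary polygonal data

def J(ys):
    # The finite sum from the post: sum over i = 0 .. n
    return sum(f(xs[i], ys[i], (ys[i + 1] - ys[i]) / dx) * dx
               for i in range(n + 1))

k, h = 4, 1e-6                            # interior index, step size
y_plus = ys.copy();  y_plus[k] += h
y_minus = ys.copy(); y_minus[k] -= h
numeric = (J(y_plus) - J(y_minus)) / (2 * h)   # central difference dJ/dy_k

# Eq. 1 as written in the book:
claimed = (f_y(xs[k], ys[k], (ys[k + 1] - ys[k]) / dx) * dx
           + f_yp(xs[k - 1], ys[k - 1], (ys[k] - ys[k - 1]) / dx)
           - f_yp(xs[k], ys[k], (ys[k + 1] - ys[k]) / dx))

print(abs(numeric - claimed))   # agreement up to finite-difference error
```

The agreement comes from exactly the chain-rule pattern of eq. 2: only the ##i = k## and ##i = k - 1## terms of the sum contain ##y_k##, and differentiating them while treating the third slot of ##f## as its own variable produces the three terms of eq. 1 (the factor ##\pm 1/Δx## from the difference quotient cancels the overall ##Δx##).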