# Question on substitution of variables in natural deduction of predicate logic.

#### Mathelogician

##### Member
Hi all,
I need Explanation on the attached image from Van Dalen's Logic and Structure; specially on how the red part follows from the lines before it!
Regards.

#### Attachments

• (image attachment, 31.2 KB)

#### Evgeny.Makarov

##### Well-known member
MHB Math Scholar
The question is whether we can take a derivation and systematically replace all free occurrences of a certain variable by a constant, or vice versa. The beginning of the paragraph before the underlined text points out that only variables can be generalized by (∀I). That is, if we have a formula A(x) containing x free (and x does not occur free in any open assumption), then we can derive ∀x A(x). However, if we replace x with c everywhere in the derivation, and in particular in A(x), we can no longer apply (∀I). Therefore, a constant cannot in general take the place of a variable in a derivation.
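Schematically (my own illustration, not taken from the book), the contrast looks like this:

```latex
% With a variable x (not free in the open hypotheses), generalization is allowed;
% after replacing x by the constant c throughout, the rule no longer applies:
\[
\frac{\begin{matrix}\mathcal{D}\\ A(x)\end{matrix}}{\forall x\, A(x)}\;(\forall\mathrm{I})
\qquad\text{vs.}\qquad
\frac{\begin{matrix}\mathcal{D}[c/x]\\ A(c)\end{matrix}}{?}
\quad\text{($\forall$I) is blocked: $c$ is a constant, not a variable.}
\]
```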

Why a constant can be replaced with a variable is best understood by trying to prove Theorem 2.8.3(i), which I recommend doing very explicitly and carefully. Propositional rules do not depend on whether the formulas involved contain variables or constants. This means that the inductive step for such rules follows directly from the inductive hypotheses (and the fact that substitutions commute with propositional connectives). The step for (∀E) uses the fact that $A[s/x][t/y]=A[t/y][s[t/y]/x]$ (where exactly?). However, the case of (∀I) turned out to be trickier than I first thought. I need to consult the book for the details of the definition of (∀I). It is also interesting that the theorem assumption says that $x$ does not occur in $\varphi$ at all, rather than not occurring free. I invite you to try the proof and write your thoughts on whether induction does or does not go through for (∀I). I'll try to post my thoughts tomorrow.
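To make the (∀E) step concrete, here is a sketch of where that identity gets used when a constant $c$ is replaced by a fresh variable $y$ (my own reconstruction; the exact side conditions should be checked against the book):

```latex
% Original instance of (forall E):      After applying [y/c] throughout:
%
%        D                                   D[y/c]
%   forall x phi                 (forall x phi)[y/c] = forall x (phi[y/c])
%  -------------- (forall E)    ------------------------------------------ (forall E)
%     phi[t/x]                         phi[y/c][ t[y/c] / x ]
%
% By the induction hypothesis we have a derivation of (forall x phi)[y/c].
% Applying (forall E) with the term t[y/c] yields phi[y/c][t[y/c]/x], and the
% identity   phi[t/x][y/c] = phi[y/c][t[y/c]/x]   (an instance of
% A[s/x][t/y] = A[t/y][s[t/y]/x]; unproblematic here, since c is a constant
% and cannot be captured by a quantifier) shows this is exactly the required
% formula (phi[t/x])[y/c].
```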

#### topsquark

##### Well-known member
MHB Math Helper
> The question is whether we can take a derivation and systematically replace all free occurrences of a certain variable by a constant, or vice versa. The beginning of the paragraph before the underlined text points out that only variables can be generalized by (∀I). That is, if we have a formula A(x) containing x free (and x does not occur free in any open assumption), then we can derive ∀x A(x). However, if we replace x with c everywhere in the derivation, and in particular in A(x), we can no longer apply (∀I). Therefore, a constant cannot in general take the place of a variable in a derivation.
>
> Why a constant can be replaced with a variable is best understood by trying to prove Theorem 2.8.3(i), which I recommend doing very explicitly and carefully. Propositional rules do not depend on whether the formulas involved contain variables or constants. This means that the inductive step for such rules follows directly from the inductive hypotheses (and the fact that substitutions commute with propositional connectives). The step for (∀E) uses the fact that $A[s/x][t/y]=A[t/y][s[t/y]/x]$ (where exactly?). However, the case of (∀I) turned out to be trickier than I first thought. I need to consult the book for the details of the definition of (∀I). It is also interesting that the theorem assumption says that $x$ does not occur in $\varphi$ at all, rather than not occurring free. I invite you to try the proof and write your thoughts on whether induction does or does not go through for (∀I). I'll try to post my thoughts tomorrow.
Well said. I completely disagreed with you until halfway through the second paragraph. In physics it is very common to see versions of a theorem made simpler by substituting constants for variables. But you are specifically talking about "free variables", and that reminded me of some of the fields that come up in the GR generalizations of inner product spaces. The induced fields only have the property of satisfying the local boundary conditions; you could never replace those with any kind of scalars imaginable.

Thanks!

-Dan

#### Mathelogician

##### Member
Thanks...
I'll try it and share my thoughts.

#### Mathelogician

##### Member
As you said, I tried to work it out explicitly and carefully.
The book doesn't explicitly define derivations for the predicate calculus; it just adds two rules to the propositional rules of natural deduction. But if my guess about the atomic formulas is right, namely that they form the BASE case of the recursive definition of derivations, then the first part of my proof seems to be correct:

1- Base case: Since (as assumed above) the atomic formulas are themselves derivations (with single nodes), for any atomic formula $\varphi$ we have a derivation with $\varphi$ as its only hypothesis and as its conclusion. The desired property obviously holds here, because $\varphi \vdash \varphi$ immediately yields $\varphi[x/c] \vdash \varphi[x/c]$.
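In symbols, the base case amounts to (my rendering of the step above):

```latex
\[
\varphi \vdash \varphi
\quad\Longrightarrow\quad
\varphi[x/c] \vdash \varphi[x/c],
\]
% both sides being single-node derivations, so the property holds trivially:
% substituting x for c in the one-node tree is again a one-node tree.
```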

2- For the propositional connectives, I believe they all behave the same way with respect to this problem, so I worked out only the (∧I) case. [Attached pic 1]
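For what it's worth, here is how the (∧I) step can be written out (a sketch along the same lines; details may differ from the attachment):

```latex
% Given derivations D1 of phi and D2 of psi:
%
%    D1     D2                      D1[x/c]      D2[x/c]
%   phi    psi    (^I)   ==>       phi[x/c]     psi[x/c]    (^I)
%  ------------                   -----------------------
%    phi ^ psi                     phi[x/c] ^ psi[x/c]
%
% Since substitution commutes with the connectives,
%   (phi ^ psi)[x/c] = phi[x/c] ^ psi[x/c],
% the induction hypotheses for D1 and D2 give exactly the derivation needed.
```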

3- For the universal elimination rule, I solved it without using the substitution theorem you mentioned. [Attached pic 2]

4- For the universal introduction rule, I couldn't use the trick (red-starred around the green line!) from the previous solution, because here we face the problem of cancelling those hypotheses in $\Gamma$ that contain $x$ as a free variable. [Therefore, not every set that contains all the hypotheses of the final derivation (the one using universal introduction) also contains the hypotheses of the initial derivation (used as the induction hypothesis).]
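To illustrate where I think the side condition bites (my own sketch, to be checked against the book's formulation of the rule):

```latex
% The rule in question:
%
%         D
%       phi(y)
%   --------------- (forall I)   provided y is not free in any open hypothesis of D
%   forall y phi(y)
%
% After replacing c by x throughout, the side condition has to be re-checked
% for the NEW derivation D[x/c]: any open hypothesis that contained c now
% contains x free, so the set of hypotheses satisfying the condition may
% change.  This seems to be why the theorem assumes x does not occur in phi
% at all: a completely fresh x cannot coincide with the generalized variable
% y, which is what should keep the side condition intact.
```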