Points about Ackermann's function

  • #1
mathmari
Hey! :eek:

I am taking the course Computability, and at the end of the semester I will give a presentation about Ackermann's function.

Should the structure of the presentation be the following?

-History
-Definition
-Proof that the definition is well defined
-Table of values
-Proof that Ackermann's function is recursive/Turing-computable
-Proof that Ackermann's function is not primitive recursive
-Uses of Ackermann's function

What do you think? Is this a good structure for the presentation, or should I change something, for example add or delete a point, or change the order of the points? (Wondering)
 
  • #2
This outline seems OK. You should probably mention that there is a whole family of functions with similar definitions, so different sources may have slightly different definitions of Ackermann function. I would also add the connection with the addition-multiplication-exponentiation (Knuth up-arrow) hierarchy I described in another thread.

My general advice is not to make the presentation too technical. For example, you may show on a slide the series of lemmas required to prove that the function is not primitive recursive, but don't expect anyone to understand more than one (I am talking about the statements). I would stress the extreme growth by using pictures. For instance, you may give a visual intuition of the lexicographic order on the plane and then show how in the clause
\[
A(m+1,n+1)=A(m,A(m+1,n))
\]
the pair of arguments $(m,A(m+1,n))$ is smaller lexicographically than $(m+1,n+1)$ due to the decrease of the first argument, even though the second argument increases dramatically.

By the way, for an interesting fact about the use of up-arrow notation, see Graham's number.
 
  • #3
Evgeny.Makarov said:
My general advice is not to make the presentation too technical. For example, you may show on a slide the series of lemmas required to prove that the function is not primitive recursive, but don't expect anyone to understand more than one (I am talking about the statements).

Do you mean that I should just state the lemmas without proving them?
 
  • #4
Definitely. Well, you may include a proof of one lemma that illustrates the problems and techniques used in others as well. I think that if you include more than one proof of those lemmas, most people in the audience will not appreciate them either because they won't catch all the details or because they will be bored. The place for detailed proofs is in the report that you may write as an outcome of the course but not so much in the presentation, which is for the sake of the audience and not to prove that you've learned everything.
 
  • #5
mathmari said:
-History

If I write the following at the part "History" :

  • $1926$ David Hilbert claims that each computable function is primitive recursive.
  • $1928$ Wilhelm Ackermann publishes a counterexample: a function that grows very fast, whose rate of growth exceeds that of any primitive recursive function.
  • $1955$ Rózsa Péter constructs a simplified version of that function, which has the same key properties and is known today as Ackermann's function.

is this enough or is something missing?? (Wondering)
 
  • #6
I don't know much about the history of Ackermann's function. What you wrote was new and interesting to me.
 
  • #7
Evgeny.Makarov said:
I don't know much about the history of Ackermann's function. What you wrote was new and interesting to me.

Ok! (Smile)

mathmari said:
-Definition

At the part "Definition" should I mention the original version (Ackermann's version) of Ackermann's function and the version of Rozsa Peter?? (Wondering)

Or should I write something else or something more?? (Wondering)

What do you mean with
Evgeny.Makarov said:
there is a whole family of functions with similar definitions, so different sources may have slightly different definitions of Ackermann function.
?? That there are many different versions of the Ackermann's function?? (Wondering)
 
  • #8
mathmari said:
At the part "Definition" should I mention the original version (Ackermann's version) of Ackermann's function and the version of Rozsa Peter?
It's up to you. I looked again, and it seems that the original Ackermann's definition is what I described in another thread as the hierarchy of addition, multiplication, exponentiation and so on. In this sense it seems more natural to me than the two-argument version, so yes, I would mention it.

mathmari said:
That there are many different versions of the Ackermann's function?
Didn't you yourself mention two versions above?

Edit: If I remember correctly, Péter's book Recursive Functions has a quite readable account of primitive recursive functions.
 
  • #9
Should I write it as followed at the "Definition" ?? (Wondering)

Ackermann's original three-argument function $\phi(m,n,p)$ is defined recursively as follows for nonnegative integers $m$, $n$, and $p$ :

$$\phi(m,n,p)=\left\{\begin{matrix}
\phi(m,n,0)=m+n & \\
\phi(m,0,1)=0 & \\
\phi(m,0,p)=m & \text{ for } p>2 \\
\phi(m,n,p)=\phi(m,\phi(m,n-1,p),p-1) & \text{ for } n>0 \text{ and } p>0
\end{matrix}\right.$$

For $p=0,1,2$, this function reproduces the basic operations of addition, multiplication and exponentiation as $$\phi(m,n,0)=m+n \\ \phi(m,n,1)=m \cdot n \\ \phi(m,n,2)=m^n$$

The simplified form of Péter is a two-argument function that is defined for nonnegative integers $m$ and $n$ as follows: $$A(m,n)=\left\{\begin{matrix}
n+1 & \text{ if } m=0 \\
A(m-1,1) & \text{ if } m>0 \text{ and } n=0 \\
A(m-1,A(m,n-1)) & \text{ if } m>0 \text{ and } n>0
\end{matrix}\right.$$
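This two-argument definition translates directly into a short program. A minimal Python sketch (memoized so repeated subcalls are cheap; only meant for small arguments):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(m, n):
    """Peter's two-argument Ackermann function, straight from the definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print(A(1, 2), A(2, 3), A(3, 3))  # 4 9 61
```

Already for $m=4$ the computation becomes infeasible, which previews the growth discussion below.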

Ackermann's function is one of the simplest and earliest discovered examples of a computable function that is not primitive recursive. It is also related to Knuth's up-arrow notation. The idea of this notation is the following:
  • addition is a repeated addition of $1$
  • multiplication is repeated addition
  • exponentiation is repeated multiplication
This list can be continued by making the function at level $k+1$ the repeated application of the function at level $k$.

Knuth's up-arrow notation is the following:

$$a \uparrow b=\underset{b \text{ copies of } a}{\underbrace{a \cdot a \cdot \dots \cdot a}}=a^b \\ a \uparrow \uparrow b=\underset{b \text{ copies of } a}{\underbrace{a \uparrow a \uparrow \dots \uparrow a}}=a \uparrow (a \uparrow \dots )a \uparrow a) \dots )=\underset{b \text{ copies of } a}{\underbrace{a^{a^{a^{.^{.^{.^a}}}}}}}$$

In general, $$a \uparrow^m b=a\overset{m \text{ copies of } \uparrow}{\overbrace{\uparrow \uparrow \dots \uparrow}} b=\underset{b \text{ copies of } a}{\underbrace{a \uparrow^{m-1}a \uparrow^{m-1} \dots \uparrow^{m-1}a}}=a \uparrow^{m-1} (a \uparrow^{m-1} \dots (a \uparrow^{m-1} a) \dots )$$

Ackermann's function can be written, using Knuth's up-arrow notation, as followed:
$$A(m,n)=2 \uparrow^{m-2} (n+3)-3$$
The part of the definition $A(m,0)=A(m-1,1)$ corresponds to $2 \uparrow^{m+1} 3=2 \uparrow^m 4$.
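For $m \geq 2$, the closed form can be checked mechanically against the recursive definition, taking $\uparrow^0$ to be multiplication (a small sketch; the function names are mine):

```python
def up(a, b, k):
    # a ^k b, with ^0 taken as multiplication (one level below exponentiation).
    if k == 0:
        return a * b
    if b == 0:
        return 1
    return up(a, up(a, b - 1, k), k - 1)

def A(m, n):
    # Peter's two-argument Ackermann function.
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

# Closed form: A(m, n) = 2 up^(m-2) (n+3) - 3, spot-checked on small values.
for m in (2, 3):
    for n in range(5):
        assert A(m, n) == up(2, n + 3, m - 2) - 3
assert A(4, 0) == up(2, 3, 2) - 3 == 13
```

Larger arguments quickly exhaust both time and recursion depth, which is exactly the point.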

What do you think about that?? Could I improve something?? (Wondering)

Do I have to prove that the Ackermann's function can be written as $A(m,n)=2 \uparrow^{m-2} (n+3)-3$ ?? (Wondering)
 
  • #10
Your description of Ackermann's function has many phrases copied literally from Wikipedia. I would recommend using your own words. And what about $\phi(m,0,2)$?

In the formula

mathmari said:
$$a \uparrow \uparrow b=\underset{b \text{ copies of } a}{\underbrace{a \uparrow a \uparrow \dots \uparrow a}}=a \uparrow (a \uparrow \dots )a \uparrow a) \dots )=\underset{b \text{ copies of } a}{\underbrace{a^{a^{a^{.^{.^{.^a}}}}}}}$$
the distribution of parentheses is not clear and the number of opening and closing parentheses is different.

You describe the idea behind the up-arrow notation (each subsequent level is iterated previous level), but you don't say that both definitions of Ackermann's function follow the same idea and how this can be seen. Even more, you don't describe the connection between Ackermann's original definition and the up-arrow notation. It seems that they are the same function, only shifted by one level: $\phi(m,n,2)=m^n=m\uparrow^1n$. You should check whether this is true.

mathmari said:
Do I have to prove that the Ackermann's function can be written as $A(m,n)=2 \uparrow^{m-2} (n+3)-3$ ?
You don't necessarily have to include the proof in the presentation, but you should be able to prove it on the board if asked to. Maybe you can make a presentation with links to proofs that you can show if asked.

By the way, you can use the LaTeX environment [m]cases[/m] for typesetting things like the definition of $A(m,n)$.
 
  • #11
Evgeny.Makarov said:
Your description of Ackermann's function has many phrases copied literally from Wikipedia. I would recommend using your own words. And what about $\phi(m,0,2)$?

I tried it again... Is the following better?? (Wondering)

Ackermann's original function is the mapping $\phi:\mathbb{N}^3\rightarrow \mathbb{N}$ which is defined recursively as followed:

$$p=0:\phi(m,n,0)=m+n\\ p=1: \phi(m,n,1)=m\times n\\ p=2: \phi(m,n,2)=m^n\\ p>2: \phi(m,n,p)=\phi(m,\phi(m,n-1,p),p-1) \\ \forall m,n,p \in \mathbb{N}$$

The simplified form of Péter is the mapping $A:\mathbb{N}^2\rightarrow \mathbb{N}$ which is defined recursively as followed: $$A(m,n)=\left\{\begin{matrix}
n+1 & , m=0\\
A(m-1,1) & , m>0 , n=0\\
A(m-1,A(m,n-1))& ,m>0,n>0
\end{matrix}\right. \\ \forall m,n \in \mathbb{N}$$
Could I improve something?? (Wondering)

In the original version, do we have to write the case $\phi(m,0,2)$ separately??
 
  • #12
What do you mean by "$\forall m,n,p\in\Bbb N$"? Please don't say, "It holds for all natural $m$, $n$ and $p$".

Your definition of $\phi$ differs from the one in Wikipedia, for example, at $\phi(m,0,3)$.
 
  • #13
Evgeny.Makarov said:
What do you mean by "$\forall m,n,p\in\Bbb N$"? Please don't say, "It holds for all natural $m$, $n$ and $p$".

Your definition of $\phi$ differs from the one in Wikipedia, for example, at $\phi(m,0,3)$.

In the German version of Wikipedia (Ackermannfunktion – Wikipedia) the definition is the following:

$$\phi(a,b,0)=a+b \\ \phi(a,0,n+1)=\alpha (a,n) \\ \phi(a,b+1, n+1)=\phi(a, \phi(a,b,n+1),n) \\ \text{ where }\alpha(a,n)=\left\{\begin{matrix}
0 & , n=0\\
1 & , n=1\\
a & , n>1
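As far as I can check numerically, for $n=0,1,2$ this definition reproduces addition, multiplication and exponentiation. A small sketch (assuming $\alpha(a,n)=a$ for $n>1$; names are mine):

```python
def alpha(a, n):
    # Base cases of the German-Wikipedia definition.
    if n == 0:
        return 0
    if n == 1:
        return 1
    return a  # n > 1

def phi(a, b, n):
    """Ackermann's original three-argument function, per the German article."""
    if n == 0:
        return a + b
    if b == 0:
        return alpha(a, n - 1)
    return phi(a, phi(a, b - 1, n), n - 1)

for a in range(1, 5):
    for b in range(4):
        assert phi(a, b, 0) == a + b
        assert phi(a, b, 1) == a * b
        assert phi(a, b, 2) == a ** b
```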

Which version should I write at the presentation?? (Wondering)
 
  • #14
Evgeny.Makarov said:
In the formula

the distribution of parentheses is not clear and the number of opening and closing parentheses is different.

You describe the idea behind the up-arrow notation (each subsequent level is iterated previous level), but you don't say that both definitions of Ackermann's function follow the same idea and how this can be seen. Even more, you don't describe the connection between Ackermann's original definition and the up-arrow notation. It seems that they are the same function, only shifted by one level: $\phi(m,n,2)=m^n=m\uparrow^1n$. You should check whether this is true.

You don't necessarily have to include the proof in the presentation, but you should be able to prove it on the board if asked to. Maybe you can make a presentation with links to proofs that you can show if asked.

I tried this part again, too... Ackermann's function is one of the simplest and earliest discovered examples of a computable function that is not primitive recursive. It is also related to Knuth's up-arrow notation. The idea of this notation is the following:
  • addition is a repeated addition of $1$
  • multiplication is repeated addition
  • exponentiation is repeated multiplication
This list can be continued by making the function at level $k+1$ the repeated application of the function at level $k$.

Knuth's up-arrow notation is the following:

$$a \uparrow b=\underset{b \text{ copies of } a}{\underbrace{a \cdot a \cdot \dots \cdot a}}=a^b \\ a \uparrow^2 b=a \uparrow \uparrow b=\underset{b \text{ copies of } a}{\underbrace{a \uparrow a \uparrow \dots \uparrow a}}=a \uparrow (a \uparrow \dots (a \uparrow a) \dots )=\underset{b \text{ copies of } a}{\underbrace{a^{a^{a^{.^{.^{.^a}}}}}}}$$

In general, $$a \uparrow^m b=a\overset{m \text{ copies of } \uparrow}{\overbrace{\uparrow \uparrow \dots \uparrow}} b=\underset{b \text{ copies of } a}{\underbrace{a \uparrow^{m-1}a \uparrow^{m-1} \dots \uparrow^{m-1}a}}=a \uparrow^{m-1} (a \uparrow^{m-1} ( \dots (a \uparrow^{m-1} a) \dots ))$$

Theorem:
For each $m \geq -1, n \geq 2$ we have that $$a \uparrow^m n=a \uparrow^{m-1}a \uparrow^m (n-1)$$
Proof:
$$a \uparrow^m n=\underset{n \text{ copies of } a}{\underbrace{a \uparrow^{m-1}a \dots \uparrow^{m-1}a}}=a \uparrow^{m-1}(\underset{n-1 \text{ copies of } a}{\underbrace{a \uparrow^{m-1} \dots \uparrow^{m-1}a}})=a \uparrow^{m-1} a \uparrow^m (n-1)$$

If we denote $a \uparrow^m n$ by $f(a,n,m)$ the Theorem can be written as followed:
Ackermann's function can be written, using Knuth's up-arrow notation, as followed:
$$f(a,n,m)=f(a,f(a,m,n-1),m-1)$$
For a fixed value of $a$, this corresponds exactly to the general recurrence in the definition of Ackermann's function. The closed form of Ackermann's function in Knuth's up-arrow notation is the following:
$$A(m,n)=2 \uparrow^{m-2} (n+3)-3$$
Proof:
Using the Theorem we get for $a=2$ the following:
$$A(m,n)=2 \uparrow^{m-2} (n+3)-3=2 \uparrow^{m-3} 2 \uparrow^{m-2} (n+2)-3$$
$$A(m-1, A(m,n-1))=2 \uparrow^{m-3} (A(m,n-1)+3)-3=2 \uparrow^{m-3} (2 \uparrow^{m-2} (n+2)-3+3)-3=2 \uparrow^{m-3} (2 \uparrow^{m-2} (n+2))-3=A(m,n)$$
What do you think about that?? Could I improve something?? (Wondering)
 
  • #15
So, would I write it as followed?? (Wondering)

Ackermann's original function is the mapping $\phi : \mathbb{N}^3 \rightarrow \mathbb{N}$ which is defined recursively as followed:
$$\phi(a,b,0)=a+b \\ \phi(a,0,n+1)=\alpha (a,n) \\ \phi(a,b+1, n+1)=\phi(a, \phi(a,b,n+1),n) \\ \text{ where }\alpha(a,n)=\left\{\begin{matrix}
0 & , n=0\\
1 & , n=1\\
a & , n>1
\end{matrix}\right.$$

The simplified form of Péter, which is today known as Ackermann's function, is the mapping $A: \mathbb{N}^2 \rightarrow \mathbb{N}$ which is defined recursively as followed:
$$A(m,n)=\left\{\begin{matrix}
n+1 & , m=0\\
A(m-1,1) & , m>0 , n=0\\
A(m-1,A(m,n-1))& ,m>0,n>0
\end{matrix}\right.$$

Ackermann's function is well defined: for all $x, y \in \mathbb{N}$ there exists a $z \in \mathbb{N}$ such that $A(x,y)=z$.
Proof: $\dots \dots \dots \dots \dots$

Ackermann's function is one of the most important functions in computer science. Its most outstanding property is that it grows astonishingly fast, as we can see in the table of values: $\dots \dots \dots \dots \dots$

Even for small inputs the function gives huge results. For example, $A(4,2)$ already has $19{,}729$ digits!
To denote such large numbers, Knuth invented the up-arrow notation.
The idea of this notation is the following:
  • addition is a repeated addition of $1$
  • multiplication is repeated addition
  • exponentiation is repeated multiplication
This list can be continued by making the function at level $k+1$ the repeated application of the function at level $k$.

Knuth's up-arrow notation is the following:

$$a \uparrow b=\underset{b \text{ copies of } a}{\underbrace{a \cdot a \cdot \dots \cdot a}}=a^b \\ a \uparrow^2 b=a \uparrow \uparrow b=\underset{b \text{ copies of } a}{\underbrace{a \uparrow a \uparrow \dots \uparrow a}}=a \uparrow (a \uparrow \dots (a \uparrow a) \dots )=\underset{b \text{ copies of } a}{\underbrace{a^{a^{a^{.^{.^{.^a}}}}}}}$$

In general, $$a \uparrow^m b=a\overset{m \text{ copies of } \uparrow}{\overbrace{\uparrow \uparrow \dots \uparrow}} b=\underset{b \text{ copies of } a}{\underbrace{a \uparrow^{m-1}a \uparrow^{m-1} \dots \uparrow^{m-1}a}}=a \uparrow^{m-1} (a \uparrow^{m-1} ( \dots (a \uparrow^{m-1} a) \dots ))$$

Theorem:
For each $m \geq -1, n \geq 2$ we have that $$a \uparrow^m n=a \uparrow^{m-1}a \uparrow^m (n-1)$$
Proof:
$$a \uparrow^m n=\underset{n \text{ copies of } a}{\underbrace{a \uparrow^{m-1}a \dots \uparrow^{m-1}a}}=a \uparrow^{m-1}(\underset{n-1 \text{ copies of } a}{\underbrace{a \uparrow^{m-1} \dots \uparrow^{m-1}a}})=a \uparrow^{m-1} a \uparrow^m (n-1)$$

If we denote $a \uparrow^m n$ by $f(a,n,m)$ the Theorem can be written as followed:
Ackermann's function can be written, using Knuth's up-arrow notation, as followed:
$$f(a,n,m)=f(a,f(a,m,n-1),m-1)$$
For a fixed value of $a$, this corresponds exactly to the general recurrence in the definition of Ackermann's function. The closed form of Ackermann's function in Knuth's up-arrow notation is the following:
$$A(m,n)=2 \uparrow^{m-2} (n+3)-3, m, n>0$$
Proof:
Using the Theorem we get for $a=2$ the following:
$$A(m,n)=2 \uparrow^{m-2} (n+3)-3=2 \uparrow^{m-3} 2 \uparrow^{m-2} (n+2)-3$$
$$A(m-1, A(m,n-1))=2 \uparrow^{m-3} (A(m,n-1)+3)-3=2 \uparrow^{m-3} (2 \uparrow^{m-2} (n+2)-3+3)-3=2 \uparrow^{m-3} (2 \uparrow^{m-2} (n+2))-3=A(m,n)$$

Is this correct?? Could I improve something?? (Wondering)

Should I maybe not say so much about Knuth's up-arrow notation?? (Wondering)
 
  • #16
I recommended describing the up-arrow notation because it is more natural (since it represents the hierarchy of addition, multiplication, exponentiation and so on) than the two-argument Ackermann function. The latter is probably simpler to work with and has properties similar to the three-argument variant, but its idea is a bit less clear. But at that time I had not yet read about the original three-argument Ackermann function, which looks quite similar to the up-arrow notation. So it makes sense to describe the idea and details of one of them and briefly introduce the other, describing their difference. And since your presentation is about the Ackermann function, I would take all the ideas behind the up-arrow notation you are describing and apply them to the three-argument Ackermann function.

Now about their connection. They indeed satisfy the same recursion step but differ in initial conditions. First, the Ackermann function starts one level "lower": its level 0 corresponds to addition, while up-arrow's level 0 is multiplication. More importantly, Ackermann's and up-arrow's base cases differ as follows.
\begin{align}
\phi(a,0,0)&=a\\
\phi(a,0,1)&=0&a\uparrow^00&=0\\
\phi(a,0,2)&=1&a\uparrow^10&=1\\
\phi(a,0,n)&=a&a\uparrow^{n-1}0&=1&&n>2
\end{align}
The first three lines for the Ackermann function correspond to $a+0=a$, $a\cdot0=0$, $a^0=1$. However, the two functions differ on levels beyond exponentiation: $\phi(a,0,n)=a$ while $a\uparrow^{n-1}0=1$. (This is, of course, assuming the definitions of the Ackermann function in both the English and German versions are correct. It would make sense to look in some book, maybe Péter's Recursive Functions, that has Ackermann's original definition.)

One can prove by induction on $m$ that $a\uparrow^m1=a$. Therefore,
\[
a\uparrow^{m+1}n=(a\uparrow^{m})^n(a\uparrow^{m+1}0)=(a\uparrow^{m})^n(1)\\\quad =a\uparrow^{m}(a\uparrow^{m}(\dots\uparrow^{m}(a\uparrow^{m}1)\dots))\\\quad = a\uparrow^{m}(a\uparrow^{m}(\dots\uparrow^{m}a\dots))
\]
($n$ copies of $a$ in the last expression). In contrast, if $m>1$, then writing $\phi_m(a,b)$ for $\phi(a,b,m)$,
\[
\phi_{m+1}(a,n)=\phi_m(a,\cdot)^n(\phi_{m+1}(a,0))=\phi_m(a,\cdot)^n(a)\\
\quad =\phi_m(a, \phi_m(a, \dots\phi_m(a,a)\dots))
\]
($n+1$ copies of $a$ in the last expression). Thus, $\phi_3(a,2)=\phi_2(a,\phi_2(a,a))=a^{a^a}$, while $a\uparrow^2 2=a\uparrow (a\uparrow 1)=a\uparrow a=a^a$. Therefore, up-arrow notation seems more natural. I am not sure why $\phi(a,0,n)$ is defined to be $a$ and not 1 for $n>2$.
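This discrepancy at level 3 can be verified numerically. A throwaway sketch: to keep the recursion feasible, I shortcut levels 0 to 2 of $\phi$ and levels 0 and 1 of the up-arrow with ordinary arithmetic, which the closed forms discussed above justify.

```python
def up(a, b, k):
    # a up^k b, with up^0 = multiplication, up^1 = exponentiation (shortcut).
    if k == 0:
        return a * b
    if k == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, up(a, b - 1, k), k - 1)

def phi(a, b, n):
    # Ackermann's original phi; levels 0-2 done with ordinary arithmetic.
    if n == 0:
        return a + b
    if n == 1:
        return a * b
    if n == 2:
        return a ** b
    if b == 0:
        return a  # phi(a, 0, n) = a for n > 2
    return phi(a, phi(a, b - 1, n), n - 1)

a = 3
assert phi(a, 2, 3) == a ** (a ** a)  # a^(a^a): one extra a
assert up(a, 2, 2) == a ** a          # only a^a
```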

So, I would apply your description of the ideas behind up-arrow notation to the three-argument Ackermann function, and when the audience becomes impatient and starts asking, "But why on Earth is there an extra $a$ on levels greater than 2?" you would say, "Ahh, that's why Stanford professor Donald Knuth came up with a more natural definition". Of course, you should double-check what I wrote.

mathmari said:
Ackermann's function is one of the most important functions in computer science.
This is probably an overstatement.

mathmari said:
If we denote $a \uparrow^m n$ by $f(a,n,m)$ the Theorem can be written as followed:
Ackermann's function can be written, using Knuth's up-arrow notation, as followed:
$$f(a,n,m)=f(a,f(a,m,n-1),m-1)$$
It should be $f(a,n,m)=f(a,f(a,n-1,m),m-1)$. The structure of your sentence is unclear. Also, it's "as follows", not "as followed".
 
  • #17
I have to read again about Knuth's up-arrow notation to understand it better.

After the "History" part from post #5, could I write the following? (The part about Knuth's up-arrow notation still has to be added.)

Ackermann's original function is the mapping $\phi : \mathbb{N}^3 \rightarrow \mathbb{N}$ which is defined recursively as follows:
$$\phi(a,b,0)=a+b \\ \phi(a,0,n+1)=\alpha (a,n) \\ \phi(a,b+1, n+1)=\phi(a, \phi(a,b,n+1),n) \\ \text{ where }\alpha(a,n)=\left\{\begin{matrix}
0 & , n=0\\
1 & , n=1\\
a & , n>1
\end{matrix}\right.$$

The simplified form of Rózsa Péter, which is today known as Ackermann's function, is the mapping $A: \mathbb{N}^2 \rightarrow \mathbb{N}$ which is defined recursively as follows:
$$A(m,n)=\left\{\begin{matrix}
n+1 & , m=0\\
A(m-1,1) & , m>0 , n=0\\
A(m-1,A(m,n-1))& ,m>0,n>0
\end{matrix}\right.$$

Ackermann's function grows astonishingly fast, as we can see in the table of values:
$$ \begin{matrix}
m \setminus n & | & 0 & 1 & 2 & 3 & 4 & 5 & \dots & y \\
- & - & - & - & - & - & - & - & - & - \\
0 & | & 1 & 2 & 3 & 4 & 5 & 6 & \dots & y+1 \\
1 & | & 2 & 3 & 4 & 5 & 6 & 7 & \dots & y+2 \\
2 & | & 3 & 5 & 7 & 9 & 11 & 13 & \dots & 2y+3 \\
3 & | & 5 & 13 & 29 & 61 & 125 & 253 & \dots & 8 \cdot 2^{y}-3\\
4 & | & 13 & 65533 & & & & & \dots & \underset{y+3 \text{ twos}}{\underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2}}}}}}}-3\\
5 & | & 65533 & & & & & & \dots & \\
\dots & | & & & & & & & & \\
\end{matrix} $$
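The rows of the table can be checked against the closed forms in the last column, and the digit count of $A(4,2)=2^{65536}-3$ can be verified too (a quick sketch; the row formulas are read off the table):

```python
# Row-wise closed forms read off the table.
rows = {
    0: lambda y: y + 1,
    1: lambda y: y + 2,
    2: lambda y: 2 * y + 3,
    3: lambda y: 8 * 2 ** y - 3,
}

def A(m, n):
    # Direct recursion, fine for the small entries shown.
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

for m, f in rows.items():
    for y in range(5):
        assert A(m, y) == f(y)

# A(4,2) = 2^65536 - 3 really does have 19729 digits.
assert len(str(2 ** 65536 - 3)) == 19729
```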

Even for small inputs the function gives huge results. For example, $A(4,2)$ already has $19{,}729$ digits!

After that, to show that Ackermann's function is recursive, do we have to show that it is $\mu-$recursive?

According to the book that I'm reading, we can define the $\mu-$recursive functions inductively, as follows:
  1. The constant, projection, and successor functions are all $\mu-$recursive.
  2. If $g_1, \dots , g_m$ are $n-$variable $\mu-$recursive functions and $h$ is an $m-$variable $\mu-$recursive function, then the composite function $f=h \circ (g_1, \dots , g_m)$ is also $\mu-$recursive.
  3. If $g$ and $h$ are $n-$ and $(n+2)-$variable $\mu-$recursive functions, then the function $f$ defined from $g$ and $h$ by primitive recursion is also $\mu-$recursive.
  4. If $g$ is a total $(n+1)-$variable $\mu-$recursive function, then the function $f$ defined from $g$ by unbounded minimalization is also $\mu-$recursive.
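As an illustration of clauses 3 and 4, the primitive-recursion and unbounded-minimalization operators can be written as higher-order functions. A sketch (operator and helper names are mine):

```python
def prim_rec(g, h):
    """Clause 3: f(xs, 0) = g(xs); f(xs, y+1) = h(xs, y, f(xs, y))."""
    def f(*args):
        *xs, y = args
        acc = g(*xs)
        for i in range(y):
            acc = h(*xs, i, acc)
        return acc
    return f

def mu(g):
    """Clause 4: f(xs) = least y with g(xs, y) = 0 (may loop forever)."""
    def f(*xs):
        y = 0
        while g(*xs, y) != 0:
            y += 1
        return y
    return f

# Addition by primitive recursion: add(x, 0) = x, add(x, y+1) = add(x, y) + 1.
add = prim_rec(lambda x: x, lambda x, i, acc: acc + 1)
assert add(4, 3) == 7

# Minimalization: least y with y*y >= x, via g(x, y) = 0 iff y*y >= x.
isqrt_up = mu(lambda x, y: 0 if y * y >= x else 1)
assert isqrt_up(10) == 4
```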

What is the difference between a recursive and a $\mu-$recursive function? (Wondering)

To prove that $A$ is not primitive recursive, we need the following properties concerning the values of $A$.

  1. $A(x,y)>y$.
  2. $A(x,y+1)>A(x,y)$.
  3. If $y_2>y_1$, then $A(x,y_2)>A(x,y_1)$.
  4. $A(x+1, y) \geq A(x,y+1)$.
  5. $A(x,y)>x$.
  6. If $x_2>x_1$, then $A(x_2, y)>A(x_1, y)$.
  7. $A(x+2, y)>A(x,2y)$.
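These seven properties can be spot-checked on small arguments before proving them. A throwaway sketch (ranges kept small so plain recursion suffices; property 3 follows from property 2):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

for x in range(3):
    for y in range(4):
        assert A(x, y) > y                      # property 1
        assert A(x, y + 1) > A(x, y)            # property 2 (implies 3)
        assert A(x + 1, y) >= A(x, y + 1)       # property 4
        assert A(x, y) > x                      # property 5
        assert A(x + 1, y) > A(x, y)            # property 6 (one step)
        if x + 2 <= 3:
            assert A(x + 2, y) > A(x, 2 * y)    # property 7
```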

Thus the value of $A$ is greater than that of either of its arguments, $A$ is strictly monotonic in each argument, and the value of $x$ has a greater influence on the value of $A$ than does the value of $y$.

We will prove that Ackermann's function is not primitive recursive by showing that it "grows faster" than any primitive recursive function. That means that we need a precise way of comparing the "growth rate" of the two-variable function $A$ with that of an arbitrary $n-$variable function. What we shall attempt to do is show that for any given $n-$variable primitive recursive function $f$, there exists a natural number $k$ such that $$A(k, \max (x_1, \dots , x_n)) > f(x_1, \dots , x_n) \tag {*}$$ for all values of $x_1, \dots , x_n$.
The proof is accomplished by induction on the number of compositions and primitive recursions needed to define the function $f$. In order to carry out the induction step, we need two auxiliary results. The first ensures that if the functions $g_1, \dots , g_m$ and $h$ satisfy $(*)$, then so does the function $f$ obtained from them by functional composition; the second does the same for primitive recursion.

Lemma 1. Let the $n-$variable function $f=h \circ (g_1, \dots , g_m)$ be obtained from the functions $g_1, \dots , g_m$ and $h$ by composition. Assume the existence of natural numbers $k_1, \dots , k_m$, and $k_0$ such that $$A(k_i, \max (x_1, \dots , x_n)) > g_i (x_1, \dots , x_n) \text{ for } 1 \leq i \leq m\\ \text{ and } \\ A(k_0, \max (y_1, \dots , y_m )) > h(y_1, \dots , y_m)$$ for all $x_1, \dots , x_n$ and $y_1, \dots , y_m$. Define $k$ to be the natural number $\max (k_0, k_1, \dots , k_m)+2$. Then $A(k, \max (x_1, \dots , x_n)) > f(x_1, \dots , x_n)$ for all $x_1, \dots , x_n$.

Lemma 2. Let the $(n+1)-$variable function $f$ be defined by primitive recursion from the $n-$variable function $g$ and the $(n+2)-$variable function $h$, so that $$f(x_1, \dots , x_n, 0)=g(x_1, \dots , x_n) \\ f(x_1, \dots , x_n, y+1)=h(x_1, \dots , x_n , y, f(x_1, \dots , x_n, y))$$ Assume the existence of natural numbers $k_g$ and $k_h$ such that $$A(k_g, \max (x_1, \dots , x_n)) > g(x_1, \dots , x_n) \\ \text{ and } \\ A(k_h, \max (x_1, \dots , x_n , y , z)) > h(x_1, \dots , x_n , y, z)$$ for all $x_1, \dots , x_n, y$ and $z$. Define $k$ to be the natural number $\max (k_g, k_h)+3$. Then $A(k, \max (x_1, \dots , x_n , y)) > f(x_1, \dots , x_n , y)$ for all $x_1, \dots , x_n, y$.

Theorem 1. For each $n-$variable primitive recursive function $f$, there exists a natural number $k$ such that $A(k, \max (x_1, \dots , x_n)) > f(x_1, \dots , x_n)$, for all $x_1, \dots , x_n$.

Proof. By induction on the number of compositions and primitive recursions needed to define $f$. We use $\hat{x}$ to denote $\max (x_1, \dots , x_n)$.

Basis. If the derivation of $f$ requires no compositions or primitive recursions, three cases are possible.
  1. If $f$ is the constant function whose value is $c$, choose $k=c$. Property $5$ then guarantees that $A(k, \hat {x})=A(c, \hat{x})>c=f(x_1, \dots , x_n)$.
  2. If $f$ is the projection function whose value is $x_i$, choose $k=0$. Then $A(k,\hat{x})=A(0,\hat{x})=\hat{x}+1>x_i=f(x_1, \dots , x_n)$.
  3. If $f$ is the successor function, choose $k=1$. Then $A(k,x)=A(1,x)>A(0,x)=x+1=f(x)$.

Induction step. Assume the statement of the theorem to be true for all functions requiring $w$ or fewer compositions and primitive recursions. Let $f$ be a function requiring a total of $w+1$ compositions and primitive recursions. Two cases are possible.
  1. If $f$ is derived from functions $g_1, \dots , g_m$ and $h$ by composition, the induction hypothesis must apply to each of $g_1, \dots , g_m$, and $h$. Lemma $1$ then guarantees the existence of a number $k$ such that $A(k,\hat{x})>f(x_1, \dots , x_n)$.
  2. If $f$ is derived from the functions $g$ and $h$ by primitive recursion, the induction hypothesis must apply to $g$ and $h$. Lemma $2$ then guarantees the existence of a number $k$ such that $A(k, \hat{x})>f(x_1, \dots , x_n)$.
Theorem $1$ provides a formal expression of the notion that $A$ "grows faster" than any primitive recursive function. It is now a simple matter to establish:

Theorem 2. Ackermann's function is not primitive recursive.

Proof. Assume that Ackermann's function is primitive recursive. Then according to Theorem 1, there must exist a natural number $k$ such that $$A(k, \max (x,y)) > A(x,y)$$ for all $x$ and $y$. Setting $x=y=k$ then yields the contradiction $$A(k,k) > A(k,k)$$ from which we conclude that $A$ cannot be primitive recursive.
Is this correct? Could I improve something? (Wondering)
 

