Exploring Undefinable Numbers: Are They Useful?

In summary, the conversation discusses the concept of "expressible" numbers, defined as numbers that can be specified with a finite amount of information. It is argued that the set of "expressible" numbers is countable, meaning that there are real numbers that cannot be expressed in any way with a finite amount of information. The question is raised as to whether these "non-expressible" numbers are useful in any way. The conversation also touches on the concepts of computable and definable numbers, as well as Skolem's paradox and the difficulties of defining sets from an external perspective.
  • #1
Warp
This is, perhaps, more a question of philosophy of math rather than math itself.

While it may be trivial to most people fluent in math, it was a bit surprising for me to learn recently that the set of algebraic numbers is countable. However, I quickly realized why that is: Each algebraic number can be expressed with a finite polynomial, and the set of all possible finite polynomials (and, more generally, the set of all possible finite strings) is countable. Therefore the set of all algebraic numbers is countable.
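That counting argument can be made concrete. Below is a minimal sketch (the function name and the "height" measure are my own choices, not standard notation): give each integer-coefficient polynomial a finite "height" (its degree plus the sum of the absolute values of its coefficients). Each height class is finite, so listing the classes one after another enumerates every such polynomial, and hence the algebraic numbers among their roots form a countable set.

```python
from itertools import product

def polys_of_height(h):
    """All integer-coefficient polynomials, written as coefficient tuples
    (c0, c1, ..., cd) with leading coefficient cd != 0, whose "height"
    (degree d plus sum of |ci|) equals h.  Each height class is finite,
    and the union over h = 1, 2, 3, ... covers every such polynomial."""
    out = []
    for deg in range(h):                        # the degree alone cannot exceed h
        budget = h - deg                        # what the |coefficients| must sum to
        for coeffs in product(range(-budget, budget + 1), repeat=deg + 1):
            if coeffs[-1] != 0 and sum(abs(c) for c in coeffs) == budget:
                out.append(coeffs)
    return out
```

Concatenating the lists for heights 1, 2, 3, ... enumerates all these polynomials; since each polynomial has finitely many roots, the algebraic numbers are a countable union of finite sets, hence countable. For instance, x^2 - 2 (a witness for sqrt(2)) shows up as the tuple (-2, 0, 1) at height 5.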

Then I realized that this can be further extended. The set of all numbers that can be defined with a finite closed-form expression is likewise also countable. The same is true for all numbers that can be defined with a finite analytical expression.

In fact, we can generalize this the most by saying: Let's define a number as "expressible" if it can be expressed (unambiguously) with a finite amount of information. (It doesn't matter if you need a special formalism to formulate the expression, as long as you can define said formalism with a finite amount of information. Basically, the formalism becomes part of the definition of the number.) The set of all "expressible" numbers is countable.

Now, I understand perfectly that this set is most probably ill-defined. However, let's assume for the moment that it is well-defined.

This means that there are real numbers that cannot be expressed in any way with a finite amount of information. (Because the set of all "expressible" numbers is countable, there must be real numbers that do not belong to that set.)

The (mostly philosophical) question becomes: Are these "non-expressible" numbers useful in any way?

They cannot be the answer to any problem (because a number that's the answer to a problem belongs to the "expressible" set: It can be defined by the problem in question.) These numbers cannot be used in any way, at least not individually, because there's no way you can define them. At most you can define them in (uncountably) infinite sets, but never individually.

If they can't be the answer to any problem, are they useful?
 
  • #2
Warp said:
This is, perhaps, more a question of philosophy of math rather than math itself. […] If they can't be the answer to any problem, are they useful?

Wouldn't pi be considered a non-expressible number? It's certainly useful. Or e? Another useful number.
 
  • #3
jedishrfu said:
Wouldn't pi be considered a non-expressible number? It's certainly useful. Or e? Another useful number.
What Warp has discovered is the concept of the computable numbers. Wiki link: http://en.wikipedia.org/wiki/Computable_number. (Or possibly he's discussing the concept of the definable numbers, but that's on a bit flimsier ground than is the concept of computable numbers.) Both pi and e are computable numbers. For example, [itex]e=\sum_{n=0}^{\infty}\frac1{n!}[/itex]. That expression is only 31 LaTeX characters long, so obviously finite.

Warp is correct that the set of computable numbers is countable. That means almost all of the reals are uncomputable! For an example of a non-computable number, see http://en.wikipedia.org/wiki/Chaitin's_constant.
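As an aside, that series for e can be run directly. A sketch (the function name is mine) that evaluates partial sums of [itex]\sum 1/n![/itex] in exact rational arithmetic, illustrating what "computable" means in practice: a finite program that reaches any desired accuracy.

```python
from fractions import Fraction

def e_partial_sum(terms):
    """Partial sum of e = sum_{n=0}^inf 1/n!, in exact rational arithmetic.
    The tail after `terms` terms is below 2/terms!, so the accuracy of any
    partial sum is easy to bound."""
    total = Fraction(0)
    factorial = 1                 # running value of n!
    for n in range(terms):
        if n > 0:
            factorial *= n
        total += Fraction(1, factorial)
    return total
```

Twenty terms already pin e down far beyond double precision, since the tail is smaller than 2/20!.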
 
  • #4
You are basically facing Skolem's paradox here. Succinctly, Skolem's paradox is that there exists a countable model of set theory. This means that although there are uncountable sets, they may not be uncountable from "outside". From "within" set theory, there may be no surjection from the set of natural numbers onto some other set (meaning that set is uncountable), yet it is perfectly possible for a model to contain two countable sets for which there exists no bijection from "within" (you can't define one from the axioms of set theory). You basically run into issues when defining sets in a meta-language. You are then not really working in set theory; you are looking at axiomatic set theory from the outside.

And no, you can't really define the set of definable real numbers. In fact, the very notion of a definable number is not expressible in set theory. Although, looking from the "outside", there are only countably many such numbers, this doesn't yield a sensible concept of countably many such numbers within set theory.
 
  • #5
D H said:
What Warp has discovered is the concept of the computable numbers. […] For an example of a non-computable number, see http://en.wikipedia.org/wiki/Chaitin's_constant.

Interesting, so while pi can't be known, it can be computed to any level of accuracy with a finite set of calculations... I was thinking the OP was excluding numbers computed by infinite series, like pi.
 
  • #6
What do you mean by "pi can't be known"? Sure we know pi.
We do not know all of its digits in a decimal representation (and we never will, because there are infinitely many of them), but that is a completely different thing.
Any digit in this representation can be calculated with finite effort.
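That claim (every digit of pi is reachable with finite effort) is witnessed by spigot algorithms. A minimal sketch using Gibbons' unbounded spigot, which streams the decimal digits of pi one at a time using only integer arithmetic:

```python
def pi_digits():
    """Gibbons' unbounded spigot algorithm: yields the decimal digits of pi
    (3, 1, 4, 1, 5, 9, ...) one by one.  Any particular digit is produced
    after finitely many steps, so each digit needs only finite effort."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n                       # the next digit is certain: emit it
            q, r, n = (10 * q, 10 * (r - n * t),
                       (10 * (3 * q + r)) // t - 10 * n)
        else:                             # not certain yet: consume another term
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
```

Pulling ten values from the generator gives 3, 1, 4, 1, 5, 9, 2, 6, 5, 3; the generator never terminates on its own, matching the fact that the decimal expansion is infinite even though each digit is finitely computable.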
 
  • #7
[itex]\pi[/itex] certainly is "definable": it can be defined as "the ratio of the circumference of a circle to its diameter", or as "the smallest positive value of x such that sin(x) = 0", or as the sum of any number of convergent series. Similarly, [itex]e[/itex] can be defined as [itex]\lim_{n\to\infty} \left(1+ \frac{1}{n}\right)^n[/itex].

Perhaps you are not clear on what is meant by a number being "definable" or "undefinable".
 
  • #8
My apologies, I didn't intend to create any confusion here. I thought the OP's discussion of finite calculations for expressible numbers excluded pi and e, but I have since discovered otherwise. Thanks for the clarification. I realize that pi is definable, that we can never represent it as a finite set of calculations, and that's what I was keying on in the OP's post.
 
  • #9
jedishrfu said:
My apologies, I didn't intend to create any confusion here. I thought the OP's discussion of finite calculations for expressible numbers excluded pi and e, but I have since discovered otherwise. Thanks for the clarification. I realize that pi is definable, that we can never represent it as a finite set of calculations, and that's what I was keying on in the OP's post.

Please read my original post carefully. I started with the realization that the set of algebraic numbers is countable. Algebraic numbers include things like the square root of 2. While the decimal expansion of that number is infinite, the set of all algebraic numbers is still countable, and that's because you can define each algebraic number with a finite polynomial.

Then I realized that I could expand this further and include things like the set of numbers that are definable by closed-form expressions (this set contains a lot of numbers that are not algebraic, including pi.)

The ultimate conclusion is that the set of all numbers that can be defined (with a finite definition) is countable for the exact same reason, and this means that the rest of the real numbers cannot be defined with any expression (no matter what formalism you use). Only these "definable" numbers are "useful" in the sense that only they can be the answer to any problem you could ever posit.

(Of course, technically speaking it may be impossible to define the exact set of "definable numbers", but it's undeniable that there are uncountably many real numbers that cannot be defined with a finite amount of information, and thus cannot be the answer to any problem.)
 
  • #10
D H said:
Warp is correct that the set of computable numbers is countable. That means almost all of the reals are uncomputable! For an example of a non-computable number, see http://en.wikipedia.org/wiki/Chaitin's_constant.

This is a very interesting point, as it raises (at least in my mind) the question of whether that constant would fall into my hypothetical set of "definable numbers" or not.

As I see it, the answer to "can you give me an example of a non-definable number?" ought to be "no" for obvious reasons: If you could, then you would have just defined the number in question, and thus it belongs to the "definable" set.

However, if I ask you "can you give me an example of an uncomputable number", you can answer "yes, for example Chaitin's constant".

But I suppose it comes down to what is meant by "define a number". Clearly Chaitin's constant has been "defined" by a finite amount of information, but on the other hand that definition doesn't help you know what that number actually is.

I suppose I'll have to settle for the set of all computable numbers, which might be the "largest" well-defined countable subset of the real numbers in which each individual number can be expressed with a finite amount of information (although, since I'm not a mathematician, I have no idea if this is correct). I am assuming that the set of all computable numbers is countable (where "computable" means "can be computed with a finite algorithm").
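A concrete witness for "can be computed with a finite algorithm": the short program below, given any rational tolerance, returns a rational within that tolerance of sqrt(2). (A sketch; the function name and stopping rule are my own choices.)

```python
from fractions import Fraction

def sqrt2_within(eps):
    """Finite algorithm approximating sqrt(2): Newton's iteration on
    x^2 - 2, carried out in exact rational arithmetic.  The iterate stays
    above 1, so |x - sqrt(2)| = |x^2 - 2| / (x + sqrt(2)) < |x^2 - 2|,
    and the stopping test below guarantees the requested accuracy."""
    x = Fraction(3, 2)
    while abs(x * x - 2) > eps:
        x = (x + 2 / x) / 2       # Newton step; stays rational throughout
    return x
```

One finite program, runnable to any accuracy you name: that is exactly the sense in which sqrt(2) (or pi, or e) is a computable number, and why there are only countably many of them, one per program.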

So perhaps a more well-defined version of my original question: Are uncomputable numbers "useful"?
 
  • #11
Warp said:
Please read my original post carefully. […] it's undeniable that there are uncountably many real numbers that cannot be defined with a finite amount of information, and thus cannot be the answer to any problem.

Be careful: algebraic numbers are the roots of nonzero polynomials with integer (equivalently, rational) coefficients.
 
  • #12
Warp said:
I suppose I'll have to settle with the set of all computable numbers […] I am assuming that the set of all computable numbers is countable (assuming that "computable" means "can be computed with a finite algorithm".)

There are many numbers which can be defined "by a finite amount of information" but which are not computable. If you restrict to the set of computable reals, several important theorems of analysis fail, for example the intermediate value theorem. Although you can define a sequence recursively which converges to a particular number in question, that doesn't mean the sequence is computable. You do, however, have analogous results for computable numbers and computable functions in constructive analysis.
 
  • #13
Warp said:
I suppose I'll have to settle with the set of all computable numbers, which might be the "largest" countable subset of real numbers […]
No, I think you were right to observe that the set of definable numbers is countable:
the set of putative definitions of numbers (in a finite alphabet) is countable,
thus the set of definable numbers is countable (however that's defined),
as is the (smaller) set of computable numbers.
An aside: what is the smallest positive integer which cannot be defined in English in a phrase of fewer than twenty words?
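The first step of that chain, that finite strings over a finite alphabet are countable, has an explicit bijection, sometimes called bijective base-k numeration. A sketch (the function name is mine):

```python
def nth_string(n, alphabet="ab"):
    """Bijective base-k numeration over the given alphabet:
    0 -> "", 1 -> "a", 2 -> "b", 3 -> "aa", 4 -> "ab", 5 -> "ba", ...
    Every finite string appears at exactly one index, so the set of all
    finite strings over the alphabet is countable."""
    k = len(alphabet)
    out = []
    while n > 0:
        n -= 1                    # shift makes the encoding bijective (no leading-zero issue)
        out.append(alphabet[n % k])
        n //= k
    return "".join(reversed(out))
```

Any scheme of finite "definitions" is a set of such strings, so it inherits countability from this enumeration, whatever the alphabet.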
 
  • #14
121,121,121,121?

read as

one hundred twenty one billion, one hundred twenty one million, one hundred twenty one thousand, and one hundred twenty one.

Another possible answer is 1 if read as

Eleven minus one minus one minus one...minus one

And so a related question would be what is the smallest positive integer which cannot be defined in English in a phrase of fewer than twenty words where no words are repeated?
 
  • #15
jedishrfu said:
121,121,121,121? […] what is the smallest positive integer which cannot be defined in English in a phrase of fewer than twenty words where no words are repeated?

No, you misunderstand. It's not whether it can be written out in the usual way in twenty words, but whether there is any phrase that will completely define it in twenty words. Your first number above could be "eleven squared, times a thousand and one, times a million and one". Some other number might be defined as "the ninety-third prime".
 
  • #16
haruspex said:
No, you misunderstand. It's not whether it can be written out in the usual way in twenty words, but whether there is any phrase that will completely define it in twenty words. […]

One has to be careful here. A number being "defined by a finite number of words" and the like is a potentially ill-conceived notion, depending on what you mean by it precisely. The reason is that one may fall into a Skolem-like paradox by saying "let x be the least integer which cannot be described in 100 characters or fewer". We have then just defined x in 100 characters or fewer. And since most integers cannot be expressed in this way (there is a limit to how much information 100 characters can contain), x is (naively) well-defined by induction. Note that I'm using "character" instead of "word", because a word can potentially be an arbitrarily long concatenation of characters depending on what meaning you give it (if we let the decimal expansion of an integer count as a "word", this trivially makes all integers expressible in words). So x is an integer which cannot be defined in 100 characters or fewer despite its definition: a contradiction.



We can similarly find a contradiction when (naively) considering the set of reals definable by words. This set must be countable, due to the countable nature of finite collections of words, but by a Cantor diagonalization argument we may (in a finite number of words) define a real number not contained in this set: a contradiction.
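The diagonalization step can be sketched concretely: given any listing of digit sequences, build a sequence that differs from the i-th one in the i-th place. (A sketch; the name is mine. Digits 5 and 6 are used to dodge the 0.999... = 1.000... ambiguity of decimal expansions.)

```python
def anti_diagonal(digit_rows):
    """Cantor's diagonal argument on a (prefix of a) list of decimal digit
    sequences: the result differs from row i at position i, so it cannot
    equal any row in the list."""
    return [5 if row[i] != 5 else 6 for i, row in enumerate(digit_rows)]
```

For example, anti_diagonal([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) returns [5, 6, 5], which disagrees with each listed row on the diagonal; run against any purported enumeration of all reals, the same recipe produces a real the enumeration missed.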

This shows again the problems that arise when we transcend the boundary of our formal axiomatic system into a metalanguage (allowing integers to be defined by words and characters) and then refer to this metalanguage inside our axiomatic system. This is analogous to the reason why we can't have a set of definable numbers. So while in the metalanguage the definable numbers may form a countable set, and thus provide us with a countable model for the real numbers, this is a property which cannot be expressed in set theory (where the reals are necessarily uncountable).
 
  • #17
haruspex said:
No, you misunderstand. It's not whether it can be written out in the usual way in twenty words, but whether there is any phrase that will completely define it in twenty words. […]

If you found the smallest such number then probably, if this problem got famous enough, someone would give that number a name, and then it would fail its own test! E.g. a googolplex is a very big number, but expressible as one English word; likewise Graham's number, a very large number indeed.

@OP: I don't really know much about this field, but I did find this wiki article on "definable real numbers" http://en.wikipedia.org/wiki/Definable_number
 
  • #18
Matterwave said:
If you found the smallest such number then probably, if this problem got famous enough, someone would give that number a name, and then it would fail its own test!
You don't need to name it. How many words did I take to define it?
 
  • #19
haruspex said:
You don't need to name it. How many words did I take to define it?

Indeed, disregardthat seems to have picked up on it. :)
 
  • #21
I think a more precise definition of "definable" could be to actually have infinitely many "orders" of "definability" (see http://en.wikipedia.org/wiki/Berry_paradox#Resolution).

A first-order definition would be a definition in terms of only the formal symbols in the system. For example, 2 could be defined as 1 + 1. Or pi could be defined as [itex]2 \int_{-1}^{1}\sqrt{1-x^2}\ dx[/itex]. A definition such as "the smallest number that can be defined in less than 100 English words" would not be precise enough (which order of definability are we talking about?) and would certainly not be a first-order definition.
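That integral is itself a finite description from which pi can be recovered numerically. A crude midpoint-rule sketch (the function name is mine):

```python
import math

def pi_by_quadrature(n):
    """Midpoint-rule estimate of 2 * integral_{-1}^{1} sqrt(1 - x^2) dx,
    i.e. the area of the unit disk, which equals pi."""
    h = 2.0 / n                       # width of each subinterval of [-1, 1]
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h      # midpoint of the i-th subinterval
        total += math.sqrt(1.0 - x * x)
    return 2.0 * h * total
```

With n = 100000 subintervals the estimate agrees with pi to several decimal places, illustrating that a first-order definition doubles as a recipe for computing the number it defines.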

A second-order definition would be a definition in terms of the formal symbols and in terms of the collection of first-order definable numbers. That is, a second-order definition can invoke the set of all first-order definitions in its own definition, which would allow an expression like "the smallest number that cannot be defined with a first-order definition in less than 100 words". That would be a second-order definition, and thus it does not lead to a contradiction since, although this definition clearly has fewer than 100 words, it is not a first-order definition.

And so on, we could say an nth-order definition can use the set of all (n-1)th-order definitions in its own statement. Then the original problem could be reformulated as saying that the set of all first-order definable numbers is countable but the reals are uncountable, and thus some real numbers are not first-order definable. Those numbers will never be the answer to any equation or formula that can be written with "math symbols", and yet they can still be defined (using a higher-order definition).
 
  • #22
Perhaps we could work out a theorem relating words to numbers with some rules like:

0) N is the number of English words (or words of some other language of choice)
1) the number can't be named unless its name is N words long
2) the number is a positive integer
3) a minimum of N words in English is needed to describe it
4) it may use (or be limited to use) any named math expressions and math operators like plus, minus, multiplied by, divided by, square root of, sine / cosine / ... of, x to the power of, natural log of, base ten log of, up-arrow, ...
5) ...
 
  • #23
Boorglar said:
I think a more precise definition of "definable" could be to actually have infinitely many "orders" of "definability" (see http://en.wikipedia.org/wiki/Berry_paradox#Resolution). […] Those numbers will never be the answer to any equation or formula that can be written with "math symbols", and yet they can still be defined (using a higher-order definition).

While it may seem like a valid idea, I fail to see any practical use. We still can't say anything about formulas; to do that we would require a whole new axiom schema reserved for first-order formulas (and so on). Thus we have to add layer upon layer of new axioms each time we want to transcend to a higher degree of abstraction. And what would these axioms be? Even to do analysis on numbers defined in order 2 would require some very strong (possibly inconsistent) axioms.

As a side note, we would not (in the metalanguage for formulas of order n) exceed countability.
 
  • #24
Boorglar said:
I think a more precise definition of "definable" could be to actually have infinitely many "orders" of "definability" (see http://en.wikipedia.org/wiki/Berry_paradox#Resolution). […] Those numbers will never be the answer to any equation or formula that can be written with "math symbols", and yet they can still be defined (using a higher-order definition).

It looks like you're describing an informal version of what model theorists call the "definable closure" of a structure. It's basically an abstraction of the algebraic closure from algebra.
 
  • #25
I suppose this is true. But for the purposes of the question, we can limit ourselves to two levels of axioms/symbols. The first level would be the set of usual symbols of arithmetic, analysis and set theory together with the axioms of set theory. The second level would be a meta-language, where the first-order symbols are treated as objects. The second-order axioms would be rules for how each symbol can be combined with the others to form meaningful formulas (a "grammar" in a sense). So using the second-order system, we can talk about every possible formula expressible in the first-order system. This would include every mathematical expression in arithmetic, analysis and set theory.
 
  • #26
Using this ordered structure, how would we meet the criterion of N words?

Say a number Z is defined as "X plus Y", where defining X needed 10 words and Y needed 5 words. Do we say Z needed 3 words, or do we say Z needed 10+5+1 words?
 
  • #27
Since X and Y themselves are not words in our agreed-upon initial set of words, I would say this definition of Z requires at least 10+5+1 words. Or create a higher-order language in which X and Y are words, and then Z would be defined with 3 words (but that would be a higher-order definition).

EDIT: It doesn't actually have to be higher-order, just a different language of the same order with 2 more symbols added to it.
 
  • #28
The meaning of the phrase "the set of numbers that can be represented by a finite string of symbols" is ambiguous until we establish a system for interpreting a finite string of symbols as a number. I think the claim that there are numbers that cannot be represented by a finite string of symbols is correct in the sense that once you pick a system for interpreting numbers as finite strings, you'll be missing some of the numbers. But the claim that there are numbers that cannot be represented by a finite string of symbols given we have freedom to pick any system of representing numbers seems too grandiose.

Trying to discuss this mathematically might be completely paradoxical. You can say "let S be the set of numbers that cannot be represented by a finite string of symbols", but you can't say "let X be an element of S", because you just represented it by the finite string "X". Perhaps that's a pedestrian example of what disregardthat said about settling matters within set theory.
 
  • #29
Stephen Tashi said:
The meaning of the phrase "the set of numbers that can be represented by a finite string of symbols" is ambiguous until we establish a system for interpreting a finite string of symbols as a number.

When I used the term "a finite amount of information" I was kind of implying that the formalism, ie. the definition of the system used to interpret the information, is part of the representation of the number.

I do understand better now, however, that what I'm talking about is pretty close to the concept of "computable": if the definition of the number (using finite information) is basically a set of instructions that define the number, or even tell how to construct the decimal representation of the number, then that definition is pretty much a program/algorithm that can be used to calculate the number. In other words, I'm talking about computable numbers.

I have the intuition (as I'm no mathematician nor understand these things very well), that trying to go beyond that goes to the realm of ambiguity and impossibility. (Perhaps it's related, or conceptually similar, to how the consistency of ZFC cannot be proved within ZFC itself.)

I like to ponder the idea of a set of all "definable" numbers (one that goes beyond just computable numbers, i.e. includes any number that can be described in any manner, even if it's not computable), but I understand that the concept may be too ambiguous to define mathematically.

But still I find it fascinating that, no matter what, you can't define all individual real numbers so that each number is defined by a finite amount of information. Even more fascinating is that, apparently, there's no way of telling where that boundary is. There's no way of telling exactly which numbers belong to this hypothetical "definable" set. The "non-definable" numbers are elusive. (Perhaps I'm getting carried away too much by the philosophical thoughts this brings.)

Stephen Tashi said:
But the claim that there are numbers that cannot be represented by a finite string of symbols given we have freedom to pick any system of representing numbers seems too grandiose.

Given that we can define a representation system only with a finite amount of information (infinite formalisms would change things, because then we would be talking about sets of infinite strings, which can be uncountable; in practice a formalism needs to be finite because we can't read an infinite number of axioms to understand it), doesn't that restrict all possible formalisms, and thus all numbers that can be represented with said formalisms, to a countable set?

Wouldn't it, thus, be contradictory to postulate that you can represent all possible real numbers (even if we can use any formalism we want), given that the set of reals is uncountable?
 
  • #30
Warp said:
Given that we can define a representation system only with a finite amount of information […] doesn't it restrict all possible formalisms, and thus all numbers that can be represented with said formalisms, to a countable set?

ZFC uses axiom schemas, hence does not have a finite number of axioms.
 
  • #31
Warp said:
doesn't it restrict all possible formalisms, and thus all numbers that can be represented with said formalisms, to a countable set?

Suppose we think of "the real numbers" as a reality existing outside of mathematics - think of them as points on a line (the intuitive idea of a line, not a formally defined one). When we represent some of these points as numbers in the usual manner, we can imagine picking any point on the line we wish and calling it "zero". To talk about "the set of numbers defined by all possible systems of representing them by finite strings of symbols" raises the question of whether two copies of the same system of representation necessarily represent the same set of numbers. (For example, do two copies of the usual representation of numbers refer to the same "zero"?)

Suppose we ditch the Platonic idea of a number line existing outside of mathematics. We still need some way to determine whether string a1 in system S1 means the same number as string a2 in system S2. So not only do we need to talk about all possible systems of representing numbers as strings; we also need to talk about all possible ways of relating strings in one system to strings in another.
 
  • #32
Some mathematicians disliked real numbers in the past (and who knows, there may still be some today) because there are "too many" of them. I'm starting to understand them.
 

