Logic & Computation: Same Coin? How?

In summary: a statement P is provable exactly when some algorithm can find a proof of it, for example by enumerating every finite string of symbols and checking whether it constitutes a valid proof of P. Such a search is not a decision procedure, though: if no proof of P exists, the algorithm may keep evaluating strings indefinitely.
  • #1
heman
Are logic and computation two sides of the same coin? If yes, how?
 
  • #2
You need logic to do computation... but are you talking about the fields of study?

Logic is more about truths and proofs,
whereas computation (or computability) is about (1) whether you can design an algorithm to solve a problem and (2) whether that algorithm is reasonably efficient (P vs. NP).
 
  • #3
Isn't logic the set of rules that direct computation? Something like grammar and language, maybe.
 
  • #4
It depends on the definition of logic. In my view, logic is the ability to deduce facts from raw data.
You can use computation to deduce facts, but you can't deduce all facts by computation (e.g. the tiling problem).
 
  • #5
An algorithm maps an input to an output. The set of all inputs that the algorithm can receive is that algorithm's domain, and the set of outputs is the range. For every algorithm A there is an equivalent mathematical function f(x) where x is the number given by the binary representation of the input.
For example, take the sorting problem, whose input is a sequence of n numbers (each with a maximum of k bits). If you take the binary representations of these n numbers and append them to each other, forming a string of k*n bits, you now have your input x. Now a function f(x) only needs to produce a y, which is the string formed by appending the bits of the numbers in sorted order.
For every sorting algorithm there is an equivalent function f(x). By equivalent I mean that f(x) maps the same inputs to the same outputs as the sorting algorithm. From this perspective, every sorting algorithm is equivalent to the function f(x). The difference between algorithms is only in the speed of execution, which traditional mathematics doesn't really care about. There are infinitely many such f(x), but notice that an ordinary mathematical function doesn't have flow control, which means that every time you need to get a y from an x, the whole function f(x) needs to be computed (you can't skip portions of f(x)).
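As a rough illustration, here is a toy Python sketch of this encoding (my own example, with an assumed bit width k per number): encode packs the sequence into a single integer x, and f is the mathematical function that any sorting algorithm computes.
Code:
K = 8  # assumed bit width per number

def encode(nums, k=K):
    # append the k-bit representations, forming a k*len(nums)-bit string x
    x = 0
    for num in nums:
        x = (x << k) | (num & ((1 << k) - 1))
    return x

def decode(x, n, k=K):
    # recover the n numbers from x
    nums = []
    for _ in range(n):
        nums.append(x & ((1 << k) - 1))
        x >>= k
    return list(reversed(nums))

def f(x, n, k=K):
    # the function equivalent to a sorting algorithm: y = f(x)
    return encode(sorted(decode(x, n, k)), k)

print(decode(f(encode([3, 1, 2]), 3), 3))  # [1, 2, 3]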
What we can do is split f(x) into a bunch of statements:
x1 = f1(x)
x2 = f2(x1)
x3 = f3(x2)
...
y = fi(xi)
Now, f1, f2, f3 may be of the kind:
f(x) = <some math expression>, if a < x < b
     = <some constant>, otherwise

Because these statements map inputs (x) to outputs (y) in the same manner as f(x), they are "equivalent" to the function f(x). The difference is that the statements can produce a much faster speed of execution by using flow control (if/else) to avoid unnecessary computation.
Notice that computer algorithms are exactly these statements. Every algorithm has many equivalent f(x) but the algorithm speeds up execution by breaking the function f(x) into pieces (instructions).
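To make the role of flow control concrete, here is a toy Python sketch (my own example, not part of the original argument): a "pure" branch-free version of a piecewise function evaluates every piece, while the algorithmic version skips the expensive piece when it isn't needed.
Code:
def heavy(x):
    # stand-in for an expensive sub-expression
    return sum(i * x for i in range(10**6))

def f_pure(x):
    # branch-free: both pieces are always computed, one is selected arithmetically
    in_range = 1 if 0 < x < 10 else 0
    return in_range * heavy(x) + (1 - in_range) * 42  # heavy(x) runs regardless

def f_algorithm(x):
    # flow control avoids the unnecessary computation
    if 0 < x < 10:
        return heavy(x)
    return 42

print(f_pure(100) == f_algorithm(100))  # True, but f_algorithm is much faster here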
When you ask whether logic and computation are equivalent, you are really asking whether logic and mathematics are equivalent, because the theory of computation is a branch of mathematics that is particularly interested in how fast we can map an input to an output.
 
  • #6
Job... do you mean in some way that you break the bigger problem into smaller components and solve them separately?
It was very interesting... Can you illustrate this with an example, like how you choose the fi's and how they actually speed things up?
 
  • #7
Yes, formal logic and computability theory are extremely close subjects. In fact, a good fraction of what I know about formal logic has been from the computability theory viewpoint.

First, I would like to point out this quote from Wikipedia:
The theory of computation is the branch of computer science that deals with whether and how efficiently problems can be solved on a computer. The field is divided into two major branches: computability theory and complexity theory, but both branches deal with formal models of computation.
-Job- seemed to be mainly speaking about complexity theory, but it's computability theory that has the close relation to formal logic.


The connection[1] comes from the fact that if a statement has a proof, then there exists an algorithm that can find the proof. It's a stupid algorithm, but it works.

To prove a statement P:
(1) For every possible sequence of symbols:
(2) ::: If the sequence of symbols denotes a valid proof of P,
(3) :::::: then accept P as provable.

This assumes that there are only countably many possible symbols, and that there exists an algorithm to verify if a given statement is one of the axioms under consideration.
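Here is a minimal Python sketch of that brute-force search (the checker is_valid_proof is hypothetical; it stands for the decidable test of whether a symbol string is a valid proof of P from the axioms):
Code:
from itertools import count, product

SYMBOLS = ["0", "1"]  # assume proofs are encoded over a finite alphabet

def search_for_proof(P, is_valid_proof):
    # enumerate every finite string of symbols, shortest first;
    # halts iff P is provable, otherwise runs forever
    for length in count(1):
        for candidate in product(SYMBOLS, repeat=length):
            if is_valid_proof("".join(candidate), P):
                return True  # accept P as provable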

Conversely, if any algorithm is capable of generating a proof of P, then obviously P is provable!


This connection allows important questions about logic to be transferred to a question about Turing machines. For example:

I might ask if my theory is complete: for any P, we have that either P or ~P is a part of my theory. (i.e. provable from the axioms)

Which translates into the question about whether the following algorithm is guaranteed to terminate:

(1) Simultaneously run two copies of the above machine on P and on ~P.
(2) If the first copy terminates, accept P as provable.
(3) If the second copy terminates, reject P as disprovable.
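In the same sketchy Python style (again with a hypothetical checker), the two copies can be dovetailed by driving both tests from a single enumeration, so this procedure halts on every P exactly when the theory is complete:
Code:
from itertools import count, product

SYMBOLS = ["0", "1"]

def candidates():
    # every finite symbol string, shortest first
    for length in count(1):
        for s in product(SYMBOLS, repeat=length):
            yield "".join(s)

def decide(P, not_P, is_valid_proof):
    for c in candidates():
        if is_valid_proof(c, P):
            return "provable"      # first copy terminated
        if is_valid_proof(c, not_P):
            return "disprovable"   # second copy terminated
    # loops forever if the theory proves neither P nor ~P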

The proof I know of Gödel's First Incompleteness Theorem was done using Turing machines.


Another topic one might consider is that of a recursive function. It turns out that this is the same thing as a computable function!


I'm sure I read a quote in a textbook to the effect that computability theory and formal logic are so tightly intertwined that it would be a futile effort to try to draw a line saying what the difference between the two theories is.


[1]: Don't construe this to mean I'm convinced that this is the only connection between the two fields. :-p
 
  • #8
Notice also that the hardware of a computer is built upon very basic logical operations, namely AND and NOT:
Code:
a AND b = 1 if a=b=1, 0 otherwise
NOT a = 0 if a is 1, 1 otherwise
From these we can build the OR and the XOR operations:
Code:
a OR b = 0 if a=b=0, 1 otherwise
a XOR b = 0 if a=b, 1 otherwise
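As a quick sanity check, here is a small Python sketch (my own, using 0/1 integers for bits) showing OR and XOR built from AND and NOT alone, via De Morgan:
Code:
def AND(a, b): return a & b
def NOT(a):    return 1 - a

def OR(a, b):
    # a OR b = NOT(NOT a AND NOT b)
    return NOT(AND(NOT(a), NOT(b)))

def XOR(a, b):
    # a XOR b = (a OR b) AND NOT(a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, OR(a, b), XOR(a, b))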
Computers represent numbers in binary:
Code:
0 = ...00000
1 = ...00001
2 = ...00010
3 = ...00011
4 = ...00100
...
What this means is that we can easily perform the above operations on these binary numbers, bitwise, for example:
Code:
001 AND 101 = 001
With the AND, OR, NOT, XOR operations we can perform addition and subtraction. With addition and subtraction we can perform multiplication and division and so on. These basic operations are implemented as circuits inside the processor in a region called the ALU (Arithmetic Logic Unit).
To use the ALU, you pass it one or two numbers and an operation (AND, OR, NOT, addition, subtraction, multiplication...), and it gives you the output.
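To see how addition comes out of AND, OR and XOR, here is a toy ripple-carry adder in Python (a sketch of the idea, not the circuit of any real ALU):
Code:
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                        # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

def add(x, y, width=8):
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(2, 3))  # 5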
An ALU has a bit sequence for each operation, like:
Code:
000 = AND
001 = OR
010 = NOT
011 = Addition
...
So if you pass the numbers 2 (...010) and 3 (...011) and the operation addition (...011) to the ALU, it will give you 5. This is called an instruction, and every program (browser, word processor...) is converted into a sequence of these instructions, each specifying the operation, the operands and where the result should be put. For example it might look like this (on a 32-bit processor):
Code:
00011001001001000111100000100111
00011010111001000111100000100110
00011001001001000111100110001101
00011001001001000101110111010110
...
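Here is a toy Python sketch of the decode-and-dispatch idea (the instruction layout is made up for illustration, not a real instruction set):
Code:
OPS = {
    0b000: lambda a, b: a & b,  # AND
    0b001: lambda a, b: a | b,  # OR
    0b011: lambda a, b: a + b,  # Addition
}

def execute(instr):
    # assumed layout: [3-bit opcode][8-bit operand a][8-bit operand b]
    opcode = (instr >> 16) & 0b111
    a = (instr >> 8) & 0xFF
    b = instr & 0xFF
    return OPS[opcode](a, b)

# pass the numbers 2 and 3 and the operation addition to the "ALU":
print(execute((0b011 << 16) | (2 << 8) | 3))  # 5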
Obviously it's not very easy to program with 0s and 1s, so programming languages were invented (C, C++, Fortran, Lisp...) which have constructs like:
Code:
if <...> do <...> else do <...>
while <...> do <...>
...
These languages have associated compilers which create the corresponding machine code.
Consider an algorithm written in C. Before you can run it, you have to compile it into machine code (0s and 1s). When you do run it, these instructions are fed to the processor one by one, which uses the ALU as needed. So you can see how every algorithm has a corresponding sequence of logical operations. This is another way in which logic and computation are related.
This sequence of logical operations can be combined into a single function f(x), but it's not a simple procedure at all. If you try to convert even a simple algorithm into a mathematical function, it will probably end up being a pretty large function. If in turn you were to feed that function to a computer and compare its runtime to the runtime of the sequence of instructions, you'd very likely find it slower by a constant factor or worse.
Computability theory is closely related to logic, but that's because it's built upon logic. If you take a Theory of Computation class you'll be discussing sets and all that stuff, which I believe comes from logic. As a matter of fact, I suspect there isn't a field of science that isn't related to logic.
 
  • #9
fargoth said:
it depends on the definition of logic, in my view logic is the ability to deduce facts from raw data.
you can use computation to deduce the facts, but you can't deduce all the facts from computation (e.g. the tiling problem).
Logic is the study of argument. I think logic is the science of finding out whether something is true or not, using other data and comparing it to the current argument to determine whether it is true.
 
  • #10
Formal logic isn't really about truth: it's about provability.

In one text I have checked out, Mathematical Logic (Ebbinghaus, Flum, and Thomas), the word "truth" doesn't appear until page 183 out of 289. (If I can believe the index) Furthermore, the word is only used in quotes, indicating that it's only speaking loosely. (the text quickly clarifies what it really meant to say)


Formal logic starts off by discussing pure syntax -- studying grammar in a natural language would be a decent analogy. (But, of course, formal logic is much more precise, and would be easier too if we hadn't spent all our lives learning grammar. :biggrin:) In fact, formal logic uses words such as "language" and "grammar"!

There are all sorts of things about which one can speak purely syntactically, such as the notion of proof. A proof is merely a sequence of "calculations": you start with some collection of sentences (the hypotheses), and you produce new sentences from the old ones using a certain set of operations. (In fact we call such a set of operations a "calculus")


Then, one might speak about semantics: how one might interpret strings of symbols. E.G. if I take a language with operator symbols [itex]\cdot, {}^{-1}, e[/itex] which are binary, unary, and nullary respectively, I might be interested in the following set of sentences:

[tex](\forall x)x \cdot x^{-1} = e[/tex]
[tex](\forall x)(\forall y)(\forall z) (x \cdot y) \cdot z = x \cdot (y \cdot z)[/tex]
[tex](\forall x) e \cdot x = x \cdot e = x[/tex]

These are just strings of symbols. But I can give them a meaning: I can talk about some set G, and define operations on G that correspond to the three operator symbols. Then, I might ask if this interpretation is a model of the above sentences: that is, through the correspondence I just mentioned, whether these three sentences are valid. (Heuristically speaking, whether they are "true" in the model.)

A model of these three sentences, of course, is nothing more than a group!
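To make "model" concrete, here is a tiny Python sketch (my own) checking that a finite interpretation, the integers mod 5 under addition, satisfies all three sentences:
Code:
G = range(5)
op  = lambda x, y: (x + y) % 5   # interprets the binary symbol
inv = lambda x: (-x) % 5         # interprets the unary symbol
e   = 0                          # interprets the constant symbol

assert all(op(x, inv(x)) == e for x in G)
assert all(op(op(x, y), z) == op(x, op(y, z))
           for x in G for y in G for z in G)
assert all(op(e, x) == x and op(x, e) == x for x in G)
print("(Z_5, +) is a model of the three sentences: it is a group")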


One is also interested in theories: that is, sets of sentences that are closed under deduction. For example, the theory of the above three sentences would also include sentences such as [itex]e \cdot e = e[/itex] (instantiate the third sentence at x = e).


One might now want to ask questions such as:
If I can model a set of sentences, are they consistent?
If I have a consistent set of sentences, can I find a model of them?
Is a theory complete? I.E. for any sentence P, is either P or ~P in the theory?
If a sentence is true in every model of a theory, must it have a proof? (And thus be a part of the theory)
Can I have two non-isomorphic models of the same theory? Even if the theory is complete?
How can I build new models out of old ones?

Gödel's completeness theorem and two incompleteness theorems are related to these questions. The foundations of nonstandard analysis are firmly rooted in formal logic as well.
 
  • #11
Hurkyl, some of the stuff you mentioned, like completeness, recognizability & co-recognizability, I learned in computer classes together with Regular Expressions, Grammars, FSMs, PDAs, Turing Machines etc. I didn't realize that much of this stuff came from the field of Logic.
 
  • #12
Hurkyl said:
Formal logic isn't really about truth: it's about provability.
Would you mind elaborating? I can't imagine what formal logic would be without truth. (Though meaningless comes first to mind.) Call the objects that are usually called 'truth' and 'falsehood' (or whatever 'truth'-values you have) whatever you wish; if you don't assign those objects to your statements, well, who cares what you can prove? You can prove anything -- proofs are just strings of symbols, as you said (and you get to make up the rules generating them).

Perhaps you mean that you tend to spend more time and have more options with the syntax, or calculus, than with the semantics. (?) But truth seems to be the main reason (in addition to perhaps efficiency, convenience, tradition, beauty) for choosing some set of connectives, inference rules, or axioms over others. All connectives in classical logic are truth-functional; inference rules preserve truth; axioms are tautologous. Do you think that's a coincidence? Is any old calculus worth spending your time in?

Can you define validity, satisfaction, consistency, soundness, or completeness without the objects usually called truth-values?

Or maybe you mean that truth has special meanings in logic. (?) I'm really curious about what you have in mind, because if you asked me, I'd say that logic is about truth, proof, and their relationship.
In one text I have checked out, Mathematical Logic (Ebbinghaus, Flum, and Thomas), the word "truth" doesn't appear until page 183 out of 289. (If I can believe the index) Furthermore, the word is only used in quotes, indicating that it's only speaking loosely. (the text quickly clarifies what it really meant to say)
Nope, it lies. If Amazon's search works, true first appears on page 27 and truth on page 31.
 
  • #13
honestrosewater said:
Can you define validity, satisfaction, consistency, soundness, or completeness without the objects usually called truth-values?
Here are some excerpts from Mathematical Logic (Ebbinghaus, Flum, Thomas)


3.2 Definition of the Satisfaction Relation. For all interpretations [itex]\mathfrak{J} = (\mathfrak{A}, \beta)[/itex], we let
[itex]
\begin{array}{l @{\quad \mbox{:iff} \quad} l}

\mathfrak{J} \models t_1 \equiv t_2 & \mathfrak{J}(t_1) = \mathfrak{J}(t_2) \\
\mathfrak{J} \models Rt_1 \ldots t_n & R^{\mathfrak{A}}\mathfrak{J}(t_1) \ldots \mathfrak{J}(t_n) \mbox{\quad (i.e. $ R^\mathfrak{A}$ holds for $\mathfrak{J}(t_1), \ldots, \mathfrak{J}(t_n)$ )} \\
\mathfrak{J} \models \neg \varphi & \mbox{not } \mathfrak{J} \models \varphi \\
\mathfrak{J} \models (\varphi \wedge \psi) & \mathfrak{J} \models \varphi \mbox{ and } \mathfrak{J} \models \psi \\
\end{array}
[/itex]
(and so forth)

7.1 Definition. (a) [itex]\Phi[/itex] is consistent (written: [itex]\mbox{Con } \Phi[/itex]) if and only if there is no formula [itex]\varphi[/itex] such that [itex]\Phi \vdash \varphi[/itex] and [itex]\Phi \vdash \neg \varphi[/itex].
(b) [itex]\Phi[/itex] is inconsistent (written: [itex]\mbox{Inc } \Phi[/itex]) if and only if [itex]\Phi[/itex] is not consistent (that is, if there is a formula [itex]\varphi[/itex] such that [itex]\Phi \vdash \varphi[/itex] and [itex]\Phi \vdash \neg \varphi[/itex]).


(Though, in commenting on definition 3.2, the text does say "the reader should convince himself that [itex]\mathfrak{J} \models \varphi[/itex] if and only if [itex]\varphi[/itex] becomes a true statement under the interpretation [itex]\mathfrak{J}[/itex].")


The text also makes a big deal that the usual operations on truth values are not the same as the logical connectives in the formal language. As far as I can tell, it only talks about them in one section, and uses symbols such as [itex]\dot{\vee}[/itex] and [itex]\dot{\wedge}[/itex] to distinguish them.


I say that logic isn't about truth because I never really see it used in anything I've read on formal logic. Certainly there is an intimate connection between boolean algebra and the usual rules of deduction ({T, F} is not the only suitable Boolean algebra), but Boolean algebra seems to be more useful for algebraic and computational tasks.

I am by no means an expert on formal logic -- this is merely the impression I've gotten from what I've studied thus far.


Would you mind elaborating? I can't imagine what formal logic would be without truth. (Though meaningless comes first to mind.) Call the objects that are usually called 'truth' and 'falsehood' (or whatever 'truth'-values you have) whatever you wish; if you don't assign those objects to your statements, well, who cares what you can prove? You can prove anything -- proofs are just strings of symbols, as you said (and you get to make up the rules generating them).
Formal logic does have plenty of direct mathematical use -- it has rather strong connections to things like Universal Algebra, Nonstandard Analysis, and Real Algebraic Geometry.

But as for what I think was the intent of your question, I think that I've been a formalist a long time -- I believe that mathematics makes no attempt to ascribe meaning to anything. That task is better left to other disciplines (such as physics, or even metamathematics)
 
  • #14
Hurkyl said:
Here are some excerpts from Mathematical Logic (Ebbinghaus, Flum, Thomas)
3.2 Definition of the Satisfaction Relation. For all interpretations [itex]\mathfrak{J} = (\mathfrak{A}, \beta)[/itex], we let
[itex]
\begin{array}{l @{\quad \mbox{:iff} \quad} l}
\mathfrak{J} \models t_1 \equiv t_2 & \mathfrak{J}(t_1) = \mathfrak{J}(t_2) \\
\mathfrak{J} \models Rt_1 \ldots t_n & R^{\mathfrak{A}}\mathfrak{J}(t_1) \ldots \mathfrak{J}(t_n) \mbox{\quad (i.e. $ R^\mathfrak{A}$ holds for $\mathfrak{J}(t_1), \ldots, \mathfrak{J}(t_n)$ )} \\
\mathfrak{J} \models \neg \varphi & \mbox{not } \mathfrak{J} \models \varphi \\
\mathfrak{J} \models (\varphi \wedge \psi) & \mathfrak{J} \models \varphi \mbox{ and } \mathfrak{J} \models \psi \\
\end{array}
[/itex]
(and so forth)
<snip>
(Though, in commenting on definition 3.2, the text does say "the reader should convince himself that [itex]\mathfrak{J} \models \varphi[/itex] if and only if [itex]\varphi[/itex] becomes a true statement under the interpretation [itex]\mathfrak{J}[/itex].")
Okay, satisfaction is enough. (BTW, their 'extensional' is my 'truth-functional'.) The objects that I'm referring to would be assigned by what they call an assignment ([itex]\beta[/itex]). That's what I'm asking about: the maps from the formulas to the objects that let you define, for example, how the truth-value, or semantic entailment, of a compound formula depends on the truth-value, or semantic entailment, of its constituents. Where are the objects that allow you to state that? It seems they've relegated them to the preceding discussion and to your informal understanding. But when they say, for example, [itex]\mathfrak{J} \models \neg \varphi \mbox{ :iff not } \mathfrak{J} \models \varphi[/itex], they're referring to the set of truth-values {T, F} that they've just finished talking about. I don't know why they don't just put them directly into the definitions -- the truth-values aren't even necessarily part of the formal language (they are part of the metalanguage, of course). I think the only requirement is that the truth-values be distinct.

My book, for example, simply uses truth-values to define satisfaction (using Ebbinghaus et al's terms): If f is a formula and J is an interpretation such that J(f) = T, we say that J satisfies f and write J |= f. And similarly for the others.

Is [itex]\mathfrak{J} \models \varphi[/itex] left undefined, or did I miss the definition?
The text also makes a big deal that the usual operations on truth values are not the same as the logical connectives in the formal language. As far as I can tell, it only talks about them in one section, and uses symbols such as [itex]\dot{\vee}[/itex] and [itex]\dot{\wedge}[/itex] to distinguish them.
Sure, the maps are part of the metalanguage. They are applied to the connectives in the formal, object language so that you can determine, for example, whether a formula you've proved is a tautology or not.
I say that logic isn't about truth because I never really see it used in anything I've read on formal logic.
Perhaps you don't see it because it's already been worked into the calculus. :biggrin:
Anywho, whatever you call it, it seems necessary to me. Of course, I could be wrong -- that's why I'm asking. :smile: Do you think 'if I can prove it, is it true?' and 'if it's true, can I prove it?' are central questions in logic?
But as for what I think was the intent of your question, I think that I've been a formalist a long time -- I believe that mathematics makes no attempt to ascribe meaning to anything. That task is better left to other disciplines (such as physics, or even metamathematics)
Hm, I only mean to use truth and such as they are used in formal logic. I'm not talking about 'real world truth' or anything of that sort. I'm all for the cleanest divide possible between the object language and ...everything else. If a system can do without those (in classical logic) two distinct objects that are normally called truth-values and assigned to formulas, more power to them, as far as I'm concerned. I just don't see how it's possible.
 
  • #15
honestrosewater said:
Okay, satisfaction is enough. (BTW, their 'extensional' is my 'truth-functional'.) The objects that I'm referring to would be assigned by what they call an assignment ([itex]\beta[/itex]). That's what I'm asking about: the maps from the formulas to the objects that let you define, for example, how the truth-value, or semantic entailment, of a compound formula depends on the truth-value, or semantic entailment, of its constituents.
Their [itex]\beta[/itex] isn't a truth-assignment: it's a map from the set of variables into the domain A defined by the underlying structure.

From this, for the interpretation [itex]\mathfrak{J} = (\mathfrak{A}, \beta)[/itex], they define a map [itex]\mathfrak{J}(_)[/itex] from the terms into the domain A.

In particular, neither of these functions is defined on the set of formulas, and neither maps into anything resembling a set of truth values.

honestrosewater said:
Is [itex]\mathfrak{J} \models \varphi[/itex] left undefined, or did I miss the definition?
The above is the definition of [itex]\mathfrak{J} \models \varphi[/itex]: it defines it first for the atomic formulae, and then recursively for compound formulae.


If you were following their exposition, you'd have to do things the opposite way that your book does: the truth-valuation would be defined in terms of the satisfaction relation.


The two ideas, of course, naturally correspond: for any particular [itex]\mathfrak{J}[/itex], the satisfaction relation is merely a subset of the set of formulas, and the truth-valuation is the corresponding characteristic function.


Do you think 'if I can prove it, is it true?' and 'if it's true, can I prove it?' are central questions in logic?
No, I don't! Without some additional qualification, I have no idea what those two questions even mean! :smile:
 
  • #16
Sorry, I shouldn't have said anything yesterday; I thought I was in a better condition than I was. Anywho, yeah, I see.
Hurkyl said:
No, I don't! Without some additional qualification, I have no idea what those two questions even mean! :smile:
Is it true that for all formulas f, if |= f, then |- f, and if |- f, then |= f? Does the set of tautologies equal the set of theorems?
 
  • #17
That I certainly find to be an important question. (Assuming that |= P means that P is a consequence of the empty set of formulas)
 
  • #18
Hurkyl said:
(Assuming that |= P means that P is a consequence of the empty set of formulas)
Yep.
Speaking loosely (hoping you understand well enough what I mean), it seems like you might be able to leave truth out of any discussion by just keeping it at least one discussion above, in a meta-level, and using it to kind of sort what you let through to the object-level (e.g. which definitions you choose).
I don't know. I'm just starting to take a new look at logic and its applications, looking more at natural language, meaning and semantics than at computation and syntax. Truth being a part of logic, even formal and symbolic logic, seems like a no-brainer, but I don't see where it enters the picture anymore.
 

Related to Logic & Computation: Same Coin? How?

1. What is the relationship between logic and computation?

The relationship between logic and computation is often described as two sides of the same coin. Logic is the study of reasoning and argumentation, while computation is the process of solving problems using algorithms. Both fields aim to solve problems and make decisions using precise and systematic methods.

2. How does logic play a role in computation?

Logic plays a crucial role in computation as it provides the foundation for creating algorithms and defining the rules and steps to solve a problem. In computer science, logic is used to design and analyze programming languages, create efficient algorithms, and ensure the correctness of computer programs.

3. Can computation be reduced to logic?

Some philosophers argue that computation can be reduced to logic, meaning that any computational process can be described and explained using logical principles. However, others argue that computation involves more than just logic, as it also requires physical hardware and real-time processing.

4. How has the relationship between logic and computation evolved over time?

The relationship between logic and computation has evolved significantly over time. In the early days of computing, logic was used to design and build computer systems. As computers became more advanced, logic was also used to develop programming languages and algorithms. Today, logic and computation continue to influence and shape each other, with advancements in one field often leading to new developments in the other.

5. How does understanding the relationship between logic and computation benefit society?

Understanding the relationship between logic and computation has numerous benefits for society. It has led to the development of powerful computing systems, artificial intelligence, and automation, which have greatly improved our daily lives. It has also paved the way for advancements in fields such as medicine, science, and engineering, allowing us to solve complex problems and make more informed decisions.
