Dualism made intelligible

  • Thread starter hypnagogue
In summary: Chalmers argues that the relationship between events in the brain and those private, subjective, introspectible experiences that together constitute our inner mental life can be explained by the existence of consciousness, a non-physical entity that nonetheless interacts with our physical world.
  • #36
Originally posted by hypnagogue

The mind exists
Thanks!
 
  • #37
Originally posted by THE MIND
Thanks!

Ha ha. Hey, THE MIND, ever heard of Lifegazer? He's like your biggest fan!
 
  • #38
Originally posted by Mentat
Ha ha. Hey, THE MIND, ever heard of Lifegazer? He's like your biggest fan!
Lifegazer ... yes ... he meditates actually in New Mexico desert and while dancing transmutes in the Lemaireocereus chichipe dementia praecox. Pure mind control! Grooooovy.

Love you all PF. x.
 
  • #39
Originally posted by Mentat
Sorry, g2g right now. I will finish my response tomorrow...perhaps you should wait until I finish it before responding?

Still waiting on your complete response... or should I go ahead with what you already wrote?

...and the tourniquet goes 'round...
 
  • #40
First off, my sincerest apologies...I forgot all about my incomplete response. I'll finish it now...

Originally posted by hypnagogue
The point is that the notion of physicality hinges critically on our ability to know about something in our own physical world. Suppose for a minute that we are living in a simulated matrix world. Then there is, say, a chair sitting next to the vat that holds your brain, which is hooked up to the matrix computers. You, I, and everyone else living in this matrix world can never know about the existence of this chair. Do we still call it physical? If we do, then you have posited the existence of a physical thing which does not meet the criterion of observability. You, being unable to observe this chair, would of course say "there exists a physical chair that I cannot even in principle observe? Nonsense!"

The problem with this is that the mind is in a vat, and is thus not hooked up to any input devices (like eyes or ears), so this person is not qualified to make judgements about whether something is or is not physical, since they can't see anything.

Besides, even on the rare chance that we ever come up with such a complex program, the difference will always be that there is no matrix world, there is no matrix flower; the brain in the vat just thinks there is (and not because they are perceiving a "fake" flower, but because they are being electrically stimulated just as they would be if they were beholding some kind of flower).

I don't see how your response negates any of my claims. If we have been in a matrix this whole time, who is to say that electrons and quarks even exist in the physics of the world that houses the computer matrix that we are hooked up to? Even if we assume the physics of the 'real' world and the matrix world are identical, your point still does not negate what I have said here. All the claims we make about quarks and electrons and the like would pertain only to matrix electrons and matrix quarks, because this is all we would have been studying. And despite our best efforts, we would not be able to tell that these electrons and quarks are really composed on a deeper level of information structures in the computer matrix.

That's what I meant to negate: the information is not "structured" into "matrix quarks" and "matrix electrons". We are simply stimulated as we would normally be by electrons or quarks. Basically, what I'm trying to say is that, when I tell you that I saw an electron in the matrix, what I'm really telling you is that some electric probe (or whatever the "mad scientist" is using) has stimulated my brain, not to produce a phenomenological electron (which would lead to the homunculus problem), but to produce the same sensation that one experiences when one observes an electron.

Not at all. Suppose our 'real' brains sitting in the matrix vat operate exactly how you fancy them to operate, with Dennett's MDP and so on. Then we in the matrix have immaterial (unobservable) minds, which are simply our 'real' brains sitting in the vats of the 'real' world. You would like to call these brains physical, but they are unobservable to us even in principle. This creates a standard whereby it is acceptable to call unobservable things physical.

This is a very compelling way to state something that is still logically flawed, IMO. I've considered it, and I think you are definitely on to something; but the difference comes when you are using Dennett's model, which doesn't allow one to perceive a complete "hallucinatory" world. In his model, everything you ever perceive is part of the question/answer process of your brain, and thus, when your pain receptors (for example) "ask" the gut (for example) if it feels pain, it will not get a response. Think about it: you are giving the "mad scientist" more control over your brain than you yourself have, since you cannot make yourself feel actual pain in your gut, no matter how powerfully you try to imagine it.

As it is, I really like the way you put that new standard by which it is acceptable to call unobservable things "physical."

Thus, by this same standard, we could call the abstract 'mind' or 'soul' or whatever a physical entity though we have no means of observing them; since we have defined these things to be physical, we do not have the traditional dualist problem of physical/non-physical interactionism. The mode by which this mind or soul or whatever operates is not important-- it is not by necessity evocative of the homunculus problem, for the same reason the matrix/brain example at the beginning of this paragraph did not fall prey by logical necessity to the homunculus problem. I could simply claim that this unobservable physical soul does not produce subjective experience, but simply is the subjective experience.

But, if the "soul" is the experience, then we have to have some way of "perceiving" the experience within our brains. You see, you have taken the "soul" to be an entity, and, since the brain in the vat has no eyes, it cannot see this entity (nor can it hear it or feel it or otherwise perceive it), and thus there must be an internal "observer" capable of perceiving this entity, otherwise it (the entity) would be completely useless to our brains.

There is a fine line between postulating a subjective experience in the brain that is caused by specific electrical stimulations, and positing an actual entity (physical or otherwise, since this distinction is hardly important anymore :wink:) that can be perceived by the brain, in spite of its having no input devices.

By the way you are defining things, everything that exists is indeed physical. What is a word, but a set of electrochemical processes in a brain, or a pattern of ink on a page, or a set of electrical processes in a computer monitor?

Everything that exists is physical. The meme of language, the photonic signals coming from your monitor right now, they are all physical, and thus "words" are physical. That's what Materialism means.

The mind exists, axiomatically in fact. Let us define the mind as the complete set of subjective experiences of a particular person for a particular duration of time. You have simply described the mind materialistically in terms of Dennett's MDP, but you have not by any means shown that it does not exist.

I never said the mind doesn't exist. I said "it doesn't work for the mind (separate from the brain itself) to exist at all." The mind does indeed exist, but mind = most of the brain (mainly frontal cortex).
 
  • #41
Mind-/-Brain

However contentious, the philosophical problem, as distinct from the physiological problem, can be stated quite simply as follows: What, essentially, is the relationship between events in the brain and those private, subjective, introspectible experiences that together constitute our inner mental life? We need not assume here that consciousness is synonymous with mind - consciousness may well be no more than just one aspect of mind - but, with respect to the problem at issue, it is the existence of consciousness that is critical.

Stated thus, the problem admits of only three basic answers:

(1) Events in the brain, operating in accordance with the laws of physics, determine completely both our behaviour and our subjective experiences.

(2) Mental events may be elicited by events in the brain or they may, in turn, elicit brain events and so influence the course of our behaviour (I use here the word 'elicit' rather than 'cause' advisedly since the kind of causation here envisaged is so unlike familiar causation of the physical kind).

(3) There are no such things as private, subjective, introspectible, sense-data or qualia (e.g. that red patch that I am now staring at in the centre of my visual field). Hence there just is no problem. All that exists, in the last resort, are the physical events underlying the information-processing, colour-coding or whatever such as any sophisticated computer or automaton could, in principle, be programmed to perform. It follows that there is no mind-brain problem for humans or animals any more than there is for robots or other artificial intelligence.
 
  • #42


Originally posted by Jeebus
However contentious, the philosophical problem, as distinct from the physiological problem, can be stated quite simply as follows: What, essentially, is the relationship between events in the brain and those private, subjective, introspectible experiences that together constitute our inner mental life? We need not assume here that consciousness is synonymous with mind - consciousness may well be no more than just one aspect of mind - but, with respect to the problem at issue, it is the existence of consciousness that is critical.

Stated thus, the problem admits of only three basic answers:

(1) Events in the brain, operating in accordance with the laws of physics, determine completely both our behaviour and our subjective experiences.

(2) Mental events may be elicited by events in the brain or they may, in turn, elicit brain events and so influence the course of our behaviour (I use here the word 'elicit' rather than 'cause' advisedly since the kind of causation here envisaged is so unlike familiar causation of the physical kind).

(3) There are no such things as private, subjective, introspectible, sense-data or qualia (e.g. that red patch that I am now staring at in the centre of my visual field). Hence there just is no problem. All that exists, in the last resort, are the physical events underlying the information-processing, colour-coding or whatever such as any sophisticated computer or automaton could, in principle, be programmed to perform. It follows that there is no mind-brain problem for humans or animals any more than there is for robots or other artificial intelligence.

I pick 1 and 3, since they coincide enough to be (basically) just one point.

Number 2 falls into both the homunculus problem and the physical/non-physical problem (the first, essentially; the latter, not necessarily).
 
  • #43
Mentat, rather than continue our discussion in the current manner (which can become rather long-winded and tedious), I will just try to focus on one aspect of our conversation for now.

What I want to focus on for now is the existential status of things within the matrix. Let us take, for instance, a chair in the matrix (we have been talking about electrons and such, but using a chair will simplify things). You claim that A) the matrix chair does not exist, but rather B) the brain of the perceiver is stimulated in such a way so as to perceive it existing with its characteristic properties. I agree with B, but not A.

I think you have been focusing entirely on the phenomenological side of the equation, which has led to some confusion. I agree that there does not exist some "phenomenological chair" floating around independently of the mind/brain of the perceiver hooked up to the computer matrix. However, the perception of a chair in the matrix is analogous to our everyday perception of a chair: namely, our subjective phenomenological perception is indicative of something that exists objectively. In the "real world" we take this objective thing to be the chair itself, composed as it is of electrons, protons, and neutrons. In the matrix, this objective thing is simply information existing in the computers. Presumably the matrix works in an analogous fashion to our own computers: it stores data that can be used to represent whatever the programmer wishes. In the case of the matrix, in addition to the electrical signals sent into the enmatrixed person's brain, there is presumably a set of data representing the matrix chair. This is what I have meant by "information structure": simply the set of data in the matrix that represents the chair. This information structure is used in conjunction with information structures representing other objects in the room, electromagnetic radiation, the laws of physics, and so on, in order to generate the characteristic electrical signals sent to the brain in order to elicit the perception of a chair.

So, a matrix chair does exist in an objective sense. It is distinct ontologically from what we would call a "real" chair insofar as, at the most basic level, a "real" chair is "physical" and a matrix chair is "computational." But it exists in this manner nonetheless.
 
  • #44
Originally posted by hypnagogue
Mentat, rather than continue our discussion in the current manner (which can become rather long-winded and tedious), I will just try to focus on one aspect of our conversation for now.

Good idea.

What I want to focus on for now is the existential status of things within the matrix. Let us take, for instance, a chair in the matrix (we have been talking about electrons and such, but using a chair will simplify things). You claim that A) the matrix chair does not exist, but rather B) the brain of the perceiver is stimulated in such a way so as to perceive it existing with its characteristic properties. I agree with B, but not A.

Just wanted to point out that you summed up my viewpoint rather well. Ok, continue...

I think you have been focusing entirely on the phenomenological side of the equation, which has led to some confusion. I agree that there does not exist some "phenomenological chair" floating around independently of the mind/brain of the perceiver hooked up to the computer matrix. However, the perception of a chair in the matrix is analogous to our everyday perception of a chair: namely, our subjective phenomenological perception is indicative of something that exists objectively.

The "perception" is indeed the same, but the presence of a chair (of any kind) is not (unless there is a "phenomenological chair floating around").

In the "real world" we take this objective thing to be the chair itself, composed as it is of electrons, protons, and neutrons. In the matrix, this objective thing is simply information existing in the computers.

What "objective thing"?

Remember, perception is a subjective thing.

Presumably the matrix works in an analogous fashion to our own computers: it stores data that can be used to represent whatever the programmer wishes. In the case of the matrix, in addition to the electrical signals sent into the enmatrixed person's brain, there is presumably a set of data representing the matrix chair. This is what I have meant by "information structure": simply the set of data in the matrix that represents the chair. This information structure is used in conjunction with information structures representing other objects in the room, electromagnetic radiation, the laws of physics, and so on, in order to generate the characteristic electrical signals sent to the brain in order to elicit the perception of a chair.

Right...I guess there's no getting around such terminology as "elicit the perception of a chair", is there? Oh well...

So, a matrix chair does exist in an objective sense.

Whoah! *Non sequitur alert* :wink:

Seriously, doesn't everything that you've just said (about the fact that certain bits of information elicit a perception of that which isn't there at all) contradict your supposed conclusion?

It is distinct ontologically from what we would call a "real" chair insofar as, at the most basic level, a "real" chair is "physical" and a matrix chair is "computational." But it exists in this manner nonetheless.

But there was no matrix chair; not even in your own explanation, up to the point of forming a conclusion. I don't see how you went from a mostly agreeable explanation (agreeable to me, that is), to stating that there must be a matrix chair. When you think about it, if the bits of information are sufficient to elicit the same mental perception as that elicited when one sees a chair, why should there be a "matrix chair"?
 
  • #45
Originally posted by Mentat
When you think about it, if the bits of information are sufficient to elicit the same mental perception as that elicited when one sees a chair, why should there be a "matrix chair"?

But that's exactly the point: the "matrix chair" is bits of information existing in the computer matrix. That is precisely the objective identity of the "matrix chair."

Perhaps this diagram I drew up will be helpful in illustrating my point: http://www.people.cornell.edu/pages/bm56/matrix.GIF . (Excuse my drawing of the atom; I drew it this way just for simplicity.)

Presumably the matrix must store sets of data to represent the objects it displays to its enmatrixed inhabitants. If you are familiar with the term "object oriented programming," I am proposing that the "matrix chair" is one such computational 'object.' This is what I have meant by "information structure."
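To make the "computational object" idea concrete, here is a minimal sketch (purely illustrative; the class and field names are hypothetical, not anything the matrix would literally have to use):

```python
from dataclasses import dataclass

@dataclass
class MatrixChair:
    """A hypothetical 'information structure': the data the matrix
    stores to represent one chair, independent of any perceiver."""
    position: tuple  # (x, y, z) location in the simulated room
    height_m: float  # dimensions the signal generator would use
    color: str

# One shared object; the stimuli fed to every enmatrixed brain
# would be derived from this single stored representation.
chair = MatrixChair(position=(2.0, 0.0, 3.5), height_m=0.9, color="brown")
print(chair.color)  # -> brown
```

The point of the sketch is only that the chair's data exists once, objectively, in the computer, while each perceiver's stimulation is generated from it.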

A person in the "real world" will see the same phenomenological chair as a person hooked up to the matrix. In both cases, the phenomenological perception is indicative of an objectively existing thing. In the case of the "real world," it is an object composed of atoms / molecules. In the case of the matrix, it is an object composed of bits of information in computers. So the objective identity of the "real chair" vis-à-vis the "matrix chair" differs in its ontology, but both are objectively existent things nonetheless.
 
  • #46
Originally posted by hypnagogue
But that's exactly the point: the "matrix chair" is bits of information existing in the computer matrix. That is precisely the objective identity of the "matrix chair."

Perhaps this diagram I drew up will be helpful in illustrating my point: http://www.people.cornell.edu/pages/bm56/matrix.GIF . (Excuse my drawing of the atom; I drew it this way just for simplicity.)

Well, there is a problem here:

There shouldn't be a set of information that means "chair" in the matrix computer, since the matrix computer's only job is to make you see a chair, and thus all there is in the computer is a program to send electrical stimuli into the brain, sufficient to cause the subjective experience of a chair.

Besides this, I think we are leaving the main point. I suppose I can agree that there's no way for us to tell whether we are in a "matrix" or not, but isn't that an added assumption? Doesn't it satisfy Occam's Razor to assume only what we can perceive (which leads to the Materialistic philosophy that I'm (currently) holding)?

A person in the "real world" will see the same phenomenological chair as a person hooked up to the matrix. In both cases, the phenomenological perception is indicative of an objectively existing thing. In the case of the "real world," it is an object composed of atoms / molecules. In the case of the matrix, it is an object composed of bits of information in computers. So the objective identity of the "real chair" vis-à-vis the "matrix chair" differs in its ontology, but both are objectively existent things nonetheless.

But what is the relevance of this? This does not prove duality; it does, in fact, disprove it - since you are postulating that the only way for us to perceive the "phenomenological chair" is to have an external and material construct that elicits material brain activity...isn't this bad for your argument?
 
  • #47
Originally posted by Mentat
Well, there is a problem here:

There shouldn't be a set of information that means "chair" in the matrix computer, since the matrix computer's only job is to make you see a chair, and thus all there is in the computer is a program to send electrical stimuli into the brain, sufficient to cause the subjective experience of a chair.

Yes, but since there are multiple people hooked up to the matrix, and it presents to all of them a logically consistent world, there must be some underlying representation to keep track of everything that is going on in the matrix "world." If there were not, it wouldn't be possible to make the matrix "world" logically consistent.

Besides this, I think we are leaving the main point. I suppose I can agree that there's no way for us to tell whether we are in a "matrix" or not, but isn't that an added assumption? Doesn't it satisfy Occam's Razor to assume only what we can perceive (which leads to the Materialistic philosophy that I'm (currently) holding)?

Occam's Razor has nothing to do with the main point. I am only interested in showing how the idea of dualism (or more specifically, the idea of physical/non-physical interactions) can be logically consistent. I am not trying to claim that we actually live in a dualistic world; rather, I just aim to dispel notions that dualism is untenable simply because it posits a logically inconsistent framework (hence, "Dualism made intelligible").

But what is the relevance of this? This does not prove duality; it does, in fact, disprove it - since you are postulating that the only way for us to perceive the "phenomenological chair" is to have an external and material construct that elicits material brain activity...isn't this bad for your argument?

No, because calling the 'external' world in which the matrix computers exist "physical" is begging the question. This is precisely the issue that is up for grabs, so we cannot assume right away that this external world can rightfully be called "physical" in the first place-- I will get back to this. For now, I am just trying to get you to come to an agreement that it is incorrect to say that things perceived in the matrix are not "real" or do not exist, since you repeatedly bring this issue up as an objection to my main argument in conjunction with the homunculus problem. In reality, the homunculus problem has no effect on my argument. I think you will agree to this once you agree that things perceived in the matrix do in fact exist in some objective sense.
 
  • #48
I am here assuming that being conscious entails having a mind, although having a mind does not necessarily imply being conscious; there is, I would say, nothing self-contradictory in the idea of unconscious mental events. However, consciousness is, by common consent, the most distinctive attribute of mind, and it would be hard to make sense of a mind that never at any time became conscious. At all events the Matrix is, ex hypothesi, a purely physical or totally mindless universe.

Originally posted by hypnagogue
No, because calling the 'external' world in which the matrix computers exist "physical" is begging the question. This is precisely the issue that is up for grabs, so we cannot assume right away that this external world can rightfully be called "physical" in the first place-- I will get back to this. For now, I am just trying to get you to come to an agreement that it is incorrect to say that things perceived in the matrix are not "real" or do not exist, since you repeatedly bring this issue up as an objection to my main argument in conjunction with the homunculus problem. In reality, the homunculus problem has no effect on my argument. I think you will agree to this once you agree that things perceived in the matrix do in fact exist in some objective sense.

I agree mostly here with you, hypnagogue. But I would say that this seems rather like an a priori standpoint. Two worlds, existent or nonexistent, conscious or non-conscious, are equally probable candidates for actualization. The mind operates even after you are dead, and this is very logical because the mind has some degree of autonomy in determining the course of events. While this happens, even though the body is in a non-physical state of action, the mind and brain are still functioning, and the brain is the central processor and factory for the body. I also concur that the external world cannot rightfully be called physical, but it is quite possible. Even if you compare the 'external world' to that of lucid dreams, you can see that the two have a lot in common.

1a) The mind stays active | the body is inactive.
1b) Even though the body is inactive, the mind can control the body.

By stating this you can conclude that even if you are in a non-physical external "reality" you can still be physically active.

The mind makes non-real things seem real, the Matrix does the same. Even if the Matrix is AI, the lucid dreaming that one does while sleeping can make the nonphysical world 'seem' very physical.

Thus you can make the statement: lucid dreaming essentially means dreaming while knowing that you are dreaming. The Matrix essentially means that the world you are "living" in is not the real world you imagine it to be.
 
  • #49
Originally posted by hypnagogue
Yes, but since there are multiple people hooked up to the matrix, and it presents to all of them a logically consistent world, there must be some underlying representation to keep track of everything that is going on in the matrix "world." If there were not, it wouldn't be possible to make the matrix "world" logically consistent.

I disagree. If a computer were to use the same stimuli on different people, the result should be the same. There needn't be any "underlying representation" (which strikes me as (to use your term) "phenomenological chairs flying around").

Occam's Razor has nothing to do with the main point. I am only interested in showing how the idea of dualism (or more specifically, the idea of physical/non-physical interactions) can be logically consistent. I am not trying to claim that we actually live in a dualistic world; rather, I just aim to dispell notions that dualism is untenable simply because it posits a logically inconsistent framework (hence, "Dualism made intelligible").

Thus your argument is that it is possible for Dualism to be logically consistent (note: you are not taking a neutral PoV, which would be to refute the arguments of the non-dualist, but are instead making a case for the logical consistency of dualism), and that argument should be able to stand, with fewer assumptions, against the argument to the contrary, shouldn't it?

No, because calling the 'external' world in which the matrix computers exist "physical" is begging the question. This is precisely the issue that is up for grabs, so we cannot assume right away that this external world can rightfully be called "physical" in the first place-- I will get back to this. For now, I am just trying to get you to come to an agreement that it is incorrect to say that things perceived in the matrix are not "real" or do not exist, since you repeatedly bring this issue up as an objection to my main argument in conjunction with the homunculan problem. In reality, the homunculan problem has no effect on my argument. I think you will agree to this once you agree that things perceived in the matrix do in fact exist in some objective sense.

But I don't see any reason to agree to this. If there are objective entities that elicit subjective awareness in the minds of humans, then these objective entities must exist as "phenomenological chairs floating around", so to speak. If, OTOH, Dennett is right, and there are no objective entities in the matrix, then there is only one reality, in which electrochemical stimulation produces subjective awareness in humans (no extra entities required).

The homunculus problem only comes in when one assumes that there are objective entities that constitute the phenomenological entities in subjective experience...if these things existed, there would need to be some way (in the "physical" (non-matrixed) realm) for me to "see" them, even though they are inside, where I have no "eyes" (and so on, but you know the rest, I think).
 
  • #50
Originally posted by Jeebus
I am here assuming that being conscious entails having a mind, although having a mind does not necessarily imply being conscious; there is, I would say, nothing self-contradictory in the idea of unconscious mental events.

How exactly do you define an "unconscious mental event"?

However, consciousness is, by common consent, the most distinctive attribute of mind, and it would be hard to make sense of a mind that never at any time became conscious. At all events the Matrix is, ex hypothesi, a purely physical or totally mindless universe.

Explain, please.
 
  • #51
Originally posted by Mentat
I disagree. If a computer were to use the same stimuli on different people, the result should be the same. There needn't be any "underlying representation" (which strikes me as (to use your term) "phenomenological chairs flying around").

There are no phenomenological chairs flying around (beyond what is perceived by each individual, of course).

Take the case of two people in the matrix, A and B, looking at the same chair from different angles. The computer cannot be feeding the same stimuli into both people, or else they would see the chair from the same angle. There must be some representation of the chair stored as data in the computer to ensure that a) what A and B see is logically consistent with their respective POVs and b) A and B see a logically consistent construct when they look at the chair.

For instance, say A is looking at the chair from directly above, and say B is looking at the chair directly from the side. Suppose there is a circular stain on the seat of the chair. To satisfy a), A must see what looks essentially like a square, and B must see essentially what looks like an angular, lower case 'h'. To satisfy b), A must see the stain on the chair appear to be perfectly circular, and B must see the stain as a compressed ellipse, in such a way that it is consistent with looking at the circular stain from his glancing angle. The computer cannot satisfy a) or b) without keeping track of where the chair is located in the room, and where the observers are with respect to the chair. This mechanism of "keeping track" is simply the computer's internal representation of the room, the chair, and the observers' "matrix bodies." If there were no such internal mechanisms for keeping track of where things were, there could not be a logically consistent presentation of the room to both A and B.
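The geometric point about the stain can be illustrated with a toy projection: seen head-on, a circle keeps both of its axes equal, while at a glancing angle its apparent depth is foreshortened by roughly the cosine of the viewing angle. (The function and numbers here are only illustrative, not part of the argument itself.)

```python
import math

def apparent_axes(radius, viewing_angle_deg):
    """Apparent major/minor axes of a circular stain viewed at an angle.
    0 degrees = looking straight down (observer A);
    near 90 degrees = edge-on (observer B)."""
    minor = radius * math.cos(math.radians(viewing_angle_deg))
    return radius, minor

# Observer A, directly above: the stain looks like a true circle.
print(apparent_axes(0.10, 0))   # -> (0.1, 0.1)
# Observer B, at a glancing 75-degree angle: a compressed ellipse.
print(apparent_axes(0.10, 75))  # minor axis is about 0.026
```

The computer can only produce both views consistently by keeping one stored location and shape for the stain and deriving each observer's projection from that single representation.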

Thus your argument, is that it is possible for Dualism to be logically consistent (note: you are not taking a neutral PoV, which would be to refute the arguments of the non-dualist, but are instead making a case for the logical consistency of dualism), and that argument should be able to stand, with less assumptions, against the argument to the contrary, shouldn't it?

My argument is simply that dualism cannot be ruled out on purely logical bases; that is, that dualism can be logically consistent. We may doubt its veracity on the basis of heuristics such as Occam's Razor, but that is not a purely logical criterion of judgement; it says nothing about the logical consistency of the framework.

But I don't see any reason to agree to this. If there are objective entities, that elicit subjective awareness in the minds of humans, then these objective entities must exist as "phenomenological chairs floating around", so to speak. If, OTOH, Dennett is right, and there are no objective entities in the matrix, then there is only one reality, in which electrochemical stimulation produces subjective awareness in humans (no extra entities required).

You are still thinking in the wrong terms. The matrix is basically a bunch of data stored in computers. What people hooked up to the matrix see is not the most fundamental aspect of the matrix-- the most fundamental aspect is the data in the computers. It is a simple analogy to how we usually think of the real world.

data in matrix : matrix perceivers :: atoms/photons : 'real world' perceivers

This analogy works insofar as in both cases, data and atoms/photons work as objectively existing generators of logically consistent input into human brains.
 
  • #52
Originally posted by hypnagogue
There are no phenomenological chairs flying around (beyond what is perceived by each individual, of course).

Take the case of two people in the matrix, A and B, looking at the same chair from different angles. The computer cannot be feeding the same stimuli into both people, or else they would see the chair from the same angle.

Which is why Dennett doesn't believe that "matrix worlds" can be created: It takes too much information.

You see, as far as Dennett (along with other such Materialist philosophers) is concerned, there would not be a "representation of a chair", but instead there would be a slightly different stimulus for each possible angle. NOTE: These slightly different stimuli are released when necessary, and do not exist as an informational construct containing all the information about a chair (which would basically be a chair anyway).

btw, when I said that they "do not exist as...", I meant that they needn't exist as... because it works without that postulate, and thus good ol' Occam rules in my favor on this matter.

There must be some representation of the chair stored as data in the computer to ensure that a) what A and B see is logically consistent with their respective POVs and b) A and B see a logically consistent construct when they look at the chair.

Actually, that is not true. As stated above, there could just be a program that calculates (when necessary, and at no other time) the stimulus required for one to see a chair at a particular angle.

For instance, say A is looking at the chair from directly above, and say B is looking at the chair directly from the side. Suppose there is a circular stain on the seat of the chair. To satisfy a), A must see what looks essentially like a square, and B must see essentially what looks like an angular, lower case 'h'. To satisfy b), A must see the stain on the chair appear to be perfectly circular, and B must see the stain as a compressed ellipse, in such a way that it is consistent with looking at the circular stain from his glancing angle. The computer cannot satisfy a) or b) without keeping track of where the chair is located in the room, or where the observers are with respect to the chair. This mechanism of "keeping track" is simply the computer's internal representation of the room, the chair, and the observers' "matrix bodies." If there were no such internal mechanisms for keeping track of where things were, there could not be a logically consistent presentation of the room to both A and B.

Again, there could, indeed, be such a logical presentation...it would just require a whole lot more information...ergo, Dennett thinks of "matrix worlds" as possibilities in principle, and barely that! :wink:

My argument is simply that dualism cannot be ruled out on purely logical bases; that is, that dualism can be logically consistent. We may doubt its veracity on the basis of heuristics such as OR, but that is not a purely logical criterion of judgement; it says nothing about the logical consistency of the framework.

But it does say that any two logically consistent ideas will be judged by the number of assumptions made...therefore, if you make yours full of assumptions, while the other has few, you may establish logical consistency, but this will quickly be cut off by the "Razor".

You are still thinking in the wrong terms. The matrix is basically a bunch of data stored in computers. What people hooked up to the matrix see is not the most fundamental aspect of the matrix-- the most fundamental aspect is the data in the computers. It is a simple analogy to how we usually think of the real world.

But, think about what you are saying. You are referring to "data" as though it were a static representation inside the computer. You are almost toying with the idea of an application of the h-problem to the matrix computers themselves, since they have to contain all these phenomenological chairs in their CPUs instead of just the capacity (in software and hardware) to produce them at will, the latter being the way that computers actually work AFAIK.

This analogy works insofar as in both cases, data and atoms/photons work as objectively existing generators of logically consistent input into human brains.

But, again, the data doesn't just "sit there" ready to be used as stimulus (that's what happens in the real world, but not inside a computer's CPU), it is only a matter of stimulation from human to computer, causing a stimulation back from the computer to the human.
 
  • #53
Originally posted by Mentat
Actually, that is not true. As stated above, there could just be a program that calculates (when necessary, and at no other time) the stimulus required for one to see a chair at a particular angle.

Yes, but what happens in this case? The computer fetches data lying around. It says "there is a human perceiver whose 'matrix body' is in room 17, so I need to fetch information about room 17 and present it to the perceiver." Even if it calculates these things dynamically, it still must have an internal representation of what is there.

Again, there could, indeed, be such a logical presentation...it would just require a whole lot more information...ergo, Dennett thinks of "matrix worlds" as possibilities in principle, and barely that! :wink:

Possibility in principle is all that is needed.

But it does say that any two logically consistent ideas will be judged by the number of assumptions made...therefore, if you make yours full of assumptions, while the other has few, you may establish logical consistency, but this will quickly be cut off by the "Razor".

I am not concerned with that. I am concerned with objecting to statements to the effect that dualism cannot be a logically consistent framework.

But, think about what you are saying. You are referring to "data" as though it were a static representation inside the computer. You are almost toying with the idea of an application of the h-problem to the matrix computers themselves, since they have to contain all these phenomenological chairs in their CPUs instead of just the capacity (in software and hardware) to produce them at will, the latter being the way that computers actually work AFAIK.

The computers do not contain phenomenological chairs! They simply contain data sufficient for producing input into a human brain to elicit subjective perceptions of phenomenological chairs. The phenomenology only occurs when human brains are introduced into the mix.

But, again, the data doesn't just "sit there" ready to be used as stimulus (that's what happens in the real world, but not inside a computer's CPU), it is only a matter of stimulation from human to computer, causing a stimulation back from the computer to the human.

What you are saying is that the matrix doesn't (or needn't) calculate proper inputs into human brains when there are none around to perceive them. And I totally agree with this. Nonetheless, when a human perceiver is there, some data representing his environment must be fetched in order to stimulate him properly.
 
  • #54
I want you two to continue on but I just thought I would interject and say that I completely understand what Hypnagogue is saying. It makes all the sense in the world to me. It appears to me that Mentat still has not really grasped the point of the argument. I too see the question of whether the matrix has data ready-made for brain stimulation or calculates it on demand as irrelevant to the main point. The algorithms used to do the calculating are themselves objective stimulators, equivalent to atoms in the analogy used. Carry on.
 
  • #55
Originally posted by hypnagogue
Yes, but what happens in this case? The computer fetches data lying around. It says "there is a human perceiver whose 'matrix body' is in room 17, so I need to fetch information about room 17 and present it to the perceiver." Even if it calculates these things dynamically, it still must have an internal representation of what is there.

You mean it has to have a set of stimuli that it is programmed to produce under these particular circumstances, right?

The computers do not contain phenomenological chairs! They simply contain data sufficient for producing input into a human brain to elicit subjective perceptions of phenomenological chairs. The phenomenology only occurs when human brains are introduced into the mix.

I don't understand this last sentence. As it is, a computer doesn't contain a static set of data that equals "chair from this position", it just has programs of what parts of the brain to stimulate at any given time, right?

What you are saying is that the matrix doesn't (or needn't) calculate proper inputs into human brains when there are none around to perceive them. And I totally agree with this. Nonetheless, when a human perceiver is there, some data representing his environment must be fetched in order to stimulate him properly.

And that's what I'm arguing against, since no environment needs to be "fetched" (think about the connotations of that term, since you already know what I'd say about it :wink:) at all; all that needs to happen is for the little probe to stimulate the right neurons, as it is programmed to do. What am I getting wrong?
 
  • #56
Originally posted by Fliption
I want you two to continue on but I just thought I would interject and say that I completely understand what Hypnagogue is saying. It makes all the sense in the world to me. It appears to me that Mentat still has not really grasped the point of the argument. I too see the question of whether the matrix has data ready-made for brain stimulation or calculates it on demand as irrelevant to the main point. The algorithms used to do the calculating are themselves objective stimulators, equivalent to atoms in the analogy used. Carry on.

Algorithms and programs are what the "probe" acts on, nothing more (under Dennett's materialistic theory, that is). As it is, I may not have grasped the point of the argument, and I apologize if I'm being a slow learner; but I think I have grasped the points he's trying to make, and even admitted to their validity, but I disagree with them, and am trying to present a counter-argument.
 
  • #57
Originally posted by Mentat
How exactly do you define an "unconscious mental event"?


An unconscious mental event is a prime example of a lucid dream, where one dreams the reality but they imagining is the only think the consciousness is suppling the mental event with for the unconscious state of dreaming.


Originally posted by me
However, consciousness is, by common consent, the most distinctive attribute of mind and it would be hard to make sense of a mind that never at any time became conscious. At all events the Matrix is, ex hypothesi, a purely physical or totally mindless universe.



Mentat said:
Explain, please.


The consciousness basically is the most vital part of the brain, that and the heart. It adapts to other human characteristics and operates most of human activity. While the Matrix is an ex gratia imaginative world like a dream state. You think you are being very physical but all you are doing or more what aren't you doing is -- the consciousness is making it seem rather physical and thus really mindless in the existent world.

For example,

Why should our actual world correspond with Universe A rather than with Universe 2A? If this is a valid question it admits of only two answers. Either there is no reason at all, it is just a God-given (contradictory depending on your belief) or contingent fact that that is how things actually are, like the fact that anything at all should exist rather than nothing, or else there is some reason, for example we might suppose that the world we know could not have evolved as it has done had it not been for the intervention of mind.

A fortiori, we should note that it is only in its derived sense that we can define or explicate what we mean by consciousness. In its basic sense it can no more be defined than any other primitive concept. With any primitive concept, either one understands what is intended or one fails to understand.

You see?
 
  • #58
Originally posted by Jeebus
An unconscious mental event is a prime example of a lucid dream, where one dreams the reality but they imagining is the only think the consciousness is suppling the mental event with for the unconscious state of dreaming.

Rephrase, please...maybe it's a grammar thing, or something, but I'm having difficulty understanding your explanations of late. I apologize for this; be patient with me, English is my second language you know (not really...yes, I did learn it second, but I was 1 1/2 years old; now I speak it even better than my original language (environment, and all that), so it's not much of an excuse ).

Originally posted by Jeebus
However, consciousness is, by common consent, the most distinctive attribute of mind and it would be hard to make sense of a mind that never at any time became conscious. At all events the Matrix is, ex hypothesi, a purely physical or totally mindless universe.

Mentat said:
Explain, please.

The consciousness basically is the most vital part of the brain, that and the heart. It adapts to other human characteristics and operates most of human activity. While the Matrix is an ex gratia imaginative world like a dream state. You think you are being very physical but all you are doing or more what aren't you doing is -- the consciousness is making it seem rather physical and thus really mindless in the existent world.

For example,

Why should our actual world correspond with Universe A rather than with Universe 2A? If this is a valid question it admits of only two answers. Either there is no reason at all, it is just a God-given (contradictory depending on your belief) or contingent fact that that is how things actually are, like the fact that anything at all should exist rather than nothing, or else there is some reason, for example we might suppose that the world we know could not have evolved as it has done had it not been for the intervention of mind.

A fortiori, we should note that it is only in its derived sense that we can define or explicate what we mean by consciousness. In its basic sense it can no more be defined than any other primitive concept. With any primitive concept, either one understands what is intended or one fails to understand.

You see?

Yes, I get it now (mostly). Joseph LeDoux made something of the same point, at the beginning of Synaptic Self (an excellent book, btw), and William H. Calvin mentioned it in passing in The Cerebral Code (also a good book, but very hard to understand if you haven't learned some neurophysiology, evolutionary biology, and philosophy of the mind, before having read it...fortunately, there's a good glossary at the end :smile:). People tend to make a little too much out of consciousness' role in the mind. Sure, it's the most noticeable, and it's hard to refer to a "mind" without referring to consciousness (like you mentioned in your post), but if it weren't for the myriad unconscious processes in the brain (Dennett's "stupid demons"), there would be no consciousness ITFP.

Also, as to the undefinability of consciousness, I tend to disagree, since Dennett has proposed a working hypothesis on how consciousness can be defined in Materialistic terms (though he doesn't actually define it, he shows that it can be done, and explains some of the necessities for such a theory). William Calvin, in the aforementioned book, does a very good job of making a technical theory of consciousness (which does, btw, fit in with Dennett's guidelines), and it appears that Gerald M. Edelman and Giulio Tononi did a fine job as well, in A Universe of Consciousness, though I've really just started reading it. So, if Dennett (and these others that I've mentioned) is/are right, then consciousness can indeed be defined, just (perhaps) not as we might have expected.
 
  • #59
Originally posted by Mentat
You mean it has to have a set of stimuli that it is programmed to produce under these particular circumstances, right?

Yes. Let us consider for our argument a set of perceptual inputs that represents a matrix chair at a particular location under a particular set of circumstances (such as lighting, etc), as perceived by an observer at a particular location with respect to the chair. There are only really two coherent ways in which a matrix computer could feed this perceptual input into the brain of an observer at any given time.

1) Store sets of data representing objects in the matrix environment such as chairs (this corresponds roughly to our notion of the atoms which compose a 'real' chair). When an en-matrixed observer is present, use simulated laws of physics to dynamically generate perceptual input and feed it into the observer's brain. Let us refer to this type of object representation in the matrix as explicit representation.

2) Store sets of data representing every possible set of perceptual input from the matrix environment. This way, when an en-matrixed observer is present, the perceptual input needn't be dynamically generated, since it is already stored as static data in the computer. All the computer needs to do is check the observer's location and viewing angle and fetch the corresponding set of perceptual input from its database. In this scheme, there is no set of data uniquely designated as "chair," but rather, there are sets of perceptual data representing all the possible ways in which the chair could impress perceptual sensations onto the observer. Let us refer to the union of all the sets of perceptual representations related to a matrix object (such as the chair) as implicit representation.

Now, in either scheme, what I have argued for holds; the perceptual input perceived phenomenologically by the observer is indicative of an objectively existing thing, which is simply data in the computer matrix. In 1), the observer perceives a data object that is explicitly designated as a discrete object (a chair) in the computer, in an analogous fashion to how we say we see an objectively existing chair in the 'real world.' In 2), the observer perceives a subset of the data in the computer which implicitly connotes the existence of a discretely existing object. In both cases, the matrix must have some sort of representation of the objects in its world, be it explicit or implicit; if it did not, then a coherent perceptual world could not be displayed to the observers.

Notice, however, that there is a problem with 2). Here we have assumed that there is no explicit representation at all of any of the matrix environment. However, in order for 2) to work coherently, the computer must at least be able to keep track of the observer's location in the environment; in order to do this, there must at least be an explicit representation in the computer of a set of spacetime coordinates for the matrix environment.

But there is a much more grave problem with scenario 2). 2) works well enough in a static environment, but the matrix environment is dynamic, since en-matrixed people are allowed to interact with it and change it. So it is all well and good to have a prerecorded set of perceptual inputs that a chair could possibly impinge upon an observer without explicitly representing the chair, but what do we do if the observer actually moves the chair? The only way 2) can work coherently is if the matrix predicts the actions of all its inhabitants, starting from the initial conditions until termination of the simulation, and then generates the static perceptual inputs accordingly. Clearly 1) is the much more feasible scenario.

Think of a video game in which you observe, move through, and interact with a simulated 3-dimensional environment. Here, too, all the game console needs to do is produce the correct set of perceptual inputs (through the monitor) to give the gamer the illusion (albeit a much poorer illusion than the matrix) that s/he is immersed in an interactive 3-dimensional world. But, of course, the simplest (and indeed probably the only really feasible) way for the computer game to do this is to store explicit representations of the objects in the environment in conjunction with simulated laws of physics.
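To make scheme 1) concrete, here is a minimal sketch (mine, not from the thread) of explicit representation in Python. The names `World` and `render_view` are invented, and the "simulated laws of physics" are reduced to trivial distance-and-bearing geometry; the point is only that both observers' views are generated from one stored chair.

```python
import math

class World:
    def __init__(self):
        # Explicit representation: the chair exists as data whether
        # or not anyone is currently looking at it.
        self.objects = {"chair": {"x": 3.0, "y": 4.0}}

    def render_view(self, observer_x, observer_y):
        # Dynamically generate "perceptual input": here, just the
        # distance and bearing of each object from the observer.
        view = {}
        for name, pos in self.objects.items():
            dx = pos["x"] - observer_x
            dy = pos["y"] - observer_y
            view[name] = {
                "distance": round(math.hypot(dx, dy), 3),
                "bearing": round(math.degrees(math.atan2(dy, dx)), 1),
            }
        return view

world = World()
# Observers A and B receive different stimuli...
view_a = world.render_view(0.0, 0.0)
view_b = world.render_view(6.0, 0.0)
# ...but both views are computed from the SAME stored chair, so they
# are automatically consistent with each other (conditions a and b).
print(view_a["chair"])  # same distance as B's view, different bearing
print(view_b["chair"])
```

Nothing chair-shaped is "flying around" here: the stored dictionary entry plays the role of the atoms, and `render_view` plays the role of the simulated physics.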
 
Last edited:
  • #60
Originally posted by hypnagogue
Yes. Let us consider for our argument a set of perceptual inputs that represents a matrix chair at a particular location under a particular set of circumstances (such as lighting, etc), as perceived by an observer at a particular location with respect to the chair. There are only really two coherent ways in which a matrix computer could feed this perceptual input into the brain of an observer at any given time.

1) Store sets of data representing objects in the matrix environment such as chairs (this corresponds roughly to our notion of the atoms which compose a 'real' chair). When an en-matrixed observer is present, use simulated laws of physics to dynamically generate perceptual input and feed it into the observer's brain. Let us refer to this type of object representation in the matrix as explicit representation.

There are two problems so far:

1) Information about what stimulus will be applied to the "en-matrixed observer"'s brain is not static, but is programmed to be activated by other activity in that brain. It is not activated until this other activity occurs.

2) What good would it do the computer to have both the program for the chair, and the program for the stimulus to make someone experience a chair, working at all times? It works just fine for it to respond only to his activity, and thus the opposite of existentialism is at work...the chair is only there when he's looking at it.

2) Store sets of data representing every possible set of perceptual input from the matrix environment. This way, when an en-matrixed observer is present, the perceptual input needn't be dynamically generated, since it is already stored as static data in the computer.

But "static data" just takes the place of phenomenological chairs flying around, since all of material reality is a collection of "static data". In a matrix computer system, there needn't be any such static data, but should instead be a set of programs that are activated by particular activities in the observer's brain, and that produce a chair for his inspection.

All the computer needs to do is check the observer's location and viewing angle and fetch the corresponding set of perceptual input from its database.

But this means that the observer moves (if the computer must check his location). We cannot remain in the realm of analogy, hypna, we also have to think about what is actually happening. The observer is tied to a chair on some ship near the Earth's core, and he hasn't moved an inch since he was "plugged in". Thus, certain brain activities may translate as part of the "movement" program of the Matrix, but he hasn't moved at all.

In this scheme, there is no set of data uniquely designated as "chair," but rather, there are sets of perceptual data representing all the possible ways in which the chair could impress perceptual sensations onto the observer. Let us refer to the union of all the sets of perceptual representations related to a matrix object (such as the chair) as implicit representation.

But this is like an existentialism in a computer program, and I don't think that's the way computers work. After all, we already have programs that can allow me to see a chair from all possible angles, and in different lighting, but there is no static set of data in the computer for the chair, merely for the program that elicits that particular representation on the monitor.

if it did not, then a coherent perceptual world could not be displayed to the observers.

Wrong, and that's practically the whole point of Consciousness Explained. Dennett was trying to show that we didn't need this paradoxical dualism in order to have a world with consciousness. In the matrix, there are programs that elicit certain stimuli due to particular activities in the brains of the en-matrixed people. As it is, this program would have to be very complex, since it would have to account for all possible factors, but it would not have to do this when there was no observer present. After all, what good is a static representation of a chair to the computer itself (with no "observers" to stimulate)?

But there is a much more grave problem with scenario 2). 2) works well enough in a static environment, but the matrix environment is dynamic, since en-matrixed people are allowed to interact with it and change it. So it is all well and good to have a prerecorded set of perceptual inputs that a chair could possibly impinge upon an observer without explicitly representing the chair, but what do we do if the observer actually moves the chair? The only way 2) can work coherently is if the matrix predicts the actions of all its inhabitants, starting from the initial conditions until termination of the simulation, and then generates the static perceptual inputs accordingly. Clearly 1) is the much more feasible scenario.

Think of a video game in which you observe, move through, and interact with a simulated 3-dimensional environment. Here, too, all the game console needs to do is produce the correct set of perceptual inputs (through the monitor) to give the gamer the illusion (albeit a much poorer illusion than the matrix) that s/he is immersed in an interactive 3-dimensional world. But, of course, the simplest (and indeed probably the only really feasible) way for the computer game to do this is to store explicit representations of the objects in the environment in conjunction with simulated laws of physics.

And yet this is not (AFAIK) what video games do. For example, if I'm playing Donkey Kong 64, and am in the room with K. Lumsy, there only need be the stimulations to my television, which in turn produces certain photonic emissions that stimulate my retina (and, in turn, certain triangular arrays in my neocortex)...there needn't be any representation whatsoever of Cranky Kong in his lab, or of Candy Kong in her shop, since I'm not there and the game console has no use for such representations.
 
  • #61
Originally posted by Mentat
But "static data" just takes the place of phenomenological chairs flying around, since all of material reality is a collection of "static data". In a matrix computer system, there needn't be any such static data, but should instead be a set of programs that are activated by particular activities in the observer's brain, and that produce a chair for his inspection.

So the program produces a chair when the observer is looking. What happens when the observer looks away? In a logically consistent world, when he looks back to where the chair was, it should still be there. How does the computer take this into account? The only way is for it to store information about the chair, even when the observer is not looking at it. If this were not done, the computer would not be able to reliably reproduce the chair in that same location every time the observer looked there.

By way of analogy, the information that represents, stands for, codes for-- however you want to say it-- your web browser exists in your computer's hard drive, even when you are not actively running (looking at) your browser's program.

But this means that the observer moves (if the computer must check his location). We cannot remain in the realm of analogy, hypna, we also have to think about what is actually happening. The observer is tied to a chair on some ship near the Earth's core, and he hasn't moved an inch since he was "plugged in". Thus, certain brain activities may translate as part of the "movement" program of the Matrix, but he hasn't moved at all.

No kidding. I didn't say anywhere that the observer was moving in the 'real' world. I meant that the computer must keep track of where the observer is located in the simulated matrix world. This of course is not a literal physical location, just abstract data representing a location in an abstract world made of bits.

But this is like an existentialism in a computer program, and I don't think that's the way computers work. After all, we already have programs that can allow me to see a chair from all possible angles, and in different lighting, but there is no static set of data in the computer for the chair, merely for the program that elicits that particular representation on the monitor.

If the simulated world is to be an interactive one, then there must be some internal representation of the objects within it, or the computer must be able to precisely predict all actions taken by the participants. See my previous post.

Wrong, and that's practically the whole point of Consciousness Explained. Dennett was trying to show that we didn't need this paradoxical dualism in order to have a world with consciousness.

Whoa, hold your horses. I never said we need dualism to explain consciousness. I said we need internal data representation to explain how an interactive world like the matrix can work.

In the matrix, there are programs that elicit certain stimuli due to particular activities in the brains of the en-matrixed people. As it is, this program would have to be very complex, since it would have to account for all possible factors, but it would not have to do this when there was no observer present. After all, what good is a static representation of a chair to the computer itself (with no "observers" to stimulate)?

Again, you misunderstand. The computer needn't compute all the necessary things for human perception when an observer is not present. But it does need to store some sort of information in order to retrieve it for when the observer comes around, so that it can then do its appropriate computations.

Assume we play a game where we navigate through a 3 dimensional world, except instead of doing this through a computer, we do it through pencils, paper, and imagination. I have written down on a paper, "Room 17: It is a plain, cubic room. There is a chair in the back left corner of the room." I read this information to you when you have 'entered' Room 17. When you 'come back' to Room 17, I read it to you again, and sure enough, the chair is still in the back left corner.

The paper is like information in the computer database, and my reading the information to you is like an actively running program in the matrix presenting stimuli to an observer. I am not constantly reading the information to you, but I still need to have the paper handy in order to ensure that the Room 17 I present to you is logically consistent.

Say we stop playing and then resume 3 months later, and you remember the details about Room 17 but I do not. I also seem to have lost the paper with the information about Room 17 written on it. So I make something up, and you say, "Hey! That's not an accurate description of Room 17." Without the information stored on the paper, I have lost the ability to make our imaginary world logically consistent. Likewise for your version of the matrix.
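The pencil-and-paper game above can be sketched in a few lines of Python (illustrative only; `rooms` and `look_at` are my invented names). The renderer runs only when someone looks, yet it reads from stored state, so the room stays consistent across looks and survives changes the observer makes:

```python
# Stored representation: "the paper" with Room 17 written on it.
rooms = {17: {"chair": "back left corner"}}

def look_at(room_id):
    # "Reading the paper aloud": a description is generated on
    # demand, but always from the stored data.
    contents = rooms[room_id]
    return ", ".join(f"{obj} in the {place}" for obj, place in contents.items())

first_look = look_at(17)
# The observer looks away; nothing is being rendered in the meantime.
second_look = look_at(17)
assert first_look == second_look  # consistent because the state persists

# The observer moves the chair: the stored state is updated, and
# every later look reflects the change.
rooms[17]["chair"] = "center of the room"
assert look_at(17) == "chair in the center of the room"
```

Losing the paper corresponds to deleting `rooms`: the `look_at` function alone could no longer reproduce a consistent Room 17.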

And yet this is not (AFAIK) what video games do. For example, if I'm playing Donkey Kong 64, and am in the room with K. Lumsy, there only need be the stimulations to my television - which, in turn, produces certain photonic emissions that stimulate my retina (which, in turn, stimulates certain triangular arrays in my neocortex)...there needn't be any representation whatsoever of Cranky Kong in his lab, or of Candy Kong in her shop, since I'm not there and the game console has no use for such representations.

This is like saying this very post you're reading needn't be stored as data on a computer somewhere-- after all, your computer only needs to make the appropriate stimulations to your monitor, which in turn stimulates your retina, and so on, to have the experience of reading this post.

How does the computer make those appropriate stimulations to the monitor if it is not drawing it from some stored information? Is it doing it randomly? No, clearly there must be some sort of data in the server hard drive that represents this post, which can be fetched and displayed to you when requested/needed. Likewise with Donkey Kong and the matrix.
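To make the retrieval-on-demand point concrete, here is a toy Python sketch (all names invented - this is not how any real server or matrix is coded): the stored description persists whether or not anyone is 'looking', and the sensory output is generated only when an observer requests it.

```python
# Toy sketch: world data persists in storage; the "stimulus" (rendered
# output) is generated on demand, only when an observer requests it.

world_data = {
    "room_17": "A plain, cubic room. There is a chair in the back left corner.",
}

def render(location):
    """Generate stimuli for an observer, on demand, from the stored data."""
    return "You see: " + world_data[location]

# Nothing is rendered until an observer 'enters' the room...
print(render("room_17"))
# ...and on every re-visit the chair is reliably in the same corner,
# because the underlying data persists between renderings.
print(render("room_17"))
```

The point of the sketch is that `world_data` exists continuously, while the output of `render` exists only while an observer is present.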
 
  • #62
Originally posted by hypnagogue
So the program produces a chair when the observer is looking. What happens when the observer looks away?

It stops producing this stimulus, since it has no reason to anymore.

In a logically consistent world, when he looks back to where the chair was, it should still be there.

Indeed. When he looks back, the stimulus from his brain to the computer - which produces the stimulus from computer to brain of a chair - is re-activated.

How does the computer take this into account? The only way is for it to store information about the chair, even when the observer is not looking at it. If this were not done, the computer would not be able to reliably reproduce the chair in that same location every time the observer looked there.

If it stores information about what a chair is supposed to look like under all of the given circumstances then what you have is a program that deduces, from the stimuli given by the human's brain, which stimulus it should (in turn) give back to his brain to produce the illusion of a chair.

By way of analogy, the information that represents, stands for, codes for-- however you want to say it-- your web browser exists in your computer's hard drive, even when you are not actively running (looking at) your browser's program.

True enough, but all the computer has to remember is the program, it doesn't have a static representation of this particular page at all times.

If the simulated world is to be an interactive one, then there must be some internal representation of the objects within it, or the computer must be able to precisely predict all actions taken by the participants. See my previous post.

Well, that prediction part is more of what Dennett was worried about (which is why he believed it would lead to combinatorial explosion). As it is, there should not be any static representations of chairs in the matrix; but even if there can be, there needn't be since such predictions (or reactions to current stimuli that will cause later stimuli) can occur.

Whoa, hold your horses. I never said we need dualism to explain consciousness. I said we need internal data representation to explain how an interactive world like the matrix can work.

But we don't, since we have the Dennett model of actual interaction, between the brain and the computer. Each new stimulus from the brain causes the computer to produce the proper subsequent stimulus for the brain.

Again, you misunderstand. The computer needn't compute all the necessary things for human perception when an observer is not present. But it does need to store some sort of information in order to retrieve it for when the observer comes around, so that it can then do its appropriate computations.

What if it has a program that dictates only "this stimulus means that that stimulus is the appropriate response; while this stimulus means that that other stimulus is the appropriate response"?

Assume we play a game where we navigate through a 3-dimensional world, except instead of doing this through a computer, we do it through pencils, paper, and imagination. I have written down on a piece of paper, "Room 17: It is a plain, cubic room. There is a chair in the back left corner of the room." I read this information to you when you have 'entered' Room 17. When you 'come back' to Room 17, I read it to you again, and sure enough, the chair is still in the back left corner.

The paper is like information in the computer database, and my reading the information to you is like an actively running program in the matrix presenting stimuli to an observer. I am not constantly reading the information to you, but I still need to have the paper handy in order to ensure that the Room 17 I present to you is logically consistent.

But this is not a correct analogy to a computer's processes. If I were to come to you and you were to re-draw Room 17, then you would be doing what a computer does, since the computer has no use for such static representations until stimulated by an observer, and then only until stimulation ceases.

Say we stop playing and then resume 3 months later, and you remember the details about Room 17 but I do not. I also seem to have lost the paper with the information about Room 17 written on it. So I make something up, and you say, "Hey! That's not an accurate description of Room 17." Without the information stored on the paper, I have lost the ability to make our imaginary world logically consistent. Likewise for your version of the matrix.

But not if the program is written so as to produce each particular pixel of the representation in the order that it is supposed to in response to that particular stimulus.

This is like saying this very post you're reading needn't be stored as data on a computer somewhere-- after all, your computer only needs to make the appropriate stimulations to your monitor, which in turn stimulates your retina, and so on, to have the experience of reading this post.

How does the computer make those appropriate stimulations to the monitor if it is not drawing it from some stored information? Is it doing it randomly? No, clearly there must be some sort of data in the server hard drive that represents this post, which can be fetched and displayed to you when requested/needed.

Clearly you are not speaking from a knowledge of computers, but from a knowledge of what you believe "should" be the case with them (no offense is intended here, btw, I'm just making an observation).

However, I've been talking to some people, and it's becoming more and more apparent to me that the programs that run simulations are set up to respond to different stimuli (in this case, wherever I might click with my mouse or whatever key I might type on my keyboard) in the appropriate ways - meaning that there is nothing "written on paper", merely a lot of "paper", a lot of "ink", and a lot of programs that teach it what to do in response to whatever stimulus.
 
  • #63
Originally posted by Mentat
It stops producing this stimulus, since it has no reason to anymore.

Yes.

Indeed. When he looks back the stimulus from his brain to computer, to produce the stimulus from computer to brain of a chair, is re-activated.

Yes.

If it stores information about what a chair is supposed to look like under all of the given circumstances then what you have is a program that deduces, from the stimuli given by the human's brain, which stimulus it should (in turn) give back to his brain to produce the illusion of a chair.

But the computer can't know what circumstance the chair is in unless it stores information to that effect.

True enough, but all the computer has to remember is the program, it doesn't have a static representation of this particular page at all times.

Nor did I say it (your web browser program) had a static representation of this page at all times.

You seem to still think I am saying that the 'data object' of the chair includes the sensory output characteristic of it. I am not. I am saying this data object acts (partially) as a generator of those inputs by storing relevant information about the chair. As I have said, at least the chair's location in 'matrix space' must be recorded, and probably additional information (such as: Bob dropped grape juice on this chair, so it has a stain). This information can then be fetched from the database and used to produce appropriate sensory inputs when an observer is present.

Well, that prediction part is more of what Dennett was worried about (which is why he believed it would lead to combinatorial explosion). As it is, there should not be any static representations of chairs in the matrix; but even if there can be, there needn't be since such predictions (or reactions to current stimuli that will cause later stimuli) can occur.

A logically consistent world cannot be created entirely from information from observers' brains. For instance, say Bob spilled grape juice on the chair yesterday, but forgot about it. Jane saw him spill it, and remembers it vividly. We now have two contradictory sets of information about the chair with no way to decide which is right and which is wrong. The solution is to explicitly store information to the effect that grape juice has been spilled on the chair.

What if it has a program that dictates only "this stimulus means that that stimulus is the appropriate response; while this stimulus means that that other stimulus is the appropriate response"?

Then we can't have a logically consistent world. Jane's expectation that the chair should be stained means the appropriate response is to show her a chair with a stain on it. Bob's expectation that the chair should not be stained means the appropriate response is to show him a chair without a stain. Now Jane says to Bob, "nasty stain there, huh?" and Bob disagrees that there is even a stain on the chair. Logical inconsistency.
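A toy Python sketch of the fix (names invented, not any real system): one authoritative, stored record of the chair's state is consulted to generate stimuli, rather than each observer's possibly conflicting memory.

```python
# Toy sketch: one shared, authoritative world record resolves the
# conflict between observers' differing memories.

chair_state = {"stained": True}   # recorded when Bob spilled the juice

def show_chair(observer_memory):
    # The stimulus is generated from the stored world state, not from
    # the observer's (possibly wrong) expectation -- observer_memory is
    # deliberately ignored.
    return "a stained chair" if chair_state["stained"] else "a clean chair"

jane_memory = {"stained": True}    # Jane remembers the spill
bob_memory = {"stained": False}    # Bob forgot about it

# Both observers are shown the same chair, so no logical inconsistency:
print(show_chair(jane_memory))  # a stained chair
print(show_chair(bob_memory))   # a stained chair
```

If `show_chair` instead drew on `observer_memory`, Jane and Bob would see contradictory chairs - exactly the inconsistency described above.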

But this is not a correct analogy to a computer's processes. If I were to come to you and you were to re-draw Room 17, then you would be doing what a computer does, since the computer has no use for such static representations until stimulated by an observer, and then only until stimulation ceases.

I do 're-draw' Room 17 every time you re-enter it, by reading aloud the description of it to you. I explicitly said this.

The paper is like information in the computer database, and my reading the information to you is like an actively running program in the matrix presenting stimuli to an observor. I am not constantly reading the information to you, but I still need to have the paper handy in order to ensure that the Room 17 I present to you is logically consistent.

Indeed, I have no use for the piece of paper that has information about Room 17 until you 'enter' it. But once you do enter Room 17, I need that information stored away on the piece of paper to tell you (generate stimuli) about it. Just like the matrix has no use for information about the location of a chair in a room until an observer enters the room; at that point, the matrix fetches information from its database about the room, so that it can use it to generate the proper stimuli for the observer. If the matrix did not do this, it would have no way of 'remembering' where the chair should be located in this room.

But not if the program is written so as to produce each particular pixel of the representation in the order that it is supposed to in response to that particular stimulus.

It doesn't know what it's supposed to do without information to this effect. If it draws this information entirely from human brains, many logical inconsistencies will arise, since everyone has differing internal representations of what the (matrix) world out there looks like. Therefore, to make a logically consistent world, the matrix must store some information about this world on its own database.

Clearly you are not speaking from a knowledge of computers, but from a knowledge of what you believe "should" be the case with them (no offense is intended here, btw, I'm just making an observation).

Clearly I should just hand over my degree in computer science to you right now and be done with this conversation.

However, I've been talking to some people, and it's becoming more and more apparent to me that the programs that run simulations are set by programs to respond to different stimuli (in this case wherever I might click with my mouse or whatever key I might type on my keyboard) in the appropriate ways, meaning that there is nothing "written on paper" - merely a lot of "paper", a lot of "ink", and a lot of programs that teach it what to do in response to whatever stimulus.

There is still informational representation. If you are talking about something like neural nets, the representation is implicit and much more abstract, but it still exists.

Let's say I'm playing a flight simulator. There is a tall red building to my left. I turn right so that I can no longer see the building. Then I turn back around, and I can see the building again. How did the program remember that there was supposed to be a tall red building there? It had information stored that says something to the effect that "there is a tall red building here." When that 'here' is located in the current field of vision, then the computer actively uses the stored information to render the image of the building.
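As a rough Python sketch of the flight simulator case (names invented, not how any particular simulator is coded): the building's record persists in the world data at all times, while the rendered output is computed per frame, only for objects currently in the field of view.

```python
# Toy sketch: object state is stored once and persists; sensory output
# is computed per observer, per frame, only for objects in view.

buildings = [
    {"name": "red tower", "x": -50, "color": "red"},  # persists even when unseen
]

def visible(observer_x, heading, obj):
    # Toy visibility test: the object is in view if it lies in the
    # half-plane the observer is facing (+1 = facing +x, -1 = facing -x).
    return (obj["x"] - observer_x) * heading > 0

def render(observer_x, heading):
    """Dynamically generate the 'stimulus' from the stored state."""
    return ["a tall " + b["color"] + " building"
            for b in buildings if visible(observer_x, heading, b)]

print(render(0, -1))  # facing the building: ['a tall red building']
print(render(0, +1))  # turned away: [] -- nothing rendered, data persists
print(render(0, -1))  # turn back: same building, because it was stored
```

The building's pixels are never stored; only its record (`name`, `x`, `color`) is, and that record is what lets the simulator 'remember' the building whenever it re-enters the field of view.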
 
  • #64
Quote-response didn't seem appropriate in this case, so I'm just going to try to respond to all the points that need responding to on this post, without the use of quotes...

I'd like to first of all say that I didn't mean to sound condescending in any way when I said you didn't sound like you were speaking from an understanding of computer programs...you were just using a lot of "I thinks", IMO, and so it didn't seem like you were basing this on actual knowledge about the way a computer works.

Anyway, I want to concede a little here, but want to be clear as to how much I'm conceding. You see, I understand that a certain bit (no pun intended) of information must exist that indicates that, when a plane is pointed in that direction, there must appear a representation of a red building. However, that's not really what I was fighting. I was fighting against the idea that the sensory outputs would remain as a static representation, ready to be used at any given time. This is the only way I could see that you could apply the workings of a matrix computer to the dualistic idea of consciousness.

As it is, I'm willing to admit that these collections of information must exist in "computer language" :wink:, but the sensory outputs are re-reproduced (maybe just "reproduced" is correct here, I'm not sure) every time the stimulus from the brain equals the appropriate cause for that particular effect. In this case, there doesn't appear to be any relevance of this analogy to the dualistic approach to consciousness.
 
  • #65
Originally posted by Mentat
I'd like to first of all say that I didn't mean to sound condescending in any way when I said you didn't sound like you were speaking from an understanding of computer programs...you were just using a lot of "I thinks", IMO, and so it didn't seem like you were basing this on actual knowledge about the way a computer works.

You could have said "It seems that..." instead of "Clearly..." :wink: But in any case, don't sweat it.

Anyway, I want to concede a little here, but want to be clear as to how much I'm conceding. You see, I understand that a certain bit (no pun intended) of information must exist that indicates that, when a plane is pointed in that direction there must appear a representation of a red building. However, that's not really what I was fighting. I was fighting against the idea that the sensory outputs would remain as a static representation, ready to be used at any given time. This is the only way I could see that you could apply the workings of a matrix computer to the dualistic idea of consciousness.

We'll get back to that last point. :wink: For now, all I wanted to do was show that it is incorrect to say that (for instance) a matrix chair does not exist. It does exist, insofar as it exists as information in the computer matrix which represents its attributes-- structure, mass, location, etc. I never claimed that this perpetually existent set of data includes the sensory outputs associated with the chair; rather, I tried to make it clear that these sensory outputs were dynamically produced as a function of the observer's reference frame, the information representing the object, and the computer's simulated 'physical laws.'

As it is, I'm willing to admit that these collections of information must exist in "computer language" :wink:, but the sensory outputs are re-reproduced (maybe just "reproduced" is correct here, I'm not sure) every time the stimulus from the brain equals the appropriate cause for that particular effect. In this case, there doesn't appear to be any relevance of this analogy to the dualistic approach to consciousness.

Well, again, I just wanted to clarify that matrix chairs do exist. This is only a side issue for the main argument, which is well buried by now. :smile: I will look over the history of this thread and proceed with presenting the argument soon.
 
  • #66
Originally posted by hypnagogue
We'll get back to that last point. :wink:

I'm going to hold you to that. :smile:

For now, all I wanted to do was show that it is incorrect to say that (for instance) a matrix chair does not exist. It does exist, insofar as it exists as information in the computer matrix which represent its attributes-- structure, mass, location, etc. I never claimed that this perpetually existent set of data includes the sensory outputs associated with the chair; rather, I tried to make it clear that these sensory outputs were dynamically produced as a function of the observor's reference frame, the information representing the object, and the computer's simulated 'physical laws.'

So, doesn't that contradict the dualistic approach, which would give a separate (and static) existence to the matrix chair itself, and wouldn't allow it to be nothing but stimulus that the "probe" gives our brain that happens to be directed by an information structure in the computer's programming? IOW, dualism gives a sort of existentialist approach to phenomenological entities, which is what I thought was what was being considered in the matrix analogy.
 
  • #67
OK, let's clean the slate and start over from the beginning. I'm just going to post some propositions one by one, and hopefully we can come to an agreement on them before we proceed. (By the way, don't go assuming how I am going to use this or that in my argument-- I think that approach caused some confusion previously... just tell me if you agree or disagree with these statements. :wink: Hopefully I can make this clear and straightforward.)

Proposition 1:
An object/phenomenon/entity X is physical if and only if it is possible in principle to observe X in the objective world by using objective measurements. Otherwise it is non-physical.

Agree or disagree?
 
Last edited:
  • #68
Originally posted by hypnagogue
OK, let's clean the slate and start over from the beginning. I'm just going to post some propositions one by one, and hopefully we can come to an agreement on them before we proceed. (By the way, don't go assuming how I am going to use this or that in my argument-- I think that approach caused some confusion previously... just tell me if you agree or disagree with these statements. :wink: Hopefully I can make this clear and straightforward.)

Thank you, hypna. I'm sorry for being so confused in my views, but I just don't see how you can be right here. This approach will probably make it easier for me...

Proposition 1:
An object/phenomenon/entity X is physical if and only if it is possible in principle to observe X in the objective world by using objective measurements. Otherwise it is non-physical.

Agree or disagree?

Disagree. I cannot observe an electron.
 
  • #69
Originally posted by Mentat
Disagree. I cannot observe an electron.

Sorry, bad phrasing. Replace "observed" with "detected."
 
  • #70
Originally posted by hypnagogue
Sorry, bad phrasing. Replace "observed" with "detected."

Then, for the purpose of being a "good sport": Agree.
 
