The Mind is a Terrible Thing to WASTE

  • #1
Q_Goest
Are mental phenomena such as experience, thought and qualia just epiphenomena?

The "exclusion argument" per Yablo:
(1) If an event x is causally sufficient for an event y, then no event x* distinct from x is causally relevant to y (exclusion).
(2) For every physical event y, some physical event x is causally sufficient for y (physical determinism).
(3) For every physical event x and mental event x*, x is distinct from x* (dualism).
(4) So: for every physical event y, no mental event x* is causally relevant to y (epiphenomenalism).
Ref: Yablo (1992) "Mental Causation"
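For anyone who prefers it compact, here is the same argument in quasi-formal notation (Suff, Rel, Phys and Ment are just my shorthand for "causally sufficient", "causally relevant", physical and mental; this is my transcription, not Yablo's own symbols):

```latex
\begin{align*}
\text{(1)}\quad & \forall x\,\forall x^{*}\,\forall y:\;
  \big[\mathrm{Suff}(x,y) \wedge x^{*} \neq x\big] \rightarrow \neg\,\mathrm{Rel}(x^{*},y)
  && \text{(exclusion)}\\
\text{(2)}\quad & \forall y \in \mathrm{Phys}\;\;\exists x \in \mathrm{Phys}:\;
  \mathrm{Suff}(x,y)
  && \text{(physical determinism)}\\
\text{(3)}\quad & \forall x \in \mathrm{Phys}\;\;\forall x^{*} \in \mathrm{Ment}:\;
  x \neq x^{*}
  && \text{(dualism)}\\
\text{(4)}\quad & \therefore\;\; \forall y \in \mathrm{Phys}\;\;\forall x^{*} \in \mathrm{Ment}:\;
  \neg\,\mathrm{Rel}(x^{*},y)
  && \text{(epiphenomenalism)}
\end{align*}
```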

To put it even more simply, a switch in a computer changes position due to the local effects of voltage on that particular switch. The switch is not affected by the configuration of the entire computer, only by local causal actions.
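To make the "local causes only" point concrete, here is a minimal toy model (entirely my own sketch; the gate rule and wiring are illustrative, not any real architecture): each switch updates from its local inputs alone, and a whole-machine description can be computed from the switches but is never consulted by the update rule.

```python
# Toy circuit: each switch updates from LOCAL inputs only. A global,
# whole-machine description supervenes on the switches but never feeds back.
# The gate rule and wiring below are illustrative assumptions.

def next_state(a, b):
    """One switch: its next position depends only on its local input voltages."""
    return int(a and b)  # e.g., an AND gate

def step(switches, wiring):
    """Update every switch from the two switches wired into it."""
    return [next_state(switches[i], switches[j]) for (i, j) in wiring]

def global_description(switches):
    """A property of the whole configuration. Note: step() never calls this."""
    return sum(switches)

switches = [1, 0, 1, 1]
wiring = [(0, 1), (1, 2), (2, 3), (3, 0)]  # which outputs feed which switch
for _ in range(3):
    switches = step(switches, wiring)
    print(switches, "whole-machine description:", global_description(switches))
```

Nothing in step() depends on global_description(); that is the exclusion intuition in a few lines.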

It seems the exclusion argument holds perfectly well when applied to computationalism. At the other extreme, it carries no weight for religious beliefs, since those accept a dualistic basis.

There are, however, numerous attempts to avoid this conclusion. It seems engineers, philosophers and scientists are not ready to accept that all mental phenomena are epiphenomenal. But if we try to avoid the exclusion argument*, then don't we also have to give up computationalism? Or must we toss the baby (the mind) out with the bath water regardless?

*For example: Alwyn Scott claims that "nonlinear phenomena are those for which the whole is greater than the sum of its parts" and attempts to claim that the mind, like every other nonlinear phenomenon, is irreducible and that medium and weak downward causation are applicable.
See Scott, "Reductionism Revisited" also Emmeche et al, "Levels, Emergence and Three Versions of Downward Causation". PM for papers.
 
  • #2
Q_Goest, very interesting topic.

To account for mind-body dualism, I (no googling here) would initially propose that mental events are the informative aspect of physical events. Qualia indeed seem like information overflow from a physical process.

A really basic analogy consists of two rooms, where one room is called the P-room and the other the M-room. All physical events occur in the P-room. For some events in P, M is informed; for other events in P, M is not.

Epiphenomenalism claims there is no information transfer from the M-room to the P-room, such that a new event in the P-room may occur.

There are two cases to consider. The first is whether there are actually any causal events in the M-room; this turns on whether the M-room is solely an informative state, intrinsically relying on the P-room for its computations. The second is whether causal events are permissible in the M-room.

Let us suppose causal events are permissible in the M-room. Then either those causal events are meaningless, because only informed hermits live in the M-room, or there is some mechanism which allows the M-room to influence the activities of the P-room.

It is quite possible mental events are only there to inform, making the M-room look like "The Mind is a Terrible Thing to WASTE"; otherwise the P-room and the M-room are entangled!
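If it helps, here are the two rooms in code (my own toy encoding; the rule for which P-events get reported to M is just an assumption of the sketch):

```python
# Two-room toy: the P-room evolves on its own; the M-room is informed of
# SOME P-events and has no write access back to P (the epiphenomenalist claim).

class PRoom:
    def __init__(self):
        self.state = 0

    def event(self, stimulus):
        self.state += stimulus        # all physics happens here
        return self.state % 2 == 0    # only some events generate a report

class MRoom:
    def __init__(self):
        self.log = []                 # all M can do is accumulate 'experience'

    def inform(self, p_state):
        self.log.append(p_state)      # nothing here ever mutates the P-room

p, m = PRoom(), MRoom()
for stimulus in (3, 1, 2, 5):
    if p.event(stimulus):             # P decides when M is informed
        m.inform(p.state)
print("P ended at", p.state, "| M was informed of", m.log)
```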

p.s. Q_Goest, my brain states are supporting qualia that want to identify you as familiar!
 
  • #3
basePARTICLE said:
Let us suppose causal events are permissible in the M-room. Then either those causal events are meaningless, because only informed hermits live in the M-room, or there is some mechanism which allows the M-room to influence the activities of the P-room.
Given computationalism, there doesn’t seem to be any way M can affect P. Would you agree? In which case, M consists of informed hermits, unable to influence the physical world.

I think that’s a fundamental conclusion we have to accept, and a conclusion many computationalists would like to avoid. But I see no way to avoid it. To the best of my knowledge, no one has argued that computationalism allows for downward causation (ie: for mental phenomena to have influence on the physical).

On the other hand, classical mechanics allows for nonlinear phenomena which Scott describes as “those for which the whole is greater than the sum of its parts”.

Let’s assume computationalism is false, and suggest only that there are nonlinear or other phenomena which are classical and unlike computations, such that those phenomena might give us a ‘loophole’ to allow for mental causation.

Does such a loophole exist? Or are we still stuck accepting the exclusion argument?
 
  • #4
This seems to presuppose that there is a significant difference between mental and physical events/processes.
 
  • #5
Moridin said:
This seems to presuppose that there is a significant difference between mental and physical events/processes.

Yes and no. What is the purpose of qualia if the phenomenon has no influence on the behavior of the physical system? What is the purpose of having thoughts, desires, wishes or anything related to mental phenomena if none of it has any influence whatsoever on what we do?
 
  • #6
If consciousness and related phenomena are identical to certain physical brain states, then I see no issue with mental causation. Am I missing something?
 
  • #7
Moridin said:
If consciousness and related phenomena are identical to certain physical brain states, then I see no issue with mental causation. Am I missing something?
Demonstrate it. :approve:

I can see the issues; its asymmetry, for one thing.
 
  • #8
Q_Goest said:
Given computationalism, there doesn’t seem to be any way M can affect P. Would you agree? In which case, M consists of informed hermits, unable to influence the physical world.

I think that’s a fundamental conclusion we have to accept, and a conclusion many computationalists would like to avoid. But I see no way to avoid it. To the best of my knowledge, no one has argued that computationalism allows for downward causation (ie: for mental phenomena to have influence on the physical).

On the other hand, classical mechanics allows for nonlinear phenomena which Scott describes as “those for which the whole is greater than the sum of its parts”.

Let’s assume computationalism is false, and suggest only that there are nonlinear or other phenomena which are classical and unlike computations, such that those phenomena might give us a ‘loophole’ to allow for mental causation.

Does such a loophole exist? Or are we still stuck accepting the exclusion argument?
I agree that seems the case for human babies.

If all P entails M, then M cannot imply P.
If only some P entails M, then for some M, M can possibly imply some P.
 
  • #9
Moridin said:
If consciousness and related phenomena are identical to certain physical brain states, then I see no issue with mental causation. Am I missing something?
I think it's important to first acknowledge what paradigm you wish to base a claim on. For example, let's base your claim on computationalism. In this case, you are essentially saying the physical computational state of the entire computer (which equates to the physical brain state) has a mental state which is causing something. But for the mental state to cause something, it must cause something physical, such as the change in state of a particular switch. This is called "downward causation".

The exclusion argument simply acknowledges that for some physical state (P), there is some physical cause x which results in physical event y. If this physical state gives rise to a mental state (M), that's fine. Nothing wrong with that yet. But M is physically distinct from P. So if there is some event x* which is caused by M, then it is irrelevant to any event y, since the physical cause x is sufficient to explain y.

Clear as mud?

The exclusion argument is perfectly acceptable for computationalism as far as I can tell. However, there are various ways philosophers, physicists, biologists ... engineers would like to get around this and actually have mental states be something other than epiphenomenal. Hence the interest in various definitions of downward causation.
 
  • #10
Q_Goest said:
The exclusion argument simply acknowledges that for some physical state (P), there is some physical cause x which results in physical event y. If this physical state gives rise to a mental state (M), that's fine. Nothing wrong with that yet. But M is physically distinct from P. So if there is some event x* which is caused by M, then it is irrelevant to any event y, since the physical cause x is sufficient to explain y.

There seems to be a fundamental assumption here that a mental state is somehow distinct from a physical state? I'm sorry, it is getting late, so my brain is getting heavy.
 
  • #11
Q_Goest,

Let me define conscious physical action A, as P1 + P2, where P1 and P2 are physically distinct, but spatially proximate. P1 implies M and M does not imply P1, that is, there is no symmetry between P1 and M. However P2 takes its configuration from M, and there is also no symmetry between P2 and M, in the sense that P2 does not affect M.

(1) For automata, aware action A is twice P1.
(2) When in a coma: P1 does not imply M, therefore M cannot imply P2, and as such the patient is alive at <P1, but unaware because A has not reached its threshold value.
(3) For life there is a minimum value/output of P1.

This configuration seems a possible solution, when thinking about how automata can actually be aware in a sure and robust environment.

The possibility for M to influence A lies in the extent to which P2 will configure itself in relation to M, and whether M is solely dependent on P1 for its own configuration.
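Here is a toy numerical reading of the above (the mappings and threshold are illustrative guesses only):

```python
# Toy reading of A = P1 + P2: P1 informs M (one way), M configures P2
# (one way), and awareness requires A to clear a threshold. All numbers
# here are illustrative guesses, not measurements.

THRESHOLD = 1.5

def m_from_p1(p1):
    return p1        # M is informed by P1; nothing maps M back onto P1

def p2_from_m(m):
    return m         # P2 takes its configuration from M

def aware_action(p1):
    p2 = p2_from_m(m_from_p1(p1))
    a = p1 + p2      # with these choices A = 2*P1, matching claim (1)
    return a, a >= THRESHOLD

for p1 in (0.5, 1.0):                  # low (coma-like) vs. normal P1 output
    a, aware = aware_action(p1)
    print(f"P1={p1}: A={a}, aware={aware}")
```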
 
  • #12
Q_Goest said:
Are mental phenomena such as experience, thought and qualia just an epiphenomena?

I find the idea of 'emergent' phenomena much more compelling than reducing things to the epiphenomenal, but that's mostly a matter of intuition on my part. Reductionist thinking falls apart, in my view, because it assumes a kind of 'atomism' is implicit in causation, which I don't think is really supported by modern science.

I'm not trying to imply anything metaphysical here; I'm thinking it's more a matter of focus. We see objects causing things, but that's an artificial idea. Everything works within systems, and systems within systems. I'm not sure I would say that the sum is greater than the parts, but rather that the sum is simply of a different set. Although that last part is not strictly mathematical.

I wouldn't say 'the mind' is anything special; it's just a very obvious manifestation of emergent phenomena, as I understand it.

Just my two cents though.
 
  • #13
Ah, this is a great topic. But it is almost impossible to give you an easily intelligible answer. Sorry!

There is a great article by Sober & Shapiro 2007. See this page: http://philosophy.wisc.edu/shapiro/

Shapiro is very much pro-mental causation. As I recall, Sober & Shapiro think that counterfactuals of the form: "If there was a change in mental state X, there'd be a change in physical [i.e. neural] supervenience base Y" can be true counterfactuals - and that this is a pretty good criterion for causation.
 
  • #14
Moridin said:
If consciousness and related phenomena are identical to certain physical brain states, then I see no issue with mental causation. Am I missing something?

You're correct (as far as I know). Mental causation is a problem for dualists.
 
  • #15
Moridin said:
There seems to be a fundamental assumption here that a mental state is somehow distinct from a physical state? I'm sorry, it is getting late, so my brain is getting heavy.

Lord Ping said:
You're correct (as far as I know). Mental causation is a problem for dualists.

basePARTICLE said:
Q_Goest,

Let me define conscious physical action A, as P1 + P2, where P1 and P2 are physically distinct, but spatially proximate. P1 implies M and M does not imply P1, that is, there is no symmetry between P1 and M. However P2 takes its configuration from M, and there is also no symmetry between P2 and M, in the sense that P2 does not affect M.

(1) For automata, aware action A is twice P1.
(2) When in a coma: P1 does not imply M, therefore M cannot imply P2, and as such the patient is alive at <P1, but unaware because A has not reached its threshold value.
(3) For life there is a minimum value/output of P1.

This configuration seems a possible solution, when thinking about how automata can actually be aware in a sure and robust environment.

The possibility for M to influence A lies in the extent to which P2 will configure itself in relation to M, and whether M is solely dependent on P1 for its own configuration.

For Moridin and Lord Ping, and I think (not sure) this applies to basePARTICLE too:

Consider an allegedly conscious computer named Hal with a hand that can feel pain. Hal sticks that hand on a hot surface and suddenly withdraws the hand. We might say the computer felt pain. The heat sensor activated when the temperature rose, which made various switches inside the computer change state, which made various other switches send a signal to say “ouch”, another set of switches to the hydraulics to withdraw the hand, another set of switches to … etc…

To Moridin’s question, how is this mental state distinct from the physical state? We may say Hal felt pain, which caused Hal to withdraw his hand, say “ouch” and all the other things he did. But if Hal DIDN’T feel anything whatsoever (ie: was a p-zombie), then would his reaction be any different? This assumes all the circuitry were identical, of course. If the circuitry were identical, then Hal would have done the same thing, regardless of the fact that he was a p-zombie, because it is the physical state which resulted in the behavior, NOT the mental state.

If that still isn’t clear, consider that Hal may have felt something soft when he touched the hot plate, or had an orgasm, or somehow enjoyed the sensation of touching something hot. Regardless of what experience Hal had, his reaction would be the same: he would withdraw his hand and say “ouch”, and when queried about the experience, he would say it hurt and that he would never do it again, because the behavior was determined by the interaction of various switches, not by some mental state.

Basically, we can explain everything about the behavior without resorting to “mental states”, so we should consider the mental and physical states to be “distinct”.
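To put the Hal story in concrete form, a toy sketch (my own illustration, obviously not a real controller): the behavior is computed entirely from the sensor reading, and the "quale" label can be swapped or deleted without changing a single output.

```python
# Toy Hal: behavior is a function of circuit/sensor state alone. The quale
# is a label attached after the fact; swapping it changes no output.

def hal_step(temperature_C, quale="pain"):
    """Hal's reflex, fully determined by the heat sensor reading."""
    actions = []
    if temperature_C > 60:                # sensor trips the switches
        actions.append("withdraw hand")   # hydraulics circuit
        actions.append('say "ouch"')      # speech circuit
    return actions, quale                 # the quale does no causal work here

print(hal_step(90, quale="pain"))         # (['withdraw hand', 'say "ouch"'], 'pain')
print(hal_step(90, quale="orgasm"))       # same actions, different label
print(hal_step(90, quale=None))           # p-zombie Hal: identical behavior
```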

Lord Ping said:
Ah, this is a great topic. But it is almost impossible to give you an easily intelligible answer. Sorry!

There is a great article by Sober & Shapiro 2007. See this page: http://philosophy.wisc.edu/shapiro/

Shapiro is very much pro-mental causation. As I recall, Sober & Shapiro think that counterfactuals of the form: "If there was a change in mental state X, there'd be a change in physical [i.e. neural] supervenience base Y" can be true counterfactuals - and that this is a pretty good criterion for causation.

Regarding the paper, can you post a link to the paper and perhaps quote what you feel is relevant?
 
  • #16
Moridin - doesn't there have to be a perfect, inviolate one-to-one relationship between a quale and a physical brain state? I believe Dennett offers a thought experiment as a counterargument. It supposes some of the correspondences of qualia to physical brain states or areas are "rerouted" - like red being routed to green.

He then argues to the effect that 'we couldn't tell the difference, so therefore qualia cannot be defined'. I am reasonably sure this is not the case, from a scientific point of view. We could tell the difference. People who are color blind routinely report seeing colors, e.g. green, that they cannot perceive and have never perceived. These reports apparently occur either in extremely vivid dreams or under sleep deprivation.

Given this phenomenon, the color interpretation must be more than just "red" coming through the optic nerve. It must route to a predestined or hardwired group of neurons - neurons just for red, not orange. Therefore, sending in data about an object that was red yesterday and orange today is certainly going to cause a problem.

Edit: oops, forgot the link :smile:
http://ase.tufts.edu/cogstud/papers/quinqual.htm
 
  • #17
He then argues to the effect that 'we couldn't tell the difference, so therefore qualia cannot be defined'. I am reasonably sure this is not the case, from a scientific point of view. We could tell the difference. People who are color blind routinely report seeing colors, e.g. green, that they cannot perceive and have never perceived. These reports apparently occur either in extremely vivid dreams or under sleep deprivation.

How would they know it is a color they are perceiving? If I never have seen a unicorn in my life or heard it described, how would I know I've seen a unicorn?

Given this phenomenon, the color interpretation must be more than just "red" coming through the optic nerve. It must route to a predestined or hardwired group of neurons - neurons just for red, not orange. Therefore, sending in data about an object that was red yesterday and orange today is certainly going to cause a problem.

I'm not sure I follow? It seems to be a version of Mary the color scientist?
 
  • #18
Q_Goest said:
Regarding the paper, can you post a link to the paper and perhaps quote what you feel is relevant?

It's the paper with Sober on the link I gave earlier. "Epiphenomenalism". This one:

http://philosophy.wisc.edu/shapiro/HomePage/shapiro%20and%20sober.pdf

It's not an easy paper. But it does include a defense of mental causation on manipulationist grounds. A manipulation of mental state M1 would result in a change in mental state M2 (even if it's via the physical supervenience base) - and so M1 causes M2.
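If I've understood the manipulationist line, the criterion can be written roughly like this (my paraphrase in interventionist do(·) notation, not Sober & Shapiro's own symbols):

```latex
M_{1} \text{ causes } M_{2} \quad\text{if}\quad
do(M_{1}{=}m) \Rightarrow M_{2} = g(m)
\quad\text{and}\quad
do(M_{1}{=}m') \Rightarrow M_{2} = g(m'),
\qquad g(m) \neq g(m').
```

The intervention on M1 may well be implemented physically, by wiggling its supervenience base, and the counterfactual can still come out true.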
 
  • #19
Hi Lord Ping,
Thanks for the link. Very much appreciated.

I agree with your evaluation of the paper. As you say, Shapiro is trying to avoid epiphenomenalism. In so doing, he’s also arguing against Kim and thus against many very similar papers. Kim’s work is highly regarded, as you probably know, so if Shapiro wants to shoot down Kim’s work he has quite a serious task ahead of him. Personally, I like Kim’s “causal inheritance principle” that Shapiro refers to on page 10, and I disagree that Shapiro gets around it.

Note that Shapiro, like many philosophers, doesn’t distinguish between different models of consciousness. I disagree with this approach, but it’s not uncommon. Had he carefully specified his fundamental assumptions, I might actually be in agreement. Shapiro wants, as you say, to link M1 to P1 and thus suggest simply that M2 is a causal influence, by appealing to what he calls the “functional model of reduction”. On functional reduction:

Kim is moved by an idea he calls the functional model of reduction. According to this model, reduction proceeds first by functionalizing the property to be reduced. Functionalization consists in defining a property in terms of its functional role. Functional reduction is then the identification of an instance of a functional property with the instance of the property that realizes it. To the extent that mental states can be defined by their functional roles, they are candidates for functional reduction.

With functional reduction, Shapiro seems to agree with Kim. He uses this concept as he summarizes his case:

Functionalization does not challenge the efficacy of the functional property because the functional property is identical to the first-order property that realizes it. M1 can cause M2 because an instance of M1 is an instance of P1, and P1 causes P2, which in turn is identical to M2. Problem solved.

I disagree it is so simple. Let’s apply this to computationalism, such as a conventional computer made up of interconnected switches. In this case, the computer is P1, which allegedly has a mental state M1. The computer changes in some deterministic fashion to physical state P2, with alleged mental state M2. Shapiro would have us believe that since P1 is the physical substrate of M1, we can’t separate these. So when P1 changes to P2, it is just as acceptable to claim that M1 caused M2, which is the same as P2.

There’s a problem with this, and I’m not sure if he addresses this or not. If so, it must be in the conclusion where he says:

Given our thesis that macro-causation entails micro-causation, but not conversely, it is possible that a given macro-variable X is causally inert though its microsupervenience base MSB(X) is causally efficacious with respect to some effect term Y. However, this possibility is not the setting within which familiar and unproblematic examples of epiphenomenalism arise.

I’m not sure this is the same issue as my own, but thought I’d point it out just in case.

The problem seems to be that P1 has what Shapiro describes as a microsupervenience base, MSB(X). In the case of a computer, MSB(X) consists of individual switches. Each switch, then, is a single bit of binary information. We could reduce each switch to parts, but since the amount of information a single switch contributes to P1 is only 1 of 2 possible values, further reduction of the switch seems superfluous.

Computationalism, of course, posits that this single bit of binary information represented by a switch has no corresponding mental state by itself. The mental state M1 “emerges” from the totality of the entire MSB(X). P1, on the other hand, is a summation of the entire MSB(X). P1 itself obtains its causal nature from the switches. The switches don’t change state because of physical state P1; they change state because of the local effect of voltage. Any state P1 therefore is entirely dependent on, and a summation of, the individual switches – what Shapiro is calling MSB(X). Therefore, any emergent property such as M1 is reliant on the MSB(X) of P1. If this is true, then M1 has no causal influence over the change of P1 to P2 – the change in state depends on the local, causal effects of voltage on the switches that make up the microsupervenience base.
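The same point in toy form (my own sketch, echoing the switch model from post #1; emergent_M stands in for whatever M1 is supposed to be): the update rule's inputs are local switch values only, so any property computed from the whole of MSB(X) is screened off from the dynamics.

```python
# Toy MSB(X): the trajectory of P is fixed by local switch updates alone.
# emergent_M (standing in for M1) is computed FROM the switches; deleting
# it could not change the trajectory. The XOR rule is an illustrative choice.

def local_update(left, right):
    return left ^ right          # voltage-level rule: each switch sees neighbors only

def run(msb, steps):
    trajectory = [tuple(msb)]
    for _ in range(steps):
        n = len(msb)
        msb = [local_update(msb[i - 1], msb[(i + 1) % n]) for i in range(n)]
        trajectory.append(tuple(msb))
    return trajectory

def emergent_M(msb):
    """Whatever 'emerges' from the totality of MSB(X). run() never uses it."""
    return hash(tuple(msb))

traj = run([1, 0, 0, 1, 0], steps=4)
print("P1 -> P2 -> ...:", traj)
print("M1 =", emergent_M(traj[0]), "(played no role above)")
```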

I’ve tried to apply Shapiro’s argument to computationalism, and I believe it fails. Perhaps Shapiro has addressed this concern; it’s hard to say.

Note that the above argument might fail if we suggested that the physical base P1 was irreducible, as in the argument provided by Scott for nonlinear phenomena. However, there is nothing physically nonlinear about a computer.

It could be we are culturally predisposed as engineers or scientists to the reductionist point of view. Shapiro, on the other hand, seems to want to avoid reductionism. He’s not alone in this, but a computer is certainly reducible to its constituent parts; I see no arguing that point. A computer does what it does just like a series of dominoes falling over. There’s no causal influence created by a mental state M1 on a series of dominoes falling over, and certainly the physical state of a set of dominoes can be mapped to a physical state of a computer. The problem of mental states being epiphenomenal for a computational device isn’t something that can be avoided, IMHO.
 
