EDITOR: Jonathan Shear
Could a 1997 collection of papers still say something meaningful about the present state of the art in consciousness studies? Maybe not. Neuroscientific research has grown very fast in the last decade. On a philosophical level, though, the issue seems to have changed very little, and this book is very useful for getting a general picture of what is referred to as ‘the hard problem of consciousness’. Since many authors here doubt that the physical sciences will ever be able to give a definitive answer to such a problem, probably none of the most disruptive recent scientific discoveries – the Higgs boson or gravitational waves – would change their minds anyway. The freshness of the present work, at least from a philosophical standpoint, became surprisingly clear to me after I watched Chalmers’ most recent TED Talk, given in 2014, where the same basic issues are laid out; in an even more recent episode of the Waking Up Podcast, Chalmers makes the same arguments you’ll find here.
You will certainly notice how often I disagree (in the footnotes) with authors who propose alternatives to the traditional explanation of consciousness, which roots it in neurophysiological workings. I am particularly fond of Sean Carroll’s latest work, whose wit is especially precious in striking down philosophical zombies and downward causation. Nevertheless, as Chalmers’ TED Talk and Sam Harris’ enthusiasm reminded me, these objections are far from defeated. My aim in the present review is to present the authors’ arguments as neutrally as possible, to stay faithful to their explanations and confine any personal disagreement to the footnotes, so that you can judge for yourself how ungrounded I believe many of the presented arguments to be. I was a little disappointed to find that the collection presented by Shear is not well balanced. The majority of the papers fall into the category of those that sympathise with Chalmers’ position; skeptics are in the minority, maybe because of Shear’s selection, maybe because skeptics themselves deemed such argumentation not worth engaging with in the first place. Another good reason to listen to the fringe is that science, by the way it is structured, is becoming increasingly conservative. That is a good thing, except for the fact that discoveries become much rarer and more expensive. Stretching these constraints a little is surely a good creative practice, and it strengthens one’s critical abilities as well.
The present review mimics the structure of the book. Chalmers’ keynote article is followed by 26 different papers, all previously published in the Journal of Consciousness Studies, and the volume is closed by a Chalmers paper in response. From Dennett to Price, in the section called “Deflationary Perspectives”, the general approach is to reduce the hard problem to something that the physical sciences can fully figure out. From McGinn to Robinson, in “The Explanatory Gap” section, the authors remark that the hard problem is really hard – some, that it is even insoluble. From Clarke to Bilodeau, in the “Physics” section, quantum-mechanical perspectives are used to shed light on the problem. From Crick and Koch to MacLennan, in the “Neuroscience and Cognitive Science” section, the authors tackle hypotheses that link consciousness with the cognitive sciences. From Seager to Hut and Shepard, in the “Rethinking Nature” section, the possibility of defining consciousness as a general feature of the universe is explored. From Velmans to Shear, in the final “First-Person Perspectives” section, it is argued that a new science of subjective phenomena is needed. The symposium is summarised in Chalmers’ response paper.
Footnotes, although labeled as ‘References’, are personal comments or useful additions for a deeper understanding.
Here you can download the ebook version of this review.
Facing Up to the Problem of Consciousness – by D. Chalmers
Chalmers starts off by drawing a distinction, within consciousness studies, between easy problems and the hard problem.
The easy problems concern those functions and abilities of consciousness that we still haven’t explained. The hard problem is the impossibility of reducing “what it feels like to be a human” to mere physical, functional terms. Chalmers refers to this feeling as experience; others call it phenomenal experience or qualia. He states that experience must be caused by something more than anything we would be able to detect at the neurophysiological level. He further argues that since functionally identical organisms can be conceived either as having experience or as lacking it, we must look for a deeper explanation of experience beyond brain function.
Chalmers claims that an extra ingredient is needed. He suggests taking experience as a fundamental, irreducible ingredient of any theory of consciousness; he calls this naturalistic dualism since, in his opinion, there is no contrast with the established laws of physics: he simply adds “further bridging principles to explain how experience arises from physical processes” (p.20, original italics). He thus calls it a nonreductive theory of consciousness.
Chalmers constructs his theory upon the following three principles, the first and second of which are less fundamental (and less controversial) than the third:
- The principle of structural coherence, which states that processes of consciousness (characterised by Chalmers as the phenomenon of experience) and awareness (the ‘easy problems’ stuff) are structurally coherent.
- The principle of organisational invariance, which states that “what matters for the emergence of experience is not the specific physical makeup of a system, but the abstract pattern of causal interaction between its components” (p.25).
- The double-aspect theory of information. This is how the argument goes: since “the differences between phenomenal states have a structure that corresponds directly to the differences embedded in physical processes” (p.26), then physical processing and conscious experience share some properties, and a double-aspect of information itself can therefore be inferred.
Facing Backwards on the Problem of Consciousness – by D. Dennett
Dennett draws parallels with the old issue of vitalism to point out that the very functions Chalmers judges insufficient to explain the subjectivity of qualia are in fact responsible for it, and for our wondering “how consciousness could possibly reside in a brain” (p.35); he adds that without those functions, there would be nothing left to wonder about.
Dennett leaves Chalmers with the burden of finding an independent ground – as physicists have done to justify the introduction of fundamentals such as mass, charge and space-time – to support his claim that information should rise to the same ontological level and be considered a fundamental property of the universe.
The Hornswoggle Problem – by P. Churchland
Churchland sets out to show how Chalmers has carved out a problem space that wasn’t actually there. She asks rhetorically: “What exactly is the evidence that we could explain all the ‘easy’ phenomena and still not understand the neural mechanisms for consciousness?” (p.38) She locates the main proof in the zombie thought-experiment, the notion that it is conceptually possible for a perfectly functioning entity not to experience anything. She remarks that “saying something is possible does not thereby guarantee it is a possibility”. She further adds that the demarcation between easy problems and the hard problem might be far less well defined than Chalmers makes it out to be.
She brings the issue back to how little we actually know about how to solve the supposedly easy problems, and calls out an argumentum ad ignorantiam: Chalmers turns the incompleteness of our current understanding into a metaphysical ground for his argument in favour of a new fundamental property of consciousness. “The mysteriousness of a problem is not a fact about the problem, it is not a metaphysical feature of the universe – it is an epistemological fact about us.” (p.42)
The final thrust against Chalmers’ essential distinction is made in the light of the history of science: more often than not, the tractability of problems is misjudged. Back in the fifties, people believed it would be much easier to figure out the folding of proteins than the copying processes. The story turned out differently. Churchland closes with the following: “When not much is known about a topic, don’t take terribly seriously someone else’s heartfelt conviction about what problems are scientifically tractable. Learn the science, do the science, and see what happens”. (p.43)
Function and Phenomenology: Closing the Explanatory Gap – by T. Clark
Clark’s argument is largely about how scientific theories should be developed, and how poorly Chalmers’ theory fits those requirements. In particular, if science is the practice of incorporating the phenomena to be explained into an existing theoretical framework in a Bayesian fashion, it must do so with minimal changes to the original framework. Chalmers’ theory de facto posits a dualistic reality, which physical inquiry has ruled out.
Another fundamental aspect of scientific procedure is that, by the rules of falsifiability, the burden of proof lies with those who try to add something to a theory. Furthermore, we generally “shouldn’t posit as fundamental that which we are seeking to explain.” (p.47) Chalmers presents no evidence for why the dual aspect of information should be established as a fundamental feature of reality, and places experience as necessary to account for the very existence of conscious phenomena.
Clark further expands on why Chalmers falls prey to such machinery in the first place. One of the reasons is that Cartesianism still lingers quite strongly, although it has evidently faded over the last century or so. Another is related to a sort of anthropocentric bias, whereby qualitative experiences are linked to complex organisms like us, and no place is left for those that don’t show the ‘right’ characteristics to be candidates for experiencing consciousness. The point is, we don’t know well enough what is responsible for those experiences to rule out a priori who has them and who doesn’t. Clark further suggests that qualia may simply be aspects of specific kinds of functional organisation. The third point the author makes in explaining the special role of consciousness is related to the second: humans like to think of themselves as unique mainly because of their conscious functions, particularly rational agency. The root fear is that there will be nothing left for free will once we accept consciousness as a mere physical process. Hence the creation of such a dualism, to defend our purported specialness as human beings. Clark doesn’t address this emotional concern.
The author interestingly turns the problem of the ineffability of qualia upside down: since “as subjects we are constituted by and identical to cognitive processes which themselves instantiate qualia [the identity hypothesis], qualia are what it is for us to be these processes” (p.51, original italics). The hypothesis emerges as the most reasonable one given the preceding description of how scientific theories must proceed. “The ineffability of qualia, among their other properties, is thus a consequence of and explained by the functional identity hypothesis.” (p.51)
The ineffability of subjective experience as proof of functional identity is taken even further when Clark says that such opaqueness “could be a clue to their [qualia] not having a determinate intrinsic nature.” (p.55) Qualia being such a basic part of who we are, how could we possibly develop a perspective on them? This rules out, counter-intuitively, that we are actually having a first-person point of view on our experience. And the fact that we cannot speak about ‘that’ subjective experience exactly as it is uniquely felt by the person is itself evidence against its having an intrinsic nature.
Chalmers’ second principle is even used to support the very notion of the identity hypothesis: organisational invariance is indeed quite a strong argument in favour of a close relationship between qualia and functions.
Clark concludes by pointing out how intrinsic, essentialist approaches to consciousness are structurally resistant to any scientific, functional inquiry, as they are built to escape any objective definition. He takes the last challenge of science to be the defeat of that resistant intuition, which casts subjectivity as an ‘ontologically separate world’, so strongly does it emerge in our everyday experience as something peculiar and strangely, uniquely different from everything we see around us.
The Why of Consciousness: A Non-Issue for Materialists – by V. Hardcastle
Hardcastle’s position could be summarised as follows: “pointing out the relevant brain activity conjoined with explaining the structure of experience and some functional story about what being conscious buys us biologically would be a complete theory of consciousness.” (p.62)
She holds Chalmers’ move of positing consciousness as a ‘brute fact about the world’ to be wrong for the following reasons: brute facts are necessarily basic, and biological facts have all been shown to depend upon even more basic physical principles. Taking consciousness out of the biological realm seems to overcome this objection, but consciousness is then left with nothing to support it, and it fails to qualify as an ontologically new category. She further argues against the supposedly phenomenal nature of information – one of its double aspects, according to Chalmers – since, as we have known since Freud, much of our information processing is unconscious.
Hardcastle shows how the controversy might be genuinely doctrinaire: just as some won’t accept any descriptive, functional explanation of the wateriness of water or the aliveness of life, any identity statement about the nature of consciousness is going to fall short as well.
She then readily points out how the choice of opting in or out of the scientific game is largely a matter of accepting its rules; she defends the materialistic approach as genuinely coherent with the current scientific model, and refuses to provide any further argument to those who have antecedently chosen not to play the game as it is set up.
There Is No Hard Problem of Consciousness – by K. O’Hara and T. Scutt
In this paper the authors discuss both methodological and philosophical reasons to ignore the hard problem of consciousness as proposed by Chalmers, while distancing themselves from mere eliminativism.
The methodological reasons go like this: since the hard problem of consciousness is far from being well defined, and since we lack any basic idea of how an approach to its solution should be carried out, we should focus on those things – the easy problems – that would, at least in principle, provide us with further understanding of the nature of consciousness. This is the pragmatic take on the problem.
Since methodological reasons alone won’t suffice as a complete argument for setting the hard problem aside, the authors provide philosophical, a priori arguments as well. The first of them is a context argument: even though exploring all the easy problems might not lead to a solution of the hard problem, we nevertheless cannot decide that in advance. Furthermore, as the history of science has repeatedly shown, advances in any field change the very way we come to understand it. It might very well be that solutions to the easy problems would adjust our whole understanding of what consciousness is, hence shifting the current framing of the problem.
The second philosophical argument is an epistemological one, and goes as follows: we cannot pretend even to understand an eventual solution to the hard problem until we have an established, well-understood concept of what consciousness is in the first place. It is widely acknowledged, Chalmers included, that the study of consciousness is still far from ripe. Just as Democritus and Lucretius shot in the dark and correctly guessed the atomic nature of reality, the argument goes, we could be doing the same in the field of consciousness, but we could never prove why that theory is right. In the economy of research, then, the previously outlined methodological reasons acquire new strength from this line of thought.
O’Hara and Scutt then examine supposed attempts to solve the hard problem (Edelman 1992, Crick 1994), mainly exposing them as covert attempts to use solutions to the easy problems as a solution to the hard one.
The charge of eliminativism is strong, and both authors acknowledge it. They rebut it by offering a practical use of the concept of consciousness that makes it precious, even if not ultimately defined, for the advancement of research. They point to the field of anaesthesiology, where denying the very phenomenon of consciousness would raise unsolvable problems.
Should We Expect to Feel as if We Understand Consciousness? – by M. Price
The cornerstone of this paper is to question the assumption that the explanatory gap between the objective description of the brain and subjective experience is somehow problematic. Price does so in three different ways.
First, Price argues, explanatory gaps are far from rare in our explanatory accounts of causal relationships. He poses the issue in the following terms: everyday causal inferences are what allow us to feel a sense of understanding when we recognise an event A as directly responsible for an event B. It is a natural tendency, indispensable for our survival, but it may not be the entire story. In fact, an extreme philosophical counterpoint can be traced back to David Hume and his regularity conception of causality.
In a nutshell, “The idea of a causal nexus is in principle non-sensical because ground A and outcome B cannot at the same time be different from one another and account for each other. […] Causation ‘as it really is’ consists just of regularities in the relationships between states of affairs in the world.” (p.85, original italics) The idea is that there is no causal relationship between objects at a fundamental level, and that sounds much like what quantum mechanics is telling us. Popper (1980) insisted that causation as such is not a necessary condition of scientific inquiry; its usefulness lies in its ability to distinguish accidental regularities from consistent ones, the latter ending up labeled as laws.
This is how the second and third parts of Price’s paper are intertwined: if our innate ability to cover up such major gaps makes the consciousness gap appear particularly problematic, because it somehow escapes our powerful pattern-recognition habits (second part), then it is useful to understand how we usually explain causation in psychological terms (third part).
Price leans heavily on a paper by E. Rosch (1994), who arrived at the brilliant intuition that “explanations that derive events from something other than themselves only come to feel like explanations because somewhere along the line they surreptitiously accomplish the trick of introducing the outcome itself.” (p.87) How is a logically circular explanation of causality tricked into a psychologically sound one?
1. Transfer of a property from ground to outcome, e.g. the transfer of the property of motion. This trick doesn’t apply to consciousness, as its subjective nature – the supposed emergence – could never be mistaken for the properties of the ground, i.e. brains.
2. Perceiving an object or intending an action: we are naturally led to believe that our perceptions are similar to the objects of those perceptions, just as any action can be ascribed to some underlying intention. That is not the case with consciousness, where the issue is not to understand how these relationships work, but rather to know how consciousness itself comes about as subjective experience.
3. Seeing grounds and outcomes as the same entity, but transformed in some way. Here lies a strong objection against the identity thesis: “Leibniz’s Law of the Identity of Indiscernibles requires that all properties of identical entities are shared, whereas the crux of the mind-brain problem is exactly that the physically described pain does not share the crucial property of first person subjectivity.” (p.89) This clashes with Clark’s account of why we can’t even produce a first-person perspective on our experience. “It is all very well to think of consciousness and its ground as the same thing viewed from differing perspectives,” Price’s argument goes, “but this merely begs the question of how such radically differing perspectives can come about.” (p.89)
4. Seeing an outcome as a property of a category to which the ground belongs, or the general theory of panpsychism, a peculiar version of which is Chalmers’ phenomenal aspect of information. This is the position of those who see consciousness as a property of all things, which makes it much easier to close the mind-body gap. The fact is, panpsychism has very weak appeal in the current scientific consensus – if any.
Price adds two further concepts to clarify what may additionally lead to an ‘obscuration’ of explanatory gaps: similarity and familiarity. These are two well-known mechanisms that help us get that feeling of understanding even when a logical explanation is not available. “If similarity and familiarity help to obscure explanatory gaps, then when we encounter unfamiliar relationships which are also unlike anything else in our experience, […] explanatory gaps will be particularly obvious. […] Consciousness might find itself in a similar boat.” (p.91)
So, what is Price’s argument for? That we shouldn’t put too much trust in a feeling of understanding to tell us whether we are getting closer to solving the hard problem. He wants us to fully acknowledge its psychology, and to keep in mind that what we perceive is not only a matter of what’s ‘out there’.
Consciousness and Space – by C. McGinn
McGinn goes back to Descartes’ understanding of body and mind to defend the following position: conceiving of mind as a non-spatial entity suits our ordinary understanding of mental phenomena well. It is hard to deny, in his opinion, that we can’t help but regard conscious states as unperceivable, for we are unable to assign them any sort of spatial location by means of external observation. Experiences wouldn’t even be detectable by eyes sharper than ours: such is our condition that we can safely say that ‘consciousness is not a thing’.
McGinn acknowledges that we do conceive of our brain as the main location of thoughts, and that we don’t think of ourselves as being anywhere other than in our body. Still, he maintains that this point doesn’t “go very far in undermining the intrinsic non-spatiality of the mental.” (p.99) He further argues that our locating phenomenal events in space is a ‘sort of courtesy’, and that the fact that we do so in such an approximate way is a mark of their intrinsically non-spatial nature. Mentioning the notion of solidity, McGinn maintains that mental phenomena violate the principle, as they aren’t in competition for space. 1)This supposition rests on the assumption that mental things somehow lack spatial properties. It seems to me that the simple fact of not being able to produce two simultaneous thoughts should tell us enough about the issue, since today’s scientific framework takes space and time to be radically unified. The very concept of mental causation could be supported only in the absence of a non-spatial conception of mental phenomena.
McGinn tries to account for the non-spatial nature of consciousness in the following terms: since the physical properties of the universe came about with the big bang, and since it is conceivable that something actually existed before the big bang and somehow, exceptionally, gave birth to our universe, it is possible that this very pre-spatial level of reality is responsible for the non-spatial nature of consciousness.
Thus, McGinn argues, since “the brain cannot have merely the spatial properties recognised in current physical science” (p.103), we need to rethink the very nature of space as we are accustomed to conceive it. Consciousness being such a spatial anomaly, impossible to locate in any way, there must be other properties of space that we don’t yet know, the argument goes. 2)In support of this claim, he holds that reductionist views of the relations between scientific fields should be rejected, as certain problems do not transfer between them. Consider, McGinn says, how grotesque it would be “to claim that the problem of how the dinosaurs became extinct shows any inadequacy in the basic laws of physics!” (p.104). He couldn’t have better predicted the theme of Lisa Randall’s latest bestseller!
We may not even be in a position to discover such new properties, for genuine epistemological reasons. “We represent the mental by relying upon our folk theory of space because that theory lies at the root of our being able to represent at all – not because the mental itself has a nature that craves such a mode of representation.” (p.107) Furthermore, “To represent consciousness as it is in itself – neat, as it were – we would need to let go of the spatial skeleton of our thought. […] So there is no real prospect of our achieving a spatially nonderivative style of thought about consciousness.” (p.107)
It seems to McGinn that grasping this claimed additional, meta-spatial nature of space is beyond our capacities. This is how the paper ends.
Giving Up on the Hard Problem of Consciousness – by E. Mills
Mills frames Chalmers’ problem as “the problem of providing a non-causal explanation of the production of consciousness by physical processes”. (p.110) The aim of a theory of consciousness, therefore, should be to explain why ‘the causes of experience have the effect they do’.
The first type of such theory would be a deeper causal explanation of why mental phenomena arise. Still, explanations of this kind wouldn’t be able to answer why such features produce consciousness. The second type is slightly different, for it tries to identify some sort of physical mechanism that would give birth to mental phenomena. But it is still a mechanism. Therefore, the hard problem is insoluble.
Mills endorses Chalmers’ move of making experience a fundamental feature and building a theory upon it, as “fundamental entities can interact in lawful ways, and a theory which states these laws can be both true and useful.” (p.111) 3)This holds for fundamental entities that have demonstrated, strict relationships with other fundamental entities, such as physical particles. Consciousness, however, doesn’t have this status, as nothing else supports its supposedly fundamental nature. Nevertheless, Mills attacks Chalmers’ double-aspect principle as problematic: if it has unrestricted applicability, then everything, even a pin, would be conscious; and if it is somehow restricted, why this is so remains unanswered. In any case, the argument goes, the double-aspect principle “still merely asserts that informational states correspond to phenomenal ones. It still says nothing about why these correspondences hold.” (p.113)
The fact that we won’t be able to solve the hard problem shouldn’t annoy us. Just as Newton was charged with mysticism for being unable to explain the nature of gravity, we may very well be content with the insolubility of the problem. 4)This overlooks recent advances in physics toward the confirmation of gravitons, and contrasts with the spirit of science, which always looks for ever more fundamental underlying laws. “We inherit from Hume the view that once we have reached fundamental laws governing empirical phenomena, there is no further explaining why these laws should be true”. (p.115) We should apply the same reasoning to consciousness as well. 5)I see this last move as an alternative way of placing consciousness in the realm of fundamental entities, just as Chalmers does.
There Are No Easy Problems of Consciousness – by E. J. Lowe
Lowe reproaches Chalmers for giving too much credit to physicalism. In his view, there are no ‘easy’ problems of consciousness, since supposedly merely functional terms – such as ‘discrimination’, ‘control’ and ‘report’ – are used to describe both conscious activities like those performed by humans and machine behaviours. Lowe points out that there is no reason to believe such analogies are correct, invoking the pathetic fallacy 6)https://en.wikipedia.org/wiki/Pathetic_fallacy in his defence.
Lowe further upholds the Kantian dictum ‘Thoughts without content are empty, intuitions without concepts are blind’ to support the subtlety of the relationship between conceptual content and perceptual experience. It follows that ascribing genuine thoughts, complete with an articulated conceptual structure, to machines, which are essentially characterised by a lack of qualitative experience, is a rather weak argument in favour of a clear-cut demarcation between ‘easy’ and ‘hard’ problems of consciousness.
Lowe also takes apart Chalmers’ characterisation of human cognition based on the Shannonian notion of information. It is fully appropriate for describing “the activities of computing machines, but is wholly inappropriate for characterising the cognitive states – beliefs, thoughts and judgements – of human beings.” (p.119) Why so? The difference lies in the properties of the informational state as described by Chalmers, which seems to lack any ‘conceptually articulated content’, as opposed to the thoughts, beliefs and judgements of human beings. Lowe suggests an example to clarify his point: take a cross-section of a tree trunk and consider it as an ‘informational state’ of the tree. Clearly, the argument goes, even though we can infer from the number of rings such information as the tree’s age and the like, the ring pattern itself doesn’t embody the concepts of time and number. Concepts are to be ascribed only to human beings, since they and only they can tell the age of the tree, precisely because they hold the concepts of time and number.
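For context – my addition, not Lowe’s – the Shannonian notion at issue is purely quantitative: it measures how much a state reduces uncertainty among alternatives, and says nothing about conceptual content. For a source X with outcome probabilities p(x), the information it carries is the entropy

H(X) = -\sum_{x} p(x) \log_2 p(x)

On this measure the ring pattern does carry information about the tree’s age, in the sense that it discriminates among possible ages; Lowe’s point is precisely that carrying information in this sense requires no possession of the concepts of time or number.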
Proceeding along the same line of thought, Lowe considers Chalmers’ definition of awareness suitable for computers, not for humans. As he pointed out earlier, such a physicalist account of the functions related to the ‘easy problems’ doesn’t fully explain their complexity, and the computer-human analogy falls prey to this naivety.
I found it difficult to synthesise Lowe’s point meaningfully, so I report a full quote: “If by ‘producing a report on internal states’ Chalmers just means generating a second-order informational state (in the Shannonian sense of ‘information’), then although this is something which can indeed be perfectly well explained in a mechanistic way, it is not the sort of thing that needs to be explained when we are talking about the ability of human subjects to express in words their knowledge of the contents of their own thoughts and experiences – for such an ability demands the possession of genuine concepts, not only concepts of the things those thoughts and experiences are about but also the very concepts of thought and experience themselves. And the truth is that we have not the slightest reason to believe that a ‘mechanistic’ explanation is available, even in principle, for the capacity of creatures like ourselves to deploy the concepts of thought and experience and to ascribe the possession of such concepts to ourselves.” (p.120, original italics) The underlying argument is that the meta-ability to think about our experiences and thoughts as concepts themselves is an insurmountable obstacle to their reduction to mere functional processes.
If function is so thoroughly characterised by produced behaviours, Lowe deduces, then behaviours that are mechanistically characterised will suit mechanistic explanations very well; the point is, behaviours would be seen otherwise if their conceptual structure, as described above, were not overlooked. “Once we appreciate the Kantian point that genuine thought, with real conceptual content, is only available to creatures with a capacity for perceptual experiences bearing not only intentional but also phenomenal content, we see that the sort of phenomenal consciousness which we humans enjoy but which computers and trees do not, far from being an epiphenomenon of ‘information-processing’ in our brains, is an absolutely indispensable element in our cognitive make-up, without which we could not properly be described as thinking beings at all.” (p.121)
Lowe’s conclusion offers no positive argument in favour of physical reductionism: in his view it cannot tell us anything meaningful about the ‘easy’ problems – in fact, it lacks the tools to say anything about any aspect of consciousness.
The Easy Problems Ain’t So Easy – by D. Hodgson
Hodgson advances the hypothesis that certain functions have a causal role in producing the experience of consciousness, although they behave in such a way that the objective sciences will never be able to detect them. It follows that the so-called easy problems cannot be addressed without answering the hard problem.
Hodgson delivers his argument by first analysing the inability of the scientific method to explain human free will, given that choices are “selected by the person or agent for non-conclusive reasons.” (p.126) Non-conclusive reasoning is a type of inductive reasoning which is rationally compelling but lacks the grounding of deductive logic. It is characterised by its defeasibility, that is, its vulnerability to generalisation errors. 7)https://en.wikipedia.org/wiki/Defeasible_reasoning; Cruz, Joseph (1999) Contemporary Theories of Knowledge (Studies in Epistemology and Cognitive Theory), p.36 Hodgson further explains that our commonsense understanding of choice is characterised neither by purely deterministic laws nor by random laws ‘within pre-determined probability parameters’; “each choice is a unique efficacious event, in which non-conclusive reasons are resolved by a decision. And if that is right, then conscious experiences have a role which can’t be fully replicated or simulated by mechanisms which simply operate in accordance with universal laws, with or without randomness.” (pp.126-127) Such is Hodgson’s functional – although non-detectable – explanation of conscious phenomena.
Hodgson supports the previous argument with the following interesting observations:
- If all mind/brain functions could be explained by detecting specific performing mechanisms, then there would be no functional explanation of consciousness, for the impersonal, lawful development of any system would exclude the possibility of choice, and therefore any efficacious role for subjectivity.
- Conscious and non-conscious systems differ in ways that a merely functional explanation of human reasoning and of specific sensations, such as pain or colour perception, cannot account for.
- Evolution’s preference for (fallible) consciousness over the enormous computing power of unconscious brains when facing new situations.
Correspondingly, Hodgson acknowledges that suggesting something other than universal laws could be seen as an appeal to superstition, that the cognitive sciences are advancing and new discoveries will be made, and that psychological experiments have illustrated how fallible human commonsense reasoning is. On this last point, the author notes that even though commonsense has its fallacies, it is nevertheless the ground of our understanding, and we can’t just jettison it.
Hodgson illustrates the explanatory power of non-conclusive reasoning with the following example: the sensation of pain has an irreducible role, because it leaves us the option of responding in different ways and doesn’t trigger any automatic response. If we are then able to “treat both the pain and the opposing considerations as non-conclusive […] choices are not just the working out of mechanisms obeying universal laws or computational rules.” (p.129)
Hodgson finally integrates Chalmers’ proposal in the following way:
- If experience is fundamental, then so are the subject and its related feature of choice.
- If choice is not completely reducible to a physical explanation of the universe, then any eventual bridging law between mental and physical realms would never be fully determinative.
- If “neural isomorphs are possible, they could develop differently, not just because of possible randomness but also […] because they may choose differently between alternatives left open by the systems and applicable universal laws.” (p.131, original italics)
- In a double-aspect manner, the brain/mind should be considered a physical/mental whole, where neither its physical nor its mental aspect alone could fully account for its functioning.
Facing Ourselves: Incorrigibility and the Mind-Body Problem – by R. Warner
Warner supports his view that the physical sciences cannot account for the rise of mental states by introducing the concept of qualified incorrigibility.
Warner attacks the claim that science ‘of the sort we now accept’ could fully describe the mind, for such a qualification has no explanatory power. He contends that even extending our current fundamental science by adding some psycho-physical principles – a practice often used to explain phenomena that couldn’t fit previous theories – would not yield a satisfactory explanation of mental phenomena, since those principles would either violate physical conservation laws or fail to produce any physical effect at all.
At this point, it seems that non-reductive theories of the type Chalmers proposes are definitively defeated. Warner proposes instead “to abandon the assumption that psycho-physical principles will not interfere with physical laws” and to “look for new conservation laws.” (p.138) The burden of proof for such an overwhelming need rests on the idea of incorrigibility.
Science’s drive is to “correct all distorting influences that make the world appear to be different than it really is” (p.140), so that every item becomes a mind-independent one. Here is where incorrigibility comes into play. The traditional conception of incorrigibility is: “for at least some mental states, necessarily, if one believes that one is in that state, then one really is.” (p.140, original italics) Warner proceeds to qualify incorrigibility by noting that, in order to be valid, it has to be unimpaired (the sensation is not influenced by drugs, anxiety, lack of attention, hypnosis), 8)A detailed explanation of what counts as impairment and what does not, being so crucial to the point, would be much needed, but the author only provides references to his 1986, 1989, 1992, 1993 and 1994 works. non-inferential (the belief of being in pain, for example, arises simultaneously with the sensation; it is not a matter of inference) and non-causally or non-nomologically necessary (since, once the sensation is recognised as non-inferential and unimpaired, “there is nothing more to do to ensure that [that sensation] is true.” (p.143, original italics)). In relation to unconscious impairments, Warner further adds, in a rather Buddhist fashion, that they could be “fairly pervasive and that truthful self-consciousness can be an achievement greatly to be prized.” (p.143)
What is the consequence of qualified incorrigibility? It guarantees that some events can be fully and truthfully recognised by subjective inquiry. This provides Warner with the argument for including within the physical sciences ‘an account of mind-dependent items’. Warner contests Chalmers’ view of reportability as an ‘easy problem of consciousness’, since it precisely describes the incorrigibility issue. It is incorrigibility that “captures the ‘subjectivity of experience’. […] Many kinds of mental states enjoy qualified incorrigibility – including beliefs, desires, emotions. […] Experience simply happens to be the domain of the most convincing examples of incorrigibility.” (p.145)
Warner closes his paper by arguing that embracing incorrigibility is necessary, and that attempting to “construct a picture of the mind in mind-independent terms is to erase the mental from the picture altogether.” (p.145) He defends the subjective, first-person perspective as necessary: free-will deliberation is not a consequence of external observation of one’s intentions, but rather direct knowledge of the kind secured by qualified incorrigible recognition.
The Hardness of the Hard Problem – by W. Robinson
Given that the Hard Problem could be formulated as follows:
(HP) Why should a subject S have a conscious experience of type F whenever S has a neural event of type G? (p.149),
Robinson proceeds to briefly examine some reactions to (HP), such as the eliminativist; the functional – to which he answers that identifying regularities between conscious experiences and neural events would leave open the question of why such regularities exist at all – and the scientifically faithful – to which he rebuts that if no scientific progress on (HP) has been made in the last 300 years, then it may very well be the case that current scientific methods are not suited to answering (HP).
Robinson advances the hypothesis that our current scientific framework can’t answer (HP), and that we must understand why that is a fact and ‘remove the sting’ from the issue. He proceeds to examine two key components of his argument:
(F1) Explanations of regularities between occurrences of two kinds proceed by deriving a matching between the structures of those occurrences. (p.151)
He offers the following example to make the point: to be valid, the explanation of why H2O is liquid at temperature T needs not only a description of the properties of H2O molecules at temperature T, but also an implicit premise characterising liquidity in structural terms – such as conforming to shape but not to volume, etc. Robinson extracts a corollary to (F1), which is
(F2) If a regularity involving a given kind is to be explained, the kind must be expressible as a structure,
from which he derives a stronger claim, that
(F3) Whenever we can find a structure in a kind of occurrence, there is hope of finding an explanation of it. (p.152)
This is the form that all functional explanations take more or less for granted. The second fact that would be needed to account for the hardness of (HP) is the following:
(F4) Among the properties of conscious experiences, there is always at least one that has no structural expression. (p.153)
Robinson offers the example of painful sensations, which cannot necessarily be described in terms of some regularity of intensity or spatial pervasiveness, and which would therefore retain some intrinsic, non-structural properties. An interesting rebuttal is provided to the common belief that knowing more about the relational properties of the phenomenal realm would drive the total unexplained residue toward zero: “In all examples of which I am aware, in which we find structure within the phenomenal realm and then explain it, terms are required for the explanatory relations that are themselves properties of phenomenal (=conscious) experiences. Thus, each case provides no net shrinkage of the amount that needs to be explained, and therefore there is no reason to suppose there will be convergence to zero.” (p.153)
Any functional explanation would then chase the next structure to explain the previous one, as (F3) suggests, but each ends up introducing new elements that need further functional explanation. It follows that “the fact that each explanation of one property re-introduces the Hard Problem for another property ought to convince us that (F4) is indeed correct.” (p.155)
Conclusion: the Hard Problem may need a shift in the very way we produce explanations before it can be satisfactorily solved. Robinson argues that because “the conditions for finding intellectual satisfaction are contingent,” they are in principle changeable. What in particular should change about our understanding is our current inability to accept ‘structureless properties’ as a fundamental feature. Our current conceptual framework provides no solution to the hardness of the Hard Problem. 9)I omitted how the author deals with the two major objections to his argument, namely that his account doesn’t ‘invoke the subjectivity of conscious experiences’ and that physical perceptions per se could not have conscious experiences. Robinson answers the first objection by first designating consciousness as one among the properties of conscious experiences – remember the example of pain – then by rejecting relational theories [for X to be a conscious X is for it to stand in some relation, R, to some other thing, Y] and thereby characterising the consciousness of conscious experiences as intrinsic and essential, i.e. such “that they cannot exist without being conscious.” (p.157) He then proceeds to answer the second objection by sectioning colour perception into the molecular properties of the colour (colourO) and the ‘conscious experience that goes together with the perception’ of colourO things by a subject S, named colourC(S). Subjects would then learn to attach colourC(S) experiences to colourO objects through some causal neural paths, named colourN(S). Robinson claims that only colourC(S) properties are conscious.
The Nonlocality of Mind – by C.J.S. Clarke
Clarke’s general thesis is that mind cannot be found in any spatial location, not even in the higher-dimensional spaces that mathematics allows. He starts off by embracing the Cartesian view that the existence of mind is axiomatic, and that we should study the mind first and foremost from an experiential perspective. “It seems unnatural to derive mind from physics, because this would be to try to explain something obvious and immediate (mind) from something (physics) that is an indirect construction of mind.” (p.166) His approach is then to rewrite physical explanations to suit the privileged experience dictated by the mind.
While characterising which conscious states should be considered spatial and which shouldn’t, Clarke posits that all percepts belong to the first category, while all other thoughts reside in the latter. He further proceeds to clarify how the compresence of spatial and nonspatial thoughts gives rise to variously blurred spatial definitions of mind: the thought of a distant star appears to sit together with the thought of a nearby car, when in fact these relations happen independently of a Euclidean conception of space. A spatial connotation of the mental would therefore be misleading.
Common objections to nonlocality described by the author are:
- that special relativity should define mental events as spatially characterised, since it is possible to say that mental event A is followed by mental event B. Clarke, though, considers the argument invalid, for it begs the question by assuming that special relativity applies to mental events;
- that “whatever we may think about mind, most of us hold that our decisions have physical consequences – so a given decision affects a particular region of space-time.” (p.169) There would be some mental region R to which we could attribute properties linking it to a past event Q or a future event P. Clarke denies the validity of the objection, for examples can be constructed in which Q and P hold up without R.
After Cartesian dualism and epiphenomenalism have been ruled out, only one further possibility remains: the re-examination of physical laws in the light of quantum theory. Because the qualified Newtonian approach is invalidated by the EPR effect, a quantum-logic formulation in the footsteps of Mackey (1963) becomes the ground for developing nonlocality as applied to consciousness.
Given the results of the EPR effect, which strongly imply a global nonlocality, here is how Clarke proposes we go on: “We do not start off assuming that the universe is composed of independent atoms. So global effects do not require special mechanisms to make them happen; rather, special mechanisms are required to break things down to the point where physics becomes local.” (p.172) Furthermore, if the decohering events that lay out local consequences can’t be observed from the outside, we must find a way to formulate how they arise from within.
Clarke links this nonlocal physical property with consciousness, which would be carried by brain processes yet remain separate from them, much as charge is carried by particles. And if the mind is fundamentally nonlocal, its structure should be nonlocal too.
Such a turnaround of physics should be matched by ‘putting mind first’: “We would be in a position to understand how it was that mind could actually do something in the cosmos […] by determining which decohering histories of questions [= collapses of the quantum wavefunction] are realised in the process of self-observation that is embodied in consciousness.” (p.174) Clarke claims that human free will as we define it matches “the essence of quantum logic, where the range of possibilities is not fixed in advance.” (p.174)
Consciousness thereby acquires a precise framework: it emerges from the interplay between the nonlocal nature of mind and the local, physical properties of matter.
Conscious Events as Orchestrated Space-Time Selections – by S. Hameroff and R. Penrose
Hameroff and Penrose build their theory around the idea that experiential phenomena are inseparable from the physical universe, in a way so profound as to be hardly detectable, except for the non-computability of conscious processes (they hold that some conscious states cannot be derived from previous ones by algorithmic processes). They attribute this property to the undecidable nature of wavefunction collapse. The self-reduction of the wavefunction – not an external, randomly triggered one – is essential to the rise of consciousness, under the special condition that “Only large collections of particles acting coherently in a single macroscopic quantum state could possibly sustain isolation and support coherent superposition in a timeframe brief enough to be relevant to our consciousness.” (p.179)
Penrose challenges the conventional Copenhagen interpretation, which explains the wave collapse through random probability weighted according to laws that describe the evolution of a previous state into the next, by including an underlying, non-computational unknown as a more accurate description of the collapse. He supports a gravitational account of how wavefunctions may collapse when they reach a certain threshold – an objective explanation of quantum state reduction (OR). 10)Penrose argues that the gravitational curvature of space-time hasn’t been considered by quantum physicists. The official claim is that gravitational forces at the micro level are so tiny that considering them wouldn’t make any difference. Penrose argues instead that even almost-undetectable differences may have large effects. He proposes a superposed state made of different space-time geometries, each ascribable to the possible “places” the particle occupies in the different states of the superposition, with the corresponding space-time curvatures due to the gravity it exerts. Such a superposition is unstable, and will therefore decay, under precise laws, into the observable geometry we get at quantum state reduction. Penrose acknowledges that there is no consensus on how objective reductions happen, but he sees no plausible alternative to his proposal. The authors conclude that “If, as some philosophers contend, experience is contained in space-time, OR [objective reduction] events are self-organising processes in that experiential medium, and a candidate for consciousness.” (p.184) That is to say, if experience is a feature of space-time, it may also have its particular OR events, and thus ‘a candidate for consciousness’. They identify cytoskeletal microtubules in neurons as the appropriate place for coherent superposition and OR to occur within the human body.
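For reference – my addition, drawing on the formula Penrose uses elsewhere for OR rather than on a quote from this paper – the collapse threshold is usually expressed as a timescale set by the gravitational self-energy E_G of the difference between the superposed space-time geometries:

\tau \approx \frac{\hbar}{E_G}

The larger and more massive the superposition, the greater E_G and the sooner the self-collapse; this is why, on their account, only large coherent collections of particles can reach reduction times in a range relevant to conscious experience (roughly tens to hundreds of milliseconds).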
This part becomes a little tricky to explain without the help of proper images. 11)Available here. In a nutshell, microtubules were singled out because they meet a set of preconditions, namely 1) high prevalence, 2) functional importance, 3) crystal-like structure, 4) ability to be isolated from external interaction/observation, 5) functional coupling to quantum-level events, 6) suitability for information processing, 7) cylindrical shape. In the proposed model, quantum coherence emerges in the tubules and persists until the superposition of their components (the tubulins) reaches the critical threshold, at which point it collapses. The resulting OR is a ‘time-irreversible process’ and corresponds to the experienced moment of ‘now’.
Penrose and Hameroff explain the coherence of individual, separate ORs – what they call Orchestrated OR (Orch OR) – as a function of microtubule-associated proteins, which prevent the tubules from being completely isolated and thereby allow a sort of coordination between the different “set probabilities for collapse outcomes.” (p.188) This is how the authors describe the stream of consciousness: as a continuum of Orch OR events, which accounts for the coherence of conscious experience as we know it.
“If experience is a quality of space-time, then Orch OR indeed begins to address the ‘hard problem’ of consciousness in a serious way,” Penrose and Hameroff conclude.
The Hard Problem: A Quantum Approach – by H. Stapp
Given that neither classical mechanics nor dualism can account for an explanation of the ‘hard problem’ of consciousness within purely physical boundaries, Stapp puts forward the classical Copenhagen interpretation of quantum mechanics as the framework within which to build a new physical understanding of the issue. He quotes Bohr to illustrate the essence of the Copenhagen interpretation:
In our description of nature the purpose is not to disclose the real essence of phenomena but only to track down as far as possible relations between the multifold aspects of our experience. (Bohr, 1934, p.18 – quoted at p.198 of the book; my italics)
Thus, in Stapp’s view, the experiences of observers are brought into physical theory. The set of mathematical rules that physics studies should be considered a description of our classical understanding of nature. The quantum wavefunction is, by its nature, the set of “probabilities for, or tendencies for, our perceptions to be various possible specified perceptions,” and “the experience of the observer becomes what the theory is about.” (pp.199-200) Since no physical description can be attached to the quantum state, which mathematically represents a set of probabilities, what our physical laws are made for is to describe ‘classically describable’ perceptions.
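To make the ‘set of probabilities’ reading concrete – my notation, not Stapp’s – the standard formalism assigns probabilities to outcomes via the Born rule: for a state \psi and a possible outcome a,

P(a) = |\langle a \mid \psi \rangle|^2

On the Copenhagen reading Stapp adopts, these probabilities attach to possible experiences of observers rather than to observer-independent physical facts, which is exactly the move that brings experience into the theory.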
Stapp provides a brief description of the alternative ontological interpretations: Bohm’s model, the Everett many-worlds model, the Heisenberg model and the Wigner-von Neumann interpretation. Their aim is to get rid of the experiential component and coherently describe the universe. Stapp remarks that they are all dualistic in some sense, as they obey two different, intertwined dynamical laws: that of the wavefunction, and that of its reduction.
Bohm’s model can be described as a particle that surfs on the wave. As the wave spreads out into its many ‘branches’, each of them would have a particle on top describing what our experience of the wave will be. This many-particle scenario is the set of possible outcomes from which the one we are going to experience emerges. Bohm’s explanation makes it possible to answer the question of why we come to know only one aspect of the wavefunction, since the wave itself is not really a physical object but rather a probability density function. The causal explanation of why that particular ‘surfer’ becomes available to our experience lies in assigning appropriate statistical weights to the initial conditions of the wavefunction.
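For reference – my addition, not part of Stapp’s summary – in the de Broglie-Bohm theory the ‘surfing’ is expressed by the guidance equation. Writing the wavefunction in polar form \psi = R\,e^{iS/\hbar}, the particle position Q moves with velocity

\dot{Q} = \frac{\nabla S}{m}\bigg|_{Q} \quad \text{equivalently} \quad \dot{Q} = \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\nabla \psi}{\psi}\right)\bigg|_{Q}

and the ‘appropriate statistical weights’ mentioned above amount to assuming that initial positions are distributed according to |\psi|^2 (the quantum equilibrium hypothesis), which reproduces the standard quantum predictions.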
Stapp considers Bohm’s model useful but not parsimonious enough, as it carries so many ‘branches’ that we will never actually get to know. He turns to the Heisenberg model as a better alternative, which consists of ‘actual events’ – those we experience – and a set of ‘objective tendencies for those events to occur’. There, our hypothetical ‘branches’ would be cut off once the ‘actual event’ takes place. The problem with Heisenberg’s model, again, is that we don’t know why this happens.
Everett’s model has a different appeal altogether. Since Everett’s approach has no wave-collapse events, a simulation of the brain/mind complex as a quantum object could describe the brain as just one entity – the evolving wave as a set of different brain activities, each of which could correspond to a different psychological person. The problem with this theory – besides being hardly verifiable – is that the wave function, as we have seen, does not describe any physical object, as it involves probabilities, and therefore necessarily ‘or’ characteristics: either one branch exists, or some other one does.
Stapp then turns to the Wigner-von Neumann interpretation, which suggests that wave reductions should occur in concomitance with conscious events. Wigner-von Neumann interpretations, like all the previous ones, are dualistic, as they have “a component that can be naturally identified as the quantum analogue of the matter of quantum mechanics, and a second aspect that is associated with choices from among the possible experiences.” (p.204, original italics) He then proceeds to analyse how a quantum description of the brain/mind would be linked to the present interpretation of the quantum model.
In a simulation, the body/brain would evolve into a superposition of different possibilities, a set of randomly-generated ‘plans of action’ that the brain would implement for its survival; the wave reduction would then pick one of those plans and allow it to be executed. In such a view, the collapse of the wave is “a natural consequence of the fact that wave function does not represent actuality itself, but rather […] merely ‘objective tendencies’ for the next actual event.” (p.206) According to Stapp, the wave collapse would then substantiate the psychological event as we know it.
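To make the selection mechanism concrete, here is a toy numerical sketch of the picture just described. It is my own illustration under a standard Born-rule assumption, not anything Stapp formalises; the plan names and amplitudes are invented.

```python
import numpy as np

# Toy illustration (my own, not Stapp's formalism): a "superposition" of
# hypothetical plans of action, each carried with a complex amplitude.
# The Born rule turns amplitudes into probabilities; the "collapse" is
# modelled as sampling one plan, which is then the plan acted out.
plans = ["reach", "withdraw", "freeze", "vocalise"]
amplitudes = np.array([0.6 + 0.2j, 0.5 - 0.3j, 0.3 + 0.0j, 0.2 + 0.4j])
amplitudes /= np.linalg.norm(amplitudes)           # normalise the state

probabilities = np.abs(amplitudes) ** 2            # Born rule: p_i = |a_i|^2
chosen = np.random.choice(plans, p=probabilities)  # the "actual event"
print(dict(zip(plans, probabilities.round(3))), "->", chosen)
```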
Stapp supports his claim by stressing how such a model is efficacious, namely that it suits the widely-accepted hypothesis that consciousness arose as an evolutionary response to aid survival. Bohm’s and Everett’s models wouldn’t support this feature, as they are completely deterministic. The quantum, local, random generation of ‘templates of action’ is the basket from which the discrete event that determines and actualises the behaviour of the organism is picked. Stapp goes even further, comparing the universe to a ‘giant mind’, for the underlying wavefunction should be considered as having a subjective nature by virtue of its probabilistic description. “The great and essential move of the Copenhagen interpretation was precisely to realise that although no classical aspect naturally pops out from the quantum physical reality, […] (certain of) our experiences are, in fact, classically describable, and hence the empirically observable classical aspect of nature can be brought consistently into physical theory by introducing our (classically describable) experiences, per se, directly into the theory as the very thing that the theory is about.” (p.208)
What about the causation part? How do these wave reductions occur, and why are they the way they are? Stapp provides the following explanation: by virtue of the Copenhagen interpretation, which sees the superposition of states as a set of possible experiences, the brain would be the ‘host’ of a selection process among those different experiences, and the actualisation would become self-defined. What physicists describe as a mere random process would actually be a nonlocal one. Stapp thereby manages to describe conscious experiences as entirely engendered by physical theory.
Taking up William James’ dictum that ‘thought is itself the thinker’ (James, 1890/1950, p.401), Stapp explains the stream of consciousness as a sequence of discrete psychological events, bound together with an enduring sense of self by virtue of a sort of ‘fringe’ that would surround every thought, providing a stable background which should be identified as the feeling of self. What, then, about free will? Stapp shifts the focus from the multitude of neural events – which would be indescribably messy – to the organism as a whole, thus assigning the choice to it. Randomness would then be restricted to the set of ‘action templates’, produced by the operational physical constraints of the body. A deterministic evolution at a microscopic level would thus be punctuated by top-down organic choices. 12)Much as random variability is accounted for in Darwinian evolutionary theory.
This theory rests upon a necessarily missing element: why the quantum wave collapses. Physical theory by itself cannot provide such an element, so the only thing we know exists besides physical laws, namely experience, should account for it. Experience is therefore fundamental, in the sense that reality is fundamentally experiential; its emergent, particular aspect is what we call consciousness. 13)To a physics-illiterate like me, the theory seemed coherent, almost convincing. Good scientific practice is to be especially skeptical when inclined to accept something, so I searched for some rebuttals to the exposed theory. Two very good responses that made me strongly revise my assessment of Stapp’s interpretation can be found in this blog post by philosopher and skeptic Massimo Pigliucci, and in a much broader analysis of the issue by physicist Michael Nauenberg. Nauenberg further argues here that the very interpretation of the wavefunction as a non-physical object should determine that “there isn’t any mystery that its mathematical form must change abruptly after a measurement has been performed.” Furthermore, much of the misunderstanding that arose after von Neumann’s work stems from his simplistically considering the measurement apparatus as a superposition of two states, the “fired” and the “unfired” state. A correct approach would be that of characterising any macro-object such as a Geiger apparatus as a recorder of atomic events, which by the rules of thermodynamics (the arrow of time) should be considered an irreversible process. Irreversible processes are all there is to the collapse of a superposition, and they would have nothing to do with the presence of a conscious being. Nauenberg remarks that Wigner was in fact the only major physicist (a Nobel laureate) to support the role of consciousness in the collapse of the function.
Physics, Machines and the Hard Problem – D. Bilodeau
Bilodeau bases his work on the fact that interpretations of the findings of quantum mechanics are controversial, and that physics itself would therefore not be suitable for an ontological description of mind. He goes on to argue that the prevalent interpretation of quantum mechanics is conservative, for it does not fully acknowledge the real threat it has posed to an exclusively objective view of reality. In his words, “The ‘objective nature of reality’ […] is maintained by shifting everything we think of as objective physical fact […] over to the subjective side of the Cartesian split.” (p.220) 14)Not much is provided to explain such a claim, besides clinging to an unsolved measurement problem – not quite unsolved, as Nauenberg previously showed – and a 1929 Bohr quote about the indeterminacy of the subject-object boundaries of perception, where “no sharp separation between subject and object can be maintained, since the perceiving subject also belongs to our mental content.” (quoted at p.219)
Bilodeau infers from such statements that the analytical habits we have developed have more to do with the workings of our minds than with the nature of reality. In other words, the geometries physics is so devoted to would be nothing but a projection of the mind onto reality. He focuses on the idea of dynamics to illustrate his point.
A descriptive account of the physical world would consist of a ‘historical’ description and a dynamical description. The former is the definition of an object relative to its location in space and time; and because space and time are “means of ordering our thoughts about experience,” (p.222) historical descriptions would pertain to the realm of experience, and are determined by observation. The latter is the definition of an object as a system of abstracted, typical properties, which could be determined by deduction, which would correspond to observed features of experience, and which is constituted in a way that could be described and predicted by general physical laws. Neither historical nor dynamical descriptions would be defined a priori.
Given that the laws of physics pertain to the dynamical description – the author sees them as laws of action, not of being – the mechanical worldview would try to describe the whole universe in dynamical terms, “so that the typical and particular become equivalent and no aspect of reality is excluded from the abstract representation.” (p.223) The problem with this view, according to Bilodeau, is that the physical reality of the world could not be differentiated from its abstract description, if everything could exist as a mere mathematical structure. Therefore, the special geometry we identify with physical reality should be linked with our mental abilities; namely, “it is the subjective which makes the objective physical.” (p.223) What physics describes would therefore be an “empirical manifestation of a non-mechanical mode of existence.” (p.224)
Bilodeau remarks that the wavefunction shouldn’t be mistaken for a physical object. He sees it as a set of ‘causal propensities’ of an empirical event, for it wouldn’t count as ‘reality’ in a proper sense. The distinction between such empirical and abstract aspects of reality is drawn by recalling the ‘classical nature of the apparatus’ – by virtue of Bohr’s definition – and by framing a subject-object duality in the following terms: “The apparatus is simply the experiment approached from the historical empirical point of view. The microsystem is the experiment approached from the abstract dynamical point of view. These are the two aspects of the same thing.” (p.226) 15)There is a mistake in the argument: a dual aspect of reality could not really stand, simply because the claimed ‘objective’ part is not physical, as the wavefunction has no physical properties. This is strange, because Bilodeau himself didn’t fail to notice this feature of the superposition, although he claims it should be recalled in a dual-aspect theory of reality.
The author proposes to replace physics as the basic description of reality with an even more basic, ‘that-which-is’ concept of reality; “not as a set of all particular things (events, objects, ideas, feelings, etc.), nor as a structure, but rather as simply the ultimate referent of all we say about any of those things.” (p.227) 16)This is the clarification you might have been waiting for when reading about the concocted underlying “non-mechanical mode of existence.”
So what is there to say about consciousness? First and foremost, machines and organisms should not be strictly compared in a ‘brain machine’ fashion, for the function of a machine is imposed by the designer, while the function of an organism arises from the inside; furthermore, organisms would have such unpredictable patterns and so unlimited a set of states that they could not be matched by anything mechanical. It follows that “just as we cannot abstract ‘cellness’ or ‘organicness’ from a cell and build it into a machine, neither can we abstract consciousness from a brain.” (p.230) Rather, consciousness should be an exclusive result of an organic process. Chalmers’ ‘hard problem’ of consciousness would be transcended by embracing a wider nonmechanical ontology. 17)In light of what has been exposed previously, I trust you may infer how weak such an argument is. Nevertheless, this being a comparative collection of papers on consciousness, I wanted to include it as well.
Why Neuroscience May Be Able to Explain Consciousness – by F. Crick and C. Koch
Crick and Koch propose that we explain consciousness by locating those neural activities which are directly responsible for it. They appeal to clinical examples such as prosopagnosia to account for a basically neural explanation of subjective phenomena.
In a neuroscientific view, information is processed by neurons in a semihierarchical manner, whereby basic neuronal correlates contribute to performed actions by ‘sending up’ the information to higher-processing structures, until it is defined in a property we recognise as motor-like, visual-like, etc. They recall the famous thought experiment of Mary, the woman who studied everything about the colour red but never had an experience of red, to explain that Mary doesn’t know ‘what it is like’ to see the colour red precisely because she never had “an explicit neural representation of [the] colour in the brain, only of the words and ideas associated with [it].” (p.238)
This would be the main reason why we cannot convey to others the exact nature of any subjective experience. The communication would inevitably be carried out by different neural paths (the motor part and the subsequent verbalising part) than those which were directly involved in the subjective perception of the experience. So what’s available to inter-subjective verification and inquiry is not the very nature of the conscious experience, but only its relation to other experiences.
Chalmers’ suggestion is that we approach the problem of why in the world we have experiences by introducing the double-aspect information theory. Crick and Koch suggest that we look for neural correlates responsible for meaning, namely how neurons that code some visual information, for example, are linked to others that are responsible for making sense out of such perception. “It would be useful to try to determine what features a neural network […] must have to generate meaning. It is possible that such exercises will suggest the neural basis of meaning. The hard problem of consciousness may then appear in an entirely new light.” (p.239)
Understanding Subjectivity: Global Workspace Theory and the Resurrection of the Observing Self – B. Baars
Baars starts from Chalmers’ endorsement of Global Workspace Theory – the fact that single conscious contents exert global consequences and are available to unconscious and conscious systems alike – to draw a finer distinction than the ‘easy-hard’ one proposed by Chalmers, namely a definition of subjectivity as consisting of an observing self of the contents of consciousness.
The author’s claim is that subjectivity was ostracised in the twentieth century, only to be recovered by Thomas Nagel under the “what it is like to be something” definition. Baars laments that such a description – which he calls an ’empathy criterion’ of consciousness – is for the moment useless for scientific inquiry, with the result of keeping subjectivity out of any meaningful conversation about consciousness. He proposes we recover the traditional philosophical concept of subjectivity as ‘everything that has to do with a sense of self’, which has been extensively and fruitfully used in philosophical and psychological research, in order to move forward in defining what consciousness is.
- Consciousness is attributed to, and accompanied by, a subjective sense of self, for unconscious events have the strong characteristic of not being available to subjective experience.
- There is an interpenetration of ‘easy’ and ‘hard’ issues pertaining to consciousness, which could be exemplified as follows: the purported ‘easy’ problem of discrimination – the ability to distinguish yellow from red, for example – could be performed by non-conscious entities such as computers; but empirical observation, at least for conscious beings, tells us that fatigue or distraction would either impair or strongly reduce the performance of such an ability.
- Similarly, causal interactions could be identified between ‘hard’ and ‘easy’ aspects of consciousness, at least in conscious creatures. Trying to keep in mind a series of numbers while reading this paragraph would seriously affect your information-processing abilities, by virtue of the limited working memory that characterises our brains.
- Sense of self and contents of consciousness are independent: we can perfectly well go through our days with a relatively stable sense of self while experiencing many phenomena, and we can also listen to a story and identify with its characters. 18)“In technical jargon, conscious contents and self may be orthogonal constructs, which always coexist but do not necessarily covary.” (p.245)
- Consciousness creates access for self, as what characterises our ability to retrieve present or past conscious events is eminently a function of the ‘I’ we experience as a sense of self.
- The sense of self could be rescued from its philosophical denial (postulating an external observer would constitute no explanation, as Gilbert Ryle repeatedly pointed out) by embracing psychological and neurophysiological findings, which respectively characterise the ‘self’ as a multitude of pattern recognisers and as self-systems which can be detected in our very brains, such as the so-called sensorimotor homunculus.
“The reader can consult his or her own experience to see whether […] conscious events are accompanied by a sense of subjectivity […] But is it real consciousness, with real subjectivity? What else would it be? A clever imitation? Nature is not in the habit of creating two mirror-image phenomena, one for real functioning, the other just for a private show. The ‘easy’ and ‘hard’ parts of mental functioning are merely two different aspects of the same thing.” (p.247, original italics)
The Elements of Consciousness and Their Neurodynamical Correlates – by B. MacLennan
MacLennan confronts us with the fact that standard reduction procedures within science, which reduce objective features to further objective ones, are not equipped to give a satisfactory solution to the hard problem, which could be characterised as the relation between the subjective and the objective.
Although the investigation of consciousness has the epistemological limit of being observed through itself, MacLennan proposes that we identify some stable characteristics of consciousness and separate them from the changing content, by means of methods that could be publicly validated. Using phenomenological terms, we could say that everything we could experience belongs to the phenomenal world, a ‘structure of potential experiences.’ Phenomena, therefore, are the contents which appear in our consciousness, and constitute the basic blocks of knowledge – our set of data (given things, from the Latin). MacLennan warns us not to fall prey to simplifications concerning phenomena, which are not as simple as they might seem: they are a complex agglomerate of information which pertains not only to the here-and-now, but also to the future in terms of expectations and to the past in the form of memories. Various experiments have shown how percepts are influenced by what we expect them to be, and how our mind fills in the gaps between various sensory data to constitute an experience of flow in the presence of discrete percepts, for example. MacLennan identifies such a process as “the continuity of subjective time.” (p.252)
Just as neurological functioning can be reduced in an objective-to-objective fashion, MacLennan proposes we reduce phenomena ‘subjective-to-subjective’ into protophenomena, a sort of ‘atom’ of consciousness. He maps protophenomena onto certain activity sites of the brain which are responsible for subjective perception, specifically neurons’ receptive fields. Reduction to receptive fields would differ from classical objective reduction because of their functional properties, namely their ability to receive inputs not only from sensory data but from more abstract sources, like interpretations and expectations. By virtue of such high correlation, we could say that all protophenomena depend on others, and that sensory and non-sensory protophenomena are strictly related and imply a much more complex overall picture than the simple objective-to-objective reduction.
Where should these activity sites be located? MacLennan identifies them as synapses, but does not exclude that Hameroff’s microtubules could count as activity sites as well. Because protophenomena per se are not sufficient to produce macro-conscious events as we experience them, we need them to change in a coherent way, just as coherent patterns of atoms reveal themselves as physically detectable phenomena. 19)How exactly do protophenomena coherently produce phenomenal experience? “A population of protophenomena dependent on the same input protophenomena has a Conditional Probability Density Field (CPDF) that is the product of the CPDFs of all the high-intensity input protophenomena, that is, of all the input protophenomena present in the current conscious state. The CPDFs of individual protophenomena can be quite broad, but in the joint response to the same input of a large number, the product can be very narrow, so that they define a phenomenal state quite precisely.” (p.255) Furthermore, protophenomena should be considered theoretical entities, a useful way to proceed toward a fruitful understanding of consciousness that would need to be confirmed along the way.
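The quoted claim – that a product of many broad CPDFs can be very narrow – is easy to check numerically. The following is a minimal sketch of my own reading of it, with invented numbers (fifty broad Gaussian response curves over a one-dimensional ‘phenomenal dimension’); it is not MacLennan’s model, only the statistical point it relies on.

```python
import numpy as np

# Sketch of the CPDF idea quoted above: each protophenomenon responds
# broadly over some phenomenal dimension, but the *product* of many broad
# response curves, roughly aligned on the same input, is very narrow.
x = np.linspace(-10, 10, 2001)              # a 1-D phenomenal dimension
rng = np.random.default_rng(0)
centres = rng.normal(0.0, 0.8, size=50)     # 50 broad, roughly-aligned CPDFs

def gaussian(x, mu, sigma=4.0):             # sigma=4: each CPDF is broad
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

product = np.prod([gaussian(x, mu) for mu in centres], axis=0)
product /= product.max()

def width_at_half_max(curve):
    above = x[curve >= 0.5]
    return above[-1] - above[0]

print("single CPDF width :", round(width_at_half_max(gaussian(x, 0.0)), 2))
print("product width     :", round(width_at_half_max(product), 2))
```

The single curve has a width of about 9.4 on this scale; the product of fifty such curves comes out roughly seven times narrower, which is the ‘precision from breadth’ effect the quote describes.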
How does the author link mere neurological connections and the experienced sense of meaning? Increased activity of a protophenomenon would affect – either stimulate or inhibit – those which depend on it, thus constraining the set of possible conscious states and their evolution in time. Conscious states would thus be necessarily defined as long as the underlying neuronal connections remain unchanged.
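As a toy picture of this dependency dynamics – my own sketch, not MacLennan’s formalism, with invented sizes and weights – one can let a handful of protophenomenon ‘intensities’ drive one another through fixed excitatory and inhibitory weights and watch the state settle into a pattern fixed by the wiring.

```python
import numpy as np

# Toy dependency dynamics: each protophenomenon's intensity is driven by
# the ones it depends on, through fixed excitatory (positive) or
# inhibitory (negative) weights, so the reachable "conscious states" are
# constrained by the wiring alone.
rng = np.random.default_rng(1)
n = 8
weights = rng.normal(0.0, 0.5, size=(n, n))   # +ve = stimulate, -ve = inhibit
np.fill_diagonal(weights, 0.0)

intensity = rng.uniform(0.0, 1.0, size=n)
for _ in range(20):                           # let the state evolve
    drive = weights @ intensity
    intensity = 1.0 / (1.0 + np.exp(-drive))  # squash into [0, 1]

print(intensity.round(2))  # the settled pattern depends only on the wiring
```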
An interesting, overall nondeterministic relation among protophenomena is thus proposed: “Even nonsensory neurons depend on non-neural processes, such as the physiology of the brain, and the physical environment of the body. Although these effects can sometimes be treated as extra, hidden inputs to the synapses, they are often nonlinear and comparatively nonspecific in their effects, so it is usually better to treat them as phenomenologically undetermined alterations of the characteristic patterns of the affected protophenomena.” (p.258) Neuroplasticity also affects protophenomena by altering their relations as a result of changing habits or learning; the generation of brand-new synapses may well be related to the generation of new protophenomena as well. Such is the ‘flexible ontology’ of the phenomenal world that the author refers to.
MacLennan brings to completion the paper with the following implications:
- Consciousness is a matter of degree, by virtue of its relation with the complexity of the underlying neural correlates; in such a view, simpler organisms should have less definite perceptual experiences, due to less coherent CPDFs.
- If we were able to duplicate the exact information-processing properties of synapses, we would create a conscious experience; therefore, AI will have protophenomena and be conscious.
- Subjective experience of various investigated phenomena must be the way it is; an example is provided by the impossibility of pitch inversion in the realm of sound perception: pitch sensations are mapped neurologically, and could not produce the same experience for high and low pitches.
- Consciousness is a unitary emergent property of the relations among individual protophenomena, and its unity could in principle be measured by measuring the ‘tightness’ of those relations. Thus we can say that the unity of consciousness, too, is a matter of degree.
- Unconscious mental processes could be variously interpreted as: 1) protophenomena with low-degree coherence, thus not emergent at a conscious level; 2) the population of protophenomena we commonly perceive as the world may not be the only conscious one: other coherent protophenomenal emergences could very well be conscious as well, but manifest themselves in different ways, like dreams, urges and the like; 3) according to a hypothesis of Sherrington and Pribram, the unconscious mind might pertain to simple mechanisms like fire/unfire axons, making it instinctive and less reflective.
MacLennan proposes that we adopt irreducibility in both the phenomenological and neurological realms to proceed further along a fruitful investigation: “The present theory is dualistic in the sense that certain objects in certain situations (namely, activity sites in a functioning brain) have fundamental properties (protophenomena and their intensities), which are not reducible to physical properties. It is also dualistic in that the inherently private fact of experience is not reducible to the phenomena experienced, which are all potentially public […] Nevertheless, it is a kind of monism in postulating one ‘stuff’, which happens to have two fundamental, mutually irreducible aspects (phenomenal and physical).” (p.265)
Consciousness, Information and Panpsychism – by W. Seager
Seager’s aim is to address the ‘generation problem’ – the explanation of why and how experience should be generated by physical stuff in specific configurations – and to show how it is crucial and ultimately unavoidable, so that it obliges us to find an even more radical explanation than that proposed by Chalmers.
But first, let Seager put forward the evidence for why we should reject any denial of the generation problem and thus proceed from its necessity. The most recognised debunker of the generation problem, at least in Seager’s account, is Daniel Dennett. Seager paraphrases Dennett’s arguments as follows: the existence of conscious experience for a bat could be proven by finding out more about its nervous system, and linking what within the system is responsible for behaviour modulations. Seager remarks that unconscious mechanisms modulate behaviour as well, so consciousness wouldn’t be a distinctive property of behaviour. He suggests that perhaps what modulates behaviour is neural representations above a certain intensity, such that they would be ‘conscious’; but then the generation problem would return in its full inexplicability. 20)Of course, what I have just exposed here is the main argument Seager moves against Dennett. The argument thus reported is convincing, but I don’t know Dennett’s position well enough to vouch that Seager’s interpretation is fair. Dennett states in the reported paper that “Whether people realise it or not, it is precisely the ‘remarkable functions associated with’ consciousness that drive them to wonder about how consciousness could possibly reside in a brain. In fact, if you carefully dissociate all these remarkable functions from consciousness – in your own, first-person case – there is nothing left for you to wonder about.” (p.35, original italics)
The author’s perplexities about Chalmers’ theory lie in the fact that a proposed fundamental feature of consciousness could not also be dependent upon a functional description, as Chalmers proposes under the principle of ‘organisational invariance’; consciousness would lose its fundamentality. Furthermore, an explanatory exclusion is posed to those who want consciousness to characterise only some specific kinds of functional descriptions. Seager rejects radical emergentism – the position by which consciousness, as the product of specific physical assemblages, has causal powers that differ from the causal powers of its constituents – as ‘not very attractive’. 21)Seager offers no explanation of why such a possibility should be considered unattractive. It seems that further argumentation is needed, considering that this would be the sole point on which to reject functionalism as a likely explanation of conscious phenomena. I really am astonished that nothing more has been said. Chalmers’ theory would have another weakness in that the isomorphic nature of information and conscious experience doesn’t add anything to solve the generation problem, as it does not explain why some information bearers would have experiences and others don’t.
Seager proposes we shift to a ‘more radical view of information’: information would be not only a causal process of bit-transferring, but should also be characterised in a semantic way. His argument is grounded in Quantum Mechanics. 22)Red flag: Seager is a philosopher – although he specialised in philosophy of science – not a quantum physicist. I am not proposing that philosophers shouldn’t take Quantum Mechanics into their models, but interpretations of QM alternative to those held by the scientific consensus should be suspect, especially when put forward by non-specialists. Seager talks about the two-slit experiment and how it generates an interference pattern which has different characteristics from the combined probability of the two paths the particles could take, as we would expect. When the interference pattern disappears, it is because of a disturbance – the act of measuring – which is generally understood as unavoidable. Seager claims it is not: “there is no need to posit disturbance in order to explain the loss of the interference pattern; mere information about which path the particles take will suffice.” (p.275) Seager pulls in the thought-experiment of placing a perfect detector which would not alter the particle state, and would nevertheless determine its path: the interference pattern would disappear, “despite having no effect on the state of the particles.” (p.275) 23)It is not clear how the particles’ state could fail to be affected, considering that we can’t possibly know what their previous state was. If the superposition of the particles’ states passing through the two slits yields a concomitant interference pattern, then the detector has not been put in place, since the interference pattern implies a wave-like behaviour, implying the impossibility of predicting where each particle has gone. If we place the detector and know exactly the position of the particle, the interference pattern disappears, as we are now ‘seeing’ the particle-like nature of the phenomenon. Can we say that no effect on the particles’ state has been exerted? No, since we can’t possibly know what the state was like before the detector was placed. Seager proposes therefore that Quantum Theory devises detectors to be carriers of a kind of information which is not ‘bit capacity’, for it doesn’t change the particles’ state, but which is able to differentiate between (and produce) the existence or nonexistence of the interference pattern by virtue of semantic properties. “The natural interpretation of [the] basic two-slit experiment is that there is a noncausal, but information laden connection amongst the elements of a quantum system. And this connection is not a bit channel or any sort of causal process (which shows, once again, incidentally, that we are dealing here with a semantic sense of information). Here, perhaps, we find a new, nontrivial and highly significant sense in which information is truly a fundamental feature of the world (maybe the fundamental feature).” (p.276, original italics)
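The physics Seager builds on here is standard and easy to reproduce. The following sketch is textbook two-slit arithmetic, not Seager’s own calculation, and the geometry numbers are invented: it shows how the both-slits-open intensity |ψ₁+ψ₂|² differs from the sum |ψ₁|²+|ψ₂|² by a cross-term, which is precisely what vanishes when which-path information is available.

```python
import numpy as np

# Two-slit arithmetic: intensity with both paths in superposition is
# |psi1 + psi2|^2, which differs from |psi1|^2 + |psi2|^2 (the
# "which-path known" case) by the interference cross-term 2*Re(psi1*conj(psi2)).
wavelength, slit_sep, screen_dist = 1.0, 5.0, 100.0
x = np.linspace(-40, 40, 9)                    # sample positions on the screen

def amplitude(x, slit_y):
    r = np.hypot(screen_dist, x - slit_y)      # path length from slit to x
    return np.exp(2j * np.pi * r / wavelength) / r

psi1 = amplitude(x, +slit_sep / 2)
psi2 = amplitude(x, -slit_sep / 2)

both_open   = np.abs(psi1 + psi2) ** 2                # interference pattern
sum_of_each = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # which-path case
print(np.round(both_open / sum_of_each, 2))  # ~2 = constructive, ~0 = destructive
```

The printed ratios oscillate between roughly 0 and 2 across the screen: the two distributions genuinely differ, which is the uncontroversial part of Seager’s premise; the contested part is only what that difference licenses philosophically.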
Here lie the foundations of what Seager rightly calls panpsychism. He identifies four major objections to panpsychism:
- The combination problem: how would units of experience merge into the complex phenomena we call consciousness? This suggests panpsychism has a generation problem of its own.
- The unconscious mentality problem: if every atom has a mental aspect, how could we tell which of them have conscious properties and which don’t?
- The completeness problem: if consciousness were a fundamental property of the universe, we would expect it to show some causal effects that differ from those which can be understood in simple physical terms. How come such observations aren’t available?
- The no sign problem: no evidence of a nonphysical dimension of nature has been produced.
Seager answers as follows:
- QM clearly shows in the two-slit experiment that the state superposition is not a half-half mixture of particles passing through the left and the right slit. It has a different property altogether. It is therefore not a mystery that mental units could combine in a way such that new emergent properties arise. 24)Needless to say, maybe, but such an explanation could very well count as an answer to the generation problem itself; no panpsychism would be needed at that point. Somehow, Seager fails to see this point when he asserts that “quantum coherence cannot solve the generation problem satisfactorily, but it might solve the combination problem.” (p.283)
- [not directly addressed]
- “As a physical theory, QM asserts that there is no explanation of certain processes since these involve an entirely random ‘choice’ amongst alternative possibilities. The world’s behaviour does leave room for an additional fundamental feature with its own distinctive role.” (p.280)25)I think the inconsistency of Seager’s point has been previously well-illustrated: randomness of the wavefunction collapse does not imply any hidden variable, even less so conscious causality.
- If we accept that “there is no apparent sign of any gravitation between subatomic particles but since we take gravitation to be fundamental we are willing to accept that the gravitation force between two electrons really does exist”, we would then “expect that the effects of the ‘degree’ of consciousness associated with the elemental units of physical nature would be entirely undetectable.” (p.280) 26)Seager is willing to propose an essentially useless explanation – as it removes itself from falsifiability, how could it be useful? – to account for panpsychism. The back-of-the-envelope calculation below gives a sense of how far the gravitational analogy has to stretch.
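For a sense of scale in the gravitational analogy, here is a quick comparison using standard physical constants (the comparison itself is mine, not Seager’s). Since both forces fall off as 1/r², the ratio is independent of distance.

```python
# Gravity between two electrons is real but ~42 orders of magnitude weaker
# than their electric repulsion, hence utterly undetectable in practice.
G   = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9        # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31      # electron mass, kg
q_e = 1.602e-19      # elementary charge, C

# Both forces scale as 1/r^2, so their ratio is distance-independent.
ratio = (G * m_e**2) / (k_e * q_e**2)
print(f"F_gravity / F_coulomb = {ratio:.1e}")   # ~2.4e-43
```

Note, though, that this undetectable gravity is still a parameter-free consequence of a theory tested elsewhere; Seager’s ‘degree of consciousness’ has no analogous independent support, which is the force of the falsifiability objection above.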
Seager’s panpsychism has, by the author’s own words, no empirical consequences, for it is intended to be a ‘purely philosophical theory’. Nevertheless, he is not afraid to state that it naturally follows from the incompleteness of the current physical world-view, “as evidenced by the fact that physically identical systems can nonetheless act in different ways. The ‘hidden variable’ is not physical but a form of elementary consciousness”. (p.282)
Rethinking Nature: A Hard Problem within the Hard Problem – by G. Rosenberg
Rosenberg places himself among ‘The Liberal Naturalists’, those who are willing to occupy the middle ground between those who deny the difficulty of the hard problem (‘The Gung-Ho Reductionists’) and those who think the problem is unsolvable (‘The New Mysterians’). What characterises them is that “they are willing to suppose the existence of fundamental properties and laws beyond the properties and laws invoked by physics.” (p.288) 27)Can such people even call themselves Naturalists? In fact, the Stanford Encyclopedia of Philosophy clarifies that Naturalism is not much of an informative term anymore. “For better or worse, “naturalism” is widely viewed as a positive term in philosophical circles—few active philosophers nowadays are happy to announce themselves as “non-naturalists”. This inevitably leads to a divergence in understanding the requirements of “naturalism”. Those philosophers with relatively weak naturalist commitments are inclined to understand “naturalism” in an unrestrictive way, in order not to disqualify themselves as “naturalists”, while those who uphold stronger naturalist doctrines are happy to set the bar for “naturalism” higher.” Source: http://plato.stanford.edu/entries/naturalism/ What they propose is that we think of the unified collection of each individual’s qualia as a unique ‘qualitative field’, which should be treated as fundamental, and which is situated beyond the mind.
The problems with the cognitive explanation of consciousness (specifically, high-functionality cognition, as opposed to low-functionality cognition, which counts as non-cognition) could be summarised as follows:
- Complexity criterion: ‘If a system reaches level of complexity N, then a qualitative field must arise from and co-evolve with it’. Rosenberg argues that such an explanation is fallacious, for it relies on concepts (‘system’ and ‘complexity’) that are too difficult to simplify in a way that would suffice to characterise them as fundamental laws. 28)I think the flaws here are to pretend 1) that there is a fundamental law of consciousness, instead of providing a mere agglomerate of characteristics as a definition, and 2) that consciousness laws should have the same elegance and simplicity as atomic laws of nature. Consciousness is invariably not an elemental property of matter (see the critique of panpsychism provided earlier), and the parsimony constraint should convince us that nothing we know about particles would be altered by attaching a qualitative property to them; they would behave just as they do under the current, merely physical description. A theory of ‘degradation of cognition’ would imply panpsychism, so we would have to arbitrarily set a cut-off for systems sufficiently complex to be conscious: paradoxically, a cut-off of just one neuron would make the difference between a conscious and an unconscious system, yet we could almost never detect any difference in their behaviour. 29)This objection is somewhat more difficult to reject. Consider though the poetic naturalistic approach (illustrated by Sean Carroll in The Big Picture and deepened in this blog post): beyond a purely physical, elemental level, we try to make sense of the world in a way that is useful to us. We can therefore set an arbitrary cut-off between conscious and non-conscious systems because, as explained before, consciousness is nowhere to be found as a fundamental property of the universe, at least not as fundamental as quantum fields. It is up to us to characterise which systems are sufficiently complex to be called ‘conscious’ and which are not, just as we decide to call ‘flounder’ and ‘tuna’ two different types of fish. Of course, defining consciousness is slightly more difficult than that, and so we can’t expect to define simple consciousness laws. Furthermore, it is not meaningful to say that we can just subtract one neuron to consider an organism not conscious: complex systems are by definition populated by hundreds of thousands of neurons, so clearly counting them one by one would not yield a meaningful consciousness threshold.
- Functionality criterion: ‘If a system evidences paradigmatically cognitive capacities XYZ, then it will have an associated qualitative field co-evolving with it’. Rosenberg insists that “the kinds of laws we are looking for are on the same level as those governing gravitation, motion, and mass.” (p.293) So the problem here would be that of characterising cognition in a way that would be simple enough to meet Rosenberg’s constraints. Even a functional account of cognition would not suffice, because it would still need teleology to exist. 30)This last point proves even further that fundamental laws of consciousness comparable to fundamental physical laws are not what we should be looking for.
- Biology criterion: ‘If a system reaches level of complexity N and is carbon based, then a qualitative field must arise from and co-evolve with it’. In addition to the complexity and functionality problems described above, a biological constraint will further require us to define what exactly about biology is necessary for consciousness to arise.
Rosenberg identifies two major intuitions that stand in the way of making panpsychism palatable: “1) we have no evidence for qualitative fields outside of cognitive contexts, and 2) the mere supposition is incoherent since it requires experiences without experiencers.” (p.297) To the first intuition, Rosenberg responds that any explanation of consciousness that would point at experiences other than subjective ones would not be supported by evidence; 31)this is, in my opinion, a further reason not to look for a fundamental law of consciousness, as it is fundamentally subjective. we shouldn’t then have a pre-theoretical bias influencing us when defining which systems are conscious, but we should nevertheless include with empirical confidence those which are evidently conscious, and be much more cautious in excluding anything, since we have less information about ‘what they would feel like’. To undermine the second intuition, Rosenberg suggests that we think about our ability to regard qualia as “experiential objects”; only the feeling of awareness would be irreducibly cognitive, for there is nothing about awareness that could be observed from the outside within my own conscious experience. He concludes by saying that qualia should nevertheless not be considered independent objects ‘out there’, in that they are essentially dependent on minds; the exercise does, however, prove to us that we can conceive of qualia as something independent from the mind. Furthermore, when we attach qualitative experiences to non-cognitive systems, we are trying to fulfil an analogy such as “humans have conscious experiences and thermostats have experiences X”; therefore, “the difficulty of imagining qualitative fields that are not associated with minds comes from a shortcoming in our empathy, and not from a fundamental conceptual incoherence.” (p.300) 32)I hasten to remind the reader that this is an evident pathetic fallacy.
Solutions to the Hard Problem of Consciousness – B. Libet
Libet endorses Chalmers’ investigation of the ‘hard problem’, in that it evidences how physical fundamental properties are susceptible to a priori inferences (that atoms have a certain mass, for example, and that this should be accepted as a brute fact); it would therefore be equally reasonable to elevate consciousness to the same fundamental, irreducible standard.
Libet has, however, some problems with Chalmers’ ‘psychophysical principles’; he briefly states his concerns as follows:
- The principle of structural coherence would be flawed, because awareness has not been experimentally related to many of the ‘easy problem’ processes that Chalmers described. Libet describes awareness as an exquisitely subjective phenomenon, much as consciousness is; the principle of structural coherence thus loses all its explanatory power.
- The principle of organisational invariance links consciousness with observable behaviour; in fact, Libet stresses that there are numerous examples of functional behaviour that are carried out without the subject being aware of it. “The distinguishing feature for a conscious experience is an introspective report by the individual who alone has access to the subjective experience.” (p.302)
- Chalmers’ double-aspect theory of information relies strongly on the principle of organisational invariance, in that it relates physical information to certain ‘phenomenal spaces’; such a functional account would have the same flaws as the previous principle.
Libet has proposed (1994) a testable theory of consciousness – and here the author stresses the merits of his hypothesis – which would explain the phenomenon non-reductively as a ‘conscious mental field’ (CMF) that would emerge from a particular set of neural activities. 33)The designed experiment, it seems, was never carried out. Libet’s hypothesis sounds much like contemporary electromagnetic-field theories of consciousness.
Turning ‘the Hard Problem’ Upside Down and Sideways – by P. Hut and R. Shepard
First of all, the authors say, let’s define what the ‘hard problem’ is not: it is not the problem of providing a scientific explanation of how matter, such as brains, can produce intelligent behaviour, for we haven’t yet defined what limitations a complex physical system should have. What the ‘hard problem’ is, instead, is to understand how the ‘first-person’ subjective quality of experience, which is seemingly unphysical, could arise from matter, for no closer understanding of the brain’s functions has provided any clue to bridging such a chasm.
Hut and Shepard stress that there are some serious problems with the current scientific approach to the problem of consciousness:
- Although conscious experience could not adequately be described in purely physical terms, it is supposed to arise from complex systems like the brain, where some regions produce conscious experience and some don’t, even though these regions are physically indistinguishable; the ‘evidently nonphysical’ characterisation of consciousness could therefore hardly be explained. 34)Two things: first of all, we cannot claim to know exactly what is going on in the brain, so as to say that two regions are physically identical. No serious neuroscientist has ever said so, even now, almost 20 years after this book was printed. So claims about the exact nature of the brain back in the late Nineties seem naive at best, and all the more alarming considering that Shepard is a cognitive scientist. Even more worrisome is assuming that conscious experience is ‘evidently nonphysical’ without taking the trouble of defending such a position, other than by saying that it is ‘commonly accepted’.
- No accepted criterion has yet been defined for deciding, from external observation, whether a physical process is characterised by conscious experience or not. No fundamental property has been identified that distinguishes conscious systems from non-conscious systems, and even if we were able to discover that the firing of a particular neuron was responsible for a conscious event, that would tell us nothing about why an indistinguishable physical event would not be accompanied by conscious experience. 35)The authors here lean on the word ‘indistinguishable’ to imply that everything we know about conscious and non-conscious events leads to identical physical explanations; even though in this case it might be appropriate to say that two firing neurons are somehow identical physical processes, if we know nothing about the hundreds of their neural connections, we can’t attach any meaning to the firing of a single neuron.
- “If nonphysical conscious experience is taken to have a causal influence back on the physical process from which it arose (psychophysical interactionism), how is this to be reconciled with the fundamental assumption of science that every physical state of a system is strictly determined by a preceding physical state of the system […]?” (p.307) 36)In fact, this is quite a bit more than an assumption, and it should undoubtedly tell us that conscious experience is a physical process.
The authors propose that we turn the problem upside down, and start to see everything as based in experience: atoms, molecules and fields are not subject to direct experience; they are pure abstractions. What we refer to as concepts and words, for example, are nothing but ‘meaningless arrangements of molecules’ and ‘constellations of qualia’ in the minds of scientists. 37)These sets of molecules and qualia are so ‘meaningless’ and randomly put together that we base our very existence on them! I propose that the authors may well define arrangements of molecules as meaningless, but then they should not use these meaningless objects to set up a new theory. Rest in solipsism.
Thus the ‘hard problem’ could be softened: macro-objects of commonsense perception and micro-objects of scientific inquiry become equally useful elements to describe the world around us. The problem of the existence of other minds posed by the initial solipsism is to be softened by viewing intersubjectivity as “expressing properties that are inherent in subjective conscious experience, but in addition are mutually agreed upon by different subjects.” (p.309) Still, the biggest mystery would be the existence of an objective physical world. “Everything we experience (whether ‘out there’ or ‘in here’) is, alike, a part of our experience.” (p.310) Spatial and temporal extensions undergo the same treatment, and are no longer independently existing features of the universe.
What Hut and Shepard propose departs from traditional idealism: they do not deny that there may exist something behind the experienced phenomena, but any physical law should be considered nothing more than a useful hypothesis, to the extent that it allows us to ‘predict the regularities of our experience’.
Here the authors introduce the notion of turning the hard problem sideways: we should try to consider both physical reality and experience as providing a grounding for reality. Intersubjectivity would no longer be a mere ‘superposition of subjective and objective properties’. And the solipsistic approach is a good starting point for feeling and calling the thinking brain ‘my own’ with sufficient confidence, making me able to extend the study of consciousness through my consciousness, just as maths alone is sufficient to model maths. Maths and consciousness alone can be described self-reflexively. Moreover, we should grant reality a necessary structure for conscious events to occur; this aspect of reality would however be different from our classical notions of space and time, for the reasons described above.
If both the upside down and sideways explanations are merged, we could come up with the hypothesis that matter and consciousness are both “emergent properties of underlying and more fundamental aspects of reality” (p.314), which we should call X for lack of a better definition. To explain their view, Hut and Shepard turn to the following physical analogy: if we had to explain what time is to someone with a form of selective amnesia, how would we do it? We could take a series of snapshots and point out how certain objects have moved around in space while some haven’t, and describe the moving objects as having ‘more motion’ than the still ones. The point is, we would not be able to explain time without relying on the very concept of time, as in illustrating, for example, why some snapshots were taken before others; furthermore, the whole explanation would unroll in time. 38)What if we proposed instead a thermodynamical explanation of time? The arrow of time defines time through the fact that the universe has an overall larger entropy than it had the moment before. This is how we came to infer that something like the Big Bang could have taken place. Such an explanation would need no concept of time, except for the observers to exist and be able to link present experiences to previous ones, even if only for 2 seconds: it would just be a matter of comparing different states of the universe, and consistently observing that the snapshot of the second moment has a higher overall entropy than the snapshot of the first moment. The thermodynamical explanation would clear up some of the messiness around a ‘snapshot explanation’. Of course, the concept of time would be experientially, implicitly involved; I cannot otherwise imagine how a thought-experiment itself could be conceived out of time. Let’s unfold the analogy, in which consciousness is the equivalent of motion in the previous example. Just as the amnesic man had to infer the underlying, fundamental property of time from the experience of motion, the physicist would be shown how to start from the presence of conscious experience to arrive at the underlying aspect of reality that can give rise to consciousness. X stands to consciousness as time stands to motion. X would be everything that we ‘sense’, everything that ‘makes sense’ to us; everything we know has such a nature of being meaningful, of ‘making sense’ to us. “Attempts to embed consciousness in space and time are doomed to failure, just as equivalent attempts to embed motion in space only. Yes, motion does take place in space, but it also partakes in time. Similarly, consciousness certainly takes place in space and time, but in addition seems to require an additional aspect of reality, namely X, in order for us to give a proper description of its relation with the world as described in physics.” (p.319) 39)I have a major concern here, which is the proposed fundamental status of time: in thermodynamical terms, again, time is nothing more than the concept we attach to the experimental evidence that the universe proceeds irreversibly toward a state of increased entropy. Two snapshots from an isolated universe showing two different entropies would tell us which of them came before and which after, without the need of anyone observing the process. Time is something we infer from those spontaneously unfolding processes, and we can confidently say that they would unfold with or without us.
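The claim in my footnotes – that entropy alone can order two snapshots of an isolated system, with no observer in the loop – can be illustrated with a toy model. This is my own sketch under invented numbers: a ‘gas’ of particles in a one-dimensional box, ordered by a coarse-grained Shannon entropy.

```python
import numpy as np

# Toy arrow of time: a gas bunched in the middle of a box has lower
# coarse-grained entropy than a gas spread throughout it, so two unlabeled
# snapshots can be put in temporal order by entropy alone.
rng = np.random.default_rng(2)

def coarse_entropy(positions, bins=20):
    counts, _ = np.histogram(positions, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

early = rng.uniform(0.45, 0.55, size=10_000)   # gas bunched in the middle
late  = rng.uniform(0.00, 1.00, size=10_000)   # gas spread through the box

snapshots = {"A": late, "B": early}            # deliberately unlabeled order
for name, snap in sorted(snapshots.items(), key=lambda kv: coarse_entropy(kv[1])):
    print(name, round(coarse_entropy(snap), 3))  # lower-entropy (earlier) first
```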
“Nature does not know what you are looking at, and she behaves the way she is going to behave whether you bother to take down the data or not.” (Richard Feynman)
“However abstract our notions of atoms, quantum fields, or more exotic constructs may be, all of these notions are ultimately grounded in experience. As such, they cannot even be considered as candidates for whatever it might be, if anything, that could be considered to underlie conscious experience.” (p.321)
The Relation of Consciousness to the Material World – by M. Velmans
Velmans argues that, because consciousness cannot be found “within any information processing ‘box’ within the brain” (p.326) and could therefore not be enclosed within a functional explanation, and because idealism has the appalling problem of denying the existence of an ‘outside’ world altogether, we should find a deeper level of explanation of conscious phenomena: namely, that “consciousness and its correlated brain states may be thought of as dual aspects of a particular kind of ‘information’, which is in turn, a fundamental property of nature.” (pp.326-327)
Such a property could be inferred from the fact that, since conscious experiences are representational, their neural correlates would likely be representational as well: qualia are ‘about something’, and their physical correlates would encode that very information. So both conscious experience and neurons would express the same information in two different ways.
Consciousness further has these two specific characters: it is related to “a late-arising product of focal-attentive processing” (p.327) and to ‘information dissemination’. The first property stems from the experimental evidence that we become conscious of decisions we have already made, or from the fact that conscious experiences are the final stage of the entire analysis process that our brains carry through. The second derives from a Weiskrantz experiment (1974), where subjects who were hemifield-blinded could nevertheless accurately guess, under pressure, the perception of certain physical stimulations, even though the stimulus information was not consciously available to them.
The theory thus exposed is quite similar to what Chalmers has proposed. There are some problems, though, with the split between ‘awareness’ and ‘consciousness’, for many processes such as the ability to discriminate and categorise could be performed without ‘awareness’, just as we wouldn’t say that a computer is ‘aware’. Thus the partition proposed by Chalmers is not satisfactory.
With regard to the relation between consciousness and information, Velmans stresses that it holds true only for the phenomenal aspects of information, for information processing is largely carried out in a nonconscious fashion, both by humans and machines. What would characterise information as phenomenal? Besides the already known possibilities – that it could be related to the ‘wetness’ of living organisms, to a specific threshold neural representations must cross to become conscious, to specific brain regions, or to a combination of the previous options – Velmans adds that information could very well always have a phenomenal aspect, unless it is prevented from doing so – and the brain would perform this selection through its inhibitory powers; it would now be much more difficult to dismiss Chalmers’ hypothesis.
Velmans envisions a nonreductionist theory where the appearance of information depends on the perspective from which it is viewed, whether first-person or third-person. He recalls Quantum Mechanics to delineate a sort of ‘psychological complementarity’ principle: “Both a (third-person) neural/physical and a (first-person) phenomenal description are required for a complete psychological understanding of subjects’ representations.” (p.334) This is just a useful analogy to facilitate comprehension, though, and Velmans makes clear that the wave/particle manifestation has nothing to do with the mechanisms of consciousness: both waves and particles can in fact be detected from a third-person perspective, whereas a unified comprehension of consciousness could be reached only through the complementarity of what is accessible to both the observer and the experiencing subject.
Neurophenomenology: A Methodological Remedy for the Hard Problem – by F. Varela
Varela’s take on consciousness is phenomenological: conscious experience is irreducible. He reports the words of Searle to highlight the problem: “The ontology of the mental is an irreducibly first-person ontology … There is, in short, no way for us to picture subjectivity as part of our world view because, so to speak, the subjectivity in question is the picture.” (p.342) Varela stresses that a re-discovery of the direct quality of first-person experience could well provide a new, rejuvenating ground for all branches of knowledge.
Varela lays down the fundamentals of the phenomenological reduction (PhR) approach:
- Attitude: reduction. This is the mindful ability to think about one’s own thinking, to suspend one’s beliefs in order to open “new possibilities within our habitual mind stream” (p.344). This disposition has to be systematic to produce meaningful results.
- Intimacy: intuition. Reduction should produce a less-loaded, more vivid baggage of experience. The opening up of new mental horizons unleashes intuition, and is therefore “the basis of the criteria of truth in phenomenological analysis, the nature of its evidence.” (p.345)
- Description: invariance. Intimacy is not the end of the process; otherwise we would be left with a bunch of ghosts. The intuitive evidence gets expressed and re-shaped through the iterative process of communication, which has to be thought of more as an embodiment than as an encoding. What is produced are invariant materialisations of consistent, intersubjectively verified intuitions.
- Training: stability. The whole process is fruitful only if systematically repeated, and embedded in a community of phenomenologically oriented researchers.
The bracketing of quick inferences injects new variability into the process. The subject-object duality thus vanishes into a broad field of phenomena (what Husserl called the ‘fundamental correlation’). The nature of such duality therefore emerges as a manifestation of conscious processes, and the investigation of its structure casts light on its inextricable links with the conscious experiences of others. But, warns Varela, “the line of separation – between rigour and lack of it – is not to be drawn between first and third person accounts, but determined rather by whether there is clear methodological ground leading to a communal validation of shared knowledge.” (p.348)
In this view, what does neurophenomenology bring to the solution of the hard problem? Rather than seeking ‘extra ingredients’ or merely functionalist explanations, it seeks bridges between two irreducible phenomenal domains – matter and subjective experience. PhR provides the framework to account for both subjectivity and objectivity through the rigorous intersubjective verification of one’s intuitions.
Phenomenological mastery should be encouraged, as cognitive-science technologies allow us to investigate conscious experiences ever more subtly. Varela proposes a double constraint, both on empirical questions and on first-person accounts, to validate any neurobiological advance: examination in the form of reduction, production of invariants and intersubjective verification should be backed up, informed and validated by phenomenological accounts, and vice versa. Functionalist explanations of consciousness alienate human life; this is their missing piece, and neurophenomenology allows us to put it back in place.[40]
The Hard Problem: Closing the Empirical Gap – by J. Shear
Shear’s take on the ‘hard problem’ is that its solution will depend upon a deeper understanding of conscious phenomena through empirical research. He remarks that while our comprehension of the physical world has been refined exponentially, much less has been done to explain conscious phenomena, and that our discussions on the matter “often remain based on superficial, commonsensical perception, classification and understanding of the contents of our inner awareness.” (p.361) Systematic scientific knowledge thus has to be extended to the realm of the subject.
The major objection Shear has to overcome is the one posed by Searle: that consciousness is not reducible, and yet not observable, for, in observing one’s own subjectivity, “any observation that I might care to make is itself that which was supposed to be observed.” (p.364) If such a view held true, we would be in a position where introspection could not provide empirical data, making any scientific aim groundless.
Shear relies on theory-of-mind research to rebut Searle’s impasse: according to some research[41], the child’s ability to discriminate between what pertains to the mental (e.g. dreams) and what pertains to the physical springs from the reflective evaluation of the criteria by which classified phenomena are intersubjectively accessible. Observation and subsequent reporting can thus perfectly well be described as real and consistent whether they turn out to be right or fallacious, to carry objective or subjective content; no more should we consider consciousness less knowable by means of introspection than the physical is by means of perception.
Shear reminds us, vaguely echoing Varela[42], that doing science has to do with establishing a rigorous method for intersubjective validation in accordance with specific protocols, rather than with the alleged physical or nonphysical nature of its objects of investigation. Thus, Shear advances, “the independence of the observer that is paradigmatically relevant to scientific methodology, and thus science itself, is that of the truth of conclusions, rather than that of objects referred to.” (p.369, original italics) If science continues to develop in this direction, a ‘science of consciousness’ would correlate individual reports of experiences with ‘objectively observable phenomena in accord with standard objective scientific protocols.’
Shear then addresses the development of Eastern contemplative practices as extremely useful, as their inquiry into the realm of conscious phenomena has both produced extremely rich reporting and, more importantly, led to the independent observation of a primary, ‘pure’ nature of consciousness. What could such a detection tell us? Some physicists (Bohm, Wigner) have speculated that the rising of conscious phenomena from such a pure state is qualitatively almost identical to what happens in Quantum Mechanics, where matter seems to emerge from wave-like fluctuations.[43] Along these lines, Shear aims at closing the gap between materialist, idealist and nondualist resolutions of the ‘hard problem’, by pointing out how consciousness and matter should be considered much more similar than is ordinarily thought.
Moving Forward on the Problem of Consciousness – by D. Chalmers
In his response paper, Chalmers addresses each of the critiques raised and each positive contribution in turn:
- Deflationary Critiques:
Dennett’s and Churchland’s arguments fall under type-A materialism: once we know everything about the functions performed by the brain, we are left with nothing else, and the very question of consciousness would therefore cease to exist. The problem with this position is that it seems to deny a manifest fact about what we know – the existence of conscious experience. Chalmers adds that such a strong position needs equally strong arguments to support it, but none can be found in Dennett’s and Churchland’s papers: the analogies of vitalism and heat are fundamentally different from the case of consciousness, because their functionalist explanations succeed, whereas consciousness lies “at the centre of our epistemic universe, rather than at a distance.” (p.383) If life and thermodynamic qualities could, in the end, be reduced to mere structures and functions, there is something unique about consciousness: we know that there is something else besides structure and function. Explaining experience is therefore something that lies outside a functionalist view of consciousness.
Chalmers argues that Dennett can eliminate consciousness because he settles a priori into a ‘third-person absolutism’; a first-person perspective would nevertheless leave the hard problem intact. Dennett’s request for ‘independent’ evidence of the existence of experience cannot be met, because experience is not ‘postulated’ to explain other phenomena in turn; it should therefore be taken as irreducible. The scientific character of Dennett’s and Churchland’s positions is thus rooted in ‘third-person absolutism’, nothing more than a philosophical claim; those who are impressed by first-person phenomenology would settle just as firmly in the opposite, irreducibility camp.
Type-B materialists recognise that there is a conceptually distinct phenomenon of consciousness, but they subsume the difference under higher-level systems – that is, they explain structure and function with more structures and functions. Clark and Hardcastle resolve the issue by transforming an a priori difference between consciousness and functional properties into an a posteriori identity by means of correlations; this “makes the identity an explanatory primitive fact about the world.” (p.388, original italics) The bruteness of explanatorily primitive facts regularly identifies them as fundamental laws – therefore, type-B materialism inevitably falls into the very position Chalmers is arguing for. Type-B materialism cannot work unless it turns the identification into an explanation – and that is precisely the sticking point for everyone involved in the ‘hard problem’.[44]
- Nonreductive Analyses:
To those who believe that Chalmers might have overestimated the easiness of the ‘easy problems’ – that ‘reportability’ and other functions could not be fully explained without consciousness (Lowe and Hodgson) – Chalmers answers that his position on reportability, for example, should be read as concerning the mere presence of reports, setting aside the fact that experience and thoughts are required to produce them.
In addressing what Warner calls the problem of incorrigibility, Chalmers points out that many beliefs about experience lack the characteristic of being incorrigible – they do not directly participate in defining our concept of experience; those which do are therefore incorrigible and constitute the first-person epistemology of conscious experience.
In defence of epiphenomenalism, prompted by accepting the causal closure of the physical domain, Chalmers points out that our only evidence for the causal role of consciousness lies in our intuition that certain conscious events are systematically followed by certain physical events. “But the epiphenomenalist can account for this evidence in a different way, by pointing to psychophysical laws, so our intuitions may not carry much weight here.” (p.401) “It is not obvious that consciousness must have a causal role.” (p.402)[45]
As for interactionist dualism and the possibility of denying the causal closure of physical systems, the quantum approaches presented by Hodgson and Stapp show that they are not inconceivable. Chalmers goes on to argue in favour of a theory that can leave epiphenomenalism aside and preserve causal closure, while maintaining a nonreductive explanation of consciousness. How? Hawking himself (1988) noted that there is no ‘fire’ under the equations that describe our physical reality – nothing that gives it any substance. It would therefore not be at all inconceivable to embrace what Bertrand Russell proposed in 1927[46] and say that we can “locate experience inside the causal network that physics describes, rather than outside it as a dangler; and we locate it in a role that one might argue urgently needed to be filled. And importantly, we do this without violating the causal closure of the physical. The causal network itself has the same shape as ever; we have just coloured in its nodes.” (p.405, original italics)[47]
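To make the ‘colouring in’ metaphor concrete, here is a minimal sketch (my own illustration, not Chalmers’ or Russell’s formalism; every name in it is hypothetical) of how intrinsic properties can be attached to the nodes of a causal graph without altering its edge structure – that is, without violating causal closure:

```python
# Toy causal network: the edges stand for the structure physics describes;
# node labels stand for intrinsic (experiential) properties. All names are
# illustrative assumptions, not part of any formal theory.

physical_network = {
    "stimulus":     ["neural_state"],
    "neural_state": ["report"],
    "report":       [],
}

# 'Colour in' the nodes: pair every node with an intrinsic aspect.
russellian_network = {
    node: {"causes": effects, "intrinsic": f"what-it-is-like({node})"}
    for node, effects in physical_network.items()
}

# Stripping the intrinsic aspects recovers exactly the same causal topology:
# the network "has the same shape as ever; we have just coloured in its nodes."
recovered = {node: entry["causes"] for node, entry in russellian_network.items()}
assert recovered == physical_network
```

The sketch shows only that the two descriptions share one causal skeleton; whether experience can plausibly play that intrinsic role is, of course, the contested part.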
Chalmers further specifies, with regard to his psychophysical laws, that
- “To hold that two subjects in the same functional state have the same conscious state is not to sell out to functionalism, except in an attenuated sense. Consciousness is not reduced to a functional state; it is merely associated with one. […] The invariance principle is intended as a non-fundamental law.” (p.408)
- “The ontology underlying the informational picture remains open. […] I favour the informational view largely because when I look for regularities between experience and the physical processes that underlie it, the most striking correspondences all lie at the level of information structures. We have to find something in underlying physical processes to link experience to, and information seems a plausible and universal candidate.” (p.410, original italics)
- Positive Proposals
Among the neuroscientific approaches, Baars has invoked the need for an ‘empathy criterion’; not so for Chalmers, who remarks that solving the hard problem does not require us to have the exact experience of ‘what it is like to be a bat’, but rather to explain why there is anything it is like at all. In any case, the work of linking consciousness to global availability, as well as the search for the fundamental principles of conscious phenomena carried out by Crick and Koch, will be extremely useful and compatible both with scientific progress and with the irreducibility of consciousness.
The phenomenological approach introduced by Varela and Shear is of paramount importance, for it grounds the epistemology of the ‘hard problem’. The method is vulnerable to some flaws – the act of attention subtly transforms the nature of experience and makes it extremely hard to analyse; an adequate formalism for gathering phenomenological data is still to be developed; and there are the limits of incorrigibility – but these could generally be overcome by means of critical introspection and trust in scientific methodology.
Whereas neurocognitive science provides the third-person data, phenomenology accounts for the first-person data. And since such correlations can be detected quite easily at a coarse-grained level, we may need new tools to find out how these two realms speak to each other in the deeper, finer-grained structures. Chalmers suggests that we speculate in the direction proposed by Penrose, Hameroff and Stapp – namely, that we consider how quantum mechanics could ground neural information-processing.
With regard to panpsychism, Chalmers reminds us that it “is not required for a fundamental theory; it is not written in stone that fundamental properties have to be ubiquitous.” (p.417, original italics) A more accurate description may be ‘panexperientialism’, even in the form of the fundamental X introduced by Hut and Shepard. And in order to solve the ‘combination problem’, one need not assume that experiences are assembled in the same way physical particles are: Chalmers introduces the idea that informational composition may be a more appropriate way of achieving conscious macro-combinations.
In closing, Chalmers sums up the choices the hard problem leaves open:
- Once type-A materialism is refused, we ought to look for a further phenomenon to explain consciousness;
- Once it is recognised that type-B materialism rests on unparalleled, explanatorily primitive identities, the problem of taking consciousness as fundamental cannot be avoided;
- There is a choice between holding onto the causal closure of the physical or not; quantum mechanics could allow us to break it open, but the advantages are dubious;
- By choosing causal closure instead, we are left with placing experience either outside the physical network (epiphenomenalism) or inside it, by virtue of ‘Russellian monism’. Chalmers favours the latter, with the proviso that such panexperientialism must solve the ‘combination problem’;
- The most substantial choice is that of the form of the proposed psychophysical theories: scientific approaches are favoured, but metaphysics should not be left out completely.
References
[1] This supposition is based on the idea that mental things somehow don’t have spatial properties. It seems to me that the simple fact that we cannot entertain two simultaneous thoughts should tell us enough about the issue, since today’s scientific framework takes space and time to be radically unified.
[2] In support of this claim, he posits that reductionist views of the relations between scientific fields should be rejected, as certain problems are not transferable between them. Take for example, McGinn says, how “grotesque to claim that the problem of how the dinosaurs became extinct shows any inadequacy in the basic laws of physics!” (p.104) He couldn’t have better predicted the theme of Lisa Randall’s latest bestseller!
[3] This statement holds true for fundamental entities that stand in demonstrated, strict relationships with other fundamental entities, such as physical particles. Consciousness, however, doesn’t have this status, as there is nothing else that supports its supposedly fundamental nature.
[4] This understates recent advances in physics – the detection of gravitational waves, for one – and contrasts with the spirit of science, which always looks for even more fundamental underlying laws.
[5] I see this last move as an alternative way of placing consciousness in the realm of fundamental entities, just as Chalmers does.
[7] https://en.wikipedia.org/wiki/Defeasible_reasoning; Cruz, Joseph (1999), Contemporary Theories of Knowledge (Studies in Epistemology and Cognitive Theory), p.36.
[8] A detailed explanation of what counts as impairment and what does not, being so crucial to the point, would be much needed; but the author only provides references to his 1986, 1989, 1992, 1993 and 1994 works.
[9] I omitted to report how the author deals with the two major objections to his argument, namely that his account doesn’t ‘invoke the subjectivity of conscious experiences’ and that physical perceptions per se could not have conscious experiences. Robinson answers the first objection first by designating consciousness as one among the properties of conscious experiences – remember the example of pain – and then by rejecting relational theories [for X to be a conscious X is for it to stand in some relation, R, to some other thing, Y], thereby characterising the consciousness of conscious experiences as intrinsic and essential, i.e. such “that they cannot exist without being conscious.” (p.157) He then proceeds to answer the second objection by sectioning colour perception into the molecular properties of the colour (colourO) and the ‘conscious experience that goes together with the perception’ of colourO things by a subject S, named colourC(S). Subjects would then necessarily learn to attach colourC(S) experiences to colourO objects through some causal neural paths, named colourN(S). Robinson claims that only colourC(S) properties are conscious.
[10] Penrose argues that the gravitational curvature of space-time hasn’t been considered by quantum physicists. The official claim is that gravitational forces at the micro-level are so tiny that taking them into account wouldn’t make any difference. Penrose argues instead that even almost-undetectable differences may have large effects. He proposes a superposed state made of different space-time geometries, each ascribable to one of the possible “places” the particle assumes in the different branches of the superposition, with the corresponding space-time curvature due to the gravitational force it exerts. Such a superposition is unstable, and will therefore decay under precise laws into the observable geometry we get at the quantum state reduction. Penrose acknowledges that there is no consensus on how objective reductions happen, but he sees no plausible alternatives to his proposal.
[12] Much as random variability is accounted for in Darwinian evolutionary theory.
[13] To a physics-illiterate like me, the theory seemed coherent, almost convincing. Good science means being especially skeptical when one is inclined to accept something, so I searched for rebuttals of the exposed theory. Two very good responses that made me strongly revise my reading of Stapp’s interpretation can be found in this blog post by philosopher and skeptic Massimo Pigliucci, and in a much broader analysis of the issue by physicist Michael Nauenberg. Nauenberg further argues here that the very interpretation of the wavefunction as a non-physical object implies that “there isn’t any mystery that its mathematical form must change abruptly after a measurement has been performed.” Furthermore, many of the misunderstandings that arose after von Neumann’s work stem from his simplistically considering the measurement apparatus as a superposition of two states, the “fired” and the “unfired” state. A correct approach is to characterise any macro-object, such as a Geiger apparatus, as a recorder of atomic events, which by the rules of thermodynamics (the arrow of time) must be considered an irreversible process. Irreversible processes are everything there is to the collapse of the superposition, and have nothing to do with the presence of a conscious being. Nauenberg remarks that Wigner was in fact the only major physicist (a Nobel laureate) to support the role of consciousness in the collapse of the wavefunction.
[14] Not much is provided to support this claim, besides an appeal to the unsolved measurement problem – not quite unsolved, as Nauenberg showed above – and a 1929 Bohr quote about the indeterminacy of the subject-object boundaries of perception, where “no sharp separation between subject and object can be maintained, since the perceiving subject also belongs to our mental content.” (quoted at p.219)
[15] There is a mistake in the argument: a dual aspect of reality cannot really stand, simply because the claimed ‘objective’ part is not physical, the wavefunction having no physical properties. This is strange, because Bilodeau himself did not fail to notice this feature of the superposition, even though he claims it should be recalled in a dual-aspect theory of reality.
[16] This is the clarification you might have been waiting for when reading about the concocted underlying “non-mechanical mode of existence.”
[17] By virtue of what has been previously exposed, I trust you may infer how weak such an argument is. Nevertheless, this being a comparative collection of papers on consciousness, I wanted to include it as well.
[18] “In technical jargon, conscious contents and self may be orthogonal constructs, which always coexist but do not necessarily covary.” (p.245)
[19] How exactly do protophenomena coherently produce phenomenal experience? “A population of protophenomena dependent on the same input protophenomena has a Conditional Probability Density Field (CPDF) that is the product of the CPDFs of all the high-intensity input protophenomena, that is, of all the input protophenomena present in the current conscious state. The CPDFs of individual protophenomena can be quite broad, but in the joint response to the same input of a large number, the product can be very narrow, so that they define a phenomenal state quite precisely.” (p.255)
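The narrowing claimed in the quote is easy to verify numerically. Here is a minimal sketch (entirely my own; MacLennan specifies no parameters, so every number below is an assumption) showing that the product of many broad, overlapping density fields is far sharper than any single one:

```python
import numpy as np

# Fifty broad 'CPDFs' (Gaussians with sigma = 3, centres scattered near 0)
# standing in for the individual protophenomenal response fields.
x = np.linspace(-10, 10, 2001)

def field(mu, sigma=3.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

rng = np.random.default_rng(0)
fields = [field(mu) for mu in rng.normal(0.0, 0.5, size=50)]

# The joint response is the product of all the individual fields.
product = np.prod(fields, axis=0)
product /= product.max()

# Compare widths at half maximum: each field spans ~7 units, the product ~1.
half_max = x[product > 0.5]
print(f"product half-width: {half_max[-1] - half_max[0]:.2f} (single field: ~7.1)")
```

Mathematically this is just the familiar fact that multiplying densities adds their precisions; whether the mechanism has anything to do with experience is the part MacLennan still owes us.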
[20] Of course, what I have just exposed is the main argument Seager moves against Dennett. The argument as reported is convincing, but I don’t know Dennett’s position well enough to vouch that Seager was fair in his interpretation. Dennett states in the cited paper that “Whether people realise it or not, it is precisely the ‘remarkable functions associated with’ consciousness that drive them to wonder about how consciousness could possibly reside in a brain. In fact, if you carefully dissociate all these remarkable functions from consciousness – in your own, first-person case – there is nothing left for you to wonder about.” (p.35, original italics)
[21] Seager offers no explanation of why such a possibility should be considered unattractive. Further argumentation seems needed, considering that this would be the sole ground for rejecting functionalism as a likely explanation of conscious phenomena. I really am astonished that nothing more has been said.
[22] Red flag: Seager is a philosopher – albeit one specialised in philosophy of science – not a quantum physicist. I am not proposing that philosophers shouldn’t take Quantum Mechanics into their models, but interpretations of QM alternative to those held by the scientific consensus should be suspect, especially when put forward by non-specialists.
[23] It is not clear how the particles’ state could fail to be affected, considering that we can’t possibly know what their previous state was. If the superposition of the particle’s state passing through the two slits yields a concomitant interference pattern, then no detector has been put in place, since the interference pattern implies wave-like behaviour, and hence the impossibility of predicting where the particle has gone. If we place the detector and know the particle’s position exactly, the interference pattern disappears, as we are now ‘seeing’ the particle-like nature of the phenomenon. Can we say that no effect has been exerted on the particles’ state? No, since we can’t possibly know what the state was like before the detector was placed.
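For readers who want the textbook account behind this note in numbers, here is a minimal sketch (my own; the slit separation, wavelength and distance are arbitrary assumptions) of why which-path detection erases the fringes: without a detector the two amplitudes add before squaring, while with one the probabilities add.

```python
import numpy as np

# Screen coordinate and toy apparatus parameters (all assumed).
y = np.linspace(-5e-3, 5e-3, 1001)      # metres across the screen
wavelength, d, L = 500e-9, 50e-6, 1.0   # light wavelength, slit gap, distance

# The path-length difference between the slits gives slit 2 a relative phase.
phase = 2 * np.pi * d * y / (wavelength * L)
psi1 = np.full_like(y, 1.0, dtype=complex)  # amplitude via slit 1
psi2 = np.exp(1j * phase)                   # amplitude via slit 2

coherent = np.abs(psi1 + psi2) ** 2                 # no detector: fringes
incoherent = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # which-path known: flat

print(coherent.min(), coherent.max())      # ~0.0 .. ~4.0 -> interference
print(incoherent.min(), incoherent.max())  # 2.0 .. 2.0   -> no interference
```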
[24] Needless to say, perhaps, such an explanation could very well count as an answer to the generation problem itself; no panpsychism would be needed at that point. Somehow, Seager fails to see this when he asserts that “quantum coherence cannot solve the generation problem satisfactorily, but it might solve the combination problem.” (p.283)
[25] I think the inconsistency of Seager’s point has been well illustrated above: the randomness of the wavefunction collapse does not imply any hidden variable, even less so conscious causality.
[26] Seager is willing to propose an essentially useless explanation – as it withdraws itself from falsifiability, how could it be useful? – to account for panpsychism.
[27] Can such people even call themselves Naturalists? In fact, the Stanford Encyclopedia of Philosophy clarifies that Naturalism is not much of an informative term anymore: “For better or worse, “naturalism” is widely viewed as a positive term in philosophical circles—few active philosophers nowadays are happy to announce themselves as “non-naturalists”. This inevitably leads to a divergence in understanding the requirements of “naturalism”. Those philosophers with relatively weak naturalist commitments are inclined to understand “naturalism” in a unrestrictive way, in order not to disqualify themselves as “naturalists”, while those who uphold stronger naturalist doctrines are happy to set the bar for “naturalism” higher.” Source: http://plato.stanford.edu/entries/naturalism/
[28] I think the flaws here are to pretend 1) that there is a fundamental law of consciousness, rather than a mere agglomerate of characteristics serving as a definition, and 2) that consciousness laws should have the same elegance and simplicity as the atomic laws of nature. Consciousness is plainly not an elemental property of matter (see the critique of panpsychism provided earlier), and the parsimony constraint should convince us that nothing we know about particles would be altered by attaching a qualitative property to them; they would behave just as they do under the current, merely physical description.
[29] This objection is somewhat more difficult to reject. Consider, though, the poetic naturalistic approach (illustrated by Sean Carroll in The Big Picture and deepened in this blog post): beyond the purely physical, elemental level, we try to make sense of the world in ways that are useful to us. We can therefore set an arbitrary cut-off between conscious and non-conscious systems because, as explained before, consciousness is nowhere to be found as a fundamental property of the universe, at least not as fundamental as quantum fields. It is up to us to characterise which systems are sufficiently complex to be called ‘conscious’ and which are not, just as we decide to call ‘flounder’ and ‘tuna’ two different types of fish. Of course, defining consciousness is rather more difficult than that, and so we can’t expect to define simple consciousness laws. Furthermore, it is not meaningful to say that we can just subtract one neuron and thereby consider an organism non-conscious: complex systems are by definition populated by hundreds of thousands of neurons, so removing them one at a time would not yield a meaningful consciousness threshold.
[30] This last point shows even further that fundamental laws of consciousness comparable to fundamental physical laws are not what we should be looking for.
[31] This is, in my opinion, a further reason not to look for a fundamental law of consciousness, as it is fundamentally subjective.
[32] I hasten to remind the reader that this is an evident pathetic fallacy.
[33] The designed experiment, it seems, was never carried out. Libet’s hypothesis sounds much like contemporary electromagnetic-field theories of consciousness.
[34] Two things. First, we cannot claim to know exactly what is going on in the brain, so as to say that two regions are physically identical; no serious neuroscientist has ever said so, even now, almost twenty years after this book was printed. Claims about the exact nature of the brain back in the late Nineties therefore seem naive at best, which is all the more alarming considering that Shepard is a cognitive scientist. Even more worrisome is assuming that conscious experience is ‘evidently nonphysical’ without taking the trouble to defend such a position, other than by saying it is ‘commonly accepted’.
[35] The authors here leverage the word ‘indistinguishable’ to suggest that everything we know about conscious and non-conscious events leads to identical physical explanations; but even if it might be appropriate in this case to say that two firing neurons are somehow identical physical processes, as long as we know nothing about their hundreds of neural connections we can’t attach any meaning to the firing of a single neuron.
[36] In fact, this is rather more than an assumption, and it should undoubtedly tell us that conscious experience is a physical process.
[37] These sets of molecules and qualia are so ‘meaningless’ and randomly put together that we base our very existence on them! The authors may well define arrangements of molecules as meaningless, but then they should not use these meaningless objects to set up a new theory. Rest in solipsism.
[38] What if we proposed instead a thermodynamic explanation of time? The arrow of time defines time through the fact that the universe overall has a larger entropy than it had the moment before. This is how we came to infer that something like the Big Bang could have taken place. Such an explanation would need no concept of time, except for the observers to exist and be able to link present experiences to previous ones, even if only for two seconds: it would just be a matter of comparing different universe states and consistently observing that the snapshot of the second moment has a higher overall entropy than the snapshot of the first. The thermodynamic explanation would clear up some of the messiness around a ‘snapshot explanation’. Of course, the concept of time would be experientially, implicitly involved; I cannot otherwise imagine how the thought-experiment itself could be conceived out of time.
[39] I have a major concern here, which is the proposed fundamental status of time: in thermodynamic terms, again, time is nothing more than the concept we attach to the experimental evidence that universes proceed irreversibly toward states of increased entropy. Two snapshots of an isolated universe showing two different entropies would tell us which of them came before and which after, without the need for anyone to observe the process (see the sketch below). Time is something we infer from those spontaneously unfolding processes, and we can confidently say that they would unfold with or without us. “Nature does not know what you are looking at, and she behaves the way she is going to behave whether you bother to take down the data or not.” (Richard Feynman)
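To make the ‘two snapshots’ point concrete, here is a minimal toy model (entirely mine, not from the book): two histograms of particle positions in an isolated box can be ordered in time by their Shannon entropies alone, with no observer present while the gas spreads.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a histogram of particle positions."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)

# Snapshot A: 10,000 gas particles confined to one cell of a 10-cell box.
snapshot_a = np.histogram(rng.uniform(0, 1, 10_000), bins=10, range=(0, 10))[0]
# Snapshot B: the same gas after spreading through the whole box.
snapshot_b = np.histogram(rng.uniform(0, 10, 10_000), bins=10, range=(0, 10))[0]

# Ordering by entropy recovers the temporal order: low entropy came first.
snapshots = {"A": snapshot_a, "B": snapshot_b}
order = sorted(snapshots, key=lambda name: shannon_entropy(snapshots[name]))
print(order)  # ['A', 'B'] -> the concentrated snapshot is the earlier one
```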
[40] I was a little surprised to find so few references to the phenomenological approach exposed here in the (limited) literature I’ve been reading lately. Maybe it is assumed that suspension of judgement is already embedded in good critical thinking, and that too much ‘bracketing’ could impair any advancement. The approach I found in the mentioned literature is predominantly pragmatic, so it might be that the burden of doubt is sometimes set aside in order to advance the most likely hypothesis, in line with Bayesian reasoning. In the end, it is sometimes better to advance with few doubts and avoid the cost of drowning in details, while maintaining a Bayesian approach and staying ready to change one’s mind in light of new evidence.
[41] Wellman, H.M. (1990), The Child’s Theory of Mind.
[42] See the previous paper on Neurophenomenology.
[43] The expert reader will notice that Wigner is notoriously the only physicist who claimed consciousness plays a causal role in QM; Bohm’s model, on the other hand, surprisingly well considered among contemporary physicists (see this survey – not that much consensus can be found anyway: see for example how Sean Carroll comments on another survey), has the problem of needing to insert ‘hidden variables’ to be consistent (find out more at http://www.preposterousuniverse.com/blog/2008/08/08/quantum-diavlog/).
[44] Chalmers assumes here that “once it is noted that there is no conceptually necessary link from physical facts to phenomenal facts, it is clear that the idea of a physically identical world without consciousness is internally consistent.” (p.390) This is what legitimates philosophical zombies as thought-experiments useful for arguing in favour of a non-reductive explanation of consciousness. A very good response comes from Sean Carroll in this article: “Imagine a zombie stubbed its toe. It would cry out in pain, because that’s what a human would do, and zombies behave just like humans. When you stub your toe, certain electrochemical signals bounce around your connectome, and the exact same signals bounce around the zombie connectome. If you asked it why it cried out, it could say, “Because I stubbed my toe and it hurts.” When a human says something like that, we presume it’s telling the truth. But the zombie must be lying, because zombies have no mental states such as “experiencing pain.” Why do zombies lie all the time? […] The problem is that the notion of “inner mental states” isn’t one that merely goes along for the ride as we interact with the world. It has an important role to play in accounting for how people behave. In informal speech, we certainly imagine that our mental states influence our physical actions. I am happy, and therefore I am smiling. The idea that mental properties are both separate from physical properties, and yet have no influence on them whatsoever, is harder to consistently conceive of than it might first appear. According to poetic naturalism, philosophical zombies are simply inconceivable, because “consciousness” is a particular way of talking about the behavior of certain physical systems.”
[45] For a refutation of this, see note 44.
[46] Russell, B. (1927), The Analysis of Matter, London: Kegan Paul.
[47] Chalmers appeals to the fact that, if this idea were true, we could combine the irreducibility of consciousness with causal closure, while denying epiphenomenalism. I am not sure how this should suffice to shift our understanding of the world in such a radical way, as opposed to a simpler, Bayesianly well-grounded reductive approach. Furthermore, this apparent ‘naturalistic dualism’ would be nothing more than a fundamental, causal monism – and that would simply tell us that physical reality goes far beyond what our physical theories are telling us.